Internal error: Unexpected GitHub remote configuration: 'theirs'

I am currently working on a collaborative R project, and after I forked the repository, I made a couple of changes and wanted to push them. However, when I tried to push in RStudio, it raised the following error:
Internal error: Unexpected GitHub remote configuration: 'theirs'
To push the commits, I used the following code:
pr_push()
Here is some context:
git remote -v
origin https://github.com/PPBDS/primer.git (fetch)
origin https://github.com/PPBDS/primer.git (push)
The code I used to fork the original repository:
library(usethis)
create_from_github("PPBDS/primer",
fork = TRUE,
destdir = "/mydest/",
protocol = "https")
The branch I've created to work on:
> pr_init(branch = 'Python.v')
✓ Pulling changes from 'origin/master'
✓ Creating and switching to local branch 'Python.v'
I don't know how to fix this. Can anyone help me?

Your git remotes show that the repo was cloned straight from the source repository over https (which you cannot push to, since it isn't your fork) and that no fork remote was configured.
If I'm interpreting you correctly, the output should instead be:
origin git@github.com:AtillaColak/primer.git (fetch)
origin git@github.com:AtillaColak/primer.git (push)
upstream https://github.com/PPBDS/primer.git (fetch)
upstream https://github.com/PPBDS/primer.git (push)
Perhaps this will work? (Untested; run these in a shell, not in R.)
git remote rename origin upstream
git remote add origin git@github.com:AtillaColak/primer.git
If those commands do not work, you can edit .git/config directly; it should look like this:
[remote "origin"]
url = git@github.com:AtillaColak/primer.git
fetch = +refs/heads/*:refs/remotes/origin/*
[remote "upstream"]
url = https://github.com/PPBDS/primer.git
fetch = +refs/heads/*:refs/remotes/upstream/*
I don't encourage editing .git/config by hand if you can avoid it, but the target state above can (if nothing else) serve to verify the result of other attempts.
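If you would rather stay in R entirely, usethis can rewrite the remotes for you. This is an untested sketch assuming the fork lives at AtillaColak/primer and you keep the https protocol (swap in the ssh URLs above if you prefer):
library(usethis)
# Point 'upstream' at the source repo and make 'origin' your fork.
# overwrite = TRUE is required because 'origin' already exists.
use_git_remote("upstream", "https://github.com/PPBDS/primer.git")
use_git_remote("origin", "https://github.com/AtillaColak/primer.git",
               overwrite = TRUE)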

Related

Why is gmailr not working in the Docker build process?

I'm using the gmailr package for sending emails from an R script.
Locally it all works fine, but when I try to run it during a Docker build step in Google Cloud, I get an error.
I implemented it in the way described here.
So basically, locally the part of my code for sending emails looks like this:
gm_auth_configure(path = "credentials.json")
gm_auth(email = TRUE, cache = "secret")
gm_send_message(buy_email)
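(buy_email above is presumably a gmailr MIME object built elsewhere in the script; a minimal sketch of such an object, with placeholder addresses and body, might look like this:)
library(gmailr)
library(magrittr)  # for %>%
# Placeholder message; the real addresses and text live elsewhere.
buy_email <- gm_mime() %>%
  gm_to("recipient@example.com") %>%
  gm_from("me@gmail.com") %>%
  gm_subject("Buy signal") %>%
  gm_text_body("Example body text.")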
Please note that I renamed the .secret folder to secret, because I want to deploy my script with Docker in gcloud and didn't want any unexpected errors due to the dot in the folder name.
This is the code that I'm now trying to run in the cloud:
setwd("/home/rstudio/")
gm_auth_configure(path = "credentials.json")
options(
gargle_oauth_cache = "secret",
gargle_oauth_email = "email.which.was.used.to.get.secret#gmail.com"
)
gm_auth(email = "email.which.was.used.to.get.secret#gmail.com")
When running this code in a docker build process, I'm receiving the following error:
Error in gmailr_POST(c("messages", "send"), user_id, class = "gmail_message", :
Gmail API error: 403
Request had insufficient authentication scopes.
Calls: gm_send_message -> gmailr_POST -> gmailr_query
I can reproduce the error locally when I do not check the consent-screen box that grants full access to the Gmail account.
Therefore my first assumption is that the secret folder is not being copied correctly during the Docker build, so the authentication tries to run again; but in a non-interactive session the box can't be checked, and the error is thrown.
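(If insufficient scopes really were the culprit, gm_auth() also accepts a scopes argument in gmailr >= 1.0, so the cached token could be created with the full scope explicitly. An untested sketch:)
library(gmailr)
# Untested: request the full Gmail scope explicitly so the cached
# token is sufficient for gm_send_message().
gm_auth_configure(path = "credentials.json")
gm_auth(
  email = "email.which.was.used.to.get.secret@gmail.com",
  scopes = "full",
  cache = "secret"
)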
This is the part of the Dockerfile.txt where I copy the files and run the script:
#2 ADD FILES TO LOCAL
COPY . /home/rstudio/
WORKDIR /home/rstudio
#3 RUN R SCRIPT
CMD Rscript /home/rstudio/run_script.R
And this is the folder that contains all the files and folders being shipped to the cloud.
My second assumption is that I have to somehow specify the scope to use the Google platform for my Docker image, but unfortunately I'm not sure where to do that.
I'd really appreciate any help! Thanks in advance!
For anyone experiencing the same problem: I was finally able to find a solution.
The problem is that on GCE, the gargle package picks up the instance's service-account credentials instead of using the normal user OAuth flow.
To temporarily disable GCE auth, I'm using the following piece of code now:
library(gargle)
# Drop all of gargle's registered credential functions (this removes
# the GCE service-account method) and re-register only the normal
# user OAuth flow.
cred_funs_clear()
cred_funs_add(credentials_user_oauth2 = credentials_user_oauth2)
gm_auth_configure(path = "credentials.json")
options(
gargle_oauth_cache = "secret",
gargle_oauth_email = "sp500tr.cloud@gmail.com"
)
gm_auth(email = "email.which.was.used.for.credentials.com")
# Restore gargle's default credential functions afterwards.
cred_funs_set_default()
For further reference, see also here.

How do I use JARs stored in Artifactory in spark submits?

I am trying to configure spark-submit to use JARs that are stored in Artifactory.
I've tried a few ways to do this.
Attempt 1: Changing the --jars parameter to point to the https endpoint.
Result 1: 401 error. Credentials are being passed like so: https://username:password@jfrog-endpoint. The link was tested using wget, which authenticates and downloads the JAR fine.
Attempt 2: Using a combination of --packages and --repositories.
Result 2: The URL doesn't resolve to the right location of the JAR.
Attempt 3: Using a combination of --packages and a modified ivysettings.xml (containing the repo and artifact pattern).
Result 3: The URL resolves correctly but still results in "Not Found".
After some research it looks like even though the error says "Not Found" and Ivy appears to have "tried" the repo, it could still very well be a 401 error.
Any ideas would be helpful! Links I've explored:
Can i do spark-submit application jar directly from maven/jfrog artifactory
spark resolve external packages behind corporate artifactory
How to pass jar file (from Artifactory) in dcos spark run?
https://godatadriven.com/blog/spark-packages-from-a-password-protected-repository/
https://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management
You can use https://username:password@jfrog.com/repo; however, you need to specify the port:
https://username:password@jfrog.com:443/repo
I discovered this using Artifactory's "Set Me Up" tool for the Ivy package type. If you look at the resolver URL there, it specifies the port.
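So the first attempt should work once the port is in the URL. An untested sketch (the repository path and JAR coordinates below are placeholders):
spark-submit \
  --class com.example.Main \
  --jars https://username:password@jfrog.com:443/artifactory/libs-release/com/example/dep/1.0.0/dep-1.0.0.jar \
  my-app.jar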

Failing to devtools::install_github() from an enterprise GitHub account

I am trying to install an R package from an enterprise GitHub account:
devtools::install_github(
repo = "<owner>/<repo>",
host = "github.<org_name>.com/api/v3/",
auth_token = <my_github_pat>
)
I get this error message
Error: Failed to install '<repo>' from GitHub: HTTP error 404. Not Found Did you spell the repo owner ('<owner>') and repo name ('<repo>') correctly? - If spelling is correct, check that you have the required permissions to access the repo.
I have the correct spellings, and I think I must have the required permissions, because it's actually my repo: I can push to and pull from it just fine. I am doing the install_github() as a test case so colleagues can install my package, but I can't make sense of this error message.
Literally just needed to drop the trailing "/" in the host string and this worked. SMH
devtools::install_github(
repo = "<owner>/<repo>",
host = "github.<org_name>.com/api/v3",
auth_token = <my_github_pat>
)
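A small follow-up: devtools also reads the GITHUB_PAT environment variable by default, so (untested) colleagues who set it in their ~/.Renviron should be able to drop the auth_token argument entirely:
# Untested: with GITHUB_PAT set in ~/.Renviron, no explicit token is needed.
devtools::install_github(
  repo = "<owner>/<repo>",
  host = "github.<org_name>.com/api/v3"  # note: no trailing slash
)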

Installing a package from private GitLab server on Windows

I am struggling with installing a package from a GitLab repository on a Windows computer.
I found different hints but still have problems to install my package from GitLab. First of all, I generated a public and private key with puttygen.exe. The files need to be changed afterwards, I had to remove comments and stuff so they look like my the file on my Unix system. So now, both public and private key files have just a single line.
I tried to install my package via devtools::install_git, which takes very long and then gives the error message
Error: Failed to install 'unknown package' from Git:
Error in 'git2r_remote_ls': Failed to authenticate SSH session: Unable to send userauth-publickey request
With devtools::install_gitlab I get a different error message, and I have the feeling that the URL that gets generated doesn't match my GitLab server.
Error: Failed to install 'unknown package' from GitLab:
cannot open URL 'https://gitlab.rlp.net/api/v4/projects/madejung%2FMQqueue.git/repository/files/DESCRIPTION/raw?ref=master'
My complete code to test at the moment is:
creds <- git2r::cred_ssh_key(publickey="~/.ssh/id_rsa_gitlab.pub",
privatekey="~/.ssh/id_rsa_gitlab")
devtools::install_git(
url='git@gitlab.rlp.net:madejung/MQqueue.git',
quiet=FALSE,
credentials=creds)
devtools::install_gitlab(
repo='madejung/MQqueue.git',
host='gitlab.rlp.net',
quiet=FALSE,
credentials=creds
)
My id_rsa_gitlab.pub file looks like this and is just a single line:
ssh-rsa AAAA....fiwbw== rsa-key-20200121
The id_rsa_gitlab file has just the code:
AAABA.....3WNSIAGE=
Update
On my Mac it works as expected after installing the libssh2 library via Homebrew and recompiling git2r with install.packages("git2r", type = "source").
So the working code on my machine is:
creds <- git2r::cred_ssh_key(publickey="~/.ssh/id_rsa_gitlab.rlp.net.pub",
privatekey="~/.ssh/id_rsa_gitlab.rlp.net")
devtools::install_git(
url='git@gitlab.rlp.net:madejung/MQqueue.git',
quiet=FALSE,
credentials=creds
)
For some strange reason, the devtools::install_git call takes about a minute before it finally fails. I have no idea what the problem is here.
After struggling for almost a day, I found a solution I can live with.
I first created a PAT (personal access token) in my GitLab account and granted it full API access. For some reason read-only access didn't work, and I am now too tired to figure out why.
After this I still had problems installing my package: for some reason the wininet download method doesn't work for me.
I used capabilities("libcurl") to check that libcurl is available on my Windows machine, which it was, and tried to override wininet with libcurl by passing method='libcurl' to the install function. Somehow that was not enough, so I set the option download.file.method directly:
options("download.file.method"='libcurl')
devtools::install_gitlab(
repo='madejung/MQqueue',
auth_token='Ho...SOMETHING...xugzb',
host='gitlab.rlp.net',
quiet=FALSE, force=TRUE
)
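One small note on the token: rather than hard-coding the PAT in the script, it can be kept in an environment variable and read at call time; GITLAB_PAT (e.g. set in ~/.Renviron) is also what the remotes package looks for by default. An untested variant of the call above:
options("download.file.method" = 'libcurl')
devtools::install_gitlab(
  repo = 'madejung/MQqueue',
  auth_token = Sys.getenv("GITLAB_PAT"),  # PAT read from the environment
  host = 'gitlab.rlp.net',
  quiet = FALSE, force = TRUE
)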

HTTP 400 - Unable to parse remote repository npm metadata

We have two remote npm registries inside a virtual repository. One of them is the npm registry; the other one is from a software provider. When I add the second repository to the virtual repository, I get HTTP 400 messages at random.
For example: if I want to install a package from the npm registry, I see in the logs that Artifactory tries to get the package from the other repository (which does not have the package) and tries to parse the response as JSON. That repository returns an HTML file, though, which results in the following error message:
2017-02-23 09:39:05,424 [http-nio-8080-exec-7112] [ERROR]
(o.a.a.n.r.NpmRemoteRepoHandler:362) - Error while parsing the response of a remote npm
JSON query on 'https://repository.domain.com/api/npm/public/file-loader':
Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object,
'true', 'false' or 'null')
at [Source: org.artifactory.storage.db.binstore.service.UsageTrackingBinaryProvider$ReaderTrackingStream@7360bc6c; line: 1, column: 2]
As you can see, Artifactory is trying to get the package from the other repository. The JSON response of our Artifactory when I fetch the package manually is:
{
"errors" : [ {
"status" : 400,
"message" : "Unable to parse remote repository npm metadata."
} ]
}
Any help would be greatly appreciated, since this makes the npm registry completely useless, as some requests return this HTTP 400 error.
FYI: we are using Artifactory Pro 4.5.1.
There are two things you should do to avoid this behavior:
1. Configure the virtual repository resolution order so that the npm registry is approached before the software provider's registry. The resolution order is controlled by the order in which the repositories appear in the Selected Repositories list.
2. Use include/exclude patterns to control which packages are resolved from the software provider's registry. Assuming there is a way to identify the packages that should come from the software provider, you can define patterns that limit this registry to resolving only those packages.
Another thing to check is whether the software provider's remote repository is configured properly; normally it should not return an HTML response for an API call.
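The include/exclude fields in the repository configuration take comma-separated Ant-style expressions. A hypothetical example, assuming the provider's packages all live under a single npm scope (the scope name here is made up):
Includes Pattern on the software provider repo:  @vendorscope/**
Excludes Pattern on the npm registry repo:       @vendorscope/**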
