After upgrading GitLab from 6.3 to 6.5, everything looks good, except that I cannot clone any repository over HTTP. I can clone/fetch/push over SSH and browse the repositories over HTTP, but I just cannot clone over HTTP.
The error message is:
$ git clone http://mydomain/mygroup/test.git
Cloning into 'test'...
fatal: protocol error: bad line length character: 3ca
fatal: The remote end hung up unexpectedly
Any hints are appreciated.
====== More Information =======
When executing the clone as above, there are two nginx logs:
192.168.1.103 - - [26/Feb/2014:17:51:58 +0800] "GET /mygroup/test.git/info/refs?service=git-upload-pack HTTP/1.1" 200 283 "-" "git/1.8.3.4 (Apple Git-47)" "-"
192.168.1.103 - - [26/Feb/2014:17:51:58 +0800] "POST /mygroup/test.git/git-upload-pack HTTP/1.1" 200 994 "-" "git/1.8.3.4 (Apple Git-47)" "-"
And here is the error I get when running "git fetch" inside a project that was previously cloned over HTTP:
$ git fetch
fatal: git fetch_pack: expected ACK/NAK, got '
0038ACK 12c5f4a0f130acfd8ef502a28989f42d37228016 common
0038ACK 736453e9369a9bb91b2b49b17419c168e4b61c5b common
0031ACK 736453e9369a9bb91b2b49b17419c168e4b61c5b
0022Counting objects: 114, done.
002eCompressing objects: 100% (67/67), done.
2004PACK'
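One way to see what the server is actually sending is to enable Git's client-side tracing during the failing clone (a diagnostic sketch, using the same URL as above):
# Dump the raw pkt-lines and the HTTP exchange to stderr
GIT_TRACE_PACKET=1 GIT_CURL_VERBOSE=1 git clone http://mydomain/mygroup/test.git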
Git 2.13 (Q2 2017) added a more explicit and precise error message than "expected ACK/NAK".
That helps explain the particular clone error in the OP's case.
See commit 8e2c7be (12 Apr 2017) by Jonathan Tan (jhowtan).
(Merged by Junio C Hamano -- gitster -- in commit d2617eb, 24 Apr 2017)
fetch-pack: show clearer error message upon ERR
Currently, fetch-pack prints a confusing error message ("expected ACK/NAK") when the server it's communicating with sends a pkt-line starting with "ERR".
Replace it with a less confusing error message.
Also update the documentation describing the fetch-pack/upload-pack protocol (pack-protocol.txt) to indicate that "ERR" can be sent in the place of "ACK" or "NAK".
In practice, this has been done for quite some time by other Git implementations (e.g. JGit sends "want $id not valid") and by Git itself (since commit bdb31ea: "upload-pack: report "not our ref" to client", 2017-02-23) whenever a "want" line references an object that it does not have. (This is uncommon, but can happen if a repository is garbage-collected during a negotiation.)
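For context, each line in these traces is a "pkt-line": four hex digits giving the total length (payload plus the 4-byte prefix itself), followed by the payload. A small shell sketch of that framing, reusing an object id from the OP's output:
# Frame an ACK payload as a pkt-line
payload=$'ACK 12c5f4a0f130acfd8ef502a28989f42d37228016 common\n'
printf '%04x%s' $(( ${#payload} + 4 )) "$payload"
# prints: 0038ACK 12c5f4a0f130acfd8ef502a28989f42d37228016 common
So a server that injects an "ERR ..." pkt-line where fetch-pack expects "ACK"/"NAK" used to trigger the generic "expected ACK/NAK" failure.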
Related
Problem 1
The git push command returns the following error if a single file is larger than ~1 MB:
Pushing to http://mygitlabserver.pitunnel.com/root/my_project.git
POST git-receive-pack (1163897 bytes)
error: RPC failed; HTTP 413 curl 22 The requested URL returned error: 413
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
The server is an RPi 4 with an SSD attached, accessed via pitunnel (standard subscription).
The push fails if a single file is larger than 1 MB.
The push returns no error even if the commit is 150 MB (lots of small files).
The push returns no error if an mp3 file of several MB is pushed.
Problem 2
Not really a problem in itself, but it may be related to the first one.
Importing a large project that was exported from gitlab.com returns the same error:
413 Request Entity Too Large
nginx/1.10.3 (Ubuntu)
But only when connected via pitunnel (link); it works if the project is uploaded over the local network.
nginx seems to be the problem.
In gitlab.rb the following parameters are set, and the GitLab service was restarted according to the GitLab docs:
nginx['enable'] = true
nginx['client_max_body_size'] = '900m'
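With Omnibus GitLab this only takes effect after a reconfigure; one way to check the value the bundled nginx actually runs with (paths are the Omnibus defaults, adjust if your layout differs):
# Re-render the bundled nginx config from gitlab.rb
sudo gitlab-ctl reconfigure
# Confirm the rendered value
grep client_max_body_size /var/opt/gitlab/nginx/conf/gitlab-http.conf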
PS: The repo will use Git LFS once this problem is solved.
For everyone with a similar problem:
pitunnel was the problem.
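A quick way to confirm this kind of tunnel/proxy issue is to send the same oversized request both through the tunnel and directly on the LAN, then compare who answers the 413 (a sketch; the LAN address is a placeholder):
# Create a ~2 MB dummy payload
head -c 2097152 /dev/zero > payload.bin
# Through the tunnel: the tunnel's nginx answers 413
curl -s -o /dev/null -w '%{http_code}\n' --data-binary @payload.bin http://mygitlabserver.pitunnel.com/
# Directly: anything other than 413 means GitLab's own nginx accepts the body
curl -s -o /dev/null -w '%{http_code}\n' --data-binary @payload.bin http://<lan-address>/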
I have created an AQL query that gives me a number of artifacts I want to delete from Artifactory. I can run a search command with the JFrog CLI on it and get the correct list of artifacts:
jfrog rt s --spec search-aql.json
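For reference, the spec follows the JFrog CLI file-spec format and looks roughly like this (repo and path values are placeholders, not the real ones):
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "my-repo",
          "path": { "$match": "com/example/*" }
        }
      }
    }
  ]
}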
When I try to delete the same artifacts with the same AQL, everything seems fine at first:
The artifacts are listed.
I get a question asking whether to delete them or not.
I answer yes, and the command goes on to log the deletion of each item.
Each item seems to be returned and printed to my console, followed by this message:
[Error] Artifactory response: 200 OK
Binaries are also printed to the console, so the console output is really messy.
In the end, I get a summary:
{
"status": "failure",
"totals": {
"success": 0,
"failure": 68
}
}
[Error] Artifactory response: 200 OK
With the same user, I can delete individual artifacts using the REST API, so the user does have the necessary rights to do deletion.
I am on version 1.38.2 of the JFrog CLI and 7.2.1 of Artifactory.
Can anybody help me understand what is wrong or how to debug this problem?
Update 2020/08/06:
When setting the log level to debug as suggested by @Prostagma, I get two extra lines of logging for each artifact. Here is an example of the logging for two artifacts:
[Info] [Thread 1] Deleting <path>/<artifact>.jar.sha512
[Debug] Sending HTTP DELETE request to: https://repo.enonic.com/<path>/<artifact>.jar.sha512
[Error] Artifactory response: 200 OK
<sha512 hash>
[Info] [Thread 0] Deleting <path>/<artifact>.jar
[Debug] Sending HTTP DELETE request to: https://repo.enonic.com/<path>/<artifact>.jar
[Error] Artifactory response: 200 OK
<binary contents of <artifact>.jar>
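(The debug output above was produced by raising the CLI log level before re-running the delete, e.g.:)
# Enable verbose JFrog CLI logging, then retry the same spec
export JFROG_CLI_LOG_LEVEL=DEBUG
jfrog rt del --spec search-aql.json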
A more detailed answer for whoever encounters this issue in the future:
By diving into the code we can see that the CLI expects to get status 204 from Artifactory after deletion:
https://github.com/jfrog/jfrog-client-go/blob/v0.12.0/artifactory/services/delete.go#L110
In some installations there may be a proxy that changes the response codes; for example, when Artifactory returns 204, the proxy may change the status code to 200.
Please make sure that your proxy doesn't change the status codes returned from Artifactory.
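One way to check this without the CLI is to issue a single DELETE through the same route and inspect the raw status line (path and credentials are placeholders):
# 204 means Artifactory answers correctly; 200 points at a proxy in between
curl -i -u user:password -X DELETE "https://repo.enonic.com/<path>/<artifact>.jar.sha512"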
Novice Artifactory user here, so please bear with me.
I am trying to use curl on Linux to deploy a file to a repo, and it is failing. I am running this on the same server that serves Artifactory.
% curl -u my_idsid -X PUT "http://localhost:8081/artifactory/test_repo/test_artifact_linux_01" -T test_artifact_linux_01
Enter host password for user 'my_idsid': <I entered the password for my_idsid here>
HTTP/1.1 100 Continue
HTTP/1.1 403 Forbidden
Server: Artifactory/5.4.6
X-Artifactory-Id: 3444ab9991d26041:29864758:15e2f4940fa:-8000
Content-Type: application/json;charset=ISO-8859-1
Content-Length: 65
Date: Tue, 05 Sep 2017 21:00:20 GMT
Connection: close
{
"errors" : [ {
"status" : 403,
"message" : ""
} ]
}
I was able to put (deploy) a file (an artifact) into this same repo from Windows with the drag-and-drop method for that same user. So I'm thinking this confirms several things...
1) the repo exists
2) the permissions allowing that user write access are OK
3) the server is up and running OK
And the password is OK, because when I intentionally enter something wrong, I get a 401 instead.
I looked in artifactory.log and it has this for my attempt...
2017-09-05 17:15:00,332 [http-nio-8081-exec-7] [WARN ] (o.a.w.s.RequestUtils:155) - Request /test_repo/test_artifact_linux_01 should be a repo request and does not match any repo key
Does not match a repo key?
I'm using the example here...
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-Example-DeployinganArtifact
and plugging in the repo name right after "artifactory" (replacing "my_repository") in the path.
I have a strong suspicion that I'm goofing up the path name, but I don't know what's wrong. How does one determine the proper path to use?
Thanks in advance for any help.
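One way to verify the exact repo key before guessing at paths is to list the repositories over the REST API (a sketch against the same local instance):
# List all visible repository keys, types and URLs
curl -u my_idsid -X GET "http://localhost:8081/artifactory/api/repositories"
The "key" fields in the response are what must appear right after /artifactory/ in the deploy path.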
UPDATE:
My bad. The repo was actually named "test-repo", not "test_repo".
Worked fine once I made that change.
Sorry for the false alarm.
Though I followed https://blog.openshift.com/lightweight-http-serving-using-nginx-on-openshift/ step by step, I ended up getting error 503, telling me the service is unavailable. There are questions about this on various websites, including Stack Overflow, but all of them concern issues that arise after a successful installation of nginx, a point I haven't reached yet.
I don't want to use the already available cartridges, in part because most of them are out of date. Also, an answer to my question might be of interest to some people, inasmuch as it will teach how to always run the latest nginx server on OpenShift.
This is the rhc tail result:
DL is deprecated, please use Fiddle
==> app-root/logs/diy.log <==
[2014-12-06 16:55:47] INFO WEBrick::HTTPServer#start done.
[2014-12-06 16:55:50] INFO WEBrick 1.3.1
[2014-12-06 16:55:50] INFO ruby 1.8.7 (2013-06-27) [x86_64-linux]
[2014-12-06 16:55:50] INFO WEBrick::HTTPServer#start: pid=255959 port=8080
127.xx.x.xxx - - [06/Dec/2014:17:11:57 EST] "HEAD / HTTP/1.1" 200 0
- -> /
127.xx.x.xxx - - [06/Dec/2014:17:11:57 EST] "HEAD / HTTP/1.1" 200 0
- -> /
[2014-12-06 17:32:02] INFO going to shutdown ...
[2014-12-06 17:32:02] INFO WEBrick::HTTPServer#start done.
==> app-root/logs/server.log <==
nginx: [emerg] invalid port in ":" of the "listen" directive in /var/lib/openshift/xxx/app-root/data//conf/nginx.conf:36
That guide (from 2012) uses the environment variables $OPENSHIFT_INTERNAL_IP:$OPENSHIFT_INTERNAL_PORT.
These have since been renamed to:
$OPENSHIFT_DIY_IP:$OPENSHIFT_DIY_PORT
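Since nginx cannot read environment variables from nginx.conf directly, the usual DIY-cartridge approach is to substitute them into the config from the start action hook (a sketch; the template name and layout are assumptions, adapt to your app):
# .openshift/action_hooks/start
# Render listen address/port from the DIY cartridge's env vars
sed -e "s|__IP__|${OPENSHIFT_DIY_IP}|" -e "s|__PORT__|${OPENSHIFT_DIY_PORT}|" \
    ${OPENSHIFT_DATA_DIR}conf/nginx.conf.template > ${OPENSHIFT_DATA_DIR}conf/nginx.conf
# nginx.conf.template contains:  listen __IP__:__PORT__;
nohup ${OPENSHIFT_DATA_DIR}nginx/sbin/nginx -c ${OPENSHIFT_DATA_DIR}conf/nginx.conf &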
If all you want is nginx on OpenShift, I would use this cartridge instead:
https://github.com/gsterjov/openshift-nginx-cartridge
Here's a cartridge that's updated to the most recent nginx 1.9.12
I'm having trouble setting up a Sentry server in HTTPS mode. Every now and then, reasonably often but seemingly at random, this error message gets written by Raven (the Sentry client) into the log files:
Unable to reach Sentry log server: <urlopen error [Errno 8] _ssl.c:504: EOF occurred in violation of protocol> (url: https://$(valid_server)/)
The web UI works fine. The vast majority of the messages from Raven are received fine, and Sentry processes them into usable output. However, due to these errors, something gets lost from time to time.
I have tried to figure this one out, but dead ends seem to follow one another. Basically it seems a lot like this:
Python Requests requests.exceptions.SSLError: [Errno 8] _ssl.c:504: EOF occurred in violation of protocol
But when I test my Sentry server with a similar s_client query using TLS 1.2, it leads to a valid session, unlike the example there.
It's also not about this, since SNI isn't used:
python-requests 2.0.0 - [Errno 8] _ssl.c:504: EOF occurred in violation of protocol
I'm not able to reproduce the error consistently. Raven's tests pass and nothing is acutely wrong, until an error pops up in the log.
My setup: Raven 4.2.1 on Python 2.7.5, Nginx 1.6.0 as a reverse proxy handling HTTPS, and finally Sentry 6.4.4 with the default Gunicorn 0.17.4. The Nginx config is pretty much the same as in the official documentation (http://sentry.readthedocs.org/en/latest/quickstart/nginx.html), with a few alterations due to HTTPS.
I ran into the same issue and got it fixed by installing the following dependencies:
On Ubuntu:
sudo aptitude install libffi-dev
And then via pip:
pip install pyopenssl ndg-httpsclient pyasn1
The problem seems to be that Python 2.x doesn't support SNI (a TLS extension) out of the box, as explained here.
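A quick way to check whether the interpreter Raven runs under can send SNI natively (ssl.HAS_SNI only exists on builds that support it, hence the getattr default):
# Prints True when the ssl module can send SNI; False on stock Python 2.7.5
python -c "import ssl; print(getattr(ssl, 'HAS_SNI', False))"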