Deleting artifacts with JFrog CLI gives "200 OK", but artifacts are not deleted - artifactory

I have created an AQL that gives me a number of artifacts I want to delete from Artifactory. I can run a search command with the jfrog-cli on it, and get the correct list of artifacts:
jfrog rt s --spec search-aql.json
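For reference, the spec uses the standard JFrog CLI file-spec format with an embedded AQL query; a minimal sketch along these lines (the repository name and patterns are placeholders, not my real values):
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "my-repo",
          "path": { "$match": "old-builds/*" },
          "name": { "$match": "*.jar" }
        }
      }
    }
  ]
}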
When I try to delete the same artifacts with the same AQL, everything seems fine:
The artifacts are listed
I get a question asking whether to delete them or not.
I answer yes, and the command goes on to log the deletion of each item.
Each item seems to be returned and printed to my console, followed by this message:
[Error] Artifactory response: 200 OK
Binaries are also printed to the console, so the console output is really messy.
In the end, I get a summary:
{
  "status": "failure",
  "totals": {
    "success": 0,
    "failure": 68
  }
}
[Error] Artifactory response: 200 OK
With the same user, I can delete individual artifacts using the REST API, so the user does have the necessary rights to do deletion.
I am on version 1.38.2 of the JFrog CLI and 7.2.1 of Artifactory.
Can anybody help me understand what is wrong or how to debug this problem?
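(For anyone else debugging this: the debug output shown in the update below comes from raising the CLI log level; a minimal sketch, assuming a Unix shell and the same spec file:)
# Raise JFrog CLI logging to DEBUG so each HTTP request is printed.
export JFROG_CLI_LOG_LEVEL=DEBUG
jfrog rt del --spec search-aql.json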
Update 2020/08/06:
When setting the log level to debug as suggested by @Prostagma, I get two extra lines of logging for each artifact. Here is an example of the logging for two artifacts:
[Info] [Thread 1] Deleting <path>/<artifact>.jar.sha512
[Debug] Sending HTTP DELETE request to: https://repo.enonic.com/<path>/<artifact>.jar.sha512
[Error] Artifactory response: 200 OK
<sha512 hash>
[Info] [Thread 0] Deleting <path>/<artifact>.jar
[Debug] Sending HTTP DELETE request to: https://repo.enonic.com/<path>/<artifact>.jar
[Error] Artifactory response: 200 OK
<binary contents of <artifact>.jar>

A more detailed answer for whoever encounters this issue in the future:
By diving into the code, we can see that the CLI expects status 204 from Artifactory after a deletion:
https://github.com/jfrog/jfrog-client-go/blob/v0.12.0/artifactory/services/delete.go#L110
In some installations there may be a proxy in front of Artifactory that rewrites response codes; for example, when Artifactory returns 204, the proxy may change the status code to 200.
Please make sure that your proxy doesn't change the status codes returned from Artifactory.
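A quick way to confirm this is to issue a single DELETE with curl, once directly against the Artifactory node and once through the proxy, and compare the status codes; a rough sketch (host, credentials, and artifact path are placeholders):
# A successful delete should return 204; if the same request through the
# proxy comes back as 200, the proxy is rewriting the status code.
curl -u "USER:PASSWORD" -s -o /dev/null -w "%{http_code}\n" \
  -X DELETE "https://artifactory.example.com/artifactory/REPO/path/to/artifact.jar"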

Related

Artifactory Migration - URL to files not working

I'm in the process of upgrading and migrating Artifactory version 6.11 (zip install, housed on RH7) to the 7.35 version (housed on a new server and hostname, rpm install). I'm doing this on a cloned VM as a test, so the only thing that is different from our original system is the hostname. As the documentation recommends, I first upgraded 6.11 to 7.35 and everything seemed to go well. I followed the upgrade steps and the migration.sh script completed successfully.
The major issue I'm having is that when I go into Artifacts, the 'url to file' is bringing up a 502 Bad Gateway nginx error. It seems to me that a pointer is incorrect somewhere and I'm confused as to where it could be. The upgrade was successful, so I know the data is there, but Artifactory is not able to link to it properly.
Update/clarification: To improve my description: When I head into Application bar / Artifactory / Artifacts and select a repo from the left-hand column, the 'url to file' fails to load. I'm assuming this is the tree view?
On the server that is currently working, a url such as https://acme/artifactory/repo leads to a directory listing. However, on the new server, a url such as https://new-acme-server/artifactory/repo would lead to a 502 Bad Gateway or an nginx error if I use http (no cert is installed on the test VM, but is installed on the original server).
In v7.35, I went into the 'http settings' and tried switching the server provider to both nginx and apache (Tomcat was set as the default). While the site operated fine under both, the url to the repo files still fails with an nginx error, regardless of the server provider.
When I did a full system export of the original server, the documentation had me uncheck "Exclude data". I also exported the repos out as well and imported those in via a path. Everything seems to show up correctly just like on the original server, but I'm still unable to view a directory listing when I click on the url.
Could it be the location of the filestore being different? If so, how would I go about pointing it to the right location?
V7.35: /opt/jfrog/artifactory/var/data/artifactory/filestore
V6.11: /opt/artifactory/artifactory-pro-6.11.3/data/filestore
The base URL is the same as the original installation http(s)://domain/artifactory
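(Side note for anyone comparing: a quick way to sanity-check that the migrated filestore actually contains binaries is to inspect the directory itself; the paths below are the v7.35 ones listed above:)
# The filestore keeps each binary in a directory named after the first two
# characters of its SHA-1 checksum; a non-trivial total size and a list of
# such directories suggest the data migrated over.
du -sh /opt/jfrog/artifactory/var/data/artifactory/filestore
ls /opt/jfrog/artifactory/var/data/artifactory/filestore | head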
Output from artifactory-service.log
2022-03-25T16:58:40.429Z [jfrt ] [INFO ] [3bb67ba1f30d560e] [ifactoryApplicationContext:564] [ttp-nio-8081-exec-10] - Artifactory application context set to READY by reload
2022-03-25T16:58:40.430Z [jfrt ] [INFO ] [3bb67ba1f30d560e] [c.CentralConfigServiceImpl:933] [ttp-nio-8081-exec-10] - Configuration reloaded.
2022-03-25T17:09:04.013Z [jfrt ] [INFO ] [708a8ae7c307ec92] [c.CentralConfigServiceImpl:914] [http-nio-8081-exec-5] - Reloading configuration... old revision 212, new revision 213
2022-03-25T17:09:04.121Z [jfrt ] [INFO ] [708a8ae7c307ec92] [c.CentralConfigServiceImpl:542] [http-nio-8081-exec-5] - New configuration with revision 213 saved.
2022-03-25T17:09:04.121Z [jfrt ] [INFO ] [708a8ae7c307ec92] [ifactoryApplicationContext:564] [http-nio-8081-exec-5] - Artifactory application context set to NOT READY by reload
2022-03-25T17:09:04.181Z [jfrt ] [INFO ] [708a8ae7c307ec92] [ifactoryApplicationContext:564] [http-nio-8081-exec-5] - Artifactory application context set to READY by reload
2022-03-25T17:09:04.181Z [jfrt ] [INFO ] [708a8ae7c307ec92] [c.CentralConfigServiceImpl:933] [http-nio-8081-exec-5] - Configuration reloaded.
2022-03-25T17:36:47.707Z [jfrt ] [INFO ] [d7bb51eedd93b03c] [aseBundleCleanupServiceImpl:84] [art-exec-20 ] - Starting to cleanup incomplete Release Bundles
2022-03-25T17:36:47.708Z [jfrt ] [INFO ] [d7bb51eedd93b03c] [b.ReleaseBundleServiceImpl:415] [art-exec-20 ] - Finished deleting orphan/unidentified items from _intransit repository
2022-03-25T17:36:47.709Z [jfrt ] [INFO ] [d7bb51eedd93b03c] [aseBundleCleanupServiceImpl:90] [art-exec-20 ] - Finished incomplete Release Bundles cleanup
Your filestore location is correct for both Artifactory 6 and Artifactory 7.
This indicates to me that the issue is with your reverse proxy.
To confirm, can you check the two things below?
Open your Artifactory on its IP and port:
http://localhost:8082/ (the default port is 8082 if you have not modified it). Now go to the tree view in the Application tab of Artifactory and try to download a specific file. If you are able to download it, then the issue is most likely not with the filestore or the upgrade; it is most likely the reverse proxy.
In that case, navigate to Artifactory > Administration > Artifactory > HTTP Settings > Generate new settings, place the generated settings in your reverse proxy, and restart.
If, in the above test, you are still not able to download, check the logs ($JFROG_HOME/artifactory/var/log/artifactory-service.log) for a message similar to the one below.
2022-03-24T20:15:35.072Z [jfrt ] [WARN ] [2a73d62655afd1ad] [.r.ArtifactoryResponseBase:136] [http-nio-8081-exec-9] - Sending HTTP error code 500: Could not process download request: Binary provider has no content for '165c79f8dff2f9e7d3ccadcbc295f7ef8e6e95f0'
If so, it indicates that Artifactory is not able to find the binary. If none of the above helps, post the snippet from that log file captured while trying to download a file, along with the file name.
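(A command-line version of the first check above, hitting the router port directly so the reverse proxy is taken out of the picture; the repo key and path are placeholders:)
# 200 here means Artifactory itself serves the file fine and the 502 comes
# from the proxy in front; a 500 with "Binary provider has no content" in
# the service log points at the filestore instead.
curl -sS -o /dev/null -w "%{http_code}\n" \
  "http://localhost:8082/artifactory/REPO_KEY/path/to/file.jar"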
For your update/clarification questions, please allow me to clarify.
In Artifactory 6.x, Artifactory acted as both the server and the UI, so you could use the "http://acme/artifactory" URL. In Artifactory 7.x, however, Artifactory changed to work with multiple microservices, and the UI has moved to its own microservice (now named "Frontend"). You can try accessing the "Native Browser" by using this URL: http://acme/ui/native/REPOSITORY/.
To add to the above and to Ganapathi's reply, the URL for your Artifactory has changed from http://acme:8081/artifactory to http://acme:8082, since Artifactory now uses the "router" microservice (external port 8082, internal port 8046) to redirect all requests to the respective microservices. You can check the full list here.
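(A quick sanity check against the new entry point, with acme as a placeholder hostname:)
# The ping endpoint returns "OK" when the request is routed correctly
# through the router on port 8082 to the Artifactory service.
curl -sS "http://acme:8082/artifactory/api/system/ping"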
I hope this clarifies more.

Self-hosted GitLab server with RPi and pitunnel showing HTTP error 413 when trying to push

1. Problem
The git push command returns the following error if one file is larger than ~1MB:
Pushing to http://mygitlabserver.pitunnel.com/root/my_project.git
POST git-receive-pack (1163897 bytes)
error: RPC failed; HTTP 413 curl 22 The requested URL returned error: 413
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
The server is an RPi 4 with an SSD attached, accessed via pitunnel (standard subscription).
The push fails if one file is larger than 1MB
The push returns no error even if the commit is 150MB (a lot of small files)
The push returns no error if an mp3 file of multiple MBs gets pushed.
2. Problem
Not really a problem, but it may be related to the first one.
If a large project that was exported from gitlab.com is imported, it returns the same error:
413 Request Entity Too Large
nginx/1.10.3 (Ubuntu)
But this only happens when connected via pitunnel (link); it works if the project is uploaded on the local network.
nginx seems to be the problem.
In the gitlab.rb file the following parameters are set and the gitlab service was restarted according to the gitlab docs:
nginx['enable'] = true
nginx['client_max_body_size'] = '900m'
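For reference, the restart mentioned above was presumably the standard Omnibus sequence, along the lines of:
# Re-render the bundled nginx configuration from gitlab.rb, then restart nginx.
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart nginx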
PS: The repo will use git LFS after this problem is solved.
For everyone with a similar problem:
Pitunnel was the problem.

I am having an issue recursively pulling a folder from Artifactory using JFrog CLI

I am using an Artifactory server. I am trying to store a folder of third party files in my repo.
I was able to upload a zipfile and explode it so I have my files in MY_REPO/MY_FOLDER/
I am trying to recursively download the contents of the folder using:
jfrog rt dl --recursive MY_REPO/MY_FOLDER
I have enabled debug and I get:
[Info] Searching items to download...
[Debug] Searching Artifactory using AQL query:
items.find({"repo": "MY_REPO","path": {"$ne": "."},"$or": [{"$and":[{"path": {"$match": "MY_FOLDER"},"name": {"$match": "*"}}]},{"$and":[{"path": {"$match": "MY_FOLDER/*"},"name": {"$match": "*"}}]}]}).include("name","repo","path","actual_md5","actual_sha1","size","type","property")
[Error] Artifactory response: 405 Method Not Allowed
{
  "status": "failure",
  "totals": {
    "success": 0,
    "failure": 0
  }
}
[Error] Download finished with errors. Please review the logs
I have tried variations of MY_FOLDER, MY_FOLDER/, MY_FOLDER/. and with/without recursive. I get the same error.
I know I have permissions because I can upload a file:
jfrog rt u TEST_FILE MY_FOLDER
succeeds just fine. But I can't turn around and pull it down:
jfrog rt dl MY_FOLDER/TEST_FILE
results in:
[Info] Searching items to download...
[Debug] Searching Artifactory using AQL query:
items.find({"repo": "MY_REPO","$or": [{"$and":[{"path": {"$match": "."},"name": {"$match": "MY_FILE"}}]}]}).include("name","repo","path","actual_md5","actual_sha1","size","type","property")
[Error] Artifactory response: 405 Method Not Allowed
{
  "status": "failure",
  "totals": {
    "success": 0,
    "failure": 0
  }
}
[Error] Download finished with errors. Please review the logs
I'm not certain what isn't working; there's nothing on Stack Overflow, in the JFrog docs, or elsewhere about issues like this with the JFrog CLI. :(
My jfrog-cli.conf looks like this:
{
  "artifactory": [
    {
      "url": "https://SITE_NAME:PORT_NUM/artifactory/list/MY_REPO/",
      "user": "USERNAME",
      "serverId": "MY_REPO",
      "isDefault": true,
      "apiKey": "API_KEY_FROM_ARTIFACTORY"
    }
  ],
  "Version": "1"
}
UPDATED
I was running version 1.22.1 for Windows 64-bit; I am now using JFrog CLI version 1.34.1 for Windows 64-bit. Per the first comment, I updated to the latest binary and tried running a search, both with and without MY_FOLDER:
jfrog rt s TEST_FILE and
jfrog rt s MY_FOLDER/TEST_FILE
Both return the same results. I enabled DEBUG, and with the newer binary I actually seem to get two errors. The older binary just reported the 405 error; the newer one reports a 404 (Not Found) and then reports the error as a 405 Method Not Allowed.
[Debug] Sending usage info...
[Info] Searching artifacts...
[Debug] Searching Artifactory using AQL query:
items.find({"$or":[{"$and":[{"repo":"TEST_FILE","path":{"$match":"*"},"name":{"$match":"*"}}]}]}).include("name","repo","path","actual_md5","actual_sha1","size","type","modified","created","property")
[Debug] Sending HTTP POST request to: https://SITE_NAME:8443/artifactory/list/MY_REPO/api/search/aql
[Debug] Sending HTTP GET request to: https://SITE_NAME:8443/artifactory/list/MY_REPO/api/system/version
[Debug] Artifactory response: 404 Not Found
{
  "errors": [
    {
      "status": 404,
      "message": "File not found."
    }
  ]
}
[Error] Artifactory response: 405 Method Not Allowed
Which is still odd because I'm looking for a file I just uploaded. I even deleted the uploaded file and re-uploaded to verify the newer cli can still upload files. I tried a wildcard search to find all files but it returned the same error.
I can also see the file where it is supposed to be using the Artifactory web interface, and download the file and delete it. So Artifactory knows it's there and I have access to it. :-/
The url in your configuration should only contain the base URL of Artifactory, i.e. "SITE_NAME:PORT_NUM/artifactory".
You should provide the directory path in the upload and download commands.
Use the ping command to verify your setup works, then progress from there.
BTW: the reason the upload works but the download fails is coincidence. The upload command just uses the upload REST API, which expects a path (which you provided in your config), but the download command first sends a search request that does not expect a path, so it fails.
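A rough sketch of what the corrected setup might look like (SITE_NAME, PORT_NUM, and the repository/folder names are still placeholders):
jfrog-cli.conf:
{
  "artifactory": [
    {
      "url": "https://SITE_NAME:PORT_NUM/artifactory/",
      "user": "USERNAME",
      "serverId": "MY_REPO",
      "isDefault": true,
      "apiKey": "API_KEY_FROM_ARTIFACTORY"
    }
  ],
  "Version": "1"
}
# Verify connectivity first, then put the repository and folder path in the command itself.
jfrog rt ping
jfrog rt dl "MY_REPO/MY_FOLDER/"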

Artifactory, using curl to deploy an artifact not working (code 403)

Novice Artifactory user here, so please bear with me.
I'm trying to use curl on Linux to deploy a file to a repo and failing. I'm running this on the same server that's serving Artifactory.
% --> curl -u my_idsid -X PUT "http://localhost:8081/artifactory/test_repo/test_artifact_linux_01" -T test_artifact_linux_01
Enter host password for user 'my_idsid': <I entered the pw for my_user here>
HTTP/1.1 100 Continue
HTTP/1.1 403 Forbidden
Server: Artifactory/5.4.6
X-Artifactory-Id: 3444ab9991d26041:29864758:15e2f4940fa:-8000
Content-Type: application/json;charset=ISO-8859-1
Content-Length: 65
Date: Tue, 05 Sep 2017 21:00:20 GMT
Connection: close
{
  "errors" : [ {
    "status" : 403,
    "message" : ""
  } ]
}
I was able to put (deploy) a file (an artifact) in this same repo from Windows with the drag-and-drop method for that same user. So I'm thinking that this confirms several things...
1) the repo exists
2) the permissions to allow that user write access is ok
3) the server is up and running ok
And the pw is OK, because when I intentionally enter something wrong, it comes up with a 401 instead.
I looked in artifactory.log and it has this for my attempt...
2017-09-05 17:15:00,332 [http-nio-8081-exec-7] [WARN ] (o.a.w.s.RequestUtils:155) - Request /test_repo/test_artifact_linux_01 should be a repo request and does not match any repo key
Does not match a repo key?
I'm using the example here...
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-Example-DeployinganArtifact
and plugging in the repo name right after "artifactory" (replacing "my_repository") in the path.
I have a strong suspicion that I'm goofing up the path name, but I don't know what's wrong. How does one determine the proper path to use ?
Thanks in advance for any help.
UPDATE:
My bad. The repo was actually named "test-repo", not "test_repo".
Worked fine once I made that change.
Sorry for the false alarm.
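(One way to catch this kind of typo up front is to ask Artifactory for the repository keys it actually knows about; the host and credentials below are placeholders:)
# Lists every repository key visible to this user, so "test-repo" vs
# "test_repo" mix-ups show up immediately.
curl -u my_idsid "http://localhost:8081/artifactory/api/repositories"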

Gitlab cannot clone over HTTP after upgrade to 6.5

After upgrading GitLab from 6.3 to 6.5, everything looks good, except that I cannot clone any repositories over HTTP. I can clone/fetch/push over SSH and also browse the repositories over HTTP, but I just cannot clone over HTTP.
The error message is:
$ git clone http://mydomain/mygroup/test.git
Cloning into 'test'...
fatal: protocol error: bad line length character: 3ca
fatal: The remote end hung up unexpectedly
Any hints are appreciated.
====== More Information =======
When executing the clone as above, there are two nginx logs:
192.168.1.103 - - [26/Feb/2014:17:51:58 +0800] "GET /mygroup/test.git/info/refs?service=git-upload-pack HTTP/1.1" 200 283 "-" "git/1.8.3.4 (Apple Git-47)" "-"
192.168.1.103 - - [26/Feb/2014:17:51:58 +0800] "POST /mygroup/test.git/git-upload-pack HTTP/1.1" 200 994 "-" "git/1.8.3.4 (Apple Git-47)" "-"
And here is the error I got when trying to "git fetch" inside a project which was cloned over HTTP before:
$ git fetch
fatal: git fetch_pack: expected ACK/NAK, got '
0038ACK 12c5f4a0f130acfd8ef502a28989f42d37228016 common
0038ACK 736453e9369a9bb91b2b49b17419c168e4b61c5b common
0031ACK 736453e9369a9bb91b2b49b17419c168e4b61c5b
0022Counting objects: 114, done.
002eCompressing objects: 100% (67/67), done.
2004PACK'
Git 2.13 (Q2 2017) could add a more explicit/precise error message than "expected ACK/NAK".
That would help explain the particular clone error in the OP's case.
See commit 8e2c7be (12 Apr 2017) by Jonathan Tan (jhowtan).
(Merged by Junio C Hamano -- gitster -- in commit d2617eb, 24 Apr 2017)
fetch-pack: show clearer error message upon ERR
Currently, fetch-pack prints a confusing error message ("expected ACK/NAK") when the server it's communicating with sends a pkt-line starting with "ERR".
Replace it with a less confusing error message.
Also update the documentation describing the fetch-pack/upload-pack
protocol (pack-protocol.txt) to indicate that "ERR" can be sent in the place of "ACK" or "NAK".
In practice, this has been done for quite some
time by other Git implementations (e.g. JGit sends "want $id not valid")
and by Git itself (since commit bdb31ea: "upload-pack: report "not our ref" to client", 2017-02-23) whenever a "want" line references an object that it does not have. (This is uncommon, but can happen if a repository is garbage-collected during a negotiation.)
