Artifactory Plugin build.name concatenates with +-+ syntax - artifactory

When using the Artifactory plugin for Bamboo with spec files, the build.name is concatenated with +-+ on the PUT, which causes a 404. I have replicated this using curl and see the same behavior. If I take the +-+ out of the manual curl, everything works fine.
e.g. this is the put that is generated:
PUT https://our-url.com/artifactory/generic-repo-local/base-artifacts/ansible/ansible_archive.tgz;
vcs.revision=somerevisionstring;
build.timestamp=1525095244921;
build.name=test;
build.name=AWS_CheckPoint_Management+-+BUILD_CONFIG_MANAGEMENT_DRAFT+-+Push_To_Artifactory;
build.number=22;
A 404 is returned from Artifactory. If I take out the build.name in curl from the same build agent, it all works fine.
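The '+-+' looks like form-style encoding of the ' - ' separators in the Bamboo plan name (a '+' for each space), and '+' is only an encoding for space in query strings, not in URL path segments. One thing worth testing is percent-encoding the build.name value before it goes into the matrix parameters; a minimal sketch, with a hypothetical helper and the plan name taken from the request above:

```shell
# Hypothetical helper: percent-encode a property value so that spaces become
# %20 rather than '+' (which is not decoded inside a path segment).
urlencode() {
  local s=$1 out= c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case "$c" in
      [a-zA-Z0-9._~-]) out+=$c ;;
      *) printf -v c '%%%02X' "'$c"; out+=$c ;;
    esac
  done
  printf '%s\n' "$out"
}

# The ' - ' separators come out as %20-%20 instead of '+-+':
urlencode 'AWS_CheckPoint_Management - BUILD_CONFIG_MANAGEMENT_DRAFT - Push_To_Artifactory'
```

The encoded value can then be placed after `;build.name=` in the PUT URL to check whether Artifactory accepts it.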

Related

Can't download file inside nupkg

I have a nuspec file inside a nuget package stored in artifactory.
In Artifact Repository Browser I'm able to view and download this nuspec file.
However I can't download it from jfrog.exe.
I've tried
jfrog.exe rt dl foldername/packagename.nupkg!/filename.nuspec
and I get nothing.
So far my only solution is to download the entire package
jfrog.exe rt dl foldername/packagename.nupkg
and then unzip it to extract filename.nuspec
Do you have a better suggestion ?
Your observation is correct. The CLI does not yet support downloading files from inside an archive file such as a ZIP/nupkg.
Alternatively, you can use the REST API:
curl -uadmin:password -O "http://localhost:8082/artifactory/nuget-local/entityframework.6.2.0.nupkg\!/EntityFramework.nuspec"
Also, make sure to add a backslash (\) before the special character (!) as an escape.
You may raise an improvement request on https://github.com/jfrog/jfrog-cli/issues
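A note on that escape: the '!' only needs protecting from history expansion in an interactive shell; in a script, quoting alone is enough. Also worth knowing is that inside double quotes bash does not remove the backslash before '!', so the escaped form carries a stray '\' into the string. A quick demonstration, with printf standing in for the path part of the curl call:

```shell
# Single quotes keep '!' literal with no backslash left behind:
printf '%s\n' 'entityframework.6.2.0.nupkg!/EntityFramework.nuspec'

# Inside double quotes, bash does NOT strip the backslash before '!',
# so the escaped form keeps a stray '\' in the text:
printf '%s\n' "entityframework.6.2.0.nupkg\!/EntityFramework.nuspec"
```

If the server rejects the request, checking whether a literal backslash ended up in the URL is a good first step.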

Wget download pdf

I am trying to download a pdf file using wget.
When I do wget <url>, it downloads a corrupted file. However, if I run wget -i test.txt with the PDF URL inside that test.txt file, it works and the file is not corrupted.
Does anyone know why?
From the logs I can see the following.
In the first case, it is downloading a "not found" page.
Length: 11322 (11K) [text/html] Saving to: ‘media.nl?id=39194.1’
In the second it is a proper pdf.
Length: 58272 (57K) [application/pdf] Saving to:
‘media.nl?id=39194&c=4667446&h=34c63dbaaa7adc7c8a33&_xt=.pdf’
Thanks,
Put your URL in quotes. Not quoting the URL can lead to strange effects; in your case, the & is interpreted by the shell.
E.g.
wget "https://www.roofingsuppliesuk.co.uk/core/media/media.nl?id=39194&c=4667446&h=34c63dbaaa7adc7c8a33&_xt=.pdf"
or
wget 'https://www.roofingsuppliesuk.co.uk/core/media/media.nl?id=39194&c=4667446&h=34c63dbaaa7adc7c8a33&_xt=.pdf'
or with escaping of &
wget https://www.roofingsuppliesuk.co.uk/core/media/media.nl?id=39194\&c=4667446\&h=34c63dbaaa7adc7c8a33\&_xt=.pdf
I got the same issue, but I changed the command to this and then it worked fine when I tested it:
wget --no-check-certificate https://www.roofingsuppliesuk.co.uk/core/media/'media.nl?id=39194&c=4667446&h=34c63dbaaa7adc7c8a33&_xt=.pdf'
I just added single quotes around 'media.nl.......pdf'.
Make sure a file with the same name doesn't already exist. You don't need to add --no-check-certificate unless you get a self-signed certificate error.
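To see concretely why the quoting matters, here is a sketch with a stand-in for wget that just prints the single argument it receives; everything after the first unquoted '&' is parsed by the shell as a separate background command and never reaches wget:

```shell
# fake_wget stands in for wget and prints the one URL argument it was given.
fake_wget() { printf 'wget got: %s\n' "$1"; }

# Quoted: the full URL, query string and all, arrives as one argument.
fake_wget 'https://www.roofingsuppliesuk.co.uk/core/media/media.nl?id=39194&c=4667446&h=34c63dbaaa7adc7c8a33&_xt=.pdf'

# Unquoted, the shell would cut the command at the first '&', so this is all
# wget would ever see -- which matches the HTML "not found" page in the logs:
fake_wget https://www.roofingsuppliesuk.co.uk/core/media/media.nl?id=39194
```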

How to make a groovy script which uploads a file to JFrog's artifactory

I'm trying to write a simple Groovy script which deploys a text file into my Artifactory instance. I read the REST API documentation to understand how to write the script, but I've seen so many vastly different versions online that I'm confused.
I want it to be a simple groovy script using the REST API and curl.
This is what JFrog are suggesting in their website:
curl -u myUser:myP455w0rd! -X PUT "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/file.txt" -T Desktop/myNewFile.txt
And it might work perfectly but I don't understand each part here, and I don't know if I can simply integrate this into a groovy script as is or some adjustments are needed.
I'm a beginner in this field and I would love any help!
Thanks in advance
As you are using the '-T' flag, it is not required to also use '-X PUT'.
Also, using '-T' allows you to omit the file name on the destination, so your path can be "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/" and the file name will be the same as it is at the origin.
The full command will look like this:
curl -u user:password -T Desktop/myNewFile.txt "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/"
Now, just to be on the safe side: you are going to have the file name and the destination path as variables, right?
The -T flag should only be used for uploading files, so don't assume that every '-X PUT' can be replaced with '-T'; but for this specific case of uploading a file, it is possible.
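Since the file name and destination path will likely be variables (whether the script ends up in Groovy or plain shell), a parameterized dry-run sketch might look like this; every value below is a placeholder:

```shell
# Placeholder settings -- substitute your own server, repo, and credentials.
ARTIFACTORY_URL='http://localhost:8081/artifactory'
REPO='my-repository'
TARGET_DIR='my/new/artifact/directory'
LOCAL_FILE='Desktop/myNewFile.txt'
ART_USER='myUser'
ART_PASS='myP455w0rd!'

# Echoed instead of executed so the command can be inspected first;
# drop the leading 'echo' to perform the actual upload.
echo curl -u "$ART_USER:$ART_PASS" -T "$LOCAL_FILE" "$ARTIFACTORY_URL/$REPO/$TARGET_DIR/"
```

The trailing slash on the URL is what lets curl reuse the local file name on the server side.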

Where is the documentation for how to access raw markdown for a GitHub wiki page?

I read a blog post describing how to access the raw content inside a GitHub wiki:
This is not entirely obvious (at least it wasn’t for me), but since
Github wikis are actually backed by a proper Git repo, I figured it
should be possible to access the raw markdown for a page by using
Github’s https://raw.github.com/ style URLs.
After some minor trial/error, it turns out to be very predictable (as
many things in github):
https://raw.github.com/wiki/[user]/[project]/[page].md
I have a repo mbigras/hello-world with a wiki page mbigras/hello-world/wiki/foobar. So according to the pattern above the following should work:
https://raw.github.com/wiki/mbigras/hello-world/foobar.md
It seems like GitHub has changed its routing as shown below:
$ curl https://raw.github.com/wiki/mbigras/hello-world/foobar.md
$ curl -Is https://raw.github.com/wiki/mbigras/hello-world/foobar.md 2>&1 | head -n 2
HTTP/1.1 301 Moved Permanently
Location: https://raw.githubusercontent.com/wiki/mbigras/hello-world/foobar.md
$ curl -L https://raw.github.com/wiki/mbigras/hello-world/foobar.md
{
"foo": "bar",
"cat": "dog",
"red": "hat"
}
So the new pattern seems to be:
https://raw.githubusercontent.com/wiki/[user]/[project]/[page].md
Does GitHub publish documentation about how to access the raw markdown source for a wiki page?
Does GitHub publish documentation about how to access the raw markdown source for a wiki page?
Yes. GitHub documented how to export a wiki in a blog post when wikis were released in 2010:
Each wiki is a Git repository, so you're able to push and pull them like anything else. Each wiki respects the same permissions as the source repository. Just add ".wiki" to any repository name in the URL, and you're ready to go.
In your mbigras/hello-world case, the command would be:
git clone https://github.com/mbigras/hello-world.wiki.git
So as of Feb 2019, this works:
wget https://raw.githubusercontent.com/wiki/<username>/<repo-name>/<page>.md
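Both access paths (the raw URL and the wiki clone) follow the same predictable pattern, so they are easy to script; a small sketch using the repo from the question:

```shell
# Build the raw-markdown URL and the wiki clone URL from their parts.
user=mbigras
repo=hello-world
page=foobar

raw_url="https://raw.githubusercontent.com/wiki/$user/$repo/$page.md"
clone_url="https://github.com/$user/$repo.wiki.git"

echo "$raw_url"
echo "$clone_url"
```

The first URL can be fed to wget or curl; the second to git clone.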

Download Folder including Subfolder via wget from Dropbox link to Unix Server

I have a Dropbox link like https://www.dropbox.com/sh/w4366ttcz6/AAB4kSz3adZ which opens the usual Dropbox site with folders and files.
Is there any chance to download the complete content (tar or directly as sync) to a unix machine using wget?
I have seen some posts here where single files were downloaded, but could not find any answer for this. There is an API from Dropbox, but it does not work on my server due to a 64-bit issue, and http://www.dropboxwiki.com/dropbox-addons/dropbox-gallery-download#BASH_Version also does not work for me.... any other suggestions?
This help article documents some parameters you can use to get different behaviors from Dropbox shared links:
https://www.dropbox.com/help/201
For example, using this link:
https://www.dropbox.com/sh/igoku2mqsjqsmx1/AAAeF57DR2ou_nZGC4JPoQKfa
We can use the dl parameter to get a direct download. Using curl, we can download it as such:
curl -L https://www.dropbox.com/sh/igoku2mqsjqsmx1/AAAeF57DR2ou_nZGC4JPoQKfa?dl=1 > download.zip
(The -L is necessary in order to follow redirects.)
Or, with wget, something like:
wget --max-redirect=20 -O download.zip https://www.dropbox.com/sh/igoku2mqsjqsmx1/AAAeF57DR2ou_nZGC4JPoQKfa
You can use --content-disposition with wget too.
wget https://www.dropbox.com/sh/igoku2mqsjqsmx1/AAAeF57DR2ou_nZGC4JPoQKfa --content-disposition
It will auto-detect the folder name as the zip filename.
Currently, you're probably better off creating an app that you don't publish, which can access either all your files or just a dedicated app folder (safer). Click the generate-API-token button about halfway down the app's settings page, and store the token securely! You can then use the dedicated download or ZIP-download API calls to get your files from anywhere, like so:
curl -X POST https://content.dropboxapi.com/2/files/download_zip \
--header "Authorization: Bearer $MY_DROPBOX_API_TOKEN" \
--header 'Dropbox-API-Arg: {"path": "/path/to/directory"}' \
> useful-name.zip
Adding your token as an environment variable makes it easier and safer to type and script these operations. If you're using Bash and have ignorespace in your $HISTCONTROL, you can type a leading space before pasting your key so it's not saved in your history. For frequent use, save it in a file with 0600 permissions that you can source, as you would an SSH key.
export MY_DROPBOX_API_TOKEN='...'
Yes you can, as it is pretty easy. Follow the steps below:
Firstly, get the Dropbox share link. It will look like this: https://www.dropbox.com/s/ad2arn440pu77si/test.txt
Then add "?dl=1" to the end of that URL and a "-O filename", so that you end up with something like this: wget https://www.dropbox.com/s/ad2arn440pu77si/test.txt?dl=1 -O test.txt
Now you can easily get files onto your Linux machine.
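The steps above amount to one string edit, so they script easily; a sketch using the share link from the answer (this assumes the link has no existing query string, and a shared-folder link works the same way, arriving as a ZIP):

```shell
# Force a direct download by appending ?dl=1 to the share link.
share='https://www.dropbox.com/s/ad2arn440pu77si/test.txt'
direct="${share}?dl=1"

echo "$direct"
# The local file name can be taken from the link itself:
echo wget "$direct" -O "$(basename "$share")"
```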
