Need a way to copy a file to Livelink from a cmd prompt (a la davcopy) - livelink

Has anyone written something like davcopy for Livelink? (davcopy works with SharePoint)
I have downloaded davcopy and it hangs when trying to use it with Livelink.
I've asked Open Text, and their response is "There is no way to do this out of the box; it will require writing a web services application."
I'm not sure how to write a web services application for Livelink, so before I explore that I was wondering if anyone had done an implementation of davcopy for Livelink.

I know about a command line application that uses Microsoft PowerShell to do what you want (http://www.gatevillage.net/public/content-server-desktop-library-powershell-suite)
It wouldn't be too difficult to write something like this in Ruby or Perl. Both support WS/SOAP.
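As a rough illustration, even plain curl can post a SOAP envelope. Everything below (endpoint, SOAPAction value, envelope file) is a placeholder rather than the actual Livelink web services API; the real service and operation names come from the WSDL shipped with your Content Server:
curl \
-H "Content-Type: text/xml; charset=utf-8" \
-H "SOAPAction: Authenticate" \
--data @envelope.xml \
http://server/instance/ws/Authentication.svc
Here envelope.xml would contain the SOAP request body for the operation being called.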
With which version of Livelink (or Content Server) do you work?

You can use the curl command line tool to upload, download, or delete files in Livelink. It makes HTTP requests against the CS REST API, which is available in CS 10.0 or newer.
For example, uploading a file "file.ext" to folder 8372 at http://server/instance/cs as Admin:
curl \
-F "type=144" \
-F "parent_id=8372" \
-F "name=file.ext" \
-F "file=#/path/to/file.ext" \
-u "Admin:password" \
-H "Expect:" \
http://server/instance/cs/api/v1/nodes
The "Expect" header has to be forced empty, because CS REST API does not support persistent connections, but curl would always enable them for this request.

Related

How to convert Curl to url with headers

I have this command in cURL and it works
curl -X GET \
-H "X-Parse-Application-Id: APP_ID" \
-H "X-Parse-REST-API-Key: API_KEY" \
-G https://parseapi.back4app.com/classes/Books
I want to create a URL that will execute the same way in the browser.
The website I'm working with is back4app.
There is no way to achieve the same thing with just a URL. This relies on HTTP headers (both -H parameters) that can't easily be expressed any other way. To set these headers in a web browser, you'd at least need to execute JavaScript.
There might be a way if the target API supports reading the same fields from the URL (technically, there's no reason not to offer this). I haven't found anything on that topic in their docs, though.

How can I set the version of a raw file in a (Sonatype) Nexus raw repository?

I'm making an automatic script to upload a raw file to Nexus, and I need to set the version of this file. Is this possible? I've been checking the API, but it doesn't seem possible.
The command I'm currently using to upload is:
curl --proxy $my-proxy -v --user 'user:pass' --upload-file ./myRawFile 'http://12.34.56.78:1234/repository/MyRawRepo/LoL/TheUploadedFile'
This command is being used from an automatic script (and working) to upload the file, but I don't know how to set the version.
curl -k -u "xxx:xxx" -H 'Content-Type: multipart/form-data' --data-binary "#output.zip" -X PUT https://nexus.xxx.com/repository/{raw-reponame}/xxx/{version}/output.zip
Version number can be change {version}
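Since a raw repository has no separate version metadata, the usual approach is to encode the version in the upload path. A minimal sketch of how the script from the question could parameterize it (the VERSION variable and path layout are illustrative assumptions, not a Nexus convention):
VERSION=1.0.0
curl --proxy $my-proxy -v --user 'user:pass' --upload-file ./myRawFile "http://12.34.56.78:1234/repository/MyRawRepo/LoL/${VERSION}/myRawFile"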

JFrog Artifactory API Deploy Artifact with Properties

Reading this API guide. My Artifactory version is 4.12.2.
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-ItemProperties
It says to deploy an artifact like so.
curl -u myUser:myP455w0rd! -X PUT "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/file.txt" -T Desktop/myNewFile.txt
That works fine but I also want to add properties to file.txt while also uploading. I did see a separate API to set properties.
PUT /api/storage/libs-release-local/ch/qos/logback/logback-classic/0.9.9?properties=os=win,linux;qa=done&recursive=1
That works. I thought maybe it would work to do this.
curl -u myUser:myP455w0rd! -X PUT "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/file.txt?properties=os=win,linux;qa=done&recursive=1" -T Desktop/myNewFile.txt
It didn't work. Is it possible to upload an artifact and simultaneously set properties or does it have to be two different API calls?
I do use jfrog cli but I need an API solution.
The correct format would be something like:
curl -u myUser:myP455w0rd! -X PUT "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/file.txt;propertyA=valueA;propertyB=valueB" -T Desktop/myNewFile.txt
You can find the relevant documentation here (I agree that it was "well hidden")
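Applied to the properties from the question, that would look something like this (multiple values for one property are comma separated; worth verifying against the matrix-parameters documentation for your Artifactory version):
curl -u myUser:myP455w0rd! -X PUT "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/file.txt;os=win,linux;qa=done" -T Desktop/myNewFile.txt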

CURL Command To Create A File On Server

I have a mini program/server built on one of my computers (Machine1), and I am trying to create or overwrite a file through cURL on another computer (Machine2). So Machine2 is connected to Machine1. I've been looking through cURL's documentation for a command that will do this, but have had no luck, likewise on Stack Overflow.
https://curl.haxx.se/docs/manpage.html
I have also tried the examples on this SO post:
HTTP POST and GET using cURL in Linux
Any idea what the command might be through the command prompt (the equivalent of a POST command)? So far I have tried -O, -K, -C, and a multitude of others, which have not worked.
On the command line, all you need to do is use curl --form to simulate a multipart/form-data POST request:
curl --form "testfile=@thefilename.jpg" http://<Machine2>/<Path>
testfile is the field name used for the form; if you don't care, just use any English word.
@ is used here to make the file thefilename.jpg get attached in the post as a file upload. Refer to the curl man page.
On the server side, the URL http://<Machine2>/<Path> should be listened on. When curl sends the POST request, the server-side program should receive it, extract the attached file (thefilename.jpg), and save it to disk.
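If your server-side program can handle HTTP PUT instead, a plain upload is even simpler; a sketch, assuming the handler on Machine2 writes the request body to disk at the requested path (that behavior is up to your server, not something curl provides):
curl --upload-file thefilename.jpg http://<Machine2>/<Path>/thefilename.jpg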

Get list of files via http server using cli (zsh/bash)

Greetings to everyone,
I'm on OSX. I use the terminal a lot, a habit from my old Linux days that I never outgrew. I wanted to download the files listed on this HTTP server: http://files.ubuntu-gr.org/ubuntistas/pdfs/
I selected them all with the mouse, put them in a txt file, and then gave the following command in the terminal:
for i in `cat ../newfile`; do wget http://files.ubuntu-gr.org/ubuntistas/pdfs/$i;done
I guess it's pretty self-explanatory.
I was wondering if there's any easier, better, cooler way to download these "linked" PDF files using wget or curl.
Regards
You can do this with one line of wget as follows:
wget -r -nd -A pdf -I /ubuntistas/pdfs/ http://files.ubuntu-gr.org/ubuntistas/pdfs/
Here's what each parameter means:
-r makes wget recursively follow links
-nd avoids creating directories so all files are stored in the current directory
-A restricts the files saved by type
-I restricts by directory (this one is important if you don't want to download the whole internet ;)
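If you'd rather use curl, which has no recursive mode, a rough equivalent is to scrape the index page and fetch each linked PDF. A sketch, assuming the server returns a plain HTML directory listing with relative href values:
curl -s http://files.ubuntu-gr.org/ubuntistas/pdfs/ \
| grep -o 'href="[^"]*\.pdf"' | cut -d'"' -f2 \
| while read -r f; do curl -O "http://files.ubuntu-gr.org/ubuntistas/pdfs/$f"; done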
