Can I get a conflict in Riak when using a vclock obtained by a GET? - riak

I want to know if I can get a conflict in this scenario:
#!/usr/bin/env bash
curl -XPUT -d '{"bar":"baz"}' \
-H "Content-Type: application/json" \
http://127.0.0.1:8098/riak/obj/1
response=$(curl -I http://127.0.0.1:8098/riak/obj/1 | grep 'X-Riak-Vclock:' | egrep -o ' .*$')
curl -v -XPUT -d '{"bar":"foo"}' \
-H "Content-Type: application/json" \
-H "X-Riak-Vclock: $response" \
http://127.0.0.1:8098/riak/obj/1
In a few words:
First, there is no object for key 1, so I put the value {"bar":"baz"} with a PUT via the HTTP API.
Then, I read the value with a GET and store the vclock in a variable.
And finally, I put a new value {"bar":"foo"} for key 1.
Is there a case where I can end up with {"bar":"baz"} for key 1? If Riak has a conflict, will it be resolved with the vclock?
Thanks!

It depends on how your Riak database is configured, either globally or if you changed the default configuration of the bucket you're using. If you keep the default config, your second PUT (with the vclock) might:
- fail, if someone updated the key behind your back (rare) and the vclock data you have is already obsolete. You need to re-read the value and update it; best is to have a retry mechanism.
- fail, if the write consistency constraints you have are too strict and too many nodes are down (rare). Usually the default read and write settings are sane.
- succeed, if the vclock data is still valid for this key (most of the time).
In the case where it succeeds, it might be that the network topology was in a split-brain situation. In this case, Riak will resolve the issue itself using the vclock data.
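For the retry mechanism mentioned above, here is a minimal sketch using the same HTTP API as the question (the loop bound and the non-2xx test are assumptions; the exact failure behaviour depends on your bucket configuration):
for attempt in 1 2 3; do
  vclock=$(curl -s -I http://127.0.0.1:8098/riak/obj/1 | grep 'X-Riak-Vclock:' | egrep -o ' .*$')
  status=$(curl -s -o /dev/null -w '%{http_code}' -XPUT -d '{"bar":"foo"}' \
    -H "Content-Type: application/json" \
    -H "X-Riak-Vclock: $vclock" \
    http://127.0.0.1:8098/riak/obj/1)
  case "$status" in 2*) break ;; esac   # anything else: re-read the vclock and retry
done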

Related

Getting output of curl command in a variable

I need to fetch data out of a cloud platform. The process to export data has 2 steps:
First, make a POST call with the username/password details. This returns XML output with a jobid in the response.
Fetch the jobid from the first response, concatenate it into a new URL, and then make a GET call (executing curl again) with this new URL; I then get the data in a JSON response.
What I did:
I am able to make the first API call and get the jobid. Next, I concatenated this jobid to build the new URL and saved the complete curl statement in a variable (let's call it cmd_second_api_call). This variable 'cmd_second_api_call' contains the complete curl statement that I need to execute.
So I did out=$($cmd_second_api_call), as I want to execute the second curl statement and store the output in a variable.
Problem:
When I execute out=$($cmd_second_api_call), the out variable is empty. I verified that $cmd_second_api_call prints the curl command perfectly, and if I execute that output at my command prompt, I see the result. What am I missing here? How do I get the curl output into a variable?
Thanks!
r=$(curl -k -u user:password static_url -d <data I need to pass>)
jobid=$(echo $r | sed -n 's:.*<jobid>\(.*\)<\/jobid>.*:\1:p')
second_url="abc.com/${jobid}/result/ --get -d output=json"
cmd_second_api_call="curl -u user:password -k ${second_url}"
out=$($cmd_second_api_call)
echo $out
Putting a command in a variable or using variables without quotes can be dangerous.
I suggested:
out=$(curl -u user:password -k ${second_url})
# or
out=$(curl -u user:password -k abc.com/${jobid}/result/ --get -d output=json)
# and
echo "$out"
Somehow this helped, together with a sleep 5 between the two curl calls. You wouldn't expect a lag at the remote server between returning a valid jobid and enabling the interface for that jobid; perhaps it's some kind of "defence" against unauthorized calls with random jobids.
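If the fixed sleep 5 bothers you, a polling loop is a less fragile variant (a sketch: the retry count and the assumption that an empty body means "not ready yet" are mine, not documented behaviour of that API):
for i in $(seq 1 10); do
  out=$(curl -s -u user:password -k "abc.com/${jobid}/result/" --get -d output=json)
  [ -n "$out" ] && break   # assume an empty body means the job is not ready yet
  sleep 2
done
echo "$out"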

How to convert cURL to a URL with headers

I have this command in cURL and it works
curl -X GET \
-H "X-Parse-Application-Id: APP_ID" \
-H "X-Parse-REST-API-Key: API_KEY" \
-G https://parseapi.back4app.com/classes/Books
I want to create a URL that will execute the same way in the browser.
The website that I'm working with is back4app.
There is no way to achieve the same thing with just a URL. This relies on HTTP Headers (both -H parameters) that can't be translated to something else easily. To set these headers in a web browser, you'd at least need to execute JavaScript.
There might be a way if the target API supports reading the same fields from the url in some way (technically, there's no reason not to do this). I haven't found something on that topic in their docs, though.
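If you want to experiment with that idea, the URL to paste into a browser would look something like this (untested; nothing in the back4app docs confirms these values are read from the query string):
https://parseapi.back4app.com/classes/Books?X-Parse-Application-Id=APP_ID&X-Parse-REST-API-Key=API_KEY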

containerized nginx log rotation with logrotate

Nginx doesn't have native log rotation, so an external tool, such as logrotate, is required. Nginx presents a challenge in that the logs have to be reopened post rotation. You can send a USR1 signal to it if the pid is available in /var/run.
But when running in a docker container, the pid file is missing in /var/run (and the pid actually belongs to the host, since it is technically a host process).
If you don't reopen the logs, nginx doesn't log anything at all, though it continues to function otherwise as web server, reverse proxy, etc.
You can get the process id from the Pid attribute using docker inspect and use kill -USR1 {pid} to have nginx reopen the logs.
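As a one-off from the host, that looks like this (assuming the container is named nginx, as in the file below):
pid=$(docker inspect -f '{{ .State.Pid }}' nginx)
kill -USR1 "$pid"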
Here's the /etc/logrotate.d/nginx file I created:
/var/log/nginx/access.log
{
size 2M
rotate 10
missingok
notifempty
compress
delaycompress
postrotate
docker inspect -f '{{ .State.Pid }}' nginx | xargs kill -USR1
endscript
}
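You can dry-run the config with logrotate's debug flag before relying on it, then force a rotation to check that the postrotate signal actually reaches nginx:
logrotate -d /etc/logrotate.d/nginx   # debug mode: show what would happen, rotate nothing
logrotate -f /etc/logrotate.d/nginx   # force a rotation to exercise postrotate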
If you want to run logrotate in a dedicated container (e.g. to rotate both nginx logs and Rails' file log) rather than on the host machine, here's how I did it. The trickiest part by far was, as above, getting the reload signals to nginx, Rails, etc. so that they would create and log to fresh logfiles post-rotation.
Summary:
- put all the logs on a single shared volume
- export the docker socket to the logrotate container (see the run command sketched after this list)
- build a logrotate image with logrotate, cron, curl, and jq
- build logrotate.conf with postrotate calls using the docker exec API as detailed below
- schedule logrotate using cron in the container
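For illustration, starting that container might look something like this (the image name, volume name, and mount point are hypothetical; adjust them to your setup):
docker run -d --name logrotate \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v app-logs:/var/log/shared \
  my-logrotate-image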
The hard part:
To get nginx (et cetera) to reload and thus connect to fresh log files, I sent exec commands to the other containers using Docker's API via the socket. It expects a POST with the command in JSON format, to which it responds with an exec instance ID. You then need to explicitly run that instance.
An example postrotate section from my logrotate.conf file:
postrotate
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"cmd": ["nginx", "-s", "reopen"]}' \
http:/v1.41/containers/hofg_nginx_1/exec \
| jq -r '.Id'`
curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"Detach": true}' \
http:/v1.41/exec/"$exec_id"/start
endscript
Commentary on the hard part:
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
This is the first of two calls to curl, saving the result into a variable to use in the second. Also don't forget to (insecurely) mount the socket into the container, '/var/run/docker.sock:/var/run/docker.sock'
-H "Content-Type: application/json" \
-d '{"cmd": ["nginx", "-s", "reopen"]}' \
Docker's API docs say the command can be a string or array of strings, but it only worked for me as an array of strings. I used the nginx command line tool, but something like 'kill -SIGUSR1 $(cat /var/run/nginx.pid)' would probably work too.
http:/v1.41/containers/hofg_nginx_1/exec \
I hard-coded the container name; if you're dealing with something more complicated, you're probably also using a fancier logging service.
| jq -r '.Id'`
The response is JSON-formatted; I used jq to extract the id (excuse me, 'Id') to use next.
curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"Detach": true}' \
The Detach: true is probably not necessary; it's just a placeholder for POST data that was handy while debugging.
http:/v1.41/exec/"$exec_id"/start
Making use of the exec instance ID returned by the first curl to actually run the command.
I'm sure it will evolve (say with error handling), but this should be a good starting point.
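As one possible starting point for that error handling, here is a sketch that aborts when the exec instance was not created (same hard-coded container name and API version as above; jq prints "null" when the field is missing, so that is treated as a failure too):
exec_id=$(curl -s -X POST --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"cmd": ["nginx", "-s", "reopen"]}' \
  http:/v1.41/containers/hofg_nginx_1/exec | jq -r '.Id')
if [ -z "$exec_id" ] || [ "$exec_id" = "null" ]; then
  echo "failed to create exec instance" >&2
  exit 1
fi
curl -s -X POST --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"Detach": true}' \
  http:/v1.41/exec/"$exec_id"/start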

CURL Command Line URL Parameters

I am trying to send a DELETE request with a url parameter using CURL. I am doing:
curl -H application/x-www-form-urlencoded -X DELETE http://localhost:5000/locations -d 'id=3'
However, the server is not seeing the parameter id=3. I tried using a GUI application, and when I pass the URL as http://localhost:5000/locations?id=3, it works. I would really rather use cURL than this GUI application. Can anyone please point out what I'm doing wrong?
The application/x-www-form-urlencoded Content-Type header is not required (well, it kind of depends), unless the request handler expects parameters to come from the form body. Try it out:
curl -X DELETE "http://localhost:5000/locations?id=3"
or
curl -X GET "http://localhost:5000/locations?id=3"
@Felipsmartins is correct.
It is worth mentioning that this is because you cannot really use the -d/--data option if it is not a POST request. It is still possible, though, if you use the -G option.
Which means you can do this:
curl -X DELETE -G 'http://localhost:5000/locations' -d 'id=3'
It's a bit silly here, but when you are on the command line and have a lot of parameters, it is a lot tidier.
I say this because cURL commands are usually quite long, so it is worth spreading them over more than one line, escaping the line breaks.
curl -X DELETE -G \
'http://localhost:5000/locations' \
-d id=3 \
-d name=Mario \
-d surname=Bros
This is obviously a lot more comfortable if you use zsh, because when you need to re-edit the previous command, zsh lets you go through it line by line. (Just saying.)

Send request to cURL with post data sourced from a file

I need to make a POST request via cURL from the command line. Data for this request is located in a file. I know that via PUT this could be done with the --upload-file option.
curl host:port/post-file -H "Content-Type: text/xml" --data "contents_of_file"
You're looking for the --data-binary argument:
curl -i -X POST host:port/post-file \
-H "Content-Type: text/xml" \
--data-binary "@path/to/file"
In the example above, -i prints out all the headers so that you can see what's going on, and -X POST makes it explicit that this is a POST. Both of these can be safely omitted without changing the behaviour on the wire. The path to the file needs to be preceded by an @ symbol, so curl knows to read from a file.
I need to make a POST request via Curl from the command line. Data for this request is located in a file...
All you need to do is have the --data argument start with an @:
curl -H "Content-Type: text/xml" --data "@path_of_file" host:port/post-file-path
For example, if you have the data in a file called stuff.xml then you would do something like:
curl -H "Content-Type: text/xml" --data "#stuff.xml" host:port/post-file-path
The stuff.xml filename can be replaced with a relative or full path to the file: #../xml/stuff.xml, #/var/tmp/stuff.xml, ...
If you are using form data to upload a file, where a parameter name must be specified, you can use:
curl -X POST -i -F "parametername=@filename" -F "additional_parm=param2" host:port/xxx
Most of the answers here are perfect, but when I landed here for my particular problem, I had to upload a binary file (an XLSX spreadsheet) using the POST method, and I saw one thing missing: usually it's not just the file you upload, you may have more form data elements, like a comment on the file or tags for the file, etc., as was my case. Hence, I would like to add it here as it was my use case, so that it could help others.
curl -X POST -F comment=mycomment -F file_type=XLSX -F file_data=@/your/path/to/file.XLSX http://yourhost.example.com/api/example_url
I was having a similar issue in passing the file as a param. Using -F allowed the file to be passed as form data, but the content type of the file was application/octet-stream. My endpoint was expecting text/csv.
You are able to set the MIME type of the file with the following syntax:
-F 'file=@path/to/file;type=<MIME_TYPE>'
So the full cURL command would look like this for a CSV file:
curl -X POST -F 'file=@path/to/file.csv;type=text/csv' https://test.com
There is good documentation on this and other options here: https://catonmat.net/cookbooks/curl/make-post-request#post-form-data
I had to use an HTTP connection, because over HTTPS there is a default file size limit.
https://techcommunity.microsoft.com/t5/IIS-Support-Blog/Solution-for-Request-Entity-Too-Large-error/ba-p/501134
curl -i -X 'POST' -F 'file=@/home/testeincremental.xlsx' 'http://example.com/upload.aspx?user=example&password=example123&type=XLSX'
