Emacs Lisp - How do you transfer binary files via HTTP?

Recently, I have been experimenting with using Elisp to communicate with a local CouchDB database via HTTP requests. Sending and receiving JSON works nicely, but I hit a bit of a road-block when I tried to upload an attachment to a document. In the CouchDB tutorial they use this curl command to upload the attachment:
curl -vX PUT http://127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af/artwork.jpg?rev=2-2739352689 \
--data-binary @artwork.jpg -H "Content-Type:image/jpg"
Does anyone know how I would go about using the built-in url package to achieve this? I know that it is possible to upload using multipart MIME requests. There is a section in the emacs-request manual about it. But I also read that CouchDB does not support multipart/form-data as part of its public API, even though Futon uses it under the hood.

I think you need to use url-retrieve and bind url-request-method to "PUT" around the call.
You will also need to bind url-request-data to the raw bytes of the file (read it literally, into a unibyte buffer, so Emacs does not decode the binary data), and url-request-extra-headers to set the Content-Type:
(let ((url-request-method "PUT")
      (url-request-extra-headers '(("Content-Type" . "image/jpg")))
      (url-request-data (with-temp-buffer
                          (set-buffer-multibyte nil)  ; raw bytes, not text
                          (insert-file-contents-literally "artwork.jpg")
                          (buffer-string))))
  (url-retrieve "http://127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af/artwork.jpg?rev=2-2739352689"
                (lambda (_status) (message "PUT finished"))))
Note that url-retrieve takes a callback as its second argument; it is called when the request completes.
Please see also:
Creating a POST with url elisp package in emacs: utf-8 problem
How does emacs url package handle authentication?
You might also find it enlightening to read the sources.

Related

Injecting an API key into Google Script's UrlFetchApp for an HTTP request

The Problem
Hi. I'm trying to use Google Script's "UrlFetchApp" function to pull data from an email marketing tool (Klaviyo). Klaviyo's documentation provides an API endpoint via an HTTP request.
Unfortunately, Klaviyo requires a key parameter in order to access data, but Google doesn't document if (or where) I can add a custom parameter (note: it should look something like "api_key=xxxxxxxxxxxxxxxx"). It's quite easy for me to pull data into my terminal using the api_key parameter, but ideally I'd have it pulled via Google Scripts and added to a Google Sheet appropriately. If I can get JSON into Google Scripts, I can work with the data to output it how I want.
KLAVIYO'S EXAMPLE REQUEST FOR TERMINAL
curl https://a.klaviyo.com/api/v1/metrics -G \
-d api_key=XX_XXXXXXXXXXXXXXX
THIS OUTPUTS CORRECT DATA IN JSON
Note: My ultimate end goal is to pipe the data into Google Data Studio on a recurring basis for reporting. I thought I'd get the data into a CSV for download/upload into Google Data Studio on a recurring basis. If I'm thinking about this the wrong way, let me know.
Regarding the -G flag, from the curl man pages (emphasis mine):
When used, this option will make all data specified with -d, --data, --data-binary or --data-urlencode to be used in an HTTP GET request instead of the POST request that otherwise would be used. The data will be appended to the URL with a '?' separator.
Given that the default HTTP method for UrlFetchApp.fetch() is "GET", your request will be very simple:
UrlFetchApp.fetch("https://a.klaviyo.com/api/v1/metrics?api_key=XX_XXXXXXXXXXXXXXX");
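Since Apps Script is just JavaScript, the same call can be wrapped in a small helper that builds the URL safely. This is a sketch: the helper name is mine, not part of Google's or Klaviyo's API, and encodeURIComponent simply guards against characters that would break the query string.

```javascript
// Build the Klaviyo metrics URL with the api_key appended as a query parameter.
function buildMetricsUrl(apiKey) {
  return "https://a.klaviyo.com/api/v1/metrics?api_key=" + encodeURIComponent(apiKey);
}

// In Apps Script you would then fetch and parse the JSON response, e.g.:
//   var data = JSON.parse(UrlFetchApp.fetch(buildMetricsUrl("XX_XXXXXXXXXXXXXXX")).getContentText());
```

From there, the parsed object can be written row by row into a sheet for Data Studio to pick up.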

wget won't download files I can access through browser

I am an amateur historian trying to access newspaper archives. The server where the scans are located "works" using an outdated tif viewer that doesn't seem to actually work at all anymore. I can access the files individually in chrome without logging in, but when I try to use wget or curl, I'm told that viewing the file is unauthorized, even when I use my login info, and even when using my cookies from chrome.
Here is an example of one of the files: https://ulib.aub.edu.lb/nahar/images2/7810W2/78101001.TIF
When I put this into chrome, it automatically downloads the file even though I cannot access the directory itself, but when I use wget, I get the following response: "401 unauthorized Username/Password Authentication Failed."
This is the basic wget command I'm using (if I can get it to work at all, then I'll input a list of the other files):
wget --no-check-certificate https://ulib.aub.edu.lb/nahar/images2/7810W2/78101001.TIF
I've tried variations with and without cookies, with a blank user, and with and without login credentials. As I'm sure you can tell, I'm new to this sort of thing but eager to learn.
From what I can see, authentication on your website is done with HTTP Basic auth. This kind of authentication does not use HTTP cookies; it sends credentials in the HTTP Authorization header. You can pass HTTP Basic credentials to wget with the following arguments.
wget --http-user=YourUsername --http-password=YourPassword https://ulib.aub.edu.lb/nahar/images2/7810W2/78101001.TIF

Access issue on Jira/Atlassian with R

I have an Atlassian/Jira account with projects listed on it. I would like to import the various issues in order to do some extra analysis. I found a way to connect to Atlassian/Jira and to import what I want in Python:
from jira import JIRA
import os
import sys
options = {'server': 'https://xxxxxxxx.atlassian.net'}
jira = JIRA(options, basic_auth=('admin_email', 'admin_password'))
issues_in_proj = jira.search_issues('project=project_ID')
It works very well, but I would like to do the same thing in R. Is it possible? I found the RJIRA package, but there are three problems for me:
It's still a dev version
I am unable to install it, as the DESCRIPTION file is "malformed"
It's based on a Jira server URL ("https://JIRAServer:port/rest/api/") and I have a xxxxx.atlassian.net URL
I also found out that there are curl queries:
curl -u username:password -X GET -H 'Content-Type: application/json'
"http://jiraServer/rest/api/2/search?jql=created%20>%3D%202015-11-18"
but again it is based on the "https://JIRAServer:port/rest/api/" form, and in addition I am using Windows.
Does anyone have an idea?
Thank you!
The "https://JIRAServer:port/rest/api/" form is the Jira REST API: https://docs.atlassian.com/jira/REST/latest/
As a REST API, it just responds to HTTP method calls and gives you data.
All Jira instances should expose the REST API; just point your browser at your Jira domain like this:
https://xxxxx.atlassian.net/rest/api/2/field
and you will see, for example, all the fields you have access to.
This means you can use PHP, Java, or a simple curl call from Linux to get your Jira data. I have not used RJIRA, but if you don't want to use it, you can still use R (which I have not used) and make an HTTP call to the REST API.
These two links on my blog might give you more insight:
http://javamemento.blogspot.no/2016/06/rest-api-calls-with-resttemplate.html
http://javamemento.blogspot.no/2016/05/jira-confluence-3.html
Good luck :)
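Whatever the language, the call described above is just an HTTP GET with Basic auth against /rest/api/2/search. A sketch in JavaScript (Node) shows the two pieces any HTTP client, including R's, would need to reproduce; the domain, credentials, and JQL below are placeholders from the question, not real values:

```javascript
// Build the Authorization header value for HTTP Basic auth.
function basicAuthHeader(user, password) {
  return "Basic " + Buffer.from(user + ":" + password).toString("base64");
}

// Build the issue-search URL, URL-encoding the JQL query.
function jiraSearchUrl(domain, jql) {
  return "https://" + domain + "/rest/api/2/search?jql=" + encodeURIComponent(jql);
}

// e.g. GET jiraSearchUrl("xxxxx.atlassian.net", "project=project_ID")
// with the header { Authorization: basicAuthHeader("admin_email", "admin_password") }
```

The same URL and header, passed to an R HTTP client, reproduce the curl query from the question.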

How to include or wrap the file or file content or file object when using Softlayer Object Storage REST API

I am using Softlayer Object Storage REST API.
I was told that the command line below, using curl, was successful.
$ curl -i -XPUT -H "X-Auth-Token: AUTH_tkabcd" --data-binary "Created for testing REST client" https://dal05.objectstorage.softlayer.net/v1/AUTH_abcd/container2/file10.txt
I wish to upload files using JavaScript, but I have no clue how to wrap the file in my request.
Could anyone provide an example? Thanks a lot.
Here are some ideas and examples that can help you:
CORS
How to consume a RESTful service using jQuery
JavaScript REST client library
https://ricardo147.files.wordpress.com/2012/08/image001.png
However, one customer was not able to make REST requests through JavaScript. I will investigate it and let you know of any news.
(Softlayer, OpenStack Swift) How to solve cross-domain origin with the object storage API?
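To wrap the file itself, the curl command from the question translates into a PUT request whose body is the file and whose headers carry the auth token. This sketch only builds the request descriptor (the helper name is mine); sending it is then a one-liner with fetch, and the CORS caveats linked above still apply:

```javascript
// Describe the PUT request that mirrors the curl command in the question.
function buildSwiftPut(url, authToken, body) {
  return {
    url: url,
    options: {
      method: "PUT",
      headers: { "X-Auth-Token": authToken },
      body: body, // a string, Blob, or File object
    },
  };
}

// In a browser, with a file input element, you would send it like this:
//   const req = buildSwiftPut(endpointUrl, token, fileInput.files[0]);
//   fetch(req.url, req.options);
```

Passing a File object from an `<input type="file">` as the body uploads the raw file bytes, which is what --data-binary does in the curl example.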

Better file uploading approach: HTTP post multipart or HTTP put?

Use case: upload a simple image file to a server, from which clients could later retrieve it.
Designate an FTP server for the job.
HTTP PUT: it can upload files directly to a server without a server-side component to handle the bytestream.
HTTP POST: the bytestream is handled by a server-side component.
I think to safely use PUT on a public website requires even more effort than using POST (and is less commonly done) due to potential security issues. See http://bitworking.org/news/PUT_SaferOrDangerous.
OTOH, I think there are plenty of resources for safely uploading files with POST and checking them in the server side script, and that this is the more common practice.
PUT is only appropriate when you know the URL you are putting to.
You could also do:
4) POST to obtain a URL to which you then PUT the file.
edit: how are you going to get the HTTP server to decide whether it is OK to accept a particular PUT request?
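The two-step flow in (4) can be sketched as follows. The endpoint shape and the { uploadUrl } reply field are assumptions for illustration, not a real API; fetchImpl is injected so the flow can be exercised without a server (in a browser you would pass window.fetch):

```javascript
// Option 4: POST to reserve an upload URL, then PUT the file bytes to it.
async function uploadViaPostThenPut(fetchImpl, reserveEndpoint, data) {
  const reply = await fetchImpl(reserveEndpoint, { method: "POST" });
  const { uploadUrl } = await reply.json(); // server tells us where to PUT
  return fetchImpl(uploadUrl, { method: "PUT", body: data });
}
```

This keeps PUT safe to accept, because the server itself chose and authorized the target URL in the POST step.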
What I usually do (via PHP) is HTTP POST, and then employ PHP's move_uploaded_file() to move the file to whatever destination I want.