When I load an HTML page, I have 5 strings written about a second apart:
<br>1</br>
...... 1 second ......
<br>2</br>
...... 1 second ......
<br>3</br>
...... 1 second ......
<br>4</br>
...... 1 second ......
<br>5</br>
...... 1 second ......
--- end request ---
Chromium and Firefox both load and display the first line, then each subsequent one, as it is received (Firefox requires a character encoding first, however). But Safari refuses to display any of the tags until the request is ended.
Chromium seems to just do it.
Firefox first needs to determine the character encoding: https://bugzilla.mozilla.org/show_bug.cgi?id=647203
But Safari seems to just refuse. Is a different response code or header needed? I tried explicitly setting the content type to text/html; that didn't work.
I have confirmed in Wireshark that the strings are being sent a second apart, i.e. they are not being cached and sent all at once.
I have also confirmed this occurs whether I go through localhost or use my public IP address.
I have tried Content-Length and keep-alive; the former just closes the request automatically, and the latter seems to have no effect.
Headers and Responses from Wireshark
Firefox (working)
GET /pe HTTP/1.1
Host: 127.0.01:8080
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:42.0) Gecko/20100101 Firefox/42.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
Cache-Control: max-age=0
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Tue, 10 Nov 2015 17:10:20 GMT
Connection: keep-alive
Content-Type: text/html; charset=utf-8
Server: TwistedWeb/13.2.0
1f
<html>
<title>PE</title>
<body>
2e
<br> This is the 1th time I've written. </br>
2e
<br> This is the 2th time I've written. </br>
2e
<br> This is the 3th time I've written. </br>
2e
<br> This is the 4th time I've written. </br>
2e
<br> This is the 5th time I've written. </br>
8
</body>
8
</html>
0
Safari (not working)
GET /pe HTTP/1.1
Host: 127.0.0.01:8080
Accept-Encoding: gzip, deflate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/601.1.56 (KHTML, like Gecko) Version/9.0 Safari/601.1.56
Accept-Language: en-us
DNT: 1
Connection: keep-alive
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Tue, 10 Nov 2015 17:12:55 GMT
Connection: keep-alive
Content-Type: text/html; charset=utf-8
Server: TwistedWeb/13.2.0
1f
<html>
<title>PE</title>
<body>
2e
<br> This is the 1th time I've written. </br>
2e
<br> This is the 2th time I've written. </br>
2e
<br> This is the 3th time I've written. </br>
2e
<br> This is the 4th time I've written. </br>
2e
<br> This is the 5th time I've written. </br>
8
</body>
8
</html>
0
Demo
import twisted
from twisted.python import log
import sys
log.startLogging(sys.stdout)
from twisted.web.server import Site, NOT_DONE_YET
from twisted.web.resource import Resource
from twisted.internet import reactor
class PersistantExample(Resource):
    '''Gives an example of a persistent request'''
    # does not render in Safari until stopping the browser / ending the connection
isLeaf = True
def render_GET(self, request):
log.msg("Ooooh a render request")
        # schedule the recurring write (this could be something else, like a deferred result)
reactor.callLater(1.1, self.keeps_going, request, 0) # 1.1 seconds just to show it can take floats
        # Firefox requires the charset; see https://bugzilla.mozilla.org/show_bug.cgi?id=647203
request.responseHeaders.addRawHeader("Content-Type",
"text/html; charset=utf-8") # set the MIME header (charset needed for firefox)
# this will cause the connection to keep only open for x length
# (only helpful if the length is known, also does NOT make it render right away)
# request.responseHeaders.addRawHeader("Content-Length", "150")
request.write("<html>\n<title>PE</title>\n<body>")
return NOT_DONE_YET
def keeps_going(self, request, i):
log.msg("I'm going again....")
i = i + 1
request.write("\n<br> This is the %sth time I've written. <br>" % i) ## probably not best to use <br> tag
if i < 5:
reactor.callLater(1.1, self.keeps_going, request, i) # 1.1 seconds just to show it can take floats
if i >= 5 and not request.finished:
log.msg("Done")
request.write("\n</body>")
request.write("\n</html>")
# safari will only render when finished
request.finish()
class Root(Resource):
isLeaf = False
def render_GET(self, request):
return "<html><body>Demo is here</body></html>"
class Site(Site):
pass
root = Root()
pe = PersistantExample()
site = Site(root)
root.putChild("", root)
root.putChild("index", root)
root.putChild("pe", pe)
# listen
if __name__ == "__main__":
reactor.listenTCP(8080, site)
reactor.run()
Are you trying this on Safari for Windows? If so, it is not officially supported any more. If you are trying to solve this on Safari for Mac, please create a demo or share the link if you already have one.
I can test and help you with this.
OK: in order for Safari to render the HTML as it is received, at least 1024 bytes have to be written before it starts rendering.
You can see a demo showing both a working and a non-working version below. A similar thing happens in Firefox (it needs 1024 bytes to determine the encoding), but in Firefox you can declare the encoding to bypass that. I'm not sure if there is a way to do the same in Safari.
+----------------+-----------------------------------+
| What you send  | How much you need                 |
+----------------+-----------------------------------+
| Nothing        | 1024 bytes                        |
| Meta Charset   | 512 bytes                         |
| Header Charset | 512 bytes (not including headers) |
+----------------+-----------------------------------+
import twisted
from twisted.python import log
import sys
log.startLogging(sys.stdout)
from twisted.web.server import Site, NOT_DONE_YET
from twisted.web.resource import Resource
from twisted.internet import reactor
import time
# max 133
# 133 * 8 = 1064
html_string_working = "<p>" # 3 bytes
html_string_working += "I'm the version that works! I show up right away!" # 49 bytes
html_string_working += "<!---" # 5 bytes
html_string_working += " " * (1024 - (3+49+5+3+4)) # filler
html_string_working += "-->" # 3 bytes
html_string_working += "</p>" # 4 bytes
print len(html_string_working)
html_string_non_working = "<p>" # 3 bytes
html_string_non_working += "I'm the version that does not work! I don't show up right away!" # 63 bytes
html_string_non_working += "</p>" # 4 bytes
html_string_non_working += "<!---" # 5 bytes
html_string_non_working += " " * (1023 - (3+63+5+3+4)) # filler but one byte short (notice 1023 instead of 1024)
html_string_non_working += "-->" # 3 bytes
print len(html_string_non_working)
# charset maybe? Firefox won't start parsing until 1024
# unless it has charset specified see https://bugzilla.mozilla.org/show_bug.cgi?id=647203
# "Your script does not appear to declare the character encoding. When the character
# encoding is not declared, Gecko won't start parsing until it has received 1024
# bytes of HTTP payload."
# charset works but requires 512 bytes
html_string_charset = "<meta charset=utf-8>" # 20 bytes
html_string_charset += "<p>" # 3 bytes
html_string_charset += "I'm the meta charset version. I may work!" # 41 bytes
html_string_charset += "</p>" # 4 bytes
html_string_charset += "<!---" # 5 bytes
html_string_charset += " " * (512 - (20+3+41+5+3+4)) # filler but one byte short (512 bytes; 511 doesn't work)
html_string_charset += "-->" # 3 bytes
# what about in header form
# charset works but requires 512 bytes not including the headers (so same as meta version)
html_string_charset_headers = "<p>" # 3 bytes
html_string_charset_headers += "I'm the header charset version. I may work!" # 43 bytes
html_string_charset_headers += "</p>" # 4 bytes
html_string_charset_headers += "<!---" # 5 bytes
html_string_charset_headers += " " * (512 - (3+43+5+3+4)) # filler but one byte short (512 bytes; 511 doesn't work)
html_string_charset_headers += "-->" # 3 bytes
print len(html_string_charset_headers)
later = "<p> I'm written later. Now it suddenly will work because bytes >= 1024." # 71 bytes
class PersistantExample(Resource):
    '''Gives an example of a persistent request'''
    # does not render in Safari until stopping the browser / ending the connection
isLeaf = True
def render_GET(self, request):
log.msg("Ooooh a render request")
        # schedule the recurring write (this could be something else, like a deferred result)
        # Firefox requires the charset; see https://bugzilla.mozilla.org/show_bug.cgi?id=647203
# render working version or not
        try:
            w = int(request.args["w"][0])
        except (KeyError, ValueError, IndexError):
            w = 1
if w == 1:
request.write(html_string_working)
elif w == 2:
request.write(html_string_non_working)
elif w == 3:
request.write(html_string_charset)
elif w == 4:
# 12 + 24 = 36 bytes but doesn't count towards 512 total
request.responseHeaders.addRawHeader("Content-Type", "text/html; charset=utf-8")
request.write(html_string_charset_headers)
reactor.callLater(2, self.run_later, request)
return NOT_DONE_YET
def run_later(self, request):
request.write(later)
request.finish()
class Root(Resource):
isLeaf = False
def render_GET(self, request):
return """<html><body>
<p>Working version here</p>
<p>Non working version here</p>
<p>Meta charset version here</p>
<p>Header charset version here</p>
</body></html>"""
class Site(Site):
pass
root = Root()
pe = PersistantExample()
site = Site(root)
root.putChild("", root)
root.putChild("index", root)
root.putChild("pe", pe)
# listen
if __name__ == "__main__":
print "Running"
reactor.listenTCP(8080, site)
reactor.run()
Use <br /> or <br> instead of </br>, which is not a valid tag.
Related
Let's say I have a REST API using RestRserve like this; is there a way to add an ETag to enable caching on cloud services?
writeLines("Hello World", "myfile.txt")
app <- Application$new(content_type = "application/json")
app$add_static("/", ".")
backend <- BackendRserve$new()
# backend$start(app, http_port = 8080)
req <- Request$new(path = "/myfile.txt", method = "GET")
app$process_request(req)$headers
#> $Server
#> [1] "RestRserve/0.4.1001"
As we see, there is no ETag.
Example using Go Fiber
Using Go Fiber, I would do it like this:
package main
import (
"flag"
"log"
"github.com/gofiber/fiber/v2"
"github.com/gofiber/fiber/v2/middleware/etag"
)
var (
port = flag.String("port", ":3000", "Port to listen on")
)
func main() {
app := fiber.New()
app.Use(etag.New())
app.Static("/", ".")
log.Fatal(app.Listen(*port))
}
and then querying localhost:3000/myfile.txt I would see headers like this
HTTP/1.1 200 OK
Date: Fri, 18 Mar 2022 13:13:44 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 12
Last-Modified: Fri, 21 Jan 2022 16:24:47 GMT
Etag: "12-823400506"
Connection: close
Hello World
Is there a way to add Etag headers to static files using RestRserve?
As of RestRserve version 1.1.1 (on CRAN), there is an ETagMiddleware class.
Use it like so:
# ... code like before
etag <- ETagMiddleware$new()
app$append_middleware(etag)
# ...
See also https://restrserve.org/reference/ETagMiddleware.html
I am trying to get access to the following URL's data in Julia. I can see what appears to be a JSON object when I go to "https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=Julia&site=stackoverflow". However, when I try printing the resulting r below, it gives me either text that doesn't render properly or, if I do JSON.print, a bunch of random numbers.
How can I use Julia to get the same things I see in the browser (preferably in text form).
r = HTTP.request("GET", "https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=Julia&site=stackoverflow"; verbose=3)
The response body is compressed with gzip, as you can see from the Content-Encoding header:
julia> using HTTP
julia> r = HTTP.request("GET", "https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=Julia&site=stackoverflow")
HTTP.Messages.Response:
"""
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json; charset=utf-8
Content-Encoding: gzip <----
[...]
so you have to decompress it with e.g. CodecZlib:
julia> using CodecZlib
julia> compressed = HTTP.payload(r);
julia> decompressed = transcode(GzipDecompressor, compressed);
From here you can either create a String (e.g. String(decompressed)) or parse it with e.g. the JSON package:
julia> using JSON
julia> json = JSON.parse(IOBuffer(decompressed))
Dict{String,Any} with 4 entries:
"items" => Any[Dict{String,Any}("link"=>"https://stackoverflow.com/questions/59010720/how-to-make-a-request-to-a-specific-url-in-julia","view_count"=>5,"creation_date"=…
"quota_max" => 300
"quota_remaining" => 297
"has_more" => true
(See also https://github.com/JuliaWeb/HTTP.jl/issues/256)
Python's BaseHTTPRequestHandler has an issue with forms sent through POST!
I have seen other people asking the same question (Why GET method is faster than POST?), but the time difference in my case is too large (1 second).
Python server:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import datetime
def get_ms_since_start(start=False):
global start_ms
cur_time = datetime.datetime.now()
# I made sure to stay within hour boundaries while making requests
ms = cur_time.minute*60000 + cur_time.second*1000 + int(cur_time.microsecond/1000)
if start:
start_ms = ms
return 0
else:
return ms - start_ms
class MyServer(BaseHTTPRequestHandler, object):
def do_GET(self):
print "Start get method at %d ms" % get_ms_since_start(True)
field_data = self.path
self.send_response(200)
self.end_headers()
self.wfile.write(str(field_data))
print "Sent response at %d ms" % get_ms_since_start()
return
def do_POST(self):
print "Start post method at %d ms" % get_ms_since_start(True)
length = int(self.headers.getheader('content-length'))
print "Length to read is %d at %d ms" % (length, get_ms_since_start())
field_data = self.rfile.read(length)
print "Reading rfile completed at %d ms" % get_ms_since_start()
self.send_response(200)
self.end_headers()
self.wfile.write(str(field_data))
print "Sent response at %d ms" % get_ms_since_start()
return
if __name__ == '__main__':
server = HTTPServer(('0.0.0.0', 8082), MyServer)
print 'Starting server, use <Ctrl-C> to stop'
server.serve_forever()
GET request with the Python server is very fast
time curl -i http://0.0.0.0:8082\?one\=1
prints
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.6
Date: Sun, 18 Sep 2016 07:13:47 GMT
/?one=1curl http://0.0.0.0:8082\?one\=1 0.00s user 0.00s system 45% cpu 0.012 total
and on the server side:
Start get method at 0 ms
127.0.0.1 - - [18/Sep/2016 00:26:30] "GET /?one=1 HTTP/1.1" 200 -
Sent response at 0 ms
Instantaneous!
POST request when sending a form to the Python server is very slow
time curl http://0.0.0.0:8082 -F one=1
prints
--------------------------2b10061ae9d79733
Content-Disposition: form-data; name="one"
1
--------------------------2b10061ae9d79733--
curl http://0.0.0.0:8082 -F one=1 0.00s user 0.00s system 0% cpu 1.015 total
and on the server side:
Start post method at 0 ms
Length to read is 139 at 0 ms
Reading rfile completed at 1002 ms
127.0.0.1 - - [18/Sep/2016 00:27:16] "POST / HTTP/1.1" 200 -
Sent response at 1002 ms
Specifically, self.rfile.read(length) is taking 1 second for very little form data.
POST request when sending data (not a form) to the Python server is very fast
time curl -i http://0.0.0.0:8082 -d one=1
prints
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.6
Date: Sun, 18 Sep 2016 09:09:25 GMT
one=1curl -i http://0.0.0.0:8082 -d one=1 0.00s user 0.00s system 32% cpu 0.022 total
and on the server side:
Start post method at 0
Length to read is 5 at 0
Reading rfile completed at 0
127.0.0.1 - - [18/Sep/2016 02:10:18] "POST / HTTP/1.1" 200 -
Sent response at 0
node.js server:
var http = require('http');
var qs = require('querystring');
http.createServer(function(req, res) {
if (req.method == 'POST') {
        var whole = ''
req.on('data', function(chunk) {
whole += chunk.toString()
})
req.on('end', function() {
console.log(whole)
res.writeHead(200, 'OK', {'Content-Type': 'text/html'})
res.end('Data received.')
})
}
}).listen(8082)
POST request when sending a form to the node.js server is very fast
time curl -i http://0.0.0.0:8082 -F one=1
prints:
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Content-Type: text/html
Date: Sun, 18 Sep 2016 10:31:38 GMT
Connection: keep-alive
Transfer-Encoding: chunked
Data received.curl -i http://0.0.0.0:8082 -F one=1 0.00s user 0.00s system 42% cpu 0.013 total
I think this is the answer to your problem: libcurl delays for 1 second before uploading data, command-line curl does not.
libcurl sends an Expect: 100-continue header and waits 1 second for a response before sending the form data (in the case of the -F command).
In the case of -d, it does not send the Expect header, for whatever reason.
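For what it's worth, below is a minimal server-side sketch (assuming the Python 2 BaseHTTPServer setup from the question) of one way around the delay: acknowledge the client's Expect: 100-continue before reading the body, so libcurl starts uploading immediately. On the client side, curl -H 'Expect:' suppresses the header altogether.
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
class MyServer(BaseHTTPRequestHandler):
    def do_POST(self):
        # If the client sent "Expect: 100-continue", acknowledge it right away
        # so libcurl sends the form data immediately instead of waiting out
        # its one-second timeout.
        if self.headers.getheader('expect', '').lower() == '100-continue':
            self.wfile.write("HTTP/1.1 100 Continue\r\n\r\n")
        length = int(self.headers.getheader('content-length'))
        field_data = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(field_data))
if __name__ == '__main__':
    HTTPServer(('0.0.0.0', 8082), MyServer).serve_forever()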
History:
We are delivering .mp4 video via HTTP partial-request streaming. It works fine except on iPad/iPhone, and we have followed all of the recommended procedures for encoding .mp4 files for iOS devices, so the issue is not the files themselves. The video files are stored outside of the root of the web server, so we have written code to open and stream the video upon request. Our testing has brought us to the point of comparing the headers of our "streamed" video (working, but not in iOS) and linking directly to the file (working in all instances, but not a possible solution as we need to store files outside of the web root). Here is a comparison of the working vs. non-working headers.
linking directly to the file (working for iOS):
request 1
Status Code 200 OK
Accept-Ranges:bytes
Content-Length:62086289
Content-Type:video/mp4
Date:Sun, 26 Jul 2015 08:11:15 GMT
ETag:"60be3570ae7ed01:0"
Last-Modified:Fri, 24 Apr 2015 16:48:06 GMT
Server:Microsoft-IIS/7.0
X-Powered-By:ASP.NET
request 2
Status Code:206 Partial Content
Accept-Ranges:bytes
Content-Length:62086289
Content-Range:bytes 0-62086288/62086289
Content-Type:video/mp4
Date:Sun, 26 Jul 2015 08:11:16 GMT
ETag:"60be3570ae7ed01:0"
Last-Modified:Fri, 24 Apr 2015 16:48:06 GMT
Server:Microsoft-IIS/7.0
X-Powered-By:ASP.NET
opening the file for streaming (not working for iOS)
request 1
Status Code 200 OK
Accept-Ranges:bytes
Cache-Control:private
Content-Length:62086289
Content-Range:bytes 0-62086289/62086289
Content-Type:video/mp4
Date:Sun, 26 Jul 2015 16:55:22 GMT
Server:Microsoft-IIS/7.0
X-AspNet-Version:4.0.30319
X-Powered-By:ASP.NET
request 2
Status Code 200 OK
Accept-Ranges:bytes
Cache-Control:private
Content-Length:62086289
Content-Range:bytes 0-62086289/62086289
Content-Type:video/mp4
Date:Sun, 26 Jul 2015 16:55:22 GMT
Server:Microsoft-IIS/7.0
X-AspNet-Version:4.0.30319
X-Powered-By:ASP.NET
Notice that the second request of the non-working version stays at status code 200 and does not change to 206. This is what we are focusing on as possibly being the issue, but we are at a loss. Below is the code that opens the file and streams it to the client. Note the way we are setting up the headers.
Dim iStream As System.IO.Stream
' Buffer to read 10K bytes in chunk:
Dim buffer(10000) As Byte
' Length of the file:
Dim length As Integer=0
' Total bytes to read:
Dim dataToRead As Long=0
Dim isPartialFile As Boolean = False
Dim totalDelivered As Long = 0
Dim totalFileSize As Long = 0
Dim filename As String
filename = fileID & file.Extension
iStream = New System.IO.FileStream(filePath, System.IO.FileMode.Open, IO.FileAccess.Read, IO.FileShare.Read)
' Total bytes to read:
dataToRead = iStream.Length
totalFileSize = iStream.Length
Response.ContentType = "video/mp4"
Response.AddHeader("Accept-Ranges", "bytes")
Response.AddHeader("Content-Range", "bytes 0-" + totalFileSize.ToString + "/" + totalFileSize.ToString)
Response.AddHeader("Content-Length", totalFileSize.ToString)
' Read the bytes.
While dataToRead > 0
' Verify that the client is connected.
If Response.IsClientConnected Then
' Read the data in buffer
length = iStream.Read(buffer, 0, 10000)
' Write the data to the current output stream.
Response.OutputStream.Write(buffer, 0, length)
' Flush the data to the HTML output.
Response.Flush()
ReDim buffer(10000) ' Clear the buffer
dataToRead = dataToRead - length
Else
isPartialFile = True
'record the data read in the database HERE (partial file)
'totalDelivered = totalFileSize - dataToRead
'prevent infinite loop if user disconnects
dataToRead = -1
End If
End While
iStream.Close()
Response.End()
How do we modify the above code to make it work with iOS devices, assuming that the issue is with the 206 response header?
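(Not an ASP.NET answer as such, but for reference, here is a minimal sketch, in Python for brevity, of the byte-range logic the code above is missing; parse_range and range_headers are illustrative helpers, not an existing API. iOS expects genuine 206 Partial Content replies: honor the incoming Range header, send back only the requested slice, and note that the end index in Content-Range is inclusive, so a 62086289-byte file spans bytes 0-62086288, not 0-62086289 as in the headers above.)
import re

def parse_range(range_header, total_size):
    # "bytes=0-1023" -> (0, 1023); "bytes=500-" means from byte 500 to the end.
    m = re.match(r"bytes=(\d+)-(\d*)", range_header or "")
    if not m:
        return None
    start = int(m.group(1))
    end = int(m.group(2)) if m.group(2) else total_size - 1
    return start, min(end, total_size - 1)

def range_headers(start, end, total_size):
    # A 206 reply describes exactly the slice it carries; the end index is inclusive.
    return [
        ("Status", "206 Partial Content"),
        ("Content-Range", "bytes %d-%d/%d" % (start, end, total_size)),
        ("Content-Length", str(end - start + 1)),
        ("Accept-Ranges", "bytes"),
    ]

# e.g. "Range: bytes=0-1" against a 62086289-byte file should produce
# Content-Range: bytes 0-1/62086289 and Content-Length: 2.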
I got simple file uploads to Google Drive working using httr. The problem is that every document is uploaded as "untitled", and I have to PATCH the metadata to set the title. The PATCH request occasionally fails.
According to the API, I ought to be able to do a multipart upload, allowing me to specify the title as part of the same POST request that uploads the file.
res<-POST(
"https://www.googleapis.com/upload/drive/v2/files?convert=true",
config(token=google_token),
body=list(y=upload_file(file))
)
id<-fromJSON(rawToChar(res$content))$id
if(is.null(id)) stop("Upload failed")
url<-paste(
"https://www.googleapis.com/drive/v2/files/",
id,
sep=""
)
title<-strsplit(basename(file), "\\.")[[1]][1]
Sys.sleep(2)
res<-PATCH(url,
config(token=google_token),
body=paste('{"title": "',title,'"}', sep = ""),
add_headers("Content-Type" = "application/json; charset=UTF-8")
)
stopifnot(res$status_code==200)
cat(id)
What I'd like to do is something like this:
res<-POST(
"https://www.googleapis.com/upload/drive/v2/files?uploadType=multipart&convert=true",
config(token=google_token),
body=list(y=upload_file(file),
#add_headers("Content-Disposition" = "text/json"),
json=toJSON(data.frame(title))
),
encode="multipart",
add_headers("Content-Type" = "multipart/related"),
verbose()
)
The output I get shows that the content encoding of the individual parts is wrong, and it results in a 400 error:
-> POST /upload/drive/v2/files?uploadType=multipart&convert=true HTTP/1.1
-> User-Agent: curl/7.19.7 Rcurl/1.96.0 httr/0.6.1
-> Host: www.googleapis.com
-> Accept-Encoding: gzip
-> Accept: application/json, text/xml, application/xml, */*
-> Authorization: Bearer ya29.ngGLGA9iiOrEFt0ycMkPw7CZq23e6Dgx3Syjt3SXwJaQuH4B6dkDdFXyIC6roij2se7Fs-Ue_A9lfw
-> Content-Length: 371
-> Expect: 100-continue
-> Content-Type: multipart/related; boundary=----------------------------938934c053c6
->
<- HTTP/1.1 100 Continue
>> ------------------------------938934c053c6
>> Content-Disposition: form-data; name="y"; filename="db_biggest_tables.csv"
>> Content-Type: application/octet-stream
>>
>> table rows DATA idx total_size idxfrac
>>
>> ------------------------------938934c053c6
>> Content-Disposition: form-data; name="json"
>>
>> {"title":"db_biggest_tables"}
>> ------------------------------938934c053c6--
<- HTTP/1.1 400 Bad Request
<- Vary: Origin
<- Vary: X-Origin
<- Content-Type: application/json; charset=UTF-8
<- Content-Length: 259
<- Date: Fri, 26 Jun 2015 18:50:38 GMT
<- Server: UploadServer
<- Alternate-Protocol: 443:quic,p=1
<-
Is there any way to set the content encoding properly for individual parts? The second part should be "text/json", for example.
I have been through the R documentation, Hadley's httr project pages on GitHub, this site, and some general googling. I can't find any examples of how to do a multipart upload and set the content encoding.
You should be able to do this using curl::form_file or its alias httr::upload_file. See also the curl vignette. Following the example from the Google API docs:
library(httr)
media <- tempfile()
png(media, width = 800, height = 600)
plot(cars)
dev.off()
metadata <- tempfile()
writeLines(jsonlite::toJSON(list(title = jsonlite::unbox("My file"))), metadata)
#post
req <- POST("https://httpbin.org/post",
body = list(
metadata = upload_file(metadata, type = "application/json; charset=UTF-8"),
media = upload_file(media, type = "image/png")
),
add_headers("Content-Type" = "multipart/related"),
verbose()
)
unlink(media)
unlink(metadata)
The only difference here is that curl will automatically add a Content-Disposition header for each file, which is required for multipart/form-data but not for multipart/related. The server will probably just ignore this redundant header in this case.
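For reference, the multipart/related body that Google's upload documentation describes looks roughly like this on the wire (metadata part first, then the media part; note the absence of Content-Disposition headers):
POST /upload/drive/v2/files?uploadType=multipart HTTP/1.1
Content-Type: multipart/related; boundary=foo_bar_baz

--foo_bar_baz
Content-Type: application/json; charset=UTF-8

{"title": "My file"}
--foo_bar_baz
Content-Type: image/png

[PNG data]
--foo_bar_baz--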
For now there is no way to accomplish this without writing the content to a file. Perhaps we could add something like that in a future version of httr/curl, although this has not come up before.