Why is the server sending an HTTP 200 response instead of the expected 206?

History:
We are delivering .mp4 video via HTTP partial-request (byte-range) streaming. It works fine everywhere except on iPad/iPhone, and we have followed all of the recommended procedures for encoding .mp4 files for iOS devices, so the issue is not the files themselves. The video files are stored outside of the web server's root, so we have written code to open and stream the video upon request. Our testing has brought us to the point of comparing the headers of our "streamed" video (working, but not on iOS) against a direct link to the file (working in all cases, but not a possible solution since we need to store files outside the web root). Here is a comparison of the working vs. non-working headers.
Link directly to the file (working for iOS):
request 1
Status Code 200 OK
Accept-Ranges:bytes
Content-Length:62086289
Content-Type:video/mp4
Date:Sun, 26 Jul 2015 08:11:15 GMT
ETag:"60be3570ae7ed01:0"
Last-Modified:Fri, 24 Apr 2015 16:48:06 GMT
Server:Microsoft-IIS/7.0
X-Powered-By:ASP.NET
request 2
Status Code:206 Partial Content
Accept-Ranges:bytes
Content-Length:62086289
Content-Range:bytes 0-62086288/62086289
Content-Type:video/mp4
Date:Sun, 26 Jul 2015 08:11:16 GMT
ETag:"60be3570ae7ed01:0"
Last-Modified:Fri, 24 Apr 2015 16:48:06 GMT
Server:Microsoft-IIS/7.0
X-Powered-By:ASP.NET
Opening the file for streaming (not working for iOS):
request 1
Status Code 200 OK
Accept-Ranges:bytes
Cache-Control:private
Content-Length:62086289
Content-Range:bytes 0-62086289/62086289
Content-Type:video/mp4
Date:Sun, 26 Jul 2015 16:55:22 GMT
Server:Microsoft-IIS/7.0
X-AspNet-Version:4.0.30319
X-Powered-By:ASP.NET
request 2
Status Code 200 OK
Accept-Ranges:bytes
Cache-Control:private
Content-Length:62086289
Content-Range:bytes 0-62086289/62086289
Content-Type:video/mp4
Date:Sun, 26 Jul 2015 16:55:22 GMT
Server:Microsoft-IIS/7.0
X-AspNet-Version:4.0.30319
X-Powered-By:ASP.NET
Notice that in the non-working version the second request stays at status code 200 and never changes to 206. This is what we are focusing on as the possible issue, but we are at a loss. Below is the code that opens the file and streams it to the client. Note the way we are setting up the headers.
Dim iStream As System.IO.Stream
' Buffer to read 10K bytes in chunk:
Dim buffer(10000) As Byte
' Length of the file:
Dim length As Integer = 0
' Total bytes to read:
Dim dataToRead As Long = 0
Dim isPartialFile As Boolean = False
Dim totalDelivered As Long = 0
Dim totalFileSize As Long = 0
Dim filename As String

filename = fileID & file.Extension
iStream = New System.IO.FileStream(filePath, System.IO.FileMode.Open, IO.FileAccess.Read, IO.FileShare.Read)

' Total bytes to read:
dataToRead = iStream.Length
totalFileSize = iStream.Length

Response.ContentType = "video/mp4"
Response.AddHeader("Accept-Ranges", "bytes")
Response.AddHeader("Content-Range", "bytes 0-" + totalFileSize.ToString + "/" + totalFileSize.ToString)
Response.AddHeader("Content-Length", totalFileSize.ToString)

' Read the bytes.
While dataToRead > 0
    ' Verify that the client is connected.
    If Response.IsClientConnected Then
        ' Read the data into the buffer.
        length = iStream.Read(buffer, 0, 10000)
        ' Write the data to the current output stream.
        Response.OutputStream.Write(buffer, 0, length)
        ' Flush the data to the HTML output.
        Response.Flush()
        ReDim buffer(10000) ' Clear the buffer
        dataToRead = dataToRead - length
    Else
        isPartialFile = True
        'record the data read in the database HERE (partial file)
        'totalDelivered = totalFileSize - dataToRead
        'prevent infinite loop if user disconnects
        dataToRead = -1
    End If
End While

iStream.Close()
Response.End()
How do we modify the above code to make it work with iOS devices, assuming that the issue is the missing 206 response?
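For reference, a 206 only happens when the handler itself inspects the client's Range request header and answers with the requested slice; IIS does this for static files (hence the working direct link), but custom streaming code must do it by hand. Below is a minimal sketch of the required arithmetic in Python (hypothetical helper names; the same logic ports directly to the VB.NET handler). Note in particular that the last-byte position in Content-Range is inclusive, so a full file of N bytes is "bytes 0-(N-1)/N" — the posted code's "bytes 0-62086289/62086289" runs one byte past the end.

```python
import re

def parse_range(range_header, file_size):
    """Parse a 'bytes=start-end' Range header into inclusive byte offsets.

    Returns (start, end) or None when the header is absent or unsupported.
    """
    if not range_header:
        return None
    m = re.match(r"bytes=(\d*)-(\d*)$", range_header.strip())
    if not m:
        return None
    start_s, end_s = m.groups()
    if start_s:
        start = int(start_s)
        end = int(end_s) if end_s else file_size - 1
    else:
        # Suffix form "bytes=-500": the final 500 bytes of the file.
        if not end_s:
            return None
        start = max(file_size - int(end_s), 0)
        end = file_size - 1
    if start > end or start >= file_size:
        return None
    return start, min(end, file_size - 1)

def content_range(start, end, file_size):
    # end is the *inclusive* last byte, i.e. size - 1, never size.
    return "bytes %d-%d/%d" % (start, end, file_size)
```

A handler would send status 206 with this Content-Range (and a Content-Length of end - start + 1) whenever a Range header is present, and a plain 200 without any Content-Range header otherwise.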

Related

Transfer Encoding chunked with okhttp only delivers full result

I am trying to get some insight into a chunked endpoint and therefore planned to print what the server sends me chunk by chunk. I failed to do so, so I wrote a test to see whether OkHttp/Retrofit work as I expect.
The following test should deliver some chunks to the console, but all I get is the full response.
I am a bit lost about what I am missing to make even the MockWebServer of OkHttp3 send me chunks.
I found this Retrofit issue entry, but the answer is a bit ambiguous to me: Chunked Transfer Encoding Response
class ChunkTest {
    @Rule
    @JvmField
    val rule = RxImmediateSchedulerRule() // custom rule to run Rx synchronously

    @Test
    fun `test Chunked Response`() {
        val mockWebServer = MockWebServer()
        mockWebServer.enqueue(getMockChunkedResponse())
        val retrofit = Retrofit.Builder()
                .baseUrl(mockWebServer.url("/"))
                .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                .client(OkHttpClient.Builder().build())
                .build()
        val chunkedApi = retrofit.create(ChunkedApi::class.java)
        chunkedApi.getChunked()
                .subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .subscribe({
                    System.out.println(it.string())
                }, {
                    System.out.println(it.message)
                })
        mockWebServer.shutdown()
    }

    private fun getMockChunkedResponse(): MockResponse {
        val mockResponse = MockResponse()
        mockResponse.setHeader("Transfer-Encoding", "chunked")
        mockResponse.setChunkedBody("THIS IS A CHUNKED RESPONSE!", 5)
        return mockResponse
    }
}

interface ChunkedApi {
    @Streaming
    @GET("/")
    fun getChunked(): Flowable<ResponseBody>
}
Test console output:
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[49293] starting to accept connections
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[49293] received request: GET / HTTP/1.1 and responded: HTTP/1.1 200 OK
THIS IS A CHUNKED RESPONSE!
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 acceptConnections
INFO: MockWebServer[49293] done accepting connections: Socket closed
I expected it to be more like this (body "cut" every 5 bytes):
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[49293] starting to accept connections
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[49293] received request: GET / HTTP/1.1 and responded: HTTP/1.1 200 OK
THIS
IS A
CHUNKE
D RESPO
NSE!
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 acceptConnections
INFO: MockWebServer[49293] done accepting connections: Socket closed
The OkHttp MockWebServer does chunk the data; however, the logging interceptor waits until the whole chunk buffer is full before displaying it.
From this nice summary about HTTP streaming:
The use of Transfer-Encoding: chunked is what allows streaming within a single request or response. This means that the data is transmitted in a chunked manner, and does not impact the representation of the content.
With that in mind, we are dealing with one request/response, which means we have to retrieve the chunks before the entire response has arrived, pushing each chunk into our own buffer, all inside an OkHttp network interceptor.
Here is an example of said NetworkInterceptor:
class ChunksInterceptor : Interceptor {
    val Utf8Charset = Charset.forName("UTF-8")

    override fun intercept(chain: Interceptor.Chain): Response {
        val originalResponse = chain.proceed(chain.request())
        val responseBody = originalResponse.body()
        val source = responseBody!!.source()
        val buffer = Buffer() // We create our own Buffer
        // exhausted() returns true if there are no more bytes in this source
        while (!source.exhausted()) {
            val readBytes = source.read(buffer, Long.MAX_VALUE) // We read the whole buffer
            val data = buffer.readString(Utf8Charset)
            println("Read: $readBytes bytes")
            println("Content: \n $data \n")
        }
        return originalResponse
    }
}
Then of course we register this Network Interceptor on the OkHttp client.
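To make the chunk mechanics concrete, here is roughly what setChunkedBody puts on the wire, sketched as a small Python encoder (illustrative only; OkHttp decodes this framing transparently, which is why the application-level response never shows chunk boundaries):

```python
def chunk_encode(body: bytes, chunk_size: int) -> bytes:
    """Encode body using the HTTP/1.1 chunked transfer coding."""
    out = bytearray()
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        out += b"%x\r\n" % len(chunk)   # hex length line
        out += chunk + b"\r\n"          # chunk payload
    out += b"0\r\n\r\n"                 # terminating zero-length chunk
    return bytes(out)
```

Each chunk is a hex size line, CRLF, the payload, CRLF; a zero-size chunk ends the body. Because this framing is a transfer detail, OkHttp strips it before the body reaches the caller, which matches the quoted summary above.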

Why is do_GET much faster than do_POST

Python's BaseHTTPRequestHandler has an issue with forms sent through POST!
I have seen other people asking the same question (Why is the GET method faster than POST?), but the time difference in my case is far too large (1 second).
Python server:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import datetime

def get_ms_since_start(start=False):
    global start_ms
    cur_time = datetime.datetime.now()
    # I made sure to stay within hour boundaries while making requests
    ms = cur_time.minute*60000 + cur_time.second*1000 + int(cur_time.microsecond/1000)
    if start:
        start_ms = ms
        return 0
    else:
        return ms - start_ms

class MyServer(BaseHTTPRequestHandler, object):
    def do_GET(self):
        print "Start get method at %d ms" % get_ms_since_start(True)
        field_data = self.path
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(field_data))
        print "Sent response at %d ms" % get_ms_since_start()
        return

    def do_POST(self):
        print "Start post method at %d ms" % get_ms_since_start(True)
        length = int(self.headers.getheader('content-length'))
        print "Length to read is %d at %d ms" % (length, get_ms_since_start())
        field_data = self.rfile.read(length)
        print "Reading rfile completed at %d ms" % get_ms_since_start()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(field_data))
        print "Sent response at %d ms" % get_ms_since_start()
        return

if __name__ == '__main__':
    server = HTTPServer(('0.0.0.0', 8082), MyServer)
    print 'Starting server, use <Ctrl-C> to stop'
    server.serve_forever()
A GET request to the Python server is very fast:
time curl -i http://0.0.0.0:8082\?one\=1
prints
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.6
Date: Sun, 18 Sep 2016 07:13:47 GMT
/?one=1curl http://0.0.0.0:8082\?one\=1 0.00s user 0.00s system 45% cpu 0.012 total
and on the server side:
Start get method at 0 ms
127.0.0.1 - - [18/Sep/2016 00:26:30] "GET /?one=1 HTTP/1.1" 200 -
Sent response at 0 ms
Instantaneous!
A POST request sending a form to the Python server is very slow:
time curl http://0.0.0.0:8082 -F one=1
prints
--------------------------2b10061ae9d79733
Content-Disposition: form-data; name="one"
1
--------------------------2b10061ae9d79733--
curl http://0.0.0.0:8082 -F one=1 0.00s user 0.00s system 0% cpu 1.015 total
and on the server side:
Start post method at 0 ms
Length to read is 139 at 0 ms
Reading rfile completed at 1002 ms
127.0.0.1 - - [18/Sep/2016 00:27:16] "POST / HTTP/1.1" 200 -
Sent response at 1002 ms
Specifically, self.rfile.read(length) is taking 1 second for very little form data
A POST request sending data (not a form) to the Python server is very fast:
time curl -i http://0.0.0.0:8082 -d one=1
prints
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.6
Date: Sun, 18 Sep 2016 09:09:25 GMT
one=1curl -i http://0.0.0.0:8082 -d one=1 0.00s user 0.00s system 32% cpu 0.022 total
and on the server side:
Start post method at 0
Length to read is 5 at 0
Reading rfile completed at 0
127.0.0.1 - - [18/Sep/2016 02:10:18] "POST / HTTP/1.1" 200 -
Sent response at 0
node.js server:
var http = require('http');
var qs = require('querystring');

http.createServer(function(req, res) {
    if (req.method == 'POST') {
        whole = ''
        req.on('data', function(chunk) {
            whole += chunk.toString()
        })
        req.on('end', function() {
            console.log(whole)
            res.writeHead(200, 'OK', {'Content-Type': 'text/html'})
            res.end('Data received.')
        })
    }
}).listen(8082)
A POST request sending a form to the Node.js server is very fast:
time curl -i http://0.0.0.0:8082 -F one=1
prints:
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Content-Type: text/html
Date: Sun, 18 Sep 2016 10:31:38 GMT
Connection: keep-alive
Transfer-Encoding: chunked
Data received.curl -i http://0.0.0.0:8082 -F one=1 0.00s user 0.00s system 42% cpu 0.013 total
I think this is the answer to your problem: libcurl delays for 1 second before uploading data; command-line curl does not.
libcurl sends the Expect: 100-Continue header and waits 1 second for a response before sending the form data (in the case of the -F command).
In the case of -d, it does not send the 100-Continue header, for whatever reason.
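A server-side way to avoid the wait (rather than changing the curl invocation) is to answer the handshake: the asker's server defaults to HTTP/1.0 (protocol_version in BaseHTTPRequestHandler), so it never replies to Expect: 100-continue and libcurl gives up only after its 1-second timeout. As a sketch, Python 3's http.server sends the interim 100 Continue automatically once the handler declares HTTP/1.1 (handler name and echo logic here are illustrative, not the asker's code):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    # Declaring HTTP/1.1 enables the Expect: 100-continue handshake;
    # the base class then calls handle_expect_100() and replies
    # "100 Continue" before do_POST runs, so curl -F does not stall.
    protocol_version = "HTTP/1.1"

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        # HTTP/1.1 keep-alive requires an accurate Content-Length
        # (or chunked encoding) on every response.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass
```

With this in place, a client that sends Expect: 100-continue receives the interim response immediately instead of timing out.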

Safari render HTML as received

When I load an HTML page, I have 5 strings written about a second apart.
<br>1</br>
...... 1 second ......
<br>2</br>
...... 1 second ......
<br>3</br>
...... 1 second ......
<br>4</br>
...... 1 second ......
<br>5</br>
...... 1 second ......
--- end request ---
Chromium and Firefox both load and display the first br and then each subsequent one as received (Firefox requires a content encoding, however). But Safari refuses to display any of the tags until the request has ended.
Chromium seems to just do it.
Firefox first needs to determine the content encoding: https://bugzilla.mozilla.org/show_bug.cgi?id=647203
But Safari seems to simply refuse. Is a different response code or header needed? I tried setting the content type explicitly to text/html; that didn't work.
I have confirmed in Wireshark that the strings are being sent a second apart, i.e. they are not being cached and sent all at once.
I have also confirmed that this occurs whether I go through localhost or use my public IP address.
I have tried Content-Length and keep-alive; the former just closes the request automatically, the latter seems to have no effect.
Headers and Responses from Wireshark
Firefox (working)
GET /pe HTTP/1.1
Host: 127.0.01:8080
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:42.0) Gecko/20100101 Firefox/42.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
Cache-Control: max-age=0
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Tue, 10 Nov 2015 17:10:20 GMT
Connection: keep-alive
Content-Type: text/html; charset=utf-8
Server: TwistedWeb/13.2.0
1f
<html>
<title>PE</title>
<body>
2e
<br> This is the 1th time I've written. </br>
2e
<br> This is the 2th time I've written. </br>
2e
<br> This is the 3th time I've written. </br>
2e
<br> This is the 4th time I've written. </br>
2e
<br> This is the 5th time I've written. </br>
8
</body>
8
</html>
0
Safari (not working)
GET /pe HTTP/1.1
Host: 127.0.0.01:8080
Accept-Encoding: gzip, deflate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/601.1.56 (KHTML, like Gecko) Version/9.0 Safari/601.1.56
Accept-Language: en-us
DNT: 1
Connection: keep-alive
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Tue, 10 Nov 2015 17:12:55 GMT
Connection: keep-alive
Content-Type: text/html; charset=utf-8
Server: TwistedWeb/13.2.0
1f
<html>
<title>PE</title>
<body>
2e
<br> This is the 1th time I've written. </br>
2e
<br> This is the 2th time I've written. </br>
2e
<br> This is the 3th time I've written. </br>
2e
<br> This is the 4th time I've written. </br>
2e
<br> This is the 5th time I've written. </br>
8
</body>
8
</html>
0
Demo
import twisted
from twisted.python import log
import sys
log.startLogging(sys.stdout)
from twisted.web.server import Site, NOT_DONE_YET
from twisted.web.resource import Resource
from twisted.internet import reactor

class PersistantExample(Resource):
    '''Gives an example of a persistant request'''
    # does not exist on Safari until stopping browser / ending connection
    isLeaf = True

    def render_GET(self, request):
        log.msg("Ooooh a render request")
        # schedule the reoccuring thing (this could be something else like a deferred result)
        reactor.callLater(1.1, self.keeps_going, request, 0) # 1.1 seconds just to show it can take floats
        # firefox requires the charset, see https://bugzilla.mozilla.org/show_bug.cgi?id=647203
        request.responseHeaders.addRawHeader("Content-Type",
            "text/html; charset=utf-8") # set the MIME header (charset needed for firefox)
        # this will cause the connection to stay open only for x length
        # (only helpful if the length is known; also does NOT make it render right away)
        # request.responseHeaders.addRawHeader("Content-Length", "150")
        request.write("<html>\n<title>PE</title>\n<body>")
        return NOT_DONE_YET

    def keeps_going(self, request, i):
        log.msg("I'm going again....")
        i = i + 1
        request.write("\n<br> This is the %sth time I've written. <br>" % i) # probably not best to use <br> tag
        if i < 5:
            reactor.callLater(1.1, self.keeps_going, request, i) # 1.1 seconds just to show it can take floats
        if i >= 5 and not request.finished:
            log.msg("Done")
            request.write("\n</body>")
            request.write("\n</html>")
            # safari will only render when finished
            request.finish()

class Root(Resource):
    isLeaf = False
    def render_GET(self, request):
        return "<html><body>Demo is here</body></html>"

class Site(Site):
    pass

root = Root()
pe = PersistantExample()
site = Site(root)
root.putChild("", root)
root.putChild("index", root)
root.putChild("pe", pe)

# listen
if __name__ == "__main__":
    reactor.listenTCP(8080, site)
    reactor.run()
Are you trying this on Safari for Windows? If so, it is not officially supported anymore. If you are trying to solve this on Safari for Mac, please create a demo or share the link if you already have one.
I can test and help you with this.
OK, in order for Safari to render the HTML, at least 1024 bytes have to be written before it starts to render as received.
You can see a demo showing both a working and a non-working version below. A similar thing happens in Firefox (it needs 1024 bytes to determine the encoding), but in Firefox you can declare the encoding to bypass it. Not sure whether there is a way to do that in Safari.
+----------------+----------------------------------+
| What you send  | How much you need                |
+----------------+----------------------------------+
| Nothing        | 1024 bytes                       |
| Meta charset   | 512 bytes                        |
| Header charset | 512 bytes (not including header) |
+----------------+----------------------------------+
import twisted
from twisted.python import log
import sys
log.startLogging(sys.stdout)
from twisted.web.server import Site, NOT_DONE_YET
from twisted.web.resource import Resource
from twisted.internet import reactor
import time

# max 133
# 133 * 8 = 1064
html_string_working = "<p>" # 3 bytes
html_string_working += "I'm the version that works! I show up right away!" # 49 bytes
html_string_working += "<!---" # 5 bytes
html_string_working += " " * (1024 - (3+49+5+3+4)) # filler
html_string_working += "-->" # 3 bytes
html_string_working += "</p>" # 4 bytes
print len(html_string_working)

html_string_non_working = "<p>" # 3 bytes
html_string_non_working += "I'm the version that does not work! I don't show up right away!" # 63 bytes
html_string_non_working += "</p>" # 4 bytes
html_string_non_working += "<!---" # 5 bytes
html_string_non_working += " " * (1023 - (3+63+5+3+4)) # filler but one byte short (notice 1023 instead of 1024)
html_string_non_working += "-->" # 3 bytes
print len(html_string_non_working)

# charset maybe? Firefox won't start parsing until 1024
# unless it has charset specified see https://bugzilla.mozilla.org/show_bug.cgi?id=647203
# "Your script does not appear to declare the character encoding. When the character
# encoding is not declared, Gecko won't start parsing until it has received 1024
# bytes of HTTP payload."

# charset works but requires 512 bytes
html_string_charset = "<meta charset=utf-8>" # 20 bytes
html_string_charset += "<p>" # 3 bytes
html_string_charset += "I'm the meta charset version. I may work!" # 41 bytes
html_string_charset += "</p>" # 4 bytes
html_string_charset += "<!---" # 5 bytes
html_string_charset += " " * (512 - (20+3+41+5+3+4)) # filler (512 bytes; 511 doesn't work)
html_string_charset += "-->" # 3 bytes

# what about in header form?
# charset works but requires 512 bytes not including the headers (so same as meta version)
html_string_charset_headers = "<p>" # 3 bytes
html_string_charset_headers += "I'm the header charset version. I may work!" # 43 bytes
html_string_charset_headers += "</p>" # 4 bytes
html_string_charset_headers += "<!---" # 5 bytes
html_string_charset_headers += " " * (512 - (3+43+5+3+4)) # filler (512 bytes; 511 doesn't work)
html_string_charset_headers += "-->" # 3 bytes
print len(html_string_charset_headers)

later = "<p> I'm written later. Now it suddenly will work because bytes >= 1024." # 71 bytes

class PersistantExample(Resource):
    '''Gives an example of a persistant request'''
    # does not exist on Safari until stopping browser / ending connection
    isLeaf = True

    def render_GET(self, request):
        log.msg("Ooooh a render request")
        # schedule the reoccuring thing (this could be something else like a deferred result)
        # firefox requires the charset, see https://bugzilla.mozilla.org/show_bug.cgi?id=647203
        # render working version or not
        try:
            w = int(request.args["w"][0])
        except:
            w = 1
        if w == 1:
            request.write(html_string_working)
        elif w == 2:
            request.write(html_string_non_working)
        elif w == 3:
            request.write(html_string_charset)
        elif w == 4:
            # 12 + 24 = 36 bytes but doesn't count towards 512 total
            request.responseHeaders.addRawHeader("Content-Type", "text/html; charset=utf-8")
            request.write(html_string_charset_headers)
        reactor.callLater(2, self.run_later, request)
        return NOT_DONE_YET

    def run_later(self, request):
        request.write(later)
        request.finish()

class Root(Resource):
    isLeaf = False
    def render_GET(self, request):
        return """<html><body>
<p>Working version here</p>
<p>Non working version here</p>
<p>Meta charset version here</p>
<p>Header charset version here</p>
</body></html>"""

class Site(Site):
    pass

root = Root()
pe = PersistantExample()
site = Site(root)
root.putChild("", root)
root.putChild("index", root)
root.putChild("pe", pe)

# listen
if __name__ == "__main__":
    print "Running"
    reactor.listenTCP(8080, site)
    reactor.run()
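The filler strings above can be factored into a small helper (hypothetical name, same byte arithmetic as the demo) that pads a page's first write up to Safari's render threshold with an HTML comment:

```python
def pad_to_threshold(html, threshold=1024):
    """Pad html with a trailing HTML comment so its length reaches threshold.

    Safari starts incremental rendering only once enough bytes have arrived
    (1024 without a declared charset, 512 with one, per the table above), so
    the first write of a progressive page must be at least that long.
    """
    overhead = len("<!--" + "-->")  # 7 bytes for an empty comment
    missing = threshold - len(html)
    if missing <= 0:
        return html  # already long enough
    filler = " " * max(missing - overhead, 0)
    return html + "<!--" + filler + "-->"
```

The comment is invisible to the user, so the padded page renders identically while still crossing the buffering threshold.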
Use <br /> or <br> instead of </br>, which is not a valid tag.

How to load CSV data into Excel 2013 using https with basic authentication?

To do so, I wrote the following VBA script and bound it to a button; it sets some parameters and loads the data. Afterwards it removes the connection to avoid a pile of stale connections.
Private Sub LoadData_Click()
    Dim ConnString As String
    Dim URL As String

    URL = "https://some-server/some/path/some-file.csv"
    ConnString = "TEXT;" + URL + "?partner_code=" + CStr(Range("B2")) + "&from=" + CStr(Range("B3")) + "&to=" + CStr(Range("B4"))

    Sheets("Import").Select
    Sheets("Import").UsedRange.Clear
    ActiveSheet.Range("A1").Value = "Data is being loaded. Hold on..."

    With ActiveSheet.QueryTables.Add(Connection:=ConnString, Destination:=ActiveSheet.Range("A1"))
        .Name = " "
        .FieldNames = True
        .RowNumbers = False
        .FillAdjacentFormulas = False
        .PreserveFormatting = True
        .RefreshOnFileOpen = False
        .RefreshStyle = xlOverwriteCells
        .SavePassword = True
        .SaveData = True
        .AdjustColumnWidth = True
        .RefreshPeriod = 0
        .TextFilePromptOnRefresh = False
        .TextFilePlatform = xlWindows
        .TextFileStartRow = 1
        .TextFileParseType = xlDelimited
        .TextFileTextQualifier = xlTextQualifierDoubleQuote
        .TextFileConsecutiveDelimiter = False
        .TextFileTabDelimiter = False
        .TextFileSemicolonDelimiter = True
        .TextFileCommaDelimiter = False
        .TextFileSpaceDelimiter = False
        .TextFileColumnDataTypes = Array(1)
        .TextFileDecimalSeparator = ","
        .TextFileTrailingMinusNumbers = True
        .Refresh BackgroundQuery:=False
    End With

    Do While ActiveWorkbook.Connections.Count > 0
        ActiveWorkbook.Connections.Item(ActiveWorkbook.Connections.Count).Delete
    Loop

    Sheets("Parameter").Select
End Sub
It works fine with Excel 2010, but not with 2013. When clicking my button I'm prompted for credentials and enter them. With Excel 2010 I see, on the server side in the Apache logs, some superfluous OPTIONS requests and a HEAD, but finally just one GET which pulls the data.
With Excel 2013 I see three OPTIONS requests, then a HEAD which is successful (200), and finally a GET which is not successful (401). To investigate further I activated the mod_dumpio module to see what's happening. And Apache is of course right to turn down Excel 2013:
HEAD /some/path/some-file.csv?partner_code=25010&from=01.01.2014&to=01.06.2014 HTTP/1.1\r\n
...
Authorization: Basic Y29udHJvbGxpbmc6Rm9vNWFoUGg=\r\n
which is answered with 200, and directly afterwards a GET appears:
GET /some/path/some-file.csv?partner_code=25010&from=01.01.2014&to=01.06.2014 HTTP/1.1\r\n
which lacks the Basic auth header and is therefore answered by:
HTTP/1.1 401 Unauthorized\r\nDate: Thu, 03 Jul 2014 18:18:59 GMT\r\nServer: Apache\r\nPragma: No-cache\r\nCache-Control: no-cache\r\nExpires: Thu, 01 Jan 1970 01:00:00 CET\r\nWWW-Authenticate: Basic realm="Import/Export"\r\nVary: Accept-Encoding\r\nContent-Encoding: gzip\r\nContent-Length: 378\r\nConnection: close\r\nContent-Type: text/html;charset=utf-8\r\n\r\n
I followed the instructions in http://support.microsoft.com/kb/2123563/en-us step by step, but it did not help. Also, the HEAD actually works and uses Basic auth.
How to load CSV data into Excel 2013 using https with basic authentication?
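For reference while debugging, the header Excel 2013 drops on the GET is easy to reconstruct and compare against the successful HEAD: HTTP Basic authentication is just the literal "Basic " plus base64 of "user:password". A Python illustration (hypothetical credentials):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the value of an HTTP Basic 'Authorization' header."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token
```

This is the shape of the "Authorization: Basic ..." line visible in the HEAD request above; decoding the token from a captured request confirms which credentials (if any) a client actually sent.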

How to parse HTTP headers in Go

I have HTTP response headers shipped in logs from elsewhere. In my log file I have things like:
Date: Fri, 21 Mar 2014 06:45:15 GMT\r\nContent-Encoding: gzip\r\nLast-Modified: Tue, 20 Aug 2013 15:45:41 GMT\r\nServer: nginx/0.8.54\r\nAge: 18884\r\nVary: Accept-Encoding\r\nContent-Type: text/html\r\nCache-Control: max-age=864000, public\r\nX-UA-Compatible: IE=Edge,chrome=1\r\nTiming-Allow-Origin: *\r\nContent-Length: 14888\r\nExpires: Mon, 31 Mar 2014 06:45:15 GMT\r\n
Given the above as a string, how do I parse it into a Header object as described in net/http? One way would be to split the string myself and map the keys and values, but I am looking to avoid doing that by hand and would rather use the standard (or a well-maintained third-party) library. Any pointers?
The built-in parser is in textproto. You can either use it directly, or add a fake HTTP request line and use ReadRequest from the http package. Either way you need to wrap your data in a bufio.Reader; here I'm assuming we're starting with a string.
With textproto:
logEntry := "Content-Encoding: gzip\r\nLast-Modified: Tue, 20 Aug 2013 15:45:41 GMT\r\nServer: nginx/0.8.54\r\nAge: 18884\r\nVary: Accept-Encoding\r\nContent-Type: text/html\r\nCache-Control: max-age=864000, public\r\nX-UA-Compatible: IE=Edge,chrome=1\r\nTiming-Allow-Origin: *\r\nContent-Length: 14888\r\nExpires: Mon, 31 Mar 2014 06:45:15 GMT\r\n"

// don't forget to make certain the headers end with a second "\r\n"
reader := bufio.NewReader(strings.NewReader(logEntry + "\r\n"))
tp := textproto.NewReader(reader)

mimeHeader, err := tp.ReadMIMEHeader()
if err != nil {
    log.Fatal(err)
}

// http.Header and textproto.MIMEHeader are both just a map[string][]string
httpHeader := http.Header(mimeHeader)
log.Println(httpHeader)
and with http.ReadRequest:
logEntry := "Content-Encoding: gzip\r\nLast-Modified: Tue, 20 Aug 2013 15:45:41 GMT\r\nServer: nginx/0.8.54\r\nAge: 18884\r\nVary: Accept-Encoding\r\nContent-Type: text/html\r\nCache-Control: max-age=864000, public\r\nX-UA-Compatible: IE=Edge,chrome=1\r\nTiming-Allow-Origin: *\r\nContent-Length: 14888\r\nExpires: Mon, 31 Mar 2014 06:45:15 GMT\r\n"

// we need to prepend a fake HTTP request line here to make a valid request.
reader := bufio.NewReader(strings.NewReader("GET / HTTP/1.1\r\n" + logEntry + "\r\n"))

logReq, err := http.ReadRequest(reader)
if err != nil {
    log.Fatal(err)
}

log.Println(logReq.Header)
https://golang.org/pkg/net/textproto
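For cross-checking the Go output, the same idea — treating the log line as a MIME-style header block — also works in other standard libraries; a Python sketch, under the assumption that the literal \r\n escapes in the log have been converted back into real line breaks:

```python
from email.parser import Parser

def parse_header_block(raw: str) -> dict:
    """Parse a CRLF-separated header block into a name -> value dict."""
    # headersonly=True stops the parser from looking for a message body,
    # which matches the "headers only" shape of the log entries.
    msg = Parser().parsestr(raw, headersonly=True)
    return dict(msg.items())
```

Like textproto.MIMEHeader, this is plain RFC 822-style field parsing, so both tools should agree on the same input (note this dict keeps only the last value of a repeated header, which is fine for quick log inspection).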