I'm using a custom ashx handler to handle a file upload. When run locally, the file uploads fine.
When I use the same setup on the web server, I get an "Index out of range" error.
In Firebug I see the binary contents of the file in the POST data, and the file name is also passed in the query string.
Has anyone seen this before?
I'm sure it's something minor, but it's driving me up the wall.
Update: Made some progress. I found out that I'm getting two different errors: one from FF/Chrome and one from IE. I'm focusing on FF for now, since Firebug makes debugging easier. Now I get the error "Could not find a part of the path 'C:\inetpub\wwwroot\'".
Update 2: Got this working in FF/Chrome. It turns out IE and FF/Chrome post the data differently.
Update 3: Here is the output of the network profiler in the IE dev tools:
Request header:
Key Value
Request POST /Secured/UploadHandler.ashx? HTTP/1.1
Accept text/html, application/xhtml+xml, */*
Referer http://cms.webstreet.co.il/Secured/fileUpload.aspx
Accept-Language he-IL
User-Agent Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
Content-Type multipart/form-data; boundary=---------------------------7db13b13d1b12
Accept-Encoding gzip, deflate
Host cms.webstreet.co.il
Content-Length 262854
Connection Keep-Alive
Cache-Control no-cache
Request body:
-----------------------------7db13b13d1b12
Content-Disposition: form-data; name="qqfile"; filename="P-Art_Page_Digital.jpg"
Content-Type: image/jpeg
<Binary File Data Not Shown>
-----------------------------7db13b13d1b12--
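For reference, a handler that accepts both post styles might look roughly like this (a minimal sketch, not the actual code; the qqfile field name matches the capture above, and the ~/uploads path is hypothetical):

using System.IO;
using System.Web;

public class UploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical target folder.
        string uploadDir = context.Server.MapPath("~/uploads");

        if (context.Request.Files.Count > 0)
        {
            // IE posts a regular multipart/form-data body, so the file
            // appears in Request.Files.
            HttpPostedFile file = context.Request.Files[0];
            file.SaveAs(Path.Combine(uploadDir, Path.GetFileName(file.FileName)));
        }
        else
        {
            // FF/Chrome (with this uploader) stream the raw bytes as the
            // request body and pass the name in the query string, so
            // Request.Files is empty; indexing it blindly is what produces
            // the "Index out of range" error above.
            string fileName = context.Request.QueryString["qqfile"];
            using (FileStream target = File.Create(Path.Combine(uploadDir, Path.GetFileName(fileName))))
            {
                context.Request.InputStream.CopyTo(target); // .NET 4+
            }
        }
    }

    public bool IsReusable { get { return false; } }
}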
See the (big) list of comments and replies attached to the original question. Not sure why it works now, but Elad seems to have fixed his problem.
You have to specify the name attribute:
<input id="File1" name="file1" type="file" />
I am pretty sure file uploads CANNOT be done via Ajax; you need to use a regular form post.
Also make sure you have the enctype attribute set correctly on your form tag.
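Putting both together, a minimal form might look like this (the handler URL is taken from the capture above):

<form action="/Secured/UploadHandler.ashx" method="post" enctype="multipart/form-data">
    <input id="File1" name="file1" type="file" />
</form>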
Looking at the network panel in Google Chrome's developer tools, I can read the HTTP request and response messages for each file in a web page and, in particular, the start line and the headers with all their fields.
I know (and I hope this is right) that the start line of each HTTP message has a specific, rigorous structure (different for request and response messages, of course) and that no element of the start line can be omitted.
Unlike the start line, the header of an HTTP message carries additional information, so, I guess, the header fields are optional or, at least, not as strictly required as the fields in the start line.
Considering all this, I'm wondering: who sets the header fields in an HTTP message? In other words, how are the header fields of an HTTP message determined?
For example, I can see that the actual HTTP request message for a web page is this:
GET / HTTP/1.1
Host: www.corriere.it
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.130 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: it-IT,it;q=0.8,en-US;q=0.6,en;q=0.4,de;q=0.2
Cookie: rccsLocalPref=milano%7CMilano%7C015146; rcsLocalPref=milano%7CMilano; _chartbeat2=DVgclLD1BW8iBl8sAi.1422913713367.1430683372200.1111111111111111; rlId=8725ab22-cbfc-45f7-a737-7c788ad27371; __ric=5334%3ASat%20Jun%2006%202015%2014%3A13%3A31%20GMT+0200%20%28ora%20legale%20Europa%20occidentale%29%7C; optimizelyEndUserId=oeu1433680191192r0.8780217287130654; optimizelySegments=%7B%222207780387%22%3A%22gc%22%2C%222230660652%22%3A%22false%22%2C%222231370123%22%3A%22referral%22%7D; optimizelyBuckets=%7B%7D; __gads=ID=bbe86fc4200ddae2:T=1434976116:S=ALNI_MZnWxlEim1DkFzJn-vDIvTxMXSJ0g; fbm_203568503078644=base_domain=.corriere.it; apw_browser=3671792671815076067.; channel=Direct; apw_cache=1438466400.TgwTeVxF.1437740670.0.0.0...EgjHfb6VZ2K4uRK4LT619Zau06UsXnMdig-EXKOVhvw; ReadSpeakerSettings=enlarge=enlargeoff; _ga=GA1.2.1780902850.1422986273; __utma=226919106.1780902850.1422986273.1439110897.1439114180.19; __utmc=226919106; __utmz=226919106.1439114180.19.18.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided); s_cm_COR=Googlewww.google.it; gvsC=New; rcsddfglr=1441375682.3.2.m0i10Mw-|z1h7I0wH.3671792671815076067..J3ouwyCkNXBCyau35GWCru0I1mfcA3hRLNURnDWREPs; cpmt_xa=5334,5364; utag_main=v_id:014ed4175b8e000f4d2bb480bdd10606d001706500bd0$_sn:74$_ss:1$_st:1439133960323$_pn:1%3Bexp-session$ses_id:1439132160323%3Bexp-session; testcookie=true; s_cc=true; s_nr=1439132160762-Repeat; SC_LNK_CR=%5B%5BB%5D%5D; s_sq=%5B%5BB%5D%5D; dtLatC=116p80.5p169.5p91.5p76.5p130.5p74p246.5p100p74.5p122.5; dtCookie=E4365758C13B82EE9C1C69A59B6F077E|Corriere|1|_default|1; dtPC=-; NSC_Wjq_Dpssjfsf_Dbdif=ffffffff091a1f8d45525d5f4f58455e445a4a423660; hz_amChecked=1
How are these header fields chosen? Who or what chooses them? (The browser? Not me, of course...)
P.S.: I hope my question is clear; please forgive my bad English.
All websites are served by HTTP servers; the response headers are set by the HTTP server hosting the page. They are used to control how pages are shown, cached, and encoded.
Web browsers set the request headers when requesting pages from servers. This mutual communication is governed by the HTTP protocol.
Here is a list of all the possible header fields for a request message; the question is, why does the browser choose only some of them?
The browser doesn't include all possible request headers in every request because either:
They aren't applicable to the current request or
The default value is the desired value
For instance:
Accept tells the server that only certain data formats are acceptable in the response. If any kind of data is acceptable, then it can be omitted as the default is "everything".
Content-Length describes the length of the body of the request. A GET request doesn't have a body, so there is nothing to describe the length of.
Cookie contains a cookie set by the server (or JavaScript) on a previous request. If a cookie hasn't been set, then there isn't one to send back to the server.
and so on.
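To make this concrete, here is a minimal C# sketch (illustrative only, since the question is about browsers) showing that it is the client that decides which request headers to send:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class HeaderDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Get, "http://www.corriere.it/");
            // The client chooses these; the server only reads them.
            request.Headers.Add("Accept", "text/html");       // only HTML is acceptable
            request.Headers.Add("Accept-Language", "it-IT");  // preferred language
            // No Cookie header: nothing has been set yet, so there is
            // nothing to send back (matching the bullet point above).
            HttpResponseMessage response = await client.SendAsync(request);
            Console.WriteLine((int)response.StatusCode);
        }
    }
}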
So I set up an ejabberd XMPP server and use nginx as a proxy on an EC2 instance. The Strophe.js echobot example can connect from my Chrome browser. Here are the Request Headers:
Request URL:http://foo.eu-west-1.compute.amazonaws.com/http-bind
Request Method:POST
Status Code:200 OK
Request Headers
Accept:*/*
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4
Connection:keep-alive
Content-Length:115
Content-Type:application/xml
Host:foo.compute.amazonaws.com
Origin:foo.eu-west-1.compute.amazonaws.com
Referer:foo.eu-west-1.compute.amazonaws.com/examples/echobot.html
User-Agent:Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11
Request Payload
<body rid='2692151172' xmlns='http://jabber.org/protocol/httpbind' sid='15f6dc6cbc7a7d69b5db531c2ebf7828e094f043'/>
Response Headers
Access-Control-Allow-Headers:Content-Type
Access-Control-Allow-Origin:*
Connection:keep-alive
Content-Length:286
Content-Type:text/xml; charset=utf-8
Date:Fri, 04 Jan 2013 11:14:14 GMT
Server:nginx/1.2.4
I ported the echobot example to work in Spotify as an app. However, Strophe cannot connect. Here is the Web Inspector network log:
Request URL:http://foo.eu-west-1.compute.amazonaws.com/http-bind
Request Headers
POST http://foo.eu-west-1.compute.amazonaws.com/http-bind HTTP/1.1
Origin: sp://chatify
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.15 Safari/535.11
Content-Type: application/xml
Request Payload
<body rid='790399813' xmlns='http://jabber.org/protocol/httpbind' to='ec2-54-246-45-111.eu-west-1.compute.amazonaws.com' xml:lang='en' wait='60' hold='1' content='text/xml; charset=utf-8' ver='1.6' xmpp:version='1.0' xmlns:xmpp='urn:xmpp:xbosh'/>
Now, the RequiredPermissons in the manifest.json look as follows:
"RequiredPermissons": [
"http://foo.eu-west-1.compute.amazonaws.com",
"foo.eu-west-1.compute.amazonaws.com",
"http://foo.eu-west-1.compute.amazonaws.com/http-bind",
"foo.eu-west-1.compute.amazonaws.com/http-bind"
]
I can load sources from foo.eu-west-1.compute.amazonaws.com, so I think the permissions work.
The Spotify app uses a different origin, "sp://chatify". I read that this can cause problems and that Access-Control-Allow-Origin: * should be added to the response headers. I did so in the nginx config, only to find out that it had been sent all along; you can see it in the Response Headers of the first request.
Strophe.js itself, with logging turned on in the example, says:
error 0 happened
So any suggestions?
The manifest seems to be right.
Access-Control-Allow-Origin: * seems right.
It works in a browser but not in Spotify.
Thanks for your help!
Update:
If I open echobot.html locally, it still works; the Origin is then null.
Okay, I got it to work. And the solution is a classic. However, my debugging might help others.
At first I looked at the Web Inspector. It showed that all requests were canceled. Now, the inspector is not always right, as mentioned in many places; this time it was, though.
So Strophe.js could not establish a connection. To see whether any custom Spotify app could establish one, I tried the Google Maps example from the tutorial app. It did not work with the native Linux version or under Windows with Wine. It did, however, work on Windows itself.
My next step was to see if my app could load any resources from the net at all, so I simply added an iframe pointing at my page. It did not load.
I took a close look at the manifests of the tutorial app and my app and could not find any difference. So I just copied "RequiredPermissions" over, and saw that I had a typo. Once that was fixed, the iframe and eventually Strophe.js worked.
It is funny: I looked for typos a few days ago and could not find any.
To sum up:
Make sure your server sets Access-Control-Allow-Origin: *
Set the permissions, e.g.:
"RequiredPermissions": [
"http://ec2-foo.eu-west-1.compute.amazonaws.com",
"ec2-foo.eu-west-1.compute.amazonaws.com",
"http://ec2-foo.eu-west-1.compute.amazonaws.com/http-bind",
"ec2-foo.eu-west-1.compute.amazonaws.com/http-bind"
]
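For the CORS half, a minimal nginx location block along these lines produces the two response headers visible in the working browser capture above (a sketch; the upstream address and port are assumptions about the ejabberd setup):

location /http-bind {
    # Forward BOSH requests to ejabberd (5280 is its usual BOSH port).
    proxy_pass http://127.0.0.1:5280/http-bind;
    # The two headers seen in the working capture above.
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Headers Content-Type;
}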
I'm troubleshooting an integration in which an external service posts multipart/form-data to a controller in MVC3.
On the production server I've captured the erroneous request to a file using HttpRequest.SaveAs.
Is there any tool I can use to "replay" the request on my localhost so I can debug with Visual Studio?
(I've been trying with Fiddler, but I can't get it working right. If I dump a local request from a simple form POST, my controller receives the files correctly; if I dump the same request, copy-paste it into Fiddler as raw, and send it, the files are missing, so something is off.)
Since there's a built-in function to dump the request, I'm thinking there might be an official way to resend it as well. Is there a way to achieve this?
I have used the ncat command-line tool to replay requests captured with the SaveAs method.
Command looks like this:
ncat localhost 80 < CapFileName
You can find it in the Nmap suite.
See my blog for more information.
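If you prefer to stay in .NET, the same raw replay can be sketched with a TcpClient (an illustration of the technique, not the blog's code; CapFileName is the dump produced by SaveAs):

using System;
using System.IO;
using System.Net.Sockets;

class ReplayRequest
{
    static void Main()
    {
        using (var client = new TcpClient("localhost", 80))
        using (NetworkStream stream = client.GetStream())
        {
            // SaveAs dumps the complete request (headers plus multipart
            // body), so it can be streamed to the server verbatim.
            using (FileStream dump = File.OpenRead("CapFileName"))
            {
                dump.CopyTo(stream);
            }

            // Print the first chunk of the response; reading to the end
            // could hang on a keep-alive connection.
            var buffer = new byte[8192];
            int read = stream.Read(buffer, 0, buffer.Length);
            Console.WriteLine(System.Text.Encoding.ASCII.GetString(buffer, 0, read));
        }
    }
}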
I got it working in Fiddler by doing exactly this in the Composer:
Open the dump file in Notepad.
Choose Parsed.
Enter only the Content-Type header (and let Fiddler add the others, even if they were the same).
Paste the body of the request from Notepad into the request body.
POST: http://localhost/Controller/Action
Request headers:
Content-Type: multipart/form-data; boundary=fJP-UWKXo6xvqX7niGR0StXXFQwdKhHc9quF
Request body:
--fJP-UWKXo6xvqX7niGR0StXXFQwdKhHc9quF
Content-Disposition: form-data; name="mmsimage"; filename="IMG_0959.jpg"
Content-Type: image/jpeg; name=IMG_0959.jpg; charset=ISO-8859-1
Content-Transfer-Encoding: binary
<the encoded file goes here as gibberish>
--fJP-UWKXo6xvqX7niGR0StXXFQwdKhHc9quF
Content-Disposition: form-data; name="somefield"
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
value of somefield
--fJP-UWKXo6xvqX7niGR0StXXFQwdKhHc9quF--
I followed this tutorial, http://symfony.com/doc/current/cookbook/controller/error_pages.html, and have error.html.twig and error.json.twig in app/Resources/TwigBundle/views/Exception/.
Even though the Content-Type of the request is set to application/json, all errors default to the HTML version of the error page.
The format of the route is also defined:
http://symfonyinstall/api/v1/users.json
Request Header:
Accept: application/json
Content-Type: application/json
Connection: keep-alive
Origin: chrome-extension: //rest-console-id
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.162 Safari/535.19
Response Header:
Status Code: 404
date: Sun, 29 Apr 2012 06:54:35 GMT
Content-Encoding: gzip
X-Powered-By: PHP/5.3.10
Transfer-Encoding: chunked
Connection: keep-alive
Server: nginx
Content-Type: text/html; charset=UTF-8
cache-control: no-cache
I'm out of ideas... and I really need a JSON version of the errors for my API to work...
I just hit the same problem, and although your question is quite old and Symfony has bumped a few versions since then, the problem is still relevant; even if you don't need the answer any more, maybe somebody else will.
Your original problem was probably caused by the error described here, but there wasn't much activity on it after the initial post. Since then the entire codebase has been updated, so while the same symptoms reappeared, the error is not related. I am posting this here because anybody looking for an answer these days will probably find this question (as I did :).
Even when returning a JsonResponse directly from the kernel exception handler, the response will still have Content-Type: text/html; charset=UTF-8. This stumped me so much that I used netcat to make a manual request without any smart software in between, and it turns out that the response in such a case actually carries two different Content-Type headers:
HTTP/1.0 500 Internal Server Error
Connection: close
X-Powered-By: PHP/5.5.9-1ubuntu4.17
Content-Type: text/html; charset=UTF-8
Cache-Control: private, must-revalidate
Content-Type: application/json
pragma: no-cache
expires: -1
X-Debug-Token: 775c55
X-Debug-Token-Link: http://127.0.0.1:8000/_profiler/775c55
Date: Thu, 27 Oct 2016 23:08:31 GMT
Now, a double Content-Type header is not something you see every day. It seems this behavior is implemented in the Symfony\Component\Debug\ExceptionHandler class, which is only used in debug mode. In order to be as robust as possible, it first renders the standard Symfony error page describing the thrown exception. The rendered content is not sent back directly; instead, it leverages PHP's output buffering to capture and store the produced output. It then attempts to produce a custom error page from the framework; if that fails, the previously prepared message is sent.
Output buffering, however, only works for the message body, not for headers; those are always sent directly. The problem only appears in the debug environment, and unusual content types on error are mostly found in web APIs, where debug mode is arguably of little use. That keeps the exposure surface relatively small, but if an application offering both a web API and an end-user interface needs to be tested, this can become a problem.
Solving this without modifying internal Symfony files doesn't seem possible: the output control sits deep within the Symfony kernel and offers no configuration. Anyway, I am not convinced of the benefits of such a solution. Can anyone explain what could happen inside a custom exception handler that would make the default handler useless if it failed?
Maybe user code messing with ob_* functions?
Do web browsers send the file size in the HTTP headers when uploading a file to the server? And if so, is it possible to refuse the file just by reading the headers, without waiting for the whole upload process to finish?
http://www.faqs.org/rfcs/rfc1867.html
HTTP clients are encouraged to supply content-length for overall file input so that a busy server could detect if the proposed file data is too large to be processed reasonably
But the Content-Length is not required, so you cannot rely on it. Also, an attacker can forge a wrong Content-Length.
Reading the file content is the only reliable way. Having said that, if the Content-Length is present and too big, closing the connection is a reasonable thing to do.
Also, the content is sent as multipart, and most modern frameworks decode it first. That means you won't get the file's byte stream until the framework is done, which could mean "until the whole file is uploaded".
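As an illustration, here is a minimal sketch of such an early check in an ASP.NET handler (one server stack among many; the 5 MB cap is a made-up value):

using System.Web;

public class SizeCheckedUploadHandler : IHttpHandler
{
    private const int MaxBytes = 5 * 1024 * 1024; // hypothetical 5 MB cap

    public void ProcessRequest(HttpContext context)
    {
        // ContentLength comes from the request headers, so it is available
        // before the body is read; remember it can be absent or forged.
        if (context.Request.ContentLength > MaxBytes)
        {
            context.Response.StatusCode = 413; // Request Entity Too Large
            context.Response.End();
            return;
        }
        // Otherwise read and validate the actual bytes as usual.
    }

    public bool IsReusable { get { return false; } }
}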
EDIT: before going too far, you may want to check this other answer relying on Apache configuration: Using jQuery, Restricting File Size Before Uploading. The description below is only useful if you really need even more custom feedback.
Yes, you can get some information upfront, before allowing the upload of the whole file.
Here's an example of the headers coming from a form with the enctype="multipart/form-data" attribute:
POST / HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.3) Gecko/2008092414 Firefox/3.0.3
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.7,fr-be;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Content-Type: multipart/form-data; boundary=---------------------------886261531333586100294758961
Content-Length: 135361
-----------------------------886261531333586100294758961
Content-Disposition: form-data; name=""; filename="IMG_1132.jpg"
Content-Type: image/jpeg
(data starts here and ends with -----------------------------886261531333586100294758961 )
You have the Content-Length in the request headers, and additionally there is a Content-Type in the header of each file part (each file has its own headers, which is the point of multipart encoding). Beware that it is the browser's responsibility to set a relevant Content-Type by guessing the file type; you can't guarantee it, but it should be fairly reliable for early rejection (still, you'd better check the whole file once it's entirely available).
Now, there is a gotcha. I used to filter image files like that, not on size but on Content-Type; but since you want to stop the request as soon as possible, the same problem arises: the browser only gets your response once the whole request has been sent, including the form content and thus the uploaded files.
If you don't want the provided content and want to stop the upload, you have no choice but to brutally close the socket. The user will only see a confusing "connection reset by peer" message. That sucks, but it's by design.
So you only want to use this method for background asynchronous checks (using a timer that watches the file field). I ended up with this hack:
I use jQuery to tell me when the file field has changed.
When a new file is chosen, disable all other file fields on the same form so only that one is sent.
Send the file asynchronously (jQuery can do it for you; it uses a hidden frame).
Server-side, check the headers (Content-Length, Content-Type, ...) and cut the connection as soon as you have what you need.
Set a session variable recording whether that file was OK or not.
Client-side, as the file is uploaded into a frame, you don't get any kind of feedback if the connection is closed. Your only alternative is a timer.
Client-side, a timer polls the server for the status of the uploaded file. Server-side, read that session variable and send it back to the browser (see the sketch after this list).
The client has the status; render it in your form: error message, green checkmark/red X, whatever. Reset the file field or disable the form, you decide. Don't forget to re-enable the other file fields.
Quite messy, eh? If any of you has a better alternative, I'm all ears.
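For what it's worth, the status-poll endpoint from the last two steps might be sketched like this in ASP.NET (one possible stack, not the original implementation; "UploadStatus" is a hypothetical session key):

using System.Web;
using System.Web.SessionState;

public class UploadStatusHandler : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // The upload check stored its verdict in a session variable;
        // the client-side timer polls this endpoint to read it.
        string status = context.Session["UploadStatus"] as string ?? "pending";
        context.Response.ContentType = "text/plain";
        context.Response.Write(status); // e.g. "ok", "rejected" or "pending"
    }

    public bool IsReusable { get { return false; } }
}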
I'm not sure, but you should not really trust anything sent in the headers, as it can be faked by the user.
It depends on how the server works. For example, in PHP your script will not run until the file upload is complete, so this wouldn't be possible.