We have a font hosted on S3 as a woff and referenced via CSS. The S3 bucket has CORS configured to allow cross-site access from anywhere. Normally, the font displays with no issues. Sometimes, however, Firefox fails to render the font and the error console reports "bad URI or cross-site access not allowed". I'm trying, without much success, to figure out exactly what sequence of HTTP calls FF uses to obtain the font resource from S3 so I can debug the situation with curl. Thus far, debugging with
curl -s -I -X OPTIONS --header "Origin: http://www.foo.com" --header "Access-Control-Request-Method: GET" s3.amazonaws.com/font.woff
has yielded only HTTP 200 responses, i.e., it provides no information about what might be failing. I'm at a loss as to how to figure out what's going on. I don't know whether S3 is sporadically returning spurious 403 responses to the OPTIONS requests, returning 403 responses to the requests for the resource itself, or whether our HTML/CSS is somehow subtly broken and/or exposing a bug in FF. Any suggestions?
This looks to be an intentional security restriction that Firefox has implemented: cross-origin font loading is blocked unless the server grants access via the appropriate CORS headers.
Read about one possible solution here: Downloadable font on firefox: bad URI or cross-site access not allowed
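One thing worth checking beyond the preflight: Firefox typically fetches a cross-origin font with a plain GET carrying an Origin header, not an OPTIONS request, so the CORS headers on the actual GET response matter too. A minimal curl sketch, reusing the question's placeholder host and path:

# Mimic Firefox's cross-origin font fetch: a GET with an Origin header.
# -D - dumps the response headers; look for Access-Control-Allow-Origin,
# which must be present (and match) or Firefox will refuse the font.
curl -s -D - -o /dev/null --header "Origin: http://www.foo.com" s3.amazonaws.com/font.woff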
I'm using an IIS 10 server as a gateway for a Node.js server.
When a client requests a file download, such as a zip file, the IIS server fetches it from the Node.js server internally over HTTP and then passes it to the client over HTTPS.
But in Chrome, it shows the error
net::ERR_HTTP_1_1_REQUIRED with status 200, and when I try the download again it works well until I clear the caches.
In Firefox, it returns status 200 too, but nothing happens.
In Microsoft Edge and IE11 it works well.
I've set a generous timeout and buffer size in IIS.
Could Chrome and Firefox be going wrong on the HTTPS-to-HTTP hop, or is it something else?
There may be extensions in your Firefox and Chrome that cause this error; it means a browser extension blocked the request. The most common culprit is an ad blocker like AdBlock Plus. In short, your requests to the server have been blocked by an extension, so try disabling your extensions and trying again.
It seems that the .NET Core team found a related issue and provided a workaround.
Perhaps the same can be applied with other frameworks.
https://github.com/dotnet/aspnetcore/issues/4398
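Before reaching for a framework-level workaround, it may be worth confirming that protocol negotiation is really the culprit by replaying the download with curl and forcing each HTTP version in turn. A rough sketch; the URL is a placeholder for your own download endpoint:

# Force HTTP/2 and watch the negotiation in the verbose output:
curl -v --http2 -o /dev/null https://example.com/files/archive.zip

# Compare against an HTTP/1.1-only request:
curl -v --http1.1 -o /dev/null https://example.com/files/archive.zip

If the HTTP/2 attempt fails where the HTTP/1.1 one succeeds, the gateway's protocol handling is the place to look.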
Apparently, when a browser makes an AJAX request, it will sometimes send an OPTIONS preflight request before the real request; the 204 response to that preflight is what causes the problem.
For me, returning the file with a response Content-Type of "text/plain" instead of "application/octet-stream" seems to have solved the problem.
I'm not really sure why it works, it just does.
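If you want to double-check which Content-Type the server is actually returning, curl can dump the response headers for you. A sketch; the URL is a placeholder:

# -D - prints the received headers while still performing a real GET
# (avoid -I here, since some servers answer HEAD requests differently):
curl -s -o /dev/null -D - https://example.com/files/archive.zip | grep -i content-type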
I'm troubleshooting an issue that I think may be related to request filtering. Specifically, it seems every connection to a site made with a blank user agent string is being shown a 403 error. I can generate other 403 errors on the server by doing things like trying to browse a directory with no default document while directory browsing is turned off. I can also generate a 403 error by using a tool like the Modify Headers extension for Google Chrome to set my user agent string to the Baidu spider string, which I know has been blocked.
What I can't seem to do is generate a request with a BLANK user agent string to try that. The extensions I've looked at require something in that field. Is there a tool or method I can use to make a GET or POST request to a website with a blank user agent string?
I recommend trying a CLI tool like cURL or a UI tool like Postman. You can carefully craft each header, parameter, and value that you place in your HTTP request and fully trace the end-to-end request-response exchange.
This example, straight from the cURL docs on user agents, shows how you can play around with setting the user agent via the CLI.
curl --user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" [URL]
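Since the goal here is specifically a blank user agent, curl can do that too. A quick sketch against a placeholder URL:

# Send no User-Agent header at all (a header given with no value is removed):
curl -H "User-Agent:" https://example.com/

# Send the User-Agent header with an empty value (note the trailing semicolon):
curl -H "User-Agent;" https://example.com/

The first form should reproduce the 403 if the server rejects requests that lack a user agent entirely; the second covers the case where the header is present but empty.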
In Postman it's just as easy: tinker with the headers and params as needed. You can also click the "Code" link on the right-hand side and view the request as raw HTTP when you want to see what will actually be sent.
You can also use a heap of other HTTP tools, such as Paw and Insomnia, all of which are well suited to the task at hand.
One last tip: in the Chrome developer tools, you can right-click a specific request in the Network tab and copy it as cURL. You can then paste the cURL command and modify it as needed. In Postman you can import a request by pasting raw text, and Postman will interpret the cURL command for you, which is particularly handy.
I am trying to generate HTML reports for a simple test where I do an HTTP GET that involves a 302 redirect. The statistics generated are a bit confusing, showing 3 different HTTP requests as below.
[screenshot: JMeter statistics]
I am using 3000 threads and I see almost 9000 samples. I would have expected up to two HTTP requests per iteration: 1. the original request, and 2. the redirect that follows it. So why am I getting 3 HTTP requests? Am I missing something?
I am using the following to run the test and generate the report:
jmeter.sh -n -t <testplan.jmx> -l <results.jtl>
jmeter.sh -g <results.jtl> -o ./analysis
The report is fairly straightforward when redirects are not involved, though.
The extra samples may stand for:
Redirects (HTTP 3xx statuses). In this case you should cross-check JMeter's behaviour against a real browser's network footprint; you can see which requests the browser sends using the browser developer tools. If the numbers match, you should be good to go, as you're properly simulating the real browser's behaviour. If not, play with the Redirect Automatically and Follow Redirects checkboxes in the HTTP Request sampler.
Embedded resources (images, scripts, styles, fonts, sounds, etc.). This is pretty normal, as downloading resources is what real browsers do. Just make sure to add an HTTP Cache Manager to your Test Plan, since real browsers request these items only once, and make sure that no external resources (coming from CDNs or third-party services) are in scope, as you should not include external services in your load test.
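To cross-check the redirect chain outside JMeter, curl can follow it and count the hops for you. A sketch; the URL is a placeholder for the endpoint under test:

# Follow redirects, then report how many were taken and where we ended up:
curl -s -o /dev/null -L -w "redirects: %{num_redirects}\nfinal URL: %{url_effective}\n" https://example.com/

If curl reports one redirect but JMeter records three samples per iteration, the third sample is likely an embedded resource rather than part of the redirect chain.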
I am working on creating a Chrome extension to filter HTTP requests. That means that when a URL is requested, the extension can filter out some of the object requests so that they are never sent to the web server. I searched for a while but did not find a solution in the chrome.* API. Does anyone know if Google Chrome supports this, or is there any other way to accomplish this?
There is the webRequest API. At the moment it is still experimental but will apparently become stable with Chrome 17 (it can be tested in Canary builds). There the API is called chrome.webRequest rather than chrome.experimental.webRequest and requires the webRequest permission (plus webRequestBlocking if you want to block requests). Other than that, the current documentation is correct.
I want to change the first line of my HTTP request, modifying the method and/or URL.
The (excellent) Tamperdata firefox plugin allows a developer to modify the headers of a request, but not the URL itself. This latter part is what I want to be able to do.
So something like...
GET http://foo.com/?foo=foo HTTP/1.1
... could become ...
GET http://bar.com/?bar=bar HTTP/1.1
For context, I need to tamper with (i.e., correct) an erroneous request from Flash, to see whether the error can be fixed by repairing the URL.
Any ideas? It sounds like something that may need to be done at the proxy level; if so, any suggestions?
Check out Charles Proxy (multiplatform) and/or Fiddler2 (Windows only) for more client-side solutions - both of these run as a proxy and can modify requests before they get sent out to the server.
If you have access to the webserver and it's running Apache, you can set up some rewrite rules that will modify the URL before it gets processed by the main HTTP engine.
For those coming to this page from a search engine, I would also recommend the Burp Proxy suite: http://www.portswigger.net/burp/proxy.html
Although more specifically targeted towards security testing, it's still an invaluable tool.
If you're trying to intercept the HTTP packets and modify them on the way out, then Tamperdata may be the route you want to take.
However, if you want minute control over these things, you'd be much better off simulating the entire browser session using a utility such as curl
Curl: http://curl.haxx.se/
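For the concrete case in the question, replaying the corrected request by hand is a one-liner (both URLs come from the question's own example):

# Replay the request against the corrected URL; -X lets you force any method:
curl -v -X GET "http://bar.com/?bar=bar"

The verbose output shows the exact request line sent, so you can confirm it matches what the broken Flash request should have looked like.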