I'm a bit confused, or maybe I just don't fully understand HTTP requests.
There is a website on which the search results are fetched through a GET request. I can see the whole parameter list in Firebug, and if I click "search" the results are displayed as you would expect. What I don't understand is this: if I take that request URL (with the same parameters) and paste it into a new browser tab, it no longer returns results. Instead I see a 500 - Internal Server Error.
Can someone explain why this is happening, or what I can do to see the results when accessing the URL directly?
As robert_b_clark suggested, the solution is to send the Referer header (note that the header name really is spelled "Referer" in HTTP) when making the request.
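For illustration, a minimal sketch using the browser's fetch API, run from the site's own console (the URL and parameters here are placeholders, not the site's real ones):

// Hypothetical sketch: repeat the search request, but give the browser a
// referrer to attach. fetch's "referrer" option must be a same-origin URL,
// which it is when this runs in the console of the site itself.
const url = "https://example.com/search?q=term&page=1"; // placeholder URL
const response = await fetch(url, {
  referrer: "https://example.com/search", // the page the request "came from"
});
console.log(response.status, await response.text());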
Related
I'm trying to fetch data from a website (https://gesetze.berlin.de/bsbe/search). Using Firefox, I've taken a look at the network analysis tool. Usually I just mess around with the parameters of the POST request to see how I might influence the server's response. But when I simply re-send the request (making no changes at all), I get HTTP response 500. The server's answer contains the message: security_notAuthenticated.
Can anyone explain this behaviour? The request is made from the same PC, in the same browser and the same session, and there is no login function on that website. Pictures shown below.
[Picture 1: the original request, returning code 200]
[Picture 2: the re-sent request, returning code 500]
The response security_notAuthenticated indicates that your way of repeating the request omits some authentication-related information.
When I repeat the request using Firefox's "Resend" or "Edit and resend" function, the Cookie header is not sent with the request. Although it appears in the editable header list when using "Edit and resend", it is missing from the request that is actually sent. I'm not sure whether this is a feature or a bug.
When using Firefox's "Use as Fetch in Console" function, the Cookie header is included automatically, and you still have the ability to change the headers and the body. The fetch API is a web standard, and some introductory material about fetch can be found on MDN.
If you want to make custom requests in the browser, fetch is a good option.
In other environments and languages you usually use some HTTP client (just search the web for "...your language... http request" or similar; you will find something).
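As a rough sketch of what "Use as Fetch in Console" gives you (the body and content type below are placeholders, not the site's real request):

// Hypothetical sketch of re-sending the POST from the site's own console.
// Run there, it is a same-origin request, so the session Cookie is attached
// automatically (fetch's credentials option defaults to "same-origin").
const response = await fetch("https://gesetze.berlin.de/bsbe/search", {
  method: "POST",
  headers: { "Content-Type": "application/json" }, // assumed content type
  body: JSON.stringify({ query: "..." }),          // placeholder body
  credentials: "same-origin",                      // send the session cookie
});
console.log(response.status, await response.text());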
I have a server log and it shows POST and GET
So, if the log shows POST /ping and GET /xyz, does this mean that GET is the user agent requesting a page and POST is the server's response?
I ask because my server logs show a lot of POST entries (millions of /ping), while the GET entries for other pages are a much smaller number.
Which should I focus on? Should I try to get the POST pages indexed, if the server shows them to search engines?
I would suggest you learn the difference between HTTP GET and POST requests.
This answer is quite good.
In summary, the GET requests are pages/data being requested by clients. POSTs are clients sending data to the server, usually expecting data as a response.
In their comment, Sylwit pretty much explains what this has to do with search engines, so I'm just going to describe the differences between GET and POST.
GET and POST are two different types of requests to the server. A GET request is normally used to retrieve information from the server and usually carries a series of GET parameters. When you search for something on Google you're making a GET request:
https://google.com/?q="how+do+i+get"
In this case, the GET parameter is the q after the ?, and it has the value "how do i get". It should be noted that a GET request doesn't need these additional parameters (http://google.com is still a GET request).
POST requests, on the other hand, are normally used to send data to the server. You'll see this any time you send a message, submit a form, etc. When I click submit on this answer, I'll be making a POST request to Stack Overflow's servers. The parameters for these aren't immediately visible in the browser. POST requests can also return an HTTP response with a message.
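As a rough illustration in browser JavaScript (the URLs and payload are made up):

// GET: the parameters travel in the URL's query string.
const search = await fetch("https://example.com/?q=how+do+i+get");

// POST: the data travels in the request body, not in the URL.
const submit = await fetch("https://example.com/answers", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: "my answer" }), // placeholder payload
});
console.log(search.status, submit.status);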
Hope that shows the differences between the two.
I'm a noob when it comes to ASP.NET. I know a few basic commands, such as Response.Redirect("URL"), to redirect my application's web page to a different location.
However, I receive HTTP Error 400 - Bad Request whenever I try to use the code shown below:
Response.Redirect(Server.UrlEncode(this.Downloadlink));
where this.Downloadlink is a user-defined property that returns something like this:
http://mdn.vatsag.net/fp;files/DOWNLOAD/VTSetup.exe
If I paste this link into the browser, the .exe download pops up (which means the link is good).
However, the error occurs when I use the ASP.NET code.
Any response on this issue or its cause is deeply appreciated.
See here: http://www.kirit.com/Response.Redirect%20and%20encoded%20URIs
In short: if you quickly want to fix the issue, remove the part of your code that is UrlEncoding the URL and pass it through unchanged, i.e. Response.Redirect(this.Downloadlink). Server.UrlEncode escapes the whole string, including the :// and the path slashes, so the redirect target is no longer a valid absolute URL; if anything needs encoding, it's the individual query-string values, not the entire URL.
For example, say, in a somewhat REST-oriented environment, a request comes in for an object that doesn't exist, like:
GET http://example.com/thing/5
Is there anything wrong with sending back a 404 response whose body is the same as a different page's? For example, responding like:
404 body: [content from "http://example.com/thing/" which is a list of things]
Does it make any sense to do this? Will this cause any problems with certain browsers? Is it confusing to the user? Or is this perfectly fine to do?
Along these same lines, I would have the content of the 404 response match the request's Accept headers as best I could (i.e. abide by content negotiation with the user agent).
For example, an XML or JSON request would get something along the lines of a simple error message and something that says "look here for similar things", while an HTML request would get an HTML page with the error message as well as the content of the list page (as I indicated above).
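A minimal sketch of that idea as a Node.js handler (the route, payloads, and port are made up for illustration):

// Hypothetical sketch: a 404 whose body honors the Accept header.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // Pretend /thing/5 doesn't exist; negotiate the body of the 404.
  if ((req.headers.accept ?? "").includes("application/json")) {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({
      error: "thing 5 not found",
      seeAlso: "http://example.com/thing/", // "look here for similar things"
    }));
  } else {
    // HTML clients get the error message plus the list-page content.
    res.writeHead(404, { "Content-Type": "text/html" });
    res.end("<h1>Not found</h1><p>Things that do exist: ...</p>");
  }
});

server.listen(8080);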
I think it depends on how the RESTful web services are being consumed. If I'm programmatically consuming the web service from a different application, then I would want the status code together with a plain-text message instead of a message decorated with HTML tags. For example, it doesn't make sense to return bloated 404 content if the user makes the web service call using curl, because the message will not be readable to them.
You could have a different "consumes" for each RESTful web service. If it's an XML request, return a 404 and a plain-text message; otherwise, return the error page content.
I don't see anything wrong with it. In our web service we always send back a JSON error object, which includes stack traces and other details about the response. Even on a regular web server, you get at least some text that can be displayed in a browser saying that you got a 404 response.
So, from here...
In ASP.NET, you have a choice about how to respond to that; it's in the web.config as customErrors. Turn that on, then redirect to a fancy 404 page (maybe you already do). The fancy 404 page, then, can check the requested URL (which gets passed over to the custom error page as yet another query string) to see if it's a valid redirect, lives in your database, etc. Just do a Response.Redirect() from there.
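For reference, the customErrors setup being described looks roughly like this in web.config (the page names are placeholders):

<configuration>
  <system.web>
    <!-- Hypothetical example: send unknown URLs to a custom 404 page.
         ASP.NET passes the original URL along in the query string. -->
    <customErrors mode="On" defaultRedirect="~/Error.aspx">
      <error statusCode="404" redirect="~/NotFound.aspx" />
    </customErrors>
  </system.web>
</configuration>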
Then schooner writes:
Thanks, we do have a 404 now, but we would prefer this not to be detected as a 404 in the process. We would like to handle it directly and separately if possible.
...and I'd like to know just how bad a practice this is. I don't expect to put my "pretty" URLs on the internet (just on business cards), and I have a sample of 404-redirecting-to-a-helpful-site code working, but I don't want to get to production and have an issue with a browser that takes the initial 404 too seriously. Can anyone help me understand more about why I wouldn't want to use customErrors / 404 to flow users to the page they actually wanted?
The main problem with using customErrors as your 404 handler is the status code. Every time customErrors picks up an errored request, rather than sending a 404 back to the browser and letting it know the request was bad, it returns a 302, which indicates that the page has been relocated to whatever your customErrors page is. This isn't bad for most users, because they don't know or even notice the difference; the problem comes from the fact that web crawlers DO know the difference, and the status code they receive directly affects how their indexing works.
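Schematically, what the crawler sees is something like this (the paths are illustrative):

GET /MyAwesomePageAboutStuff.aspx HTTP/1.1
Host: mysite.com

HTTP/1.1 302 Found                <-- "moved", not "missing"
Location: /NotFound.aspx?aspxerrorpath=/MyAwesomePageAboutStuff.aspx

GET /NotFound.aspx?aspxerrorpath=/MyAwesomePageAboutStuff.aspx HTTP/1.1
Host: mysite.com

HTTP/1.1 200 OK                   <-- the error page itself succeeds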
Consider the scenario where you have a page at http://mysite.com/MyAwesomePageAboutStuff.aspx for some period of time, and then one day you decide you no longer need it and delete the file. If Google or some other crawler has already indexed that URL and goes back to it after you delete it, the crawler will get a 302 status code instead of a 404 error, and because of this status code it will update the page's URL to point to your error page rather than deleting the non-existent link. Now, whenever someone finds that URL by way of a search engine, they'll end up at your error page.
It's not really a huge issue, but you can definitely see the headaches this can create for your users in the long run.
Look here for some corroborating data.
I created a vanity URL system using the 404 as the handler. There's no need for a 302 on my side, as the 404 handler dynamically loads the content and returns it. I am fully able to handle any and all POST/GET and SERVER data.
Works great. If you are interested, TarantulaHawk is up on SourceForge.