Hi everyone, here is my question: I want to get all the objects of a website, not just the ones shown in the browser, but every object for which an HTTP request was actually issued.
For example, I want a list of all the objects for which I issued an HTTP GET request.
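One way to approach this, assuming you only need the requests made by the current page and can run a script in it, is the browser's Resource Timing API. This is just a sketch, not a complete crawler:

```ts
// Sketch: list the URLs the page actually issued requests for, using the
// Resource Timing API (run in the browser console or in a page script).
const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

for (const entry of entries) {
  // initiatorType tells you whether the request came from an <img>, <script>,
  // fetch(), XMLHttpRequest, CSS, etc.; name is the requested URL.
  console.log(entry.initiatorType, entry.name);
}

// Note: this only covers sub-resources of the current document; the document
// itself appears under performance.getEntriesByType("navigation").
```

The browser DevTools Network panel (or a HAR export) gives the same information without writing any code.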
Related
I know that, in principle, HTTP GET requests shouldn't have any side effects (i.e. they should be safe, and therefore idempotent).
But I'm experimenting and, so far, my test HTTP GET requests that change data on a test database work just fine: it doesn't look like the HTTP GET requests ever get re-requested.
Now, my tests are not representative of the real world. In the real world, any middleman (e.g. Cloudflare) could take into consideration the fact that the HTTP request is a GET and re-issue it, for example upon flaky networking. But I wonder if this actually ever happens?
Note that the HTTP requests in question are all browser-side fetch() JSON requests using the Fetch API. (I think that's relevant because while Cloudflare does re-request HTML resources, I don't think Cloudflare would ever re-request a fetch() JSON request.)
My gut feeling says that, while in theory such HTTP GET requests are allowed to be re-requested, it doesn't happen in practice. Am I right or is there a situation showing that I'm wrong?
Context
I'm the author of Telefunc, which is a JavaScript/TypeScript RPC implementation, and I'd like to make all of Telefunc's HTTP requests GET requests, even when the user makes database changes. I'd like to do this because it would enable Telefunc to support ETag caching for all requests without the user having to configure anything.
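For context, here is a minimal sketch (not Telefunc's actual API; the `call()` helper and `/api/` path are hypothetical) of why GET matters for this: a GET fetch() can be revalidated through the browser's HTTP cache with ETag / If-None-Match, while an equivalent POST bypasses that cache entirely.

```ts
// Hypothetical RPC helper: because the request is a GET, the browser's HTTP
// cache can store the response and send If-None-Match on the next call,
// turning unchanged responses into cheap 304s.
async function call(endpoint: string, params: Record<string, string>): Promise<unknown> {
  const url = `/api/${endpoint}?${new URLSearchParams(params)}`;
  const response = await fetch(url, { method: "GET" });
  return response.json();
}

// A POST carrying the same payload would not be cached by the browser at all.
```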
I have read many discussions on this, such as the fact that PUT is idempotent and POST is not, etc. However, doesn't this ultimately depend on how the server is implemented? A developer can always build the backend server such that a PUT request is not idempotent and creates multiple records for multiple requests. A developer can also build an endpoint for a PUT request such that it acts like a DELETE request and deletes a record in the database.
So my question is, considering that we don't take into account any server side code, is there any real difference between the HTTP methods? For example, GET and POST have real differences in that you can't send a body using a GET request, but you can send a body using a POST request. Also, from my understanding, GET requests are usually cached by default in most browsers.
Are HTTP request methods anything more than just a logical structure (semantics) so that as developers we can "expect" a certain behavior based on the type of HTTP request we send?
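To make the point in the question concrete, here is a minimal sketch (assuming an Express-style Node server, not any particular real API) of a PUT handler that violates the idempotency that clients and intermediaries expect:

```ts
import express from "express";

const app = express();
app.use(express.json());

// Nothing in HTTP itself stops a server from treating PUT non-idempotently:
// this handler appends a new record on every call instead of replacing one.
const records: { id: number; body: unknown }[] = [];

app.put("/records", (req, res) => {
  records.push({ id: records.length + 1, body: req.body }); // new record per call
  res.status(201).json({ count: records.length });
});

app.listen(3000);
```

The server will happily do this, but any component that assumes PUT is idempotent (a retrying client, a proxy) can now silently create duplicates.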
You are right that most of the differences are on the semantic level, and if your components decide to assign other semantics, this will work as well. Unless there are components involved that you do not control (libraries, proxies, load balancers, etc).
For instance, some component might take advantage of the fact that PUT is idempotent and can therefore be retried, while POST cannot.
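For illustration, a retry layer in a client library or proxy might apply exactly that policy. This is a hedged sketch of the idea, not any specific library's behavior:

```ts
// Retry only methods that the HTTP spec declares safe or idempotent.
const IDEMPOTENT = new Set(["GET", "HEAD", "PUT", "DELETE", "OPTIONS"]);

async function fetchWithRetry(url: string, init: RequestInit = {}, retries = 2): Promise<Response> {
  const method = (init.method ?? "GET").toUpperCase();
  try {
    return await fetch(url, init);
  } catch (err) {
    // Network error: only re-issue the request if the method promises idempotency.
    if (retries > 0 && IDEMPOTENT.has(method)) {
      return fetchWithRetry(url, init, retries - 1);
    }
    throw err;
  }
}
```

If your server gives non-standard semantics to an idempotent method, a layer like this will break your data without any code of yours being wrong in isolation.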
The Hypertext Transfer Protocol (HTTP) is designed to enable communications between clients and servers.
HTTP works as a request-response protocol between a client and server.
A web browser may be the client, and an application on a computer that hosts a web site may be the server.
Example: A client (browser) submits an HTTP request to the server; then the server returns a response to the client. The response contains status information about the request and may also contain the requested content.
HTTP Methods
GET
POST
PUT
HEAD
DELETE
PATCH
OPTIONS
The GET Method
GET is used to request data from a specified resource.
GET is one of the most common HTTP methods.
Note that the query string (name/value pairs) is sent in the URL of a GET request.
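A quick sketch of what that looks like with fetch() (example.com and the parameters are placeholders):

```ts
// With GET, the name/value pairs travel in the URL's query string.
const params = new URLSearchParams({ name: "alice", page: "2" });
const res = await fetch(`https://example.com/users?${params}`); // method defaults to GET
const users = await res.json();
```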
The POST Method
POST is used to send data to a server to create/update a resource.
The data sent to the server with POST is stored in the request body of the HTTP request.
POST is one of the most common HTTP methods.
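A corresponding sketch for POST (again with placeholder URL and data):

```ts
// With POST, the data goes in the request body, not the URL.
const res = await fetch("https://example.com/users", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "alice" }),
});
```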
The PUT Method
PUT is used to send data to a server to create/update a resource.
The difference between POST and PUT is that PUT requests are idempotent. That is, calling the same PUT request multiple times will always produce the same result. In contrast, calling the same POST request repeatedly may have the side effect of creating the same resource multiple times.
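A sketch of that contrast (placeholder URLs; the server is assumed to follow the conventional semantics):

```ts
// Repeating this PUT any number of times leaves user 42 in the same state:
await fetch("https://example.com/users/42", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "alice" }),
});

// Repeating this POST would typically create a new user each time:
await fetch("https://example.com/users", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "alice" }),
});
```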
The HEAD Method
HEAD is almost identical to GET, but without the response body.
In other words, if GET /users returns a list of users, then HEAD /users will make the same request but will not return the list of users.
HEAD requests are useful for checking what a GET request will return before actually making a GET request - like before downloading a large file or response body.
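For example, a sketch of using HEAD to inspect the size before committing to a download (placeholder URL; the header is only available if the server sends it):

```ts
// HEAD returns the same headers as GET but no body.
const head = await fetch("https://example.com/big-file.zip", { method: "HEAD" });
console.log(head.headers.get("content-length")); // size in bytes, if provided
```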
The DELETE Method
The DELETE method deletes the specified resource.
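A minimal sketch (placeholder URL):

```ts
// DELETE removes the resource identified by the URL.
const res = await fetch("https://example.com/users/42", { method: "DELETE" });
console.log(res.status); // commonly 200, 202, or 204 on success
```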
The OPTIONS Method
The OPTIONS method describes the communication options for the target resource.
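A minimal sketch (placeholder URL; whether the Allow header is visible to page scripts also depends on CORS):

```ts
// OPTIONS asks the server which methods the target resource supports.
const res = await fetch("https://example.com/users", { method: "OPTIONS" });
console.log(res.headers.get("allow")); // e.g. "GET, POST, HEAD, OPTIONS"
```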
src. w3schools
Status code 400 Bad Request is used when the client has sent a request that cannot be processed due to being malformed.
Status code 404 Not Found is used when the requested resource does not exist / cannot be found.
My question is, when a client sends a request to an endpoint my API does not serve, which of these status codes is more appropriate?
Should an endpoint be considered a "resource", and thus a 404 be returned? My issue with this is that if the client only checks the status code, they cannot tell the difference between a 404 meaning "you reached the correct endpoint but no result matched your query" and a 404 meaning "you queried a non-existent endpoint".
Alternatively, should we expect that a client has prior knowledge of all available API endpoints, and thus treat their request as malformed and return a 400 if the endpoint they are trying to reach does not exist?
Maybe this depends on whether the endpoints are REST or not. If they are REST endpoints, the client should not need prior API knowledge, but be able to learn about all relevant API endpoints by navigating the API from a single root endpoint. In such a case, I guess 404 would be more appropriate.
In my specific case right now, this is an internal (non-REST) HTTP API, where I expect the client to have prior knowledge of all API endpoints, so I am leaning towards 400, to avoid issues where 404 from accessing the wrong endpoint could be misconstrued as a 404 indicating that what they sought from the correct endpoint could not be found.
Thoughts?
Many modern APIs provide human-readable endpoints as a convenience for developers. The intent of REST, however, is that URLs are treated as opaque - they may happen to contain semantic content, but can't be relied upon to do so. There's no such thing as a "malformed" URL. There's only a URL that points to something and a URL that doesn't.
Now, that's the REST dogma (and arguably also the HTTP 1.1 spec). That doesn't mean it's what you should do. If you have a single internal client for your API, and that's not going to change, you have a lot of flexibility in designing your own standards. Just make sure to document them, especially those that might confuse the guy straight out of college that they hire to replace you when you move on.
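One pragmatic middle ground, sketched below with an Express-style server and an in-memory lookup standing in for whatever data access the real API uses, is to return 404 in both cases but put a machine-readable reason in the body, so a client that looks past the status code can tell them apart:

```ts
import express from "express";

const app = express();

// Hypothetical data access standing in for the real backend.
const users: Record<string, { id: string; name: string }> = { "1": { id: "1", name: "alice" } };

app.get("/users/:id", (req, res) => {
  const user = users[req.params.id];
  if (!user) {
    // Correct endpoint, but no matching resource.
    res.status(404).json({ error: "user_not_found", detail: `no user with id ${req.params.id}` });
    return;
  }
  res.json(user);
});

// Fallback for endpoints the API does not serve at all.
app.use((req, res) => {
  res.status(404).json({ error: "unknown_endpoint", detail: `${req.method} ${req.path} is not served` });
});

app.listen(3000);
```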
It's my understanding that caching is one of the main utilities of a proxy server. I'm currently trying to develop a simple one and I would like to know exactly how caching works.
Intuitively I think that it's basically an association between a request and a response. For example: for the following request: "GET google.com" you have the following response: "HTTP/1.0 200 OK..."
That way, whenever the proxy server receives a request for that URL it can reply with the cached response (I'm not really worried right now about when to serve the cached response and when to actually forward the request to the real destination).
What I don't understand is how to establish the association between a request and a response since the HTTP response doesn't have any field saying "hey this is the response you get when you request the X URL" (or does it?).
Should I get this information by analyzing the underlying protocols? If so, how?
Your caching proxy server comes into play when a request arrives, so at that point you already have the requested resource's URL. You then look in your cache for a response stored under that URL; if you cannot find one (or the cached entry is stale), you fetch the data from the origin. Keep in mind that you have to invalidate the cached resource if you receive a PUT, POST, or DELETE request for it.
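In other words, the "association" is nothing more than a cache key derived from the request. A minimal sketch (ignoring freshness headers like Cache-Control, Vary, and Expires, which a real proxy must honor):

```ts
// The key is derived from the request (here just method + URL); the value is
// the stored response.
interface CachedResponse {
  status: number;
  headers: Record<string, string>;
  body: Uint8Array;
}

const cache = new Map<string, CachedResponse>();

function cacheKey(method: string, url: string): string {
  return `${method.toUpperCase()} ${url}`;
}

function lookup(method: string, url: string): CachedResponse | undefined {
  return cache.get(cacheKey(method, url));
}

function store(method: string, url: string, response: CachedResponse): void {
  // Only cache GET responses in this sketch.
  if (method.toUpperCase() === "GET") cache.set(cacheKey(method, url), response);
}

function invalidate(method: string, url: string): void {
  // A PUT, POST, or DELETE to a URL makes any cached GET for it stale.
  if (["PUT", "POST", "DELETE"].includes(method.toUpperCase())) {
    cache.delete(cacheKey("GET", url));
  }
}
```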
I am setting up a back end API in a script of mine that contacts one of my sites by sending XML to my web server in the form of POST data. This script will be used by many and I want to limit the bandwidth waste for people that accidentally turn the feature on without a proper access key.
I will be denying requests that do not have the correct access key, probably by returning a 403 status code.
Let's say the POST data is ~500 KB. Does the server receive all 500 KB of data when this attempt is made, regardless of the status code?
How about if I made the URL contain the key, e.g. mydomain/api/123456789, and generated a 403 status for all bad access keys?
Does the POST data still get sent/received regardless, or is it negotiated before the data is finally sent?
Thanks in advance!
Generally speaking, the entire request will be sent, including post data. There is often no way for the application layer to return a response like a 403 until it has received the entire request.
In reality, it will depend on the language/framework used and how closely it is linked to the HTTP server. Section 8.2.2 of the RFC 2616 HTTP/1.1 specification has this to say:
An HTTP/1.1 (or later) client sending a message-body SHOULD monitor the network connection for an error status while it is transmitting the request. If the client sees an error status, it SHOULD immediately cease transmitting the body. If the body is being sent using a "chunked" encoding (section 3.6), a zero length chunk and empty trailer MAY be used to prematurely mark the end of the message. If the body was preceded by a Content-Length header, the client MUST close the connection.
So, if you can find a language environment closely linked with the HTTP server (for example, mod_perl), you could do this in a way that complies with the standard.
An alternative approach you could take is to make an initial, smaller request to obtain a URL to use for the larger POST. The application can then deny providing the URL to clients without an appropriate key.
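A sketch of that two-step approach (the endpoints, header name, and response shape are hypothetical): the cheap first request carries only the access key, and the expensive body is sent only after the server has vouched for the key.

```ts
async function submitXml(accessKey: string, xml: string): Promise<Response> {
  // Step 1: small request; the server can return 403 here without ever
  // seeing the large payload.
  const ticketRes = await fetch("https://mydomain/api/upload-ticket", {
    method: "POST",
    headers: { "X-Access-Key": accessKey },
  });
  if (ticketRes.status === 403) throw new Error("invalid access key");
  const { uploadUrl } = await ticketRes.json(); // hypothetical response shape

  // Step 2: only clients with a valid key ever reach the point of sending ~500 KB.
  return fetch(uploadUrl, {
    method: "POST",
    headers: { "Content-Type": "application/xml" },
    body: xml,
  });
}
```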
Here is a great book about RESTful Web Services that explains how HTTP works: http://oreilly.com/catalog/9780596529260
You can think of any request as an envelope: the address (URL) is written on the outside, along with some properties (HTTP headers), and the data is inside (if the request was made with the POST method). So, as you might guess, you can't receive an envelope partially.
Oh, I forgot: that applies when you are using HTTP POST with the standard "application/x-www-form-urlencoded" content type. If you are uploading files (i.e. using "multipart/form-data"), Django gives you control over the streamed chunks of the files using middleware classes: http://docs.djangoproject.com/en/dev/topics/http/middleware/