I want to create a server similar to the Twitter Streaming API, so that a client can read the response in real time while staying connected. How can I do that in Crystal?
Extracted from this issue:
@MakeNowJust says:
You should append \n to the sent text so the client can read it with gets, and call io.flush.
require "http/server"
port = 5000
server = HTTP::Server.new(port) do |context|
loop do
context.response.puts "Something\n"
context.response.flush
sleep 1
end
end
puts "Listening on #{port}"
server.listen
@rx14 says:
Crystal already handles writing chunked responses. Just keep on writing to the output response, and call flush when you want to ensure the client receives the message. If there is no Content-Length header, the response will automatically select chunked encoding for you.
Related
Suppose we have a bidirectional streaming RPC where the client is sending several request messages (i.e. multiple DATA frames) and the server is answering back with several response messages (i.e. multiple DATA frames).
As I understand it, when the RPC is complete, the server will normally send a HEADERS frame with the status header as well as possibly some trailer headers like grpc-status and grpc-message to mark the completion of the request/response exchange.
My question is: suppose the server sends a bad response message, is it possible for the client to send a HEADERS frame with the grpc-status and grpc-message headers to convey information about the error?
The reason I'm asking is that in the C++ server code (generated from the protobuf definition), I'm struggling to find a way to get hold of this last HEADERS frame sent by the client to verify the values of the grpc-status and grpc-message headers.
Additionally, after going through the unit tests in the grpc project, it seems like only the server returns the status for the RPC, which further raises doubts.
I was, however, able to send the HEADERS frame out from the client, but based on the above, I'm not certain whether this is the correct behavior even though I was able to do it.
I would appreciate it if someone can clarify this for me as I'm fairly new to HTTP/2 and gRPC.
Additionally, after going through the unit tests in the grpc project, it seems like only the server returns the status for the RPC, which further raises doubts.
Correct! In gRPC, the server is responsible for terminating the RPC with a status and optional trailing metadata. The client never sends a status to the server. The client can indicate it is done sending on the stream without a status (which internally happens by sending an empty DATA frame with the END_STREAM flag set, but users shouldn't need to be concerned with this detail). The client only sends HEADERS frames at the start of the RPC.
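To see the shape of this from the client side, here is a rough C++ sketch; the EchoService / ChatMessage names, the text field, and the echo.grpc.pb.h header are hypothetical stand-ins for whatever your .proto generates:

// Hypothetical bidirectional-streaming client: write requests, half-close, read
// responses, then obtain the status the server sent in its trailers.
#include <memory>
#include <grpcpp/grpcpp.h>
#include "echo.grpc.pb.h"   // hypothetical generated header

void RunChat(std::shared_ptr<grpc::Channel> channel) {
    std::unique_ptr<echo::EchoService::Stub> stub = echo::EchoService::NewStub(channel);
    grpc::ClientContext ctx;
    auto stream = stub->Chat(&ctx);   // ClientReaderWriter<ChatMessage, ChatMessage>

    echo::ChatMessage req;
    req.set_text("hello");
    stream->Write(req);       // request messages -> DATA frames
    stream->WritesDone();     // half-close: END_STREAM from the client, no status involved

    echo::ChatMessage res;
    while (stream->Read(&res)) {
        // handle response messages
    }

    // Only the server produces grpc-status / grpc-message; the client reads them here.
    grpc::Status status = stream->Finish();
    if (!status.ok()) {
        // status.error_code() and status.error_message() come from the server's trailers
    }
}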
I came across the term HTTP. I have done some research and wanted to ensure that I correctly understood the term.
So, is it true that HTTP is, in simple words, a letter containing information in a language that both client and server can understand?
Then, that letter is sent to the server thanks to TCP/IP which serves as a car that takes that letter to the server.
Then, after the letter is delivered to the server, the server reads the content of the letter, and if it is a GET request, the server takes the necessary data, ATTACHES that data to the letter, and sends it back to the client, again via TCP/IP. But if it was a POST request, then the client ATTACHES the DATA to the letter and sends it to the server so that it saves that data in the database.
Is that true?
Basically, it is true.
However, the server can decide what to do if it is a GET or POST or any other request (it doesn't have to, e.g., append the data to a file).
I will show you some additional information / try to explain it in my own words:
TCP is another communication protocol. It allows a client to open a connection to a server, and they can communicate afterwards.
HTTP (Hypertext Transfer Protocol) builds on top of TCP.
At first, the client opens a connection to the server.
After that, the client sends the HTTP Request. The first line contains the type of the request, the path and the version. For example, it could be GET / HTTP/1.1.
The next part of the request contains the request headers. Every header is a line, sent like the following: headerName: headerValue
This part of the request ends with an empty line.
If it is a POST request, the query parameters (the form data) are sent next, in the body. If it is a GET request, these query parameters are added to the path instead (e.g. /index.html?paramName=paramValue).
After receiving the request, the server sends an HTTP response back to the client.
The first line of the response contains the HTTP version, the status code and the status message. For example, it could be HTTP/1.1 200 OK.
Then, just like in the request, the response headers follow, for example Content-Length: 1024.
The response headers also end with an empty line.
The last part of the response is the body/content. For example, this could be the HTML code of the website you are visiting.
Obviously, the length of the content/body of the response has to match the Content-Length header (in bytes).
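To make this concrete, a complete exchange could look roughly like this on the wire (the host, path and length are made up; every line actually ends with \r\n, and the empty line separates the headers from the body):

Request:

GET /index.html?paramName=paramValue HTTP/1.1
Host: www.example.com

Response:

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1024

<html> ... the 1024-byte body ... </html>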
After that, the connection will (normally) be closed. If the client wants to, e.g., request more resources, it will send another request. The server has NO POSSIBILITY to send data to the client after that unless the client sends another request (WebSockets can bypass this issue).
GET is meant to get the content of a site. A web browser will send a GET request if you type in a URL. POST can be used to update a site, but in fact the server decides that. POST can also be used if the server doesn't want query parameters to be shown in the address bar.
There are other methods like PATCH or DELETE that are used by some APIs.
Some important status codes (and status messages) are:
200 OK (everything went well)
204 No Content (like OK, but there is no body in the response)
400 Bad Request (something is wrong with the request)
404 Not Found (the requested file, i.e. the path, was not found on the server)
500 Internal Server Error (an error occurred while processing the request)
Every status code beginning with 1 is informational and is used to tell the client something.
If it starts with 2, everything went right.
Status codes beginning with 3 redirect the client to another location.
If it starts with 4, there is an error on the client side.
Codes starting with 5 represent an error that occurred on the server side.
TCP is a network protocol that establishes a connection with the server over a network (or the Internet) and allows two-way communication. HTTP traffic flows inside this TCP tunnel. TCP is a very useful protocol that helps keep things sane: it ensures data packets are read in the correct order and that packets that went missing during transmission are sent again.
Sometimes there will be another protocol layer between HTTP and TCP, called SSL. It is responsible for encrypting the data that travels over TCP, so that it is transmitted safely over unsafe networks. This is known as HTTPS, and it is just HTTP using this additional layer.
Although it almost always does, HTTP doesn't necessarily use TCP. UPnP requests use HTTP over UDP, a network protocol that uses standalone packets instead of a connection.
HTTP is a plain text protocol, meaning it's designed in such a way that a human can understand it without using any tools. This is very convenient for learning.
If you're using Firefox or Chrome, you can press Ctrl-Shift-C to open the Developer Tools, and under the Network tab you will see every HTTP request your browser is making, see exactly what the request was, what the server answered, etc., and get a better view of how this protocol works.
Explaining it in detail is... too extensive for this answer. But as you will see, it's not that complicated.
I am new to HTTP client and TCP/IP programming, so my question might be vague to experienced people, but please try to answer it.
I am implementing an HTTP client. After sending a request to the server, I wait for a read event (asynchronous socket), and when the read event arrives I extract the data using the read command and store it in a local buffer.
How do I know that the server has sent all the data, so that I can start processing the information?
I am confused at this stage
Well, the content can be returned all at once or in chunks. When the server knows the length of the payload beforehand, it will provide the Content-Length header in the response. But sometimes the server does not know the total length of the payload before it starts transmitting, and then it uses chunked transfer encoding.
The response from the server should contain an HTTP header field named Content-Length. You can use that length to determine the amount of data the server will send, and you are done receiving once you have read that many body bytes.
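A rough sketch of that approach in C++, assuming a blocking socket for simplicity (the asynchronous version applies the same logic to whatever each read event delivers); chunked responses, which have no Content-Length, need the chunk handling described in the previous answer instead:

// Read the header block, parse Content-Length, then read exactly that many body bytes.
// Assumes an already-connected blocking socket fd; error handling kept minimal.
#include <string>
#include <sys/types.h>
#include <sys/socket.h>

std::string read_response(int fd) {
    std::string data;
    char buf[4096];

    // 1. Read until the end of the headers ("\r\n\r\n").
    size_t header_end;
    while ((header_end = data.find("\r\n\r\n")) == std::string::npos) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) return data;               // connection closed or error
        data.append(buf, n);
    }

    // 2. Find Content-Length in the header block (real code should match case-insensitively).
    size_t body_len = 0;
    std::string headers = data.substr(0, header_end);
    size_t pos = headers.find("Content-Length:");
    if (pos != std::string::npos)
        body_len = std::stoul(headers.substr(pos + 15));

    // 3. Keep reading until the whole body has arrived.
    size_t body_start = header_end + 4;
    while (data.size() - body_start < body_len) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) break;
        data.append(buf, n);
    }
    return data;                               // headers + body
}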
I'm writing a small networking program in C++. Among other things it has to download Twitter profile pictures. I have a list (std::vector) of URLs, and I think my next step is to create a for-loop, send GET messages through the socket, and save the pictures to different png files. The problem is that when I send the very first message, receive the answer segments, and save the png data, everything seems fine. But on the very next iteration the same message, sent through the same socket, produces 0 received bytes from the recv() function. I solved the problem by adding socket creation code to the loop body, but I'm a bit confused by the socket concepts. It looks like after I send a message, the socket has to be closed and recreated to send the next message to the same server (in order to get the next image). Is this the right way to do socket programming, or is it possible to receive several HTTP response messages through the same socket?
Thanks in advance.
UPD: Here is the code with the loop where I create a socket.
// Get links from xml.
...
// Load images in cycle.
int i = 0;
for (i = 0; i < imageLinks.size(); i++)
{
    // New socket is returned from serverConnect. Why do we need to create a new one at each iteration?
    string srvAddr = "207.123.60.126";
    int sockImg = serverConnect(srvAddr);
    // Create a message.
    ...
    string message = "GET " + relativePart;
    message += " HTTP/1.1\r\n";
    message += "Host: " + hostPart + "\r\n";
    message += "\r\n";
    // Send a message.
    BufferArray tempImgBuffer = sendMessage(sockImg, message, false);
    fstream pFile;
    string name;
    // Form the name.
    ...
    pFile.open(name.c_str(), ios::app | ios::out | ios::in | ios::binary);
    // Write the file contents.
    ...
    pFile.close();
    // Close the socket.
    close(sockImg);
}
The other side is closing the connection. That's how HTTP/1.0 works. You can:
Make a different connection for each HTTP GET
Use HTTP/1.0 with the unofficial Connection: Keep-Alive
Use HTTP/1.1. In HTTP 1.1 all connections are considered persistent unless declared otherwise.
Obligatory xkcd link: Server Attention Span
From the Wikipedia article on HTTP:
The original version of HTTP (HTTP/1.0) was revised in HTTP/1.1. HTTP/1.0 uses a separate connection to the same server for every request-response transaction, while HTTP/1.1 can reuse a connection multiple times.
HTTP in its original form (HTTP 1.0) is indeed a "one request per connection" protocol. Once you get the response back, the other side has probably closed the connection. There were unofficial mechanisms added to some implementations to support multiple requests per connection, but they were not standardized.
HTTP 1.1 turns this around. All connections are by default "persistent".
To use this, you need to add "HTTP/1.1" to the end of your request line. Instead of GET http://someurl/, do GET http://someurl/ HTTP/1.1. You'll also need to make sure you provide the "Host:" header when you do this.
Note well, however, that even some otherwise-compliant HTTP servers may not support persistent connections. Note also that the connection may in fact be dropped after very little delay, a certain number of requests, or just randomly. You must be prepared for this, and ready to re-connect and resume issuing your requests where you left off.
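Put together, and applied to the downloading loop from the question, the idea looks roughly like this (the host and paths are made up, TLS is ignored, and the response reading is deliberately simplified; a real client must read each response fully, using Content-Length or chunk sizes, before sending the next request):

// Fetch several paths over ONE HTTP/1.1 connection (persistent / keep-alive).
#include <iostream>
#include <string>
#include <vector>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int connect_to(const char* host, const char* port) {
    addrinfo hints{}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0) return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0) { close(fd); fd = -1; }
    freeaddrinfo(res);
    return fd;
}

int main() {
    const char* host = "example.com";                        // made-up host
    std::vector<std::string> paths = {"/a.png", "/b.png"};   // made-up paths

    int fd = connect_to(host, "80");                         // connect ONCE
    if (fd < 0) return 1;

    for (const std::string& path : paths) {
        std::string req = "GET " + path + " HTTP/1.1\r\n"
                          "Host: " + std::string(host) + "\r\n"
                          "Connection: keep-alive\r\n"
                          "\r\n";
        send(fd, req.data(), req.size(), 0);

        // Simplified: read one buffer of the response. Real code must consume the
        // entire response (see the Content-Length sketch above) before the next GET,
        // and reconnect if the server closed the connection anyway.
        char buf[4096];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) break;
        std::cout.write(buf, n);
    }
    close(fd);
}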
See also the HTTP 1.1 RFC.
I have an HTTP server that returns large bodies in response to POST requests (it is a SOAP server). These bodies are "streamed" via chunking. If I encounter an error midway through streaming the response how can I report that error to the client and still keep the connection open? The implementation uses a proprietary HTTP/SOAP stack so I am interested in answers at the HTTP protocol level.
Once the server has sent the status line (the very first line of the response) to the client, you can't change the status code of the response anymore. Many servers delay sending the response by buffering it internally until the buffer is full. While the buffer is filling up, you can still change your mind about the response.
If your client has access to the response headers, you could use the fact that chunked encoding allows the server to add a trailer with headers after the chunked-encoded body. So, your server, having encountered the error, could gracefully stop sending the body, and then send a trailer that sets some header to some value. Your client would then interpret the presence of this header as a sign that an error happened.
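For illustration, a chunked response carrying such a trailer could look roughly like this on the wire (the X-Streaming-Error header name is made up; the chunk size is a hexadecimal byte count, and the 0 chunk marks the end of the body before the trailer):

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Trailer: X-Streaming-Error

9
<9 bytes>
0
X-Streaming-Error: 500 stream aborted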
Also keep in mind that chunked responses can contain "footers" which are just like HTTP headers. After failing, you can send a footer such as:
X-RealStatus: 500 Some bad stuff happened
Or if you succeed:
X-RealStatus: 200 OK
You can change the status code as long as response.isCommitted() returns false.
(That is for HttpServletResponse in Java; I'm sure there exists an equivalent in other languages.)