HTTP follows the request-response model, i.e. for every request from a client there will be a response from the server.
Does there exist any protocol that follows only request model (there will be only requests from client)?
I know SMTP. Can I consider SMTP as request only model because we are sending the mail but not receiving any response from server?
If there exists any other such protocol, please explain about it. I googled it but didn't find any answer related to my specific query.
Can I consider SMTP as request only model because we are sending the mail but not receiving any response from server?
No, because for every command the client sends to the SMTP server, the client gets a response. SMTP is a lot chattier than HTTP: just to send a single email, an SMTP client and server communicate with each other at least 5 times.
This is how SMTP works:
Client: EHLO yourdomain.com
Server: 250 smtp.gmail.com
Client: MAIL FROM: you@yourdomain.com
Server: 250 Ok
Client: RCPT TO: someone@gmail.com
Server: 250 Ok
Client: DATA
Server: 354 Start mail input; end with <CRLF>.<CRLF>
Client: Hey how are you?
Client: .
Server: 250 Ok
Client: QUIT
Server: 221 smtp.gmail.com Closing connection. Goodbye!
As you can see, this is also a request-response protocol.
But the Wikipedia page that you linked says that SMTP is a one-way protocol, which is wrong.
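To make the exchange above concrete, here is a minimal sketch in Python. The toy SMTP server, its greeting text, and the addresses are all invented for illustration, but the point it demonstrates is real: every command the client sends gets back a coded reply.

```python
import smtplib
import socket
import threading

# Toy SMTP server (hypothetical, loopback only) that answers each command
# with the usual status codes, just to make the dialogue visible.
def smtp_server(srv):
    conn, _ = srv.accept()
    f = conn.makefile("rb")
    conn.sendall(b"220 toy.example ready\r\n")
    while True:
        line = f.readline()
        if not line:
            break
        cmd = line.decode().strip().upper()
        if cmd.startswith(("EHLO", "HELO")):
            conn.sendall(b"250 toy.example\r\n")
        elif cmd.startswith(("MAIL", "RCPT")):
            conn.sendall(b"250 Ok\r\n")
        elif cmd.startswith("DATA"):
            conn.sendall(b"354 End data with <CRLF>.<CRLF>\r\n")
            while f.readline().strip() != b".":
                pass  # consume the message body until the terminating dot
            conn.sendall(b"250 Ok\r\n")
        elif cmd.startswith("QUIT"):
            conn.sendall(b"221 Bye\r\n")
            break
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=smtp_server, args=(srv,), daemon=True).start()

client = smtplib.SMTP("127.0.0.1", srv.getsockname()[1])
code, reply = client.ehlo("yourdomain.com")   # EHLO -> coded 250 reply
client.sendmail("you@yourdomain.com", "someone@gmail.com", "Hey how are you?")
client.quit()                                 # QUIT -> 221 reply
srv.close()
print(code)                                   # 250
```

Notice that `smtplib` blocks on every command until it has read the server's status line; that round trip is exactly the request-response pattern shown in the transcript.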
Does there exist any protocol that follows only request model (there will be only requests from client)?
Most protocols exist to exchange data, which is why they follow the request-response model. The client wants to see a page, so it requests the page from the server, and the server sends the page in response. Data is exchanged between the server and the client.
But if you want you can write a protocol which is one way only.
Suppose when we request a resource over HTTP, we get a response as shown below:
GET / HTTP/1.1
Host: www.google.co.in
HTTP/1.1 200 OK
Date: Thu, 20 Apr 2017 10:03:16 GMT
...
But when a browser requests many resources at a time, how can it identify which request got which response?
when a browser requests many resources at a time, how can it identify which request got which response?
A browser can open one or more connections to a web server in order to request resources. For each of those connections the rules regarding HTTP keep-alive are the same and apply to both HTTP 1.0 and 1.1:
If HTTP keep-alive is off, the request is sent by the client, the response is sent by the server, the connection is closed:
Connection 1: [Open][Request1][Response1][Close]
If HTTP keep-alive is on, one "persistent" connection can be reused for succeeding requests. The requests are still issued serially over the same connection, so:
Connection 1: [Open][Request1][Response1][Request3][Response3][Close]
Connection 2: [Open][Request2][Response2][Request4][Response4][Close]
With HTTP pipelining, introduced in HTTP 1.1 (and disabled by default in most browsers because of buggy servers), a browser can issue requests back to back without waiting for responses, but the responses are still returned in the same order as the requests.
This can happen simultaneously over multiple (persistent) connections:
Connection 1: [Open][Request1][Request2][Response1][Response2][Close]
Connection 2: [Open][Request3][Request4][Response3][Response4][Close]
Both approaches (keep-alive and pipelining) still utilize the default "request-response" mechanism of HTTP: each response will arrive in the order of the requests on that same connection. They also have the "head of line blocking" problem: if [Response1] is slow and/or big, it holds up all responses that follow on that connection.
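The keep-alive case can be observed directly. Here is a minimal sketch in Python (the handler, paths, and response bodies are invented for the demo): two requests travel serially over one persistent connection, and the responses come back in request order.

```python
import http.client
import http.server
import threading

# Hypothetical local HTTP/1.1 server; "HTTP/1.1" enables keep-alive by default.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = ("hello from " + self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# One persistent connection: [Request1][Response1][Request2][Response2]
conn = http.client.HTTPConnection("127.0.0.1", srv.server_port)
conn.request("GET", "/first")        # Request1
r1 = conn.getresponse().read()       # Response1 must be fully read first
conn.request("GET", "/second")       # Request2 reuses the same TCP connection
r2 = conn.getresponse().read()       # Response2 arrives in order
print(r1, r2)                        # b'hello from /first' b'hello from /second'
conn.close()
srv.shutdown()
```

Because the responses arrive strictly in request order, `http.client` can pair each response to its request with no identifier at all, which is exactly the point made above.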
Enter HTTP 2 multiplexing (see: What is the difference between HTTP/1.1 pipelining and HTTP/2 multiplexing?). Here, a response can be fragmented, allowing a single TCP connection to transmit fragments of different requests and responses intermingled:
Connection 1: [Open][Rq1][Rq2][Resp1P1][Resp2P1][Resp2P2][Resp1P2][Close]
It does this by giving each fragment an identifier to indicate to which request-response pair it belongs, so the receiver can recompose the message.
I think you are really asking about HTTP pipelining here. This is a technique introduced in HTTP/1.1, through which all requests are sent out by the client in order and answered by the server in that very same order. All the gory details are now in RFC 7230, sec. 6.3.2.
HTTP/1.0 had (or has) a comparable mechanism known as keep-alive. This allows a client to issue a new request right after the previous one has been answered. The benefit of this approach is that client and server no longer need to negotiate another TCP handshake for each new request/response cycle.
The important part is that in both methods the order of the responses matches the order of the issued requests over one connection. Therefore, responses can be uniquely mapped to the issuing requests by the order in which the client receives them: the first response matches the first request, the second response matches the second request, and so forth.
I think the answer you are looking for is TCP.
HTTP is a protocol that relies on TCP to establish a connection between the client and the host.
In HTTP/1.0, a different TCP connection is created for each request/response pair.
HTTP/1.1 introduced pipelining, which allowed multiple request/response pairs to reuse a single TCP connection to boost performance (it didn't work very well).
So the request and the corresponding response are linked by the TCP connection they rely on, which makes it easy to associate a specific request with the response it produced.
PS: HTTP is not bound to use TCP forever. For example, Google is experimenting with other transport protocols, like QUIC, that might end up being more efficient than TCP for the needs of HTTP.
In addition to the explanations above, consider that a browser can open many parallel connections, usually up to 6, to the same server. For each connection it uses a different socket, and for each request-response pair on each socket it is easy to determine the correlation.
In each TCP connection, request and response are sequential. A TCP connection can be re-used after finishing a request-response cycle.
With HTTP pipelining, a single connection can be multiplexed for multiple overlapping requests.
In theory, there can be any number[*1] of simultaneous TCP connections, enabling parallel requests and responses.
In practice, the number of simultaneous connections is usually limited on the browser and often on the server as well.
[*1] The number of simultaneous connections is limited by the number of ephemeral TCP ports the browser can allocate on a given system. Depending on the operating system, ephemeral ports start at 1024 (RFC 6056), 49152 (IANA), or 32768 (some Linux versions).
So, that may allow up to 65,535 - 1023 = 64,512 TCP source ports for an application. A TCP socket connection is defined by its local port number, the local IP address, the remote port number and the remote IP address. Assuming the server uses a single IP address and port number, the limit is the number of local ports you can use.
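The four-part identity of a connection is easy to see in code. A small sketch (loopback server; the ephemeral ports are chosen by the OS, not by us): two sockets to the same server address necessarily differ in their local port, because the other three parts of the tuple are identical.

```python
import socket
import threading

# Loopback listener that simply accepts two connections and keeps them open.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(2)
accepted = []
threading.Thread(
    target=lambda: [accepted.append(srv.accept()) for _ in range(2)],
    daemon=True,
).start()

addr = ("127.0.0.1", srv.getsockname()[1])
c1 = socket.socket()
c1.connect(addr)
c2 = socket.socket()
c2.connect(addr)

# Same local IP, same remote IP and port -- only the local ephemeral ports
# differ, and that is what makes the two connections distinct.
print(c1.getsockname(), c2.getsockname())
```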
I am working on a SIP client against an Asterisk server. I am using TCP connections.
The client side is Zoiper, as a first test.
Registration and outbound calls work as expected, but when I test incoming calls 3-4 minutes after the registration process or an outgoing call, I get this message on the server:
tcptls.c:446 ast_tcptls_client_start: Unable to connect SIP socket to ip:port: Connection timed out
The invite message (incoming call) never gets on the client (Zoiper softphone).
Why is this error showing up?
My assumption is that this happens because neither the client nor the server sends keep-alive messages, so after the TCP socket times out, the client, which is behind a NAT, is no longer reachable from the server side.
This error occurs because your NAT (or your 3G connection, if you use 3G) drops the connection. As a result, there is no way to use the same connection anymore.
The correct behaviour for your app is to send a SIP OPTIONS message; if it times out, perform the registration again.
And yes, you need to send keepalives (the recommended method is the OPTIONS message), or set up keepalives on the Asterisk side and have your client answer them correctly.
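A sketch of the recommended OPTIONS keepalive message. All the values here (domain, user, Via branch, tag, Call-ID) are invented placeholders; a real client must generate unique ones per request as RFC 3261 requires, and would send the resulting bytes over the already-registered TCP connection.

```python
def build_options(user="1001", domain="asterisk.example",
                  branch="z9hG4bKhypothetical", tag="hypothetical1"):
    # Hypothetical placeholder values: in a real client, the Via branch,
    # From tag and Call-ID must be freshly generated for each request.
    return (
        f"OPTIONS sip:{domain} SIP/2.0\r\n"
        f"Via: SIP/2.0/TCP client.example;branch={branch}\r\n"
        "Max-Forwards: 70\r\n"
        f"From: <sip:{user}@{domain}>;tag={tag}\r\n"
        f"To: <sip:{domain}>\r\n"
        "Call-ID: keepalive-1@client.example\r\n"
        "CSeq: 1 OPTIONS\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

msg = build_options()
print(msg.splitlines()[0])   # OPTIONS sip:asterisk.example SIP/2.0
```

If no 200 OK comes back within your timeout, treat the NAT binding as dead and re-REGISTER, as the answer above suggests.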
I used this series of AT commands to be able to connect and transmit/receive data with my laptop via the SIM900D GSM/GPRS module.
https://stackoverflow.com/questions/2...icrocontroller
A TCP connection does not end until either of the two endpoints decides to terminate it. On the other hand, when I tried to connect to our webserver (e.g. AT+CIPSTART="TCP","www.mydomain.com","80"),
the connection is established, but if the GPRS module does not immediately send any data, the webserver soon terminates the TCP connection. If I try sending the URL (e.g. PUT /send.php?g0=21 HTTP/1.1\r\nHost: dlsu-ect.com\r\n\r\n$1A\r), the webserver receives the data, but it terminates the connection right after that transmission. Transparent mode therefore only permits one transmission per TCP connection.
Am I doing it right? Is my way of transmitting the data to our webserver the right way for transparent mode?
If your request is correctly saving data to the server, you can try to add a header which requests that the server does not close the connection immediately after responding to the first request.
Try sending:
PUT /send.php?g0=21 HTTP/1.1\r\n
Host: dlsu-ect.com\r\n
Connection: Keep-alive\r\n
Content-length: 3\r\n
\r\n
$1A\r\n
\r\n
That should keep your connection alive, depending on the server configuration. Some servers do not allow keep-alive connections, so this might not work. You could also test keep-alive using telnet before attempting it on your SIM900.
Also note that the timeout between requests differs a lot between servers. Some only allow a few seconds between requests.
I have a web-service-based application in which the web server runs inside the application on a particular port. Recently, in the production environment, I noticed the application sending a RST packet to the client, resetting the connection.
After analyzing the TCP dump, I observed that the TCP four-way connection closure does not complete properly. After the application web server sends a response to the client, the application sends a FIN packet and receives an ACK, but the client never initiates a FIN of its own; instead, another request packet arrives. At that point the application sends a RST packet to the client, since it was expecting a FIN, and the request packet is lost.
I believe this is normal/expected behavior of the web server application and needs to be fixed on the client side.
Please comment on the above scenario. Your comments will be much appreciated.
Thanks in advance
The client is ignoring the end-of-stream (EOS) condition on the socket and continuing to write, so it gets a 'connection reset by peer'. This is basically an application protocol error: either the client shouldn't be sending another request on the same connection, or the server should be looking for it instead of closing the connection after the first response.
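This failure mode can be reproduced locally. A sketch in Python (loopback server, invented payloads): the server answers one request and closes, the client writes past the EOS anyway, and a later write fails with a reset-style error. The exact exception class and timing can vary by OS, which is why the sleeps are there.

```python
import socket
import threading
import time

# "Server" that answers exactly one request and then closes (sends its FIN),
# mirroring the web server in the question.
def one_shot_server(srv):
    conn, _ = srv.accept()
    conn.recv(1024)
    conn.sendall(b"response 1")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=one_shot_server, args=(srv,), daemon=True).start()

cli = socket.socket()
cli.connect(("127.0.0.1", srv.getsockname()[1]))
cli.sendall(b"request 1")
first = cli.recv(1024)           # the one response the server sends
time.sleep(0.2)                  # let the server's FIN arrive (the EOS)

error = None
try:
    cli.sendall(b"request 2")    # writing past EOS provokes a RST from the peer
    time.sleep(0.2)
    cli.sendall(b"request 3")    # writing after the RST raises the error
except OSError as exc:           # BrokenPipeError / ConnectionResetError
    error = exc
print(first, type(error).__name__)
```

A well-behaved client would stop after `first` (or, as the answer says, the server should keep the connection open and read the next request instead of closing).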