I was studying the GitHub API and came across the following in its Rate Limiting section:
For unauthenticated requests, the rate limit allows for up to 60 requests per hour. Unauthenticated requests are associated with the originating IP address, and not the user making requests.
I was curious to see what HTTP headers are used to track the limits and what happens when they are exceeded, so I wrote a bit of Bash to quickly exceed the 60 requests/hour limit:
for i in $(seq 1 200); do
    curl https://api.github.com/users/diegomacario/repos
done
Pretty quickly I got the following response:
{
"message": "API rate limit exceeded for 104.222.122.245. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"documentation_url": "https://developer.github.com/v3/#rate-limiting"
}
It seems like GitHub counts the number of requests from the public IP mentioned in the response to decide when to throttle a client. From what I understand about LANs, many devices can share this public IP. Is every device in the LAN behind this IP now rate limited because I exceeded the limit? On a side note, what other ways exist of rate-limiting unauthenticated endpoints?
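Server-side, per-IP throttling like this is commonly implemented as a fixed-window counter or token bucket keyed by client IP. A minimal fixed-window sketch in Python (the 60/hour figure matches GitHub's unauthenticated limit; all function and variable names here are illustrative, not GitHub's actual implementation):

```python
import time
from collections import defaultdict

WINDOW = 3600      # seconds per window (one hour)
LIMIT = 60         # requests allowed per window, per client IP

# window start time and request count, keyed by client IP
_buckets = defaultdict(lambda: [0.0, 0])

def allow_request(ip, now=None):
    """Return True if this request is within the limit for `ip`."""
    now = time.time() if now is None else now
    start, count = _buckets[ip]
    if now - start >= WINDOW:       # window expired: start a fresh one
        _buckets[ip] = [now, 1]
        return True
    if count < LIMIT:
        _buckets[ip][1] = count + 1
        return True
    return False                    # over the limit: caller should send 429
```

A real deployment would keep these counters in something shared like Redis rather than process memory, and would report the remaining quota back to clients in headers (GitHub uses X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset).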
TCP can already detect whether a packet was sent successfully, so instead of waiting for the pong, why not just check whether there's an error when the ping is sent? I just don't see the need for pong.
Having ping and pong creates an end-to-end test for both connectivity and a functional endpoint at the other end.
Using just TCP only confirms that the TCP stack says the packet was delivered to the next stop in a potential connectivity chain; it does not confirm that the other endpoint is actually functioning (only that the packet was delivered to its TCP stack).
This is particularly important when there are proxies or other intermediaries in the networking chain between endpoints, which is very often the case in professionally hosted environments. Only a ping and pong confirms that the entire end-to-end chain, including both client and server, is fully functioning.
Here's a related answer: WebSockets ping/pong, why not TCP keepalive?
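The end-to-end check described above is usually implemented as a heartbeat: record when a ping goes out, and treat the peer as dead if no pong comes back within a timeout. A minimal, transport-agnostic sketch (class and method names are illustrative):

```python
class Heartbeat:
    """Tracks liveness of a peer via ping/pong timestamps."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_ping_sent = None   # when we last sent a ping
        self.last_pong_seen = None   # when the peer last answered

    def on_ping_sent(self, now):
        self.last_ping_sent = now

    def on_pong_received(self, now):
        self.last_pong_seen = now

    def is_alive(self, now):
        # Never pinged yet: nothing to judge the peer by.
        if self.last_ping_sent is None:
            return True
        # A pong after our last ping proves the full round trip worked.
        if self.last_pong_seen is not None and self.last_pong_seen >= self.last_ping_sent:
            return True
        # Still waiting: only declare the peer dead once the timeout elapses.
        return now - self.last_ping_sent < self.timeout
```

The point is that only the pong exercises the whole chain: the remote application has to run code to answer, which no TCP-level delivery report can prove.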
I'm designing a web API that will give clients (other apps) an opportunity to push a work request, immediately receive an id for that work request, and later receive the result of that work request.
What is the typical approach for such kind of interactions?
Since providing the result of a work request reminds me of server-push interaction, I thought about SSE (server-sent events) and WebSocket technologies, inclining toward WebSockets (as the client may use the same connection for all kinds of requests and receive all kinds of responses). Is it a good choice for my goal? And how can this be scaled?
The question is about whether WebSocket technology suits the described approach to API design and, if not, what a better approach would be.
A webSocket connection is very well suited for receiving results back at some indeterminate time in the future and it would be a recommended way to do this.
Other requests from the client to server can either be ajax calls or sent as webSocket messages, mostly depending upon whether there are other reasons to make the requests as ajax calls or not. If you already have an established webSocket connection, then it is a convenient, easy and fast way to communicate with the server.
Taking the individual parts of what you're doing:
Pushing some work request (from client to server).
This can be done equally well via Ajax or webSocket. If there was no other reason to have an already established webSocket connection, then this would traditionally be an Ajax call.
receive immediately some id of that work request
This is actually a little easier to do with an Ajax request because Ajax is a request/response protocol so if you send the work request via Ajax, it would be trivial to get the ID back as the response to that Ajax request. You could also do it via webSocket, but webSocket is just a messaging protocol. When sending the work request to the server, you could send it via a webSocket (as mentioned previously). And, the server could then immediately send back the work ID, but the client would have to develop some way to correlate the work ID coming back with the previously sent request since those two messages would not have any natural connection to one another. One way that correlation could be done is to have the client generate a temporary ID or hash value when sending the initial request (it can literally be anything that is unique for that client such as a timestamp) and then the server would send that same temporary ID back when it sends the work ID. All this is trivial with a request/response protocol like HTTP/Ajax.
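The correlation trick just described can be sketched like this: the client attaches a temporary id to each outgoing message and keeps a table of pending requests, so that a reply carrying the same id can be matched back to its request (all names here are illustrative):

```python
import itertools

class WsClient:
    """Matches webSocket replies to requests via a correlation id."""

    def __init__(self, send):
        self._send = send              # function that transmits a message dict
        self._next_id = itertools.count(1)
        self._pending = {}             # correlation id -> callback

    def request(self, payload, on_reply):
        corr_id = next(self._next_id)  # unique per client is enough
        self._pending[corr_id] = on_reply
        self._send({"corr_id": corr_id, "payload": payload})

    def on_message(self, msg):
        # Route the reply back to whoever sent the matching request.
        callback = self._pending.pop(msg["corr_id"], None)
        if callback is not None:
            callback(msg["payload"])
```

The server's only obligation is to echo `corr_id` back unchanged when it sends the work id; everything else stays client-side.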
later receive the result of that work request
HTTP Polling, webSocket or SSE could all be used. Polling is obviously not particularly efficient. I know a webSocket would work perfectly for this and it would provide an open conduit for any other items the server wants to send to the client in a push fashion. SSE can also be used to solve this problem (pushing data to a client) though I don't personally have any experience with it.
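Whichever push transport is chosen, the server side of the whole flow reduces to: hand back an id immediately, remember how to reach the client, and push when the work finishes. A minimal in-memory sketch (names are illustrative; `push` stands in for a webSocket send or SSE write):

```python
class JobServer:
    """Accepts work, returns an id immediately, pushes the result later."""

    def __init__(self):
        self._next = 0
        self._subscribers = {}   # job id -> push callback for that client

    def submit(self, work, push):
        self._next += 1
        job_id = self._next
        self._subscribers[job_id] = push
        return job_id            # returned to the client right away

    def complete(self, job_id, result):
        # Called when the background worker finishes; push to the client.
        push = self._subscribers.pop(job_id)
        push({"job_id": job_id, "result": result})
```

Scaling this out mostly means moving the id-to-client mapping into shared storage so that whichever server instance finishes the job can route the result to the instance holding the client's connection.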
So anywhere I read anything about UDP, people say this:
Messages can be received out of order
It's possible a message never arrives at all
The first one isn't clear to me. This is what can happen with TCP:
I send 1234, client receives 12 and then 34
So no problem; just prepend the message length and it's all good. After all, an integer is always 4 bytes, so even if the client receives the prepended length in 2 goes, it will know to keep reading until it has at least 4 bytes to know the msg length.
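The length-prefix framing described above can be sketched like this: buffer the TCP byte stream and slice out complete messages, using a 4-byte big-endian length as the question assumes (function names are illustrative):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack(">I", len(payload)) + payload

def deframe(buffer: bytes):
    """Split a byte stream into complete messages plus leftover bytes."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + length:
            break                        # message not fully received yet
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer
```

Because TCP may split the stream anywhere, the receiver simply keeps the leftover bytes and appends the next recv() to them before trying again.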
Anyway, back to UDP: what's the deal when people say "packets can be received out of order"?
A) Send `1234`, client receives `34` and then `12`
B) Send `1234` and `5678`, client receives `5678` and then `1234`
If it's A, I don't see how I can make UDP work for me at all. How would the client ever know what's what?
It's entirely possible that a network has many paths to reach a given point, so one datagram could take one route to reach the other end and another could take a different path. Given this, the last packet sent could arrive before an earlier one. UDP takes no measures to correct this, as it has no notion of a connection or of in-order delivery.
At this point it depends on how you send your data. For UDP, each send() or similar call sends one UDP datagram, and recv() receives one datagram. A datagram can be reordered with respect to other datagrams, or disappear entirely. Data cannot be reordered or dropped within a datagram: you either receive exactly the message that was sent, or you don't receive it at all.
If you need datagrams/messages to arrive in order, you need to add a sequence number to your packets, queue and reorder them at the receiving end.
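That sequence-number fix can be sketched like this: tag each datagram with a number, buffer anything that arrives early, and release messages only in order (an illustrative sketch, not a real socket; lost datagrams would additionally need retransmission, which is omitted here):

```python
class Reorderer:
    """Delivers messages in sequence order, buffering early arrivals."""

    def __init__(self):
        self.expected = 0
        self.buffer = {}   # seq -> payload, for datagrams that arrived early

    def receive(self, seq, payload):
        """Accept one datagram; return the messages now deliverable in order."""
        self.buffer[seq] = payload
        delivered = []
        while self.expected in self.buffer:
            delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return delivered
```

This answers case B from the question directly: datagram 1 is held back until datagram 0 fills the gap, then both come out in order.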
The usual metaphor is:
TCP is a telephone conversation: words arrive in the same order as they were spoken.
UDP is sending a series of letters by mail: the letters may get lost, and those that do arrive can arrive in any order.
TCP also involves a connection: if the telephone line is disrupted by a thunderstorm, the connection breaks and has to be built up again (you need to dial again).
UDP is connectionless and unreliable: if the mailman is hit by a truck, some letters may be lost. Some letters could also be picked up and delivered by other mailmen. Letters can even be dropped on the floor if your mailbox is full, or for no reason at all.
I need to draw a time-space diagram of a client connecting to the server, then requesting data, and the server sending x bytes of data then closing the connection.
First, I am not sure exactly how many trips back and forth there would be, I am thinking:
Client requests connection
Server accepts
Client sends ACK
Client requests data
Server sends x bytes of data
Client sends ACK
Server closes connection
Client sends ACK
Is that correct?
Also, I need to specify SEQ, ACK numbers and SYN/ACK/FIN bits, I get the first part but what are the SYN/ACK/FIN "bits"?
I found a nice website which can help you out: http://www.pcvr.nl/tcpip/tcp_conn.htm. The diagram there shows a three-way handshake and a disconnection at the end. Notice the changing SEQ and ACK numbers as the client and server exchange packets. FIN is the trigger for a disconnect; it can be sent by either the client or the server.
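The SEQ/ACK arithmetic on that diagram follows a simple rule: each side picks an initial sequence number (ISN), and every ACK acknowledges the other side's sequence number plus the data length, with SYN and FIN each counting as one sequence unit. A worked sketch with made-up ISNs (the function name is illustrative):

```python
def handshake(client_isn, server_isn):
    """Return the three-way handshake segments as (flags, seq, ack) tuples."""
    return [
        ("SYN",     client_isn,     0),               # client -> server
        ("SYN/ACK", server_isn,     client_isn + 1),  # the SYN consumed one seq
        ("ACK",     client_isn + 1, server_isn + 1),  # client -> server
    ]
```

The SYN, ACK, and FIN "bits" themselves are just single-bit flags in the TCP header's flags field that mark a segment as a connection request, an acknowledgement, or a close request.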
Following setup: two servers, one with the (Rails) web application and the other which actually sends the emails to the internet through postfix. That means any emails created by the web application get sent to the email server, which processes them again.
Now, this meant that emails went out with an email address like "user#webserver.localdomain", which promptly led to their rejection by the target mail servers, due to the obviously missing MX record.
That one I fixed, though, with smtp_generic_maps, rewriting the sender address to a valid one.
However, the sender name displayed in the email consists of two parts, and the first part seems to be set automatically by postfix from the username of the webserver process creating the email, in this case "nginx".
So, how do I rewrite the displayed user name in addition to the email address? Can anyone point me in the right direction, please?
In my defense: I did not set up this system myself, so I'm a bit of a beginner at all things sendmail.
Easy: connect via TCP/IP to 127.0.0.1 port 25 and submit the mail using SMTP. That way you can set the from address to whatever you want. Currently you are submitting mail via the sendmail command, which picks up the from address from the user running it.
ps. sendmail != postfix
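Submitting via SMTP as suggested puts both the display name and the address under your control. A sketch using Python's standard library (addresses and subject are made up for illustration; the send function is shown but would obviously need a reachable postfix on localhost):

```python
import smtplib
from email.message import EmailMessage

def build_message():
    msg = EmailMessage()
    # Display name and address are both set explicitly here, instead of
    # being derived from the local unix user (e.g. "nginx").
    msg["From"] = "My App <noreply@example.com>"
    msg["To"] = "user@example.com"
    msg["Subject"] = "Hello"
    msg.set_content("Sent via SMTP, not the sendmail binary.")
    return msg

def send(msg):
    # Submit to the local postfix over SMTP on port 25.
    with smtplib.SMTP("127.0.0.1", 25) as smtp:
        smtp.send_message(msg)
```

In a Rails application the equivalent is configuring ActionMailer's SMTP delivery method and setting the from header per message, rather than relying on the sendmail binary.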