I'm currently requesting data, sent back as XML, via the url() function, passing parameters in the URL like so:
asset.data <- url("http://www.blah.com/?parameter=value", open = "r")
Each request constitutes a vector of length ~10,000. I've had some problems with the request timing out when looped (I'm calling data for about 500 "assets"). I've set the timeout in options to something high (600 seconds, or ten minutes) but still notice that the loop will stop if a call takes longer than 60 seconds or so (definitely less than the 10 minutes I've defined). I feel like I must be missing something about how the connection timeout works - any advice here?
I am recording how long each request takes by capturing Date.now() before and after the request.
I am doing this because the built-in metric for the response time only records the time taken for the FIRST REQUEST and not for any redirects that it follows.
My method was working fine until I started using the rps option.
The rps option throttles how many requests per second are sent.
The problem that this is causing is that my manual calculations are going up even though the HTTP_REQ_DURATION is roughly the same.
I presume this is because of the RPS throttle, i.e. it is WAITING, and this waiting causes my Date.now() calculation to go up - which is not an accurate reflection of what is happening.
How can I calculate the total time taken for a response to a request including all redirects when I am using the rps option?
I'd advise against using the RPS option; use an arrival-rate executor instead, for example constant-arrival-rate.
Alternatively, you can set the maxRedirects option to 0 so k6 doesn't handle redirects itself. Then, when you handle the redirects yourself, you can get the Response object for each of the requests, not just the last one. You can then sum their Response.timings.duration (or whatever you care about) and record the result in your custom metric; it will not contain any artificial delays caused by --rps.
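The summing idea above can be sketched language-agnostically. Here is a minimal Python model of a redirect chain, where each hop tuple (status, location, duration) stands in for what you would pull from each k6 Response object; the chain data is made up for illustration:

```python
def total_duration(hops):
    """Sum the per-hop duration of a redirect chain until the final response."""
    total = 0.0
    for status, location, duration_ms in hops:
        total += duration_ms
        # A hop is only followed if it is a 3xx with a Location target.
        if not (300 <= status < 400 and location):
            break
    return total

# Example chain: two redirects, then a 200.
chain = [(301, "/a", 12.0), (302, "/b", 8.5), (200, None, 40.0)]
print(total_duration(chain))  # 60.5
```

Because only the measured durations are summed, any pause the --rps throttle inserts between hops never enters the total.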
My problem is the following. I am making a GET request which returns a stream and after some time I get my desired data but the webserver does not close the connection so I want to close it on my side. My code is as follows:
using HTTP
HTTP.open(:GET, "https://someurl.com", query = Dict(
    "somekey" => "somevalue"
)) do io
    for i = 1:4
        println("DATA: ---")
        @show io
        println(String(readavailable(io)))
    end
    @info "Close read"
    closeread(io)
    @info "After close read"
    @show io
end
println("At the end!")
However, I never reach the last line. I have tried dozens of different approaches by consulting the docs of HTTP.jl, but none worked for me. I suspect that is because this webserver is not sending Connection: close, but I have not been able to find an example that closes the connection manually/forcefully on the client side.
Interesting note: when running this from the REPL, closing the connection by hitting Ctrl-C a couple of times, and then rerunning the script, it hangs forever. I then have to wait some random amount of time, from seconds to minutes, before I can run it again "successfully". I suspect this has to do with the stale connection not being closed properly.
As is evident I am neither very proficient in networks programming nor julia, so any help would be highly appreciated!
EDIT: I suspect I was not quite clear enough about the behaviour of the webserver and what I want to do, so I will try to break it down as simply as possible: I want to read responses from the webserver until I detect a certain keyword. After that I want to close the connection - the webserver would keep on sending me data, but I already have everything I am interested in, so I don't want to wait another few minutes for the webserver to close the connection for me!
Your code assumes that you will get the data in exactly four calls to readavailable, which might not be true depending on the buffer state. Instead, your loop should be:
while !eof(io)
    println("DATA: ---")
    println(String(readavailable(io)))
end
In your case the connection gets stuck because you try to read four chunks of data: perhaps you receive everything in the first chunk, and then the connection blocks.
On top of that, if you are using the do syntax you should not close the resource yourself - it will be closed automatically at the end of the block.
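Separately, the "read until I see a keyword, then stop" goal from the question's edit can be modelled without HTTP.jl at all. A minimal Python sketch against an in-memory stream (the stream contents and keyword are made up; a real HTTP body reader would take the stream's place):

```python
import io

def read_until_keyword(stream, keyword: bytes, chunk_size: int = 8):
    """Accumulate chunks until `keyword` appears, or until EOF, then return the data."""
    buf = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:          # EOF: the server closed the connection
            break
        buf += chunk
        if keyword in buf:     # got what we came for; stop reading early
            break
    return buf

stream = io.BytesIO(b"noise noise DONE data we never wait for")
data = read_until_keyword(stream, b"DONE")
print(b"DONE" in data)  # True
```

The point is that the reader decides when it is done; whatever follows the keyword is simply never requested, so you never wait on the server.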
I have a program that needs to do something exactly every hour. The catch is that the time needs to be relative to the remote server, which is not synchronised with a time server and is, in fact, about 6 seconds ahead (!). There is no way for me to change that server.
All I have is access to the HEAD headers of the web server, which include a handy Date field (that's how I found out about the discrepancy).
Question: regardless of the language (I use nodeJS, but that's not the point), what would you do to calculate a precise offset between my server and the remote server?
I am especially worried about network latency: I have the following variables:
Local server time
Time when request was sent
Time when the response with the Date header arrived
Remote server time
However, the remote server time was generated when the server received the request -- something that might have taken up to 1 second. And, the time when the response arrived needs to take into account the time it took to receive it...
Right now I am compensating with (time response arrived - time request was sent) / 2. However, it feels lame.
Is there a better, established way to deal with this?
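For reference, the halved-round-trip idea is essentially the classic NTP-style estimate: treat the remote timestamp as having been taken at the midpoint of the round trip. A minimal sketch using the variables listed above (the numeric example values are made up):

```python
def clock_offset(t_sent, t_remote, t_arrived):
    """Estimated remote-minus-local clock offset, in seconds.

    t_sent    = local time when the request was sent
    t_remote  = remote time from the Date header
    t_arrived = local time when the response arrived
    """
    # Model the Date header as generated mid-round-trip.
    return t_remote - (t_sent + t_arrived) / 2.0

# Request sent at local 100.0 s, response arrived at 100.5 s,
# remote Date header said 106.25 s -> remote is ~6 s ahead.
print(clock_offset(100.0, 106.25, 100.5))  # 6.0
```

The estimate is exact only when the outbound and return legs take equal time; asymmetric latency is its irreducible error, which is why averaging several samples (as the answer below this suggests) helps.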
Hmm, I know this kind of problem, though I never had the limitation of not being able to change one of the 2 'actors'. I would say the approximation (time response arrived - time request was sent) / 2 feels OK. If you care more about it, you could experiment with the approximation in a 'benchmark' kind of way:
don't make one synchronization request but make 10 in sequence, then discard the 3 smallest and the 3 largest offsets and average the remaining 4
or:
don't make one synchronization request but make a burst of 10 in 10 different threads; this should theoretically eliminate the client-side (local side) time it takes to create the request, and should block (if it blocks) on the server side (or remote side in your case). But this would involve some math and I think it's too much trouble for the value
P.S. the number 10 is arbitrary (and hopefully the remote server doesn't ban/block you for making too many requests :)
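One reading of the "10 requests, drop the extremes, average the rest" suggestion, as a Python sketch (the sample offsets are made up; in practice each would come from one synchronization request):

```python
def trimmed_mean(offsets, trim=3):
    """Drop the `trim` smallest and `trim` largest samples, average the rest."""
    kept = sorted(offsets)[trim:-trim]
    return sum(kept) / len(kept)

# Ten offset estimates in seconds; the 4.9 and 7.5 outliers get discarded.
samples = [5.8, 6.0, 6.0, 6.2, 5.9, 7.5, 6.0, 4.9, 6.0, 6.1]
print(trimmed_mean(samples))  # 6.0
```

Trimming before averaging keeps a single request that hit unusual network latency from skewing the offset.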
I have the following code that reads from a QTCPSocket:
QString request;
while (pSocket->waitForReadyRead())
{
    request.append(pSocket->readAll());
}
The problem with this code is that it reads all of the input and then pauses at the end for 30 seconds. (Which is the default timeout.)
What is the proper way to avoid the long timeout and detect that the end of the input has been reached? (An answer that avoids signals is preferred because this is supposed to be happening synchronously in a thread.)
The only way to be sure is when you have received the exact number of bytes you are expecting. This is commonly done by sending the size of the data at the beginning of the data packet. Read that first and then keep looping until you get it all. An alternative is to use a sentinel, a specific series of bytes that mark the end of the data but this usually gets messy.
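The length-prefix idea is language-agnostic; here is a minimal Python sketch (a 4-byte big-endian prefix is an assumption - any agreed fixed-size encoding works), with a fake one-byte-at-a-time reader standing in for the socket:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def read_frame(read):
    """`read(n)` returns up to n bytes, like a socket read; loop until a full frame arrives."""
    def read_exact(n):
        buf = b""
        while len(buf) < n:
            chunk = read(n - len(buf))
            if not chunk:
                raise EOFError("connection closed mid-frame")
            buf += chunk
        return buf
    (length,) = struct.unpack(">I", read_exact(4))
    return read_exact(length)

# Simulate a socket that delivers the data one byte at a time.
stream = [frame(b"hello")]
def dribble(n):
    chunk, stream[0] = stream[0][:1], stream[0][1:]
    return chunk
print(read_frame(dribble))  # b'hello'
```

Because the reader knows exactly how many bytes to expect, it stops the moment the frame is complete - no timeout is ever hit.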
If you're dealing with a situation like an HTTP response that doesn't contain a Content-Length, and you know the other end will close the connection once the data is sent, there is an alternative solution.
Use socket.setReadBufferSize to make sure there's enough read buffer for all the data that may be sent.
Call socket.waitForDisconnected to wait for the remote end to close the connection
Use socket.bytesAvailable as the content length
This works because a close of the connection doesn't discard any buffered data in a QTcpSocket.
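The same read-to-close pattern outside Qt, as a minimal Python sketch: accumulate until the peer closes, then the accumulated length is your content length. A fake recv (returning canned chunks, then the empty bytes that signal close) stands in for the socket:

```python
def read_until_close(recv):
    """Accumulate data until recv returns b'', i.e. the peer closed the connection."""
    buf = b""
    while True:
        chunk = recv(4096)
        if not chunk:   # empty read = remote end closed
            break
        buf += chunk
    return buf

chunks = [b"HTTP body ", b"in pieces", b""]
def fake_recv(n):
    return chunks.pop(0)
print(read_until_close(fake_recv))  # b'HTTP body in pieces'
```

This only works when the protocol guarantees the server closes after sending, exactly the situation the answer above describes.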
I have written an online Brainfuck interpreter. The problem is that when I take the text input, it gives an error:
HTTP response was too large: 10485810. The limit is: 10485760.
It seems the max limit of GAE is 1 MB. How can I get around it?
Look again. The limit is 10 MiB.
This is not a limitation in the HTTP protocol, so the limitation is in the server platform that you are using (which you haven't specified in your question).
That's more data than you would reasonably send to the browser, so you clearly have an infinite loop that sends data until the buffer is full.
You can get around the limit by turning off buffering, but that will not remove the problem. Instead your code will just loop until the browser crashes from the huge response.
Optimise your interpreter. Whatever BF input you have, you really should not exceed the 10 MB response limit.