I'm working with TcpStream. The basic structure of my code is:
loop {
    if /* new data in the stream */ { /* handle it */ }
    /* do a lot of other stuff */
}
So set_timeout() appears to be what I need, but I'm a little puzzled about how it works. The documentation says:
This function will set a timeout for all blocking operations (including reads and writes) on this stream. The timeout specified is a relative time, in milliseconds, into the future after which point operations will time out. This means that the timeout must be reset periodically to keep it from expiring.
So I would expect to have to reset the timeout each time before checking whether new data is available; otherwise I would only get Err(TimeOut) after some time.
But that appears not to be the case: if I set a very low timeout (like 10 ms) once and for all, the loop does exactly what I want. It returns new data if there is some and returns Err(TimeOut) if there is none.
Am I misunderstanding the documentation? Is it safe for me to rely on this behavior?
I would have expected it to work like a socket timeout, the kind most operating systems expose as a socket property and programming languages make available through SO_TIMEOUT or similar options. With such a socket timeout the timer starts whenever you begin a blocking operation on the socket, such as read, write, or connect. Either the operation succeeds within the time frame, or the timer fires and the operation fails with a timeout. The timeout is a property of the socket and not of the operation, so there is no need to set it again before each operation.
But according to the documentation Rust implemented something completely different. If I interpret the documentation correctly, they don't set a timeout per operation but instead set a deadline for all operations of this type on the socket. That is, when the timer is set to 10 seconds you can have multiple reads within that time, but if a read is still active after 10 seconds it will be stopped.
If you are used to working with socket timeouts in other languages this behavior is not what you expect, and it looks like the Rust developers have similar objections to this (experimental) API. In https://github.com/rust-lang/rust/issues/15802 they suggest renaming these kinds of functions from set..timeout to set..deadline so that the name reflects the behavior.
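If you do want the per-operation behavior you describe, the deadline has to be re-armed on every pass through the loop, which is what the documentation means by "must be reset periodically". A rough sketch only; it uses the experimental set_timeout the question is about, and the exact signature and error type may differ from what is shown here:

loop {
    // Re-arm the deadline so this read gets its own 10 ms window.
    stream.set_timeout(Some(10));

    match stream.read(buf) {
        Ok(n)  => { /* handle the n new bytes */ }
        Err(e) => { /* a TimedOut error means no data arrived in time;
                       anything else is a real error */ }
    }

    /* do a lot of other stuff */
}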
Given an asynchronous IMFSourceReader connected to a synchronous-only IMFTransform.
For the IMFSourceReaderCallback::OnReadSample() callback, is it a good idea not to call IMFTransform::ProcessInput directly within OnReadSample, but instead to push the produced sample onto another queue for another thread to call the transform's ProcessInput on?
Or would I just be replicating work that source readers typically do internally anyway? Or, put another way, does work within OnReadSample run the risk of blocking further decoding work within the source reader that could otherwise have happened more asynchronously?
So I am suggesting something like:
WorkQueue transformInputs;
...

// Called back asynchronously by the source reader
HRESULT OnReadSampleCallback(... IMFSample* sample)
{
    // Push the sample and return immediately
    Push(transformInputs, sample);
    return S_OK;
}

// Different worker thread, woken for samples queued in transformInputs
void OnTransformInputWork()
{
    // Transform object is not async capable
    transform->ProcessInput(0, Pop(transformInputs), 0);
    ...
}
This is touched on, but not elaborated on, under 'Implementing the Callback Interface' here:
https://learn.microsoft.com/en-us/windows/win32/medfound/using-the-source-reader-in-asynchronous-mode
Or is it completely dependent on whatever the source reader sets up internally and not easily determined?
It is not a good idea to perform a long blocking operation in IMFSourceReaderCallback::OnReadSample. Nothing fatal or serious will happen, but this is not the intended usage.
Taking into consideration your previous question about audio format conversion, though, audio sample data conversion is fast enough to happen in such a callback.
Also, while it is not clearly documented (it depends on the actual implementation), ProcessInput is often close to instant and only takes a reference to the input data; ProcessOutput is what is computationally expensive in this case. If you don't do ProcessOutput right there in the same callback, you might run into a situation where the MFT no longer accepts input, and then you'd have to implement a queue anyway.
With all this in mind, either do the processing right in the callback and accept the performance impact (assuming your processing is not too heavy), or go ahead and implement the queue.
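For illustration, the "do it in the callback" variant amounts to feeding the MFT and immediately draining it inside OnReadSample, roughly like the fragment below. This is only a sketch: it assumes the MFT allocates its own output samples (MFT_OUTPUT_STREAM_PROVIDES_SAMPLES), and m_transform, m_sourceReader and DeliverDownstream are placeholder names, not anything from the question.

// Inside IMFSourceReaderCallback::OnReadSample, after a sample has arrived.
HRESULT hr = m_transform->ProcessInput(0, sample, 0);
if (SUCCEEDED(hr))
{
    // Drain every output sample the MFT can produce for this input.
    for (;;)
    {
        MFT_OUTPUT_DATA_BUFFER output = {};
        DWORD status = 0;

        hr = m_transform->ProcessOutput(0, 1, &output, &status);
        if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT)
            break;                      // nothing more until the next input sample
        if (FAILED(hr))
            break;                      // real failure: report it and stop

        DeliverDownstream(output.pSample);  // hand the converted sample on
        if (output.pSample) output.pSample->Release();
        if (output.pEvents) output.pEvents->Release();
    }
}
// Ask for the next sample so the reader keeps calling back.
m_sourceReader->ReadSample(MF_SOURCE_READER_FIRST_AUDIO_STREAM, 0, NULL, NULL, NULL, NULL);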
I'd like to have a TCP connection for a gaming application. Being time efficient is important: I want to receive many objects with low latency. It's also important to be CPU efficient because of the load.
So far, I can make sure handleConnection is called every time a connection is dialed, using Go's net library. However, once the connection is created, I have to poll (check over and over again whether new data is ready on the connection). This seems inefficient: I don't want a check for new data to needlessly suck up CPU.
I was looking for something such as the following two options but didn't find what I was looking for.
(1) Do a read operation that somehow blocks (without sucking CPU) and then unblocks when new stuff is ready on the connection stream. I could not find that.
(2) Do an async approach where a function is called when new data arrives on the connection stream (not just when a new connection is dialed). I could not find that.
I don't want to put any sleep calls in here because that will increase the latency of responding to single messages.
I also considered dialing out for every single message, but I'm not sure if that's efficient or not.
So I came up with the code below, but it's still doing a whole lot of checking for new data with the Decode(p) call, which does not seem optimal.
How can I do this more efficiently?
func handleConnection(conn net.Conn) {
    dec := gob.NewDecoder(conn)
    p := &P{}
    for {
        result := dec.Decode(p)
        if result != nil {
            // do nothing
        } else {
            fmt.Printf("Received : %+v", p)
            fmt.Println("result", result, "\n")
        }
    }
    conn.Close()
}
You say:
So I came up with the code below, but it's still doing a whole lot of checking for new data with the Decode(p) call.
Why do you think that? The gob decoder will issue a Read to the conn and wait for it to return data before figuring out what it is and decoding it. This is a blocking operation, and will be handled asynchronously by the runtime behind the scenes. The goroutine will sleep until the appropriate io signal comes in. You should not have to do anything fancy to make that more performant.
You can trace this yourself in the code for decoder.Decode.
I think your code will work just fine. CPU will be idle until it receives more data.
Go is not Node. Every API is "blocking" for the most part, but that is not as much of a problem as on other platforms. The runtime manages goroutines very efficiently and delegates the appropriate signals to sleeping goroutines as needed.
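For completeness, here is a minimal sketch of the handler once you trust Decode to block; the only real change from your version is leaving the loop on error so the connection actually gets closed (it assumes the same P type and listener setup as your code, plus the io and log imports):

func handleConnection(conn net.Conn) {
    defer conn.Close()

    dec := gob.NewDecoder(conn)
    for {
        p := &P{}
        // Decode blocks here; the goroutine sleeps until data arrives.
        if err := dec.Decode(p); err != nil {
            if err != io.EOF {
                log.Println("decode error:", err)
            }
            return // peer closed the connection or it broke
        }
        fmt.Printf("Received: %+v\n", p)
    }
}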
I have a small IDE for a modeling language I wrote, implemented in PyQt/PySide, and I am trying to implement a code navigator that lets you jump to different sections of the file being edited.
The current implementation is: (1) connect to QPlainTextEditor.textChanged, (2) any time a change is made, (sloppily) parse the file and update the navigator pane
It seems to work OK, but I'm worried this could cause major performance issues for large files on slower systems, in particular if more stuff is connected to textChanged in the future.
My question: Has anybody here implemented a delayed reaction to events, so that multiple events (e.g. keystrokes) within a short period only trigger a single update (say, once per second)? And is there a proper Qt way of doing this?
Thanks,
Michael
You can try using timers if you want some "delay".
There are two ways to use them (with different results).
One is to parse only after no input has occurred for a certain amount of time, as in the sketch below.
NOTE: I only know C++ Qt, but I assume the same things are valid for PyQt, so this is kind of "pseudocode"; I hope you get the concept though.
QTimer timer; // member variable somewhere
timer.setSingleShot(true); // only fire once per start()
// MyClass is whatever object owns the slots below
connect(&timer, &QTimer::timeout, this, &MyClass::OnTimerDone);

void OnTextChanged(...)
{
    timer.start(500); // (re)start the 500 ms countdown
}

void OnTimerDone(...)
{
    DoStuff(...);
}
Starting the timer again on every input restarts it, so as long as input keeps arriving before it expires, the timeout signal is never emitted. Once no input has happened for that amount of time, the timer fires and you parse the file.
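Since the question is about PyQt, the same idea looks roughly like this in Python (a sketch assuming PyQt5; PySide only differs in the imports):

from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QPlainTextEdit

class Editor(QPlainTextEdit):
    def __init__(self, parent=None):
        super().__init__(parent)
        self._parse_timer = QTimer(self)
        self._parse_timer.setSingleShot(True)             # fire once per start()
        self._parse_timer.timeout.connect(self._reparse)
        self.textChanged.connect(self._on_text_changed)

    def _on_text_changed(self):
        self._parse_timer.start(500)                      # restart the 500 ms countdown

    def _reparse(self):
        pass  # parse the document and update the navigator pane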
The second option would be to have a periodic timer running (setSingleShot(false)).
Start the timer with an interval of, say, one second, and timeout will be emitted once per second. You can combine that with a flag that you set to true when the input changes and back to false when the file is parsed, so that you skip parsing when nothing has changed.
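A sketch of that variant, under the same assumptions as above:

# in __init__ of the same Editor class:
self._dirty = False
self._parse_timer = QTimer(self)
self._parse_timer.timeout.connect(self._maybe_reparse)
self._parse_timer.start(1000)                         # periodic: fires every second
self.textChanged.connect(self._mark_dirty)

def _mark_dirty(self):
    self._dirty = True

def _maybe_reparse(self):
    if not self._dirty:
        return
    self._dirty = False
    # parse the document and update the navigator pane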
In C++ Qt you won't have to worry about multi-threading because the slot is called in the GUI thread. I assume it is the same for Python, but you should probably check this.
I've written a Qt console application to try out QSemaphores and noticed some strange behavior. Consider a semaphore with 1 resource and two threads getting and releasing a single resource. Pseudocode:
QSemaphore sem(1); // init with 1 resource available

thread1()
{
    while(1)
    {
        if ( !sem.tryAcquire(1 resource, 1 second timeout) )
        {
            print "thread1 couldn't get a resource";
        }
        else
        {
            sem.release(1);
        }
    }
}

// basically the same thing
thread2()
{
    while(1)
    {
        if ( !sem.tryAcquire(1 resource, 1 second timeout) )
        {
            print "thread2 couldn't get a resource";
        }
        else
        {
            sem.release(1);
        }
    }
}
Seems straightforward, but the threads will often fail to get a resource. A way to fix this is to put the thread to sleep for a bit after sem.release(1). What this tells me is that release() does not let other threads waiting in tryAcquire() access the semaphore before the current thread loops around to the top of while(1) and grabs the resource again.
This surprises me because similar testing with QMutex showed proper behavior... i.e. another thread hanging out in QMutex::tryLock(timeout) gets notified properly when QMutex::unlock() is called.
Any ideas?
I'm not able to fully test this out or find all of the supporting links at the moment, but here are a few observations...
First, the documentation for QSemaphore.tryAcquire indicates that the timeout value is in milliseconds, not seconds. So your threads are only waiting 1 millisecond for the resource to become free.
Secondly, I recall reading somewhere (I unfortunately can't remember where) a discussion about what happens when multiple threads try to acquire the same resource simultaneously. Although the behavior may vary by OS and situation, the typical result seems to be a free-for-all, with no thread given any more priority than another. As such, a thread waiting to acquire a resource has just as much chance of getting it as a thread that has just released it and immediately tries to reacquire it. I'm unsure whether the priority setting of a thread would affect this.
So, why might you get different results for a QSemaphore versus a QMutex? Well, I think a semaphore may be a more complicated system resource that would take more time to acquire and release than a mutex. I did some simple timing recently for mutexes and found that on average it was taking around 15-25 microseconds to lock or unlock one. In the 1 millisecond your threads are waiting, this would be at least 20 cycles of locking and unlocking, and the odds of the same thread always reacquiring the lock in that time are small. The waiting thread is likely to get at least one bite at the apple in the time that it is waiting, so you won't likely see any acquisition failures when using mutexes in your example.
If, however, releasing and acquiring a semaphore takes much longer (I haven't timed them but I'm guessing they might), then it's more likely that you could just by chance get a situation where one thread is able to keep reacquiring the resource repeatedly until the wait condition for the waiting thread runs out.
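So the first thing to try is simply passing the timeout in milliseconds, and optionally yielding after the release to give the waiting thread a better chance at the freed resource. A minimal sketch of the loop (workerLoop and name are placeholder names, not from your code):

#include <QSemaphore>
#include <QThread>
#include <QDebug>

QSemaphore sem(1); // one resource available

void workerLoop(const char *name)
{
    while (true)
    {
        // The timeout is in milliseconds: wait up to a full second, not 1 ms.
        if (!sem.tryAcquire(1, 1000))
        {
            qDebug() << name << "couldn't get a resource";
        }
        else
        {
            sem.release(1);
            // Give the other thread a chance to grab the freed resource
            // before this thread loops around and reacquires it.
            QThread::yieldCurrentThread();
        }
    }
}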
I'm using Perl (which hopefully shouldn't affect anything), but I need to know how I can set a timeout for the connect operation. The problem is I can't wait forever for the connect operation to happen. If it doesn't happen within a few seconds, I'd rather give up and move on.
use Socket;  # for PF_INET, SOCK_STREAM, SOL_SOCKET, sockaddr_in, ...

socket(my $sock, PF_INET, SOCK_STREAM, (getprotobyname('tcp'))[2]);
setsockopt($sock, SOL_SOCKET, SO_SNDTIMEO, 10); # send timeout
print "connecting...\n";
connect($sock, sockaddr_in(80, scalar gethostbyname('lossy.host.com')));
print "connected...\n";
The problem is, if the connection to "lossy.host.com" is "lossy" or slow or anything but fast, I'd rather give up than make the user wait. (Think of it as a side-effect to a program that does something else... the user probably doesn't expect this script to communicate with a server somewhere...).
Threading Case: How would you interrupt the connect()? Would you just detach the thread and forget about it?
You can use fcntl to set the socket to be non-blocking, then select with a timeout, waiting for it to become writable (which is how a completed connect shows up). If it doesn't become writable before the timeout, you can close it at that point.
I know how to do this in C, but not perl, otherwise I'd give you an example. The perlfunc manpage says that all of these functions exist and a cursory read seems to say they'll work like you want.
Edit: sorry, missed the part where perlfunc says they may not be available on non-Unix systems, and indeed, fcntl isn't available on win32. There is an IO::Socket library that you can use that will do the right thing on Windows though.
Here's sample code that works for me (on Linux, anyway):
#!/usr/bin/perl
use IO::Socket::INET;
use IO::Select;

# Non-blocking connect: new() returns immediately.
$sock = IO::Socket::INET->new('PeerAddr' => 'lossy.host.com',
                              'PeerPort' => 80,
                              'Blocking' => 0 );

$sel = IO::Select->new( $sock );

# Wait up to 10 seconds for the socket to become writable,
# which is how a completed connect shows up.
@writes = $sel->can_write(10);

if ( $sock->connected ) {
    print "socket is connected\n";
} else {
    print "socket not connected after however long\n";
    $sock->close;
}
You could spawn a separate thread to do it, and then do a timed wait for a result. If you don't receive a result in an appropriate amount of time, give up waiting and just let the thread continue. It will eventually time out, or you might be able to kill the thread.
To answer the initial question, I don't think there's a way to change the connect() timeout, at least not through a sockets API. On Windows, I wouldn't be surprised if there's a registry key you could change that would affect it, but I don't know what it would be.
If you end up doing the threaded case wherein you detach the connecting thread without killing it, beware the following: Windows only lets you have a maximum of 10 pending outgoing TCP connections (the 11th will block until one of the pending ones times out).
This was the cause of much frustration for me. I think MS put this in to prevent botnets from spreading or something. I don't think there's any way to switch it off either.