sleep or wait() - wait

Suppose that you have two threads using synchronized methods to share a buffer, one method for writing to the buffer and one for reading from it. If the reader thread finds the buffer empty, explain which it would be more appropriate for the reader to use: sleep or wait.

Sounds a lot like homework, so I will only give a hint.
Take a look at how locks are managed during wait and sleep; the javadoc for Object.wait and Thread.sleep will explain the difference.

Clearly homework, so a hint:
Ask yourself: if you were to choose sleep, how long should you sleep for? What happens if you choose a timeout that's too small, and what happens if it's too large?
Conversely, how long does it take for the buffer to be filled? Is it ok for the application to buffer data for a short time or does it need that data ASAP?
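For concreteness, a minimal sketch of the guarded-wait pattern those hints point at (Buffer, put and take are illustrative names, not anything from the question):

class Buffer {
    private String data = null;

    public synchronized void put(String value) throws InterruptedException {
        while (data != null) {
            wait(); // buffer full: release the monitor until a reader drains it
        }
        data = value;
        notifyAll(); // wake any reader blocked in take()
    }

    public synchronized String take() throws InterruptedException {
        while (data == null) {
            wait(); // buffer empty: releases the monitor so put() can run
        }
        String value = data;
        data = null;
        notifyAll(); // wake any writer blocked in put()
        return value;
    }
}

Had the reader used Thread.sleep() instead, it would have kept holding the monitor for the whole nap (sleep releases no locks), so the writer could never enter put() to fill the buffer.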

Related

How can I make a program wait in OCaml?

I'm trying to make a Tetris game in OCaml, and I need to have a piece move through the graphics screen at a certain speed.
I think the best way to do it is to make a recursive function that draws the piece at the top of the screen, waits half a second or so, clears that piece from the screen, and redraws it 50 pixels lower. I just don't know how to make the program wait. I think you can do it using the Unix module, but I don't know how.
Let's assume you want to take a fairly simple approach, i.e., an approach that works without multi-threading.
Presumably when your game is running it spends virtually all its time waiting for input from the user, i.e., waiting for the user to press a key. Most likely, in fact, you're using a blocking read to do this. Since the user can take any amount of time before typing anything (up to minutes or years), this is incompatible with keeping the graphical part of the game up to date.
A very simple solution then is to have a timeout on the read operation. Instead of waiting indefinitely for the user to press a key, you can wait at most (say) 10 milliseconds.
To do this, you can use the Unix.select function. The simplest way to do this is to switch over to using a Unix file descriptor for your input rather than an OCaml channel. If you can't figure out how to make this work, you might come back to StackOverflow with a more specific question.
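A minimal sketch of that idea, assuming input arrives on a Unix file descriptor such as stdin (poll_key is a made-up name; a real game would also put the terminal in non-canonical mode):

let poll_key () =
  (* Block until stdin is readable, or give up after the 0.01 s timeout. *)
  let readable, _, _ = Unix.select [Unix.stdin] [] [] 0.01 in
  match readable with
  | [] -> None                          (* timeout: no key pressed *)
  | _ ->
    let buf = Bytes.create 1 in
    let _ = Unix.read Unix.stdin buf 0 1 in
    Some (Bytes.get buf 0)

Your game loop can call poll_key on each iteration and move the piece down whenever enough time has elapsed, instead of blocking indefinitely on a read.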

I don't understand what exactly the function bytesToWrite() does (Qt)

I searched for bytesToWrite in the docs, and this is what I found: "For buffered devices, this function returns the number of bytes waiting to be written. For devices with no buffer, this function returns 0."
First, what does "buffered devices" mean? And can anyone please explain what exactly this function does and where or how I can use it?
Many IO devices are buffered, which means that data isn't sent straight away, but it is accumulated to be sent in bulk when there is a sufficient amount.
This is done essentially for better performance: sending data normally has some fixed overhead (at the very least the syscall overhead), which is well amortized when sending data in bulk, but which would have to be paid for each write if no buffering were used.
(Notice that here we are only talking about QIODevice buffers; normally there are also all kinds of kernel-mode buffers and buffers internal to the hardware devices themselves.)
bytesToWrite tells you how much stuff is in the QIODevice write buffer, i.e. how many bytes you wrote that are waiting to be actually written (as in, given to the OS to write).
I never actually had to use that member, but I suppose it could be useful, e.g., in a producer-consumer scenario (if the write buffer drops below some threshold, you calculate the next chunk of data to send), to manually handle buffering in some places, or even just for debugging/logging purposes.
It's actually very useful when you're using an asynchronous API.
You can, for example, use it inside a bytesWritten() slot to tell whether the buffer is empty and the data has been fully written or not.
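A sketch along those lines (socket is assumed to be a connected QTcpSocket member; sendNextChunk is a hypothetical producer function):

connect(socket, &QIODevice::bytesWritten, this, [this](qint64 /*written*/) {
    if (socket->bytesToWrite() == 0) {
        // The QIODevice write buffer has drained: everything we wrote has
        // been handed to the OS, so it is safe to produce the next chunk.
        sendNextChunk();
    }
});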

TCP Connection slows down after 10,000 packets

I am using the Qt implementation of the TCP stack to control a robot. We exchange short messages (<200 bytes) and have a round-trip time of about 8 ms. After maybe 10,000 packets in each direction, the connection slows down and I have to wait about 1 s for the answer to my packet. If I restart my program and reconnect, I again get the 8 ms RTT.
To me it sounds like some kind of buffer is filling up, but I haven't worked with TCP much, so maybe someone could give me a hint.
The problem is in the code that you're not showing. Likely the slot that gets executed on readyRead() is not emptying the buffer.
It is acceptable for the buffer not to be completely empty, say when you're reading complete lines/packets.
It is not acceptable for the buffer size to be constantly growing.
At the end of your reading slot, check whether bytesAvailable() is non-zero. It can only be non-zero in the first case, a partial line or packet waiting for more data. Even then, you should be able to place an upper bound on it, say some small multiple of the packet size or the maximum line length. If the bound is ever exceeded, you've got a bug in your code.
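A sketch of such a slot, assuming a fixed frame size (Controller, PACKET_SIZE and handlePacket are placeholders for whatever framing your protocol uses):

void Controller::onReadyRead()
{
    // Drain every complete packet that has arrived so the buffer cannot grow.
    while (socket->bytesAvailable() >= PACKET_SIZE) {
        const QByteArray packet = socket->read(PACKET_SIZE);
        handlePacket(packet);
    }
    // Whatever remains can only be a partial frame, never a backlog.
    Q_ASSERT(socket->bytesAvailable() < PACKET_SIZE);
}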
This is just a wild guess, but a common catch with Qt sockets is that you need to delete the socket object yourself (for example with deleteLater()) on error and disconnection.
Example code:
connect(socket, SIGNAL(disconnected()), socket, SLOT(deleteLater()));
The event loop will then remove the socket the next time it gets the chance.
QTcpSocket and QAbstractSocket objects don't delete themselves on close() or when they go out of use (otherwise their signals and slots would stop working).

Long time between submit and start time of a command in OpenCL

I'm running a kernel on a big array. When I profile the clEnqueueNDRangeKernel command, the execution time (end - start) is 0.001 ms, but the time between submit and start (start - submit) is around 120 ms, which varies with the size of the input data. What happens between the time a command is submitted and the time it starts executing? Is it reasonable to get such a large time?
OpenCL operates asynchronously. That is to say that when you ask for a piece of work to be done, it may not happen at that time. It will happen at some time in the future. This is a little weird, especially when you start profiling things, but it works like this so that the CPU can queue up lots of work for the OpenCL device, and then go do something else while the work is done.
For example:
clEnqueueWriteBuffer(...);                      // returns as soon as the copy is queued
clEnqueueNDRangeKernel(...);                    // returns as soon as the launch is queued
clEnqueueReadBuffer(..., /* blocking_read = */ CL_TRUE, ...);  // blocks until the data is back
Here, the writeBuffer and the NDRangeKernel calls will probably appear to take very small amounts of time. All they do is record what needs to be done. The blocking readBuffer will take a long time, because it has to wait for the results: the write and the kernel execution have to complete before the read can even start.
Now the read might be very small, but because it's waiting for everything before it to finish, the time it appears to take depends on the amount of work in the commands before it.
I don't quite understand what you're measuring from your question, but I expect what you're seeing is this effect. The time for work is being charged to other functions because they have to wait for previous work to finish.
Knowing which functions cause the CPU to wait on the GPU is one of the big tricks when it comes to writing high-performance code. Any time you introduce a wait like this, the CPU stops doing any useful work, and the GPU is likely to go idle whilst the CPU prepares the next lump of work. Sometimes there's no alternative, and you just have to wait.
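To see where the time actually goes, you can read the event profiling counters. A sketch, assuming queue, kernel and global_size already exist and the queue was created with CL_QUEUE_PROFILING_ENABLE:

cl_event evt;
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL, 0, NULL, &evt);
clWaitForEvents(1, &evt);   /* timestamps are valid only after completion */

cl_ulong submit, start, end;
clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_SUBMIT, sizeof(submit), &submit, NULL);
clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START, sizeof(start), &start, NULL);
clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END, sizeof(end), &end, NULL);
/* (start - submit) covers driver scheduling and any transfers the runtime
   performs before launch, which is why it can grow with the input size. */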

Why are Asynchronous processes not called Synchronous?

So I'm a little confused by this terminology.
Everyone refers to "Asynchronous" computing as running different processes on separate threads, which gives the illusion that these processes are running at the same time.
This is not the definition of the word asynchronous.
a⋅syn⋅chro⋅nous
–adjective
1. not occurring at the same time.
2. (of a computer or other electrical machine) having each operation started only after the preceding operation is completed.
What am I not understanding here?
It means that the two threads are not running in sync, that is, they are not both running on the same timeline.
I think it's a case of computer scientists being too clever about their use of words.
Synchronisation, in this context, would suggest that both threads start and end at the same time. Asynchrony in this sense, means both threads are free to start, execute and end as they require.
The word "synchronous" implies that a function call will be synchronized with some other event.
Asynchronous implies that no such synchronization occurs.
It seems like the definition that you have there should really be the definition for "concurrent," or something. That definition looks wrong.
PS:
Here is the wiktionary definition:
asynchronous
Not synchronous; occurring at different times.
(computing, of a request or a message) allowing the client to continue during processing.
Which just so happens to be the exact opposite of what you posted.
I believe that the term was first used for synchronous vs. asynchronous communication. There synchronous means that the two communicating parts have a common clock signal that they run by, so they run in parallel. Asynchronous communication instead has a ready signal, so one part asks for data and gets a signal back when it's available.
The term was then adapted to processes, but as there are obvious differences, some aspects of the term work differently. For a single-threaded process the natural way to request that something be done is to make a synchronous call that transfers control to the subprocess; control is returned when it's done, and the process continues.
An asynchronous call works just like asynchronous communication in that you send a request for something to be done, and the process doing it returns a signal when it's done. The difference in usage is that for processes it's the asynchronous case in which the processes run in parallel, while for communication it is the synchronous case that runs in parallel.
So "computer or electrical machine" is really too wide a scope for a correct definition of the term, as it's used in slightly different ways for different technologies.
I would guess it's because they are not synchronized ;)
In other words... if one process gets stopped, killed, or is waiting for something, the other will carry on
I think there's a slant that is slightly different to most of the answers here.
Asynchronous means "not happening at the same time".
In the specific case of threading:
Synchronous means "execute this code now".
Asynchronous means "enqueue this work on a different thread, to be executed at some indeterminate time in the future".
This usually allows you to "do two things at once" because of reasons like:
One thread is just waiting (e.g. for data to arrive on a serial port), so it is asleep.
You have multiple processors, so the two threads can run concurrently.
However, even with 128 processor cores, the case is the same: the work will be executed "at some time in the future" (perhaps the very near future) rather than "now".
Your second definition is more helpful here:
2. [...] having each operation started only after the preceding operation is completed.
When you make an asynchronous call, that call might not be completed before the next operation is started. When the call is synchronous, it will be.
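A minimal C++ illustration of that difference (compute is a stand-in for any piece of work):

#include <future>
#include <iostream>

int compute() { return 42; }

int main() {
    int a = compute();  // synchronous: the next line runs only after compute() returns

    // Asynchronous: compute() starts on another thread; this statement
    // finishes before the work is necessarily complete.
    std::future<int> f = std::async(std::launch::async, compute);

    std::cout << a + f.get() << "\n";  // get() re-synchronizes: it blocks until the result is ready
}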
It really means that an asynchronous event happens independently of other events, whereas a synchronous event happens dependent on other events.
It's like: Flammable, Inflammable (which mean the same thing).
Seriously -- it's just one of those quirks of the English language. It doesn't really make sense. You can try to explain it, but it would be just as easy to justify the reverse meanings.
Many of the answers here are not correct. IN-dependently has a prefix that says NOT dependently, just like A-synchronous, but the meanings of dependent and synchronous are not the same! :D
So three dependent persons would wait for an order, because they are dependent on the order, but they wait, so they are not synchronous.
In English and any other language with common roots for a, syn and chrono (Italian: asincrono; Spanish: asincrónico; French: asynchrone; Greek: a = not, syn = together, chronos = time) it means exactly the opposite.
The terminology is UTTERLY counter-intuitive. Async functions ARE synchronous, they happen at the same time, and that's their power. They DO NOT wait, they DO NOT depend, they DO NOT hold the user waiting, but all those NOTs refer to anything but synchronicity :)
The only answer that is possibly right is the CLOCK one, although it is still confusing. My personal interpretation is this story:
"A professor has an office, and he makes SYNCHRONOUS CALLS for students to come. He announces in the main university hall: 'Hey, anyone who wants to talk to me should come at 10 tomorrow morning,' or simply puts up a sign saying the same thing.
RESULT: at 10 in the morning you see a long queue. Everyone had the same time, so they came in at the same moment and got piled up in the process.
So the professor thinks it would be nice for students not to waste time in the queue (and to do synchronous operations, that is, do parallel stuff in their lives at the same time, and that's where the confusion comes from).
He decides that students can substitute for him in making ASYNCHRONOUS CALLS: every time a student finishes talking with him, that student calls in the next one, while the others wait in a room doing whatever they like in the meantime. So the students do not share a single SYNCHRONOUS CALL (10 in the morning, the same time for all); they are called at 10:00, 10:10, 10:18, 10:27, etc., according to the time needed for each discussion in the professor's office."
Is that the meaning of having the same clock, @Guffa?
