I want to read characters or strings from a serial port with Qt 4.8.6 and display them in a QTextBrowser, so I call the following functions (textBrowser is an object of QTextBrowser):
connect(com, SIGNAL(readyRead()), this, SLOT(readSerialPort()));
connect(textBrowser, SIGNAL(textChanged()), SimApplianceQtClass, SLOT(on_textBrowser_textChanged()));
void SimApplianceQt::on_textBrowser_textChanged()
{
ui.textBrowser->moveCursor(QTextCursor::End);
}
void SimApplianceQt::readSerialPort()
{
QByteArray temp = com->readAll();
ui.textBrowser->insertPlainText(temp);
}
However, the characters are never displayed correctly in the textBrowser. The input strings are always cut into smaller pieces and displayed on multiple lines. For example, the string "0123456789" may be displayed as (on multiple lines):
01
2345
6789
How to deal with this issue? Many thanks.
What happens is that the readyRead signal is fired not after everything has been received, but after something has been received and is ready to read.
There is no guarantee that everything will have arrived or is readable by the time you receive the first readyRead.
This is a common "problem" for almost any kind of IO, especially if the data is larger than very few bytes. There is usually no automatic way to know when all the data has been received.
There are a few possible solutions:
All of them will require you to put the data in a buffer in readSerialPort() instead of adding it directly to the text browser. Maybe a simple QByteArray member variable in SimApplianceQt would already do the trick in your case.
The rest depends on the exact solution.
If you have access to the sender of the data, you could send the number of bytes that will follow before sending the actual string. This size must always be sent as a fixed-size integer type (for example, always a quint32). Then, in readSerialPort(), you would first read that size, and then keep reading bytes into your buffer until everything has been received. Only then would you print it. I'd recommend that one. It is also what is used in almost all cases where this problem arises.
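The length-prefix idea can be sketched like this (a Python sketch with hypothetical class and method names; your Qt slot would do the same with a QByteArray member variable):

```python
import struct

class LengthPrefixedBuffer:
    """Accumulates raw bytes and yields complete messages framed by a
    4-byte big-endian length header (the quint32 mentioned above)."""

    def __init__(self):
        self.buf = b""

    def feed(self, data):
        """Call this with whatever bytes arrived in one readyRead;
        returns a list of complete messages, possibly empty."""
        self.buf += data
        messages = []
        while True:
            if len(self.buf) < 4:
                break  # the header itself has not fully arrived yet
            (size,) = struct.unpack(">I", self.buf[:4])
            if len(self.buf) < 4 + size:
                break  # the message body has not fully arrived yet
            messages.append(self.buf[4:4 + size])
            self.buf = self.buf[4 + size:]
        return messages
```

Feeding the fragments "01", "2345", "6789" after a 10-byte header produces nothing until the last fragment arrives, at which point the whole "0123456789" comes out in one piece.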
If you have access to the sender of the data, you could send some kind of "ending sequence" at the end of the string. In your readSerialPort(), you would then continue to read bytes into your buffer until you receive that ending sequence. Once the ending sequence has been received, you can print everything that came in before it. Note that the ending sequence itself could be split across multiple reads, so you'd have to take care of that, too.
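A sketch of the ending-sequence variant (Python, hypothetical names; the sentinel choice is an assumption and must be something your data cannot contain). Keeping everything in one buffer handles the case where the sentinel is split across two reads:

```python
class SentinelBuffer:
    """Accumulates raw bytes and yields messages terminated by a
    sentinel byte sequence."""

    def __init__(self, sentinel=b"\r\n"):
        self.sentinel = sentinel
        self.buf = b""

    def feed(self, data):
        # Search the whole buffer each time, so a sentinel whose first
        # half arrived in a previous read is still found once its
        # second half shows up.
        self.buf += data
        messages = []
        while True:
            i = self.buf.find(self.sentinel)
            if i < 0:
                break
            messages.append(self.buf[:i])
            self.buf = self.buf[i + len(self.sentinel):]
        return messages
```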
If you do not have access to the sender, the best idea I can come up with is to work with a timer. You put everything into a buffer and restart that timer each time readSerialPort() is called. When the timer runs out, no new data has arrived for a while and you can probably print what you have so far. This is... risky, and I wouldn't recommend it if there is any other way.
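The timer idea, reduced to its logic (a Python sketch with an injected clock so it can be shown without a real timer; in Qt you would restart a single-shot QTimer inside readSerialPort() and flush in its timeout slot):

```python
class IdleFlushBuffer:
    """Flushes the buffer once no new data has arrived for `timeout`
    seconds. Timestamps are passed in explicitly for clarity."""

    def __init__(self, timeout=0.2):
        self.timeout = timeout
        self.buf = b""
        self.last = None

    def feed(self, data, now):
        self.buf += data
        self.last = now  # restart the "timer"

    def poll(self, now):
        """Returns buffered data if it has been idle long enough,
        otherwise None."""
        if self.buf and now - self.last >= self.timeout:
            out, self.buf = self.buf, b""
            return out
        return None
```

The risk is exactly what the caveat above says: a sender that pauses longer than the timeout mid-message will cause a premature flush.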
Related
I use insertPlainText() to insert data into a QTextBrowser in the slot function, but it seems to cause lag, and eventually no response, as the data grows. When I add '\n' at the end of the data to simulate append(), the lag disappears. But I don't want to add a new line; how can I solve this problem?
I tried to use qApp->processEvents() after the insertPlainText(), but it causes a crash.
I tried to start a timer to run qApp->processEvents() to refresh the UI, but it didn't solve the problem.
Should I start a new thread to receive serial port data? The inserted (i.e. received) data is not big, but the total data in the browser is big, so receiving the data should not cost much time.
insertPlainText() performs poorly on my machine (i7, 16 GB). It takes about 100 ms to insert data once the total data length reaches about 4096 bytes. I tried the QScintilla open-source widget, which is better but still not perfect. So maybe using insertPlainText() is the wrong approach.
I changed my approach. I use a QByteArray to store all the data and setText() to display the most recent 4096 bytes. In effect I divide the data into pages and display the most recent page. This solves the problem of storing so much data. One remaining issue is that 4096 bytes cannot fill the screen when I maximize my application. It doesn't look good, but showing more data makes the response slow because the app refreshes at a high frequency.
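The paging approach described above boils down to keeping the full log but only handing the tail to the widget (a Python sketch, hypothetical names; `current_page()` is what would go to setText()):

```python
PAGE = 4096  # bytes shown at once, per the approach described above

class PagedLog:
    """Stores all received data but exposes only the most recent page."""

    def __init__(self):
        self.data = bytearray()

    def append(self, chunk):
        # Appending to a bytearray is cheap even when the log is large,
        # unlike re-rendering the whole text widget.
        self.data += chunk

    def current_page(self):
        return bytes(self.data[-PAGE:])
```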
Working on a chat server, I need to receive json via gen_tcp in erlang.
One way is to send a 4-byte integer header, which is a good idea because I can also reject messages from clients if they exceed the maximum length, but it adds complexity on the client side.
Another way is to read a line at a time, which should also work for JSON if I am not wrong.
A third idea is to read the JSON using depth tracking (counting '{' maybe?).
That way I can also set a maximum message length and keep the client code less complex.
How can I do this specifically in Erlang, i.e., count the number of brackets opened and keep receiving until the last one closes? Is it even a good idea?
How do XMPP and other messaging protocols handle this problem?
Another way is to read a line at a time, which should also work for JSON if I am not wrong.
Any key or value in json can contain a newline, and if your read protocol is: "Stop reading when a newline character is read from the socket.", you will not read the whole json if any key or value in the json has a newline character in it.
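One quick way to see line-based framing break (a Python illustration; note that while standard encoders escape \n inside string values, a sender that pretty-prints its JSON puts literal newlines between tokens):

```python
import json

doc = {"from": "alice", "text": "hello"}
wire = json.dumps(doc, indent=2).encode()  # a sender might pretty-print

# "Read a line" framing stops at the first newline...
first_line = wire.split(b"\n", 1)[0]

# ...which is not a complete JSON document.
try:
    json.loads(first_line)
    complete = True
except ValueError:
    complete = False
```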
A third idea is to read the JSON using depth tracking (counting '{' maybe?).
Ugh. Too complex. And JSON can start with a [ as well. And a key or value could contain a ] or a } too.
The bottom line is: you need to decide what marks the end of a sent message. You could choose some relatively unique string like --*456?END OF MESSAGE!123**--, but once again a key or value in the JSON could possibly contain that string, and that is why byte headers are used. You should be able to make an informed choice on how to proceed after reading this.
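For the Erlang case specifically, gen_tcp can do the header framing for you: opening the socket with the {packet, 4} option makes the runtime prepend a 4-byte big-endian length on send and strip it on receive, so the client only needs the same option set. The framing itself is trivial to sketch in Python with struct (hypothetical function names; MAX_LEN enforces the rejection of oversized messages the question asks about):

```python
import struct

MAX_LEN = 64 * 1024  # assumed maximum message size

def frame(payload):
    """Prefix the payload with a 4-byte big-endian length header
    (the same wire format as Erlang's {packet, 4})."""
    return struct.pack(">I", len(payload)) + payload

def unframe(stream):
    """Read one framed message from a file-like stream, enforcing MAX_LEN."""
    header = stream.read(4)
    (size,) = struct.unpack(">I", header)
    if size > MAX_LEN:
        raise ValueError("message too large")
    return stream.read(size)
```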
I'm experimenting working with streams in UWP to send data from one machine to another over the network.
On the sending machine, I created a DatagramSocket and serialize the data I want to send into bytes and write that to the output stream.
On the receiving machine, I create another DatagramSocket and handle the MessageReceived event to collect the sent data.
This appears to be working in the sense that when I send data from one machine, I do receive it on the other.
However, the data I'm serializing on the sender is a size of say 8150 bytes, which I write to the stream.
On the receiving end, I'm only getting about 13 bytes of data instead of the full load I expected...
So it appears that I'm responsible on the receiving end for reconstructing the full data object by waiting for all the data to come in over what might be multiple streams...
However, packets appear to arrive 1:1. That is, if I set a breakpoint right before the send and right after the receive, then when I write and flush the data to the output stream and send it, the receiving end triggers and I get what seems to be partial data, but I never get anything else.
so while I send 8150 bytes from the sending machine, the receiving end only gets a single packet about 13 bytes in length...
Am I losing packets? It seems to be a consistent 13 bytes, so perhaps it's a buffer setting, but the problem is that the 8150 bytes is arbitrary; sometimes it's larger or smaller...
I'm obviously doing this wrong, but I'm so new to network programming I'm not really sure where to start fixing this; on a high level what's the proper way to send a complete memory object from one machine to another so that I can reconstruct an exact copy of it on the receiving end?
Okay so it turns out that the problem was that when I was writing to the output stream on the sender machine, I was using a regular StreamWriter and sending it the array as an object:
using (StreamWriter writer = new StreamWriter(stream))
{
writer.Write(output);
writer.Flush();
}
I believe this ends up writing the object via its ToString(), so what I was actually writing to the stream was the string "System.Byte[]" (which, at 13 characters, would also explain the consistent 13 bytes received)...
Instead I replaced this with a BinaryWriter and write the full output array and the complete contents are now received on the other end!
using (BinaryWriter writer = new BinaryWriter(stream))
{
writer.Write(output);
writer.Flush();
}
I know this wasn't very well put together, but I barely know what I'm doing here :) Still, I hope this might be helpful to others.
I have an Arduino hanging off /dev/ttyUSB1, communicating at 115kbaud. The statements below work fine up to the 's next' method call, where Pharo hangs. The Arduino responds to the '99' command by sending a single character $1 back to the computer. If I pull out the cable, the program continues and s contains the character $1 just like it should, but not until I pull out the cable. So it's my impression that 's next' does not return after it reads just a single byte (ok, sure, there's nothing that says it should return after reading a single byte). How do I read a single byte from a stream in Pharo? Or how do I open a read/write byte stream? I haven't found anything in the source classes that seem to do this. I've tried setting the stream to ascii, to binary, to text, and it doesn't change the behavior.
s := FileStream oldFileNamed: '/dev/ttyUSB1'.
s readWrite.
s nextPutAll: '99'. "'99' is successfully received by Arduino"
s next. "hangs here"
s close.
Thanks for your help.
Take a look at the class side of FileStream. There you'll notice that you are getting a MultiByteStream (the concreteStream) when asking FileStream for oldFileNamed:.
There can be a TextConverter or buffer involved. MultiByteStream's open:forWrite: is called, and that calls super. StandardFileStream>>open:forWrite: calls enableReadBuffering.
You probably want to call disableReadBuffering on your stream.
There is an Arduino package that has all these issues solved, take a look at this repo:
http://ss3.gemstone.com/ss/Arduino.html
I have the following code that reads from a QTCPSocket:
QString request;
while(pSocket->waitForReadyRead())
{
request.append(pSocket->readAll());
}
The problem with this code is that it reads all of the input and then pauses at the end for 30 seconds (the default timeout).
What is the proper way to avoid the long timeout and detect that the end of the input has been reached? (An answer that avoids signals is preferred because this is supposed to be happening synchronously in a thread.)
The only way to be sure is when you have received the exact number of bytes you are expecting. This is commonly done by sending the size of the data at the beginning of the data packet. Read that first and then keep looping until you get it all. An alternative is to use a sentinel, a specific series of bytes that marks the end of the data, but this usually gets messy.
If you're dealing with a situation like an HTTP response that doesn't contain a Content-Length, and you know the other end will close the connection once the data is sent, there is an alternative solution.
Use socket.setReadBufferSize to make sure there's enough read buffer for all the data that may be sent.
Call socket.waitForDisconnected to wait for the remote end to close the connection
Use socket.bytesAvailable as the content length
This works because a close of the connection doesn't discard any buffered data in a QTcpSocket.
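The close-marks-the-end pattern looks like this with plain sockets (a Python sketch of the same idea, not Qt code; an empty recv() is the disconnect signal, and the accumulated bytes are the content):

```python
import socket

def read_until_closed(sock):
    """Collect everything sent before the peer closes the connection;
    mirrors waiting for disconnect and then taking bytesAvailable."""
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:  # an empty read means the peer closed the socket
            break
        chunks.append(data)
    return b"".join(chunks)
```

This only works when the protocol guarantees the sender closes the connection after the payload, as in the HTTP-response case described above.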