I'm new to Qt, so apologies if this is a completely stupid question...
I'm trying to use QTcpSocket.
If I do:
...
QTcpSocket *socket;
socket = new QTcpSocket(this);
socket->write(1);
...
The compiler complains that write() doesn't accept an integer (it expects a const char *).
If I do:
...
QTcpSocket *socket;
socket = new QTcpSocket(this);
socket->write("1");
...
the other side sees it as the integer 49 (the ASCII code for '1').
On a similar but different issue, is it possible to send structs or unions over QTcpSocket?
==================================================
EDIT:
The server already accepts integers, and is expecting an integer - I have no control over that.
The problem you have is not really related to Qt; the same issue would arise with any other socket or streaming interface.
The provider of the server needs to give you the protocol description. This description usually contains the ports used (TCP or UDP, and the port numbers), other TCP parameters, and the encoding of the transmitted data. Sometimes this protocol (or its implementation) is called a (protocol) stack.
The encoding covers not only the byte ordering but also a description of how complex structures are transmitted.
The latter information is often expressed in something called ASN.1 (Abstract Syntax Notation One).
In case your server is really simple, just accepts integers one after the other without any meta-information, and runs on the same platform, then you could do something like this:
foreach (int i, myIntegers)
{
    ioDevice->write((const char *) &i, sizeof(i));
}
You take the address of your integer as the start of the data buffer and transmit as many bytes as the integer occupies.
But note well: this will fail if you transmit data between architectures with different endianness or integer sizes, e.g. from a little-endian Intel machine to a 16-bit microcontroller or a big-endian Motorola PowerPC.
I suggest using QDataStream with sockets; it will protect you from the little endian/big endian conversion problem.
So, something like the below:
qint32 myData = 1;
QDataStream os( &mySocket );
os << myData;
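On the receiving side, a matching QDataStream decodes it. A minimal sketch (untested, assuming a connected QTcpSocket named mySocket; pinning the stream version on both ends avoids surprises between Qt releases):

```cpp
// Sketch: receiving side of the qint32 sent above
QDataStream is(&mySocket);
is.setVersion(QDataStream::Qt_5_0);  // must match the sender's version

qint32 myData = 0;
if (mySocket.bytesAvailable() >= qint64(sizeof(qint32)))
    is >> myData;  // QDataStream uses big endian by default
```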
When you write a string representation, you also have to interpret the string on the other side; there is, for example, QString::toInt().
When you write the integer as an integer, you get more throughput, as it takes fewer bytes to transmit. However, you should read up on the topic of network byte order.
In principle it is possible to copy structs, etc. into a buffer and send them over the network. However, things get complicated again when you transmit data between different architectures or even just different builds of your software. So you shouldn't send the raw data; use serialization! See this question:
Serialization with Qt
It provides answers on how to generate streams out of objects and objects out of streams. Those streams are what you then transmit over the network; you don't have to deal with the individual integers anymore.
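The usual Qt pattern from that question is to give your struct its own QDataStream operators and stream each field explicitly; a sketch (untested, with a made-up struct for illustration):

```cpp
// Sketch: stream each field explicitly instead of sending raw struct memory
struct Reading {
    qint32 id;
    double value;
};

QDataStream &operator<<(QDataStream &out, const Reading &r)
{
    return out << r.id << r.value;
}

QDataStream &operator>>(QDataStream &in, Reading &r)
{
    return in >> r.id >> r.value;
}
```

Because each field goes through QDataStream, padding and byte order are handled for you.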
These are the different overloads you are looking for:
qint64 QIODevice::write ( const char * data, qint64 maxSize );
and
qint64 QIODevice::write ( const QByteArray & byteArray );
Related
Usually, one would have to define a new type and register it with MPI to use it. I am wondering if I can instead use protobuf to serialize an object and send it over MPI as a byte stream. I have two questions:
(1) do you foresee any problem with this approach?
(2) do I need to send length information through a separate MPI_Send(), or can I probe and use MPI_Get_count(&status, MPI_BYTE, &count)?
An example would be:
// sender
MyObj myobj;
...
size_t size = myobj.ByteSizeLong();
void *buf = malloc(size);
myobj.SerializePartialToArray(buf, size);
MPI_Isend(buf, size, MPI_BYTE, ... )
...
// receiver
MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
if (flag) {
MPI_Get_count(&status, MPI_BYTE, &size);
MPI_Recv(buf, size, MPI_BYTE, ... , &status);
MyObj obj;
obj.ParseFromArray(buf, size);
...
}
Generally you can do that. Your code sketch also looks fine (except for the omitted buf allocation on the receiver side). As Gilles points out, make sure to use status.MPI_SOURCE and status.MPI_TAG for the actual MPI_Recv, not the MPI_*_ANY wildcards.
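Putting those two fixes together, the receiver side would look roughly like this (an untested sketch; the buffer allocation omitted in the question is included here):

```cpp
// Sketch: probe first, then receive from exactly the probed source/tag
MPI_Status status;
int flag, size;
MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
if (flag) {
    MPI_Get_count(&status, MPI_BYTE, &size);
    void *buf = malloc(size);  // allocation that the question omitted
    MPI_Recv(buf, size, MPI_BYTE, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MyObj obj;
    obj.ParseFromArray(buf, size);
    free(buf);
}
```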
However, there are some performance limitations.
Protobuf isn't very fast, particularly due to encoding/decoding. How much that matters depends on your performance expectations; if you run on a high-performance network, assume a significant impact. Here are some basic benchmarks.
Not knowing the message size ahead of time, and thus always posting the receive after the send, also has performance implications. It means the actual transmission will likely start later, which may or may not matter on the sender's side, since you are using non-blocking sends. There can also be cases where you run into practical limits on the number of unexpected messages. That is not a general correctness issue, but it might require some configuration tuning.
If you go ahead with your approach, remember to do some performance analysis on the implementation. Use an MPI-aware performance analysis tool to make sure your approach doesn't introduce critical bottlenecks.
I am working on a project where I have to receive a float value and an integer value. The float has to be saved, and the integer value has to be used for another purpose.
Strictly speaking, an Arduino does not receive any integers or floats at all. The usual way an Arduino receives something is through the serial interface.
Data is sent as a sequence of bytes.
The meaning of that data is determined by the sender. You can receive monkeys and elephants with an arduino (virtual ones of course) as long as you know the protocol.
Read https://en.wikipedia.org/wiki/Universal_asynchronous_receiver-transmitter
and get some basic knowledge in data types and how to communicate them through a serial interface.
You should define a protocol for your application, or simply send the data as strings which you parse on the board.
I'm using an SSD1306 OLED and have a question about it.
When writing data to its buffer via I2C, some libraries write 16 bytes every time.
For example:
void SSD1306::sendFramebuffer(const uint8_t *buffer) {
    // Set Column Address (0x00 - 0x7F)
    sendCommand(SSD1306_COLUMNADDR);
    sendCommand(0x00);
    sendCommand(0x7F);

    // Set Page Address (0x00 - 0x07)
    sendCommand(SSD1306_PAGEADDR);
    sendCommand(0x00);
    sendCommand(0x07);

    for (uint16_t i = 0; i < SSD1306_BUFFERSIZE;) {
        i2c.start();
        i2c.write(0x40);
        for (uint8_t j = 0; j < 16; ++j, ++i) {
            i2c.write(buffer[i]);
        }
        i2c.stop();
    }
}
Why don't they write 1024 bytes directly?
Most of the I2C libraries I've seen source code for, including the one for the Arduino, chunk the data in this fashion. While the I2C standard doesn't require this, as another poster mentioned, there may be buffer considerations. The .stop() call here might signal the device to process the 16 bytes just sent and prepare for more.
Invariably, you need to read the datasheet for your device and understand what it expects in order to display properly. They say "RTFM" in software, but hardware is at least as unforgiving. You must read and follow the datasheet when interfacing with external hardware devices.
Segmenting the data into more frames helps when the receiving device doesn't have enough buffer space or simply isn't fast enough to digest the data at full rate. The START/STOP approach might give the receiving device a bit of time to process the received data. In your specific case, the 16-byte chunks seem to correspond to exactly one line of the display.
Other reasons for segmenting transfers are multi-master operations, but that doesn't seem to be the case here.
I'm developing a desktop application with qt which communicates with stm32 to send and receive data.
The thing is, the data to transfer follows a well-defined shape, with previously defined fields. My problem is that I can't find out how read() or readAll() work, or how QSerialPort even treats the data. So my question is: how can I read the data (in real time, whenever there is data in the buffer) and analyze it field by field (or byte by byte) in order to display it in the GUI?
There's nothing to it. read() and readAll() give you bytes, optionally wrapped in a QByteArray. How you deal with those bytes is up to you. The serial port doesn't "treat" or interpret the data in any way.
The major point of confusion is that somehow people think of a serial port as if it was a packet oriented interface. It's not. When the readyRead() signal fires, all that you're guaranteed is that there's at least one new byte available to read. You must cope with such fragmentation.
I am trying to send data (forces) between 2 processes using MPI_Sendrecv. Usually the data gets overwritten in the receive buffer; I do not want to overwrite the data in the receive buffer, but instead add to it the data that was received.
I could do the following: store the data from the previous time step in a different array and then add it after receiving. But I have a huge number of nodes, and I do not want to allocate memory for that storage every time step (or keep overwriting the same array).
My question: is there a way to add the received data directly into the existing buffer, i.e. accumulate into the receive memory using MPI?
Any help in this direction would be really appreciated.
I am sure collective communication calls (MPI_Reduce) cannot be used here. Are there any other commands that can do this?
In short: no, but you should be able to do this.
In long: Your suggestion makes a great deal of sense and the MPI Forum is currently considering new features that would enable essentially what you want.
It is incorrect to suggest that the data must be received before it can be accumulated. MPI_Accumulate does a remote accumulation in a one-sided fashion. You want MPI_Sendrecv_accumulate rather than MPI_Sendrecv_replace. This makes perfect sense and an implementation can internally do much better than you can because it can buffer on a per-packet basis, for example.
For suszterpatt: MPI internally buffers in the eager protocol, and in the rendezvous protocol it can set up a pipeline to minimize buffering.
The implementation of a hypothetical MPI_Recv_accumulate (for simplicity, the MPI_Send part need not be considered) would look like this in pseudocode:
int MPI_Recv_accumulate(void *buf, int count, MPI_Datatype datatype, MPI_Op op, int source, int tag, MPI_Comm comm, MPI_Status *status)
{
    if (eager)
        MPI_Reduce_local(_eager_buffer, buf, count, datatype, op);
    else /* rendezvous */
    {
        malloc _buffer
        while (mycount < count)
        {
            receive part of the incoming data into _buffer
            reduce_local from _buffer into buf
        }
    }
}
In short: no.
In long: your suggestion doesn't really make sense. The machine can't perform any operations on your received value without first putting it into local memory somewhere. You'll need a buffer to receive the newest value, and a separate sum that you will increment by the buffer's content after every receive.