I'm experimenting with streams in UWP to send data from one machine to another over the network.
On the sending machine, I create a DatagramSocket, serialize the data I want to send into bytes, and write that to the output stream.
On the receiving machine, I create another DatagramSocket and handle the MessageReceived event to collect the sent data.
This appears to be working in the sense that when I send data from one machine, I do receive it on the other.
However, the data I'm serializing on the sender is, say, 8150 bytes in size, all of which I write to the stream.
On the receiving end, I'm only getting about 13 bytes of data instead of the full payload I expected...
So it appears that I'm responsible on the receiving end for reconstructing the full data object by waiting for all the data to come in over what might be multiple streams...
However, the packets appear to be 1:1 -- that is, if I set a breakpoint right before the send and another right after the receive, then when I write and flush the data to the output stream and send it, the receiving end triggers once, I get what seems to be partial data, and I never get anything else.
So while I send 8150 bytes from the sending machine, the receiving end only gets a single packet about 13 bytes in length...
Am I losing packets? It seems to be a consistent 13 bytes, so perhaps it's a buffer setting, but the 8150 bytes is arbitrary; sometimes the data is larger or smaller...
I'm obviously doing this wrong, but I'm so new to network programming that I'm not really sure where to start fixing it. At a high level, what's the proper way to send a complete memory object from one machine to another so that I can reconstruct an exact copy of it on the receiving end?
Okay, so it turns out that the problem was that when I was writing to the output stream on the sender machine, I was using a regular StreamWriter and passing it the array as an object:
using (StreamWriter writer = new StreamWriter(stream))
{
    writer.Write(output);
    writer.Flush();
}
I believe this ends up writing the object itself via ToString(), so what I was actually writing to the stream was the type name ("System.Byte[]" or whatever the type was) rather than the contents...
Instead I replaced it with a BinaryWriter and wrote the full output array, and the complete contents are now received on the other end!
using (BinaryWriter writer = new BinaryWriter(stream))
{
    writer.Write(output);
    writer.Flush();
}
I know this wasn't very well put together, but I barely know what I'm doing here :) Still, I hope this might be helpful to others.
I want to read characters or strings from a serial port and display them in a QTextBrowser, using Qt 4.8.6, so I call the following functions (textBrowser is an object of type QTextBrowser):
connect(com, SIGNAL(readyRead()), this, SLOT(readSerialPort()));
connect(textBrowser, SIGNAL(textChanged()), SimApplianceQtClass, SLOT(on_textBrowser_textChanged()));

void SimApplianceQt::on_textBrowser_textChanged()
{
    ui.textBrowser->moveCursor(QTextCursor::End);
}

void SimApplianceQt::readSerialPort()
{
    QByteArray temp = com->readAll();
    ui.textBrowser->insertPlainText(temp);
}
However, the characters or strings are never displayed correctly in the textBrowser. The input strings are always cut into smaller pieces and displayed across multiple lines. For example, the string "0123456789" may be displayed as (across multiple lines):
01
2345
6789
How can I deal with this issue? Many thanks.
What happens is that the readyRead signal is fired not after everything has been received, but after something has been received and is ready to read.
There is no guarantee that everything will have arrived or is readable by the time you receive the first readyRead.
This is a common "problem" for almost any kind of IO, especially when the data is more than a few bytes. There is usually no automatic way to know when all the data has been received.
There are a few possible solutions:
All of them will require you to put the data in a buffer in readSerialPort() instead of adding it directly to the text browser. Maybe a simple QByteArray member variable in SimApplianceQt would already do the trick in your case.
The rest depends on the exact solution.
If you have access to the sender of the data, you could send the number of bytes that will be sent before sending the actual string. The size must always be an integer type of the same width (for example, always a quint32). Then, in readSerialPort(), you would first read that size, and then keep reading bytes into your buffer until everything has been received. Only then would you print it. I'd recommend that one; it is what is used in almost all cases where this problem arises, and there is a small sketch of the idea after these options.
If you have access to the sender of the data, you could send some kind of "ending sequence" at the end of the string. In your readSerialPort(), you would then keep reading bytes into your buffer until you receive that ending sequence, and once it has arrived you can print everything that came in before it. Note that the ending sequence itself could arrive split across reads, so you'd have to take care of that, too.
If you do not have access to the sender, the best idea I could come up with would be to work with a timer. You put everything into a buffer and restart that timer each time readSerialPort() is called. When the timer runs out, that means no new data has arrived for a while, and you can probably print what you have so far. This is... risky, and I wouldn't recommend it if there is any other way.
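To make the recommended option concrete, here is the shape of that size-prefixed protocol as a minimal sketch in plain Java streams (not Qt; the class and method names here are mine). The receive side is exactly the loop you would spread across readyRead calls: read the fixed-size length field, then keep appending to your buffer until that many bytes have arrived.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class LengthPrefixedIO {

    // Sender: write a fixed-size length field first, then the payload.
    public static void sendMessage(OutputStream raw, String text) throws IOException {
        DataOutputStream out = new DataOutputStream(raw);
        byte[] payload = text.getBytes("UTF-8");
        out.writeInt(payload.length);   // always the same width, like the always-quint32 above
        out.write(payload);
        out.flush();
    }

    // Receiver: read the length, then keep reading until exactly that many bytes are in.
    public static String receiveMessage(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        int length = in.readInt();        // the size field written by sendMessage
        byte[] buffer = new byte[length];
        in.readFully(buffer);             // keeps reading until all 'length' bytes have arrived
        return new String(buffer, "UTF-8");
    }
}

In the Qt slot, the equivalent is to append com->readAll() to a QByteArray member and only take a message off the front of it once at least 'length' bytes are buffered.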
I have a Java program which connects to the internet and sends files (emails with attachments over SSL, using JavaMail).
It sends only one email at a time.
Is there a way that my program could track the network traffic it itself is generating?
That way I could track progress of emails being sent...
It would also be nice if it was cross-platform solution...
Here's another approach that only works for sending messages...
The data for a message to be sent is produced by the Message.writeTo method and filtered through various streams that send it directly out the socket. You could subclass MimeMessage, override the writeTo method, wrap the OutputStream with your own OutputStream that counts the data flowing through it (similar to my other suggestion), and reports that to your program. In code...
public class MyMessage extends MimeMessage {
    ...

    public void writeTo(OutputStream os, String[] ignoreList) throws IOException, MessagingException {
        super.writeTo(new MyCountingStream(os), ignoreList);
    }
}
If you want percent completion you could first use Message.writeTo to write the message to a stream that does nothing but count the amount of data being written, while throwing away the data. Then you know how big the message really is, so when the message is being sent you can tell what percent of the message that is.
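For completeness, MyCountingStream could be as simple as the sketch below. The class name comes from the snippet above; the rest (getCount() in particular) is my own guess at an interface, not a JavaMail API.

import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class MyCountingStream extends FilterOutputStream {
    private long count = 0;

    public MyCountingStream(OutputStream out) {
        super(out);
    }

    public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);   // avoid FilterOutputStream's byte-at-a-time default
        count += len;
    }

    // Poll this (or turn it into a listener callback) to report progress.
    public long getCount() {
        return count;
    }
}

To get the total size up front, wrap a stream that discards everything, call writeTo once, note getCount(), and then compare against the running count during the real send.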
Hope that helps...
Another user's approach is here:
Using JProgressBar with Java Mail ( knowing the progress after transport.send() )
At a lower level, if you want to monitor how many bytes are being sent, you should be able to write your own SocketFactory that produces Sockets that produce wrapped InputStreams and OutputStreams that monitor the amount of data passing through them. It's a bit of work, and perhaps lower level than you really want, but it's another approach.
I've been meaning to do this myself for some time, but I'm still waiting for that round tuit... :-)
Anyway, here's just a bit more detail. There might be gotchas I'm not aware of once you get into it...
You need to create your own SocketFactory class. There's a trivial example in the JavaMail SSLNOTES.txt file that delegates to another factory to do the work. Instead of factory.createSocket(...), you need to use "new MySocket(factory.createSocket(...))", where MySocket is a class you write that overrides all the methods to delegate to the Socket passed into the constructor, except for getInputStream and getOutputStream, which have to wrap the returned streams with stream classes you create yourself.

Those stream classes then have to override all the read and write methods to keep track of how much data is being transferred, and make that information available however you want to the code that monitors progress. Before you do an operation that you want to monitor, you reset the count; as the operation progresses, the count is updated. What it won't give you is a "percent completion" measure, since you have no idea how much low-level data needs to be sent to complete the operation.
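As a rough sketch of the factory half of that, under the assumption that MySocket is the delegating wrapper described above (which you would still have to write, overriding every Socket method to forward to the wrapped instance); none of these class names come from JavaMail:

import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import javax.net.SocketFactory;

public class MonitoringSocketFactory extends SocketFactory {
    private final SocketFactory factory = SocketFactory.getDefault();

    public Socket createSocket(String host, int port) throws IOException {
        return new MySocket(factory.createSocket(host, port));
    }

    public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
        return new MySocket(factory.createSocket(host, port, localHost, localPort));
    }

    public Socket createSocket(InetAddress host, int port) throws IOException {
        return new MySocket(factory.createSocket(host, port));
    }

    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
        return new MySocket(factory.createSocket(address, port, localAddress, localPort));
    }
}

You would then point the mail session at it through the socket factory properties (for example mail.smtp.socketFactory.class), as described in SSLNOTES.txt.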
I am trying to send an image from my MIDlet to an HTTP server. The images are converted into bytes and sent to the server using the HTTP multipart/form-data request format.
ByteArrayOutputStream bos = new ByteArrayOutputStream();
bos.write(boundaryMessage.getBytes());
bos.write(fileBytes);
bos.write(endBoundary.getBytes());
When the image size is less than around 500 KB the code works fine, but when it is larger I get: Uncaught exception java.lang.OutOfMemoryError. I tried using Java ME SDK 3.0 and Nokia S40 5th Edition FP1. Any help is greatly appreciated. Thanks for looking.
I used the following class file: click here
Being forced to read the whole file into memory with the first getFileBytes() call, in order to transmit it in one piece, is most likely what's running the system out of memory.
Find a way to read about 100K, transmit it, then read another 100K, and so on until the whole file is done.
The HttpMultipartRequest class's constructor, as written, only allows the file to be transmitted as one single object. Even though the class is an implementation of the MIME multipart content protocol, it is limited to transmitting just one part.
The class can be modified to allow sending multiple parts. Have a look at the protocol specification RFC1341, especially the example half-way through.
With these three lines together, as they are in the constructor, the whole file is sent in one part:
bos.write(boundaryMessage.getBytes());
bos.write(fileBytes);
bos.write(endBoundary.getBytes());
But in the multipart case, there needs to be multiple boundaries, before the endBoundary:
// getMoreFileBytes() is a hypothetical helper that returns the next chunk of the file,
// or an empty array once the file is exhausted.
for (byte[] bytes = getMoreFileBytes(); bytes.length > 0; bytes = getMoreFileBytes()) {
    bos.write(boundaryMessage.getBytes());
    bos.write(bytes);
}
bos.write(endBoundary.getBytes());
As a quick fix, let the constructor open the file and read it 100k at a time. It already receives a fileName parameter.
The PHP script on the other end, should reassemble the original file from the pieces.
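As a rough sketch of that quick fix, the constructor could stream the file roughly 100K at a time instead of building one huge byte array. FileConnection is the JSR 75 file API; boundaryMessage, endBoundary and fileName are names the class already uses, while conn stands for the already-opened HttpConnection and is a placeholder of mine. Whether the phone truly streams this or still buffers it internally depends on the MIDP implementation.

import java.io.InputStream;
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;

// ... inside the constructor, instead of one big fileBytes array:
FileConnection file = (FileConnection) Connector.open(fileName, Connector.READ);
InputStream in = file.openInputStream();
OutputStream out = conn.openOutputStream();   // conn: the HttpConnection opened earlier

out.write(boundaryMessage.getBytes());
byte[] chunk = new byte[100 * 1024];          // ~100K at a time
int read;
while ((read = in.read(chunk)) > 0) {
    out.write(chunk, 0, read);                // push each piece out as soon as it is read
}
out.write(endBoundary.getBytes());
out.flush();

in.close();
file.close();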
I am not very familiar with the forum rules; I tried to comment on your answer but it wouldn't let me.
Okay..
Now I am getting java.io.IOException: Persistent connection dropped after first chunk sent, cannot retry
Previously I tried to use the application/x-www-form-urlencoded request type with Base64 encoding, using kidcandy's code here: http://forums.sun.com/thread.jspa?threadID=538500
That code divides the image data into chunks to avoid the 'persistent connection dropped' problem and creates the connection to the server in a 'for' loop. The problem is that the maximum chunk size may be only 500-700 bytes, so to send a 100 KB image the code needs to create and close the connection 200 times. I tried to run this on a Nokia 5310 phone and it behaves like it is hibernating... so it is not useful.
Now, should I use that for loop for the 'multipart/form-data' request?
What is the maximum chunk size for this type of request?
Or any other idea?
Regards
I have the following code that reads from a QTcpSocket:
QString request;
while (pSocket->waitForReadyRead())
{
    request.append(pSocket->readAll());
}
The problem with this code is that it reads all of the input and then pauses at the end for 30 seconds (which is the default timeout).
What is the proper way to avoid the long timeout and detect that the end of the input has been reached? (An answer that avoids signals is preferred because this is supposed to be happening synchronously in a thread.)
The only way to be sure is when you have received the exact number of bytes you are expecting. This is commonly done by sending the size of the data at the beginning of the data packet: read that first, and then keep looping until you have received it all. An alternative is to use a sentinel, a specific series of bytes that marks the end of the data, but this usually gets messy.
If you're dealing with a situation like an HTTP response that doesn't contain a Content-Length, and you know the other end will close the connection once the data is sent, there is an alternative solution.
1. Use socket.setReadBufferSize to make sure there's enough read buffer for all the data that may be sent.
2. Call socket.waitForDisconnected to wait for the remote end to close the connection.
3. Use socket.bytesAvailable as the content length.
This works because a close of the connection doesn't discard any buffered data in a QTcpSocket.
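The "read until the other end closes" idea is not Qt-specific; here is a minimal sketch with a plain java.net socket, just to show the shape of it (the three QTcpSocket calls above achieve the same thing without the explicit loop):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public class ReadUntilClosed {
    public static byte[] readAll(Socket socket) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        InputStream in = socket.getInputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {   // -1 means the remote end closed the connection
            buffer.write(chunk, 0, n);
        }
        return buffer.toByteArray();           // its length is the effective content length
    }
}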
I need to serve MP3 content that is generated dynamically during the request. My clients (podcatchers I can't configure) are timing out before I'm able to generate the first byte of the response data.
Is there a way to send fodder/throwaway data while I'm generating the real data, to prevent the timeout, but in a way that allows me to instruct the client to ignore/discard the fodder data once I'm ready to start sending the "real" data?
If the first few bytes of the encoded content are always the same then you could very slowly send back those bytes. I'm not familiar with the MP3 file format, but if the first few bytes are always some magic (and constant) header, this technique could work.
Once the file encoding gets started you could then skip the first few bytes (since you already sent them) and continue from there.
You could have a default, static "hi, welcome to Lance's stream!" stream go out while you're generating the real deal.