Marmalade HTTP GET not receiving entire response - http

Here is my code:
CIwHTTP http;
std::string output = "";
char buffer[1024];

int32 httpCallback(void* sys_data, void* user_data) {
    http.ReadData(buffer, http.ContentLength());
    output += buffer;
    return 0;
}

http.Get(url.c_str(), httpCallback, 0);
The Content-Length header is properly set by the API. For some reason only part of the API output is received: sometimes the entire string arrives, and sometimes only varying portions of it. It seems random. Help!

You are passing ContentLength() to ReadData(), but your buffer is only 1024 bytes, so any response larger than that overflows the buffer.
You can either call ReadData() in a loop, 1024 bytes at a time, until it returns zero, or allocate a buffer of ContentLength() bytes on the heap. Note also that ReadData() does not null-terminate the buffer, so `output += buffer;` can append stale bytes past the data actually read; append the exact byte count instead.

IwHTTP::Get() only performs the callback once the headers for the response have been received.
You then need to use IwHTTP::ReadContent() to actually read the remainder of the response in a series of callbacks, as hinted at in one of the other comments.
Please see the IwHTTP Example in our API reference documentation for more details.
Hope this helps!

Related

How do I send/receive binary data over TCP using Apache Camel

I see numerous examples of using Apache Camel to send or receive data over a TCP connection, as long as the data is text. For example setting up an endpoint such as netty:tcp://localhost:5000?textline=true. The important thing to note here is the usage of the option textline=true. This works great when you are dealing with text, not so much if it's just bytes.
How do I do the same thing, except that instead of text, I'm sending blobs of binary data (it's not serialized Java objects)? From what I could find, it would seem that I need to specify a codec to use (instead of the default text codec when textline=true is used), but I don't know how to do that.
I can't find any good examples of how to do this on the Camel site or anywhere else.
UPDATE [Thu Jul 19 10:43:10 EDT 2018]:
I have managed to attain a certain degree of success. I created a simple DataFormat and plugged it into the stream:
public class ByteArrayFormat implements DataFormat {
    @Override
    public void marshal(Exchange exchange, Object object, OutputStream ostream) throws Exception {
        byte[] bytes = exchange.getContext().getTypeConverter().mandatoryConvertTo(byte[].class, object);
        ostream.write(bytes);
    }

    @Override
    public Object unmarshal(Exchange exchange, InputStream istream) throws Exception {
        byte[] bytes = exchange.getContext().getTypeConverter().mandatoryConvertTo(byte[].class, istream);
        return bytes;
    }
}
I am not 100% sure this works, as I may have been seeing only unmarshal in action. Any suggestions or corrections are welcome.
I then modified my route as follows:
public void configure() {
    from("netty4:tcp://localhost:5000?allowDefaultCodec=false&sync=false")
        .unmarshal(new ByteArrayFormat())
        .process(processor)
        .to("file://test");
}
The testing "processor" does nothing more than grab the input stream from the exchange and hexdump the data to the console. The "to" just places what it gets in a file so I can look at it there as well.
I then did a simple test where I piped the contents of a binary file (via nc) and watched what happened. As far as I can tell, I received the data of the file as bytes.
There is a major caveat though. The result was broken into 1024 byte blocks (which I assume is a limitation imposed by the network stack), so the original file of 3862 bytes came over as four "receives" of 1024 byte blocks (really, 3 x 1024 + 790). Any ideas or suggestions on how to "stitch" this back together would be helpful.

UDP API in Rust

1) The API for send here returns Result<usize>. Why is that? In my head, a UDP send is all or nothing. The return value seems to suggest that a send can succeed without the entire buffer being written, which makes me write code like:
let mut bytes_written = 0;
while bytes_written < data.len() {
    bytes_written += match udp_socket.send_to(&data[bytes_written..], addr) {
        Ok(bytes_tx) => bytes_tx,
        Err(_) => break,
    };
}
Recently someone told me this is completely unnecessary. But I don't understand: if that were true, why is the return type not Result<()>, which is also what I was expecting?
2)
For reads, though, I understand. I could give it a buffer of 100 bytes but the datagram might only be 50 bytes long, so essentially I should use only read_buf[..size_read]. My question here is: what happens if the buffer size is 100 but the datagram is, say, 150 bytes? Will recv_from fill in only 100 bytes and return Ok((100, some_peer_addr))? If I read again, will it fill in the remainder of the datagram? What if another 50-byte datagram arrived before my second read? Will I get just the remaining 50 bytes the second time and the 50 bytes of the new datagram the third time, or a full 100 bytes the second time that also contains the new datagram? Or will it be an error, so that I lose the first datagram on my initial read and can never recover it?
The answer to both of these questions lies in the documentation of the respective BSD sockets functions, sendto() and recvfrom(). If you use some *nix system (OS X or Linux, for example), you can use man sendto and man recvfrom to find it.
1) The sendto() man page is rather vague on this; the Windows API page explicitly says that it is possible for the return value to be less than the len argument. See also this question. This particular point seems to be somewhat under-documented. It is probably safe to assume that the return value will always be equal either to len or to the error code. Problems may arise if the length of the data sent through sendto() exceeds the internal buffer size inside the OS kernel, but it seems that at least Windows will return an error in this case.
2) recvfrom() man page unambiguously states that the part of a datagram which does not fit into the buffer will be discarded:
The recvfrom() function shall return the length of the message
written to the buffer pointed to by the buffer argument. For
message-based sockets, such as SOCK_RAW, SOCK_DGRAM, and
SOCK_SEQPACKET, the entire message shall be read in a single
operation. If a message is too long to fit in the supplied buffer,
and MSG_PEEK is not set in the flags argument, the excess bytes shall
be discarded.
So yes, recv_from() will fill exactly 100 bytes, the rest will be discarded, and further calls to recv_from() will return new datagrams.
If you dig down, send_to() is just wrapping the C sendto() function. That function returns the number of bytes sent, so Rust simply passes that on (while handling the -1 case and turning errno into actual errors).

recv function gives malformed data Winsock2 C++

In my simple TCP client-server application, the server repeatedly sends a 1 kB message to the client, and the client replies with an acknowledgement (just the string 'ACK') for each packet. Think of this scenario as the client and server passing 1 kB messages back and forth in an infinite loop.
I send the same message every time, and the first byte (first char) is always 1. But while testing this client and server application on the same machine for a long time, I noticed the first character of some of the received messages is something else in the receive buffer, even though recv returned 1024 (1 kB). This does not happen frequently.
This is how I receive:
char recvBuff[DEFAULT_BUFFER_SIZE];
int iResult = SOCKET_ERROR;
iResult = recv(curSocket, recvBuff, DEFAULT_BUFFER_SIZE, 0);
if (iResult == SOCKET_ERROR)
{
    return iResult;
}
if (recvBuff[0] != 1)
{
    // malformed receive
}
MessageHeader *q = (MessageHeader*)recvBuff;
message.header = *q; q++;
std::string temp((char*)q, message.header.fragmentSize);
message.message = temp;
Actually the problem is in constructing the temp string: it breaks because the correct fragment size is not received. I tried to drop these malformed packets, but then there is a gap between the last successfully received fragment ID and the first successfully received fragment ID after the malformed receives. Any idea why these malformed receives happen?
You’re assuming that you’ve received a complete message when the recv() call completes. If this is a TCP connection (as opposed to UDP), it is byte-oriented, and that means that recv() will return whenever there are any bytes available.
Put more explicitly, there is no reason that doing
send (toServerSocket, someMessage, 1024, 0);
on the client side will cause
recv (fromClientSocket, myBuffer, 1024, 0);
to receive 1,024 bytes. It could just as well receive 27 bytes, with the remaining 997 coming from future calls to recv().
What’s happening in your program, then, is that you’re getting one of these short returns, and it’s causing your program to lose sync with the message stream. How to fix it? Use recv() to read enough of your message that you know the length (or use a fixed length, though that’s inefficient in many cases). Then continue calling recv() into your buffer until you have read at least that many bytes. Note that you might read more bytes than the length of your message — that is, you may read some bytes that belong to the next message — so you will need to keep those in the buffer after processing the current message.

Arduino client read crlf from http (strip header)

I'm playing with arduino's wifi shield and trying to strip http header out by searching for CRLF (\r\n) in my loop().
while (client.available()) {
    char c = client.read();
    // I need to check to see if it's CRLF
    // and parse the response.
}
What would be the easiest way to do this? I'm only interested in the response body, not the header. I thought about putting the characters in a buffer and checking the current and previous characters for a CRLF (\r\n) match.
Suggestions? Thanks.
Try using the TextFinder library http://playground.arduino.cc/Code/TextFinder
You should probably consider using the existing HttpClient library or, perhaps, just take a look at how the parsing of HTTP response is implemented there.

return value of QTcpSocket::write(QByteArray& buf);

Does this function always return buf.size() or -1?
If not, does it mean I need to call the function again to write the remaining data that was not written?
For example, say I have a 100-byte QByteArray.
When I call tcpSocket.write(buf_100_bytes), is it possible that I get 60 or something else?
Additionally, does this function return immediately?
As with POSIX write(), the QIODevice::write() returns the number of bytes written. That can be any number between 0 and the buffer size. Also, in case of an error, it might return a negative number, which you should check for separately.
QIODevice::write() does not block for sockets (they are set to non-blocking mode), the bytes are just added to a buffer and written later.
To get a notification when bytes are written, you can connect to the bytesWritten(qint64) signal. To block until the bytes are actually written, you can use waitForBytesWritten() (usually not a good idea in the main/UI thread).
I quote Qt documentation:
Writes at most maxSize bytes of data from data to the device. Returns the number of bytes that were actually written, or -1 if an error occurred.
It means it will return the number of bytes written, or -1 in case of an error. You can get the error by calling the error() method or connecting to the error() signal.