Using .NET 4.0 and IIS 7.5 (Windows Server 2008 R2), I would like to stream out binary content of about 10 MB. The content is already in a MemoryStream. I wonder whether IIS 7.5 automatically chunks the output stream. From the client receiving the stream, is there any difference between these two approaches?
// #1: Output the entire stream in one single chunk
Response.OutputStream.Write(memoryStr.ToArray(), 0, (int) memoryStr.Length);
Response.Flush();
// #2: Output in 4 KB chunks
memoryStr.Position = 0; // rewind: Read starts at the current position
byte[] buffer = new byte[4096];
int byteReadCount;
while ((byteReadCount = memoryStr.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, byteReadCount);
    Response.Flush();
}
Thanks in advance for any help.
I didn't try your second suggestion of passing the original data stream. The memory stream was indeed populated from the response stream of a web request. Here is the code:
HttpWebRequest webreq = (HttpWebRequest) WebRequest.Create(this._targetUri);
using (HttpWebResponse httpResponse = (HttpWebResponse) webreq.GetResponse())
{
    using (Stream responseStream = httpResponse.GetResponseStream())
    {
        byte[] buffer = new byte[4096];
        int byteReadCount = 0;
        MemoryStream memoryStr = new MemoryStream(4096);
        while ((byteReadCount = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            memoryStr.Write(buffer, 0, byteReadCount);
        }
        // ... etc ... //
    }
}
Do you think I can safely pass the responseStream directly to Response.OutputStream.Write()? If yes, can you suggest an economical way of doing so? How do I pass the byte array plus the exact stream length to Response.OutputStream.Write()?
The second option is the better one, as ToArray will in fact create a copy of the internal array stored in the MemoryStream.
Alternatively, you can use memoryStr.GetBuffer(), which returns a reference to that internal array. In this case you must use the memoryStr.Length property as the count, because the buffer returned by GetBuffer() is in general bigger than the actual stored data (it is allocated chunk by chunk, not byte by byte).
Note that it would be best to pass the original data as a stream directly to the ASP.NET output stream, instead of using an intermediary MemoryStream. It depends on how you get your binary data in the first place.
Another option, if you serve the exact same content often, is to save this MemoryStream to a physical file (using a FileStream) and use Response.TransmitFile on all subsequent requests. Response.TransmitFile uses low-level Windows socket layers; there is nothing faster to send a file.
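For what it's worth, the client sees the same byte sequence either way. A minimal Python sketch (standing in for the C# code above, with io.BytesIO playing the role of MemoryStream) illustrates that chunked copying and a single write produce identical output:

```python
import io

# Illustrative sketch only (not the ASP.NET API): copying a stream in
# 4 KB chunks yields exactly the same bytes as one big write. Only
# server-side memory use and HTTP framing (Content-Length vs. chunked
# transfer encoding) may differ.

def copy_in_chunks(src, dst, chunk_size=4096):
    """Copy src to dst chunk by chunk, like option #2 above."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)

payload = bytes(range(256)) * 40  # ~10 KB of sample data

one_shot = io.BytesIO()
one_shot.write(payload)                       # option #1: single write

chunked = io.BytesIO()
copy_in_chunks(io.BytesIO(payload), chunked)  # option #2: chunked loop

assert one_shot.getvalue() == chunked.getvalue()
```

The chunked variant still matters on the server: it keeps the working buffer bounded instead of materializing the whole payload a second time.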
Related
Writing data from the InputStream obtained by urlConnection.getInputStream() into a ByteArrayOutputStream is taking more than a minute.
CODE SNIPPET
URL requestUrl = new URL(_sampleurl_);
HttpURLConnection urlConnection = (HttpURLConnection) requestUrl.openConnection();
urlConnection.setConnectTimeout(10000);
urlConnection.setReadTimeout(10000);
urlConnection.setRequestMethod("GET");
urlConnection.connect();
int statusCode = urlConnection.getResponseCode(); // 200
long contentLengthByTen = urlConnection.getHeaderFieldLong("Content-Length", _defaultSize_); // 7631029
InputStream inputStream = urlConnection.getInputStream();
final byte[] byteArray = new byte[16384];
int length;
ByteArrayOutputStream byteArrOutStrm = new ByteArrayOutputStream();
int k = 0;
while ((length = inputStream.read(byteArray)) != -1)
{
    byteArrOutStrm.write(byteArray, 0, length);
    k++;
}
Some of the observations are:
The while loop alone executes for more than one minute, and it iterates around 2650 times.
The HttpURLConnection response code is 200, so the entire content is available in the InputStream.
The Content-Length of the file is 7631029 (bytes).
I have two questions:
Though the byte array size is 16384 and the status code is 200, the inputStream.read method reads only 2800 bytes averagely. Why and which factors decide these bytes?
Proper solution to reduce the processing time?
Though the byte array size is 16384 and the status code is 200, the inputStream.read method reads only 2800 bytes averagely. Why and which factors decide these bytes?
The number of bytes available in the socket receive buffer, which in turn is a function of the speed of the sender, the frequency of reads at the receiver, the network bandwidth, and the path MTU. What you're seeing isn't surprising: it indicates that you're keeping up with the sender pretty well.
Proper solution to reduce the processing time?
Have the sender send faster. There is nothing wrong with your receiving code, although I wonder why you're collecting the entire response in a ByteArrayOutputStream before doing anything with it. This just wastes memory and adds latency.
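The partial reads described above are easy to reproduce with plain sockets. Here is an illustrative Python sketch (not the Java code from the question): however small each individual read is, the reader simply accumulates until it has the announced content length:

```python
import socket
import threading

# Sketch: a single recv() only returns what is currently in the socket
# receive buffer, even with a 16 KB buffer. The reader must loop and
# accumulate until it has read Content-Length bytes in total.

def read_exactly(sock, n, bufsize=16384):
    """Accumulate exactly n bytes, however many recv() returns per call."""
    chunks = []
    remaining = n
    while remaining > 0:
        data = sock.recv(min(bufsize, remaining))
        if not data:
            raise EOFError("connection closed before %d bytes arrived" % n)
        chunks.append(data)
        remaining -= len(data)
    return b"".join(chunks)

a, b = socket.socketpair()
payload = b"x" * 100_000

def sender():
    # dribble the payload out in ~2800-byte pieces, like a slow network
    for i in range(0, len(payload), 2800):
        a.sendall(payload[i:i + 2800])
    a.close()

threading.Thread(target=sender).start()
received = read_exactly(b, len(payload))
assert received == payload
```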
We have two Qt applications. App1 accepts a connection from App2 through QTcpServer and stores it in an instance of QTcpSocket* tcpSocket. App1 runs a simulation with 30 Hz. For each simulation run, a QByteArray consisting of a few kilobytes is sent using the following code (from the main/GUI thread):
QByteArray block;
/* lines omitted which write data into block */
tcpSocket->write(block, block.size());
tcpSocket->waitForBytesWritten(1);
The receiving socket listens to the QTcpSocket::readyRead signal (in the main/GUI thread) and prints the corresponding time stamp to the GUI.
When both App1 and App2 run on the same system, the packages are perfectly in sync. However, when App1 and App2 run on different systems connected through a network, App2 is no longer in sync with the simulation in App1. The packages come in much more slowly. Even more surprising (and indicating that our implementation is wrong) is the fact that when we stop the simulation loop, no more packages are received. This surprises us, because we expect from the TCP protocol that all packages will arrive eventually.
We built the TCP logic based on Qt's fortune example. The fortune server, however, is different, because it only sends one package per incoming client. Could someone identify what we have done wrong?
Note: we use MSVC2012 (App1), MSVC2010 (App2) and Qt 5.2.
Edit: By a package I mean the result of a single simulation experiment, which is a bunch of numbers written into the QByteArray block. The first bytes, however, contain the length of the QByteArray, so that the client can check whether all data has been received. This is the code which is called when the QTcpSocket::readyRead signal is emitted:
QDataStream in(tcpSocket);
in.setVersion(QDataStream::Qt_5_2);
if (blockSize == 0) {
    if (tcpSocket->bytesAvailable() < (int)sizeof(quint16))
        return; // cannot yet read size from data block
    in >> blockSize; // read data size for data block
}
// if the whole data block is not yet received, ignore it
if (tcpSocket->bytesAvailable() < blockSize)
    return;
// if we get here, the whole object is available to parse
QByteArray object;
in >> object;
blockSize = 0; // reset blockSize for handling the next package
return;
The problem in our implementation was caused by data packages being piled up and incorrect handling of packages which had only arrived partially.
The answer goes in the direction of Tcp packets using QTcpSocket. However this answer could not be applied in a straightforward manner, because we rely on QDataStream instead of plain QByteArray.
The following code (run each time QTcpSocket::readyRead is emitted) works for us and shows how a raw series of bytes can be read from a QDataStream. Unfortunately, it seems that it is not possible to process the data in a cleaner way (using operator>>).
QDataStream in(tcpSocket);
in.setVersion(QDataStream::Qt_5_2);
while (tcpSocket->bytesAvailable())
{
    if (tcpSocket->bytesAvailable() < (int)(sizeof(quint16) + sizeof(quint8) + sizeof(quint32)))
        return; // cannot yet read size and type info from data block
    in >> blockSize;
    in >> dataType;
    char* temp = new char[4]; // read and ignore the quint32 length QDataStream prepends to a serialized QByteArray
    int bufferSize = in.readRawData(temp, 4);
    delete[] temp; // new[] must be matched by delete[]
    temp = NULL;

    QByteArray buffer;
    int objectSize = blockSize - (sizeof(quint16) + sizeof(quint8) + sizeof(quint32));
    temp = new char[objectSize];
    bufferSize = in.readRawData(temp, objectSize);
    buffer.append(temp, bufferSize);
    delete[] temp;
    temp = NULL;

    if (buffer.size() == objectSize)
    {
        // ready for parsing
    }
    else if (buffer.size() > objectSize)
    {
        // buffer size larger than expected object size, but still ready for parsing
    }
    else
    {
        // buffer size smaller than expected object size: wait for the rest
        while (buffer.size() < objectSize)
        {
            tcpSocket->waitForReadyRead();
            char* chunk = new char[objectSize - buffer.size()];
            int chunkSize = in.readRawData(chunk, objectSize - buffer.size());
            buffer.append(chunk, chunkSize);
            delete[] chunk;
        }
        // now ready for parsing
    }

    if (dataType == 0)
    {
        // deserialize object
    }
}
Please note that the first three bytes of the expected QDataStream are part of our own protocol: blockSize indicates the number of bytes of a complete single package, and dataType helps deserialize the binary chunk.
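The same framing can be sketched outside Qt. The following Python snippet is only an illustration of the protocol described above (not the Qt implementation): a big-endian quint16 block size, a quint8 data type, and the quint32 length prefix that QDataStream adds to a QByteArray, followed by the payload:

```python
import io
import struct

# QDataStream defaults to big-endian, hence ">" in the format string.
# Header: quint16 blockSize, quint8 dataType, quint32 payload length.
HEADER = struct.Struct(">HBI")

def pack_block(data_type, payload):
    """Build one wire block: header plus payload."""
    block_size = HEADER.size + len(payload)
    return HEADER.pack(block_size, data_type, len(payload)) + payload

def unpack_blocks(buffer):
    """Yield (data_type, payload) for each complete block in buffer;
    stop at a partial header or partial payload (wait for more data)."""
    stream = io.BytesIO(buffer)
    while True:
        header = stream.read(HEADER.size)
        if len(header) < HEADER.size:
            break  # partial header
        block_size, data_type, length = HEADER.unpack(header)
        payload = stream.read(length)
        if len(payload) < length:
            break  # partial payload
        yield data_type, payload

wire = pack_block(0, b"first") + pack_block(1, b"second")
blocks = list(unpack_blocks(wire))
assert blocks == [(0, b"first"), (1, b"second")]
```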
Edit
For reducing the latency of sending objects through the TCP connection, disabling packet bunching was very useful:
// disable Nagle's algorithm to avoid delay and bunching of small packages
tcpSocketPosData->setSocketOption(QAbstractSocket::LowDelayOption,1);
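Qt's QAbstractSocket::LowDelayOption corresponds to the standard TCP_NODELAY socket option, so the same tweak is available with plain BSD-style sockets. A minimal Python equivalent:

```python
import socket

# Disable Nagle's algorithm, mirroring Qt's LowDelayOption: small
# packages are sent immediately instead of being bunched together.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
assert nodelay != 0  # option is set
```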
In a TcpClient/TcpListener setup, is there any difference, from the receiving endpoint's point of view, between:
// Will sending a prefixed length before the data...
client.GetStream().Write(data, 0, 4); // Int32 payload size = 80000
client.GetStream().Write(data, 0, 80000); // payload
// Appear as 80004 bytes in the stream?
// i.e. there is no end of stream to demarcate the first Write() from the second?
client.GetStream().Write(data, 0, 80004);
// Which means I can potentially read more than 4 bytes on the first read
var read = client.GetStream().Read(buffer, 0, 4082); // read could be any value from 0 to 4082?
I noticed that DataAvailable and the return value of GetStream().Read() do not reliably tell whether there is incoming data on the way. Do I always need to write a Read() loop to read exactly the first 4 bytes?
// Read() loop
var ms = new MemoryStream();
while (ms.Length < 4)
{
    read = client.GetStream().Read(buffer, 0, 4 - (int) ms.Length);
    if (read > 0)
        ms.Write(buffer, 0, read);
}
The answer seems to be yes: we always have to be responsible for reading the same number of bytes that was sent. In other words, there has to be an application-level protocol to read exactly what was written to the underlying stream, because the stream itself does not know where a message starts or ends.
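That conclusion can be sketched with plain sockets in Python (an illustration of the framing idea, not the TcpClient API): the 4-byte prefix and the 80000-byte payload arrive as one undifferentiated byte stream, and only a read-exactly loop recovers the message boundary:

```python
import socket
import struct
import threading

def read_exactly(sock, n):
    """Loop until exactly n bytes are read; TCP may return less per call."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError("stream closed mid-message")
        buf += chunk
    return buf

a, b = socket.socketpair()
payload = b"p" * 80_000

def sender():
    # two writes on the sender, but a single 80004-byte stream on the wire
    a.sendall(struct.pack("<i", len(payload)))  # little-endian Int32 prefix
    a.sendall(payload)
    a.close()

threading.Thread(target=sender).start()
(size,) = struct.unpack("<i", read_exactly(b, 4))  # frame the first 4 bytes
body = read_exactly(b, size)                       # then exactly `size` more
assert size == len(payload) and body == payload
```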
I am trying to send a screenshot of a desktop over Winsock.
As such, there are four tasks:
Save bitmap to buffer
Write data across wire using a socket
Read data from wire using a socket
Load a bitmap from a buffer
I have saved the bitmap to a char array using GetDIBits.
Writing the data over to the server I have done, but I have questions.
For writing the data from the server to the client, do I need to use only one recv() call (I am using TCP), or do I need to split it up into multiple parts? I've read that TCP is a stream concept and that I wouldn't have to worry about packets because that is abstracted for me.
How would I go about loading the information from GetDIBits into a bitmap and displaying it on the main window?
I am guessing I have to use SetDIBits, but which device contexts do I use?
The Server screenshot capturer is here:
HDC handle_ScreenDC = GetDC(NULL);
HDC handle_MemoryDC = CreateCompatibleDC(handle_ScreenDC);
BITMAP bitmap;
int x = GetDeviceCaps(handle_ScreenDC, HORZRES);
int y = GetDeviceCaps(handle_ScreenDC, VERTRES);
HBITMAP handle_Bitmap = CreateCompatibleBitmap(handle_ScreenDC, x, y);
SelectObject(handle_MemoryDC, handle_Bitmap);
BitBlt(handle_MemoryDC, 0, 0, x, y, handle_ScreenDC, 0, 0, SRCCOPY);
GetObject(handle_Bitmap, sizeof(BITMAP), &bitmap);

BITMAPFILEHEADER bmfHeader;
BITMAPINFOHEADER bi;
bi.biSize = sizeof(BITMAPINFOHEADER);
bi.biWidth = bitmap.bmWidth;
bi.biHeight = bitmap.bmHeight;
bi.biPlanes = 1;
bi.biBitCount = 16;
bi.biCompression = BI_RGB;
bi.biSizeImage = 0;
bi.biXPelsPerMeter = 0;
bi.biYPelsPerMeter = 0;
bi.biClrUsed = 0;
bi.biClrImportant = 0;

// each row is padded to a DWORD (32-bit) boundary
DWORD dwBmpSize = ((bitmap.bmWidth * bi.biBitCount + 31) / 32) * 4 * bitmap.bmHeight;
HANDLE hDIB = GlobalAlloc(GHND, dwBmpSize);
char* bufptr = (char *) GlobalLock(hDIB);
GetDIBits(handle_ScreenDC, handle_Bitmap, 0, (UINT) bitmap.bmHeight, bufptr, (BITMAPINFO *) &bi, DIB_RGB_COLORS);
send(clientsock, bufptr, dwBmpSize, 0);
/* Do I need to packetize/split it up? Or is one send() good for the matching recv() on the client? */
/* I am assuming I must send the bi structure over Winsock as well, correct? */
And the receiving client code:
case WM_PAINT: {
    // I'm a GDI beginner, so I don't have a clue what I'm doing here as far as blitting the received bits; this is just some stuff I tried myself before asking for help
    PAINTSTRUCT paintstruct;
    HDC handle_WindowDC = BeginPaint(hwnd, &paintstruct);
    handle_MemoryDC = CreateCompatibleDC(handle_WindowDC);
    handle_Bitmap = CreateCompatibleBitmap(handle_WindowDC, 640, 360);
    std::cout << SetDIBits(handle_MemoryDC, handle_Bitmap, 0, bi.biHeight, buffer, (BITMAPINFO *) &bi, DIB_RGB_COLORS);
    SelectObject(handle_MemoryDC, handle_Bitmap);
    StretchBlt(handle_WindowDC, 50, 50, 640, 360, handle_MemoryDC, 0, 0, x, y, SRCCOPY);
    EndPaint(hwnd, &paintstruct);
}
Sockets do have limited buffer sizes at both ends (the exact size varies by platform and configuration). So if you dump a large block of data (like a full screen dump) in one call to a non-blocking send(), it will likely accept only part of the data, and you will need to manage your own buffer and call send() multiple times with the remainder. If you are using a blocking socket, however, you should be OK, as send() will simply block until all the data has been handed over.
The receiving side is similar: recv() returns whatever data is available at the time, which may be less than you asked for, whether the socket is blocking or not (a blocking recv() merely waits until at least some data has arrived). The data filters through bit by bit, and you will need to reassemble it from multiple recv() calls.
I have heard of issues with sending really large blocks of data in one hit, so if you are sending 5 megabytes of data in one hit, be aware there might be other issues coming into play as well.
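The "manage your own buffer, calling multiple sends" part can be sketched in Python (a generic socket illustration, not the Winsock code above): sock.send() may accept only part of a large buffer, so the unsent remainder is resent until everything is out:

```python
import socket
import threading

def send_all(sock, data):
    """Keep calling send() with the unsent remainder until all bytes are out."""
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise ConnectionError("socket connection broken")
        total += sent
    return total

a, b = socket.socketpair()
received = bytearray()

def receiver():
    # drain the socket; recv() returns b"" once the peer closes
    while True:
        chunk = b.recv(65536)
        if not chunk:
            break
        received.extend(chunk)

t = threading.Thread(target=receiver)
t.start()
data = bytes(range(256)) * 1000  # a screenshot-sized blob (~256 KB)
assert send_all(a, data) == len(data)
a.close()
t.join()
assert bytes(received) == data
```

This is exactly what a blocking send() does internally, which is why the answer says a blocking socket is the simpler choice here.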
I'm a newbie in Qt and C++. I'm trying to create a QTcpServer using QThreadPool so it can handle multiple clients, and multiple clients are able to connect without any issues. Now I'm trying to send an image from an Android phone, with a footer "IMGPNG" indicating the end of the image data. When the readyRead signal is emitted, I'm trying to read all the available data, then perform some string operations on it and reconstruct the image. I'm not sure how to receive the complete image for each client and then process it accordingly.
void VireClients::readyRead() // read ready
{
    int nsize = socket->bytesAvailable(); // trying to check the available bytes
    qDebug() << "Bytes Available" << nsize;
    while (socket->bytesAvailable() < nsize) {
        QByteArray data = socket->readAll(); // how to receive all the data and then process it?
    }

    /*! These lines call the thread pool instance and reimplement run */
    imageAnalysis = new VireImageAnalysis(); // creating a new instance of the QRunnable
    imageAnalysis->setAutoDelete(true);
    connect(imageAnalysis, SIGNAL(ImageAnalysisResult(int)), this, SLOT(TaskResult(int)), Qt::QueuedConnection);
    QThreadPool::globalInstance()->start(imageAnalysis);
}
Now I'm not sure how to get the data completely, or how to save the received data in an image format. I want to know how to completely receive the image data. Please help.
A call to readAll() will not always read the complete image as it obviously cannot know the size of the image. It will only read all currently available bytes which might be less than your whole file, or more if the sender is really fast and you cannot catch up reading. The same way readyRead() only informs you that there are bytes available but not that a whole file has been received. It could be a single byte or hundreds of bytes.
Either you know the size of your image in the first place because it is always fixed or the sender has to tell the receiver the number of bytes he wants to sent.
Then you can either ignore all readyRead() signals until bytesAvailable() matches your image size and call readAll() to read the whole image at once, or read whenever bytes are available and fill up your buffer until the number of bytes read matches what the sender told you he will send.
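The second strategy (buffering until the announced size is reached) can be sketched as follows. This Python class is only an illustration of the idea, with feed() standing in for whatever is done on each readyRead():

```python
# Sketch: append whatever bytes arrive to a buffer on every
# "readyRead"-like event, and only hand the image to the application
# once the announced size has been reached.

class ImageAssembler:
    def __init__(self, expected_size):
        self.expected_size = expected_size
        self.buffer = bytearray()

    def feed(self, chunk):
        """Call on every readyRead; returns the image once it is complete."""
        self.buffer.extend(chunk)
        if len(self.buffer) >= self.expected_size:
            return bytes(self.buffer[:self.expected_size])
        return None  # still waiting for more data

asm = ImageAssembler(expected_size=10)
assert asm.feed(b"1234") is None       # partial: keep waiting
assert asm.feed(b"5678") is None
assert asm.feed(b"90") == b"1234567890"
```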
Solved the image-saving issue by collecting the data into a temporary string and finally using OpenCV's imwrite to save the image:
while (iBytesAvailable > 0)
{
    if (socket->isValid())
    {
        char* pzBuff = new char[iBytesAvailable];
        int iReadBytes = socket->read(pzBuff, iBytesAvailable);
        if (iReadBytes > 0)
        {
            result1 += iReadBytes;
            str += std::string(pzBuff, iReadBytes);
            if (str.size() > 0) {
                search = str.find("IMGPNG");
                if (search == result1 - 6) {
                    finalID = QString::fromStdString(str);
                    Singleton_Global *strPtr = Singleton_Global::instance();
                    strPtr->setResult(finalID);
                    /*! Process the received image here */
                    SaveImage = new VSaveImage();
                    SaveImage->setAutoDelete(false);
                    connect(SaveImage, SIGNAL(SaveImageResult(QString)), this, SLOT(TaskResult(QString)), Qt::QueuedConnection);
                    threadPool->start(SaveImage);
                }
            }
        }
        delete[] pzBuff; // new[] must be matched by delete[]
    }
    iBytesAvailable = socket->bytesAvailable(); // refresh, otherwise the loop never terminates
}
Finally did the image saving in the run method of VSaveImage. @DavidSchwartz, you were a great help. Thanks all for your help.