I have written the following code and it does not work. These errors come up while uploading a file to the web service:
1. An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
2. The underlying connection was closed: An unexpected error occurred on a send.
I have used the following code to call the web service, and the errors occur when the file size is more than 90 MB:
LocalService.IphoneService obj = new LocalService.IphoneService();
byte[] objFile = FileToByteArray(@"D:\Brijesh\My Project\WebSite5\IMG_0010.MOV");
int RtnVal = obj.AddNewProject("demo", "demo", "demo@demo.com", "demo@demo.com", 1, 2, 29, "IMG_0010.MOV", objFile, "00.00.06");
public byte[] FileToByteArray(string fileName)
{
    // Note: this reads the entire file into memory in one go.
    long byteLength = new System.IO.FileInfo(fileName).Length;
    using (var fs = new System.IO.FileStream(fileName, System.IO.FileMode.Open, System.IO.FileAccess.Read))
    using (var binaryReader = new System.IO.BinaryReader(fs))
    {
        //byteLength = 94371840;
        return binaryReader.ReadBytes((Int32)byteLength);
    }
}
No socket will transfer 200 MB in one chunk. You will receive the data in chunks, mostly between 1024 and 4096 bytes (depending on your settings).
Read the data in chunks.
Reassemble the file on the server.
Then use the received file, assembled from those bytes, as you need.
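The loop behind those steps is language-agnostic; here is a minimal Java sketch (class name and chunk size are illustrative, not part of the original code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedTransfer {
    // Read the incoming stream a chunk at a time and append each chunk to
    // the destination, so the whole file is never held in memory at once.
    public static long reassemble(InputStream upload, OutputStream file, int chunkSize) throws IOException {
        byte[] chunk = new byte[chunkSize];
        long total = 0;
        int read;
        while ((read = upload.read(chunk)) > 0) {
            file.write(chunk, 0, read); // e.g. a FileOutputStream on the server
            total += read;
        }
        return total;
    }
}
```

On the server side `file` would typically be a FileOutputStream opened in append mode; on the client side the same loop works in reverse for sending.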
For an ASP.NET web service, enable the web service to receive large amounts of data:
Increase the ASP.NET limits on the maximum size of SOAP messages and
on the maximum number of seconds that a request is allowed to execute
by adding the httpRuntime configuration element to the application's
Web.config file. The following example sets the ASP.NET limit on the
maximum size of an incoming request to 400 MB and the maximum amount
of time a request is allowed to execute to 5 minutes (300 seconds).
Put this in your web.config.
<configuration>
  <system.web>
    <httpRuntime maxRequestLength="409600"
                 executionTimeout="300"/>
  </system.web>
</configuration>
Remember you will be blocking a thread for as long as this request is being processed. This will not scale to a large number of users.
I'm downloading a file from an HTTP server that has to be requested using POST data. The download completes, and when I target .NET Core 2.1 it downloads a 1 MB file in around 50 msec. However, I'm developing an application targeting .NET Framework 4.7.1, and when I run the EXACT same code targeting that framework (even in a brand new empty project), instead of 50 msec it takes roughly 1200 times longer to download the EXACT same file from the EXACT same server. The file does eventually download successfully, but it takes far too long.
Looking at the Wireshark data, I can see that when targeting either framework, the server sends the data mostly as packets with 1300 bytes of payload each. When targeting .NET Core 2.1, it sends all of the packets in rapid succession. When targeting .NET Framework 4.7.1, it rapidly sends exactly 12 packets containing 1300 bytes and one packet containing 784 bytes (totaling exactly 16384 bytes, or 2^14), then sends nothing for about 1 second, then sends another 16384-byte burst, pauses for about 1 second again, and so on until the entire file has been sent.
What am I doing wrong? Since I know the server is capable of sending all of the packets in rapid succession, what do I need to change about my request to make it happen?
This is my code:
Uri address = new Uri("http://servername/file.cgi");
HttpWebRequest request = WebRequest.CreateHttp(address);
string postData = "filename=/folder/filename.xml&Download=Download";
var data = Encoding.ASCII.GetBytes(postData);
request.Credentials = new NetworkCredential("myusername", "mypassword");
request.Method = "POST";
request.KeepAlive = true;
request.UserAgent = "ThisApp";
request.Accept = "text/xml";
request.ContentType = "application/x-www-form-urlencoded";
request.ContentLength = data.Length;
using (var stream = request.GetRequestStream())
{
stream.Write(data, 0, data.Length);
}
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream output = File.OpenWrite("ReceivedFile.xml"))
using (Stream input = response.GetResponseStream())
{
input.CopyTo(output);
}
Thanks!!
Also, I've already tried several things I've found in other posts (this one was particularly relevant: HttpWebRequest is extremely slow!), but none of them have helped:
ServicePointManager.DefaultConnectionLimit = 200;
ServicePointManager.Expect100Continue = false;
request.Proxy = null;
request.SendChunked = true;
request.ServicePoint.ReceiveBufferSize = 999999;
I've also tried adding this to app.config:
<connectionManagement>
<add address="*" maxconnection="200"/>
</connectionManagement>
Found it! I needed to add BOTH of these lines:
ServicePointManager.Expect100Continue = false;
ServicePointManager.UseNagleAlgorithm = false;
(Disabling Expect: 100-continue avoids an extra handshake round trip on the POST, and turning off the Nagle algorithm stops small outgoing segments from being held back waiting for ACKs, which appears to be what caused the periodic stalls.)
@GetMapping("/")
@ResponseBody
public void getInvoice(@RequestParam String DocumentId,
                       HttpServletResponse response) {
    DocumentDAO documentDAO = null;
    try {
        documentDAO = service.downloadDocument(DocumentId);
        response.setContentType("application/" + documentDAO.getSubtype());
        IOUtils.copy(documentDAO.getDocument(), response.getOutputStream());
        response.flushBuffer();
        documentDAO.getDocument().close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The task is to stream PDF documents from a back-end server (many big documents, up to 200 MB) to the browser via a Spring MVC controller. The back-end server returns the document as an InputStream, and I copy it to the response OutputStream.
IOUtils.copy(documentDAO.getDocument(), response.getOutputStream());
And it works. I just do not like the Java memory consumption on the machine where this Spring MVC application runs.
If it is streaming, why does memory consumption increase so much when a customer sends a request to this MVC controller?
When a big document (e.g. 100 MB) is requested, the Java heap size grows accordingly.
What I expect is that my JVM should use only a buffer-sized amount of memory; it should not load the whole document into memory, just stream it through.
Is my expectation wrong? Or is it correct and I should do something differently?
Thank you in advance.
Here is a graph of the memory increase when requesting a 63 MB document:
https://i.imgur.com/2fjiHVB.png
and then repeating the request after a while:
https://i.imgur.com/Kc77nGM.png
and then the GC doing its job at the end:
https://i.imgur.com/WeIvgoT.png
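For what it's worth, IOUtils.copy itself only ever holds a small fixed buffer, so the copy loop is unlikely to be what pins 100 MB on the heap; more likely the back-end client materializes the document behind the InputStream, or the servlet container buffers the response. A minimal sketch of an explicit copy loop with a fixed 8 KB buffer and a flush per chunk (class and method names are mine, not Spring or commons-io API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class FixedBufferCopy {
    // Stream from 'in' to 'out' using one reusable 8 KB buffer, flushing
    // after each chunk so bytes are pushed out instead of accumulating.
    // Heap usage stays constant only if 'in' is itself truly streaming.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8 * 1024];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            out.flush();
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(new byte[100_000]), sink);
        System.out.println("copied=" + copied);
    }
}
```

If such a loop still shows heap growth proportional to document size, the allocation is happening before the copy, in whatever produces the InputStream.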
I am trying to send a file to an HTTP server via a POST request (C++ and WinAPI). These are the steps:
// Read file into "buff" and file size into "buffSize"
....
....
....
HINTERNET internetRoot;
HINTERNET httpSession;
HINTERNET httpRequest;
internetRoot = InternetOpen(agent_info, INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, NULL);
//Connecting to the http server
httpSession = InternetConnect(internetRoot, IP,PORT_NUMBER, NULL, NULL, INTERNET_SERVICE_HTTP, NULL, NULL);
//Creating a new HTTP POST request to the default resource on the server
httpRequest = HttpOpenRequest(httpSession, TEXT("POST"), TEXT("/Post.aspx"), NULL, NULL, NULL, INTERNET_FLAG_RELOAD | INTERNET_FLAG_NO_CACHE_WRITE, NULL);
//Send POST request
HttpSendRequest(httpRequest, NULL, NULL, buff, buffSize);
//Closing handles...
On the server I am receiving the file using this code (ASP.NET):
Stream httpStream;
try
{
httpStream = request.RequestContext.HttpContext.Request.InputStream;
}
catch (HttpException)
{
return;
}
byte[] tmp = new byte[httpStream.Length];
int totalBytesRead = 0;
int bytesRead;
// Read at most the space left in the buffer on each pass; asking for a
// full 1 MB when less remains would throw an ArgumentException.
while ((bytesRead = httpStream.Read(tmp, totalBytesRead,
        Math.Min(1024 * 1024, tmp.Length - totalBytesRead))) > 0)
{
    totalBytesRead += bytesRead;
}
httpStream.Close();
httpStream.Dispose();
//Save "tmp" to file...
I can send large files to the local server (the Visual Studio ASP.NET development server), but I cannot send files over 1 MB to the internet server (HttpOpenRequest is failing).
Is there a better way to upload files?
Caveat: my WinINet is very rusty these days.
I wonder whether you ought to be setting the Content-Length header yourself. Your code seems to assume either that (a) you are making an HTTP/1.0 request, or (b) HttpSendRequest will add the header for you (which I don't think it does).
Either way, without being told how big the incoming request is, a default-configured IIS will reject it if it can't quickly determine the request size itself.
My guess is that if you use the lpszHeaders and dwHeadersLength parameters of HttpSendRequest to include an appropriate Content-Length header, the problem will be resolved.
What error do you receive? I mean, what does GetLastError() return? Does it work if you send an 800 KB file? I don't really see how, because HttpOpenRequest does not know anything about the size of the data.
Maybe it times out? That would mean HttpSendRequest actually fails: it may buffer all the data, but since the size is huge, sending takes more time than the timeout allows.
Use the following code to query the current timeouts (in milliseconds); note that InternetQueryOption takes a pointer to the buffer size, not the size by value:
DWORD dwSize = sizeof(DWORD);
InternetQueryOption(h, INTERNET_OPTION_RECEIVE_TIMEOUT, &dwReceiveTimeOut, &dwSize);
InternetQueryOption(h, INTERNET_OPTION_SEND_TIMEOUT, &dwSendTimeOut, &dwSize);
and the following to set new ones:
InternetSetOption(h, INTERNET_OPTION_RECEIVE_TIMEOUT, &dwNewReceiveTimeOut, sizeof(dwNewReceiveTimeOut));
InternetSetOption(h, INTERNET_OPTION_SEND_TIMEOUT, &dwNewSendTimeOut, sizeof(dwNewSendTimeOut));
I am trying to call a web service that returns far too much data just to extract a small piece of it.
So I decided not to use the standard client generated by Java.
I use the following code to make the connection:
HttpURLConnection connection;
byte[] requestData = .....
URL url = new URL(wsUrl);
connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
connection.setDoOutput(true);
connection.setRequestProperty("Content-Type", "text/xml");
connection.setRequestProperty("Content-Length", String.valueOf(requestData.length));
connection.connect();
OutputStream connOs = connection.getOutputStream();
connOs.write(requestData);
connOs.close();
InputStream is = connection.getInputStream(); // <<< THIS IS THE MOST TIME CONSUMING, it takes about 70 ms
byte[] rply = stream2Bytes(is);
is.close();
connection.disconnect();
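The stream2Bytes helper isn't shown in the question; a typical implementation (this exact body is a guess, not the asker's code) just drains the stream into a growing in-memory buffer:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    // Drain an InputStream fully into a byte array, 4 KB at a time.
    public static byte[] stream2Bytes(InputStream is) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = is.read(buf)) != -1) {
            bos.write(buf, 0, n);
        }
        return bos.toByteArray();
    }
}
```

Whatever its implementation, it only runs after the response starts arriving, so it is not where the 70 ms goes.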
The most time is consumed by the call to connection.getInputStream(), which takes about 70 ms.
I have tried setting many request headers to reduce this time, but without success.
My understanding is that HttpURLConnection uses the HTTP/1.1 protocol, which sends the Connection: keep-alive header by default, so the underlying TCP connection is reused.
connection.getInputStream() is the call that blocks waiting for the server's response; you can't speed that part up from the client.
So we have a console-hosted WCF service and an ASP.NET web service (on IIS).
After some heavy operation, the WCF service must return some (large) data to the ASP.NET web service for further processing. I tested with small results and everything was OK.
But when testing with real data, a serialized result object of nearly 4.5 MB, an error occurs on the ASP.NET web service, which is the client in the WCF client-server communication.
This is the error I got:
The socket connection was aborted. This could be caused by an error
processing your message or a receive timeout being exceeded by the
remote host, or an underlying network resource issue. Local socket
timeout was '04:00:00'. Inner Exception: SocketException:"An existing
connection was forcibly closed by the remote host" ErrorCode = 10054
Message sizes are configured with the following binding (on both server and client):
NetTcpBinding netTcpBinding = new NetTcpBinding();
netTcpBinding.TransactionFlow = true;
netTcpBinding.SendTimeout = new TimeSpan(0, 4, 0, 0);
netTcpBinding.Security.Mode = SecurityMode.None;
netTcpBinding.Security.Message.ClientCredentialType = MessageCredentialType.None;
netTcpBinding.Security.Transport.ClientCredentialType = TcpClientCredentialType.None;
netTcpBinding.Security.Transport.ProtectionLevel = ProtectionLevel.None;
netTcpBinding.MaxReceivedMessageSize = 2147483647;
netTcpBinding.MaxBufferSize = 2147483647;
netTcpBinding.MaxBufferPoolSize = 0;
netTcpBinding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
netTcpBinding.ReaderQuotas.MaxArrayLength = int.MaxValue;
netTcpBinding.ReaderQuotas.MaxBytesPerRead = int.MaxValue;
netTcpBinding.ReaderQuotas.MaxDepth = 32;
netTcpBinding.ReaderQuotas.MaxNameTableCharCount = 16384;
The MaxObjectsInGraph property is configured in a config file.
What can you advise me? I also need an example of how to set the MaxObjectsInGraph property programmatically on the client and the server.
Thanks for answers.
The problem was fixed by setting MaxObjectsInGraph (for the serializer) programmatically, as a service attribute.
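For reference, the same limit can also be raised declaratively: in a config file it corresponds to the maxItemsInObjectGraph attribute of the dataContractSerializer behavior element. A sketch (the behavior name is arbitrary, and the behavior must be applied to the service; the client uses the same element under endpointBehaviors):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="largeObjectGraph">
      <!-- raise the serializer's object-graph limit to int.MaxValue -->
      <dataContractSerializer maxItemsInObjectGraph="2147483647" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```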