@GetMapping("/")
@ResponseBody
public void getInvoice(@RequestParam String DocumentId,
                       HttpServletResponse response) {
    DocumentDAO documentDAO = null;
    try {
        documentDAO = service.downloadDocument(DocumentId);
        response.setContentType("application/" + documentDAO.getSubtype());
        IOUtils.copy(documentDAO.getDocument(), response.getOutputStream());
        response.flushBuffer();
        documentDAO.getDocument().close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The task is to stream PDF documents from a back-end server (many large documents, up to 200 MB) to the browser via a Spring MVC controller. The back-end server returns the document as an InputStream, which I copy to the response OutputStream:
IOUtils.copy(documentDAO.getDocument(), response.getOutputStream());
And it works. I just don't like the Java memory consumption on the machine where this Spring MVC application runs.
If this is streaming, why does memory consumption increase so much when a customer sends a request to this MVC controller?
When a big document (e.g. 100 MB) is requested, the Java heap size increases accordingly.
What I expect is that my JVM should use only a buffer-sized amount of memory; it should not load the whole document into memory, just stream it through.
Is my expectation wrong? Or is it correct, and should I be doing something differently?
Thank you in advance.
Here is a graph of the memory increase when requesting a 63 MB document:
https://i.imgur.com/2fjiHVB.png
and then repeating the request after a while:
https://i.imgur.com/Kc77nGM.png
and then the GC doing its job at the end:
https://i.imgur.com/WeIvgoT.png
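The pattern itself looks right: a copy loop over a fixed-size buffer keeps per-request memory bounded regardless of document size (IOUtils.copy already works this way internally, with a buffer of a few kilobytes), and the third graph suggests the heap growth is mostly collectible garbage rather than retained memory. As a sanity check, here is a minimal, self-contained sketch of such a bounded-buffer copy; the 8 KB buffer size and the per-chunk flush are illustrative choices, not something from the question:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BufferedCopy {

    // Copies in fixed-size chunks; memory use is bounded by the buffer,
    // regardless of how large the total stream is.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            out.flush(); // push the chunk downstream instead of accumulating it
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // 1 MB of zeroes as a stand-in for a large document
        byte[] data = new byte[1 << 20];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink);
        System.out.println(copied); // 1048576
    }
}
```

In a controller you would call this with documentDAO.getDocument() and response.getOutputStream(). In recent Spring versions a StreamingResponseBody return type serves a similar purpose.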
Related
I'm facing a problem with Kestrel server performance. I have the following scenario:
TestClient(JMeter) -> DemoAPI-1(Kestrel) -> DemoAPI-2(IIS)
I'm trying to create a sample application that returns file content on request.
TestClient (100 threads) sends requests to DemoAPI-1, which in turn calls DemoAPI-2. DemoAPI-2 reads a fixed XML file (1 MB max) and returns its content as the response (in production, DemoAPI-2 will not be exposed to the outside world).
When I tested direct access from TestClient -> DemoAPI-2 I got the expected (good) result:
Average : 368ms
Minimum : 40ms
Maximum : 1056ms
Throughput : 40.1/sec
But when I tried to access it through DemoAPI-1 I got the following result:
Average : 48232ms
Minimum : 21095ms
Maximum : 49377ms
Throughput : 2.0/sec
As you can see, there is a huge difference. I'm not getting even 10% of DemoAPI-2's throughput. I was told that Kestrel is more efficient and faster than traditional IIS. Also, because there is no problem with direct access, I think we can eliminate the possibility of a problem in DemoAPI-2.
※Code of DemoAPI-1:
string base64Encoded = null;
var request = new HttpRequestMessage(HttpMethod.Get, url);
var response = await this.httpClient.SendAsync(request, HttpCompletionOption.ResponseContentRead).ConfigureAwait(false);
if (response.StatusCode.Equals(HttpStatusCode.OK))
{
    var content = await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
    base64Encoded = Convert.ToBase64String(content);
}
return base64Encoded;
※Code of DemoAPI-2:
[HttpGet("Demo2")]
public async Task<IActionResult> Demo2Async(int wait)
{
    try
    {
        if (wait > 0)
        {
            await Task.Delay(wait);
        }
        var path = Path.Combine(Directory.GetCurrentDirectory(), "test.xml");
        var file = System.IO.File.ReadAllText(path);
        return Content(file);
    }
    catch (System.Exception ex)
    {
        return StatusCode(500, ex.Message);
    }
}
Some additional information:
Both APIs are async.
Both APIs are hosted on different EC2 instances (C5.xlarge, Windows Server 2016).
DemoAPI-1 (Kestrel) is a self-contained API (without a reverse proxy).
TestClient (JMeter) is set to 100 threads for this test.
No other configuration has been done for the Kestrel server as of now.
There are no action filters, middleware, or logging that could affect the performance as of now.
Communication is done over SSL on port 5001.
The wait parameter for DemoAPI-2 is set to 0 as of now.
The CPU usage of DEMOAPI-1 is not over 40%.
The problem was due to HttpClient's port exhaustion issue.
I was able to solve it by using IHttpClientFactory.
The following article might help someone who faces a similar problem:
https://www.stevejgordon.co.uk/httpclient-creation-and-disposal-internals-should-i-dispose-of-httpclient
DEMOAPI-1 performs a non-asynchronous read of the streams:
var bytes = stream.Read(read, 0, DataChunkSize);
while (bytes > 0)
{
    buffer += System.Text.Encoding.UTF8.GetString(read, 0, bytes);
    // Replace with ReadAsync
    bytes = stream.Read(read, 0, DataChunkSize);
}
That can be an issue for throughput under a lot of requests.
Also, I'm not sure why you are not testing the same code with both IIS and Kestrel; I would assume you only need to make environmental changes, not code changes.
I'm trying to figure out why my web service is so slow and find ways to make it respond faster. The current average response time without custom processing involved (i.e. an ApiController action returning a very simple object) is about 75 ms.
The setup
Machine:
32 GB RAM, SSD disk, 4 x 2.7 GHz CPUs, 8 logical processors, x64 Windows 10
Software:
1 ASP.NET MVC website running .NET 4.0 on IIS Express (System.Web.Mvc v5.2.7.0)
1 ASP.NET Web API website running .NET 4.0 on IIS Express (System.Net.Http v4.2.0.0)
1 RabbitMQ message bus
ASP.NET Web API code (API controller action)
[Route("Send")]
[HttpPost]
[AllowAnonymous]
public PrimitiveTypeWrapper<long> Send(WebsiteNotificationMessageDTO notification)
{
    _messageBus.Publish<IWebsiteNotificationCreated>(new { Notification = notification });
    return new PrimitiveTypeWrapper<long>(1);
}
The body of this method takes 2 ms. Stackify tells me there's a lot of overhead in the AuthenticationFilterResult.ExecuteAsync method, but since it's an ASP.NET internal I don't think it can be optimized much.
ASP.NET MVC code (MVC controller action)
The RestClient implementation is shown below. The HttpClientFactory returns a new HttpClient instance with the necessary headers and base path.
public async Task<long> Send(WebsiteNotificationMessageDTO notification)
{
    var result = await _httpClientFactory.Default.PostAndReturnAsync<WebsiteNotificationMessageDTO, PrimitiveTypeWrapper<long>>("/api/WebsiteNotification/Send", notification);
    if (result.Succeeded)
        return result.Data.Value;
    return 0;
}
Executing 100 requests as fast as possible against the backend REST service:
[HttpPost]
public async Task SendHundredNotificationsToMqtt()
{
    var sw = new Stopwatch();
    sw.Start();
    for (int i = 0; i < 100; i++)
    {
        await _notificationsRestClient.Send(new WebsiteNotificationMessageDTO()
        {
            Severity = WebsiteNotificationSeverity.Informational,
            Message = "Test notification " + i,
            Title = "Test notification " + i,
            UserId = 1
        });
    }
    sw.Stop();
    Debug.WriteLine("100 messages sent, took {0} ms", sw.ElapsedMilliseconds);
}
This takes on average 7.5 seconds.
Things I've tried
Checked the number of available threads on both the REST service and the MVC website:
int workers;
int completions;
System.Threading.ThreadPool.GetMaxThreads(out workers, out completions);
which returned for both:
Workers: 8191
Completions: 1000
Removed all RabbitMQ message bus connectivity to ensure it's not the culprit. I've also removed the message bus publish call _messageBus.Publish<IWebsiteNotificationCreated>(new { Notification = notification }); from the REST method, so all it does is return 1 inside a wrapper object.
The backend REST service uses Identity Framework with bearer token authentication; to eliminate most of it, I've also tried marking the controller action on the REST service as AllowAnonymous.
Ran the project in Release mode: no change.
Ran the sample 100 requests twice to exclude service initialization cost: no change.
After all these attempts the problem remains: it still takes about 75 ms per request. Is this as low as it goes?
Here's a stackify log for the backend with the above changes applied.
The web service remains slow. Is this as fast as it can get without an expensive hardware upgrade, or is there something else I can look into to figure out what's making my web service this slow?
Inside a Spring controller, I write an Excel stream into the response:
HSSFWorkbook workbook = getWorkbook();
OutputStream out = response.getOutputStream();
response.setHeader("pragma", "public");
response.setHeader("Cache-Control", "public");
response.setContentType("application/vnd.ms-excel");
response.setHeader("Content-Disposition", "attachment;filename=sampleexcel.xls");
workbook.write(out);
out.close();
// response.flushBuffer();
As per this link: How to read and copy the HTTP servlet response output stream content for logging
I implemented a response wrapper.
Below is the interceptor code:
public void afterCompletion(HttpServletRequest request,
        HttpServletResponse response, Object handler, Exception ex)
        throws Exception {
    HttpServletResponseCopier resp = new HttpServletResponseCopier(response);
    byte[] responseData = resp.getCopy();
    System.out.println("length " + responseData.length); // it's 0 bytes
}
Basically, I want to read the OutputStream contents into a temp file, then add encryption information to it, and finally write this encrypted file into the response stream.
In the above code, resp.getCopy() is empty, hence it writes 0 bytes into the temp file.
Any pointers on what is wrong? Is there an alternate way to achieve this?
Spring 3.1, JDK 1.7
Oops, a Spring MVC interceptor is not a filter. It provides hooks that are called before controller execution, after controller execution, and after view generation, but it cannot replace the response.
The filter used in the referenced post actually replaces the response with a wrapper, so that everything written to the response goes into the wrapper and is processed at the time it is written. Here you only create the wrapper once everything has already been written, so it can intercept... nothing.
You will have to implement a filter and a custom response wrapper to achieve your goal.
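To make the "wrap before anything is written" point concrete, here is a minimal, container-independent sketch of the tee idea using only standard streams; the servlet API is deliberately left out, and TeeOutputStream is a hypothetical name. Every byte written to the real stream is copied into a side buffer at write time, which is exactly what the wrapper in the referenced post does:

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch: duplicates everything written to the underlying stream
// into a side buffer, as it is written (not after the fact).
public class TeeOutputStream extends FilterOutputStream {
    private final ByteArrayOutputStream copy = new ByteArrayOutputStream();

    public TeeOutputStream(OutputStream real) {
        super(real);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);   // 'out' is the real stream held by FilterOutputStream
        copy.write(b);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        copy.write(b, off, len);
    }

    public byte[] getCopy() {
        return copy.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream real = new ByteArrayOutputStream();
        TeeOutputStream tee = new TeeOutputStream(real);
        tee.write("hello".getBytes());
        tee.flush();
        System.out.println(real.toString() + " / " + new String(tee.getCopy())); // hello / hello
    }
}
```

In a real servlet filter you would return such a stream from a getOutputStream() override on an HttpServletResponseWrapper, install the wrapper before chain.doFilter(), and read getCopy() afterwards. Newer Spring versions ship ContentCachingResponseWrapper, which implements essentially this.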
I have a process where I aggregate data and send a request via an HTTP POST out of a map job, and I have to wait for the results. Unfortunately, I've encountered problems with this approach.
Data is lost during sending. We managed to investigate this issue to the point where we know that the communication "destroys" sockets and therefore data is lost. Does anyone have experience with doing HTTP POST requests out of a mapper, and what should one be aware of?
Some sample code; the mapper:
public void map(final LongWritable key, final Text value, Context context) throws IOException {
    String someData = value.toString();
    buffer.add(someData);
    if (buffer.size() >= MAX_BUFFER_SIZE) {
        emit(buffer);
    }
}
In "emit" I serialize the data (this is fine; I have tested it several times) and send it afterwards. The sender:
byte[] received = null;
URL connAddress = new URL(someComponentToBeAdressed);
HttpURLConnection urlConn;
urlConn = (HttpURLConnection) connAddress.openConnection();
urlConn.setDoInput(true);
urlConn.setDoOutput(true);
urlConn.setRequestMethod("POST");
urlConn.setRequestProperty("Content-type", "text/plain");
urlConn.getOutputStream().write(serialized_buffer);
urlConn.getOutputStream().flush();
urlConn.getOutputStream().close();
received = IOUtils.toByteArray(urlConn.getInputStream());
urlConn.disconnect();
Thanks in advance.
We were able to fix this issue. It was not an error in Hadoop; the error lay in our Apache Tomcat configuration: some timeouts were set to too small a time period. For some bigger chunks of data we exceeded the timeout and got errors. Unfortunately, the exceptions were not that helpful.
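Since the root cause was a timeout, it may also help to set explicit client-side timeouts on the mapper's connection, so that a slow transfer fails with a clear SocketTimeoutException instead of a silently dropped socket. A minimal sketch; the timeout values and URL are illustrative assumptions, not taken from the question:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutDemo {

    // Returns a connection with explicit timeouts configured. openConnection()
    // does not touch the network, so this is safe to set up before connecting.
    public static HttpURLConnection configure(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(10_000);  // fail fast if the server is unreachable
        conn.setReadTimeout(120_000);    // allow for long server-side processing
        return conn;
    }

    public static void main(String[] args) throws IOException {
        HttpURLConnection conn = configure("http://example.com/upload");
        System.out.println(conn.getConnectTimeout()); // 10000
        System.out.println(conn.getReadTimeout());    // 120000
    }
}
```

These client-side values should be chosen to match (or exceed) the server-side timeouts configured in Tomcat, so both ends agree on how long a large chunk is allowed to take.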
I am building a website where I need a page where users can upload large video files. I have created a WCF service with streaming, but I am calling that WCF service from the Button_Click event of a web page.
I used the article mentioned below for the WCF service creation:
WCF Streaming
I used streaming because it should be efficient and should not buffer the file in the server's memory.
Now, my questions:
1) I suspect that the entire file is uploaded to the web server and then transferred to the WCF service server. If this is true, then I am not getting the advantage of streaming, and IIS and the web server will go down very soon if a user uploads a large file or multiple users upload files concurrently.
2) Is there another, more efficient way to do the same operation with some other technique?
Please help me.
EDIT:
Even if I am not calling the WCF service method from the ASP.NET code, bytes are still transferred to the web server, which I verified with HttpFox.
I checked the above with an upload control and one button on the UI whose click event is bound to a method in the code-behind.
So I am still confused about which way the data is transferred:
Client Machine -> Web Server (ASP.NET Application) -> Service Server (WCF Service)
Client Machine -> Service Server (WCF Service)
NOTE: If I put a breakpoint on Button_Click and upload a 10 KB file, it is hit in less than 1 second, but uploading a 50 MB file takes time.
I placed the code calling the WCF service inside that Button_Click event.
1) I am having doubts that the entire file is uploaded to the web server and then it is transferred to WCF Service server... if this is true then I am not getting advantage of streaming as well as IIS and web server will be down very soon if user uploads large file or multiple users are uploading files concurrently
No, you're confusing things here. When you use WCF streaming to upload a large file, the file is sent in chunks, in blocks of several kilobytes in size. The WCF server, running in IIS or self-hosted in an NT service or a console app, will receive those chunks and write them to disk as they arrive.
You don't "upload the whole file to the web server" and then "transfer it" to the WCF service; the WCF service itself receives and handles the file, and only once.
If you host your WCF service yourself, in a console app, a WinForms app, or a Windows NT service, there is no IIS or web server involved at all. WCF handles it all by itself.
Using WCF streaming is probably one of the most memory-efficient and simplest ways to transfer large files to a server.
Check out some more examples and blog posts on the topic:
MSDN WCF Streaming Sample
Data Transfer Using Self Hosted WCF Service
Sending Attachments with WCF
Progress Indication while Uploading/Downloading Files using WCF
Here is your best solution. I went the same route as you and concluded FTP is easier and works flawlessly. Here is some example code.
First get this library; it works flawlessly:
http://www.freedownloadscenter.com/Programming/Components_and_Libraries/BytesRoad_NetSuit_Library.html
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;
using System.IO;
using System.Configuration;
using System.Collections.Specialized;
using System.Drawing;
using System.Drawing.Imaging;
using System.Net;
using BytesRoad.Net.Ftp;

namespace GetMedia
{
    class Program
    {
        static void Main(string[] args)
        {
            string strPath;
            string strThumbPath;
            string strThumbLocalPath;
            string strURLRoot;
            string strVideoFile;
            string strThumbfile;
            string strError;
            BizetDataDataContext db = new BizetDataDataContext();
            VCMediaDataContext db2 = new VCMediaDataContext();
            db.Connection.ConnectionString = Settings.Default.ConnectionString;
            db2.Connection.ConnectionString = Settings.Default.ConnectionString;

            //Temp Folder
            strPath = Settings.Default.TempFolder;
            strThumbLocalPath = Settings.Default.ThumbPath;

            //download video and thumb
            //then upload to mediaserver
            IQueryable<BizetInfo> custQuery =
                from bizet in db.BizetInfos
                where bizet.Path != null
                select bizet;

            foreach (BizetInfo objbizet in custQuery)
            {
                //Grab filename and path
                strVideoFile = Path.GetFileName(objbizet.Path).Replace("%20", "_").Replace("_medium", "").Replace(" ", "_");
                strThumbfile = Path.GetFileName(objbizet.Path).Replace("%20", " ").Replace("_medium.wmv", ".mpg.png");
                strURLRoot = objbizet.Path.Replace(Path.GetFileName(objbizet.Path), "");
                strThumbPath = strURLRoot + strThumbfile;
                strError = "";
                try
                {
                    wsViaCastMedia.MediaTransferSoapClient ws = new wsViaCastMedia.MediaTransferSoapClient();
                    System.Net.WebClient wc = new System.Net.WebClient();
                    //connect to Bizet
                    Console.WriteLine("Starting spotID: " + objbizet.SPOTID.ToString().Trim());
                    Console.WriteLine("connected to ws");
                    Console.WriteLine("Downloading Video File");
                    //Download Video
                    wc.DownloadFile(objbizet.Path, strPath + strVideoFile);
                    //Download Thumb
                    Console.WriteLine("Downloading Thumb File");
                    wc.DownloadFile(strThumbPath, strThumbLocalPath + strThumbfile);
                    wc.Dispose();
                    //new ftp code
                    BytesRoad.Net.Ftp.FtpClient f = new BytesRoad.Net.Ftp.FtpClient();
                    f.PassiveMode = false;
                    f.Connect(999999999, "IPADDRESS OF FTP", 21);
                    f.Login(999999999, "", "");
                    try
                    {
                        f.ChangeDirectory(999999999, objbizet.CLIENTID.ToString().Trim());
                    }
                    catch (Exception e)
                    {
                        f.CreateDirectory(999999999, objbizet.CLIENTID.ToString().Trim());
                        f.ChangeDirectory(999999999, objbizet.CLIENTID.ToString().Trim());
                        Console.WriteLine(e);
                    }
                    f.PutFile(999999999, strVideoFile, "E:\\temp\\" + strVideoFile);
                    Console.WriteLine("Transfer of Video File " + objbizet.Path + " Complete");
                    //response.Close();
                    f.Disconnect(999999999);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                    strError = e.ToString();
                }
                finally //Update Data
                {
                    //check if spot Exists ///need to fix
                    //var myquery = from m in db2.Medias
                    //              where m.SpotID == Convert.ToInt32(objbizet.SPOTID.Trim())
                    //              select m;
                    //foreach (var mm in myquery)
                    //{
                    //    //db2.DeleteMedia(objbizet.SPOTID.Trim());
                    //}
                    if (strError == "")
                    {
                        db2.AddMedia(Convert.ToInt32(objbizet.SPOTID), objbizet.Title, objbizet.Keywords, objbizet.Path, strVideoFile, objbizet.CLIENTNAME, Convert.ToInt32(objbizet.CLIENTID), objbizet.SUBCATEGORYNAME, Convert.ToInt32(objbizet.SUBCATEGORYID), Convert.ToDecimal(objbizet.PRICE), strThumbfile, objbizet.Description);
                    }
                    else
                    {
                        db2.AddMedia(Convert.ToInt32(objbizet.SPOTID), "Under Maintenance - " + objbizet.Title, objbizet.Keywords, objbizet.Path, strVideoFile, objbizet.CLIENTNAME, Convert.ToInt32(objbizet.CLIENTID), objbizet.SUBCATEGORYNAME, Convert.ToInt32(objbizet.SUBCATEGORYID), Convert.ToDecimal(objbizet.PRICE), strThumbfile, objbizet.Description);
                    }
                }
            }
            //dispose
            db.Dispose();
            db2.Dispose();
        }
    }
}