I'm attempting to write a Java servlet that receives binary request data and replies to it, using HttpServletRequest.getInputStream() and HttpServletResponse.getOutputStream(). This is for a project in which a Silverlight client sends a request to which this servlet responds over an HTTP POST connection. For the time being, to test the servlet, I'm implementing the client in Java, which I'm more familiar with than Silverlight.
The problem is that in my test project I send the data from a client servlet as a byte array and expect to receive a byte array of the same length -- only I don't; I get a single byte instead. So I'm posting the relevant code snippets here in the hope that you can point out where I'm going wrong, and ideally point me to relevant references.
So here goes.
The client servlet handles POST requests from a very simple HTML page with a form, which I use as a front-end. I'm not too worried about using JSP etc.; instead I'm focused on making the inter-servlet communication work.
// client HttpServlet invokes this method from doPost(request,response)
private void process(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String firstName = (String) request.getParameter("firstname");
    String lastName = (String) request.getParameter("lastname");
    String xmlRequest = "<MyRequest><Person><Name Firstname=\""+firstName
            +"\" Lastname=\""+lastName+"\" /></Person></MyRequest>";
    OutputStream writer = null;
    InputStream reader = null;
    try {
        URL url = new URL("http://localhost:8080/project/Server");
        URLConnection conn = url.openConnection();
        conn.setDoInput(true);
        conn.setDoOutput(true);
        writer = conn.getOutputStream();
        byte[] baXml = xmlRequest.getBytes("UTF-8");
        writer.write(baXml, 0, baXml.length);
        writer.flush();
        // perhaps I should be waiting here? how?
        reader = conn.getInputStream();
        int available = reader.available();
        byte[] data = new byte[available];
        reader.read(data, 0, available);
        String xmlResponse = new String(data, "UTF-8");
        PrintWriter print = response.getWriter();
        print.write("<html><body>Response:<br/><pre>");
        print.write(xmlResponse);
        print.write("</pre></body></html>");
        print.close();
    } finally {
        if (writer != null)
            writer.close();
        if (reader != null)
            reader.close();
    }
}
The server servlet handles HTTP POST requests. For now it receives the requests from the client servlet above for testing purposes, but in the future I intend to use it with clients written in other languages (specifically, Silverlight).
// server HttpServlet invokes this method from doPost(request,response)
private void process(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    ServletInputStream sis = null;
    try {
        sis = request.getInputStream();
        // maybe I should be using a BufferedInputStream
        // instead of the InputStream directly?
        int available = sis.available();
        byte[] input = new byte[available];
        int readBytes = sis.read(input, 0, available);
        if (readBytes != available) {
            throw new ServletException("Oops! readBytes!=availableBytes");
        }
        // I ONLY GET 1 BYTE OF DATA !!!
        // It's the first byte of the client message, a '<'.
        String msg = "Read "+readBytes+" bytes of "
                +available+" available from request InputStream.";
        System.err.println("Server.process(HttpServletRequest,HttpServletResponse): "+msg);
        String xmlReply = "<Reply><Message>"+msg+"</Message></Reply>";
        byte[] data = xmlReply.getBytes("UTF-8");
        ServletOutputStream sos = response.getOutputStream();
        sos.write(data, 0, data.length);
        sos.flush();
        sos.close();
    } finally {
        if (sis != null)
            sis.close();
    }
}
I have been sticking to byte arrays instead of using a BufferedInputStream so far because I haven't decided yet whether I'll transmit the data as e.g. Base64-encoded strings or send the binary data as-is.
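For what it's worth, if I do end up going the Base64 route, this is roughly what I have in mind (a minimal sketch, assuming Java 8+ and its java.util.Base64):

import java.util.Base64;

// Encode the raw bytes as text for transport, then decode them back on the other side.
byte[] payload = xmlRequest.getBytes("UTF-8");
String encoded = Base64.getEncoder().encodeToString(payload); // ASCII-safe string, fits in XML
byte[] decoded = Base64.getDecoder().decode(encoded);         // original bytes restored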
Thank you in advance.
To copy an input stream to an output stream, use the standard idiom:
InputStream is = request.getInputStream();
OutputStream os = response.getOutputStream();
byte[] buf = new byte[1000];
for (int nChunk = is.read(buf); nChunk != -1; nChunk = is.read(buf)) {
    os.write(buf, 0, nChunk);
}
The one thing I can think of is that you are reading only request.getInputStream().available() bytes and then deciding you have everything. According to the documentation, available() returns the number of bytes that can be read without blocking; nothing says this is the entire content of the input stream, so I'm inclined to assume no such guarantee is made.
I'm not sure how best to detect when there is no more data (maybe the Content-Length request header can help?) without risking an indefinite block at EOF, but I would try looping until all the data has been read from the input stream. To test that theory, you could scan the input for a known pattern that occurs further into the stream, for example a > matching the initial < that you are getting.
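As a minimal sketch of that loop (the helper name readBody is mine, not from your code), reading until read() returns -1 guarantees you get the whole body regardless of what available() reports:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServletRequest;

// Hypothetical helper: read the entire request body into a byte array.
private static byte[] readBody(HttpServletRequest request) throws IOException {
    InputStream in = request.getInputStream();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    int n;
    while ((n = in.read(buffer)) != -1) { // -1 means end of stream
        out.write(buffer, 0, n);
    }
    return out.toByteArray(); // complete body, however it arrived
}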
I receive an image in the doPost method from a client application. I'm not supposed to store this image under any folder path, so I use the following code to store the image bytes in a session attribute.
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String fileName = request.getParameter("filename");
    System.out.println("filename: " + fileName);
    DataInputStream din = new DataInputStream(request.getInputStream());
    byte[] data = new byte[0];
    byte[] buffer = new byte[512];
    int bytesRead;
    while ((bytesRead = din.read(buffer)) > 0) {
        // construct an array large enough to hold the data we currently have
        byte[] newData = new byte[data.length + bytesRead];
        // copy data that was previously read into newData
        System.arraycopy(data, 0, newData, 0, data.length);
        // append new data from buffer into newData
        System.arraycopy(buffer, 0, newData, data.length, bytesRead);
        // set data equal to newData in prep for next block of data
        data = newData;
    }
    request.getSession().setAttribute("imageData", data);
}
I want to retrieve this from the doGet method after it has been received, so I'm trying the following doGet code:
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    byte[] data = (byte[]) request.getSession().getAttribute("imageData");
    int len = data.length;
    byte[] imageData = new byte[len];
    for (int i = 0; i < len; i++) {
        imageData[i] = data[i];
    }
    response.setContentType("image/jpg");
    response.getOutputStream().write(imageData);
    response.getOutputStream().flush();
    response.getOutputStream().close();
}
But it does NOT return the image in doGet when I call this servlet from another client some time later.
Could someone please advise me on what I'm doing wrong here that keeps doGet from returning the image?
I'm not supposed to store this image under any folder path, so I use the following code to store the image bytes in a session attribute.
...
But it does NOT return the image in doGet when I call this servlet from another client some time later.
Session attributes are associated with exactly one client, so one client cannot read the session attributes of another client.
You could store the image in the servlet context, like:
ServletContext context = request.getSession().getServletContext();
context.setAttribute("imageData", data);
Later, you can read the attribute back from the servlet context.
Alternatively, you could store the image in a static variable.
Either way, the image is held in memory; some servlet containers may also persist servlet-context attributes to disk.
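For example, a minimal sketch of the doGet side under the servlet-context approach (assuming the image was stored under the same "imageData" key as above):

protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // The servlet context is shared by all clients, unlike the per-client session.
    byte[] data = (byte[]) getServletContext().getAttribute("imageData");
    if (data == null) {
        response.sendError(HttpServletResponse.SC_NOT_FOUND, "No image stored yet");
        return;
    }
    response.setContentType("image/jpeg");
    response.setContentLength(data.length);
    response.getOutputStream().write(data);
}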
I have the following method that writes a stream into an HttpResponse object.
public HttpResponse ShowPDF(Stream stream)
{
    MemoryStream memoryStream = (MemoryStream) stream;
    httpResponse.Clear();
    httpResponse.Buffer = true;
    httpResponse.ContentType = "application/pdf";
    httpResponse.BinaryWrite(memoryStream.ToArray());
    httpResponse.End();
    return httpResponse;
}
In order to test it, I need to recover the processed stream.
Is there some way to read the stream back from the httpResponse object?
I have two ideas: one is to mock the HttpResponse, and the other is to simulate a web server.
1. Mocking HttpResponse
I wrote this before I knew which mocking framework you used. Here's how you could test your method using TypeMock.
This assumes that you pass your httpResponse variable to the method, changing the method as follows:
public void ShowPDF(Stream stream, HttpResponse httpResponse)
Of course you would change this to pass it via a property on your Page object instead, if it is a member of your Page class.
And here's an example of how you could test using a fake HttpResponse:
internal void TestPDF()
{
    FileStream fileStream = new FileStream("C:\\deleteme\\The Mischievous Nerd's Guide to World Domination.pdf", FileMode.Open);
    MemoryStream memoryStream = new MemoryStream();
    try
    {
        memoryStream.SetLength(fileStream.Length);
        fileStream.Read(memoryStream.GetBuffer(), 0, (int)fileStream.Length);
        memoryStream.Flush();
        fileStream.Close();

        byte[] buffer = null;
        var fakeHttpResponse = Isolate.Fake.Instance<HttpResponse>(Members.ReturnRecursiveFakes);
        Isolate.WhenCalled(() => fakeHttpResponse.BinaryWrite(null)).DoInstead((context) => { buffer = (byte[])context.Parameters[0]; });

        ShowPDF(memoryStream, fakeHttpResponse);

        if (buffer == null)
            throw new Exception("It didn't write!");
    }
    finally
    {
        memoryStream.Close();
    }
}
2. Simulate a Web Server
Perhaps you can do this by simulating a web server. It might sound crazy, but it doesn't look like that much code. Here are a couple of links about running Web Forms outside of IIS:
Can I run a ASPX and grep the result without making HTTP request?
http://msdn.microsoft.com/en-us/magazine/cc163879.aspx
I would like to use HttpClient to read chunked content (in the sense of HTTP/1.1 chunked transfer encoding) asynchronously.
I am looking at the HttpContent async methods at:
MSDN link
However, in the case of the Task returned for a byte array, for example, the documentation says:
The returned Task object will complete after all of the content has been written as a byte array
I am getting chunked content precisely because the server doesn't know ahead of time when all of the data will be available, so I don't know when all of the content will arrive. Rather than waiting, possibly for hours, for the task to complete, I would like to read the chunks as they arrive.
Can I somehow read part of the response content, for example with a task that completes each time 4K bytes of content have been received?
Is using HttpClient advantageous at all in this case?
Using HttpClient.SendAsync you can pass an HttpCompletionOption parameter to tell HttpClient not to buffer the response and to return as soon as it has read the headers. Then you can use ReadAsStreamAsync to get a stream that lets you pull the data as it arrives.
Here is a complete example, with an explanation, of how to download a file without reading its content into memory. It works beautifully.
static async Task HttpGetForLargeFileInRightWay()
{
    using (HttpClient client = new HttpClient())
    {
        const string url =
            "https://github.com/tugberkugurlu/ASPNETWebAPISamples/archive/master.zip";
        using (HttpResponseMessage response = await client.GetAsync(url,
            HttpCompletionOption.ResponseHeadersRead))
        using (Stream streamToReadFrom = await response.Content.ReadAsStreamAsync())
        {
            string fileToWriteTo = Path.GetTempFileName();
            using (Stream streamToWriteTo = File.Open(fileToWriteTo, FileMode.Create))
            {
                await streamToReadFrom.CopyToAsync(streamToWriteTo);
            }
        }
    }
}
Or, instead of using CopyToAsync(), you can read the stream using a StreamReader:
using (var stream = await response.Content.ReadAsStreamAsync())
using (var reader = new StreamReader(stream))
{
    int charCount = 100; // StreamReader reads characters, not bytes
    var buffer = new char[charCount];
    reader.ReadBlock(buffer, 0, charCount);
}
I am trying to implement a RESTful service on a Jetty server. I have a runnable server and I can access it from my REST client. My server-side project is a Maven project. My problem is with character encoding: when I check the response before sending it from the controller there is no encoding problem, but after I return the response to the client I see broken data. The response header says UTF-8. I also have a listener for this problem that sets both the request and the response to UTF-8. I guess the problem happens when I write my response data to the response.
@GET
@Path("/")
@Produces({"application/xml;charset=UTF-8", "application/json;charset=UTF-8"})
public String getPersons(@Context HttpServletRequest request, @Context HttpServletResponse response) {
    List<Person> persons = personService.getPersons(testUserId, collectionOption, null);
    if (persons == null) {
        persons = new ArrayList<Person>();
    }
    String result = JsonUtil.listToJson(persons);
    // result doesn't have any encoding problem at this line
    response.setContentType("application/json");
    response.setContentLength(result.length());
    response.setCharacterEncoding("utf-8");
    // I guess the problem happens after this line
    return result;
}
Is there any Jetty or RESTEasy configuration for this, or any other way to solve the problem? Thanks for your help.
Which RESTEasy version are you using? There is a known issue (RESTEASY-467) with Strings in 2.0.1 and prior.
These are your options:
1) force the encoding by returning byte[]
public byte[] getPersons
and then
return result.getBytes("UTF-8");
2) return List<Person> (or create a PersonListing if you need it)
public List<Person> getPersons
and let RESTEasy handle the JSON transformation.
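For instance, a minimal sketch of option 2 against the resource from the question (illustrative only, reusing the question's personService and fields):

@GET
@Path("/")
@Produces("application/json;charset=UTF-8")
public List<Person> getPersons(@Context HttpServletRequest request) {
    // RESTEasy serializes the returned list to JSON and applies the declared charset.
    List<Person> persons = personService.getPersons(testUserId, collectionOption, null);
    return persons != null ? persons : new ArrayList<Person>();
}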
3) return a StreamingOutput
NOTE: with this option the "Content-Length" header will be unknown.
return new StreamingOutput()
{
    public void write(OutputStream outputStream) throws IOException, WebApplicationException
    {
        PrintStream writer = new PrintStream(outputStream, true, "UTF-8");
        writer.println(result);
    }
};
4) upgrade to 2.2-beta-1 or a newer version.
How can I increase maxRequestLength from my C# code? I can't do this in Web.config; my application is created to deploy web applications in IIS.
Take a look at http://bytes.com/topic/asp-net/answers/346534-how-i-can-get-httpruntime-section-page
It shows how to get access to an instance of HttpRuntimeSection; then modify its MaxRequestLength property.
An alternative to increasing the max request length is to create an IHttpModule implementation. In the BeginRequest handler, grab the HttpWorkerRequest and process the request entirely in your own code, rather than letting the default implementation handle it.
Here is a basic implementation that will handle any request posted to any file called "dropbox.aspx" (in any directory, whether it exists or not):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace Example
{
    public class FileUploadModule : IHttpModule
    {
        #region IHttpModule Members

        public void Dispose() {}

        public void Init(HttpApplication context)
        {
            context.BeginRequest += new EventHandler(context_BeginRequest);
        }

        #endregion

        void context_BeginRequest(object sender, EventArgs e)
        {
            HttpApplication application = (HttpApplication)sender;
            HttpContext context = application.Context;

            string filePath = context.Request.FilePath;
            string fileName = VirtualPathUtility.GetFileName(filePath);
            string fileExtension = VirtualPathUtility.GetExtension(filePath);

            if (fileName == "dropbox.aspx")
            {
                IServiceProvider provider = (IServiceProvider)context;
                HttpWorkerRequest wr = (HttpWorkerRequest)provider.GetService(typeof(HttpWorkerRequest));

                //HANDLE REQUEST HERE
                //Grab data from HttpWorkerRequest instance, as reflected in HttpRequest.GetEntireRawContent method.

                application.CompleteRequest(); //bypasses all other modules and ends request immediately
            }
        }
    }
}
You could use something like that, for example, if you're implementing a file uploader and you want to process the multi-part content stream as it's received, so you can perform authentication based on posted form fields and, more importantly, cancel the request on the server side before you even receive any file data. That can save a lot of time if you can determine early in the stream that the upload is not authorized, the file is too big, or the file would exceed the user's disk quota for the dropbox.
This is impossible with the default implementation, because trying to access the Form property of the HttpRequest causes it to receive the entire request stream, complete with MaxRequestLength checks. The HttpRequest object has a method called GetEntireRawContent, which is called as soon as access to the content is needed. That method starts with the following code:
HttpRuntimeSection httpRuntime = RuntimeConfig.GetConfig(this._context).HttpRuntime;
int maxRequestLengthBytes = httpRuntime.MaxRequestLengthBytes;
if (this.ContentLength > maxRequestLengthBytes)
{
    if (!(this._wr is IIS7WorkerRequest))
    {
        this.Response.CloseConnectionAfterError();
    }
    throw new HttpException(SR.GetString("Max_request_length_exceeded"), null, 0xbbc);
}
The point is that you'll be skipping that code and implementing your own content-length check instead. If you use Reflector to look at the rest of GetEntireRawContent as a model implementation, you'll see that it basically does the following: it calls GetPreloadedEntityBody, checks whether there's more to load by calling IsEntireEntityBodyIsPreloaded, and finally loops through calls to ReadEntityBody to get the rest of the data. The data read by GetPreloadedEntityBody and ReadEntityBody is dumped into a specialized stream, which automatically uses a temporary file as a backing store once it crosses a size threshold.
A basic implementation would look like this:
MemoryStream request_content = new MemoryStream();
int bytesRemaining = wr.GetTotalEntityBodyLength() - wr.GetPreloadedEntityBodyLength();

byte[] preloaded_data = wr.GetPreloadedEntityBody();
if (preloaded_data != null)
    request_content.Write(preloaded_data, 0, preloaded_data.Length);

if (!wr.IsEntireEntityBodyIsPreloaded()) //not a typo, the framework really does use "Is" twice in this method name
{
    int BUFFER_SIZE = 0x2000; //8K buffer or whatever
    byte[] buffer = new byte[BUFFER_SIZE];
    while (bytesRemaining > 0)
    {
        int bytesRead = wr.ReadEntityBody(buffer, Math.Min(bytesRemaining, BUFFER_SIZE)); //Read another chunk of bytes
        if (bytesRead == 0) //failure to read or nothing left to read
            break;
        bytesRemaining -= bytesRead; // Update the bytes remaining
        request_content.Write(buffer, 0, bytesRead); // Write the chunk to the backing store (memory stream or whatever you want)
    }
}
At that point, you'll have the entire request in a MemoryStream. However, rather than download the entire request like that, what I've done is offload that bytesRemaining loop into a class with a ReadEnough(int max_index) method that is called on demand by a specialized MemoryStream, which loads just enough data into the stream to reach the byte being accessed.
Ultimately, that architecture lets me feed the request directly to a parser that reads from the memory stream, while the memory stream automatically loads more data from the worker request as needed. I've also implemented events that fire as each new part of the multi-part content stream is identified and again when each part has been completely received.
You can do that in the web.config:
<httpRuntime maxRequestLength="11000" />
maxRequestLength is specified in kilobytes, so 11000 is about 11 MB.