I have a ServerResource object that is running within a component. Its purpose is to act in many ways like a basic HTTP server. It uses a Representation to acquire a file and return the file's contents to a browser.
The active function for this application is provided below:
public Representation showPage()
{
    Representation rep = null;
    if (fileName != null)
    {
        File path = new File("pages/" + fileName);
        rep = new FileRepresentation(path, MediaType.ALL);
    }
    return rep;
}
Note that "fileName" is the name of an HTML file (or index.html) which was previously passed in as an attribute. The files that this application serves are all in a subdirectory called "pages" as shown in the code. The idea is that a browser sends an HTTP request for an HTML file, and the server returns that file's contents in the same way that Apache would.
Note also that the restlet application is deployed as a JSE application. I am using Restlet 2.1.
An interesting problem occurs when accessing the application. Sometimes, when the request comes from a Firefox browser, the server does not send a response at all. The log output shows the request coming in, but the server simply does not respond, not even with a 404. The browser waits for a response for a time, then times out.
When using Internet Explorer, sometimes the browser times out due to not receiving a response from the server, but sometimes the server also returns a 304 response. My research into this response indicates that it should not be returned at all -- especially if the HTML files have no-caching tags included.
Is there something in the code that is causing these non-responses? Is there something missing that is causing the ServerResource object to handle responses so unreliably? Or have I found a bug in Restlet's response mechanisms?
Someone please advise...
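For what it's worth, the file-resolution step can be isolated and hardened in plain Java. This is only a sketch: `PageResolver` is a made-up name and the Restlet types are deliberately left out. The point is that returning null lets the caller set an explicit 404 status instead of handing Restlet a null Representation:

```java
import java.io.File;

public class PageResolver {
    // Resolve a requested file name under the "pages" directory.
    // Returns null when the name is missing or the file does not exist,
    // so the caller can answer with an explicit 404 status rather than
    // an entity-less response.
    public static File resolve(String fileName) {
        if (fileName == null) {
            return null;
        }
        File path = new File("pages", fileName);
        return path.isFile() ? path : null;
    }
}
```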
FOREWORD
This may well be the weirdest problem I have ever witnessed in 15 years. It is 100% reproducible on a specific machine that sends a specific request when authenticated as a specific user, if the request is sent from Chrome (it doesn't happen from Edge, it doesn't happen from cURL or Postman). I can't expect an exact solution to my disturbingly specific issue, but any pointers about what could theoretically cause it are more than welcome.
WHAT HAPPENS
We have several PCs in our factory that communicate with a central HTTP server (hosted on premises, if that even matters: they're on the same LAN). Of course, we have users who could work on any of these machines.
When a certain user does a specific action on a certain one of those machines, she gets a message about an "HTTP error". The server responds with a 400, specifying that the JSON in the request is ill-formed. Fine, let's look at the JSON: it looks perfectly well-formed. I check its length: it is in fact an 80-character string, and the request has a Content-Length of 80. All is fine, yet the server responds with the 400.
The same user on a different machine, or a different user on the same machine, or any other user on any other machine can do the very same action and the very same corresponding HTTP request. The same user, on that machine, can do the action fine using Edge instead of Chrome (despite both being Chromium-based). If I "export" the request from the browser's Dev Tools into any format (cURL bash, cURL cmd, JS fetch...), the request in Chrome and the one in Edge look the same.
Our UI sends the request using Axios. If I send it with fetch, I still get the error. If I serialize the JSON myself and send the string (instead of letting Axios/fetch handle the serialization), I still get the error. If I send that same request using any other client (cURL from command line, Postman...) I don't get the error - same as in Edge.
WHAT I FINALLY NOTICED (and how I hacked the issue into submission)
The server is ASP.NET Core (using .Net 5), so I added a middleware to record the received request. Apparently, in the specified conditions, the server receives a request body that is different from what was sent by the client. Say the client sends:
{"key1":"value1","key2":"value2"}
Well, the server receives:

{"key1":"value1","key2":"value2"
Notice the newline at the beginning and the missing closing brace at the end. The body apparently gets an extra character at the start, and the final character is lost - either because it is not actually sent/received or because the Content-Length dictated it to be truncated.
This clearly explains the failed deserialization (the string is in fact invalid JSON) and the resulting 400 response.
Since this bug had been blocking or hindering production for several days, I wrote a "healer" middleware that tries to deserialize the received JSON string (if the Content-Type indicates JSON, of course). If deserialization fails, it looks for a single non-opening-brace character at the start of the string; if one is found, it rewrites the body by removing that character and appending a closing brace. It then lets the healed request go down the pipeline and notifies me via e-mail.
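As a rough illustration of that healing logic (in Java rather than the actual ASP.NET Core middleware, and using a simple brace heuristic where the real middleware attempts full JSON deserialization; `BodyHealer` is a hypothetical name):

```java
public class BodyHealer {
    // Sketch of the "healer" described above: if the body starts with a
    // single stray non-'{' character and has lost its closing '}', drop
    // the stray character and restore the brace. Otherwise pass through.
    public static String heal(String body) {
        if (body.isEmpty() || body.charAt(0) == '{') {
            return body; // looks fine, or nothing to heal
        }
        String rest = body.substring(1);
        if (rest.startsWith("{") && !rest.endsWith("}")) {
            return rest + "}"; // remove stray lead byte, re-close the object
        }
        return body;
    }
}
```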
THE AFTERMATH
All has been working fine since I released the fix, and we even asked our system managers to replace the PC that was causing problems, since we could only think of some vicious issue with the OS/browser setup or configuration that caused conflicts.
However, when they replaced it, I started getting the notification e-mail again... this time from two other users, always on that same machine, each of them having the same issue (which is being healed, btw), and each on a different request (but always the same request for each user). The requests point to different URLs and their bodies have different lengths and complexity (JSON-wise). I haven't re-run all the tests I did before (different browser, cURL, fetch...), but the diagnosis of the problem is the same, and it is being handled by the healer middleware.
A colleague reported that they already had a similar problem several months ago, which they didn't investigate at the time. They're not sure it was the very same workstation, but they replaced the PC and the error didn't happen any more. It seems to be pretty much random, and I still have no idea what could cause such behaviour.
Here is some more info about the platform, if any of this is relevant:
clients: Windows 10 PCs, using Chrome in kiosk mode, launched by a batch that is located on a network share;
UI: React, sending HTTP requests with Axios;
server: .Net 5 ASP.NET Core service.
UPDATE
I've recorded the network traffic using Wireshark on the client PC. (The capture screenshot is omitted here.)
So apparently the request is already modified when it leaves the client host.
I have a main JSP and a process JSP. In the process JSP I commit the response and forward it to a success page.
request.getRequestDispatcher("success.jsp").forward(request, response);
I am able to commit the response on the server side, and the process JSP is also able to forward to the success JSP.
But the URL shown is, for example: http://process.jsp?param1=value1&param2=value2
I want the output to display a clean URL, such as http://success.jsp.
Please note: this works perfectly fine with a Java servlet; I just tried it.
I am using only JSPs instead of Java servlets, since this is our project requirement.
Can anyone suggest a solution for this?
RequestDispatcher#forward() is supposed to forward both the request and the response objects to another resource within the server. No response goes back to the client when you do a forward(), which is why the client shows the same initial URL.
For the client to show another URL you could use HttpServletResponse#sendRedirect(). This does go back to the client making it do a new request to the URL you want. So change it to:
response.sendRedirect("success.jsp");
Remember not to commit the response before doing this, or you'll get an IllegalStateException.
As to why you say it works in a servlet, I'm not sure, but that is not how forward() is supposed to work, and JSPs are compiled to servlets, so in the end they should behave the same.
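To see the difference on the wire, here is a minimal sketch using the JDK's built-in HttpServer (not a servlet container; `RedirectDemo` and the paths are illustrative). sendRedirect boils down to a 302 status plus a Location header, which makes the browser issue a fresh request and display the new URL, whereas forward() involves no extra round trip at all:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class RedirectDemo {
    // Serve /process.jsp with the same status and header that
    // sendRedirect("success.jsp") would emit, then request it with
    // redirect-following disabled to observe the raw 302.
    public static int redirectStatus() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/process.jsp", exchange -> {
            exchange.getResponseHeaders().add("Location", "/success.jsp");
            exchange.sendResponseHeaders(302, -1); // no body, just the redirect
            exchange.close();
        });
        server.start();
        try {
            URL url = new URL("http://localhost:"
                    + server.getAddress().getPort() + "/process.jsp");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setInstanceFollowRedirects(false);
            return conn.getResponseCode();
        } finally {
            server.stop(0);
        }
    }
}
```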
I was working on my server and needed to implement the use of request.headers.referer. When I ran tests and read the headers to determine how to write the parsing functions, I couldn't find a way to differentiate between requests invoked from a link coming from outside the server, from outside the directory, and requests for local resources from a given HTML response. For instance:
Going from localhost/dir1 to localhost/dir2 using <a href="http://localhost/dir2"> will yield the response headers:
referer:"http://localhost/dir1" url:"/dir2"
while the HTML file sent from localhost/dir2 asking for resources using the local URI style.css will yield:
referer:"http://localhost/dir2" url:"/style.css"
and the same situation involving an image could end up
referer:"http://localhost/dir2" url:"/_images/image.png"
How would I prevent incorrect resolution between url and referer from accidentally being parsed as http://localhost/dir1/dir2 or http://localhost/_images/image.png, and so on? Is there a way to tell in what way the URI is being referred to by the browser, and how can either the browser or the server identify when http://localhost/dir2/../dir1 is the intended destination?
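The browser's side of this is plain RFC 3986 reference resolution: the relative URI is resolved against the page's own URL, which is also what ends up in the Referer header. The server can reproduce exactly that with java.net.URI (a sketch; the class name is mine):

```java
import java.net.URI;

public class RefererResolution {
    // Resolve a reference the way a browser would: against the referring
    // page's URL, then remove "." and ".." segments (RFC 3986 section 5).
    public static String resolve(String referer, String reference) {
        return URI.create(referer).resolve(reference).normalize().toString();
    }
}
```

Note that the base's last path segment is dropped unless it ends in a slash, which is why style.css requested from /dir2 resolves to /style.css rather than /dir2/style.css.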
What is the difference between the "Request" and "Response" terminologies in ASP.NET?
I am using ASP.NET 3.5.
Suppose I have to explain these terms to somebody. What should I say?
The Request is what a web client sends to the web server. The Response is what the web server sends back - well, in response. Both are defined in the HTTP specification: how they are structured, what information and metadata they include, and so on.
ASP.Net encapsulates these concepts in respective classes to make them programmatically accessible.
Edit: Specific examples as requested in the comments:
Request.QueryString
If you have a URL like the following:
http://www.host.com/Page.aspx?name=Henry&lastName=Ford
The part after the ? is the query string. (name=Henry&lastName=Ford <= The query string)
This is one common way to pass arguments to the server as part of the Request. In your server code you can access these arguments by using Request.QueryString:
string name = Request.QueryString["name"];
string lastName = Request.QueryString["lastName"];
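The lookup that Request.QueryString performs can be sketched outside ASP.NET as well. In Java, for instance, there is no built-in query-string map, but the parsing amounts to this (illustrative only; the class name is made up, and the Charset overload of URLDecoder.decode needs Java 10+):

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryStrings {
    // Split the query string on '&', then each pair on the first '=',
    // URL-decoding both the key and the value.
    public static Map<String, String> parse(String query) {
        Map<String, String> params = new LinkedHashMap<>();
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            if (eq < 0) {
                continue; // skip malformed pairs without '='
            }
            String key = URLDecoder.decode(pair.substring(0, eq), StandardCharsets.UTF_8);
            String value = URLDecoder.decode(pair.substring(eq + 1), StandardCharsets.UTF_8);
            params.put(key, value);
        }
        return params;
    }
}
```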
Response.Redirect
Your server received a Request for a page and you want to redirect to another location. With the Response.Redirect() method, you add a specific piece of information to the Response that causes the browser to immediately go to this other page.
// This tells the browser to load google
Response.Redirect("http://www.google.com");
There is an IIS (Internet Information Services) server. In ASP.NET, you Request data from the server, and what the server sends back is a Response.
I need to be able to open an external URL in my website without revealing it to my users (either in the browser or in the page source). I do not want them to be able to copy the URL and edit the query string to their liking. Is there a way to open the URL in an iframe, or something of the like, and hide/mask its source?
This is an asp.net 2.0 website.
Could you do the following:
Accept parameters from the user.
Have a webpage or backend process which uses this to download the PDF to a temporary store.
Then stream this to the client, so they don't know about the URL where the PDF is generated (or just stream directly, without storing it temporarily).
This way users would never know about the other site, and it should be much more secure.
This could also use some validation/authentication so users are unable to alter the parameters passed to retrieve other users' PDFs.
No. If you are having the client's machine do something (i.e., point their browser to a web page), you cannot keep that information from them.
You can render that page server side in a Flash widget or some other container, but you can't do it on the client's machine.
Best bet: you can make a server-side XMLHTTP request, grab the response, and feed it back into your page using AJAX.
You could possibly do it server side by:
Opening a network connection to the site you want
Obtaining the HTML with an HTTP GET request on the URL
Inserting it into your page on the server side
That would probably slow down your page load considerably, though, and could raise legal issues.
I had a similar problem myself a while ago and did something along these lines (C#, .NET 2.0):
// Requires the System.Net and System.IO namespaces.
public void StreamURLContents(string URL)
{
    WebRequest req = WebRequest.Create(URL);
    using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
    using (Stream dataStream = resp.GetResponseStream())
    using (StreamReader reader = new StreamReader(dataStream))
    {
        // Copy the remote page into our own response, line by line.
        string currentLine = reader.ReadLine();
        while (currentLine != null)
        {
            Response.Write(currentLine);
            currentLine = reader.ReadLine();
        }
    }
}
You would have to tailor the writing of the HTML to suit your particular application, obviously, and you'd break all of the relative links in the target site (image URLs, CSS links, etc.), but if you're only after simple HTML text and want your web app to grab it server-side, then this is a good way to go.
You can make a one-time URL by doing the following:
Store a GUID in a database
Allow the client to request the GUID via a hyperlink or other means. The GUID is used in the URL as the fake page; for example: http://www.company.com/foo/abc-123-efg-456.aspx
Use URL Rewriting to capture and inspect all requests to the directory "foo" and redirect to your handler.
Confirm the GUID is valid, then mark it as "expired" in the database
Send the appropriate data (a PDF?) to the client as the response to the request.
Any subsequent requests to the same URL fail.
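The validate-then-expire step above can be sketched in Java with an in-memory map standing in for the database (class and method names are made up; a real deployment would persist the tokens as described):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class OneTimeLinks {
    // FALSE = issued but unused, TRUE = already redeemed ("expired").
    private final Map<String, Boolean> tokens = new ConcurrentHashMap<>();

    // Create a fresh GUID for the fake page URL.
    public String issue() {
        String token = UUID.randomUUID().toString();
        tokens.put(token, Boolean.FALSE);
        return token;
    }

    // Returns true exactly once per issued token: the atomic replace
    // flips the flag and reports whether it was previously unused.
    public boolean redeem(String token) {
        return Boolean.FALSE.equals(tokens.replace(token, Boolean.TRUE));
    }
}
```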