We have a web app that calls a URL of an outside search provider to load search results. Each time our page is hit, it can pass in a current page number, which is included in the URL sent to the third party. I've noticed that although they report 45 pages of results, if I go to one of the pages that includes their results and then navigate to another page that should load additional results from them, the same results from the first page are loaded.
I tried setting up my HttpWebRequest object to disable caching, but nothing I've tried seems to work. And, considering the URL changes each time because of the page number, I wouldn't think it would really be a caching issue. But here's where it gets interesting.
If I copy the URL that I'm retrieving in code and paste it into Chrome, it loads the correct results. If I then refresh the web app page, it too now loads the results for that page. This makes no sense to me. The code is running locally, but since it's running within ASP.NET it isn't using Chrome to create the web request, so why does this happen?
Here's the code I have that calls the URL and returns the result.
public static string FetchPage(string url)
{
    // Specify the encoding
    Encoding enc = System.Text.Encoding.GetEncoding(1252);
    HttpWebRequest.DefaultCachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Default);
    // Create the http request
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
    // Create the http response from the http request
    var httpWebResponse = (HttpWebResponse)request.GetResponse();
    // Get the response stream
    var responseStream = new StreamReader(httpWebResponse.GetResponseStream(), enc);
    // Read the response stream
    string htmlStream = responseStream.ReadToEnd();
    // Close the web response
    httpWebResponse.Close();
    // Close the response stream
    responseStream.Close();
    return htmlStream;
}
Try the following lines to set no-cache on both the HttpWebRequest and the WebResponse.
For the request:
Request.Headers.Set(HttpRequestHeader.CacheControl, "max-age=0, no-cache, no-store");
Request.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.NoCacheNoStore);
For the response:
Response.Headers.Add(HttpRequestHeader.CacheControl, "no-cache");
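For context, here's a minimal sketch of your FetchPage method with those settings applied (the cache-control header and cache policy are the lines from above; the rest mirrors your original code, trimmed slightly):

public static string FetchPage(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    // Ask the framework cache and any intermediaries not to serve a stored copy
    request.Headers.Set(HttpRequestHeader.CacheControl, "max-age=0, no-cache, no-store");
    request.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.NoCacheNoStore);

    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream(), Encoding.GetEncoding(1252)))
    {
        return reader.ReadToEnd();
    }
}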
In my C# code running on .NET 6 (an Azure Function) I am sending an HttpRequestMessage using HttpClient. It doesn't work, even though it should, so I want to see the raw request I am sending, including the headers, so I can compare it with the documentation and spot the differences.
In the past I have used Fiddler but it doesn't work for me now, probably because of some security settings on my laptop. So I am looking for a solution within the world of Visual Studio 2022 or .NET 6 where I can get the raw request out for troubleshooting purposes.
This question is not really about code, but here is my code anyway.
HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post, "https://myendpoint.com/rest/something");
var apiToken = "AOU9FrasdgasdfagtHJNV";
request.Headers.Add("Authorization", "Basic " + apiToken);
var message = new
{
    sender = "Hey",
    message = "Hello world",
    recipients = new[] { new { id = 12345678 } }
};
request.Content = new StringContent(JsonSerializer.Serialize(message), Encoding.UTF8, "application/json");
request.Headers.Add("Accept", "application/json, text/javascript");
HttpResponseMessage response = await httpClient.SendAsync(request);
When SendAsync is invoked, I wish to know what exactly is sent, both header and content.
If you cannot use any proxy solution (like Fiddler), then I see two options. The first, already mentioned in the comments on your question, is to use a DelegatingHandler; you can read more about this in the documentation. Interestingly, HttpClient also supports logging out of the box, which is described in this section of the same article that covers DelegatingHandlers: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-6.0#logging
If you are worried that something will manipulate the outgoing request, then you can go with option 2: create a temporary ASP.NET Core application with the .UseHttpLogging() middleware plugged into its pipeline, as described here: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/http-logging/?view=aspnetcore-6.0 That way you will know exactly what your request looks like from the point of view of the application being called. If you then point your Azure Function at the temporary app, you should see exactly what gets sent.
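If it helps, here is a rough sketch of option 1: a DelegatingHandler that dumps the request line, headers and body to the console before the call goes out. The handler name (LoggingHandler) and the way the client is constructed are my assumptions, so adjust them to your setup:

// Hypothetical logging handler; it wraps the real handler and prints the outgoing request.
public class LoggingHandler : DelegatingHandler
{
    public LoggingHandler(HttpMessageHandler innerHandler) : base(innerHandler) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Console.WriteLine($"{request.Method} {request.RequestUri} HTTP/{request.Version}");
        Console.WriteLine(request.Headers);
        if (request.Content != null)
        {
            Console.WriteLine(request.Content.Headers);
            Console.WriteLine(await request.Content.ReadAsStringAsync(cancellationToken));
        }
        return await base.SendAsync(request, cancellationToken);
    }
}

// Usage (assuming you construct the client yourself in the function):
// var httpClient = new HttpClient(new LoggingHandler(new HttpClientHandler()));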
Hope it helps
I have an XML-RPC server (using XML-RPC.NET) running as a .NET console application. I'm trying to connect to it from my ASP.NET Core (2.1.1) web app, but the client keeps timing out. Postman, by contrast, returns a response immediately without issues.
Here is how I'm calling it:
HttpClient client = _clientFactory.CreateClient();
client.Timeout = TimeSpan.FromSeconds(10);
var httpRequest = new HttpRequestMessage(HttpMethod.Post, instance.ServiceUrl);
var stringContent = new ByteArrayContent(Encoding.UTF8.GetBytes(request.ToString()));
httpRequest.Content = stringContent;
httpRequest.Content.Headers.ContentType = MediaTypeHeaderValue.Parse("text/xml");
var httpResponse = await client.SendAsync(httpRequest);
var response = await httpResponse.Content.ReadAsByteArrayAsync();
I can see that the request is made successfully, as the console app returns a response. Fiddler shows there was a 200 response, but await client.SendAsync(httpRequest); times out!
The request usually completes in under 10 ms, so the timeout value is just for debugging; if I leave it out, it takes 60 s. The response returns XML.
I've tried rewriting this to use StringContent and PostAsync: same issue. I also attempted to rewrite it using WebClient, but that returned 'The remote server returned an error: (100) Continue.' Not sure if that's relevant.
Been stuck on this for a while, anyone know what could be happening?
OK I did some googling and it looks like I needed this line:
client.DefaultRequestHeaders.ExpectContinue = true;
It was definitely related to the 100 Continue status code not being handled properly.
Found it here:
https://social.msdn.microsoft.com/Forums/en-US/042016f0-d70e-42f9-9924-5febeb2bea86/excluding-the-quotexpect-100continuequot-header-from-httpwebrequest-posts?forum=winappswithcsharp
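In context, the line just needs to be set on the HttpClient before SendAsync is called; roughly like this, based on the code from the question:

HttpClient client = _clientFactory.CreateClient();
client.Timeout = TimeSpan.FromSeconds(10);
// Explicitly control the Expect: 100-continue handshake so the interim
// 100 response from the XML-RPC server is handled properly.
client.DefaultRequestHeaders.ExpectContinue = true;

var httpRequest = new HttpRequestMessage(HttpMethod.Post, instance.ServiceUrl)
{
    Content = new ByteArrayContent(Encoding.UTF8.GetBytes(request.ToString()))
};
httpRequest.Content.Headers.ContentType = MediaTypeHeaderValue.Parse("text/xml");
var httpResponse = await client.SendAsync(httpRequest);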
Two things: first, I keep getting a 401 exception on the last line. I had thought that reusing the session would not only let me avoid resending the credentials but would also let me access the report by URL. It does neither...
Second, what do I do with the response once I have it, to display it in the browser for the user?
This is what I have so far, but I am unsure where to go from here...
var rs = new ReportExecutionService.ReportExecutionService();
rs.Credentials = new System.Net.NetworkCredential("UserID", "Pswd", "myDomain");
var execInfo = rs.LoadReport("/Nav Reports/OpenSalesOrderByCustomer", null);
var format = "HTML4";
string requestUri = string.Format(
    @"https://reports.myServer.com/ReportServer/?{0}&rs:SessionId={1}&rs:Format={2}",
    execInfo.ReportPath,
    execInfo.ExecutionID,
    format);
WebRequest request = WebRequest.Create(requestUri);
request.UseDefaultCredentials = false;
request.Credentials = new System.Net.NetworkCredential("UserID", "Pswd", "myDomain");
var response = request.GetResponse();
For background info:
I am trying to set up remote report processing with SSRS in my ASP.NET Web Forms app. I would simply use the Report Viewer control, but it's hideous looking and not acceptable to our user base. URL Access returns a much better-looking, properly formatted report and is acceptable, but I have to pass security credentials. I would use the SOAP API, except it returns a non-styled HTML 'blob' and also removes the toolbar functionality that we want to use.
So, I seem to be left with figuring out a way to use the SOAP API to authenticate a session and then somehow use that session for URL Access.
Just call the SOAP API, get the bytes, and return those bytes to the user with the appropriate headers (Content-Type: application/pdf, Content-Disposition: attachment;filename=report.pdf). You don't need to make multiple calls to the report server.
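Roughly like this, reusing the ReportExecutionService proxy from your question; the exact out-parameter types come from your generated proxy, so treat this as a sketch rather than drop-in code:

var rs = new ReportExecutionService.ReportExecutionService();
rs.Credentials = new System.Net.NetworkCredential("UserID", "Pswd", "myDomain");
rs.LoadReport("/Nav Reports/OpenSalesOrderByCustomer", null);

// Render the loaded report to PDF through the SOAP API.
string extension, mimeType, encoding;
ReportExecutionService.Warning[] warnings;
string[] streamIds;
byte[] reportBytes = rs.Render("PDF", null,
    out extension, out mimeType, out encoding, out warnings, out streamIds);

// Stream the bytes back to the user from the ASP.NET page or handler.
Response.Clear();
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition", "attachment;filename=report.pdf");
Response.BinaryWrite(reportBytes);
Response.End();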
I have a Web Service which is doing some screen scraping of an ASPX website.
I can get it to log in successfully, but when I then submit a request, it returns a server error. When I check it with Fiddler, it shows that the content (the query string) is being truncated, so it is not all submitted. The content is quite long, over 3,600 characters. (Not my choice, it's just the way the website was created and what it expects.)
HttpWebRequest webRequest = WebRequest.Create(REQUESTUSAGE) as HttpWebRequest;
webRequest.CookieContainer = this.Cookies;
webRequest.Method = "POST";
webRequest.ContentType = "application/x-www-form-urlencoded";
StreamWriter requestWriter = new StreamWriter(webRequest.GetRequestStream());
requestWriter.Write(GetPostDataForRequest());
WebResponse response = null;
try
{
    response = webRequest.GetResponse();
}
catch (Exception ex)
{
}
The GetPostDataForRequest returns the content, but like I said, Fiddler shows it is missing the last 600 chars or so for no apparent reason. The debugger shows the string is returned as expected, but somehow it does not get written correctly.
So how do I get it to submit the full string?
OK, I got this resolved. I was not closing the requestWriter.
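For anyone hitting the same thing, the fix is essentially to make sure the writer is flushed and closed before the response is requested, for example by wrapping it in a using block:

using (var requestWriter = new StreamWriter(webRequest.GetRequestStream()))
{
    requestWriter.Write(GetPostDataForRequest());
} // Dispose flushes and closes the request stream, so the full body gets sent.

WebResponse response = webRequest.GetResponse();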
There are usually limits on the request size; take a look at 'maximum length of HTTP GET request?'.
It appears that you are running into a browser issue, not a server issue. Can you issue the request with a command-line tool (e.g. something like wget) to verify that it is not a problem with the server?
You can also try a different browser, which may have different limits.
I have a web method in Second.aspx which has to be executed only if the incoming request is 'application/json'. So in my First.aspx page I am programmatically generating an HTTP request with the content type set to 'application/json', using the following code.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://localhost/website1/Second.aspx");
req.ContentType = "application/json";
HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
StreamReader sr = new StreamReader(resp.GetResponseStream());
string results = sr.ReadToEnd();
sr.Close();
and in Second.aspx I am checking the incoming request in JavaScript using <%= Request.ContentType %> to see if it is 'application/json'. If it is, I want to execute the web method using the jQuery ajax method. If I write the StreamReader 'sr' to a textbox, I can see that <%= Request.ContentType %> gives 'application/json'. But my problem is that the HTML of Second.aspx is loaded into the textbox on First.aspx and no redirection to Second.aspx takes place, so I am unable to execute the web method this way.
Could someone please help me: how do I redirect to the Second.aspx page with the above programmatically generated HTTP request code?
You don't. What you are doing is having your application make a web request, read the response, and do nothing with it.
Your logic is rather convoluted... I don't understand at all why you're doing what you're doing. But if you want to redirect the user to a new page, use Response.Redirect. If you want to execute the page and send the results back to the user, use Server.Transfer.
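For illustration, the two options look roughly like this in the First.aspx code-behind (which one you want depends on whether the browser's URL should change):

// Send the browser to Second.aspx; the URL shown in the browser changes.
Response.Redirect("Second.aspx");

// Or: execute Second.aspx on the server and return its output
// while the browser still shows First.aspx.
Server.Transfer("Second.aspx");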