Using ASP.NET MVC3, requesting the URL http://localhost:22713/tests#123456 with the following Razor code:
Your user agent: @Request.UserAgent<br />
Url: @Request.Url.AbsoluteUri<br />
Url fragment: @Request.Url.Fragment<br />
returns:
Your user agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.16) Gecko/20110319 Firefox/3.6.16
Url: http://localhost:22713/tests
Url fragment:
Why is the fragment always empty? I need to be able to parse this info on the server side.
The fragment (everything after the # in a URL) is never sent to the server; the browser strips it before issuing the request. So the Fragment property will always be empty when you read it from an incoming request.
The Fragment property is only meaningful on URLs you construct yourself.
There's no way to get the fragment on the server. Typically you would use JavaScript on the client to read it (e.g. from location.hash) and send it back to the server in a query string or form field.
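To make the construction-only behavior concrete, here is a minimal C# sketch using the URL from the question: Uri.Fragment is populated on a Uri you build yourself, while the URL the browser actually sends never carries one.

```csharp
// The fragment survives only on URIs you construct yourself;
// browsers strip everything after '#' before sending the request.
using System;

class FragmentDemo
{
    static void Main()
    {
        // Constructing the Uri yourself keeps the fragment...
        var constructed = new Uri("http://localhost:22713/tests#123456");
        Console.WriteLine(constructed.Fragment); // "#123456"

        // ...but the URL the server actually receives has none, so Fragment is empty.
        var received = new Uri("http://localhost:22713/tests");
        Console.WriteLine(received.Fragment);    // ""
    }
}
```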
Our web application is exhibiting an issue whereby we're seeing duplicate requests from a single user action (all details except ConnectionId are the same) in our SignalR hub.
Assuming the first request was received at T0 then we saw duplicate requests at (approximately)
T0 + 2:00
T0 + 2:30
T0 + 3:30
T0 + 8:00
Effectively, the client-side JavaScript code sets up a connection to a SignalR hub; when the user clicks the submit button it invokes a method on the hub (kicking off processing in the back-end).
The problem has only started recently in production - we haven't changed any relevant code (neither client-side nor the SignalR hub). The web app is deployed in a corporate environment (users use IE11 via Citrix). I note that (presumably due to firewalls) SignalR is using forever frames (rather than WebSockets etc.).
In our logs (from the SignalR hub) the user agent appears as Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET4.0C; .NET4.0E; .NET CLR 3.5.30729; .NET CLR 3.0.30729; InfoPath.3)
The client code looks along the lines of
// Setup
self.myHub = $.connection.myHub;
$.connection.hub.start().done(function () {
    console.debug("SignalR Hub Connected");
});
...
// When handling user action
self.myHub.server.someHubMethod(param1, param2);
Basically, I'm stumped as to what is going on / where I should be looking. My thoughts are that there is some caching proxy/web accelerator/spider or similar which is somehow replaying the requests.
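Not a diagnosis, but one defensive measure against replays from intermediaries or transport reconnects is to deduplicate hub invocations server-side. The hub and method names below follow the question; the operation-id parameter and the dictionary-based dedup are my assumptions, a sketch rather than a known fix.

```csharp
// Defensive sketch: ignore replayed hub invocations by requiring the
// client to send a unique operation id with each submit.
// (Hub/method names follow the question; the dedup logic is an assumption.)
using System;
using System.Collections.Concurrent;
using Microsoft.AspNet.SignalR;

public class MyHub : Hub
{
    // Operation ids already processed (per app domain; a real service
    // would also expire old entries to bound memory).
    private static readonly ConcurrentDictionary<string, DateTime> Seen =
        new ConcurrentDictionary<string, DateTime>();

    public void SomeHubMethod(string operationId, string param1, string param2)
    {
        // TryAdd returns false if the id was already recorded,
        // i.e. this request is a replay - skip the back-end work.
        if (!Seen.TryAdd(operationId, DateTime.UtcNow))
            return;

        // ... kick off back-end processing here ...
    }
}
```

On the client, the submit handler would pass a freshly generated GUID as the first argument on each click, so a genuine second click still gets through while a replayed request does not.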
I noticed something funny while using Fiddler to debug a cookie issue we are having. If the cookie path value is just the beginning of another site’s path value then the second site sees the first site’s cookies.
This is easier to show than to describe.
I created a simple site with the following ASPX code
<div>
    Cookie Value <asp:Label ID="lblTest" runat="server"></asp:Label><br />
    See Foo <asp:Label ID="lblSeeFoo" runat="server"></asp:Label><br />
    See Foobar <asp:Label ID="lblSeeFoobar" runat="server"></asp:Label><br />
    See Foonot <asp:Label ID="lblSeeFoonot" runat="server"></asp:Label>
</div>
In the code behind I create a cookie with a path based on ApplicationPath. The created cookie name includes the ApplicationPath name to make it easy to see in Fiddler. This code also looks for cookies from three specific web sites.
Protected Sub Page_Load(sender As Object, e As EventArgs) Handles Me.Load
    Dim CookieName As String = "Test " + Me.Request.ApplicationPath.Replace("/"c, "")
    Dim myCookie As HttpCookie = Me.Request.Cookies(CookieName)
    If (myCookie Is Nothing) Then
        myCookie = New HttpCookie(CookieName)
        myCookie.Path = Me.Request.ApplicationPath
        myCookie.Expires = DateTime.Now.AddDays(1)
        myCookie.Value = DateTime.Now.ToString()
        Response.Cookies.Add(myCookie)
    End If
    Me.lblTest.Text = myCookie.Value

    Dim TestCookie As HttpCookie
    TestCookie = Me.Request.Cookies("Test Foobar")
    Me.lblSeeFoobar.Text = CStr(TestCookie IsNot Nothing)
    TestCookie = Me.Request.Cookies("Test Foonot")
    Me.lblSeeFoonot.Text = CStr(TestCookie IsNot Nothing)
    TestCookie = Me.Request.Cookies("Test Foo")
    Me.lblSeeFoo.Text = CStr(TestCookie IsNot Nothing)
End Sub
This application is then published to three web sites named Foo, Foobar and Foonot.
Viewing each site shows that Foobar and Foonot can see the cookie for Foo.
Here is Foobar’s result
Cookie Value 7/3/2014 10:40:01 AM
See Foo True
See Foobar True
See Foonot False
Foobar can read Foo’s cookies. Foonot can also see Foo's cookies. Foonot and Foobar do not see each other's cookies.
Here is the raw header information from Fiddler:
GET /Foobar/ HTTP/1.1
Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
Accept-Language: en-US
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.3)
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Host: webdev
Cookie: Test Foobar=7/3/2014 10:40:01 AM; Test Foo=7/3/2014 10:39:43 AM
I tried searching for information on this issue but couldn’t come up with anything.
Is this a known thing?
Is there a way to prevent this?
Isn’t this a big security issue? In a shared hosting environment couldn’t I create a domain name that is longer than an existing one and pick up their cookies?
I'm not sure what the actual URLs of Foo, Foobar and Foonot are.
However, a cookie can be shared among multiple subdomains.
For example, IE10 shares the cookie among the following sites -
mysite.com
Foo.mysite.com
Foobar.mysite.com
Foonot.mysite.com
If you want the cookie to be available only in a specific domain, you can set the cookie's Domain property:
myCookie.Domain = "Foo.mysite.com";
When determining if a cookie is included in the request the cookie path is not examined as a ‘path’ but simply as a string value.
If it was processed as a path then the fact that one site happens to begin with the same text as another would not result in a match. MyDomain.com/Foo is a different web site than MyDomain.com/Foobar. I see it as a different site, IIS serves it up as a different web site, but the browser sees them as the same path as far as the cookies are concerned. The browser is simply doing a string compare on the path values and “/Foo” is the beginning of “/Foobar” so the browser includes those cookies.
Normally this is used so the cookies for a parent web site are available to the child web sites. If there is a site named “Parent” and a nested site named “Child” then the parent’s cookies with a path of “/Parent” would be available to the child site. The child’s cookies with a path of “/Parent/Child” would not be available to the parent. The situation I came across is that a site can pick up the cookies for one of its siblings when the sibling’s path is the start of that site’s path.
I suspect that this string comparison is the one that causes the cookie paths to be case sensitive.
So the answers to my questions are:
Is this a known thing?
Yes. The browser is doing a simple string comparison on the cookie’s path to determine if it should include the cookie in the request.
Is there a way to prevent this?
Yes. Append a slash to the end of the path; “/Foo/” is no longer a prefix of “/Foobar/”.
myCookie.Path = Me.Request.ApplicationPath + "/"
Isn’t this a big security issue? In a shared hosting environment couldn’t I create a domain name that is longer than an existing one and pick up their cookies?
No. As @the_lotus points out, the domains would be different. The paths only have an effect within a domain. I was only thinking of the paths and didn’t consider the domains.
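The prefix behavior and the trailing-slash fix can be sketched in a few lines of C#. PathMatches below is an illustrative stand-in for the string comparison described above, not a real browser API:

```csharp
// Mimics the simple string comparison the browser applies to cookie
// paths, as described in the answer above. PathMatches is an
// illustrative stand-in, not a real browser API.
using System;

class CookiePathDemo
{
    static bool PathMatches(string cookiePath, string requestPath) =>
        requestPath.StartsWith(cookiePath, StringComparison.Ordinal);

    static void Main()
    {
        Console.WriteLine(PathMatches("/Foo", "/Foobar/"));   // True  - Foobar leaks Foo's cookies
        Console.WriteLine(PathMatches("/Foo/", "/Foobar/"));  // False - trailing slash prevents the leak
        Console.WriteLine(PathMatches("/Foo/", "/Foo/page")); // True  - Foo still gets its own cookies
    }
}
```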
I am new to Task-based programming and this new HttpClient class, but I read the examples and documentation on the MSDN and have a basic understanding of both. I tried to create a basic application that sends an async request, but it has already failed. It seems to be a problem with the URL, but have a look at the code first:
public static async void ScrapeDailyRaces()
{
HttpClient httpClient = new HttpClient();
Stream myStream = await httpClient.GetStreamAsync("https://mobile.bet365.com/");
}
I tried replacing the URL with http://www.google.com and also https://www.google.com; both worked, so it isn't a problem with HTTPS. I also tried adding www to the faulty URL, resulting in https://www.mobile.bet365.com/, but it still doesn't work. Any ideas?
Exception details: "The underlying connection was closed: An unexpected error occurred on a receive." and "An error occurred while sending the request."
Just in case someone ran into the same problem, I fixed it by adding a user agent to the request using the following:
httpClient.DefaultRequestHeaders.Add("user-agent", "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)");
Hope this helps.
Afternoon All,
Just after a bit of advice on the best method to use for the following.
I am newish to .NET and have an ASP.NET web page in development that simply lists some internal web sites, pings each one, and reports its status (online/offline). This is currently activated by the click of a button.
I need to set up this development page so that it automatically runs at a specific time in the morning (say 7am for argument's sake) and then notifies a user group by email of the status of these items.
I have used Microsoft Visual Studio (VB) 2010 before and can create simple web forms that connect to, extract from, and update data in SQL Server 2008. I have also had some experience creating scheduled jobs in SQL, but not much.
I thought I could maybe create a scheduled job in SQL Server 2008, find a way to populate the data into the database, use this data in my website and display it in a GridView or something, and have either the SQL job or the website email a group of users the status of these internal web sites.
Does anyone know if I would be able to complete the above just in .NET? Am I able to write a script of some sort, or schedule the web page to run at a specified time?
I'm not 100% sure of the best method to tackle this job and I have limited experience. Can anyone suggest the best way to complete the above?
Regards
Bet.
Although fairly trivial to implement, I don't believe a ping command is useful in the context of what you are trying to achieve.
As Fredrik pointed out, a ping only says that the server is available. It makes no statement as to whether an individual website is functional on that server.
If I was doing this I would create a service that runs every so often. The service would issue a get request to the web sites, do a little bit of parsing on the content to make sure what was returned was expected, and update a record in a database stating the time of the connection and the status (up/down).
For example:
public String CheckSite(String postLocation) {
    String result;
    HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(postLocation);
    // Setting the user agent resolves certain issues that *may* crop up depending on the server
    httpRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)";
    httpRequest.KeepAlive = false;
    httpRequest.Method = "GET";
    using ( HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse() ) {
        using ( StreamReader reader = new StreamReader(httpResponse.GetResponseStream()) ) {
            result = reader.ReadToEnd();
        } // using reader
    } // using httpResponse
    return result;
}
This simple call will load a page from a server. From there you can parse to see if you have words like "error" or what have you. Provided it looks good then you report back that the site is up.
I know the above is C#, but you should be able to easily convert that to VB if necessary. You should also place a try .. catch around the call. If it errors out then you know the server is completely offline. If the page returns, but contains "error" or something then you know the server is up but the app is down.
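For instance, a hedged usage sketch of the CheckSite call above with the try..catch in place (the status strings and the "error" keyword check are illustrative assumptions, not part of the original answer):

```csharp
// Wraps CheckSite (defined above) so an unreachable server is reported
// as "down" instead of throwing. Requires using System.Net for WebException.
// The status strings and "error" keyword are illustrative assumptions.
public String GetSiteStatus(String url)
{
    try
    {
        String page = CheckSite(url);
        // Server answered; flag the app as broken if the page reports an error.
        if (page.IndexOf("error", StringComparison.OrdinalIgnoreCase) >= 0)
            return "up, but page reports an error";
        return "up";
    }
    catch (WebException)
    {
        // No response at all: the server (or site) is completely offline.
        return "down";
    }
}
```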
Personally... I think the most elegant solution would be: (untested)
public String CheckSite(String postLocation) {
    String result = String.Empty;
    HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(postLocation);
    // Setting the user agent resolves certain issues that *may* crop up depending on the server
    httpRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)";
    httpRequest.KeepAlive = false;
    httpRequest.Method = "GET";
    using ( HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse() ) {
        if (httpResponse.StatusCode != {check for valid status codes here})
        {
            // Do something based upon an invalid response.
        }
        result = httpResponse.StatusCode.ToString();
    } // using httpResponse
    return result;
}
When a button/link is clicked, I want this URL to be called followed by the execution of the following statements.
The ASP.Net page is in C# btw.
Function A
    statement A
    call abc.aspx
    statement B
abc.aspx is a silent page, doesn't display anything on the page but creates an output.txt file. So when abc.aspx is called, output.txt file is created and Statement B is executed seamlessly. Hope I made sense.
I have no .Net programming knowledge. Please help me.
Thank you..
You can create an HttpWebRequest object to call the abc.aspx page,
e.g.
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create("http://host/abc.aspx");
Or use WebClient to fire a request to the page:
WebClient client = new WebClient();
// Add a user-agent header to emulate a real request from a browser
// (also needed if the requested URI contains a query).
client.Headers.Add("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");
Stream data = client.OpenRead("http://host/abc.aspx");
StreamReader reader = new StreamReader(data);
string s = reader.ReadToEnd();
Console.WriteLine(s);
data.Close();
reader.Close();
http://msdn.microsoft.com/en-us/library/system.net.webclient(VS.80).aspx
This is exactly what HttpServerUtility.Execute is for.
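A minimal sketch of that approach (the button handler name is hypothetical): Server.Execute runs abc.aspx in-process as part of handling the current request, so output.txt is created between statement A and statement B without an extra HTTP round trip.

```csharp
// Sketch of using Server.Execute from a button click handler in the
// initial page's code-behind. The handler name is hypothetical;
// abc.aspx is the page from the question.
protected void Button1_Click(object sender, EventArgs e)
{
    // statement A
    Server.Execute("abc.aspx"); // runs abc.aspx now; it creates output.txt
    // statement B
}
```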
Why are you having the abc.aspx act as a standalone page?
From how your question reads, your goal is to get the output.txt file, so why not have a class that builds output.txt as a separate object which your initial page can call?
And then, if you need it accessible to the user, have abc.aspx call this class as well.
Or you can go with codemeit's suggestion of the HttpWebRequest (which, if abc.aspx is on a separate domain, is probably your best course of action).