I have my production site's app pool set to recycle every 2 hours or so. I noticed that when the first call to the site is made, the app pool caches the base URL (e.g. www.mysite.com). This makes sense, as it is used to resolve relative paths in ASP.NET, e.g. ~/MyFolder/MyPage.aspx, which is resolved to:
http://www.mysite.com/MyFolder/MyPage.aspx
However, since the site can also be reached via our host's name, e.g.
http://masdfg.my.provider.net
IIS thinks the URL is
http://masdfg.my.provider.net/MyFolder/MyPage.aspx
As you can imagine, this is causing issues with SSL, among other things. How can I prevent this from happening?
UPDATE: The workaround was to create a URL redirect. If anyone knows how to prevent this, let me know.
I hope I've understood your question correctly, but please do let me know if I haven't.
It sounds like the sole issue you have is that when you write the links to the response they sometimes reference the wrong root URL.
I notice that you use ~/. I think this resolves and writes the entire URL to the response. It is better to use only / when writing links to the response.
So in your example you would write /myfolder/mypage.aspx. The browser would then resolve the / to mean that it's from the root address of the site, whichever that may be.
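For instance (a minimal sketch, not from the original question; the control and folder names are placeholders), a link can be written root-relative from code-behind so the browser resolves it against whichever host it is already using:
// Root-relative link: the browser resolves it against the host it actually used.
lnkMyPage.NavigateUrl = "/MyFolder/MyPage.aspx";
// rather than an app-relative path resolved on the server:
// lnkMyPage.NavigateUrl = ResolveUrl("~/MyFolder/MyPage.aspx");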
Like I said, I hope I've understood your question correctly and apologies if I haven't.
I know it's a long shot, but I've had a similar problem with my IIS setup. I solved it by going to the already mentioned "Bindings" window through "Edit Bindings".
There I removed all of the unwanted bindings and added the host name www.mydomain.com that the server should answer to.
Finally I edited the windows hosts file at
%windir%\System32\drivers\etc\hosts
Adding the line
127.0.0.1 www.mydomain.com
This ensures that www.mydomain.com always resolves to the local computer.
After executing iisreset.exe as administrator my problem was over.
HttpContext.Current.Request.Url is not a cacheable item. That value comes from the HOST value of the HTTP headers, which means it is passed in to the application with each request.
The only time it should show that second URL is if the request's HOST value was masdfg.my.provider.net.
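To see this for yourself (an illustration of the point above, not code from the answer), compare the two values anywhere a request is in scope, e.g. in a page's code-behind:
// Both values come from the incoming request, not from anything cached at startup.
string hostHeader = Request.Headers["Host"];   // e.g. "masdfg.my.provider.net"
string urlHost    = Request.Url.Host;          // the same host, parsed into Request.Url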
There are three possible fixes here. The first is to set your bindings and have any requests to masdfg.my.provider.net forwarded over to www.mysite.com (a code-level sketch of this follows after the three options).
The second, because your primary issue appears to be SSL, is to get a unified communications (UC) SSL certificate and install that on your server, covering both the mysite.com and masdfg.my.provider.net domain names.
The third is to simply create a separate IIS site that points to the exact same production directory as the first one. Each site would then have only one domain name it is responsible for.
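If you would rather do the forwarding from the first option in code instead of through bindings, a minimal sketch in Global.asax might look like this (host names are the ones from the question; treat it as a starting point, not the definitive fix):
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpRequest request = HttpContext.Current.Request;

    // Assumption: anything arriving under the provider host should live on www.mysite.com.
    if (request.Url.Host.Equals("masdfg.my.provider.net", StringComparison.OrdinalIgnoreCase))
    {
        // Preserve the scheme, path and query string; only the host changes.
        string target = request.Url.Scheme + "://www.mysite.com" + request.Url.PathAndQuery;
        HttpContext.Current.Response.RedirectPermanent(target);
    }
}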
I am currently having issues with setting up an HTTPS domain redirect. I have a DNS URL redirect entry that points a few sub-domains to same-server URLs. For example:
docs.kipper-lang.org -> kipper-lang.org/docs/
play.kipper-lang.org -> kipper-lang.org/playground
The issue I am currently experiencing is that the subdomains mostly work, but only over HTTP. If I attempt to use HTTPS (for example https://docs.kipper-lang.org), the redirect won't work and gets stuck, apparently waiting for the HTTPS certificate (I think, but I don't know for sure, since it loads forever and then times out).
So my DNS provider does its job for the most part, but I am not sure how to add HTTPS to these redirects. Is there some DNS configuration, or even a middleman service for redirects, with HTTPS support built in? Receiving a "Warning: Insecure connection" every time someone uses the subdomains is a massive problem for me.
Note, though, that since I am hosting on GitHub Pages, I am unable to do these redirects on the server side myself, as I can't run any code there.
I would greatly appreciate any ideas for fixing this or what I could use to achieve this another way.
Thanks in advance!
I have a password reset email going out to users which uses Request.ServerVariables("SERVER_NAME") to generate a URL for the password reset page. Problem is, the URL of the web application was changed recently, and the old name is still being reflected in the SERVER_NAME server variable. How can I reset this so the new, current server name is used? I'd rather not restart the web app in IIS if I don't have to. (I haven't actually stepped through the code; if I'm understanding this correctly, it will work just fine locally because IIS gets refreshed a lot more frequently on my PC than it does on a production web server.) Or am I misunderstanding how server variables work?
In essence, from my reading around, SERVER_NAME may be the name of the Windows server itself and not necessarily the DNS name the rest of the world uses to reach that server. HTTP_HOST is probably a better bet because it contains the Host HTTP header, i.e. what the user typed into their address bar and what their browser subsequently passed in order to reach the site.
This is particularly helpful for sites that are multi-homed, by which I mean one set of code running as, e.g., two websites with different branding/styling and different bindings in IIS (different DNS names) but the same underlying code. Repeating back to users what they typed into the browser means they always think they are interacting with the same site.
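As a rough sketch of that suggestion (the page path and token variable below are placeholders, not taken from the question), the reset link could be built from the Host header instead of SERVER_NAME:
// Build the reset URL from what the user's browser actually asked for.
string host = Request.ServerVariables["HTTP_HOST"];   // or Request.Url.Host
string resetToken = Guid.NewGuid().ToString("N");     // placeholder for the app's real token
string resetUrl = Request.Url.Scheme + "://" + host + "/ResetPassword.aspx?token=" + resetToken;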
I've noticed a large increase in the number of events logged daily that have &hash= in the URL. The requested URL is the same every time but the number that follows the &hash= is always different.
I have no idea what the purpose of the &hash= parameter is, so I'm unsure if these attempts are malicious or something else. Can anyone provide insight as to what is being attempted with the requested URL? I have copied in one from a recent log below.
https://www.movinglabor.com:443/moving-services/moving-labor/move-furniture/&du=https:/www.movinglabor.com/moving-services/moving-labor/move.../&hash=AFD3C9508211E3F234B4A265B3EF7E3F
I have been seeing the same thing in IIS on Windows Server 2012 R2. They were mostly HEAD requests. I did see a few other, more obvious attack attempts from the same IP address, so I'm assuming the du/hash thing is also intended to be malicious.
Here's an example of another attempt, which also tries some URL encoding to bypass filters:
part_id=D8DD67F9S8DF79S8D7F9D9D%5C&du=https://www.examplesite.com/page..asp%5C?part...%5C&hash=DA54E35B7D77F7137E|-|0|404_Not_Found
So you may want to look through your IIS logs to see if they are trying other things.
In the end I simply created a blocking rule for it using the URL Rewrite extension for IIS.
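If you would rather block it in code than with the rewrite module (this is my own sketch of an alternative, not the actual rewrite rule), something like this in Global.asax would reject any request carrying the du/hash pattern:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Crude filter: short-circuit requests whose raw URL carries the suspicious
    // du= and hash= parameters seen in the logs above.
    string rawUrl = HttpContext.Current.Request.RawUrl;
    if (rawUrl.IndexOf("hash=", StringComparison.OrdinalIgnoreCase) >= 0 &&
        rawUrl.IndexOf("du=", StringComparison.OrdinalIgnoreCase) >= 0)
    {
        HttpContext.Current.Response.StatusCode = 404;
        HttpContext.Current.ApplicationInstance.CompleteRequest();
    }
}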
We are working on the conversion of an ASP site to ASP.Net and are running into problems with redirects and resource locations. Our issues are coming from a peculiarity of our set-up. Our site can be accessed in two ways:
Directly by URL: http://www.mysite.com - in this case everything works fine
Via a proxy server with a URL like: http://www.proxy.com/mysite_proxy/proxy/
In #2 "mysite_proxy" is a mapping on proxy.com that directs the request behind the scenes to www.mysite.com, "proxy" is a virtual sub-website that just redirects the request to the root of www.mysite.com. It essntially is meant to give us a convenient way of knowing if a request is hitting the site from the proxy or not.
We are running into two problems with this setup:
Using Response.Redirect, either with "~" or a plain relative path (Default.aspx), generates a 302 response with a location of "/proxy/rest_of_the_path.aspx". This causes the browser to request http://www.proxy.com/proxy/rest_of_the_path.aspx, which isn't anything and doesn't even hit our server, so we can't do an after-the-fact rewrite.
Using "~"-based URLs in our pages for links, images, style sheets, etc. creates the same kind of path: "/proxy/path_to_resources.css". We could probably solve some of these by using relative paths for all of these resources, though that would be a lot of work and would do nothing to address similar resource links generated by the framework and third-party components.
Ideally I want to find a global fix that will make these problems transparent to the developers working on the site. I have a few ideas at this point:
Getting rid of the proxy; it is not really needed and is there for administrative rather than technical reasons. This is the easiest to accomplish technically and the hardest to accomplish in the real world.
Handing the problem off to the group that runs the proxy and saying it is their problem to fix.
Using a response filter to modify the raw HTML before it is sent to the client (a rough sketch of this is below). I know this could fix my resource links, but I am not certain about the headers (I need to test that), and there would be a performance hit from parsing every response looking for and rewriting URLs.
All of these solutions have big negatives in my mind and I was hoping someone might have another idea. So any thoughts?
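For what the third idea might look like in practice, here is a rough sketch (my own illustration; the class name and path prefixes are assumptions about this set-up), installed early in the request with Response.Filter = new ProxyPathFilter(Response.Filter);
// Requires System, System.IO and System.Text.
// A buffering response filter that rewrites root-relative URLs emitted by the
// framework ("/proxy/...") into the prefix the proxy exposes externally.
public class ProxyPathFilter : Stream
{
    private readonly Stream _inner;
    private readonly StringBuilder _buffer = new StringBuilder();

    public ProxyPathFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Buffer the HTML so URLs split across Write calls are still caught.
        _buffer.Append(Encoding.UTF8.GetString(buffer, offset, count));
    }

    public override void Flush()
    {
        // Simplification: rewrite and emit everything when the response is flushed.
        string html = _buffer.ToString().Replace("=\"/proxy/", "=\"/mysite_proxy/proxy/");
        byte[] bytes = Encoding.UTF8.GetBytes(html);
        _inner.Write(bytes, 0, bytes.Length);
        _inner.Flush();
        _buffer.Length = 0;
    }

    // Minimal Stream plumbing required by the abstract base class.
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { return 0; } }
    public override long Position { get { return 0; } set { } }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { }
}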
Aside: there are a lot of posts up already that deal with the reverse of this issue (I have a relative URL, how do I make it absolute?), but I didn't come across anything that fits the bill for the other direction.
As a fix, I'd go with a small detection routine in Global.asax:Session_Start (since I imagine the proxy doesn't actually start another application instance), set a session variable with the correct path, and use it instead of '~'.
In the case that a different application instance is used, use Application_Start instead of Session_Start and a static global variable instead of a Session variable.
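A rough sketch of that idea (the detection condition and the stored prefixes are assumptions about this particular proxy set-up, so adjust them to whatever signal your proxy actually provides):
protected void Session_Start(object sender, EventArgs e)
{
    HttpRequest request = HttpContext.Current.Request;

    // Assumption: requests arriving via the proxy carry "/proxy" in the raw URL
    // (or a header the proxy adds -- whichever your set-up exposes).
    bool viaProxy = request.RawUrl.StartsWith("/proxy", StringComparison.OrdinalIgnoreCase);

    // Remember the prefix to prepend to links instead of using "~".
    Session["BasePath"] = viaProxy ? "/mysite_proxy/proxy/" : "/";
}
Links and resource URLs would then be built as, for example, Session["BasePath"] + "path_to_resources.css".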
We've recently run into an issue with our ASP.NET application where if a user goes to ourcompany.com instead of www.ourcompany.com, they will sometimes end up on a page that does not load data from the database. The issue seems to be related to our SSL certificate, but I've been tasked to investigate a way on the code side to fix this.
Here's the specific use case:
There is a user registration page that new users get sent to after they "quick register" (enter name, email, phone). With "www" in the URL (e.g. "www.ourcompany.com") it works fine, they can proceed as normal. However, if they browsed to just "ourcompany.com" or had that bookmarked, when they go to that page some data is not loaded (specifically a list of states from the DB) and, worse, if they try to submit the page they are kicked out entirely and sent back to the home page.
I will go into more detail if necessary, but my question is simply whether there is an application setting I can use to keep the session for the app regardless of whether the URL has the "www" or not. Buying a second SSL cert isn't an option at this point unless there is no other recourse, so I have to find a way to solve this without another certificate.
Any ideas to point me in the right direction?
When your users go to www.ourcompany.com they get a session cookie for the www subdomain. By default, cookies are not shared across subdomains, which is why users going to ourcompany.com do not have access to their sessions.
There is a useful thread discussing this issue here. The suggested solution is:
By the way, I implemented a fairly good fix/hack today. Put this code on every page:
Response.Cookies["ASP.NET_SessionId"].Value = Session.SessionID;
Response.Cookies["ASP.NET_SessionId"].Domain = ".mydomain.com";
Those two lines of code rewrite the session cookie so it's now accessible across sub-domains.
Doug, 23 Aug 2005
Surely you are trying to solve the wrong problem?
Is it possible for you to just implement URL rewriting and make it consistent?
So for example, http://example.com redirects to http://www.example.com ?
For an example of managing rewriting see:
http://paulstack.co.uk/blog/post/iis-rewrite-tool-the-pain-of-a-simple-rule-change.aspx
From the browser's point of view, www.mysite.com is a different site than mysite.com.
If you have a rewrite engine, add a rule to send all requests to www that don't already have it.
Or (this is what I did) add a separate IIS site with the "mysite.com" host header and set the IIS flag to redirect all traffic to www.
In either of these cases, any time a browser requests a page without the www prefix, it will receive a redirect response sending it to the correct page.
Here's the redirect site home directory properties:
And the relevant host header setting:
This fixes the issue without requiring code changes, and incidentally prevents duplicate search results from Google etc.
Just an update, I was able to fix the problem with a web.config entry:
<httpCookies domain=".mycompany.com" />
After adding that, the problem went away.