F5 adds apm_do_not_touch to the HTML response to the end user - asp.net

I am supporting an ASP.NET application running on 3 web servers, with an F5 system acting as the firewall and load balancer. I have no experience at all with F5, but the following issue seems to be related to it.
The issue started after we enabled F5 load balancing. In short, it sometimes causes JavaScript in the web page to fail; after refreshing the page it works fine.
To trace the issue I compared a response that fails with one that succeeds after a refresh. The difference was that the failing one contained an HTML tag that is not added by our application, apm_do_not_touch, with a script tag inside it.
It seems to happen when the F5 switches from one server to another, since the issue went away when we directed all traffic to a single server.
Any advice on the possible cause and how we can solve it?

APM is F5's Access Policy Manager module and is used for VPN, Web Portals, and federated authentication. The apm_do_not_touch tag is part of this product and is used when you want to prevent the APM module from rewriting portions of HTML such as external links.
If you're not accessing the application through a web portal, this should not be applied, and you'll need to work with whoever set up the access policy to resolve it, since the APM policy is possibly being applied to your application erroneously.
Here is more information on the apm_do_not_touch tag. Depending on your version, there was a known issue with #cc_on in F5 BIG-IP version 11.1 whose workaround was to prevent the APM module from rewriting that command. The same workaround may pose a solution for you. Either way, there are additional complexities in your client traffic flow that you will need to engage your network team/BIG-IP administrators on, to ensure your application and their policies don't clash.
It could be as simple as removing the APM policy from your application's pathway, but your admins will be able to identify whether it's required for external access or reverse proxy requirements.
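For reference, when the exclusion is applied deliberately it is done by wrapping the affected markup in the tag, roughly as in the sketch below. This is only an illustration based on the description above; the exact usage depends on your BIG-IP version and should be confirmed with your F5 administrators.

    <!-- Hypothetical page fragment: content inside apm_do_not_touch is passed
         through by the APM rewriter instead of being rewritten. -->
    <apm_do_not_touch>
        <script type="text/javascript" src="/scripts/site.js"></script>
    </apm_do_not_touch>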

Related

How do I find where a redirect is occurring on my website

We're moving an instance of a third-party, .NET-based website from an external hosting service to a Win2016 server running IIS 8. Under the site, there's a WCF web service in a subfolder. There are no virtual directories or apps.
mysite.../Order/v4/service.svc
When I browse to pull the WSDL (https://MySite/Order/v4/Service.svc?wsdl), it redirects to Login.aspx (https://MySite/Order/v4/Login.aspx).
IIS has no default document set / web.config's defaultDocument is commented out.
Vendor indicates:
"That would be a redirect that was set up either on IIS or another appliance that is doing that."
The same behavior occurs when running on the server itself (localhost), and the systems engineer confirms the request shouldn't be leaving the network to hit any firewall.
There are no other .config files on the server with any reference to "Login.aspx".
This still feels like some piece of configuration, but even steps I shouldn't need to take, like restarting the server to make sure no cached settings are hanging around, haven't affected it.
Any guesses on what might be attempting to redirect?
Following Rich-Lang's suggestion in the comments provided the information needed to identify that the Global.asax file was handling an error originating in the web.config. Since the code in this case was in code-behind in a DLL, and the vendor indicated their code does not redirect, I had not suspected that culprit before. Removing the Global.asax and turning off customErrors allowed me to see the underlying issue.
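For anyone hunting a similar hidden redirect, the web.config change that surfaces the underlying error is the standard customErrors switch; a minimal sketch (removing the Global.asax, as described above, is a separate manual step):

    <!-- web.config: temporarily show the real exception instead of letting error
         handling redirect to a friendly page such as Login.aspx. Revert afterwards. -->
    <configuration>
      <system.web>
        <customErrors mode="Off" />
      </system.web>
    </configuration>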

How to turn off a 302 redirect in Azure Websites

I know this is strange.
I have an Azure website. If you visit it over HTTP, e.g. web1.azurewebsites.net, you are immediately redirected (302) to https://web1.azurewebsites.net.
I do not have any code in the deployed application that does this, nor anything in my web.config.
To test, I created another S1 (i.e. SSL-capable) Azure website from scratch, deployed the EXACT same code (even used the same database), and I do not get the redirects.
Where is this 302 redirect setup in Azure?
To be clear:
1) Exact same code, webconfig etc.
2) only difference is the Azure publish location
3) I do have some recollection of ticking a box somewhere saying 'only allow ssl', but I really can't find it anywhere now.
All references online either a) explain how to turn SSL ON, or b) talk about web.config and MVC code. Neither applies here, given the test above.
For your info, in Edge's F12 tools I can see the redirect under the Network tab,
along with X-Powered-By: ASP.NET, etc.
I can't find a setting anywhere in Azure portal to turn this off. Any help appreciated.
Hard to investigate without a demo or code.
If your project doesn't have any rewrite rules, then it must be the App Service that is doing this.
You might have an extension installed that does this: https://blog.nicholasrogoff.com/2017/01/12/azure-app-service-force-redirect-from-http-to-https-the-easy-way/
Or the rewrite happens at a higher level. Do you have some kind of service that sits on top of your App Service, e.g. Cloudflare?
You can try cloning your existing app (requires a premium subscription) to see if the problem is reproduced.
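For what it's worth, the redirect that such an extension or an administrator typically injects is a standard IIS URL Rewrite rule along these lines. This is a sketch of what to look for in web.config or the App Service configuration, not the asker's actual setup:

    <!-- Typical HTTP-to-HTTPS rule (illustrative only). redirectType="Found"
         produces the 302 described in the question; many guides use "Permanent". -->
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Redirect to HTTPS" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTPS}" pattern="^OFF$" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>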
The real problem was the browser, not the code!
At some point I had put in the HSTS header for testing. Once Chrome has seen that header, it will always 'redirect' itself to the HTTPS version.
You can either:
Use incognito mode, or
Clear the 'always redirect to HTTPS' (HSTS) entry for the site in Chrome's advanced settings (chrome://net-internals/#hsts).
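For reference, the header in question is usually added as a custom response header; a sketch of what such a test configuration might have looked like (the max-age value is illustrative):

    <!-- web.config: once a browser has seen this header over HTTPS, it redirects
         itself to HTTPS for the site until max-age expires, with no server round trip. -->
    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <add name="Strict-Transport-Security" value="max-age=31536000" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>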

Flex web application works fine before compiling

I created a component in Flex which auto-completes a couple of text inputs as users type an entry. When running the application from Flex, everything works fine. However, after I compile the application and load it, the auto-complete does not work. Here is some background information.
Created in Adobe Flash Builder 4.5.
Web Application is running on an internal network.
The service which the auto complete uses is an external service.
The internal server which hosts the web application can load the URL of the external service just fine.
I am not sure if this is a permissions issue or what. Any insight would be appreciated.
I had similar problems with receiving data from a web service. If the crossdomain file is not where it is supposed to be (webservice.domain.com/crossdomain.xml), you will receive a 404 error. So it does not sound like that is your issue. If your crossdomain file does not contain the right tags, however, it won't throw an HTTP error, but it still won't work either.
It won't work properly by default if you are going from an HTTP server (where your application resides) to an HTTPS server (where your service is). Allowing that is typically bad security practice, but if you decide it is OK, you can set secure="false" on the allow-access-from tag.
Also, you may need to include both the allow-access-from tag and the allow-http-request-headers-from tag to get the data you are looking for.
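A minimal sketch of a crossdomain.xml combining both tags, assuming a placeholder domain (internalapp.example.com) for the server hosting the Flex application:

    <?xml version="1.0"?>
    <!-- Served from the web service host as /crossdomain.xml. secure="false"
         permits the HTTP-to-HTTPS case described above; tighten as appropriate. -->
    <cross-domain-policy>
      <allow-access-from domain="internalapp.example.com" secure="false" />
      <allow-http-request-headers-from domain="internalapp.example.com" headers="*" />
    </cross-domain-policy>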
Here is the crossdomain policy file specification from Adobe; it is a good resource for figuring out which attributes are required for each tag: http://www.adobe.com/devnet/articles/crossdomain_policy_file_spec.html
Good luck!

Credentials prompt for an aspx page configured for anonymous authentication

From Flex we call an Upload.aspx page which is configured for anonymous authentication. Most of the time it works like a charm, but once in a while the browser prompts with an 'enter credentials' popup...
The whole site is configured for Windows Authentication, but some pages and folders are set to use anonymous authentication. This is done using the location tag in web.config.
What could be the reason for this?
UPDATE:
Only happening in Internet Explorer... they should deport it.
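For reference, the per-page override the question describes ("the location tag in web.config") typically looks something like the sketch below; Upload.aspx is the page named above, and the exact elements depend on how anonymous access is actually granted in this site:

    <!-- web.config (sketch): open Upload.aspx to anonymous users while the rest
         of the site stays under Windows authentication. -->
    <location path="Upload.aspx">
      <system.web>
        <authorization>
          <allow users="?" />  <!-- "?" = anonymous users -->
          <allow users="*" />
        </authorization>
      </system.web>
    </location>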
If you use Firefox with Firebug, open up the Net panel; it will show you which request is triggering the credentials prompt. If you cancel it, it will show an access denied in red, and that will easily help you hunt the issue down.
Maybe it's because a file (image, CSS, etc.) is being referenced that doesn't exist?
A couple of things I would check out:
Are there any images/other files that are added to your page using the FQDN? If so, are any of these pointing to an external site or staging site that might require credentials?
Is it possible that someone has removed permissions from the application pool credentials on the web server for some specific file or files the site is requesting?
Is the site load balanced or part of a farm? It could be that one or more servers are configured incorrectly and the rest are OK. Then, if by chance you hit a bad server, you get the prompt.

Re-publishing an ASP.NET Web Application While Site is Live

I am trying to get a grasp on how to handle updates to a live, functioning ASP.NET (2.0 or greater) Application while there are users on the site.
For example, suppose SO is an ASP.NET Web Application project. The project code compiles down to a single DLL in the bin folder. Now, there are constantly users on SO, so what would happen to users' actions/sessions if you used the Visual Studio .NET "Publish" feature (or just FTP'd everything again manually) while they are using the site?
Would creating an ASP.NET Web Site, instead, alleviate any problems that may or may not exist with the scenario above? I am beginning to develop a web site as a user-driven Web Application, and I want to make sure that my inexperience with this would not potentially annoy the [potentially] many users that I [want to] have 24/7.
EDIT: Sorry, I should have put this in a more exact context. Assume that this site is being hosted by a web hosting service with monthly fees. I won't be managing the server itself, just what the web host allows as a user of their services.
I create two Web sites in IIS. One is the production Web site, and the other is a static Web site with an HttpHandler that sends all requests to a single static "We're updating" HTML page served with an HTTP 503 Service Unavailable. Typically the update Web site is turned off. When it's time to update, we stop the production Web site, start the update Web site, and now we can fiddle with the production Web site all we want without worrying about DLLs being locked or worker processes needing to be spun down.
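A minimal sketch of the kind of maintenance handler described above, assuming a classic IHttpHandler registered for every path in the update site's web.config; the file name offline.html and the Retry-After value are illustrative:

    // MaintenanceHandler.cs (sketch): answers every request with 503 plus a static page.
    using System.Web;

    public class MaintenanceHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.StatusCode = 503;                // Service Unavailable
            context.Response.StatusDescription = "Service Unavailable";
            context.Response.AddHeader("Retry-After", "600"); // hint clients/crawlers to retry later
            context.Response.ContentType = "text/html";
            context.Response.WriteFile(context.Server.MapPath("~/offline.html"));
        }
    }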
I started doing this because
App_Offline.htm really does not work well in Web Gardens, which we use.
App_Offline.htm serves its page as 404, which is bad if you're down for a meaningful period of time.
We can start the upgraded production Web site with modified settings (only listening on localhost), where we can do a last-minute acceptance/verification that everything is working before we flip the switch, turning off the update Web site and re-enabling the production Web site.
Things this does not solve include
Any maintenance that requires a restart of the server--you still have downtime where no page is served.
Any maintenance that diddles with the .NET runtime, like upgrading to the latest service pack.
Other approaches I've seen include
Having two servers. Send all load balancing requests to one server, upgrade the other one; then rinse and repeat. Most of us don't have this luxury.
Creating multiple bin directories, like bin-1.0.0.0 and bin-1.1.0.0 and telling ASP.NET which bin directory to use in the web.config file. (One advantage of this is that reverting to a previous binary is just editing a config file. A disadvantage is that it's harder to revert resources that don't end up in your binaries, like templates and images and such.) I don't remember how this actually worked--I think the application did some late assembly loading in its Global.asax based on its own web.config section (since you touched the web.config, the app had restarted, so it was okay).
If you find a better way, let me know!
Changing to the ASP.NET Web Site model won't have any effect, as the recycle will still happen. Some of the changes that trigger it for sure: web.config, Global.asax, App_Code.
After the recycle, the user will still be logged in, because ASP.NET just has to validate the authentication ticket again. That is, provided you use a fixed machine key; otherwise it will change on each recycle. This is something you want to do anyway, as other things can break if the key changes across requests, e.g. viewstate validation and embedded resources (decryption of the URL fails).
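A sketch of the fixed machineKey mentioned above; the key values are placeholders and must be generated (for example with IIS Manager's Machine Key feature) and shared across all servers in the farm:

    <!-- web.config: a fixed key so forms-auth tickets and viewstate remain valid
         across recycles, redeploys, and all servers behind the load balancer. -->
    <system.web>
      <machineKey validationKey="PLACEHOLDER-GENERATE-YOUR-OWN"
                  decryptionKey="PLACEHOLDER-GENERATE-YOUR-OWN"
                  validation="SHA1" decryption="AES" />
    </system.web>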
If you can put the session out of process, for example in SQL Server, you will avoid losing the session. If you can't, your code will have to account for that. There are plenty of scenarios where you can avoid using session, and others where you can wrap it and re-retrieve the info if the session was cleared. This should leave you with a handful of specific cases that you know can give trouble to users, and for those you apply some of the suggestions others have already made.
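And a sketch of out-of-process session state in SQL Server, as suggested above; the server name is a placeholder, and the ASPState database is created with the aspnet_regsql.exe tool:

    <!-- web.config: sessions stored in SQL Server survive app-domain recycles
         and are shared by every server in the farm. -->
    <system.web>
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=SQLSERVER01;Integrated Security=SSPI;"
                    cookieless="false"
                    timeout="20" />
    </system.web>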
One solution could be to deploy your application into a load balanced environment (web farm).
When deploying a new version you would use the load balancer to redirect requests to the server you are not deploying to.
App_Offline.htm is a great solution for this, I think.
On SO we see the "application currently unavailable" page when a deployment begins.
I am not sure how SO handles it, but we usually put up a holding page, so whatever the user has done (adding a question or answering questions) does not get updated. As soon as they submit something, they see a holding page asking them to try again after some time.
And if I were the user, I would usually press the back button to make sure what I entered is saved in the browser history so that I can post it later.
Some of the sites we run are in a clustered environment, so I take one server offline, inform the load balancer that it will not be available, and once I make sure the new version is working fine I bring it back live. I do the same thing for the next server.
Do we have any other options?
It is not a technical solution, but set up a scheduled maintenance window. You can announce it in advance, giving your user base fair warning that there is a possibility the application will not be available during that time frame.
