Are there any server configurations that need to change for session management? - apache-flex

I have developed an application with JSP and Flex; the Flex application interacts with the JSP side through an HTTPService. I deployed the application on one server whose URL uses HTTP, and it works fine. But when I deployed the project on another server (HTTPS), the application does not run: the JSP session is not handled there. Is there any server configuration which needs to be checked?

I have no idea what you're talking about with "session is not handled". Please elaborate on the problem from a developer's perspective, not an end-user's. What exactly happens? What exactly does not happen?
I can at least tell that sessions are usually backed by cookies. Cookies, in turn, are usually bound to a specific domain and path. Cookies do not depend on the protocol used. Roughly said, if the web container has created a cookie to track the HttpSession, it will by default use request.getServerName() as the cookie domain and request.getContextPath() as the cookie path.
So if you, for example, have this web application at http://example.com/context, then the cookie will be created for host example.com and path /context, regardless of the protocol. But when you fire a request at http://example.com/anothercontext, then by default you won't get the same cookie back, and thus also not the same session.
However, most web containers provide configuration options which can influence the cookie host and path. Tomcat, for example, supports an emptySessionPath attribute on the HTTP connector which causes the cookie path to always be /. This way http://example.com/context and http://example.com/anothercontext will be able to share the same cookies and thus also the session.
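For Servlet 3.0 and newer containers there is also a portable way to achieve the same, via SessionCookieConfig. A minimal sketch, assuming such a container (the listener class name is illustrative):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Must run before the first session is created, hence a context listener.
@WebListener
public class SessionCookieInitializer implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Share the JSESSIONID cookie across all contexts on this host.
        sce.getServletContext().getSessionCookieConfig().setPath("/");
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Nothing to clean up.
    }
}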
This knowledge of how it all works "under the hood" should give you a better understanding of your problem and thus also make it easier to nail down the root cause.

Related

HTTP POST from app.example.com to localhost: session cookie not sent

I have two Spring web applications that work together. I'm running the first application from the IDE on localhost, while the second one is running in Docker on app.127.0.0.1.nip.io.
The two applications interact indirectly through the user's browser by redirecting and POSTing between the two apps. This is slightly similar to how an SP and an IdP work together in SAML2.
In my case, the first application on localhost sends a 302 to the second application. After doing some work, the second application sends an HTML page with a form and JS code to auto-submit it back to my first application on localhost. The HTML looks similar to this:
<form method=POST action="http://localhost:8080/some/path">
...
</form>
My first application is using Spring Session with a session cookie, and this works just fine. However, when the second application makes the browser POST the form, the browser does not send the session cookie with the POST request.
When both applications are running in docker under .127.0.0.1.nip.io, the cookie is sent.
I've tried to find any hint if this behaviour is expected, and what headers or other bits the applications could use to influence this.
At this point this is mostly an annoyance while debugging, but I'm concerned that once the two applications run on different FQDNs and/or different domains, the browsers will also block the cookie from being sent.
I've tested this with current versions of Chrome and Firefox.
The problem is the new(ish) SameSite cookie policy, which covers exactly this case: another application is POSTing to a host via HTTP. The default now is SameSite=Lax, which does not allow sending first-party cookie values on this request.
The solution is to allow the session cookie to be sent by specifying SameSite=None (note that modern browsers only accept SameSite=None on cookies that also carry the Secure attribute). Be aware, however, that this might create security vulnerabilities. For my application this is not an issue, so I can allow the cookie to always be sent, especially when I run my application in the debugger.
For the production deployment I will be able to tighten this, since both applications will run under the same domain (a.example.com and b.example.com) and both will use TLS, so I can set the session cookie back to SameSite=Lax.
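For the record, in Spring Session the SameSite attribute can be set on the cookie serializer. A minimal sketch, assuming Spring Session 2.1+ (the configuration class name is illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.web.http.DefaultCookieSerializer;

@Configuration
public class SessionCookieSameSiteConfig {
    @Bean
    public DefaultCookieSerializer cookieSerializer() {
        DefaultCookieSerializer serializer = new DefaultCookieSerializer();
        // SameSite=None lets the browser attach the session cookie to
        // cross-site POSTs; modern browsers only honor it together with Secure.
        serializer.setSameSite("None");
        serializer.setUseSecureCookie(true);
        return serializer;
    }
}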
Here's a decent explanation: https://web.dev/samesite-cookies-explained/

Postman is not using the cookie

I've been using Postman in my app development for some time and never had any issues. I typically use it with Google Chrome while I debug my ASP.NET API code.
About a month or so ago, I started having problems where Postman doesn't seem to send the cookie my site issued.
Through Fiddler, I inspect the call I'm making to my API and see that Postman is NOT sending the cookie issued by my API app. It's sending other cookies, but not the one it is supposed to send.
Under "Cookies" in Postman, I do see the cookie I issue, i.e. .AspNetCore.mysite_cookie.
Any idea why this might be happening?
P.S. I think this issue started after I made some changes to my code to name my cookies. My API app uses social authentication, and I decided to name both cookies: the one I receive from Facebook/Google/LinkedIn once the user is authenticated, and the one I issue to authenticated users. I call the cookie I get from the social sites social_auth_cookie, and the one I issue is named mysite_cookie. I think this has something to do with the issue I'm having.
The cookie in question cannot legally be sent over an HTTP connection because its secure attribute is set.
For some reason, mysite_cookie has its secure attribute set differently from social_auth_cookie, either because you are setting it in code...
// Secure = true tells the browser to send this cookie over HTTPS only
var cookie = new HttpCookie("mysite_cookie", cookieValue);
cookie.Secure = true;
...or because the service is configured to automatically set it, e.g. with something like this in web.config:
<httpCookies httpOnlyCookies="true" requireSSL="true"/>
The flag could also potentially be set by a network device (e.g. an SSL offloading appliance) in a production environment. But that's not very likely in your dev environment.
I suggest you try the same code base but over an HTTPS connection. If you are working on code that affects authentication mechanisms, you really, really ought to set up your development environment with SSL anyway, or else you are going to miss a lot of bugs, and you won't be able to perform any meaningful pen testing or app scanning for potential threats.
You don't need to worry about cookies if you already have them in your browser.
You can use your browser's cookies by installing the Postman Interceptor extension (to the left of the "In Sync" button).
I have been running into this issue recently with ASP.NET Core 2.0. ASP.NET Core 1.1, however, seems to work just fine, and the cookies are getting set in Postman.
From what you have described, it seems like Postman is not picking up the cookie you want because it doesn't recognize the name of the cookie, or it is still pointing at the old cookie.
Things you can try:
Undo all the name changes and see if it works (just to get to the root of the issue)
Rename one cookie and see if it still works, then proceed with the other.
I hope debugging in this way will take you to the root cause of the issue.

How does CORS (Access-Control-Allow-Origin header) increase security?

I'm doing some work with this right now and I have to say, it makes no sense at all to me! Basically, I have a CDN server which provides CSS, images, etc. for a site. For whatever reason, in order for my browser to stop blocking those resources with a CORS error, I had to have that server (the CDN) add the Access-Control-Allow-Origin header. But as far as I can tell, that does absolutely nothing to increase security. Shouldn't the page I request which references those cross-domain resources be telling the browser it's safe to get stuff from the other domain? If that were a malicious domain, wouldn't it just have Access-Control-Allow-Origin set to * so that sites load its malicious responses (you don't have to answer that, because obviously it would)?
So can someone explain how this mechanism/feature provides security? As far as I can tell, the implementors fucked up and it actually does nothing. The header should be required from the page which references/requests cross-domain resources, rather than from the domain being requested.
To be clear: if I request a page at domain A, it would make sense for the response to include an Access-Control-Allow-Origin header white-listing resources from domain B (Access-Control-Allow-Origin: B.com). However, it makes no sense at all for domain B to effectively white-list itself by providing the header, which is how this is currently implemented. Can anyone clarify what the benefit of this feature is?
If I have a protected resource hosted on site A, but also control sites B, C, and D, I may want to use that resource on all of my sites but still prevent anyone else from using that resource on theirs. So I instruct my site A to send Access-Control-Allow-Origin: B, C, D along with all of its responses (in practice the header carries a single origin per response, so the server echoes back whichever allowed origin made the request). It's up to the web browser itself to honor this and not serve the response to the underlying JavaScript (or whatever initiated the request) if it didn't come from an allowed origin; error handlers will be invoked instead. So it's really not for your security as much as it is an honor-system (all major browsers follow it) access-control method for servers.
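As a sketch of what site A's server might do (a hypothetical servlet filter; the allowed origins are illustrative, and Java 9+/Servlet 4.0+ are assumed):

import java.io.IOException;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Site A echoes the request's Origin back only when it is on the allow-list.
// The browser, not the server, then decides whether to expose the response
// to whatever script initiated the request.
public class CorsFilter implements Filter {
    private static final Set<String> ALLOWED =
            Set.of("https://b.example.com", "https://c.example.com", "https://d.example.com");

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String origin = ((HttpServletRequest) req).getHeader("Origin");
        if (origin != null && ALLOWED.contains(origin)) {
            ((HttpServletResponse) res).setHeader("Access-Control-Allow-Origin", origin);
        }
        chain.doFilter(req, res);
    }
}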
Primarily, Access-Control-Allow-Origin is about protecting data from leaking from one server (let's call it privateHomeServer.com) to another server (let's call it evil.com) via an unsuspecting user's web browser.
Consider this scenario:
You are on your home network browsing the web when you accidentally stumble onto evil.com. This web page contains malicious JavaScript that tries to look for web servers on your local home network and then send their content back to evil.com. It does this by trying to open XMLHttpRequests to all local IP addresses (e.g. 192.168.1.1, 192.168.1.2, ..., 192.168.1.255) until it finds a web server.
If you are using an old web browser that isn't Access-Control-Allow-Origin aware, or you have set Access-Control-Allow-Origin: * on your privateHomeServer, then your browser would happily retrieve the data from your privateHomeServer (which presumably you didn't bother password-protecting, as it was safely behind your home firewall) and hand that data to the malicious JavaScript, which can then send the information on to the evil.com server.
On the other hand, using an Access-Control-Allow-Origin aware browser and the default web configuration on privateHomeServer (i.e. not sending Access-Control-Allow-Origin: *), your web browser would block the malicious JavaScript from seeing any data retrieved from privateHomeServer. So this way you are protected from such attacks unless you go out of your way to change the default configuration on your server.
Regarding the question:
Shouldn't the page I request which references those cross-domain resources be telling the browser it's safe to get stuff from the other domain?
The fact that your page contains code that is attempting to get resources from a particular server is implicitly telling the web browser that you believe the resources are safe to fetch. It wouldn't make sense to need to repeat this again elsewhere.
CORS only makes sense for mashup content providers and nothing more.
Example: you are the provider of an embedded maps mashup service which requires registration. Now you want to make sure that your Ajax mashup map will only work for your registered users on their domains; other domains should be excluded. Only for this reason does CORS make sense.
Another example: someone misuses CORS for a REST service. A clever developer sets up an Ajax proxy, and voilĂ , you can access that service from every domain.
Such an Ajax proxy would make no sense for a mashup; conversely, CORS makes no sense for REST services, because you could bypass the restriction with a simple HTTP client.
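To illustrate that last point: CORS is enforced only by browsers, so any non-browser client simply never sees the restriction. A minimal sketch using Java 11's built-in HTTP client (the URL is hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CorsBypass {
    public static void main(String[] args) throws Exception {
        // No same-origin policy applies here, so any Access-Control-Allow-Origin
        // header on the response is simply ignored.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/data")).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}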

ARR 3.0: Client Affinity Not Working

I have implemented HTTP load balancing using Application Request Routing for my web application. I have one load-balancer server and two application servers, namely SERVER1 and SERVER2. I have configured Client Affinity in my server farm on the load-balancer server.
But the problem is that requests from the same client are sent to different servers. I have confirmed this behavior in the Monitoring and Management section of the server farm.
I am also getting the following error on the client: "Object reference not set to an instance of an object".
This is because when the first request from the client hit SERVER1, it created an object in SERVER1's session. The second request from the same client then tries to access the object created in the session, but the load balancer routes it to SERVER2 instead of SERVER1. As no session exists on SERVER2, the client gets this error.
I understand that the Client Affinity configuration is meant to handle this problem, wherein all subsequent requests from a client go to the same server that served the first request.
But in my case this feature is not working. Any solution will be very helpful.
I found the solution! The application was working fine with Firefox but not with IE and Chrome. ARR uses a cookie to enable Client Affinity; the configured cookie name is used to set the cookie on the client, so the client must accept cookies for client affinity to work properly. The default cookie name is ARRAffinity.
To browse the application I was using the URL servername/appname. The ARRAffinity cookie was not getting created when I browsed the application from IE and Chrome. The cookie got created, and the application worked fine, when I browsed the site using servername.domainname/appname.
Old thread, but this may be useful for someone.
It seems to be an IE issue, or "expected behavior when using Internet Explorer": if a site name does not contain at least one '.', then the ARR Client Affinity cookie is not sent back to ARR, and therefore ARR generates a new one.
So a valid workaround is any alias including a '.' (dot), as Nagendra mentioned: servername.domainname.
http://forums.iis.net/t/1178295.aspx?ARR+2+5+Client+Affinity+Not+Working

Can you access the web server logs from an ASP.NET web application?

Is there a way to access referrer information from the server log in an ASP.NET web application?
I would like to know if a customer comes to my web app from a specific site and change the app's behavior accordingly. I could have the webmaster of the other site include a query string, but to my knowledge this wouldn't work, because as soon as Tom, Dick or Harry posted the link somewhere else, the query string would be unreliable.
Is there a surefire way for a web app to know where the user came from?
Why not just check the Request.UrlReferrer property and change the behavior if the referrer is not a page on your site?
This would be a lot simpler than referencing the IIS logs.
You can access the referrer information through the HttpRequest.UrlReferrer property.
However, you should note:
It can be null, so check for null before calling AbsoluteUri on it.
It can be changed fairly easily, so you can't rely on it completely (see the sketch below).
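The same caveats apply to the raw Referer header in any server stack; a minimal sketch in servlet terms, purely for illustration (this thread is ASP.NET, and the partner-site URL is hypothetical):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReferrerAwareServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // The header is supplied by the client: it may be absent or spoofed,
        // so always null-check it and never trust it for security decisions.
        String referrer = req.getHeader("Referer"); // note the historical misspelling
        if (referrer != null && referrer.startsWith("https://partner.example.com/")) {
            resp.getWriter().println("Welcome, partner-site visitor!");
        } else {
            resp.getWriter().println("Welcome!");
        }
    }
}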
Why would you not just read the HTTP_REFERER request header instead of the log file? Note that you are never guaranteed to receive this information, nor is it reliable if you do.
Request.UrlReferrer.AbsoluteUri
gives you the same information as the server logs will. A combination of a query-string variable and UrlReferrer will probably do the best job of ensuring that the request came from the right source.
UrlReferrer is sent by the client, and it's not guaranteed to be there.
Are you using a shared environment? Normally they will supply this if you request the logs (it's usually an option in Plesk or similar). The log directory will probably be one or two folders up from the HTTP root folder, so it may not be accessible using the IIS user.
On a dedicated server then you can obviously configure this manually.
