Let me start by saying that I have thoroughly searched Stack Overflow and the web regarding this issue, and I have tried several different approaches to get this working.
I am using the package "cookie-session". When secure is set to false, everything works as expected. But when I set secure to true, the cookies are returned with the response (I can see them when making a login request with Postman), yet the session does not seem to work in the browser.
Let me explain my setup. I have two servers:
First Server
- Server-side rendered React app running on Node.js
- Running an Nginx server
- HTTPS is set up here ("www.example.com")
- Proxies any requests made to "www.example.com/api" to the second server

Second Server
- Node/Express app handling API requests
- Running an Nginx server
From my understanding, a secure cookie can only be sent if the request is made over HTTPS, which I believe is the case here (see the setup above).
On the second server I have tried enabling trust proxy, still with no luck:
const cookieSession = require('cookie-session');

// Trust the first proxy (Nginx) so req.secure reflects X-Forwarded-Proto.
app.set('trust proxy', 1);

app.use(cookieSession({
  maxAge: 7 * 24 * 60 * 60 * 1000, // one week
  keys: [env.SESSION.COOKIE_KEY],
  secure: true // only send the cookie over HTTPS
}));
I also figured this might have something to do with the headers Nginx sends, so I have tried many different headers on both servers, e.g. proxy_set_header X-Forwarded-Proto $scheme;, still with no luck.
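For illustration, a minimal sketch of what such a proxy block on the first server might look like (the upstream address and port are placeholders, not taken from my actual config):

location /api {
    # Upstream address and port are placeholders for illustration.
    proxy_pass http://second-server:3000;
    proxy_set_header Host $host;
    # Tell Express the original request arrived over HTTPS, so that with
    # 'trust proxy' enabled, req.secure is true and the secure cookie is set.
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}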
For the life of me I am not sure what to do from here to get secure cookies working.
To reiterate: everything works fine with secure set to false. When secure is set to true, I can make a request to my login endpoint through Postman and receive my cookies in the response, but it appears the session is not applied on the client.
Could this have anything to do with not having an HTTPS cert installed on the second server? If so, how would I add one anyway, given that both servers run on the same domain "www.example.com", with /api requests proxied to "www.example.com/api"? Thanks for your help.
Related
When trying to run a request through Swagger UI, I receive the following response in Swagger:
TypeError: Failed to fetch
After searching around, I found that a possible cause of this error is a CORS issue, where the origin is changed in the request (as you can see in this other post here). However, in my case the API is not running behind some other proxy; it is hosted on a local server, and that server is not changing any of the headers. I confirmed this by configuring the API to accept any CORS headers to test whether this was the issue; sadly it was not, and the issue persisted.
The API is running on IIS on a locally hosted server. It runs as an application under the default website and is accessed via the following URL:
http://servername/application-name/swagger/index.html
Can anyone help with this issue?
After further investigation, looking at the requests sent to the server in the browser dev tools, I found that the URL was being changed from HTTP to HTTPS when Swagger called the endpoint.
HTTPS has not been set up on the server, and the HTTPS request returns a 404 (as seen in the dev tools).
It turns out that even though the server has not been set up to serve content over HTTPS, requests were still redirected to HTTPS, and the reason was this line:
app.UseHttpsRedirection();
So, even though Swagger itself could be loaded over HTTP, when the request was made to the API, the API responded with a 307 redirect to HTTPS, which in turn returned a 404. This 404 response was the cause of the TypeError: Failed to fetch.
The recommended fix is either to turn off HTTPS redirection (FOR TESTING PURPOSES ONLY), or to set up the server to serve content correctly over HTTPS, so that when a call is made it is not redirected but goes straight to the correct API address over HTTPS, which should then return the data correctly, since the server can serve HTTPS content.
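As a minimal sketch of the first option, assuming a standard ASP.NET Core Startup.Configure with an IWebHostEnvironment parameter, the redirection can be gated so it only applies outside Development:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Only redirect to HTTPS outside Development, so local HTTP calls
    // from Swagger UI are not answered with a 307 to an unserved HTTPS port.
    if (!env.IsDevelopment())
    {
        app.UseHttpsRedirection();
    }

    // ...rest of the pipeline (Swagger, routing, etc.)
}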
I am using Sustainsys.Saml2 for authentication in my environment. It has worked well until I added a proxy into the loop.
The data flow is:
1) User navigates to the site via the proxy server (example.mysite.com)
2) The proxy forwards to the internal application (example.internal.mysite.com)
3) SAML does its thing and forwards to the service for the authenticate-and-redirect step
4) Weird part: the SAML response is sent back to the original host, hitting Saml2/Acs (example.mysite.com/Saml2/Acs), which responds with a 303. The assumption is that it should 303 to example.mysite.com, but instead it goes to the internal host name, example.internal.mysite.com
Why is it doing that? It doesn't seem to be respecting the ReturnUrl (which is example.mysite.com). I see no evidence of the internal URL in the requests/responses during the auth process until step 4.
The Sustainsys.Saml2 library builds various URLs from what it sees in the incoming HTTP Request. When a proxy is involved, that might not be the same URL as the client sees.
There's a setting, PublicOrigin, that you can set to handle this; it overrides any host found in the request.
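For the HttpModule/Owin hosting models, a sketch might look like the following (the variable holding the Saml2 options depends on how you configure it; the URL is just the example host from the question):

// Hedged sketch: force URL generation to use the public-facing origin
// rather than the internal host name the proxy forwards to.
options.SPOptions.PublicOrigin = new Uri("https://example.mysite.com/");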
However, the AspNetCore2 handler assumes this has already been fixed on the Request object before the handler is invoked. This is usually done automatically by the hosting environment if hosting in Kestrel behind IIS or similar.
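If it isn't done automatically, a sketch of that fix, assuming the proxy sends standard X-Forwarded-* headers, is the Forwarded Headers middleware placed early in the pipeline:

using Microsoft.AspNetCore.HttpOverrides;

// Hedged sketch: rewrite Request.Scheme and Request.Host from the proxy's
// X-Forwarded-Proto / X-Forwarded-Host headers before the Saml2 handler
// builds any URLs. If the proxy is not on loopback, you may also need to
// configure KnownProxies or KnownNetworks on these options.
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedProto | ForwardedHeaders.XForwardedHost
});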
This is yet another question about set-cookie on localhost. I am facing the same problem as many others here when it comes to the usage of cookies on localhost.
This is my setup:
I am running a React app locally on a URL like "https://app.web.product". My hosts file points all requests from app.web.product to 127.0.0.1.
My REST service is hosted on http://127.0.0.1:8000 (using AWS Chalice). Each response returns the header "Access-Control-Allow-Origin: https://app.web.product" to ensure that requests from my web app go through.
The REST service also returns the header "Set-Cookie: name=value; domain=app.web.product"; however, the cookie never gets persisted. I tried all browsers. In Edge/IE I can at least see in the response headers that the cookie is being recognized. In Chrome the Set-Cookie response header is not even displayed.
I've tried to run my REST service over HTTPS and on the same domain name as the web app, just on a different port. However, for some reason AWS Chalice does not let me run HTTPS properly. I don't think this would solve the issue anyway, so I stopped investigating further.
Any ideas?
So basically, the problem was that Chrome never displayed the cookie in the developer tools, maybe because the cookie belonged to the server address (127.0.0.1) and not to the domain where my React app was running (app.web.product).
Nevertheless, when I clicked on the info icon on the left-hand side of the address bar, next to the URL, I did see the cookie! The only remaining thing I had to do was set the path in the cookie to "/", and that was it.
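So the working header presumably ended up looking something like this (name and value are placeholders):

Set-Cookie: name=value; Domain=app.web.product; Path=/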
I have a cookie which is sent from the client and used as part of my MVC web service; however, now that I have integrated a hub into this application, the hub doesn't get sent the cookie, whereas the MVC app does.
After reading other similar questions (not that there are many), the cookie's domain seems to be to blame, or the path is not set.
Currently my system has two web apps, the UI and the service. In my dev environment it looks like this:
Service
http://localhost:23456/<some route>
UI
http://localhost:34567/<some route>
So in the above example, the UI sends a query to the service and gets an authorisation cookie on the response, which is used elsewhere.
In this example the cookie domain from the service is localhost. From what I have read and seen in other questions, there is no need to include a port; cookies automatically apply to all ports on the domain.
Are HTTP cookies port specific?
SignalR connection request does not send cookies
So it would appear to me that the cookie above has the correct domain, and the path is set to /, so it should work. However, the cookies are not sent in the request from JavaScript.
My request is a CORS request, so I am not sure if there are any quirks around that, but all normal jQuery AJAX calls make it to the server fine with the cookies. Any ideas?
Oh, also my cookies are HttpOnly, not sure if this makes a difference...
== Edit ==
I have tried to rule out some things: I turned off HttpOnly and it still refuses to send the cookies to the server. I have also noticed a few outstanding cookie issues which mention adding the following code to make AJAX send credentials:
$.ajax({
    xhrFields: { withCredentials: true }
});
Tried using that and still no luck, so I am out of ideas.
I raised an issue, as there is an underlying problem in SignalR versions earlier than the 2 beta relating to CORS and cookies.
https://github.com/SignalR/SignalR/issues/2318
However, you can manually fix this issue by appending:
xhrFields: { withCredentials: true }
to all AJAX requests within jquery.signalr-*.js; this will then send cookies over CORS, although I do not know whether this has any adverse effects on older browsers or IE.
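As a sketch of an alternative that avoids patching the vendored file, jQuery's global $.ajaxSetup can apply the flag to every $.ajax call, including SignalR's transport requests (this assumes the generated hubs proxy script is loaded). Note that the server must also respond with Access-Control-Allow-Credentials: true and a non-wildcard Access-Control-Allow-Origin for the browser to accept cookies over CORS:

// Hedged alternative: set withCredentials globally before starting the
// hub connection, instead of editing jquery.signalr-*.js directly.
// Note that this affects every $.ajax call on the page.
$.ajaxSetup({
    xhrFields: { withCredentials: true }
});

$.connection.hub.start().done(function () {
    console.log('Connected; cookies sent over CORS.');
});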
I'm developing your standard high-traffic e-commerce website and want to set up caching with Varnish. The particular thing about this setup is that the application returns different content depending on the user's location.
So my plans are these:
- Set up Nginx with the GeoIP module, so I can get an X-Country: XX header on all the requests going to the app backends (see the sketch after this list).
- Configure the Rails application to always return a "Vary: X-Country" response header.
- Put the Varnish server between Nginx and the app backends, so it can cache multiple versions of the objects served by Rails and serve them based on the request headers set by Nginx (not by the client browser).
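A minimal sketch of the first step, assuming the legacy ngx_http_geoip_module with a country database at a hypothetical path (the upstream name and ports are placeholders):

http {
    # Load the MaxMind country database (path is an assumption).
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    upstream rails_backend {
        server 127.0.0.1:3000;
    }

    server {
        listen 80;
        location / {
            # Pass the looked-up two-letter country code to the backend.
            proxy_set_header X-Country $geoip_country_code;
            proxy_pass http://rails_backend;
        }
    }
}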
Does anyone have experience with a setup like this? Anything I should be aware of?
If GeoIP lookup is slow, and/or you want to enable people to override the country setting, you could use a country cookie and have the front-end Varnish check for it.
If there is no country cookie, forward the request to your Nginx back-end for the GeoIP lookup. Nginx serves a redirect with a Set-Cookie: country=us header. If you want to avoid redirects and support cookie-refusing clients/robots, Nginx can forward the request to Rails and still try to set the country cookie in the response. Or Varnish can capture the redirect response, do a "restart" with the newly set cookie, and go to the back-end.
If you already have a country cookie, use it in your Varnish hash, as sketched below.
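A sketch of what that might look like, assuming Varnish 4+ VCL syntax and the country cookie described above:

# Hedged sketch: add the country cookie's value to the cache key so
# Varnish stores and serves one object variant per country.
sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    }
    if (req.http.Cookie ~ "country=") {
        hash_data(regsub(req.http.Cookie, ".*country=([^;]*).*", "\1"));
    }
    return (lookup);
}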
If Rails can do the GeoIP resolving itself, you don't need Nginx, except where you use it to serve files...