This is yet another question about Set-Cookie on localhost. I am facing the same problem as many others here when it comes to the use of cookies on localhost.
This is my setup:
I am running a reactjs app locally on a URL like "https://app.web.product". My hosts file points all requests from app.web.product to 127.0.0.1.
My REST service is hosted on http://127.0.0.1:8000 (using AWS chalice). Each response returns the header "Access-Control-Allow-Origin: https://app.web.product" to ensure that the requests go through from my web app.
The REST service also returns the header "Set-Cookie: name=value; domain=app.web.product"; however, the cookie never gets persisted. I tried all browsers. In Edge/IE I can at least see in the response headers that the cookie is being recognized. In Chrome the Set-Cookie response header is not even displayed.
I've also tried to run my REST service on https, on the same domain name as the web app but a different port; for some reason AWS chalice does not let me run https properly. I don't think this would solve the issue anyway, so I stopped investigating further.
Any ideas?
So basically, the problem was that Chrome never displayed the cookie in the developer tools, perhaps because the cookie belonged to the server address (127.0.0.1) and not to the domain where my reactjs app was running (app.web.product).
Nevertheless, when I clicked on the info icon on the left-hand side of the address bar next to the URL, I did see the cookie! The only remaining thing I had to do was set the path in the cookie to "/", and that was it.
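For reference, a header along these lines is what ended up working (name and value illustrative):

Set-Cookie: name=value; Domain=app.web.product; Path=/

Without an explicit Path, the cookie defaults to the directory of the request URL that set it, so it is not sent along with requests for pages under "/".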
We are using AWS Amplify for our NextJS web app and keep receiving an error whenever I try to load the application once deployed to Amplify. Locally there is no issue.
I am using Amplify's default Auth configuration, with basic email and password auth. It looks like it could be related to the Amplify cookie being set in the header but I cannot find any documentation within AWS to prevent this or reduce the amount of information passed with the header. Any help would be appreciated.
I have faced the same issue and was able to solve it. Here's how:
Identify the CloudFront Distribution ID for your app. You can find it in the Deploy logs of your app build console.
Search & open that particular CF Distribution and go to the Behaviours tab.
Select the Default behaviour (5th one in my case) and hit Edit.
Scroll down to the Cache key and origin requests section.
Here you will find settings to control what's included in the headers of the request that goes to the server. In my case, I didn't need any Cookies so I chose None, and it solved the issue for me.
In your case, you can do the same, or choose exactly which information needs to be included in the headers.
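If you prefer to inspect this outside the console, something like the following AWS CLI call (the distribution ID is a placeholder) shows what the default behaviour currently forwards:

# Placeholder ID; take the real one from your Amplify deploy logs
aws cloudfront get-distribution-config --id E1234EXAMPLE \
    --query 'DistributionConfig.DefaultCacheBehavior'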
Check to see if there are any unnecessary cookies for that domain.
I was getting this error (on a site I don't own). I took a look at the request headers and found a very large number of cookies (several dozen) for the site's domain. I cleaned up the cookies which seemed non-critical and the error went away.
As the error implies, the size of the entire request header section is above 8192 bytes. Request headers include the accept headers, the user agent, the cookies, etc. and all combined can get rather large. Large headers look malicious to some WAFs. I once had a single user having trouble with our site. Turns out they were a polyglot and had configured their browser to accept several dozen languages causing their accept-language header to be suspiciously long, and the WAF refused to proxy the request.
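If you want a rough sense of how much of that 8192-byte budget cookies alone are consuming, a quick check in the browser console helps (note that this only sees non-HttpOnly cookies, so the real Cookie header can be even larger):

// Length in characters of all script-visible cookies for this origin.
// HttpOnly cookies are excluded, so treat this as a lower bound.
console.log(document.cookie.length);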
I faced the same issue using Nextjs, Amplify, and an external Auth provider.
The problem is that the AWS S3 service has a maximum allowed request header size of 8192 bytes, so whenever you try to access the statically generated pages of Nextjs it returns that error. This has already been asked here
In my case, I was using an external Auth provider and was able to solve the issue by configuring the cookies only for the '/api/' path. That way the Auth cookies are sent only to the Nextjs api endpoints, so your request header is lighter whenever you fetch the static pages.
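As a minimal sketch of that idea in a Next.js API route (the cookie name, value, and endpoint are placeholders, not the Auth provider's actual names):

// pages/api/login.js - illustrative only
export default function handler(req, res) {
  // Scope the auth cookie to /api so it is not sent with requests
  // for statically generated pages, keeping those request headers small.
  res.setHeader(
    'Set-Cookie',
    'session=token123; Path=/api; HttpOnly; Secure; SameSite=Lax'
  );
  res.status(200).json({ ok: true });
}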
I have two Spring Web applications that work together. I'm running the first application from the IDE on localhost, while the second one is running in docker on app.127.0.0.1.nip.io.
The two applications interact indirectly through the user's browser by redirecting and POSTing between the two apps. This is slightly similar to how an SP and an IdP work together in SAML2.
In my case, the first application on localhost sends a 302 to the second application. After doing some work, the second application sends back an HTML page with a form and JS code to auto-submit it to my first application on localhost. The HTML looks similar to this:
<form method="POST" action="http://localhost:8080/some/path">
...
</form>
My first application is using Spring Session with a session cookie, and this works just fine. However, when the second application makes the browser POST the form, the browser does not send the session cookie with the POST request.
When both applications are running in docker under .127.0.0.1.nip.io, the cookie is sent.
I've tried to find any hint as to whether this behaviour is expected, and what headers or other bits the applications could use to influence it.
At this point, this is mostly an annoyance while debugging, but I'm concerned that once the two applications run on different FQDNs and/or different domains, browsers will also block the cookie from being sent.
I've tested this with current versions of Chrome and Firefox.
The problem is the new(ish) SameSite cookie policy, which covers exactly this case: another application POSTing to a host over HTTP. The default is now SameSite: lax, which does not allow the first-party cookie values to be sent on such a request.
The solution is to allow the session cookie to be sent by specifying SameSite: none. Be aware however that this might create security vulnerabilities. For my application, this is not an issue, so I can allow the cookie to always be sent, and especially when I run my application in the debugger.
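Note that browsers only accept SameSite=None on cookies that are also marked Secure (Chrome in particular rejects SameSite=None without Secure), so the header needs to look roughly like this (name and value illustrative):

Set-Cookie: SESSION=abc123; Path=/; Secure; SameSite=None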
For the production deployment, I will be able to tighten this, since both applications will run under the same domain (a.example.com and b.example.com), and both will use TLS, so I can set the session cookie to SameSite: lax.
Here's a decent explanation: https://web.dev/samesite-cookies-explained/
Let me start by saying that I have thoroughly looked over all the information on stackoverflow and the net regarding this issue, and I have tried several different things to try to get this working.
I am using the package "cookie-session". When I set secure to true, my cookies are returned with the response (I can see them when making a login request with Postman), but the session does not seem to work through the browser. When secure is set to false, everything works as expected.
Let me explain my setup, I have two servers:
First Server
Server-side rendered React app running on Node.js
Running Nginx server
HTTPS is set up here: "www.example.com"
Proxying any requests made to "www.example.com/api" to second server
Second Server
Node/express app handling API requests
Running Nginx server
From my understanding, a secure cookie can only be sent if the request is made over HTTPS, which I believe it is (see setup above).
On the second server I have tried using the trust proxy, still no luck:
app.set('trust proxy', 1); // trust the first proxy (Nginx) so req.protocol reflects X-Forwarded-Proto

app.use(cookieSession({
  maxAge: 7 * 24 * 60 * 60 * 1000, // one week, in milliseconds
  keys: [env.SESSION.COOKIE_KEY],  // key used to sign the session cookie
  secure: true                     // only set/send the cookie over HTTPS
}));
I also figured this might have something to do with the headers Nginx sends, so I have tried many different headers on both servers, e.g. "proxy_set_header X-Forwarded-Proto $scheme", still no luck.
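For reference, the /api proxy block on the first server looks roughly like this (the upstream address and port are placeholders):

# On the first server, inside the HTTPS server block for www.example.com
location /api {
    proxy_pass http://second-server:3000;          # placeholder upstream address/port
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;    # tells express the original scheme was https
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

As I understand it, with 'trust proxy' enabled express derives req.protocol from X-Forwarded-Proto, which is what cookie-session relies on before it will set a secure cookie.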
For the life of me I am not sure what to do from here to get secure cookies working.
Also, to mention again: everything works fine with secure set to false. When secure is set to true, I can make a request to my login through Postman and receive my cookies in the response, but it appears the session is not applied on the client.
Could this have anything to do with not having an HTTPS cert installed on the second server? If so, how would I add one anyway, given that both servers run on the same domain "www.example.com", with requests to "www.example.com/api" proxied to the second server? Thanks for your help.
I need some help in resolving a strange behavior I came across while using Thinktecture's Embedded STS locally in my ASP.NET MVC application. I don't see this issue on the server using ADFS.
The issue is
After I sign in to the application, most of the HTTP calls from then on are made twice.
The first HTTP request goes out without the FedAuth cookie, to which the server responds with a status code of 302 (redirect), and another request to the same URL is made, this time with the FedAuth cookie. I'm trying to understand what is causing the browser to send the first request without the FedAuth cookie, and also why the server redirects to the same URL.
I also need help understanding how the EmbeddedSTS URL gets resolved; I went through the code on GitHub but it is not very clear to me.
Any help is appreciated.
I was able to figure out the issue on my own.
This issue is related to cookie paths being case-sensitive. My virtual directory on localhost was configured as ATSWeb, but while making AJAX calls I was constructing the full URL with a different case for the virtual directory (atsweb).
Since the ADFS cookie was set with the path /ATSWeb, the browser was not sending the FedAuth cookie to the server on those AJAX calls. This was leading to all sorts of issues.
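In other words, the mismatch looked like this (values illustrative):

Set-Cookie: FedAuth=abc123; path=/ATSWeb     <- cookie set for /ATSWeb
GET /atsweb/SomeController/SomeAction        <- AJAX call; /atsweb does not match the case-sensitive path /ATSWeb, so no FedAuth cookie is sent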
You can read more about cookie paths at the links below.
http://www.allbacktomine.com/blog/2009/02/04/BrowserCookiesThePathIsCaseSensitive.aspx
Why are cookie paths case sensitive?
I read about "HTTP persistent connection" but somehow I don't seem to understand what "persistent" means in this context.
Could you elaborate?
It means the server doesn't close the socket once it has finished pushing out the response, so the client can make further requests on the same socket. Since the connection stays open, the length of the response has to be indicated some other way, via a Content-Length header or chunked transfer encoding. A web page often requests several other pieces (images, CSS, scripts, ...) from the same server as the page itself, so reusing the socket for those further requests reduces overall latency compared to closing the original socket and opening new ones for every follow-on request.
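A typical HTTP/1.1 exchange makes this visible; the response declares its own length so the client knows where it ends, and the same connection is then reused (in HTTP/1.1, connections are persistent by default even without the keep-alive header):

GET /index.html HTTP/1.1
Host: www.example.com
Connection: keep-alive

HTTP/1.1 200 OK
Content-Length: 1234
Connection: keep-alive

...1234 bytes of body, then the next request can go out on the same socket...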
All the discussion so far has been from the browser's side of things. The browser first requests the actual page, parses it, and finds all the other resources it needs before it can render that page. The browser then requests these resources and other dependent resources one by one, so maintaining a persistent connection is very efficient here: the overhead of creating and destroying connections is avoided.
Now, from the web server's side of things, a persistent connection would be one that allows it to "push" content to the web browser. HTTP doesn't support this, so there are a few workarounds in JavaScript where the page is basically refreshed after a while.
You can see this trick being used by many web-based email providers, which continuously keep checking in the background for new mail. This gives the feeling that when a new mail arrives, the server "pushes" the new mail notification to the web browser, but in fact it is the web browser that keeps checking the server for new mail.
Another point worth stating is that we don't actually see any page refresh; that's because of another trick which allows only specific parts of the page to be refreshed by the request. (HINT: AJAX)
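A minimal sketch of that polling pattern in browser JavaScript (the /api/unread endpoint and the element id are made up for illustration):

// Ask the server every 30 seconds whether there is new mail and
// update just one element of the page instead of reloading it.
setInterval(async () => {
  const res = await fetch('/api/unread');   // hypothetical endpoint
  const { count } = await res.json();
  document.getElementById('unread-count').textContent = count;
}, 30000);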
I think this is an http/https switching issue for the website. If your site used to be served over https:// and your .htaccess now serves it over http (or vice versa), this problem can be created by the Yoast plugin's crawl of the page. Don't worry, it is not an important error by itself. However, it is a way for hackers to attack your website: if your SSL configuration is left empty while the site is served over plain http, they may be able to attach their own page or domain to your https address.
E.g. your site is http://www.example.com, but when you browse https://www.example.com, some other content opens under your site's domain.
The solution is to always use the full address for your website, and to protect it against hackers, use SSL and serve it over https:// pages.
Then this problem should never be seen on any test of the site or page.