HTTP caching: why is the browser not checking the server at all before presuming the cached file is current?

This is about some code I inherited; the intent is clear, but (at least in Firefox and Chrome) it is not behaving as intended.
The idea is clearly to build a PNG based on client-side data and to cache it unless and until that data changes. The intent presumably is that the state of the PNG is preserved regardless of whether or not the client is using cookies, local storage, etc., but at the same time the server does not preserve data about this client.
Client-side JavaScript:
function read_or_write_png(name, value) {
    // WRITE if value is defined, non-null, etc., get otherwise
    if (value) {
        // WRITE
        // Use cookie to convey new data to server
        document.cookie = 'bx_png=' + value + '; path=/';
        // bx_png.php generates the image
        // based off of the http cookie and returns it cached
        var img = new Image();
        img.style.visibility = 'hidden';
        img.style.position = 'absolute';
        img.src = 'bx_png.php?name=' + name; // the magic saying "load this".
                                             // 'name' is not consulted server-side,
                                             // it's here just to get uniqueness
                                             // for what is cached.
    } else {
        // READ
        // Kill cookie so server should send a 304 header
        document.cookie = 'bx_png=; expires=Mon, 20 Sep 2010 00:00:00 UTC; path=/';
        // load the cached .png
        var img = new Image();
        img.style.visibility = 'hidden';
        img.style.position = 'absolute';
        img.src = 'bx_png.php?name=' + name;
    }
}
Server-side PHP in bx_png.php:
if (!array_key_exists('bx_png', $_COOKIE) || !isset($_COOKIE['bx_png'])) {
    // we don't have a cookie. Client-side code does this on purpose. Force cache.
    header("HTTP/1.1 304 Not Modified");
} else {
    header('Content-Type: image/png');
    header('Last-Modified: Wed, 30 Jun 2010 21:36:48 GMT');
    header('Expires: Tue, 31 Dec 2030 23:30:45 GMT');
    header('Cache-Control: private, max-age=630720000');
    // followed by the content of the PNG
}
This works fine to write the PNG the first time and cache it, but clearly the intention is to be able to call this again, pass a different value for the same name, and have that cached. In practice, once the PNG has been cached, it would appear (via Fiddler) that the server is not called at all. That is, on an attempted read, rather than go to the server and get a 304 back, the browser just takes the content from the cache without ever talking to the server. In and of itself, that part is harmless, but of course what is harmful is that the same thing happens on an attempted write, and the server never has a chance to send back a distinct PNG based on the new value.
Does anyone have any idea how to tweak this to fulfill its apparent intention? Maybe something a bit different in the headers? Maybe some way of clearing the cache from client-side? Maybe something else entirely that I haven't thought of? I'm a very solid developer in terms of both server-side and client-side, but less experienced with trickiness like this around the HTTP protocol as such.

You need to add must-revalidate to your Cache-Control header to tell the browser to check back with the server before reusing its cached copy.
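For reference, a minimal sketch of how the header block in bx_png.php might look (keeping the original Last-Modified value); note that must-revalidate only takes effect once the cached copy is stale, so pairing it with max-age=0 is what forces the browser to check back on every use and gives the 304 path a chance to run:
// Hypothetical replacement for the cached branch in bx_png.php
header('Content-Type: image/png');
header('Last-Modified: Wed, 30 Jun 2010 21:36:48 GMT');
// max-age=0 makes the copy immediately stale; must-revalidate forbids
// reusing it without asking the server first
header('Cache-Control: private, max-age=0, must-revalidate');
// followed by the content of the PNG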

Try Cache-Control: no-store, as it fixed this exact problem for me in Safari/WebKit. (I think Chrome fixed it in the time since your question.)
It's still an open WebKit bug but they added a fix for this header.
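In bx_png.php that would be a one-line change to the header block, something along the lines of:
// no-store: don't keep a cached copy at all, so every read and write goes to the server
header('Cache-Control: no-store');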

Related

'Access-Control-Allow-Origin' missing using actix-web

I'm stuck on a problem where I receive this error every time I make a POST request to my actix-web server:
CORS header 'Access-Control-Allow-Origin' missing
My JavaScript (Vue.js running on localhost:3000):
let data = //some json data
let xhr = new XMLHttpRequest();
xhr.open("POST", "http://localhost:8080/abc");
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onload = () => {
    console.log(xhr.responseText);
};
xhr.send(JSON.stringify(data));
My actix-web server (running on localhost:8080):
#[actix_web::main]
async fn main() {
    HttpServer::new(move || {
        let cors = Cors::default()
            .allowed_origin("http://localhost:3000/")
            .allowed_methods(vec!["GET", "POST"])
            .allowed_header(actix_web::http::header::ACCEPT)
            .allowed_header(actix_web::http::header::CONTENT_TYPE)
            .max_age(3600);
        App::new()
            .wrap(cors)
            .service(myfunc)
    })
    .bind(("0.0.0.0", 8080))
    .unwrap()
    .run()
    .await
    .unwrap();
}
My Cargo.toml dependencies:
[dependencies]
actix-web = "4"
actix-cors = "0.6.1"
...
Got any idea?
Okay, so I've done some testing. If you're writing a public API, you probably want to allow all origins. For that you may use the following code:
HttpServer::new(|| {
    let cors = Cors::default().allow_any_origin().send_wildcard();
    App::new().wrap(cors).service(greet)
})
If you're not writing a public API... well, I'm not sure what they want you to do. I've not figured out how to tell the library to send that header. I guess I will look at the code.
UPDATE:
So funny story, this is how you allow specific origins:
let cors = Cors::default()
    .allowed_origin("localhost:3000")
    .allowed_origin("localhost:2020");
BUT, and oh boy, is that but juicy. The Access-Control-Allow-Origin response header is only set when there is an Origin request header. That header is normally added by the browser in certain cases. So I set it myself (using the developer tools in the browser). What did I get? "Origin is not allowed to make this request". I had set my Origin header to localhost:3000. Turns out, the actix library simply discards that header if no protocol (e.g. http://) was provided (I assume it discards it if it deems its format invalid). That internally results in the header being the string "null". Which is, checks notes, not in the list of allowed origins.
And now the grand finale:
Your Origin header needs to be set (by either you or the browser) to: "http://localhost:3000".
Your configuration needs to include: .allowed_origin("http://localhost:3000").
After doing that, the server will happily echo back your origin header in the Access-Control-Allow-Origin header. And it will only send that one.
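Putting that together, a sketch of the configuration from the question with only the origin corrected (scheme included, no trailing slash):
let cors = Cors::default()
    .allowed_origin("http://localhost:3000")
    .allowed_methods(vec!["GET", "POST"])
    .allowed_header(actix_web::http::header::ACCEPT)
    .allowed_header(actix_web::http::header::CONTENT_TYPE)
    .max_age(3600);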
I've no idea if any of that is what the standard specifies (or not). I encourage you to read through it, and if it doesn't comply, please open an issue on GitHub. I would do it myself, but I'm done with programming for today.
Cheers!

Web API as a Proxy and Chunked Transfer Encoding

I have been playing around with using Web API (Web Host) as a proxy server and have run into an issue with how my Web API proxy handles responses with the "Transfer-Encoding: chunked" header.
When bypassing the proxy, the remote resource sends the following response headers:
Cache-Control:no-cache
Content-Encoding:gzip
Content-Type:text/html
Date:Fri, 24 May 2013 12:42:27 GMT
Expires:-1
Pragma:no-cache
Server:Microsoft-IIS/8.0
Transfer-Encoding:chunked
Vary:Accept-Encoding
X-AspNet-Version:4.0.30319
X-Powered-By:ASP.NET
When going through my Web API based proxy, my request hangs unless I explicitly reset the TransferEncodingChunked property on the response header to false:
response.Headers.TransferEncodingChunked = false;
I admit, I don't fully understand what impact setting the TransferEncodingChunked property has, but it seems strange to me that in order to make the proxy work as expected, I need to set this property to false when clearly the incoming response has a "Transfer-Encoding: chunked" header. I am also concerned about side effects of explicitly setting this property. Can anyone help me understand what is going on and why setting this property is required?
UPDATE: So I did a little more digging into the difference in the response when going through the proxy vs. not. Whether or not I explicitly set the TransferEncodingChunked property to false, the response headers coming through the proxy are exactly the same as when not going through the proxy. However, the response content is different. Here are a few samples (I turned off gzip encoding):
// With TransferEncodingChunked = false
2d\r\n
This was sent with transfer-encoding: chunked\r\n
0\r\n
// Without explicitly setting TransferEncodingChunked
This was sent with transfer-encoding: chunked
Clearly, the content sent with TransferEncodingChunked set to false is in fact transfer encoded. This is actually the correct response, as it is what was received from the requested resource behind the proxy. What continues to be strange is the second scenario, in which I don't explicitly set TransferEncodingChunked on the response (but it is in the response header received from the proxied service). Clearly, in this case, the response is NOT in fact transfer encoded by IIS, in spite of the fact that the actual response is. Strange... this is starting to feel like designed behavior (in which case, I'd love to know how/why) or a bug in IIS, ASP.NET, or Web API.
Here is a simplified version of the code I am running:
Proxy Web API application:
// WebApiConfig.cs
config.Routes.MapHttpRoute(
    name: "Proxy",
    routeTemplate: "{*path}",
    handler: HttpClientFactory.CreatePipeline(
        innerHandler: new HttpClientHandler(), // Routes the request to an external resource
        handlers: new DelegatingHandler[] { new ProxyHandler() }
    ),
    defaults: new { path = RouteParameter.Optional },
    constraints: null
);
// ProxyHandler.cs
public class ProxyHandler : DelegatingHandler
{
    protected override async System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        // Route the request to my web application
        var uri = new Uri("http://localhost:49591" + request.RequestUri.PathAndQuery);
        request.RequestUri = uri;

        // For GET requests, somewhere upstream, Web API creates an empty stream for the request.Content property.
        // HttpClientHandler doesn't like this for GET requests, so set it back to null before sending along the request.
        if (request.Method == HttpMethod.Get)
        {
            request.Content = null;
        }

        var response = await base.SendAsync(request, cancellationToken);

        // If I comment this out, any response that already has the Transfer-Encoding: chunked header will hang in the browser
        response.Headers.TransferEncodingChunked = false;

        return response;
    }
}
And my web application controller which creates a "chunked" response (also Web API):
public class ChunkedController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK);
        var content = "This was sent with transfer-encoding: chunked";
        var bytes = System.Text.Encoding.ASCII.GetBytes(content);
        var stream = new MemoryStream(bytes);
        response.Content = new ChunkedStreamContent(stream);
        return response;
    }
}

public class ChunkedStreamContent : StreamContent
{
    public ChunkedStreamContent(Stream stream)
        : base(stream) { }

    protected override bool TryComputeLength(out long length)
    {
        // Reporting "length unknown" prevents a Content-Length header,
        // which is what causes the response to be sent chunked.
        length = 0L;
        return false;
    }
}
From an HttpClient standpoint, content chunking is essentially a detail of the transport. The content provided by response.Content is always de-chunked for you by HttpClient.
It looks like there's a bug in Web API in that it doesn't correctly (re-)chunk content when requested via the response.Headers.TransferEncodingChunked property when running on IIS. So the problem is that the proxy is telling the client, via the headers, that the content is chunked when in fact it is not. I've filed the bug here:
https://aspnetwebstack.codeplex.com/workitem/1124
I think your workaround is the best option at the moment.
Also notice that you have multiple layers here that likely weren't designed/tested for proxying scenarios (and may not support it). On the HttpClient side, note that it will automatically decompress and follow redirects unless you turn that behavior off. At a minimum, you'll want to set these two properties:
http://msdn.microsoft.com/en-us/library/system.net.http.httpclienthandler.allowautoredirect.aspx
http://msdn.microsoft.com/en-us/library/system.net.http.httpclienthandler.automaticdecompression.aspx
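For example, a minimal sketch of an inner handler configured for a more transparent pass-through (property values are illustrative; adjust to your scenario):
// Don't follow redirects or decompress on the proxy's behalf;
// pass the origin server's response through as-is.
var innerHandler = new HttpClientHandler
{
    AllowAutoRedirect = false,
    AutomaticDecompression = System.Net.DecompressionMethods.None
};
This would then be passed as the innerHandler argument to HttpClientFactory.CreatePipeline in the MapHttpRoute call above.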
On the Web API/IIS side, you've found at least one bug, and it wouldn't be surprising to find others as well. Just be forewarned that there may currently be bugs like this when writing a proxy with these technologies outside their main design use cases.

Unknown reason for Timeout on HTTP HEAD request

I'm using ASP.NET 3.5 to build a website. One area of the website shows 28 video thumbnail images, which are JPEGs hosted on another web server. If one or more of these JPEGs does not exist, I want to display a locally hosted default image to the user, rather than a broken image link in the browser.
The approach I have taken to implement this is that, whenever the page is rendered, it performs an HTTP HEAD request for each of the images. If I get a 200 OK status code back, then the image is good and I can write out <img src="http://media.server.com/media/123456789.jpg" />. If I get a 404 Not Found, then I write out <img src="/images/defaultthumb.jpg" />.
Of course I don't want to do this every time for all requests, so I've implemented a list of cached image-status objects stored at application level, so that each image is only checked once every 5 minutes across all users; but this doesn't really have any bearing on my issue.
This seems to work very well. My problem is that for specific images, the HTTP HEAD request fails with Request Timed Out.
I have set my timeout value very low, to only 200ms, so that it doesn't delay the page rendering too much. This timeout seems to be fine for most of the images, and I've tried playing around and increasing it during debugging, but it makes no difference even if it's 10s or more.
I write out a log file to see what's happening, and this is what I get (edited for clarity and anonymity):
14:24:56.799|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/505C3080-EB4F-6CAE-60F8-B97F77A43A47/videothumb.jpg]]
14:24:57.356|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/66E2C916-EEB1-21D9-E7CB-08307CEF0C10/videothumb.jpg]]
14:24:57.914|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/905C3D99-C530-46D1-6B2B-63812680A884/videothumb.jpg]]
...
14:24:58.470|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/1CE0B04D-114A-911F-3833-D9E66FDF671F/videothumb.jpg]]
14:24:59.027|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/C3D7B5D7-85F2-BF12-E32E-368C1CB45F93/videothumb.jpg]]
14:25:11.852|ERROR|[HTTP HEAD CHECK ERROR [http://media.server.com/adpm/BED71AD0-2FA5-EA54-0B03-03D139E9242E/videothumb.jpg]] The operation has timed out
Source: System
Target Site: System.Net.WebResponse GetResponse()
Stack Trace: at System.Net.HttpWebRequest.GetResponse()
at MyProject.ApplicationCacheManager.ImageExists(String ImageURL, Boolean UseCache) in d:\Development\MyProject\trunk\src\Web\App_Code\Common\ApplicationCacheManager.cs:line 62
14:25:12.565|ERROR|[HTTP HEAD CHECK ERROR [http://media.server.com/adpm/92399E61-81A6-E7B3-4562-21793D193528/videothumb.jpg]] The operation has timed out
Source: System
Target Site: System.Net.WebResponse GetResponse()
Stack Trace: at System.Net.HttpWebRequest.GetResponse()
at MyProject.ApplicationCacheManager.ImageExists(String ImageURL, Boolean UseCache) in d:\Development\MyProject\trunk\src\Web\App_Code\Common\ApplicationCacheManager.cs:line 62
14:25:13.282|ERROR|[HTTP HEAD CHECK ERROR [http://media.server.com/adpm/7728C3B6-69C8-EFAA-FC9F-DAE70E1439F9/videothumb.jpg]] The operation has timed out
Source: System
Target Site: System.Net.WebResponse GetResponse()
Stack Trace: at System.Net.HttpWebRequest.GetResponse()
at MyProject.ApplicationCacheManager.ImageExists(String ImageURL, Boolean UseCache) in d:\Development\MyProject\trunk\src\Web\App_Code\Common\ApplicationCacheManager.cs:line 62
As you can see, the first 25 HEAD requests work, and the final 3 do not. It's always the last three.
If I paste one of the failed HEAD request URLs into a web browser: http://media.server.com/adpm/BED71AD0-2FA5-EA54-0B03-03D139E9242E/videothumb.jpg, it loads the image with no problems.
To try to work out what is happening here, I used Wireshark to capture all of the HTTP requests that are sent to the webserver hosting the images. For the log example I've given, I can see 25 HEAD requests for the 25 that were successful, but the 3 that failed do NOT appear in the wireshark trace.
Other than the images having different visual content, there is no difference from one image to the next.
To eliminate any problems with the URL itself (even though it works in a browser) I changed the order by switching one of the first images with one of the last failed three. When I do this, the problem goes away for the one that used to fail, and starts failing for the one that was bumped down to the end of the list.
So I think I can deduce from the above that when more than 25 HEAD requests occur in quick succession, subsequent HEAD requests fail regardless of the specific URL. I also know that the issue is on the IIS server rather than the remote image hosting server, due to the lack of requests in the Wireshark trace beyond the first 25.
The code snippet I'm using to perform the HEAD requests is shown below. Can anyone give me any suggestions as to what might be the problem? I've tried various combinations of request header values, but none of them seem to make any difference. My gut feeling is that there is some IIS setting somewhere that limits the number of concurrent HttpWebRequests to 25 in any one request to an ASP.NET page.
try {
    HttpWebRequest hwr = (HttpWebRequest)WebRequest.Create(ImageURL);
    hwr.Method = "HEAD";
    hwr.KeepAlive = false;
    hwr.AllowAutoRedirect = false;
    hwr.Accept = "image/jpeg";
    hwr.Timeout = 200;
    hwr.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.Reload);
    //hwr.Connection = "close";
    HttpWebResponse hwr_result = (HttpWebResponse)hwr.GetResponse();
    if (hwr_result.StatusCode == HttpStatusCode.OK) {
        Diagnostics.Diags.Debug("HTTP HEAD CHECK OK [" + ImageURL + "]", HttpContext.Current.Request);
        // EXISTENCE CONFIRMED - ADD TO CACHE
        if (UseCache) {
            _ImageExists.Value.RemoveAll(ie => ie.ImageURL == ImageURL);
            _ImageExists.Value.Add(new ImageExistenceCheck() { ImageURL = ImageURL, Found = true, CacheExpiry = DateTime.Now.AddMinutes(5) });
        }
        // RETURN TRUE
        return true;
    } else if (hwr_result.StatusCode == HttpStatusCode.NotFound) {
        throw new WebException("404");
    } else {
        throw new WebException("ERROR");
    }
} catch (WebException ex) {
    if (ex.Message.Contains("404")) {
        Diagnostics.Diags.Debug("HTTP HEAD CHECK NOT FOUND [" + ImageURL + "]", HttpContext.Current.Request);
        // NON-EXISTENCE CONFIRMED - ADD TO CACHE
        if (UseCache) {
            _ImageExists.Value.RemoveAll(ie => ie.ImageURL == ImageURL);
            _ImageExists.Value.Add(new ImageExistenceCheck() { ImageURL = ImageURL, Found = false, CacheExpiry = DateTime.Now.AddMinutes(5) });
        }
        return false;
    } else {
        Diagnostics.Diags.Error(HttpContext.Current.Request, "HTTP HEAD CHECK ERROR [" + ImageURL + "]", ex);
        // ASSUME IMAGE IS OK
        return true;
    }
} catch (Exception ex) {
    Diagnostics.Diags.Error(HttpContext.Current.Request, "GENERAL CHECK ERROR [" + ImageURL + "]", ex);
    // ASSUME IMAGE IS OK
    return true;
}
I have solved this myself. The problem was indeed the number of allowed connections, which was set to 24 by default.
In my case, I am going to only perform the image check if the MyHttpWebRequest.ServicePoint.CurrentConnections is less than 10.
To increase the max limit, just set ServicePointManager.DefaultConnectionLimit to the number of concurrent connections you require.
An alternative which may help some people would be to reduce the idle time, that is, the time a connection waits before it destroys itself. To change this, set MyHttpWebRequest.ServicePoint.MaxIdleTime to the timeout value in milliseconds.
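A minimal sketch of the two knobs mentioned above, reusing the hwr variable from the snippet in the question (the limit of 32 is just an illustrative value):
// Raise the per-host connection limit before issuing the HEAD requests
System.Net.ServicePointManager.DefaultConnectionLimit = 32;

// Or, per request: shorten how long an idle connection lingers before being closed
hwr.ServicePoint.MaxIdleTime = 1000; // milliseconds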

Prevent flex from caching an external resource

I'm writing a flex application that polls an xml file on the server to check for updated data every few seconds, and I'm having trouble preventing it from caching the data and failing to respond to it being updated.
I've attempted to set headers using the IIS control panel to use the following, without any luck:
CacheControl: no-cache
Pragma: no-cache
I've also attempted adding a random HTTP GET parameter to the end of the request URL, but that seems like it's stripped off by the HttpService class before the request is made. Here's the code to implement it:
http.url = "test.xml?time=" + new Date().getMilliseconds();
And here's the debug log that makes me think it failed:
(mx.messaging.messages::HTTPRequestMessage)#0
  body = (Object)#1
  clientId = (null)
  contentType = "application/x-www-form-urlencoded"
  destination = "DefaultHTTP"
  headers = (Object)#2
  httpHeaders = (Object)#3
  messageId = "AAB04A17-8CB3-4175-7976-36C347B558BE"
  method = "GET"
  recordHeaders = false
  timestamp = 0
  timeToLive = 0
  url = "test.xml"
Has anyone dealt with this problem?
The cache control HTTP header is "Cache-Control" ... note the hyphen! It should do the trick. If you leave out the hyphen, it is not likely to work.
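In other words, the headers from the question should read:
Cache-Control: no-cache
Pragma: no-cache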
I used getTime() to turn the date into a numeric string, and that did the trick. I also changed GET to POST. There were some issues with different file extensions being cached differently; for instance, a standard dynamic extension like .php or .jsp might not be cached by the browser, whereas a static file like .txt or .xml may well be:
private var myDate:Date = new Date();
[Bindable]
private var fileURLString:String = "http://www.mysite.com/data.txt?" + myDate.getTime();
Hopefully this helps someone.
I also threw a ton of header parameters at it, but they never fully did the trick. Examples:
// HTTPService called service
service.headers["Pragma"] = "no-cache"; // no caching of the file
service.headers["Cache-Control"] = "no-cache";

How to detect server-side whether cookies are disabled

How can I detect on the server (server-side) whether cookies in the browser are disabled? Is it possible?
Detailed explanation: I am processing an HTTP request on the server. I want to set a cookie via the Set-Cookie header. I need to know at that time whether the cookie will be set by the client browser or my request to set the cookie will be ignored.
Send a redirect response with the cookie set; when processing the (special) redirected URL, test for the cookie: if it's there, redirect to normal processing; otherwise redirect to an error state.
Note that this can only tell you the browser permitted the cookie to be set, but not for how long. My FF allows me to force all cookies to "session" mode, unless the site is specifically added to an exception list - such cookies will be discarded when FF shuts down regardless of the server specified expiry. And this is the mode I run FF in always.
You can use JavaScript to accomplish that.
Library:
function createCookie(name, value, days) {
    var expires;
    if (days) {
        var date = new Date();
        date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
        expires = "; expires=" + date.toGMTString();
    } else {
        expires = "";
    }
    document.cookie = name + "=" + value + expires + "; path=/";
}

function readCookie(name) {
    var nameEQ = name + "=";
    var ca = document.cookie.split(';');
    for (var i = 0; i < ca.length; i++) {
        var c = ca[i];
        while (c.charAt(0) == ' ') c = c.substring(1, c.length);
        if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length, c.length);
    }
    return null;
}

function eraseCookie(name) {
    createCookie(name, "", -1);
}

function areCookiesEnabled() {
    var r = false;
    createCookie("testing", "Hello", 1);
    if (readCookie("testing") != null) {
        r = true;
        eraseCookie("testing");
    }
    return r;
}
Code to run:
alert(areCookiesEnabled());
Remember: this only works if JavaScript is enabled!
I don't think there is a direct way to check. The best approach is to store a value in a cookie, try to read it back, and decide from that whether cookies are enabled or not.
The below answer was written a long time ago. Now, for better or worse, due to laws in various countries it has become either good practice - or a legal requirement - not to require cookies except where necessary, at least until the user has had a chance to consent to such mechanisms.
It's a good idea to only do this when the user is trying to do something that initiates a session, such as logging in, or adding something to their cart. Otherwise, depending on how you handle it, you're potentially blocking access to your entire site for users - or bots - that don't support cookies.
First, the server checks the login data as normal - if the login data is wrong the user receives that feedback as normal. If it's right, then the server immediately responds with a cookie and a redirect to a page which is designed to check for that cookie - which may just be the same URL but with some flag added to the query string. If that second page doesn't receive the cookie, then the user receives a message stating that they cannot log in because cookies are disabled on their browser.
If you're following the Post-Redirect-Get pattern for your login form already, then this setting and checking of the cookie does not add any additional requests - the cookie can be set during the existing redirect, and checked by the destination that loads after the redirect.
Now for why I only do a cookie test after a user-initiated action, rather than on every page load. I have seen sites implement a cookie test on every single page, not realising that this is going to have effects on things like search engines trying to crawl the site. That is, if a user has cookies enabled, then the test cookie is set once, so they only have to endure a redirect on the first page they request and from then on there are no redirects. However, for any browser or other user agent, like a search engine, that doesn't return cookies, every single page could simply result in a redirect.
Another method of checking for cookie support is with Javascript - this way, no redirect is necessarily needed - you can write a cookie and read it back virtually immediately to see if it was stored and then retrieved. The downside to this is it runs in script on the client side - ie if you still want the message about whether cookies are supported to get back to the server, then you still have to organise that - such as with an Ajax call.
For my own application, I implement some protection for 'Login CSRF' attacks, a variant of CSRF attacks, by setting a cookie containing a random token on the login screen before the user logs in, and checking that token when the user submits their login details. Read more about Login CSRF from Google. A side effect of this is that the moment they do log in, I can check for the existence of that cookie - an extra redirect is not necessary.
Try to store something into a cookie, and then read it. If you don't get what you expect, then cookies are probably disabled.
I always used this:
navigator.cookieEnabled
According to w3schools, "The cookieEnabled property is supported in all major browsers."
However, this works for me when I am using forms, where I can instruct the browser to send the additional information.
Check this code; it will help you.
<?php
session_start();

function visitor_is_enable_cookie() {
    $cn = 'cookie_is_enabled';
    if (isset($_COOKIE[$cn]))
        return true;
    elseif (isset($_SESSION[$cn]) && $_SESSION[$cn] === false)
        return false;

    // saving cookie ... and after it we have to redirect to get this
    setcookie($cn, '1');

    // redirect to get the cookie
    if (!isset($_GET['nocookie']))
        header("location: " . $_SERVER['REQUEST_URI'] . '?nocookie');

    // cookie isn't available
    $_SESSION[$cn] = false;
    return false;
}

var_dump(visitor_is_enable_cookie());
NodeJS - Server Side - Cookie Check Redirect
Middleware - Express Session/Cookie Parser
Dependencies
var express = require('express'),
    cookieParser = require('cookie-parser'),
    expressSession = require('express-session');
Middleware
return (req, res, next) => {
    if (req.query.cookie && req.cookies.cookies_enabled)
        return res.redirect('https://yourdomain.io' + req.path);

    if (typeof(req.cookies.cookies_enabled) === 'undefined' && typeof(req.query.cookie) === 'undefined') {
        return res.cookie('cookies_enabled', true, {
            path: '/',
            domain: '.yourdomain.io',
            maxAge: 900000,
            httpOnly: true,
            secure: process.env.NODE_ENV ? true : false
        }).redirect(req.url + '?cookie=1');
    }

    if (typeof(req.cookies.cookies_enabled) === 'undefined') {
        var target_page = 'https://yourdomain.io' + (req.url ? req.url : '');
        res.send('You must enable cookies to view this site.<br/>Once enabled, click here.');
        res.end();
        return;
    }

    next();
};
The question whether cookies are "enabled" is too boolean. My browser (Opera) has a per-site cookie setting. Furthermore, that setting is not yes/no. The most useful form is in fact "session-only", ignoring the servers' expiry date. If you test it directly after setting, it will be there. Tomorrow, it won't.
Also, since it's a setting you can change, even testing whether cookies do remain only tells you about the setting when you tested. I might have decided to accept that one cookie, manually. If I keep being spammed, I can (and at times, will) just turn off cookies for that site.
If you only want to check if session cookies (cookies that exist for the lifetime of the session) are enabled, set your session mode to AutoDetect in your web.config file; the ASP.NET framework will then write a cookie to the client browser called AspxAutoDetectCookieSupport. You can then look for this cookie in the Request.Cookies collection to check if session cookies are enabled on the client.
E.g. in your web.config file set:
<sessionState cookieless="AutoDetect" />
Then check if cookies are enabled on the client with:
if (Request.Cookies["AspxAutoDetectCookieSupport"] != null) { ... }
Side note: by default this is set to UseDeviceProfile, which will attempt to write cookies to the client so long as the client supports them, even if cookies are disabled. I find it slightly odd that this is the default option, as it seems sort of pointless: sessions won't work with cookies disabled in the client browser when it is set to UseDeviceProfile, and if you support cookieless mode for clients that don't support cookies, then why not use AutoDetect and support cookieless mode for clients that have them disabled as well?
I'm using a much simpler version of balexandre's answer above. It tries to set and read a session cookie for the sole purpose of determining if cookies are enabled. And yes, this requires that JavaScript is enabled as well, so you may want a <noscript> tag in there if you care to have one.
<script>
    // Cookie detection
    document.cookie = "testing=cookies_enabled; path=/";
    if (document.cookie.indexOf("testing=cookies_enabled") < 0)
    {
        // however you want to handle if cookies are disabled
        alert("Cookies disabled");
    }
</script>
<noscript>
    <!-- However you like handling your no-JavaScript message -->
    <h1>This site requires JavaScript.</h1>
</noscript>
The cookieEnabled property returns a Boolean value that specifies whether or not cookies are enabled in the browser:
<script>
    if (navigator.cookieEnabled) {
        // Cookies are enabled
    }
    else {
        // Cookies are disabled
    }
</script>
<?php
session_start();
if (SID != null) {
    echo "Please enable cookies";
}
?>
Use navigator.cookieEnabled to check whether cookies are enabled (it will return true or false), and the HTML <noscript> tag for the no-JavaScript case. By the way, navigator.cookieEnabled is JavaScript, so don't type it in as HTML.
