Google Chrome restores session cookies after a crash, how to avoid? - asp.net

In Google Chrome (I saw this with version 35 on Windows 8.1; so far I haven't tried other versions), when the browser crashes (or you simply unplug the power cable...) you'll be asked to restore the previous session when you open it again. It's a good feature, but it restores session cookies too.
I don't want to discuss here whether it's a bug or not; anyway, IMO it's a moderate security issue, because a user with physical access to that machine may "provoke" a crash to steal unclosed sessions with all their content (you won't be asked to log in again).
Finally, my question is: how can a web site avoid this? If I'm using plain ASP.NET authentication with session cookies, I don't want them to survive a browser crash (even if the computer is restarted!).
There is nothing like a process ID in the User-Agent string, and JavaScript variables are all restored (so I can't store a random seed generated, for example, server side). Is there anything else viable? The session timeout will eventually handle this, but it's usually pretty long, and there will be an unsafe window I would like to eliminate.

I didn't find anything I can use as a process ID to be sure Chrome has not been restarted, but there is a dirty workaround: if I set up a timer (say with an interval of five seconds), I can check how much time has elapsed since the last tick. If the elapsed time is too long, then the session has been recovered and a logout is performed. Roughly something like this (on each page):
var lastTickTime = new Date();

setInterval(function () {
    var currentTickTime = new Date();

    // The threshold is arbitrary but shouldn't be too small; here I assume
    // a 5 second timer with a maximum allowed delay of 10 seconds.
    if ((currentTickTime - lastTickTime) / 1000 > 10) {
        // Perform logout
    }

    lastTickTime = currentTickTime;
}, 5000);
Of course it's not a perfect solution (a malicious attacker may work around this and/or disable JavaScript), but so far it's better than nothing.
New answers with a better solution are more than welcome.

Adriano's suggestion is a good idea, but the implementation is flawed. We need to remember the time from before the crash so we can compare it to the time after the crash. The easiest way to do that is to use sessionStorage.
const CRASH_DETECT_THRESHOLD_IN_MILLISECONDS = 10000;

const marker = parseInt(sessionStorage.getItem('crashDetectMarker') || new Date().valueOf(), 10);
const diff = new Date().valueOf() - marker;
console.log('diff', diff);

if (diff > CRASH_DETECT_THRESHOLD_IN_MILLISECONDS) {
    alert('log out');
} else {
    alert('ok');
}

setInterval(() => {
    sessionStorage.setItem('crashDetectMarker', new Date().valueOf());
}, 1000);
To test, you can simulate a Chrome crash by entering chrome://crash in the location bar.
Don't forget to clear out the crashDetectMarker when the user logs out.

Related

Google reCAPTCHA response success: false, no error codes

UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get so many validation requests that are either timeouts or duplicates. Determining this with certainty is likely to be impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross-posted this to Google Groups, so apologies for spamming the ether for those of you who frequent both sites.
I am currently working on a page as part of an ASP.NET MVC application with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server-side validation** of reCAPTCHA responses, I have for a while now been seeing cases where the response has its success property set to false, but with an accompanying empty error code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
1. The validation has timed out and is no longer valid.
2. The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have involved either a rather old validation (ranging from 5 minutes to 10 days(!)), a re-used response value, or sometimes a combination of the two.
Even after implementing client-side prevention of double-clicking my submit button, a lot of double submits still seem to get through to the server-side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not one of the 1760 requests has failed with a non-empty error code array.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp gets issued, and then after 10 days validation is attempted, server side.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];

    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };

        var content = new FormUrlEncodedContent(parameters);
        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;

        GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);

        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            // Actual code omitted
            // Try to determine the cause of failure:
            // - Look at the googleCaptchaResponse.ErrorCodes array (this has been empty in all of the 28 cases of "success: false")
            // - Measure the time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
            // - Check the reCAPTCHA response against a local database of previously used responses to detect cases of double submission
            return false;
        }
    }
}
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate error if your captcha is validated twice.
Save logs to a file in append mode and check whether you are validating a captcha twice.
Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret='.$secret.'&response='.$_POST['g-recaptcha-response']);
file_put_contents("logfile", $verifyResponse, FILE_APPEND);
Now read the content of the logfile created above and check whether any captcha response has been verified twice.
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
1. Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new reCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal as it increases requests to your server and will blow away any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
2. Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove; if the user's not moving the mouse, it's very likely they aren't on the page.
3. Handle it server-side, by catching this scenario. I actually prefer this method the most as it's the most fluid and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit (a rough sketch follows below).
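For illustration, here is a minimal sketch of option 3 as an MVC controller action. The model type, the _captchaVerifier field and the message wording are assumptions made for the example, not part of the original code:

[HttpPost]
public ActionResult Submit(MyFormModel model)
{
    var captchaResponse = Request.Form["g-recaptcha-response"];

    if (!_captchaVerifier.IsVerifiedUser(captchaResponse, Request.UserHostAddress))
    {
        // Treat "success: false" with an empty error array like any other
        // validation failure: re-display the form with a message so the user
        // can solve the CAPTCHA again and resubmit.
        ModelState.AddModelError(string.Empty,
            "Your CAPTCHA verification expired. Please verify and resubmit.");
        return View(model);
    }

    // ... process the form ...
    return RedirectToAction("Success");
}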
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons, and as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.

URL dwell time in any programming language?

Could you please give me some hints, websites, books or research papers that explain how to calculate URL dwell time?
In case you don't know what dwell time is: dwell time denotes the time a user spends viewing a document after clicking a link on a search engine results page.
Thanks in advance.
One crude way to do this on a page would be to use a small GET request on a timer, going to a server - an "I'm still here" ping. The frequency of this would be a trade-off. This would be relatively easy to do with jQuery or a similar framework.
You would not know whether the page is actually sitting in an abandoned tab, or open but not actually being looked at.
A sample for the client end (using jQuery):
var $session = Math.floor((1 + Math.random()) * 0x10000);

function still_alive() {
    var $url = $server_url + "/still_alive";
    $.get($url, { location: location.href, session: $session });
}

// call it once to prime it
still_alive();

// Set it up on a timer (repeating every second)
window.setInterval(function () {
    still_alive();
}, 1000);
1000 is the interval in milliseconds - so this is on a 1-second interval. $server_url is the server to register this at - I am adding "/still_alive" as an endpoint to register this at. $session - this can be some way of identifying the current session - set once when the page loads - it could be the result of a uuid function.
The next line is a jQuery GET request to that whole URL. It is being passed a plain object, with the key location holding the URL of the current page. It may be more appropriate to use a POST instead of a GET, but the principle is still the same.
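On the server side, the endpoint only has to record each ping with a timestamp; dwell time for a (session, URL) pair is then roughly the span between the first and last ping. Here is a minimal sketch as an ASP.NET MVC action - the controller name, the in-memory store and the route wiring for "/still_alive" are assumptions for the example:

using System;
using System.Collections.Concurrent;
using System.Web.Mvc;

public class TrackingController : Controller
{
    // In-memory store: "session|url" -> (first ping, last ping). A real
    // implementation would persist this; a static dictionary just keeps the
    // sketch self-contained.
    private static readonly ConcurrentDictionary<string, Tuple<DateTime, DateTime>> Pings =
        new ConcurrentDictionary<string, Tuple<DateTime, DateTime>>();

    // GET /still_alive?location=...&session=...
    [HttpGet]
    public ActionResult StillAlive(string location, string session)
    {
        var now = DateTime.UtcNow;
        Pings.AddOrUpdate(session + "|" + location,
            _ => Tuple.Create(now, now),
            (_, old) => Tuple.Create(old.Item1, now));

        // Dwell time is roughly last ping - first ping (plus one interval).
        return new HttpStatusCodeResult(204);
    }
}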

How to clean old deployed versions in Firebase hosting?

Every time you deploy to Firebase Hosting, a new deploy version is created so you can roll back and see who deployed. This means that every file you deploy is stored again each time, occupying more and more space.
Other than manually deleting each deployed version one by one, is there any automated way to clean those useless files?
You're correct. You'll need to delete the old deployed versions one by one using the Firebase Hosting console.
There's no other way to do this, so I'd suggest you file a feature request to enable deletion of multiple deployed versions in the Firebase Hosting console.
Update:
You can vote here (please avoid +1 spam, use reactions) https://github.com/firebase/firebase-tools/issues/215#issuecomment-314211730 for one of the alternatives proposed by the team (batch delete, keep only X versions, keep versions with published date < Y)
UPDATE Mar/2019
There's now a proper solution: "Version history settings", which lets you keep only the last X versions.
https://support.google.com/firebase/answer/9242086?hl=en
UPDATE Feb/2019
Confirmed by Google employee # github.com/firebase/firebase-tools/issues/...
It is actively being worked on. 🙂
🎉🎉🎉
Before continuing reading:
You can vote here (please avoid +1 spamming, use reactions) https://github.com/firebase/firebase-tools/issues/215#issuecomment-314211730 for one of the alternatives proposed by the team
So, by using Chrome Dev Tools I found a way to delete multiple versions. Keep in mind it requires a bit of work (proceed with care, since deleted versions can't be restored and you won't get any warnings like when using the UI).
Step 1. Retrieving the version list.
Open Chrome Dev Tools (if you don't know how to, chances are you should wait for a proper solution from Firebase's team).
Open Firebase's Console and go to the "Hosting" tab.
Go to the "Network" tab on CDT and use the Websockets filter.
Select the request named .ws?v=5&ns=firebase
Open the "Frames" tab
Now comes the tedious part: select the frames with the highest "length" value. (Depending on your data, it could be 2-n frames. In my case, 3 frames of 14k-16k length.)
Paste each frame's data together in order (which will form a valid JSON object).
Extracting the data: There are several ways to do it. I opted for simple JS on CDT's console.
var jsonString = '...';
var json = JSON.parse(jsonString);
var ids = Object.keys(json.d.b.d);
Step 2. Performing the requests
Almost there :P
Now that you have the IDs, perform the following requests:
DELETE https://firebasehosting.clients6.google.com/v1beta1/sites/PROJECT_NAME/versions/-VERSION_ID?key=KEY
I used Sublime (to create the request strings) + Paw.
The "KEY" can be copied from any of CDT's requests. It doesn't match Firebase's Web API key
=> Before performing the requests: take note of the version you don't want to delete from the table provided by Firebase. (Each version listed on the website has the last 6 digits of it's ID under your email)
(Screenshots weren't provided since all of them would require blurring and a bit of work)
This script is not yet super solid, so use it at your own risk. I'll try to update it later, but it worked for me for now.
It's just some JavaScript that clicks the buttons to delete deployed items one by one.
var deleteDeployment = function (it) {
    it.click();
    setTimeout(function () {
        $('.md-dialog-container .delete-dialog button.md-raised:contains("Delete")').click();
    }, 300);
};

$('.h5g-hist-status-deployed')
    .map((i, a) => $(a).parent())
    .map((i, a) => $(a).find('md-menu button:contains(Delete)'))
    .each((i, it) => {
        setTimeout(function () {
            deleteDeployment(it);
        }, (i + 1) * 2000);
    });
Firebase finally implemented a solution for this.
It is now possible to set a limit of retained versions.
https://firebase.google.com/docs/hosting/deploying#set_limit_for_retained_versions
EDIT: previous link is outdated. Here is a new link that works:
https://firebase.google.com/docs/hosting/usage-quotas-pricing#control-storage-usage
This may be a bit brittle due to the selectors' reliance on current DOM structure and classes on the Hosting Dashboard, but it works for me!
NOTE: This script (if executed from the console) or bookmarklet will click and confirm delete on all of the rows in the current view. I'm fairly certain that even if you click delete on the current deployment it will not delete it.
Function for running in console:
let deleteAllHistory = () => {
    let deleteBtns = document.querySelectorAll('.table-row-actions button.md-icon-button');
    const deleteBtn = (pointer) => {
        deleteBtns[pointer].click();
        setTimeout(() => {
            document.querySelector('.md-open-menu-container.md-clickable md-menu-item:last-child button').click();
            setTimeout(() => {
                document.querySelector('.fb-dialog-actions .md-raised').click();
                if (pointer < deleteBtns.length - 1) {
                    deleteBtn(pointer + 1);
                }
            }, 500);
        }, 500);
    };
    deleteBtn(0);
};
Bookmarklet:
javascript:(function()%7Bvar%20deleteBtns%3Ddocument.querySelectorAll('.table-row-actions%20button.md-icon-button')%2CdeleteBtn%3Dfunction(a)%7BdeleteBtns%5Ba%5D.click()%2CsetTimeout(function()%7Bdocument.querySelector('.md-open-menu-container.md-clickable%20md-menu-item%3Alast-child%20button').click()%2CsetTimeout(function()%7Bdocument.querySelector('.fb-dialog-actions%20.md-raised').click()%2Ca%3CdeleteBtns.length-1%26%26deleteBtn(a%2B1)%7D%2C500)%7D%2C500)%7D%3BdeleteBtn(0)%7D)()
Nathan's option is great, but I have a quick-and-dirty method using AutoHotkey. Takes about a second per version to delete, so you can knock out a page in 10 seconds.
#a::
Click
MouseGetPos, xpos, ypos
MouseMove, xpos, ypos + 30
Sleep 300
Click
Sleep 400
Click 1456, 816
MouseMove, xpos, ypos + 82
return
#s::
Click
MouseGetPos, xpos, ypos
MouseMove, xpos, ypos - 820
return
You'll likely need to modify the exact pixel values for your screen, but this works perfectly on my 1920x1080.
Win + a is delete and move to the next entry, Win + s is move to the next page. Put your mouse on the first 3-dot menu and go for it!
At the top of the release history table, click the toolbar and select "Version history settings". Set the desired amount and click Save. This will automatically delete older deployments.
I don't know if it can help you or not, but I can delete old deployments from the "Hosting" menu like this:
Delete or roll back an old deployment

Profiling ASP.net applications over the long term?

What is the accepted way to instrument a web-site to record execution statistics?
How long it takes to X
For example, I want to know how long it takes to perform some operation, e.g. validating the user's credentials with the Active Directory server:
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
A lot of people will suggest using Tracing, of various kinds, to output, or log, or record, the interesting performance metrics:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();
//write a number to a log
WriteToLog("TimeToCheckCredentials", sw.ElapsedTicks);
Not an X; all X
The problem with this is that I'm not interested in how long it took to validate one user's credentials against Active Directory. I'm interested in how long it took to validate thousands of users' credentials in Active Directory:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();

timeToCheckCredentialsSum = timeToCheckCredentialsSum + sw.ElapsedTicks;
timeToCheckCredentialsCount = timeToCheckCredentialsCount + 1;

if ((sw.ElapsedTicks < timeToCheckCredentialsMin) || (timeToCheckCredentialsMin == 0))
    timeToCheckCredentialsMin = sw.ElapsedTicks;
if ((sw.ElapsedTicks > timeToCheckCredentialsMax) || (timeToCheckCredentialsMax == 0))
    timeToCheckCredentialsMax = sw.ElapsedTicks;

oldMean = timeToCheckCredentialsAverage;
newMean = timeToCheckCredentialsSum / timeToCheckCredentialsCount;
timeToCheckCredentialsAverage = newMean;

if (timeToCheckCredentialsCount > 2)
{
    timeToCheckCredentialsVariance =
        ((timeToCheckCredentialsCount - 2) * timeToCheckCredentialsVariance
         + (sw.ElapsedTicks - oldMean) * (sw.ElapsedTicks - newMean))
        / (timeToCheckCredentialsCount - 1);
}
else
{
    timeToCheckCredentialsVariance = 0;
}
Which is a lot of boilerplate code that can easily be abstracted away into:
var sw = new System.Diagnostics.Stopwatch();
sw.Start();
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
sw.Stop();
//record the sample
Profiler.AddSample("TimeToCheckCredentials", sw.ElapsedTicks);
Which is still a lot of boilerplate code that can be abstracted into:
Profiler.Start("TimeToCheckCredentials");
authenticated = CheckCredentials(Login1.UserName, Login1.Password);
Profiler.Stop("TimeToCheckCredentials");
Now I have some statistics sitting in memory. I can let the web site run for a few months, and at any time I can connect to the server and look at the profiling statistics. This is very much like SQL Server's ability to present its own running history in various reports.
But ASP.NET kills apps without warning
The problem is that this is an ASP.NET web site/application. Randomly throughout the course of a year, the web server will decide to shut down the application by recycling the application pool:
perhaps it has been idle for 3 weeks
perhaps it reached the maximum recycle time limit (e.g. 24 hours)
perhaps a date on a file changed, and the web-server has to recompile the application
When the web-server decides to shut down, all my statistics are lost.
Are there any ASP.net performance/instrumentation frameworks that solve this problem?
Try persisting to SQL Server
I thought about storing my statistics in SQL Server. Much like ASP.NET session state can be stored in SQL Server after every request is complete, I could store my values in SQL Server every time:
void AddSample(String sampleName, long elapsedTicks)
{
    using (IDbConnection conn = CreateDatabaseConnection())
    {
        ExecuteAddSampleStoredProcedure(conn, sampleName, elapsedTicks);
    }
}
Except now I've introduced huge latency into my application. This profiling code is called many thousands of times a second. When the math is performed only in memory it takes a few microseconds; now it takes a few dozen milliseconds (a factor of 1,000, and a noticeable delay). That's not going to work.
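A middle ground I could imagine (just a sketch of the idea, not something I have measured) would be to keep AddSample cheap by queuing samples in memory and flushing them to SQL Server in batches on a background timer; the bulk-write method below is a placeholder:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public static class SampleBuffer
{
    private static readonly ConcurrentQueue<Tuple<string, long>> _pending =
        new ConcurrentQueue<Tuple<string, long>>();

    // Flush once a minute; the interval is arbitrary.
    private static readonly Timer _flushTimer =
        new Timer(_ => Flush(), null, 60000, 60000);

    public static void AddSample(string name, long elapsedTicks)
    {
        // Cheap: no I/O on the request path.
        _pending.Enqueue(Tuple.Create(name, elapsedTicks));
    }

    private static void Flush()
    {
        var batch = new List<Tuple<string, long>>();
        Tuple<string, long> sample;
        while (_pending.TryDequeue(out sample))
            batch.Add(sample);

        if (batch.Count > 0)
            WriteBatchToDatabase(batch); // placeholder for a bulk insert, e.g. via SqlBulkCopy
    }

    private static void WriteBatchToDatabase(List<Tuple<string, long>> batch)
    {
        // Omitted: one round-trip per batch instead of one per sample.
    }
}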
Save only on application shutdown
I have considered registering my static helper class with the ASP.NET hosting environment by implementing IRegisteredObject:
public class ShutdownNotification : IRegisteredObject
{
    public void Stop(Boolean immediate)
    {
        Profiler.SaveStatisticsToDatabase();
    }
}
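For reference, registering (and later unregistering) the object uses the real System.Web.Hosting API; roughly:

// e.g. in Global.asax
protected void Application_Start()
{
    System.Web.Hosting.HostingEnvironment.RegisterObject(new ShutdownNotification());
}

// and inside ShutdownNotification.Stop, after saving the statistics:
//     System.Web.Hosting.HostingEnvironment.UnregisterObject(this);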
But I'm curious what the right way to solve this problem is. Smarter people than me must have added profiling to ASP.NET before.
We use Microsoft's Application Performance Monitoring for this. It captures page load times, DB call times, API call times, etc. When a page load is unexpectedly slow, it also alerts us and provides the stack trace along with the timings of various calls that impacted the load time. It's somewhat rudimentary but it does the trick and allowed us to verify that we didn't have any variations that were not performing as expected.
Advance warning: the UI only works in IE.
http://technet.microsoft.com/en-us/library/hh457578.aspx

Get ASP.NET Session Last Access Time (or Time-to-Timeout)

I'm trying to determine how much time is left in a given ASP.NET session until it times out.
If there is no readily available time-to-timeout value, I could also calculate it from its last access time (but I didn't find this either). Any idea how to do this?
If you are at the server, processing the request, then the timeout has just been reset so the full 20 minutes (or whatever you configured) remain.
If you want a client-side warning, you will need to create some JavaScript code that will fire about 20 minutes from "now". See the setTimeout method.
I have used that to display a warning 15 minutes after the page was requested. It pops up an alert like "your session will expire at {HH:mm}, please save your work". The exact time was used instead of "in 5 minutes" because you never know when the user will see that message (did he return to his computer 10 minutes after the alert fired?).
For a multi-page solution, one could save the last request time in a cookie, and JavaScript could use that last access time to decide whether to show the warning message or perform the logout action.
I have just implemented a solution like the one asked about here and it seems to work. I have an MVC application and have this code in my _Layout.cshtml page, but I would think it could work in a Web Forms ASP.NET app by placing it in the master page. I am using local session storage via the amplify.js plugin. I use local session storage because, as Mr Grieves says, there could be a situation where a user is accessing the application in a way that does not cause a page refresh or redirect but still resets the session timeout on the server.
$(document).ready(function () {
    var sessionTimeout = '@(Session.Timeout)'; // from the server at startup (Razor)

    amplify.store.sessionStorage("sessionTimeout", sessionTimeout);
    amplify.store.sessionStorage("timeLeft", sessionTimeout);

    setInterval(checkSession, 60000); // run checkSession every 1 minute

    function checkSession() {
        var timeLeft = amplify.store.sessionStorage("timeLeft");
        timeLeft--; // decrement by 1 minute
        amplify.store.sessionStorage("timeLeft", timeLeft);

        if (timeLeft <= 10) {
            alert("You have " + timeLeft + " minutes before session timeout.");
        }
    }
});
Then, in a page where users never cause a page refresh but still hit the server (thereby resetting their session), I put this on a button click event:
$('#MyButton').click(function (e) {
    // Some code that causes a session reset but not a page refresh here
    amplify.store.sessionStorage("sessionTimeout", 60); // default session timeout
    amplify.store.sessionStorage("timeLeft", 60);
});
Using local session storage allows my _Layout.chtml code to see that the session has been reset here even though a page never got refreshed or redirected.
You can get the timeout in minutes from:
Session.Timeout
Isn't this enough to provide the information, as the timeout is reset on every request? I don't know how you want to display this without doing a request.
Anyhow, the best way is to set a Session variable with the last access time on every request. That should provide the info remotely.
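A minimal sketch of that idea (the event handler is the standard Global.asax hook; the key name "LastAccessUtc" is arbitrary):

// In Global.asax: stamp the session on every request.
protected void Application_AcquireRequestState(object sender, EventArgs e)
{
    var session = System.Web.HttpContext.Current.Session;
    if (session != null)
    {
        session["LastAccessUtc"] = DateTime.UtcNow;
    }
}

// Elsewhere, the remaining time can be estimated as:
//     var last = (DateTime)Session["LastAccessUtc"];
//     var minutesLeft = Session.Timeout - (DateTime.UtcNow - last).TotalMinutes;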
