Get ASP.NET Session Last Access Time (or Time-to-Timeout)

I'm trying to determine how much time is left in a given ASP.NET session until it times out.
If there is no readily available time-to-timeout value, I could also calculate it from its last access time (but I didn't find this either). Any idea how to do this?

If you are at the server, processing the request, then the timeout has just been reset so the full 20 minutes (or whatever you configured) remain.
If you want a client-side warning, you will need some JavaScript code that fires about 20 minutes from "now". See the setTimeout method.
I have used that to display a warning 15 minutes after the page was requested. It pops up an alert like "Your session will expire at {HH:mm}, please save your work". The exact time is shown instead of "in 5 minutes" because you never know when the user will actually see the message (did they return to their computer 10 minutes after the alert fired?).
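A minimal sketch of that approach (the 20-minute timeout and the warning text are assumptions; adjust them to your configured values):
// Warn the user 15 minutes after the page was served (assumes a 20-minute session timeout).
var sessionTimeoutMinutes = 20; // match your web.config value
var warnAfterMs = (sessionTimeoutMinutes - 5) * 60 * 1000;
var expiresAt = new Date(Date.now() + sessionTimeoutMinutes * 60 * 1000);

setTimeout(function () {
    // Show the absolute expiry time rather than "in 5 minutes",
    // since the user may not see the alert immediately.
    var hh = ('0' + expiresAt.getHours()).slice(-2);
    var mm = ('0' + expiresAt.getMinutes()).slice(-2);
    alert('Your session will expire at ' + hh + ':' + mm + ', please save your work.');
}, warnAfterMs);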

For a multi-page solution, one could save the last request time in a cookie, and JavaScript could use that last access time to decide when to show the warning or perform the logout action.
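A rough sketch of that idea (the cookie name, 20-minute timeout and 5-minute warning threshold are arbitrary values here):
// On every page load, record the request time in a cookie shared by all pages.
document.cookie = 'lastRequestTime=' + Date.now() + '; path=/';

// Periodically compare against the configured timeout (20 minutes assumed here).
var sessionTimeoutMs = 20 * 60 * 1000;
setInterval(function () {
    var match = document.cookie.match(/(?:^|; )lastRequestTime=(\d+)/);
    var lastRequestTime = match ? parseInt(match[1], 10) : Date.now();
    var elapsed = Date.now() - lastRequestTime;
    if (elapsed > sessionTimeoutMs - 5 * 60 * 1000) {
        alert('Your session is about to expire, please save your work.');
        // or redirect to a logout page once elapsed > sessionTimeoutMs
    }
}, 60000);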

I have just implemented a solution like the one asked about here and it seems to work. I have an MVC application and put this code in my _Layout.cshtml page, but it should work in an ASP.NET Web Forms app by placing it in the master page. I am using local session storage via the amplify.js plugin, because, as Mr Grieves says, there could be a situation where a user is accessing the application in a way that does not cause a page refresh or redirect but still resets the session timeout on the server.
$(document).ready(function () {
    var sessionTimeout = '@Session.Timeout'; // from server at startup
    amplify.store.sessionStorage("sessionTimeout", sessionTimeout);
    amplify.store.sessionStorage("timeLeft", sessionTimeout);
    setInterval(checkSession, 60000); // run checkSession every 1 minute

    function checkSession() {
        var timeLeft = amplify.store.sessionStorage("timeLeft");
        timeLeft--; // decrement by 1 minute
        amplify.store.sessionStorage("timeLeft", timeLeft);
        if (timeLeft <= 10) {
            alert("You have " + timeLeft + " minutes before session timeout.");
        }
    }
});
Then, on a page where users never cause a page refresh but still hit the server (thereby resetting their session), I put this on a button click event:
$('#MyButton').click(function (e) {
    // Some code that resets the session on the server but does not refresh the page
    amplify.store.sessionStorage("sessionTimeout", 60); // default session timeout
    amplify.store.sessionStorage("timeLeft", 60);
});
Using local session storage allows my _Layout.cshtml code to see that the session has been reset, even though the page never got refreshed or redirected.

You can get the timeout in minutes from:
Session.Timeout
Isn't this enough to provide the information, since the timeout is reset on every request? How would you display this without making a request anyway?
Anyhow, the best way is to set a Session variable with the last access time on every request. That should provide the info remotely.


Google reCAPTCHA response success: false, no error codes

UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get that many validation requests that are either timeouts or duplicates. Determining this with certainty is likely to be impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross posted this to Google Groups, so apologies for spamming the ether for the ones of you who frequent both sites.
I am currently working on a page, as part of an ASP.NET MVC application, with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server-side validation** of reCAPTCHA responses, I have for a while now seen cases where the response has its success property set to false, but with an accompanying empty error code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
1. The validation has timed out and is no longer valid.
2. The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have either had the validation be rather old (ranging from 5 minutes to 10 days(!)), or it has been a case of a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp gets issued, and then after 10 days validation is attempted, server side.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];

    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };

        var content = new FormUrlEncodedContent(parameters);
        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;
        GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);

        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            //Actual code omitted
            //Try to determine the cause of failure
            //Look at googleCaptchaResponse.ErrorCodes array (this has been empty in all of the 28 cases of "success: false")
            //Measure time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
            //Check reCAPTCHA response against local database of previously used reCAPTCHA responses to detect cases of double submission
            return false;
        }
    }
}
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate error if your captcha is validated twice.
Save logs to a file in append mode and check whether you are validating a captcha twice.
Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret='.$secret.'&response='.$_POST['g-recaptcha-response']);
file_put_contents("logfile", $verifyResponse, FILE_APPEND);
Now read the content of the log file created above and check whether the captcha is being verified twice.
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
1. Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new ReCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal as it increases requests to your server and will blow out any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
2. Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove. If the user's not moving the mouse, it's very likely they aren't on the page.
3. Handle it server-side, by catching this scenario. I actually prefer this method the most as it's the most fluid, and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit.
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons, and as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.
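For example, a minimal jQuery sketch (the form and button selectors are placeholders):
// Disable the submit button as soon as the form is submitted,
// so a second click can't post the same reCAPTCHA response again.
$('#myForm').on('submit', function () {
    $(this).find('button[type="submit"], input[type="submit"]').prop('disabled', true);
});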

Is there a callback when a session expires?

I wonder if there is any callback which fires when the session expires (I'm using Simplelogin with $authWithPassword()). I already searched with Google and stumbled upon this: https://groups.google.com/forum/#!topic/firebase-talk/btaE-hCVQdk
But I don't understand how the callback of the auth method listens for the "Session expired" event, since it only gets executed once (when a user logs in). Or is there actually an event listener on its callbacks?
I tried testing the login by using the options parameter with expires: ((new Date()).getTime() + 1000) / 1000 (it says it needs a timestamp in seconds, not milliseconds), but I don't get a result.
Any help is appreciated.
My solution for this (in pseudo-code steps; I can help with full JavaScript):
1. Get the time offset (server/client):
1.1. On login, ref.set() the clientTime (Date.now()) and serverTime (Firebase.ServerValue.TIMESTAMP) in a Firebase object (i.e. an online-list).
1.2. On success, read both and compute the time offset in ms.
2. window.setTimeout() with your controlled logout function (i.e. unauth()) and the following timeout value:
2.1. Milliseconds to timeout: on login via auth() you get authData.expires; use this to calculate the expiry timeout value:
authData.expires*1000 - (Date.now() + that.serverTimeOffset) - 2000
Use *1000 because authData.expires comes in seconds.
Use -2000 because you have to be faster with unauth() than Firebase is at disconnecting you :-)
I'm very pleased with this solution. It works perfectly for my multiplayer browser game.
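A rough JavaScript sketch of those steps, assuming the legacy Firebase SDK (the online-list path, the credential variables and the 2-second margin are just the choices from above):
var ref = new Firebase('https://your-app.firebaseio.com'); // hypothetical URL

ref.authWithPassword({ email: email, password: password }, function (error, authData) {
    if (error) { return; }

    // 1. Write client time and server time so we can compute the offset.
    var onlineRef = ref.child('online-list').child(authData.uid);
    onlineRef.set({ clientTime: Date.now(), serverTime: Firebase.ServerValue.TIMESTAMP }, function () {
        onlineRef.once('value', function (snap) {
            var v = snap.val();
            var serverTimeOffset = v.serverTime - v.clientTime; // ms

            // 2. Schedule a controlled logout just before the token expires.
            // authData.expires is in seconds, hence *1000; -2000 to beat the disconnect.
            var msToTimeout = authData.expires * 1000 - (Date.now() + serverTimeOffset) - 2000;
            window.setTimeout(function () {
                ref.unauth(); // your controlled logout
            }, msToTimeout);
        });
    });
});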

how to check if page finished loading in RSelenium

Imagine that you click on an element using RSelenium on a page and would like to retrieve the results from the resulting page. How does one check to make sure that the resulting page has loaded? I can insert Sys.sleep() in between processing the page and clicking the element but this seems like a very ugly and slow way to do things.
Set ImplicitWaitTimeout and then search for an element on the page. From ?remoteDriver:
setImplicitWaitTimeout(milliseconds = 10000)
Set the amount of time the driver should wait when searching for elements. When searching for a single element, the driver will poll the page until an element is found or the timeout expires, whichever occurs first. When searching for multiple elements, the driver should poll the page until at least one element is found or the timeout expires, at which point it will return an empty list. If this method is never called, the driver will default to an implicit wait of 0ms.
In the RSelenium reference manual (http://cran.r-project.org/web/packages/RSelenium/RSelenium.pdf), you will find the method setTimeout() for the remoteDriver class:
setTimeout(type = "page load", milliseconds = 10000)
Configure the amount of time that a particular type of operation can execute for before they are aborted and a |Timeout| error is returned to the client.
type: The type of operation to set the timeout for. Valid values are: "script" for script timeouts, "implicit" for modifying the implicit wait timeout and "page load" for setting a page load timeout. Defaults to "page load"
milliseconds: The amount of time, in milliseconds, that time-limited commands are permitted to run. Defaults to 10000 milliseconds.
This seems to suggest that remDr$setTimeout() after remDr$navigate("...") would actually wait for the page to load, or return a timeout error after 10 seconds.
You can also try out this code, which asks the browser whether the page has loaded or not:
JavascriptExecutor objExecutor = (JavascriptExecutor) objDriver;
if (!objExecutor.executeScript("return document.readyState").toString()
        .equalsIgnoreCase("complete")) {
    // document is not ready yet, wait a moment before continuing
    Thread.sleep(1000);
}
You can simply put it in your base page so you won't need to write it in every page object. I have never tried it with any AJAX-enabled sites, but this might help you, and the dependency on fixed sleeps in your scenario will also go away.

Google Chrome restores session cookies after a crash, how to avoid?

On Google Chrome (I saw this with version 35 on Windows 8.1; so far I didn't try other versions), when the browser crashes (or you simply unplug the power cable...), you'll be asked to restore the previous session when you open it again. A good feature, but it restores session cookies too.
I don't want to discuss here whether it's a bug or not; anyway, IMO it's a moderate security bug, because a user with physical access to that machine may "provoke" a crash to steal unclosed sessions with all their content (you won't be asked to log in again).
Finally, my question is: how can a web site avoid this? If I'm using plain ASP.NET authentication with session cookies, I do not want them to survive a browser crash (even if the computer is restarted!).
There is nothing similar to a process ID in the User Agent string, and JavaScript variables are all restored (so I can't store a random seed generated, for example, server side). Is there anything else viable? The session timeout will handle this, but usually it's pretty long, so there will be an unsafe window I would like to eliminate.
I didn't find anything I can use as a process ID to be sure Chrome has not been restarted, but there is a dirty workaround: if I set up a timer (let's say with an interval of five seconds), I can check how much time has elapsed since the last tick. If the elapsed time is too long, then the session has been recovered and a logout is performed. Roughly something like this (for each page):
var lastTickTime = new Date();
setInterval(function () {
    var currentTickTime = new Date();

    // Difference is arbitrary and shouldn't be too small, here I suppose
    // a 5 seconds timer with a maximum delay of 10 seconds.
    if ((currentTickTime - lastTickTime) / 1000 > 10) {
        // Perform logout
    }

    lastTickTime = currentTickTime;
}, 5000);
Of course it's not a perfect solution (because a malicious attacker may handle this and/or disable JavaScript) but so far it's better than nothing.
New answers with a better solution are more than welcome.
Adriano's suggestion is a good idea, but the implementation is flawed. We need to remember the time from before the crash so we can compare it to the time after the crash. The easiest way to do that is to use sessionStorage.
const CRASH_DETECT_THRESHOLD_IN_MILLISECONDS = 10000;

// Read the marker written before the (possible) crash; fall back to "now" on first load.
const marker = parseInt(sessionStorage.getItem('crashDetectMarker') || new Date().valueOf(), 10);
const diff = new Date().valueOf() - marker;
console.log('diff', diff);

if (diff > CRASH_DETECT_THRESHOLD_IN_MILLISECONDS) {
    alert('log out');
} else {
    alert('ok');
}

// Keep refreshing the marker while the page is alive.
setInterval(() => {
    sessionStorage.setItem('crashDetectMarker', new Date().valueOf());
}, 1000);
To test, you can simulate a Chrome crash by entering chrome://crash in the location bar.
Don't forget to clear out the crashDetectMarker when the user logs out.
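For instance, on an explicit client-side logout:
// Remove the marker so the next login starts with a clean slate.
sessionStorage.removeItem('crashDetectMarker');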

What keeps caching from working in WebMatrix?

I have a number of pages in a WebMatrix Razor ASP.Net site where I have added one line of code:
Response.OutputCache(600);
From reading about it, I had assumed that this meant that IIS would create a cache of the HTML produced by the page, serve that HTML for the next 10 minutes, and after 10 minutes, when the next request came in, it would run the code again.
Now the page is being fetched as part of a timed jQuery call. The timer code on the client runs every minute. The code there is very simple:
function wknTimer4() {
    $.get('PerfPanel', function (data) {
        $('#perfPanel').html(data);
    });
}
It occasionally appears to cache, but when I look at the number of database queries done during the 10 minute period, I might have well over 100 database queries. I know the caching isn't working the way I expect. Does the cache only work for a single session? Is there some other limitation?
Update: it really shouldn't matter what the client does, whether it fetches the page through a jQuery call or as straight HTML. If the server is caching, it doesn't matter what the client does.
Update 2: complete code dumped here. Boring stuff:
@{
    var db = Database.Open("LOS");
    var selectQueryString = "SELECT * FROM LXD_funding ORDER BY LXDOrder";

    // cache the results of this page for 600 seconds
    Response.OutputCache(600);
}
@foreach (var row in db.Query(selectQueryString)) {
    <h1>
        @row.quotes Loans @row.NALStatus, oldest @(NALWorkTime.WorkDays(row.StatusChange, DateTime.Now)) days
    </h1>
}
Your assumptions about how OutputCache works are correct. Can you check firebug or chrome tools to look at the outgoing requests hitting your page? If you're using jQuery, sometimes people set the cache property on the $.get or $.ajax to false, which causes the request to the page to have a funky trailing querystring. I've made the mistake of setting this up globally to fix some issues with jQuery and IE:
http://api.jquery.com/jQuery.ajaxSetup/
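Purely as an illustration, this is the kind of global setting that can cause it; with cache disabled, jQuery appends a unique _=&lt;timestamp&gt; query string to every GET, which can make each request look different to the output cache:
// Global setting that busts caching for all jQuery requests
$.ajaxSetup({ cache: false });

// After this, $.get('PerfPanel', ...) actually requests something like
// /PerfPanel?_=1417380000000, so a fresh copy is fetched every time.
$.get('PerfPanel', function (data) {
    $('#perfPanel').html(data);
});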
The other thing to look at here is the grouping of DB calls. Are you just making a lot of calls within one request? Are you executing a db command in a loop, within another reader? Code in this case would be helpful.
Good luck, I hope this helps!
