This is a simple question... yet I have spent some time poking around online to no avail.
In ASP classic (which I am unfortunately stuck with), I need to know the range of possible values returned by getting session.sessionid. I can print it just fine but without a better idea of how it works (besides the obvious incrementation), I don't trust testing to determine it.
Any information/resources/leads at all would be appreciated. Thanks!
Session IDs are created by the server, so session.sessionid gives you the ID generated by the server.
session.sessionid always gives you a series of 9 digits. As you can read in the sources below, the Session ID is always generated the same way (a 32-bit long integer, which gets encrypted).
Take a look at this resource. The linked document in the article is an interesting read.
And another article.
Again I come to you guys for your expertise and advice on an issue that I am having. I was wondering if any of you would know how to detect whether a web page has been modified using VB.NET. I need to be able to set up a task which periodically (say, once a week) scans the user-entered web pages, and if the page content has changed, fires off an email to an individual saying that it has changed (not the exact location of the change on the page itself). I'll be storing the HTTP status and the page data itself, as well as the date it was last modified. Of course this needs to be very fault tolerant, since it could be another week before the check runs again. Any help would be great. Thank you.
EDIT
New twist on this question, sorry. I had more time to think about what we wanted. So... detecting ANY change on a web page would be kind of silly, since time-dependent elements of the page would change every so often. Instead, what I would like to do is detect the documents on the page. For instance, if there are Excel or Word docs, or PDFs, that get changed on that page. So I'd run the hash on these documents, then on some sort of schedule check whether new documents have been added or the old documents have been modified. Any suggestions on how to detect the documents embedded on the page and run the hash? Thanks again!
As I mentioned in a comment, this sort of job is what checksums (also known as hash functions) were designed for.
Your code will look something like this:
- for each webpage of interest
- pull webpage
- calculate checksum of contents
- is current checksum different to last checksum?
- if yes, send email
- store new checksum and other appropriate data
The .NET Framework has a number of checksum implementations available. The two most popular are MD5 and SHA1.
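For illustration, a rough sketch of that loop in C# (the question mentions VB.NET, but the same framework classes exist there; the storage and email helpers below are placeholders, not real APIs):

using System;
using System.Net;
using System.Security.Cryptography;

class PageChangeChecker
{
    // Download a page and return its MD5 checksum as a hex string.
    static string GetPageChecksum(string url)
    {
        using (var client = new WebClient())
        using (var md5 = MD5.Create())
        {
            byte[] content = client.DownloadData(url);
            byte[] hash = md5.ComputeHash(content);
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }

    static void Main()
    {
        // Placeholder: load these from wherever the user-entered pages are stored.
        string[] pages = { "http://example.com/page1", "http://example.com/page2" };

        foreach (string url in pages)
        {
            string current = GetPageChecksum(url);
            string previous = LoadStoredChecksum(url);   // hypothetical storage lookup

            if (previous != null && previous != current)
                SendChangeEmail(url);                    // hypothetical notification

            StoreChecksum(url, current);                 // hypothetical storage update
        }
    }

    // Hypothetical helpers: persist checksums and send mail however your
    // application already does it (database, System.Net.Mail.SmtpClient, etc.).
    static string LoadStoredChecksum(string url) { return null; }
    static void StoreChecksum(string url, string checksum) { }
    static void SendChangeEmail(string url) { }
}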
In addition to the checksum option, there are also various diff functions that achieve this and provide much more information than changed=true/false. This question has more info:
How to tell when a web page has changed by x% in VB.net?
On a new website, I have a huge form (really big, it needs at least 15-20 minutes to complete) that configures the whole website for one client for the next year.
It's split across several tabs (it's a wizard). Every time we go to the next tab, it makes a regular (non-AJAX) call to the server that generates the next "page". The information from previous steps is stored in the session (an object with a custom binder).
Everything was working fine until we tested it today with real data. Real data requires thought and work to find the correct elements... and it takes time.
The problem is that the View receives a partially empty Model. The session duration is set to 1440 minutes (and in IIS too). What I know so far is that I get a null reference exception the first time I try to access the Model in my view.
I've been checking the controller for about an hour now, and I can't see how it could return a null model. If I enter all the data very quickly, I don't have any problem (but then it's random data).
So far I've only managed to reproduce this problem on the IIS server, and I'm checking the ELMAH logs to debug it, so it's not easy to reproduce.
Do you have any idea how I should debug this? I'm a little lost here.
I think you should assume session does not offer reliable persistence. I am not sure about details but I guess it will start freeing some elements when it exceeds its memory limit.
You will be safer if you use a database to store that information, or you could introduce your own implementation for persisting state.
In addition to the answer provided by Ufuk:
you can easily send an AJAX request every minute that does nothing on the server, but keeps the session from expiring so the site continues to work over extended periods.
I think the problem was that the session didn't have enough space. I temporarily resolved my problem by restarting the application pool. I'm still searching for a solution that doesn't involve changing all this code. Maybe another session state mode, but then I need to make my models serializable.
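For the serializable part, it looks like it's mostly a matter of marking the model classes; a minimal sketch, assuming a model class along these lines (the name and properties are only illustrative):

using System;

// Out-of-process session state (StateServer / SQLServer modes) serializes everything
// stored in Session, so the wizard model and every type it references must be
// marked [Serializable].
[Serializable]
public class WizardModel
{
    public string ClientName { get; set; }
    public DateTime ConfigurationYearStart { get; set; }
    // ... the rest of the wizard data; nested types need [Serializable] too.
}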
I am designing a web API which requires fast read-only access to a large dataset which will hopefully be constantly stored and ready for access. Access will be from a static class which will just do some super fast lookups on the data.
So, I want to pre-cache a Dictionary<string,Dictionary<string,Dictionary<string,myclass>>>, with the total number of elements in the third-level dictionaries being around 1 million, which will increase eventually, but let's say never more than 2 million. 'myclass' is a small class with a (small) list of strings, an int, an enum and a couple of bools, so nothing major. It should be a bit over 100 MB in memory.
From what I can tell, the way to do this is simply to call my StaticClass.Load() method to read all this data in from a file in the Application_Start event in Global.asax.
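Roughly what I have in mind, just as a sketch (names are illustrative and the file parsing is omitted):

using System.Collections.Generic;

// Placeholder for the real class described above.
public class MyClass
{
    public List<string> Tags;
    public int Value;
    public bool FlagA, FlagB;
}

// Holds the lookup data for the lifetime of the application.
public static class StaticClass
{
    private static Dictionary<string, Dictionary<string, Dictionary<string, MyClass>>> _data;

    public static void Load(string path)
    {
        var data = new Dictionary<string, Dictionary<string, Dictionary<string, MyClass>>>();
        // ... read the file at 'path' and populate 'data' here ...
        _data = data;   // swap the reference in once the new set is fully built
    }

    public static MyClass Lookup(string a, string b, string c)
    {
        Dictionary<string, Dictionary<string, MyClass>> second;
        Dictionary<string, MyClass> third;
        MyClass result;
        if (_data != null &&
            _data.TryGetValue(a, out second) &&
            second.TryGetValue(b, out third) &&
            third.TryGetValue(c, out result))
        {
            return result;
        }
        return null;
    }
}

// In Global.asax:
// protected void Application_Start(object sender, EventArgs e)
// {
//     StaticClass.Load(Server.MapPath("~/App_Data/data.dat"));
// }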
I am wondering what things I need to consider/worry about with this. I am guessing it is not as simple as just calling Load() and then assuming everything will be OK for future access. Will the GC know to leave the data there even if the API is not hit for a couple of hours?
To complicate things, I want to reload this data every day as well. I think I'll just be able to throw out the old dataset and load the new one in from another file, but I'll get to that later.
Cheers
Please see my similar question, IIS6 ASP.NET 2.0 Application Cache - data storage options and performance for large amounts of data, but in particular the answer from Marc and his last paragraph about options for large caches, which I think would apply to your case.
The standard ASP.NET application Cache could work for you here. Check out this article. With it you get built-in management of dependencies (e.g. the file changing) or time-based expiry. The linked article shows an Application_OnStart example.
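For reference, a rough sketch of what that looks like with the Cache API (the file path, cache key, and loader below are placeholders, not taken from the linked article):

using System;
using System.Web;
using System.Web.Caching;

public static class LookupCache
{
    // Load the data and store it in the ASP.NET application cache with a
    // dependency on the source file, so the entry is evicted (and can be
    // reloaded) when the file changes.
    public static void Load(string filePath)
    {
        object data = ReadDataFromFile(filePath);   // hypothetical parsing routine

        HttpRuntime.Cache.Insert(
            "LookupData",                       // cache key (illustrative)
            data,
            new CacheDependency(filePath),      // invalidate when the file changes
            Cache.NoAbsoluteExpiration,
            Cache.NoSlidingExpiration);
    }

    public static object Get()
    {
        return HttpRuntime.Cache["LookupData"];
    }

    private static object ReadDataFromFile(string filePath)
    {
        // Placeholder: build the nested dictionaries from the file here.
        return new object();
    }
}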
My concern is the size of what you want to cache.
Cheers guys
I also found this article which addresses the options: http://www.asp.net/data-access/tutorials/caching-data-at-application-startup-cs
Nothing really gives recommendations for large amounts of data, or even defines what 'large amounts' means. I'll keep doing my research, but Redis looks pretty good.
This question is somewhat of a follow up to How serious is this new ASP.NET security vulnerability and how can I workaround it? So if my question seems to be broken read over this question and its accepted solution first and then take that into the context of my question.
Can someone explain why returning the same error page and same status code for custom errors matters? I find this to be immaterial, especially since it is advocated as part of the workaround.
Isn't it just as easy for the script/application executing this attack to not specifically care about the HTTP status code and instead focus on the outcome? I.e. after doing this 4000 times you get redirected to an error page, whereas on attempt 4001 you stay on the same page because it didn't invalidate the padding?
I see why adding the delay to the error page is somewhat relevant but doesn't this also just add another layer to fool the script into thinking the site is an invalid target?
What could be done to prevent this if the script takes into account that, since the site is ASP.NET and therefore using AES encryption, it should ignore the timing of error pages and watch the redirection (or lack of redirection) as the response vector? If a script does this, does that mean there's NO WAY to stop it?
Edit: I accept the timing attack reduction, but the error page part is what really seems bogus. This attack vector puts their data into viewstate. There are only 2 cases: pass or fail.
Either fail: they're on a page and the viewstate does not contain their data. No matter what you do here, there is no way to remove the fail case, because the page will just never contain their inserted data unless they successfully cracked the key. This is why I can't justify the custom errors usage having ANY EFFECT AT ALL.
Or pass: they're on a page and the viewstate contains their inserted data.
Summary of this vulnerability
The cipher key from WebResource.axd / ScriptResource.axd is taken, and the first guess of the validation key is used to generate a potential key value with the ciphered text.
This value is passed to WebResource.axd / ScriptResource.axd. At this point, if the decryption key was guessed correctly, the request will be accepted, but since the data is garbage compared to what it's looking for, WebResource.axd / ScriptResource.axd will return a 404 error.
If the decryption key was not guessed correctly, it will get a 500 error for the invalid padding exception. At this point the attack application knows to increment the potential decryption key value and try again, repeating until it finds the first successful 404 from WebResource.axd / ScriptResource.axd.
After having successfully deduced the decryption key, this can be used to exploit the site and find the actual machine key.
re:
How is this relevant to whether they're redirected to a 200, 404 or 500? No one can answer this; this is the fundamental question. Which is why I call shenanigans on needing to do this tomfoolery with the custom errors returning a 200. It just needs to return the same 500 page for both errors.
I don't think that was clear from the original question, so I'll address it:
Who said the errors need to return 200? That's wrong; you just need all the errors to return the same code. Making all errors return 500 would work as well. The config proposed as a workaround just happened to use 200.
If you don't do the workaround (even if it's your own version that always returns 500), you will see 404 vs. 500 differences. That is particularly true with WebResource.axd and ScriptResource.axd, since invalid decrypted data means a missing resource / 404.
Just because you don't know which feature had the issue doesn't mean there aren't features in ASP.NET that give different response codes in different scenarios relating to padding vs. invalid data. Personally, I can't be sure whether any other feature gives different response codes as well; I can just tell you those two do.
Can someone explain why returning the same error page and same status code for custom errors matters? I find this to be immaterial, especially since it is advocated as part of the workaround.
Sri already answered that very clearly in the question you linked to.
It's not about hiding that an error occurred; it's about making sure the attacker can't tell the difference between errors. Specifically, it's about making sure the attacker can't determine whether the request failed because it couldn't be decrypted / the padding was invalid, vs. because the decrypted data was garbage.
You could argue: well, but I can make sure it isn't garbage to the app. Sure, but you'd need to find a mechanism in the app that allows you to do that, and the way the attack works you always need at least a tiny bit of garbage in the message. Consider these:
ScriptResource and WebResource both throw, so the custom error hides it.
View state is by default not encrypted, so by default it's not involved in the attack vector. If you go to the trouble of turning the encryption on, then you very likely also set it to sign/validate. When that's the case, a failure to decrypt vs. a failure to validate looks the same, so the attacker again can't tell.
The auth ticket is also signed, so it's like the view state scenario.
Session cookies aren't encrypted, so they're irrelevant.
I posted on my blog how far the attack gets in order to be able to forge authentication cookies.
Isn't it just as easy for the script/application executing this attack to not specifically care about the HTTP status code and instead focus on the outcome? I.e. after doing this 4000 times you get redirected to an error page, whereas on attempt 4001 you stay on the same page because it didn't invalidate the padding?
As mentioned above, you need to find a mechanism that behaves that way, i.e. decrypted garbage stays on the same page instead of throwing an exception and thus getting you to the same error page.
Either fail: they're on a page and the viewstate does not contain their data. No matter what you do here, there is no way to remove the fail case, because the page will just never contain their inserted data unless they successfully cracked the key. This is why I can't justify the custom errors usage having ANY EFFECT AT ALL.
Or pass: they're on a page and the viewstate contains their inserted data.
Read what I mentioned about the view state above. Also note that the ability to re-encrypt more accurately is gained only after they gain the ability to decrypt. That said, as mentioned above, by default view state is not encrypted, and when encryption is on it's usually accompanied by signature/validation.
I am going to elaborate on my answer in the thread you referenced.
To pull off the attack, the application must respond in three distinct ways. Those three distinct ways can be anything - status codes, different HTML content, different response times, redirects, or whatever creative way you can think of.
I'll repeat again - the attacker should be able to identify three distinct responses without making any mistake, otherwise the attack won't work.
Now coming to the proposed solution: it works because it reduces the three outcomes to just two. How does it do that? The catch-all error page makes the status code/HTML/redirect all look identical. The random delay makes it impossible to distinguish one from the other solely on the basis of time.
So, it's not a lie; it does work as advertised.
EDIT: You are mixing things up with a brute-force attack. There is always going to be a pass/fail response from the server, and you are right that it can't be prevented. But for an attacker to use that information to his advantage would take decades and billions of requests to your server.
The attack that is being discussed allows the attacker to reduce those billions of requests to a few thousand. This is possible because of the 3 distinct response states. The workaround being proposed reduces this back to a brute-force attack, which is unlikely to succeed.
The workaround works because:
You do not give any indication of "how far" the slightly adjusted request got. If the attacker gets a different error message, that is information they can learn from.
With the delay you hide how long the actual calculation took, so the attacker does not get timing information showing whether they got deeper into the system (see the sketch below).
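For illustration, the error page in the published workaround delays its response by a small random amount along these lines (a sketch, not the exact advisory code):

using System;
using System.Security.Cryptography;
using System.Threading;

public partial class ErrorPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Sleep for a short, cryptographically random interval so the response
        // time reveals nothing about which error actually occurred.
        byte[] delay = new byte[1];
        RandomNumberGenerator rng = new RNGCryptoServiceProvider();
        rng.GetBytes(delay);
        Thread.Sleep((int)delay[0]);
    }
}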
No, it isn't a big lie. See this answer in the question you referenced for a good explanation.
I have an application variable which is populated on start (in this case it is an array). Ideally I need to rebuild this array every 3 hours; what is the best way of going about this?
Thanks, R.
Save the time you last refreshed the variable contents.
On every request, check the current time against the saved time. If there's a three hour difference, lock and refresh the variable.
As long as there are no requests, the variable also needs no refreshing.
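A rough sketch of that pattern, shown in ASP.NET/C# terms to keep the examples here consistent (in classic ASP the same idea works with Application.Lock / Application.Unlock and Application("...") variables; the names below are illustrative):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        DateTime? last = Application["LastRefresh"] as DateTime?;
        if (last == null || DateTime.UtcNow - last.Value > TimeSpan.FromHours(3))
        {
            Application.Lock();
            try
            {
                // Re-check inside the lock so two concurrent requests don't both rebuild.
                last = Application["LastRefresh"] as DateTime?;
                if (last == null || DateTime.UtcNow - last.Value > TimeSpan.FromHours(3))
                {
                    Application["MyArray"] = BuildArray();      // hypothetical rebuild routine
                    Application["LastRefresh"] = DateTime.UtcNow;
                }
            }
            finally
            {
                Application.UnLock();
            }
        }
    }

    // Hypothetical: however the array is actually built today.
    private string[] BuildArray()
    {
        return new string[0];
    }
}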
If your application variable must remain "in process" with the rest of the site's code, the way suggested by Tomalak may be your only way of achieving this.
However, if it's possible that the application variable could effectively reside "out of process" of the website's ASP code (although still accessible by it), you may be able to utilise a different (and perhaps slightly better) approach.
Please see "ASP 101: Getting Scripts to Run on a Schedule" for the details.
Tomalak's method is effectively Method 1 in the article, whilst Methods 2 & 3 offer different ways of achieving what is effectively something happening on a schedule, and avoid the potentially redundant checking on every HTTP request.