Is it possible to create a page that redirects to a private page if a correct time-sensitive variable is passed?
Ex:
http://www.mysite.com/redirectpage.aspx?code=0912042400
The code value is a year-month-day-time combination that must fall within some time window (15 min, 30 min, etc.) based on the server's time.
The redirect page would parse the code and redirect to a private page (using an obfuscated URL with the code variable) if the code is valid, or show a 404 otherwise.
Usage scenario:
Party A wishes to show party B a private page.
A sends a link to B with a code that is valid for the next 30 minutes.
B clicks the link and is redirected to a private page.
After 31 minutes, clicking the link produces a 404, and a refresh/postback of the private page also produces a 404.
Thanks
Yes.
One approach is to concatenate the "valid start time" with a private string known only to the server. Generate a hash (e.g. an MD5 hash) of that concatenated value. Send the "valid start time" and the hash back to the client. The client passes both back in to view the page. The server re-combines the "valid start time" with the secret key, recomputes the hash, and ensures it matches the passed-in hash. If it matches, compare the passed-in time to the server time to make sure the redirect is still valid.
With this approach there is no need for a database of valid keys and the time ranges they pertain to. You can even add the name of the redirect page to the hashed value to make the system completely self-contained.
Server computes:
Hash = md5("2009-12-12 10:30:00" + "MyPage.aspx" + Secret Key)
Send to client:
"2009-12-12 10:30:00" + "MyPage.aspx", Hash
Client later sends to server:
"2009-12-12 10:30:00" + "MyPage.aspx", Hash
Server checks:
newHash = md5("2009-12-12 10:30:00" + "MyPage.aspx" + Secret Key)
Hash == newHash?
If the hashes match and the time is still within the window, redirect; otherwise show an error.
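A minimal sketch of that scheme in C# (the secret key, date format, and window length are placeholders; MD5 is used to match the example above, though an HMAC such as HMACSHA256 would be the stronger modern choice):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class RedirectCode
{
    // Placeholder secret; in practice keep this out of source control.
    private const string SecretKey = "server-side-secret-never-sent-to-clients";

    // Server side: build the hash to embed in the link.
    public static string ComputeHash(string validFrom, string page)
    {
        using (var md5 = MD5.Create())
        {
            byte[] bytes = md5.ComputeHash(
                Encoding.UTF8.GetBytes(validFrom + page + SecretKey));
            return BitConverter.ToString(bytes).Replace("-", "");
        }
    }

    // Server side: verify what the client passed back.
    public static bool IsValid(string validFrom, string page,
                               string hash, TimeSpan window)
    {
        // Recompute; any tampering with validFrom or page breaks the match.
        if (!string.Equals(ComputeHash(validFrom, page), hash,
                           StringComparison.OrdinalIgnoreCase))
            return false;

        DateTime start;
        if (!DateTime.TryParse(validFrom, out start))
            return false;

        // Is the server clock still inside the allowed window?
        return DateTime.Now >= start && DateTime.Now <= start + window;
    }
}
```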
This is a simple task for a database-connected web application. The basic algorithm would be to insert a "ticket" into a database table. The ticket would be composed of a random string and a timestamp.
When a request comes in for the page, the script that generates that page can look in the ticket table to see if there is a record that matches the code passed in via the URL argument. If there is a record, the script then checks to see if the timestamp is expired. If so, generate the 404 page. Otherwise show the correct info.
There may be a pre-built content management system module or a canned script that can do this, but I don't know of one myself.
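As a minimal sketch of the ticket lookup described above, assuming a Tickets table with Code and ExpiresAt columns on SQL Server (all names here are made up for illustration):

```csharp
using System;
using System.Data.SqlClient;

public static class Tickets
{
    public static bool IsValid(string code, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT ExpiresAt FROM Tickets WHERE Code = @code", conn))
        {
            cmd.Parameters.AddWithValue("@code", code);
            conn.Open();

            object expiresAt = cmd.ExecuteScalar();
            if (expiresAt == null)
                return false;                          // no such ticket -> 404

            return DateTime.Now <= (DateTime)expiresAt; // expired -> 404
        }
    }
}
```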
As an example, in ASP.NET I would cache a key/value pair with the code and the redirect page, and set the cache timeout to 30 minutes. Just a quick example, but this is very possible.
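Something like this, as a rough sketch (the target page name is a placeholder):

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class CodeCache
{
    // Called when party A requests a link: cache code -> target for 30 min.
    public static string IssueCode(string targetPage)
    {
        string code = Guid.NewGuid().ToString("N");
        HttpContext.Current.Cache.Insert(
            code, targetPage, null,
            DateTime.UtcNow.AddMinutes(30),   // absolute expiration
            Cache.NoSlidingExpiration);
        return code;
    }

    // Called by redirectpage.aspx: null means expired or never issued.
    public static string Resolve(string code)
    {
        return HttpContext.Current.Cache[code] as string;
    }
}
```

Bear in mind the ASP.NET cache is per application instance, so entries won't survive an app-pool recycle or be shared across a web farm.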
The one issue you are going to run into here is how easy it would be to simply change the url and view private information.
The approach I would take would be this:
When the private page is generated, make a new record in the database with an encrypted key that contains the starting available time and the ending time.
Put this encrypted ID in the URL.
When the person goes to the page, look up the timestamps and make sure the current time is within range; if it is, redirect them to the private page, otherwise send them to a 404 page.
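For the encrypted ID itself, one possible sketch in ASP.NET 4.5+ uses the MachineKey API (the "TicketId" purpose string is just an arbitrary label I made up):

```csharp
using System;
using System.Web;
using System.Web.Security;

public static class TicketUrl
{
    // Encrypt the database record ID before putting it in the URL.
    public static string Encode(int recordId)
    {
        byte[] protectedBytes = MachineKey.Protect(
            BitConverter.GetBytes(recordId), "TicketId");
        return HttpServerUtility.UrlTokenEncode(protectedBytes);
    }

    // Decrypt it on the way back in; tampered values throw an exception.
    public static int Decode(string token)
    {
        byte[] raw = MachineKey.Unprotect(
            HttpServerUtility.UrlTokenDecode(token), "TicketId");
        return BitConverter.ToInt32(raw, 0);
    }
}
```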
One way of doing it is to pass the page an encrypted time limit as part of the query string, something like http://www....aspx?timelimit=[encrypted], where [encrypted] isn't user-editable. You may just need to hash the DateTime somehow.
Yes, you could do it that way. However, encoding the valid date range in the value passed is a security risk.
A better approach would be to generate a random code and store that code in a database along with a date range for when the code is valid.
That way there's less of an opportunity for malicious users to guess valid values.
Do you know ahead of time when the X minutes will start? Some sites have promotion codes for specific times (hours, days, etc.), and if you know it ahead of time you can check whether the request from the client is within those times.
If you do not know it ahead of time, this is what I would do (a sketch in ASP.NET follows the steps below):
Make sure that the user-given token/code is valid.
Create a session object with the code as the session key (you can do that in ASP.NET; not sure about other languages) and the IP as the value for that key (or any unique string; if the client is behind a proxy the IP will not work, so generate a GUID and pass it to the client as a secure cookie when sending the response). This will prevent multiple users from accessing the secure resource at the same time (though I'm not sure if this is part of your requirement).
Note down the first request time in the session and the DB.
You can expire the session after X minutes (Get Session to expire gracefully in ASP.NET).
For subsequent requests, check the validity of the key (cookie) sent by the client against the server-side value, and check the request time against the first request time + X minutes. If the key and time are valid, let the user access the resource; if the key is invalid, tell the user that there is already a session in progress.
If the user tries to access it after X minutes (you know this from the session), send a "your page cannot be served, as X minutes have expired since you first visited the page" message instead of a 404 (a 404 says the resource was not found and would not convey that the request time was invalid), or log the user out.
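A rough sketch of steps 2-5 in ASP.NET (the cookie name, window length, and session-key layout are all assumptions):

```csharp
using System;
using System.Web;

public static class TimedAccess
{
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(30);

    // First valid request: bind the code to this client and stamp the time.
    public static void StartSession(HttpContext ctx, string code)
    {
        string clientId = Guid.NewGuid().ToString("N");
        ctx.Session[code] = clientId;
        ctx.Session[code + ":start"] = DateTime.UtcNow;

        // Secure cookie instead of IP, in case the client is behind a proxy.
        ctx.Response.Cookies.Add(
            new HttpCookie("client-id", clientId) { Secure = true, HttpOnly = true });
    }

    // Subsequent requests: is it the same client, still inside the window?
    public static bool IsAllowed(HttpContext ctx, string code)
    {
        var clientId = ctx.Session[code] as string;
        var cookie = ctx.Request.Cookies["client-id"];
        if (clientId == null || cookie == null || cookie.Value != clientId)
            return false;               // invalid key: session already in progress

        var start = (DateTime)ctx.Session[code + ":start"];
        return DateTime.UtcNow - start <= Window;
    }
}
```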
Related
I'm developing an app which has a form, but I need the form to be usable for long periods of time because the app runs on a static tablet fixed to a wall. I'm using a hidden tag, so when the token expires the page must be refreshed manually. I'm using sockets, so I thought that instead of turning off token expiry I could send a new token through a socket when the expiry time is coming. The problem with sending a new token is how, and whether, I should do it...
I thought of a solution: sending new tokens to the clients every 30 minutes.
It starts by sending 2 tokens on rendering (instead of just the hidden tag).
One of them (the one that is not the WTForms token) uses JWT and is the result of encrypting the datetime when the actual hidden tag was updated (on rendering it would be created on request, so it would be the time of rendering). The second token is the token of the hidden tag itself.
Every 30 minutes the client side would send a request with the JWT token to a socket handler that would check that at least 30 minutes have passed since the datetime in the token, and would then emit the 2 tokens (the new WTForms token and the updated encrypted datetime for the client side to use in the next emit). This way I prevent a client from editing the front end to send thousands of requests for new tokens, which I thought could be problematic (maybe it's not).
I don't think pasting the code would be of much help, because mine is a theoretical question about the logic rather than about the code itself.
My expected result is a working front-end form which uses a token to prevent CSRF without deactivating the expiry time of the tokens.
MY QUESTIONS ARE:
Is there a real necessity for using that JWT-encrypted token instead of just giving a new token to every client asking for one?
Is there a better solution to the problem than the one I thought of?
I'm trying to scrape some pages on a website that uses ASPX forms. The forms involve adding details of people by updating the server (one person at a time) and then proceeding to a results page that shows information regarding the specified people. There are 5 steps to the process:
Hit the login page (the site is HTTPS) by sending a POST request with my credentials. The response will contain cookies that will be used to validate all subsequent requests.
Hit the search criteria page by sending a GET request (no parameters). The only purpose of this is to discover the __VIEWSTATE and __EVENTVALIDATION tokens in the HTML response to be used in the next step.
Update the server with a person. This involves hitting the same webpage in step 2 but using a POST request with form parameters that correspond to the form controls on the page for adding person details and their values. The form parameters will include the __VIEWSTATE and __EVENTVALIDATION tokens gained from the previous step. The server response will include a new __VIEWSTATE and __EVENTVALIDATION. This step can be repeated using the new __VIEWSTATE and __EVENTVALIDATION, or you can proceed to the next step.
Signal to the server that all people have been added. This involves hitting the same page as the previous 2 steps by sending a POST request with form parameters that correspond to the form controls on the page for signalling that all people have been added. The server response will simply be 25|pageRedirect||/path/to/results.aspx|.
Hit the search results page specified in the redirect response from the previous step by sending a GET request (no parameters - cookies are enough). The server response will be the HTML that I need to scrape.
If I follow the process manually with any browser, filling in the form controls and clicking the buttons etc. (testing with just one person), I get to the results page and the results are fine. If I do this programmatically from an application running on my machine, then ultimately the search results HTML is wrong (the page returns valid HTML, but there are no results compared with the browser version, and there are some null values where there should not be).
I've run this using a Java application with Apache HttpClient handling the requests. I've also tried it using a Ruby script with Mechanize handling the requests. I've setup a proxy server using Charles to intercept and examine all 5 HTTPS requests. Using Charles, I've scrutinized the raw requests (headers and body) and made comparisons between requests made using a browser and requests made using the application(s). They are all identical (except for the VIEWSTATE / EVENTVALIDATION values and session cookie values, which I would expect to differ).
A few additional points about the programmatic attempts:
The login step returns successful data, and the cookies are valid (otherwise the subsequent requests would all fail)
Updating the server with a person (step 3) returns successful responses, in that they are the same as would be returned from interaction using a browser. I can only assume this must mean the server is updating successfully with the person added.
A custom header, X-MicrosoftAjax: Delta=true, is being added to requests in step 3 (just as the browser requests do)
I don't own or have access to the server I'm scraping
Given that my application requests are identical to the browser requests that succeed, it baffles me that the server is treating them differently somehow. I can't help but feel that this is an ASP.net issue with forms that I'm overlooking. I'd appreciate any help.
Update:
I went over the raw requests again a bit more methodically, and it turns out I was missing something in the form parameters of the requests. Unfortunately, I don't think it will be of much use to anyone else, because it seems to be specific to this particular ASP server's logic.
The POST request that notifies the server that all people have been added (step 4) requires two form parameters specifying the county and address of the last person that was added to the search. I was including these form parameters in my request, but the values were empty strings. I figured the browser request was just snagging these values because when the user hits the Continue button on the form, those controls would have the values of the last person added. I figured they wouldn't matter and forgot about them, but I was wrong.
It's a peculiar issue that I should have caught the first time. I can't complain though, I am scraping a site after all.
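For illustration only, the fixed step-4 request looks roughly like this C# sketch (the original code was Java/Apache HttpClient and Ruby/Mechanize; the field names and URL here are hypothetical stand-ins, since the real control names are specific to the site being scraped):

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class Scraper
{
    public static async Task<string> SignalAllPeopleAddedAsync(
        HttpClient client, string viewState, string eventValidation,
        string lastCounty, string lastAddress)
    {
        var form = new Dictionary<string, string>
        {
            ["__VIEWSTATE"] = viewState,
            ["__EVENTVALIDATION"] = eventValidation,
            // The missing piece: these must carry the values of the last
            // person added, not empty strings.
            ["ctl00$Main$ddlCounty"] = lastCounty,
            ["ctl00$Main$txtAddress"] = lastAddress,
            ["ctl00$Main$btnContinue"] = "Continue",
        };

        var response = await client.PostAsync(
            "https://example.com/Search.aspx",
            new FormUrlEncodedContent(form));

        // Expect something like: 25|pageRedirect||/path/to/results.aspx|
        return await response.Content.ReadAsStringAsync();
    }
}
```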
Review the Charles logs again. It is possible that the search results and other content are coming over via Ajax, and that your Java/Ruby apps are not actually performing all of the requests/responses that happen with the browser. Look for any POST or GET requests in between the requests you are already duplicating. If the search results are populated via JavaScript, your client app may not be able to handle this.
Can I access a session variable of one site in another (same IIS)?
site1:
aaa.xxx.com
Session["name"]="balaji"
site2:
bbb.xxx.com
string name=Session["name"].ToString()
Is it possible?
ASP.NET session state enables you to store and retrieve values for a user as the user navigates ASP.NET pages in a Web application. HTTP is a stateless protocol. This means that a Web server treats each HTTP request for a page as an independent request. The server retains no knowledge of variable values that were used during previous requests. ASP.NET session state identifies requests from the same browser during a limited time window as a session, and provides a way to persist variable values for the duration of that session.
Source: MSDN
This can be achieved using Query String.
Call the second site's URL with a query string appended: http://bbb.xxx.com?name=balaji
Handle the query string in the second site.
More info - How to use Query String
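A minimal sketch of both sides, assuming ASP.NET on each (page names are illustrative):

```csharp
using System.Web;

public static class CrossSite
{
    // In site1 (aaa.xxx.com): redirect and carry the value along.
    public static void SendToSite2(HttpContext ctx)
    {
        string name = ctx.Session["name"] as string;      // "balaji"
        ctx.Response.Redirect(
            "http://bbb.xxx.com/Default.aspx?name=" + HttpUtility.UrlEncode(name));
    }

    // In site2 (bbb.xxx.com): read it back out of the request.
    public static void ReceiveFromSite1(HttpContext ctx)
    {
        string name = ctx.Request.QueryString["name"];    // "balaji"
        ctx.Session["name"] = name;                       // now in site2's session
    }
}
```

Note that anything passed this way is visible to and editable by the user, so don't carry anything sensitive in the query string.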
In my ASP.NET application, the HTTP session times out after 30 minutes, at which point the user is redirected to a login page. Here is a scenario:
User clicks on an item to edit, with the URL like so:
http://localhost/app/edit?id=1
User makes certain changes on a page to edit item with id=1 but does not "save" them for 30 minutes.
In the 31st minute he clicks save and is redirected to the login page, which has a return URL to redirect to if he enters the correct credentials.
What is the correct way of preserving the state in which the user left the page?
All I have is the return URL coming in with the login request, which looks like: http://localhost/app/edit.
Clicking on save does a POST request but does not pass anything in the query string.
I want to be able to redirect to http://localhost/app/edit?id=1
Unless you have specifically built a mechanism to preserve state outside the ASP.NET session-based state mechanism, the session is lost when it times out. You could, theoretically, build a state system that stores intermediate drafts of data from selected pages/inputs on a per-user-ID basis, so that pages containing data likely to be "orphaned" in this way could be partially recovered. But that would require periodic background saves of data back to the database on those pages deemed important enough to preserve changes otherwise lost to timeout.
When the POST happens, you have the original URL, and the request object with all of the form values.
The request will be intercepted, and redirected to the login screen. At that point, you need to save the original URL and request values, handle the login, and then redirect to the original URL with the original values.
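As a sketch of that idea (names and paths are illustrative; since the session itself has expired, the data is stashed in a server-side store keyed by a GUID that rides along on the login URL):

```csharp
using System;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Web;

public static class PendingRequests
{
    private static readonly Dictionary<Guid, Tuple<string, NameValueCollection>>
        Store = new Dictionary<Guid, Tuple<string, NameValueCollection>>();

    // Called when an expired POST is intercepted, before the login redirect.
    public static void RedirectToLogin(HttpContext ctx)
    {
        var key = Guid.NewGuid();
        Store[key] = Tuple.Create(
            ctx.Request.RawUrl,                          // e.g. /app/edit?id=1
            new NameValueCollection(ctx.Request.Form));  // the unsaved edits

        ctx.Response.Redirect("~/login?pending=" + key);
    }

    // Called after a successful login to replay the original request.
    public static Tuple<string, NameValueCollection> Take(Guid key)
    {
        Tuple<string, NameValueCollection> saved;
        if (Store.TryGetValue(key, out saved))
            Store.Remove(key);
        return saved;
    }
}
```

A real implementation would use a concurrent or persistent store rather than a plain static dictionary, and would expire stale entries.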
What does your POST request look like? Is there an object you could stuff into temporary storage (session, etc) or do you process the form collection?
What options do I have to work around disabled cookies for session management?
In the page in hidden field
In the query string
In the HTTP header
You can append an SID variable to every link you output to the user. PHP has some built-in support for this.
Well, all a cookie does is hold on to the big ugly string your system generated as that user's session identifier (SID) for you. If you don't have cookies, the goal is to get that SID sent in with every request from that specific user.
Creating a hidden form field with the SID in it is necessary when you are accepting input from the user. You should probably read up a bit on Cross-Site Scripting vulnerabilities - might as well head these off while you're monkeying with your forms anyway.
Adding data to links (via the query string) is typically called "URL Rewriting", so just look that up for details. The upshot is that every time you output a link it must have the SID as one of the parameters in the query string.
For example: "http://mysite.com/action?SID=da83fdec49ebfafe4"
Some frameworks can handle this URL rewriting semi-transparently.
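ASP.NET, for instance, can do this with one web.config setting, which embeds the session ID in the URL path instead of a cookie (a sketch; the timeout value is arbitrary):

```xml
<!-- web.config: built-in cookieless sessions. URLs end up looking like
     http://mysite.com/(S(da83fdec49ebfafe4))/action -->
<configuration>
  <system.web>
    <sessionState cookieless="UseUri" timeout="20" />
  </system.web>
</configuration>
```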