Imagine a simple HTML page with 3 iframes pointing to the same URL:
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title></title>
</head>
<body>
<iframe src="http://www.mydom.com/mypage.aspx"></iframe>
<iframe src="http://www.mydom.com/mypage.aspx"></iframe>
<iframe src="http://www.mydom.com/mypage.aspx"></iframe>
</body>
</html>
My goal is to track unique visitors in the mypage.aspx code-behind. Sounds simple, but the following code:
if (Request.Cookies["myc"] == null)
{
    // New visitor!
    Response.Cookies["myc"].Value = myval;
    Response.Cookies["myc"].Expires = DateTime.Now.AddYears(10);
}
else
{
    // Returning visitor
}
has a problem. When I visit the HTML page with the 3 iframes, I get three simultaneous hits to mypage.aspx, and Request.Cookies["myc"] is null all three times, when I should recognize that it is the same user (1st hit: new visitor; 2nd and 3rd hits: returning visitor, for a total of one visitor/user). Any ideas how to fix this?
A very intriguing question!
The core problem is that the internet is an asynchronous, anonymous place.
The browser may submit one of these requests at a time, or all three at the same time, so there is no way to control the order of events on the user's machine.
In addition, the browser does not make any special effort to uniquely identify the user. Any identification system must be jerry-rigged into the request/response cycle. Google Analytics (GA) uses cookies to tag users and can pull that information with each and every request. Essentially, GA tags the user before they visit your site, thus allowing that service to identify the user on the "first" hit.
Your problem is that you want to implement your own identification solution. You need to somehow include the user's identity in the request. But, until they visit the site, you have no way of doing that. And, given that each simultaneous request will have a different identity embedded in the response, you cannot guarantee that a user will be tagged with a single identity.
Basically, to the best of my knowledge, there is no solution which will allow an anonymous user to be automatically and uniquely identified by your site.
You may be tempted to use IP addresses, and that can work in some situations, but it's a very bad solution. Right now I am behind my employer's firewall. If you were to use the IP address currently visible to you to identify me, you would see me and the 5,000 other people who work here as a single user. That's a very dangerous system to rely on.
If you really, really need a single identity for each user - one that cannot be circumvented or accidentally trodden on via multiple simultaneous requests, etc - then your only solution is to require the user to explicitly identify themselves on their first request via a value embedded in the POST or GET query or a cookie created during a previous visit.
In your scenario, all three responses could generate a cookie with a unique ID (three ids in succession, each overwriting the previous value). The landing page could be a generic "welcome to my site" page, or something like that. The user could then click a link to visit the site, at which point the last generated cookie (with the last generated id) would be embedded in the request. While you cannot guarantee that only one ID will be generated per user, you can at least be fairly confident that they will be identified by a single ID after the initial round of requests (and before they visit the main content portion of your site).
Of course, you could use a complicated AJAX solution where the response for unidentified users is essentially a container (without an ID). The AJAX could then set a flag within a cookie indicating that it is retrieving an ID, then it could submit the first request. Subsequent AJAX containers would see this cookie and then enter a polling state, waiting for the flag to be cleared. When the first response comes back, the first AJAX container could set the ID in the cookie and change the flag. Then, the remaining containers will detect the flag change and can retrieve the ID from the cookie (rather than sending their own requests).
Once the AJAX container has an ID, it could send a request for content along with the unique ID. Your site could then respond, filling the container with the appropriate data (or simply redirecting to the appropriate page).
This solution, if properly implemented, would more or less guarantee that a user is assigned only one ID. But really, is it worth it? Remember, the cookie and the request "belong" to the user. The user can pretty easily edit both. While there are techniques for detecting an edited identity (most involving some form of encryption), you cannot prevent a user from "anonymizing" themselves if they so choose. Sometimes a halfway solution is sufficient.
All you have to do is put the same code in a [WebMethod] instead of in Page_Load, then call the [WebMethod] from the client via JavaScript (XMLHttpRequest); you will receive the three calls in sequence. No more problems due to simultaneous hits.
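A minimal sketch of that, assuming an ASP.NET page method and a GUID as the visitor ID (the cookie name "myc" comes from the question; everything else is illustrative):

// mypage.aspx.cs
using System;
using System.Web;
using System.Web.Services;

public partial class MyPage : System.Web.UI.Page
{
    [WebMethod]
    public static string TrackVisitor()
    {
        HttpRequest request = HttpContext.Current.Request;
        HttpResponse response = HttpContext.Current.Response;

        if (request.Cookies["myc"] == null)
        {
            // New visitor: issue the long-lived tracking cookie.
            HttpCookie cookie = new HttpCookie("myc", Guid.NewGuid().ToString());
            cookie.Expires = DateTime.Now.AddYears(10);
            response.Cookies.Add(cookie);
            return "new";
        }

        // Returning visitor.
        return "returning";
    }
}

The script in each iframe would call this after its document loads (plain XMLHttpRequest, or ASP.NET AJAX page methods with EnablePageMethods); the calls then tend to arrive spread out rather than as one simultaneous burst, though strict ordering is still not guaranteed.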
That makes good sense. None of these requests carries any cookies the first time, so the cookie will be null in all cases. Remember the browser issues these requests as asynchronously as possible; the flow is not necessarily linear (it might be, but I am no expert on how the flow works).
Instead, look at the IP address and timestamp. If they are the same, you can treat it as the same user.
Yes, that solution is not perfect, but it's better than the cookie solution.
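A rough sketch of that idea, using the ASP.NET cache to group hits from the same IP address inside a short window; the five-second window, the cache key format, and the lock are illustrative choices, not part of the original suggestion:

using System;
using System.Web;
using System.Web.Caching;

public partial class MyPage : System.Web.UI.Page
{
    private static readonly object VisitorGate = new object();

    protected void Page_Load(object sender, EventArgs e)
    {
        string key = "visitor:" + Request.UserHostAddress;

        lock (VisitorGate) // serialize the simultaneous iframe hits
        {
            if (HttpRuntime.Cache[key] == null)
            {
                // First hit from this IP in the window: count one new visitor.
                HttpRuntime.Cache.Insert(key, DateTime.UtcNow, null,
                    DateTime.UtcNow.AddSeconds(5), Cache.NoSlidingExpiration);
            }
            else
            {
                // Another of the simultaneous hits: treat as the same visitor.
            }
        }
    }
}

The caveat from the earlier answer still applies: everyone behind the same firewall or proxy shares one IP address, so this undercounts in corporate networks.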
Do not use iframes; use master pages. Create a master page, and put this directive at the top of your mypage.aspx:
<%@ Page Language="C#" MasterPageFile="~/Site1.Master" ...
Related
I'm doing some brainstorming for a portal framework, and I'm envisioning a breadcrumb navigation stack that is tracked via the ViewState (so that if the user clicks "back" in their browser and clicks some other link, the breadcrumb trail will depart from the right page). My pages are really just ascx controls that get loaded into a placeholder control on the main portal page based on the URL. When the user clicks a portal link, there is a postback that loads the original page and invokes the given link's "clicked" handler, which should then "push" the current location onto the breadcrumb stack before sending the browser a redirect instruction to change the URL to that of the page that I want to go to.
That's as far as my brainstorming goes for the moment, because once we perform a redirect, we lose the ViewState. Rather than doing the redirect, I've thought of simply telling my main portal page to replace the current page control with the target page control, thus avoiding the extra http round-trip and allowing me to keep the ViewState. But then my entire website experience occurs in the context of a single URL, so I lose URL bookmarking among other things. And if I wrap some of my controls in AJAX panels, the entire site happens in one page request as far as the browser's history is concerned.
What I would like is some way to have the browsing history and URLs behave as if each link is leading them to a new page with a descriptive URL and all that, but still have some way to know the path that the user took to get to the page that they're on (ViewState seeming to be the simplest way to track this).
Could anyone suggest some techniques I might try using?
First suggestion... You may want to look into ASP.NET MVC. However, I have to admit to some ignorance here as I'm not sure that would really solve your problem. But it sounds like the sort of thing MVC would be suited for.
Second... it's possible to override the methods responsible for saving and loading ViewState. One of the things you can do, for instance, is push the ViewState into the Session rather than sending it down to the user and back up on postback. You could easily add some custom code here.
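For reference, a minimal sketch of that override, assuming a shared page base class (the session key scheme is illustrative, and keeping one entry per URL will break multiple browser tabs open on the same page):

using System.Web.UI;

public class SessionViewStatePage : Page
{
    // Keep ViewState on the server instead of serializing it into the page.
    protected override void SavePageStateToPersistenceMedium(object state)
    {
        Session["__ViewState_" + Request.Path] = state;
    }

    protected override object LoadPageStateFromPersistenceMedium()
    {
        return Session["__ViewState_" + Request.Path];
    }
}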
Third... I think you may want to rethink part of your design. The ViewState really serves one purpose: it recreates the state of the page as it existed when the page was rendered for the user. If you are moving to a different page, or a new set of controls, why would you need the ViewState at all? The ViewState itself is really just a hack to begin with... ASP.NET's way of maintaining state on top of a stateless system (but that's a whole 'nother discussion). We have other methods of maintaining state... the primary mechanism being the Session object. Why not save your breadcrumb data there instead?
I would look at using cookies. For performance reasons, you really want to avoid HTTP redirects if you can, and ViewState only works if the user submits a form, not for regular links.
You might do something like maintain several path lists in cookies that show the path the user took to go from one page to another. Maybe you set a unique ID with each page that is applied by some JavaScript as a query string when the user clicks on a link, and the server uses that ID and the past history from the cookies to determine how to render the breadcrumb on the next page?
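A rough sketch of the cookie-backed trail, assuming each page appends its own path on load (the cookie name, delimiter, and five-entry cap are all illustrative):

using System;
using System.Collections.Generic;
using System.Web;

public static class BreadcrumbTrail
{
    // Call from Page_Load (or a base page) to append the current page.
    public static void RecordVisit(HttpRequest request, HttpResponse response)
    {
        HttpCookie cookie = request.Cookies["breadcrumb"];
        string trail = (cookie == null) ? "" : cookie.Value;

        var parts = new List<string>(
            trail.Split(new[] { '|' }, StringSplitOptions.RemoveEmptyEntries));
        parts.Add(request.Path);

        // Cap the trail so the cookie stays small.
        if (parts.Count > 5)
            parts.RemoveRange(0, parts.Count - 5);

        response.Cookies.Add(
            new HttpCookie("breadcrumb", string.Join("|", parts.ToArray())));
    }
}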
The problem that I am having is as follows:
I currently have a custom class that generates buttons and places them on a placeholder on a master page.
The events for these buttons put specific values into Session that change the values used in a database query. In essence, the buttons serve as filters for charts.
After creating all the buttons, I realized that Session values stay constant from page to page, so whenever a user opens a different page while another is open, the filters selected on the open page carry over to the newly opened page.
At first, I wanted to use viewstate rather than session, but then realized that a master page and a content page do not share the same viewstate.
At the moment, I am thinking of using a prefix for the session key that identifies which page the filters actually belong to. However, I do not want to overload Session with numerous values if the user wishes to have many pages open at the same time.
Are there any solutions that would provide a way to share ViewState (or some other way to store values) between App_Code, the master page, and the content page?
Use HttpContext.Current.Items; it is a key-value collection with a lifetime of a single HTTP request.
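A minimal sketch, assuming the filters travel as a dictionary (the class and key names are illustrative); anything in App_Code, the master page, or the content page can call it within the same request:

using System.Collections.Generic;
using System.Web;

public static class ChartFilterBag
{
    private const string Key = "ChartFilters";

    // Call anywhere early in the request (App_Code, master page, module).
    public static void Set(Dictionary<string, string> filters)
    {
        HttpContext.Current.Items[Key] = filters;
    }

    // Call later in the same request, e.g. from the content page.
    public static Dictionary<string, string> Get()
    {
        return HttpContext.Current.Items[Key] as Dictionary<string, string>;
    }
}

Keep in mind Items does not survive across requests, so it only helps within a single page lifecycle, not across postbacks.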
Have you considered Context.Items?
How many filters are we talking about here? Store the filter values in the URL. Have you seen some of the URLs that Google or an e-commerce site uses? They are quite long. Here is how I do it:
I store the filter values in the query string, like www.chart.com?filter1=val1&filter2=val2, etc.
I use jQuery's query plugin to manipulate the query string on the client side, and then request the chart from the server again using the new query string.
This way, I'm not junking up Session, cookies, or anything like that, and if the user wants to store a bookmark to a particular chart or email it to a friend, they can, and the filters are preserved.
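Reading the filters back on the server is then trivial; a small sketch using the parameter names from the example URL above:

using System;

public partial class ChartPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // e.g. www.chart.com?filter1=val1&filter2=val2
        string filter1 = Request.QueryString["filter1"]; // "val1", or null if absent
        string filter2 = Request.QueryString["filter2"]; // "val2", or null if absent

        // Build the chart's database query from whichever filters are present;
        // parameterize the SQL, since these values come straight from the user.
    }
}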
I'm starting to think the answer shown in the following question will work:
ViewState object lost in Master Page Load
Exposing the desired variables via a property.
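In other words, something along these lines, assuming a master page named Site1 (the property name and value are illustrative):

// Site1.Master.cs: the master page owns the value and exposes it.
public partial class Site1 : System.Web.UI.MasterPage
{
    public string SelectedFilters
    {
        get { return (string)ViewState["SelectedFilters"]; }
        set { ViewState["SelectedFilters"] = value; }
    }
}

// In a content page:
// ((Site1)Master).SelectedFilters = "region=EU";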
If the data isn't too long, cookies are a typical solution.
Another option is to use Silverlight isolated storage. The Silverlight control itself could be invisible (no UI).
I am trying to come up with a way to measure how long a user has been on a page in my ASP.NET application. I am storing the userid, pagename, pageenteredtime and pagelefttime in a database. Each record has its own unique id as well, called featureuselogid.
At the moment, I can track when a user comes into the page with the page_load function on the server side. I store the userid, pagename and pageenteredtime.
After that I'm stuck and need some guidance in the right direction. I need to record the time the user leaves the page. I know in JavaScript there is a window.onbeforeunload event, which will cover most cases (browser shutdown, links, etc.).
But how do I pass the featureuselogid to the JavaScript? If I can do that, I think I can make a web service call from the JavaScript and update the record with the pagelefttime.
Am I going down the wrong path?
Cheers in advance.
You will have to run a JavaScript timer on the client that periodically calls the server. Once a call does not get logged at the expected time, it means the user has left. Woopra does this, and it works reliably. For instance, sometimes people have a page loaded in a browser in the background for days, and there is no other way of detecting that they are still connected.
Users on laptops pull the network cable, move out of the coverage of a Wi-Fi zone, and so on; there are too many scenarios where the onbeforeunload event will never reach the server.
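The server side of that heartbeat could be as small as the sketch below. The handler name and the FeatureUseLog data-access helper are hypothetical; the client-side timer (not shown) would request Heartbeat.ashx every few seconds with the featureuselogid:

// Heartbeat.ashx.cs
using System;
using System.Web;

public class Heartbeat : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        int logId;
        if (int.TryParse(context.Request.QueryString["featureuselogid"], out logId))
        {
            // Overwrite pagelefttime with "now" on every ping; the last value
            // recorded before the pings stop is when the user left the page.
            // FeatureUseLog.UpdatePageLeftTime is a hypothetical helper.
            FeatureUseLog.UpdatePageLeftTime(logId, DateTime.UtcNow);
        }
        context.Response.ContentType = "text/plain";
        context.Response.Write("ok");
    }

    public bool IsReusable { get { return true; } }
}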
Just put the featureuselogid in a hidden field in the page, with a unique ID that will make it accessible to JavaScript, or set a JavaScript variable.
Having said that, I believe the event you will be able to detect most reliably is Page_Load. You can determine the time between pages by measuring the time between Page_Load events. You won't catch the browser closing, but IMO knowing when the user closes the browser is not all that meaningful.
When you render the page in ASP.NET, include a script tag that assigns a variable the value of a server-side expression:
<script type="text/javascript">
    // featureuselogid must be a protected or public member of the code-behind;
    // quote the injected value if it is not numeric.
    var JS_featureuselogid = <%= featureuselogid %>;
</script>
Later in your JavaScript code, you can reference the JS_featureuselogid variable and get the value that was injected into it during page construction.
I want to redirect the user to another page to fill out a CAPTCHA, but I would like to keep the POST data and, if the CAPTCHA passes, send the user 'back' and complete the previous page's action.
When/if the user succeeds, I would like to add a captchaPass=true, access the original POST data, and continue processing. Right now I am using redirects, but at the moment I am not required to.
Is it possible to carry the POST data along? Keep in mind the user may access multiple pages, so keeping the data separated and avoiding a mix-up is necessary.
One idea is to capture and save all the posted data [1] on the CAPTCHA page, then render an intermediate blank page containing this form data that automatically posts it back to the previous page.
Can this work without any issues with hash checks and security?
Is there a better idea that avoids this blank redirect page?
[1] One other issue here: how do I send this posted data with the redirect without changing the URL or making it too big to be accepted? Keep in mind that a Server.Transfer may not be a good idea, because it complicates things on the CAPTCHA postback.
Update 1
The basic idea here is how one can capture the full postback of a page, show a different page, and then send the postback data on to the original one.
The reason is to stop a bad user, or an attacker's bot program, that tries to bring down the pages/server by making many postbacks from different pages in a short time. All of that happens without JavaScript; most attackers use custom-made programs that simply post data to all pages at once to try to bring down the system.
For example, if a page has a search box, it is very easy to bring most sites down by firing hundreds of random wildcard searches (a DoS attack using SQL wildcards), making the SQL server and the machine burn time and CPU searching. To prevent an attack like this, you need to recognize multiple postbacks from the same computer, and the next step is to redirect that client to a CAPTCHA page to block it in case it is a program.
Another example: many pages have an email submit; very easily you can submit someone's email address hundreds of times and fill their mailbox in no time, or, on a store, place all the items in the cart again and again and fill the database with junk like that.
So AJAX and JavaScript do not work in this case, and we need a way to redirect the client after the postback to a page that can check whether it is a real user or an attacker and stop the attacker; but a real user must be returned to their normal action.
Update 2
This all must be done in a general way, e.g. in a BasePage, in Global.asax, or somewhere else that is independent of the content of any page, because we are trying to prevent a DoS attack or multiple submits anywhere, on any random place of any random page.
Yes, I know how to place a CAPTCHA on a contact page, but that is not what this question asks. This question asks how to carry POST data to a different page, keep it there, and then resend it back to the original one.
The obvious solution is to read the entire postback, save it on the form, then read it back, build on the fly a form containing only that data, and make the postback again. Here I am asking whether there is anything better than this solution.
Other Applications
There is also the case where a user is on a page that requires authentication, but the authentication ticket has expired, and the user makes a postback. In this case we need to keep all the posted data somewhere, proceed with the login page, and then resend the data back to the page that required the authentication.
Sure, just write the form data out to the CAPTCHA page in hidden elements, with the additional CAPTCHA fields added to the form. Have your submit action post the whole thing back to the original page. Using ASP.NET it's probably easier to have the CAPTCHA written to the same page with the form fields hidden, but you can do cross-page postbacks as I've described above.
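A minimal sketch of replaying the captured POST as hidden fields, assuming the intercepting code stashed the original form in Session and the CAPTCHA form contains a Literal control named hiddenFields (all names are illustrative):

// Captcha.aspx.cs
using System;
using System.Collections.Specialized;
using System.Text;
using System.Web;

public partial class Captcha : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack) return;

        var saved = Session["SavedFormData"] as NameValueCollection;
        if (saved == null) return;

        var sb = new StringBuilder();
        foreach (string key in saved.AllKeys)
        {
            // Re-emit each captured field so a successful CAPTCHA submit
            // carries the whole original payload onward.
            sb.AppendFormat("<input type=\"hidden\" name=\"{0}\" value=\"{1}\" />",
                HttpUtility.HtmlAttributeEncode(key),
                HttpUtility.HtmlAttributeEncode(saved[key]));
        }
        hiddenFields.Text = sb.ToString();
    }
}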
Cross Page Posting might help you.
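For completeness, on the receiving page of a cross-page post you can read the source page's controls through PreviousPage; a small sketch (the control ID is illustrative):

using System;
using System.Web.UI.WebControls;

public partial class Results : System.Web.UI.Page // the PostBackUrl target
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // PreviousPage is only populated when another page posted here.
        if (PreviousPage != null && PreviousPage.IsCrossPagePostBack)
        {
            var box = PreviousPage.FindControl("SearchBox") as TextBox;
            string query = (box != null) ? box.Text : null;
        }
    }
}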
Why not implement the CAPTCHA with AJAX? Load the CAPTCHA object and form with JavaScript, perhaps in a div displayed lightbox-style; accept the user input and post it to your server for validation, then continue with the user's post request or keep them there until they get it right (or cancel).
A more specific example:
Give the form's submit button an OnClientClick value of some JavaScript function. This function decides whether this particular form needs a CAPTCHA. If it does, it loads an interface for taking the CAPTCHA (which you'd need to generate with some server-side code) and inserts the CAPTCHA's input element into the form that the user clicked to submit.
Once the user has entered the CAPTCHA input and clicks a button whose click event is bound back to your first JS function, the JavaScript intercepts this action and posts the full form: all the data from the original form plus the CAPTCHA input, for validation. Your server script can now process all of this at once!
This is the best solution I can think of that works along the lines you've asked for, but I can't imagine why you want to perform the CAPTCHA on a different page.
Would Server.Transfer with MultiView- or Panel-like controls work for you? That way, there is no need to worry about data maintenance and postbacks. You can do the validations in JavaScript.
You can keep both pieces of functionality on the same page to avoid moving data from one page to another and bringing it back to the original page. You can use Session for this intermediate step: create a class, instantiate it, initialize it with the control values, and save the object in Session. On postback, reassign the values to the associated controls. This keeps things simple and avoids much of the complexity.
Doubts?
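A minimal sketch of that class-in-Session idea; the class, field, and control names are illustrative:

using System;

[Serializable]
public class FormDraft
{
    public string Name;
    public string Email;
}

// Before showing the CAPTCHA view: capture the control values.
// Session["FormDraft"] = new FormDraft { Name = txtName.Text, Email = txtEmail.Text };

// After the CAPTCHA passes, on postback: restore them.
// var draft = Session["FormDraft"] as FormDraft;
// if (draft != null) { txtName.Text = draft.Name; txtEmail.Text = draft.Email; }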
When should I use Server.Transfer("page.aspx") rather than Button.PostBackUrl = "page.aspx"? Or vice versa.
Update:
Hmm, let's assume I have a shopping cart app and the user clicks the Checkout button.
The next thing I want to do is send the user to an Invoice.aspx page (or similar). When the user hits Checkout, I could set Button.PostBackUrl = "Invoice.aspx"
or I could do
Server.Transfer("Invoice.aspx")
(I also changed the title since the method is called Transfer and not TransferURL)
Server.Transfer will not result in a roundtrip HTTP request/response. The address bar will not update; as far as the browser knows, it has received only one document. Server.Transfer also retains execution context, so the script "keeps going" as opposed to "starting anew".
PostBackUrl ensures an HTTP request, resulting in a possibly different URL and of course incurring network latency costs.
Usually, when you are attempting to "decide between the two", it means you are better off using PostBackUrl.
Feel free to expand your question with specifics and we can look at your precise needs.
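The two options from the question, side by side; btnCheckout is an illustrative control, and you would use one approach or the other, not both:

using System;
using System.Web.UI;

public partial class Cart : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Option 1: cross-page post. The browser posts straight to
        // Invoice.aspx, so the address bar shows the new URL.
        btnCheckout.PostBackUrl = "~/Invoice.aspx";
    }

    // Option 2: server-side transfer. Handle the click on this page, then
    // hand execution to Invoice.aspx; the address bar keeps the old URL.
    protected void btnCheckout_Click(object sender, EventArgs e)
    {
        Server.Transfer("Invoice.aspx");
    }
}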
Here is a good breakdown between the two:
Server.Transfer vs Response.Redirect
Server.Transfer is done entirely on the server. A postback is initiated from the client to post the form contents, and the postback URL identifies the page to post to.
Maybe you meant to compare with Response.Redirect, which forces the client to submit a new request for a new URL.