ASP.NET: less-known ways of tracking unregistered users

I am building an application that needs to interact with users who don't have accounts, and I need to keep track of them. I know OpenID is great and easy and I've used it in almost all my apps, but accounts are not an option here, not even the ones a user is likely to already have (Facebook, Google, Yahoo, etc.).
Any coding language is acceptable (but ASP.NET, JavaScript or Flash would be best, or a combination).
So my plan is to use cookies... but cookies are so easily removed that I really don't count them as a reliable identifier.
IP address... well, this works even through proxies, but if someone uses a dynamic IP, as my whole country does, it also becomes unreliable.
Flash cookies are fine, but I recently read an article describing how Firefox's history-cleaning system gets rid of them too; I'd like confirmation of this.
Browser fingerprinting: I don't know how reliable it is, since anyone who knows even a little of any language that can send HTTP requests can spoof it (the user-agent string, at least).
If anyone knows of methods other than the ones I listed, or wants to correct anything in my list, feel free to reply.
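For reference, here is a minimal sketch of the cookie part of my plan (ASP.NET assumed; the cookie name "visitor_id" and the one-year lifetime are arbitrary choices, not anything prescribed):

    using System;
    using System.Web;

    public static class VisitorTracking
    {
        // Issues (or reads back) a persistent tracking cookie containing a GUID.
        // Survives browser restarts, but not the user clearing their cookies.
        public static string GetOrCreateVisitorId(HttpRequest request, HttpResponse response)
        {
            HttpCookie cookie = request.Cookies["visitor_id"];
            if (cookie == null || string.IsNullOrEmpty(cookie.Value))
            {
                cookie = new HttpCookie("visitor_id", Guid.NewGuid().ToString("N"))
                {
                    Expires = DateTime.UtcNow.AddYears(1),
                    HttpOnly = true
                };
                response.Cookies.Add(cookie);
            }
            return cookie.Value;
        }
    }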

Related

Is there a good way to link registered users' emails with data in google analytics?

If I build a website for my new awesome mobile app (or web service or whatever) I might want to do a slow launch, sending email invites to the first x people to register on the site.
Is there a good way to link each registered email to the corresponding data in google analytics (or any similar service), and query them based on location, language, etc.?
Maybe the Spanish version isn't quite done yet, so I don't want to invite people who used a Spanish-language browser to sign up. Or maybe my app is location-dependent (like timetables for buses) and just doesn't work at all outside my home town.
I really want to have a simple email-only "registration".
It is completely possible, although it may breach some of GA's terms of use if done wrong.
You should not store email addresses in any way as part of your GA data because it would be considered personally identifiable data. However, there is nothing saying that you couldn't store a kind of GUID for each user, and then compare that with email addresses offline - although the user should be made aware that any actions they take while using your service/application/whatever are being tracked with the capability of being personally identified.
As far as getting the actual data that you are discussing, language and location are stored by GA by default, so no headache there!
The best way to store the user's GUID would probably be in a custom dimension. How you do this is going to depend on how you build your product. I had to write a tracking library using the measurement protocol for an AS3 project a while back because there isn't an AS3 library that is supported anymore. If you are using JavaScript, it will be much easier, as Google offers native JS libraries to handle web analytics.
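For illustration, a rough sketch of a Measurement Protocol hit carrying such a GUID in a custom dimension might look like this (Universal Analytics assumed; "UA-XXXXXX-Y" and the dimension index cd1 are placeholders you would replace with your own property ID and configured dimension):

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class GaTracker
    {
        // Sends a single pageview hit to the Universal Analytics Measurement Protocol,
        // carrying the visitor GUID both as the client id (cid) and in custom dimension 1 (cd1).
        public static async Task TrackPageviewAsync(Guid visitorId, string pagePath)
        {
            using (var http = new HttpClient())
            {
                var payload = new Dictionary<string, string>
                {
                    ["v"]   = "1",                  // protocol version
                    ["tid"] = "UA-XXXXXX-Y",        // placeholder tracking ID
                    ["cid"] = visitorId.ToString(), // anonymous client id
                    ["t"]   = "pageview",
                    ["dp"]  = pagePath,
                    ["cd1"] = visitorId.ToString()  // custom dimension holding the GUID
                };
                var response = await http.PostAsync(
                    "https://www.google-analytics.com/collect",
                    new FormUrlEncodedContent(payload));
                response.EnsureSuccessStatusCode();
            }
        }
    }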
Finally, try taking a look at the documentation. It's pretty easy to understand.

How to ID a web user uploading a file

I just used a great PDF converter, but I noticed that it enforces a 30-minute intermission between conversions (to push people towards paying). So I got curious as to how the restriction might be implemented; as far as I can tell, it doesn't seem to be (solely?) cookie-based.
IP address doesn't seem likely (wouldn't that block entire NATted organizations collectively?), and using the filename would be too blunt. Can JavaScript generate hardware-unique info these days? What other ways are there? What is secure, what is easy to implement, and what is just rotten?
I think the problem here is to uniquely identify a client's browser.
Can JavaScript generate hardware-unique info these days? What other ways are there?
A simple (though probably not exhaustive) solution I can imagine is to consider not just the cookie or the IP address, but a combination of signals such as:
cookies
IP address
browser details
Flash cookies, and
whatever information can be pulled from the client's browser via JavaScript (which is enabled in most browsers and required by most sites, including the one you mentioned), such as the installed plugins and their versions.
With all of this information combined, one can identify a machine on the internet fairly uniquely.
What is secure, what is easy to implement and what is just rotten?
Personally, I have never implemented this, but it seems quite doable.
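As a rough sketch of the idea, the server-side signals could be concatenated and hashed into a single fingerprint (ASP.NET assumed; the chosen headers are just one possible combination, and any JavaScript-collected data such as the plugin list could be appended to the same string before hashing):

    using System;
    using System.Security.Cryptography;
    using System.Text;
    using System.Web;

    public static class Fingerprint
    {
        // Combines a few request attributes into a single SHA-256 fingerprint.
        // The set of signals used here is illustrative, not exhaustive.
        public static string FromRequest(HttpRequest request)
        {
            string raw = string.Join("|",
                request.UserHostAddress,            // IP address
                request.UserAgent,                  // browser details
                request.Headers["Accept-Language"],
                request.Headers["Accept-Encoding"]);

            using (var sha = SHA256.Create())
            {
                byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
                return BitConverter.ToString(hash).Replace("-", "");
            }
        }
    }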
Some interesting links that I found in the course of this short but interesting bit of research:
Peter Eckersley. 2010. How unique is your web browser?. In Proceedings of the 10th international conference on Privacy enhancing technologies (PETS'10), Mikhail J. Atallah and Nicholas J. Hopper (Eds.). Springer-Verlag, Berlin, Heidelberg, 1-18.
How unique and trackable is your browser?
Is browser fingerprinting a viable technique for identifying anonymous users?
How do I uniquely identify computers visiting my web site?
Browser fingerprinting code snippet
Flash Cookies, a Little-Known Privacy Threat

Avoid website grab programs

I found several programs on the internet that can grab your website and download the whole thing onto your PC. How can one secure a website against these programs?
Link: http://www.makeuseof.com/tag/save-and-backup-websites-with-httrack/
You have to tell whether the visitor is a human or a bot in the first place. This is no easy task; see e.g.: Tell bots apart from human visitors for stats?
Then, once you have detected which bot it is, you can decide whether you want to give it your website's content or not. Legitimate bots (like Googlebot) conveniently identify themselves in their user-agent string; malicious bots/web crawlers may disguise themselves as common browsers.
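As a rough illustration, a user-agent check on the server could look something like this (ASP.NET/C# assumed; the list of bot tokens is only an example, and since the header is self-reported this only catches bots that are honest about themselves):

    using System;
    using System.Linq;

    public static class BotDetector
    {
        // Example tokens only; real lists are much longer and need regular updating.
        private static readonly string[] BotTokens =
            { "googlebot", "bingbot", "slurp", "duckduckbot", "baiduspider", "httrack", "wget", "curl" };

        // Returns true if the self-reported user-agent string looks like a known bot.
        // A disguised crawler will not be caught by this check.
        public static bool LooksLikeBot(string userAgent)
        {
            if (string.IsNullOrEmpty(userAgent))
                return true; // real browsers virtually always send a user agent

            string ua = userAgent.ToLowerInvariant();
            return BotTokens.Any(token => ua.Contains(token));
        }
    }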
There is no 100% solution, anyway.
If your content is really sensitive, you may want to add a CAPTCHA or user authentication.

Will Google block my access if I use their features without token?

I'm using this link https://www.google.com/reader/api/0/stream/contents/feed/FEEDHERE?output=json&n=20
to fetch feeds using Google's algorithm. As you can see, I'm not adding any other parameters, just fetching the returned data in JSON format. My app will hopefully be heavily used, and if I send a lot of requests to this link, will Google block my access or something?
Is there anything I can include, like a userip parameter or a URL for my app (so that if they have a problem they can just contact me), or something else?
The most basic answer to your question is that Google will change its Terms of Service whenever it likes, and you've got no say in the matter. So if it's allowed today, it might not be allowed tomorrow, at Google's whim.
On this issue, though, you seem fairly safe. From the Terms of Service (this is the general document, since Reader doesn't seem to have a specific one):
Don’t misuse our Services. For example, don’t interfere with our Services or try to access them using a method other than the interface and the instructions that we provide.
Google provides RSS and Atom. They provide these feeds, so I assume they expect that they'll be used. They don't say that it's a misuse to point someone else at those feeds, so it looks OK for now, but they could add such a clause at any time.
All online services are subject to the terms and conditions of their providers. So, as others have said, they may be OK with your use today, but they can change their mind any time down the line. I doubt including a URL or email or contact info will help anything, because when these services change, they don't notify every user of the service; they just announce the change publicly. Usually they give several months' notice in order to give users a chance to adapt their applications, but this is not standardized or enforced, so there is no guarantee. One example would be the fairly recent discontinuation of the Google Finance API (for which no replacement has been announced).
The safest approach would be to design your app so that the feature that uses Google's functionality is decoupled as much as possible from the rest of the app, so that when (or if) the availability of the service changes (i.e. it's no longer available at all), you can adapt your app to use some other source for the feeds with minimal impact on the rest of the app. Design for change and plan for the worst.
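A sketch of that kind of decoupling (the interface and class names here are made up for illustration): the rest of the app talks only to the abstraction, so swapping Google's endpoint for another provider touches a single class.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    // The rest of the application depends only on this abstraction.
    public interface IFeedSource
    {
        Task<string> GetFeedJsonAsync(string feedUrl, int count);
    }

    // One concrete source that happens to use Google's endpoint today;
    // if the service disappears, only this class needs replacing.
    public class GoogleReaderFeedSource : IFeedSource
    {
        private static readonly HttpClient Http = new HttpClient();

        public Task<string> GetFeedJsonAsync(string feedUrl, int count)
        {
            string url = "https://www.google.com/reader/api/0/stream/contents/feed/"
                         + Uri.EscapeDataString(feedUrl)
                         + "?output=json&n=" + count;
            return Http.GetStringAsync(url);
        }
    }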

Check if anyone is currently using an ASP.Net app (site)

I build ASP.NET websites (hosted under IIS 6 usually, often with SQL Server backends and forms authentication).
Clients sometimes ask if I can check whether there are people currently browsing (and/or users currently logged in to) their website at a given moment, usually so they can safely do a deployment (they want a hotfix, for example).
I know the web is basically stateless, so I can't be sure whether someone has closed the browser window, but I imagine there'd be some count of not-yet-timed-out sessions or something, and surely of logged-in users...
Is there a standard and/or easy way to check this?
Jakob's answer is correct but does rely on installing and configuring the Membership features.
A crude but simple way of tracking users online would be to store a counter in the Application object. This counter could be incremented/decremented upon their sessions starting and ending. There's an example of this on the MSDN website:
Session-State Events (MSDN Library)
Because the default session timeout is 20 minutes, the accuracy of this method isn't guaranteed (but then that applies to any web application, due to the stateless and disconnected nature of HTTP).
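A minimal Global.asax sketch of that counter (note that Session_End only fires with in-process session state, which is one more reason the figure is approximate):

    using System;

    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            Application["UsersOnline"] = 0;
        }

        protected void Session_Start(object sender, EventArgs e)
        {
            Application.Lock();
            Application["UsersOnline"] = (int)Application["UsersOnline"] + 1;
            Application.UnLock();
        }

        // Only raised for InProc session state, roughly 20 minutes after the last request.
        protected void Session_End(object sender, EventArgs e)
        {
            Application.Lock();
            Application["UsersOnline"] = (int)Application["UsersOnline"] - 1;
            Application.UnLock();
        }
    }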
I know this is a pretty old question, but I figured I'd chime in. Why not use Google Analytics and view their real time dashboard? It will require minor code modifications (i.e. a single script import) and will do everything you're looking for...
You may be looking for the Membership.GetNumberOfUsersOnline method, although I'm not sure how reliable it is.
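For completeness, its usage is a one-liner; as far as I know it counts membership users whose last-activity timestamp falls within Membership.UserIsOnlineTimeWindow (15 minutes by default), so it only covers authenticated users:

    using System.Web.Security;

    // Somewhere in an admin page or status check:
    int usersOnline = Membership.GetNumberOfUsersOnline();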
Sessions, suggested by other users, are a basic way of doing things, but are not too reliable. They can also work well in some circumstances, but not in others.
For example, if users are downloading large files, watching videos, or listening to podcasts, they may stay on the same page for hours (unless the requests for the binary data are tracked by ASP.NET too), while still using your website.
Thus, my suggestion is to use the server logs to detect if the website is currently used by many people. It gives you the ability to:
See what sort of requests are done. It's quite easy to detect humans and crawlers, and with some experience, it's also possible to see if the human is currently doing something critical (such as writing a comment on a website, editing a document, or typing her credit card number and ordering something) or not (such as browsing).
See who is doing those requests. For example, if Google is crawling your website, it is a very bad idea to go offline, unless the search rating doesn't matter for you. On the other hand, if a bot is trying for two hours to crack your website by doing requests to different pages, you can go offline for sure.
Note: if a website has some critical areas (for example, while writing this long answer, I would be angry if Stack Overflow went offline a few seconds before I submitted it), you can also send regular AJAX requests to the server while the user stays on the page. Of course, you must be careful when implementing such a feature, and take into account that it will increase the bandwidth used and will not work if the user has JavaScript disabled.
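If you do go the heartbeat route, the server side can be as simple as remembering a last-seen timestamp per session and counting the recent ones. A sketch (the class name, the wiring, and the five-minute window are all arbitrary choices for illustration):

    using System;
    using System.Collections.Concurrent;
    using System.Linq;

    // A page or heartbeat endpoint calls Touch() on every request;
    // ActiveCount() reports how many sessions were seen within the chosen window.
    public static class ActivityTracker
    {
        private static readonly ConcurrentDictionary<string, DateTime> LastSeen =
            new ConcurrentDictionary<string, DateTime>();

        public static void Touch(string sessionId)
        {
            LastSeen[sessionId] = DateTime.UtcNow;
        }

        public static int ActiveCount(TimeSpan window)
        {
            DateTime cutoff = DateTime.UtcNow - window;
            return LastSeen.Count(pair => pair.Value >= cutoff);
        }
    }

    // Example use from a page or heartbeat handler:
    //   ActivityTracker.Touch(Session.SessionID);
    //   int active = ActivityTracker.ActiveCount(TimeSpan.FromMinutes(5));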
You can run the netstat command and see how many active connections exist to your website's ports.
The default port for HTTP is 80 (*:80).
The default port for HTTPS is 443 (*:443).
