Pros & Cons of the URL Expiring Concept - http

Is there any disadvantage to using the URL expiring concept to protect online videos?

You're stopping your users from usefully bookmarking the URLs to get back to them later (thus making life harder for them), and stopping them from mailing, tweeting or otherwise sharing the URLs of your best videos with their friends, basically killing any chance that the videos may "go viral" and attract a large number of viewers. In other words, you're working totally against the growing "social" component of the web, as well as making your users' experience less pleasant and useful.
If you want to "protect" a URL e.g. against visits from users who aren't registered with your service, why not use the HTTP authentication mechanisms and/or cookies handled out and processed at application level in correspondence with registration and login? It seems to me that such approaches, if you do need protection, can have fewer issues than "expiring URLs".

Related

Looking for examples of SSO implementations without redirects and pop-ups

I've spent a whole afternoon looking for real-world examples of SSO logins which work without redirects or pop-ups, and had no luck.
Does this mean they do not exist, or do I just suck at searching?
I want to be able to log in from blabla.com to blablablablabla.com as well, without redirects or pop-ups. I am looking for real examples, not instructions on how to pull it off.
Thanks so much.
Trust distribution is really hard, so it's no surprise that most solutions prefer a robust approach that introduces little to no new requirements on the client, and live with warts like redirects and pop-ups.
How much control do you have over your clients?
Authentication Options
TLS client-side certificate: relies on a certificate associated with the client's identity being configured in each user agent.
Kerberos authentication: relies on an existing Kerberos system for authentication. MS Domain Controllers may serve as an adequate Kerberos authentication infrastructure.
Each of these offers pros and cons over the other, but the main theme here is that if you raise the bar on what you expect of your clients' system configuration, you can get attractive results.
If you are designing an application for a small or medium sized business, this new burden might not be too great. Small businesses have few clients to configure and medium sized businesses may have a mature computer-imaging/deployment solution that's appropriate for including configuration items like these.
Large businesses involve a great deal of bureaucracy and real-world exceptions that would break these authentication choices.
Public-facing websites are a poor fit for these choices because the general public isn't experienced enough to maintain a computer configuration that's this complex.
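For the TLS client-certificate option, the server-side check in classic ASP.NET can be as simple as reading the certificate that IIS has already validated. A rough sketch (mapping the certificate subject to an identity is an assumption for the example; real deployments typically key on the thumbprint or a directory lookup):

```csharp
using System.Web;

static class CertificateAuth
{
    // Returns an identity derived from the client certificate, or null if
    // no valid certificate was presented.
    public static string Authenticate(HttpRequest request)
    {
        HttpClientCertificate cert = request.ClientCertificate;
        if (!cert.IsPresent || !cert.IsValid)
            return null; // no certificate, or it failed validation

        // Illustrative: use the certificate subject as the identity.
        return cert.Subject;
    }
}
```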

Security for Exposing Internal Web-based application to the World

We have an internal CRM system which is currently a website that can only be accessed inside our intranet. The boss now wants it exposed to the outside world so that people can use it from home and on the road. My concern is security, given that we will be exposing our customer base to the outside world. I have implemented 3 layers of security as follows:
Username and strong password combination to log in
SSL on all data being pushed across the line
Once the user is logged in and authenticated, the server passes them a token which must be used in all communication with the server from then on.
Basically I'm a bit of a newb with respect to web security. Can anyone give me advice on whether I am missing anything, or whether something should be changed?
There's a whole world of stuff you should consider, and it'll be really hard to quickly answer this - so I'll point you at a range of resources that should help you out / get you started.
First, I'll plug http://security.stackexchange.com, for any specific questions you have - they could be a great help.
Now, on to more immediate things you should check:
Are your systems behind a firewall? I'd recommend at least your DB is placed on a server that is not directly available to the outside world.
Explore and run a range of (free) security tools against your site to try to find any problems, e.g.:
https://asafaweb.com
http://sectools.org/
Read up on common exploits (e.g. SQL injection) and make sure you are guarding against them (see the parameterized-query sketch below):
https://www.owasp.org/index.php/Top_10_2010-Main
https://www.owasp.org/index.php/Category:Vulnerability
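As a concrete illustration of the SQL injection point, parameterized queries are the standard guard in ADO.NET (the table and column names here are made up for the example):

```csharp
using System.Data.SqlClient;

static class CustomerLookup
{
    // Never concatenate user input into SQL; pass it as a parameter so the
    // driver treats it as data rather than executable SQL.
    public static string FindCustomerName(string connectionString, string email)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Name FROM Customers WHERE Email = @email", conn))
        {
            cmd.Parameters.AddWithValue("@email", email);
            conn.Open();
            return cmd.ExecuteScalar() as string; // null if no match
        }
    }
}
```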
How is your token being passed around, and what happens to it if another user gets hold of it (e.g. after being cached on another machine)?
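Whatever the answer, if the token ends up in a cookie it should at minimum be marked HttpOnly and Secure. A sketch for classic ASP.NET (the cookie name and lifetime are illustrative assumptions):

```csharp
using System;
using System.Web;

static class TokenCookie
{
    // Issues the session token in a cookie that page scripts can't read and
    // that the browser will only send over HTTPS.
    public static void Issue(HttpResponse response, string token)
    {
        var cookie = new HttpCookie("AuthToken", token)
        {
            HttpOnly = true,  // invisible to JavaScript, limiting XSS impact
            Secure = true,    // only transmitted over SSL
            Expires = DateTime.UtcNow.AddMinutes(30) // bound the lifetime client-side too
        };
        response.Cookies.Add(cookie);
    }
}
```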
Make sure you have a decent password protection policy (decent complexity; protection against brute-force attacks by locking accounts after 3 failed attempts).
If this is a massive concern for you (consider the risk to your business in a worst-case scenario), consider getting an expert in, or someone to run a security test against your systems.
Or, as mrunion excellently points out in the comments above (+1), have you considered other more secure ways of opening this up, so that you don't need to publish this on the web?
Hope that gets you started.

ASP.NET: Lesser-known ways for unregistered user tracking

I am building an application that needs to interact with users without accounts and keep track of them. I know OpenID is great and easy and I've used it in almost all my apps, but accounts are not an option, even those that the user is likely to have, like a Facebook, Google, or Yahoo account.
Any coding language is acceptable (but ASP.NET, JavaScript or Flash would be best, or a combination).
So my plan is to use cookies... but cookies are so easily removed (I really don't count them as a reliable identifier).
IP address... well, this is efficient even through proxies, but if someone uses a dynamic IP, as my whole country does, this also becomes unreliable.
Flash cookies are fine, but I recently read an article describing how Mozilla Firefox's history-cleaning system gets rid of them too; I need confirmation of this.
Browser fingerprinting - I don't know how reliable it is, since anyone who knows a little of any language that can send HTTP requests can spoof it (the user-agent string, at least).
If anyone knows of any other methods from the ones I listed, or want to correct me in my list feel free to reply.
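For reference, the cookie approach in ASP.NET usually amounts to issuing a random GUID on the first visit and re-reading it afterwards, along these lines (the cookie name is an arbitrary choice; this doesn't fix the "easily removed" problem, it's just the baseline the other techniques fall back from):

```csharp
using System;
using System.Web;

static class VisitorTracking
{
    // Returns a stable identifier for the visitor, creating one on first
    // visit. It survives only as long as the cookie does.
    public static string GetOrCreateVisitorId(HttpRequest request, HttpResponse response)
    {
        HttpCookie existing = request.Cookies["visitor_id"];
        if (existing != null && !string.IsNullOrEmpty(existing.Value))
            return existing.Value;

        string id = Guid.NewGuid().ToString("N");
        response.Cookies.Add(new HttpCookie("visitor_id", id)
        {
            Expires = DateTime.UtcNow.AddYears(1), // persistent, not session-scoped
            HttpOnly = true                        // out of reach of page scripts
        });
        return id;
    }
}
```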

Google Analytics vs DDoS

What I'm wondering is: what kind of behaviour does Google Analytics show when a DDoS attack occurs? Any theories?
My theory would be that an effective DDoS platform/script would not include anything as heavyweight as a JavaScript engine, and that therefore the DDoS activity would not show up in Google Analytics at all.
The point of a DDoS attack is to overwhelm the server with a flood of requests. Any CPU cycles that are spent evaluating JavaScript in the response that the server sends back are cycles that could better be used churning out more requests to the server. I would fully expect a properly executed DDoS attack not to waste time parsing the response from the server, or even reading it off of the underlying socket, let alone interpreting and executing any JavaScript that may be embedded in the markup or fetching scripts and other resources from domains other than the target server.
Of course, this does not preclude the possibility of an exceptionally naive DDoS attack implemented using web frameworks and libraries that do evaluate embedded JavaScript. Such an attack would not (or rather, should not if you've implemented your server code correctly) be very effective, but it would likely generate a spike in Google Analytics traffic.
It depends on the way that the DDoS is implemented. If it's simply an executable distributed to multiple machines, making simple HTTP queries using native TCP sockets, then Google Analytics wouldn't notice anything at all, because the JavaScript that gets returned would never be executed.
However, other sorts of DDoS attacks could leverage actual browsers distributed across many machines. For instance, if you could hack the Yahoo home page and insert an <iframe src='takemedown.com'> into it, you could easily DDoS "takemedown.com". In this particular scenario, GA would certainly detect the impressions, and because (depending on the scenario) there might be an HTTP referrer tag, you could possibly run a report in GA that could pull out the suspicious impressions.
But there are other similar scenarios that wouldn't leave any particular footprints. For instance, if you could hack Lady Gaga's twitter account, you could send out a link to her 16MM followers, and a significant number would immediately click on it: and since most of those clicking on it would probably be doing so from within a separate app, there wouldn't be any referrer tag, and no particular way of identifying the requests.
In other words, it all depends, but it's probably not a terribly useful avenue to investigate. In many (most?) scenarios, GA wouldn't even recognize the impression; and in many others, wouldn't have any reasonable way of picking out the good impressions from the bad.
It will definitely show up as significant peaks in Google Analytics, simply because there is a huge number of requests from multiple sources with a huge bounce rate!
When an HTTP DDoS attack occurs, the attacker is using several (often thousands of) computers to do so; sometimes it's also done with servers. When they make the requests, they don't render the JavaScript or anything - in most cases they simply make a GET request to the webpage.
So no, it shouldn't really have an impact on Google Analytics.
Well, I'm also searching for this kind of information, but I have some considerations about the answer:
You will probably not see the attack itself with Google Analytics, but you should see the results. I mean, a DDoS is a "distributed denial of service", so if the service is effectively denied, then you should see a flat line on the graph in Google Analytics.
It depends how the bot works, but here's what happened to my website:
[Image: Google Analytics real-time report for the month]
As well as the increase in traffic, you will likely see your bounce rate go sky-high and average time on page significantly drop - which I'm sure can have a negative impact on SERPs.
For me it coincided with a Google update, so at first I put it down to that, but I started getting a lot of traffic to the root, terms, and privacy pages, with many requests carrying /?m=0, which is in itself odd (and I'd love for someone to shed light on it).
The attack caused a great deal of timeouts and was painful to fix:
In short, I hooked up CloudFlare, then created Security -> WAF rules to challenge countries where I was receiving most of the bot traffic. I also switched on the basic bot attack mode (there's a more effective super bot attack mode with the paid subscriptions).
The other interesting point of note was why my site was subject to a DDoS attack at all. I wish I knew, but at around the time the attack started I was approached by someone who enquired about buying the website. Possibly a tactic to get me to sell it, or sell it cheap.

Check if anyone is currently using an ASP.NET app (site)

I build ASP.NET websites (hosted under IIS 6 usually, often with SQL Server backends and forms authentication).
Clients sometimes ask if I can check whether there are people currently browsing (and/or users currently logged in to) their website at a given moment, usually so they can safely do a deployment (they want a hotfix, for example).
I know the web is basically stateless, so I can't be sure whether someone has closed the browser window, but I imagine there'd be some count of not-yet-timed-out sessions or something, and surely logged-in users...
Is there a standard and/or easy way to check this?
Jakob's answer is correct but does rely on installing and configuring the Membership features.
A crude but simple way of tracking users online would be to store a counter in the Application object. This counter could be incremented/decremented upon their sessions starting and ending. There's an example of this on the MSDN website:
Session-State Events (MSDN Library)
Because the default session timeout is 20 minutes, the accuracy of this method isn't guaranteed (but then that applies to any web application, due to the stateless and disconnected nature of HTTP).
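For reference, the counter described above looks roughly like this in Global.asax (following the pattern of the MSDN sample linked above):

```csharp
// In Global.asax for classic ASP.NET. Application state is shared across
// the whole app, so access is serialized with Lock/UnLock.
void Application_Start(object sender, EventArgs e)
{
    Application["OnlineUsers"] = 0;
}

void Session_Start(object sender, EventArgs e)
{
    Application.Lock();
    Application["OnlineUsers"] = (int)Application["OnlineUsers"] + 1;
    Application.UnLock();
}

// Only fires for InProc session state, and only once the timeout elapses.
void Session_End(object sender, EventArgs e)
{
    Application.Lock();
    Application["OnlineUsers"] = (int)Application["OnlineUsers"] - 1;
    Application.UnLock();
}
```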
I know this is a pretty old question, but I figured I'd chime in. Why not use Google Analytics and view their real time dashboard? It will require minor code modifications (i.e. a single script import) and will do everything you're looking for...
You may be looking for the Membership.GetNumberOfUsersOnline method, although I'm not sure how reliable it is.
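Usage is a one-liner; it counts users whose last activity falls within Membership.UserIsOnlineTimeWindow (15 minutes by default), so it's an approximation rather than an exact figure:

```csharp
using System.Web.Security;

// Requires the Membership provider to be configured for the site.
int usersOnline = Membership.GetNumberOfUsersOnline();
```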
Sessions, suggested by other users, are a basic way of doing things, but they are not too reliable: they work well in some circumstances and not in others.
For example, if users are downloading large files, watching videos, or listening to podcasts, they may stay on the same page for hours (unless the requests to the binary data are tracked by ASP.NET too), but they are still using your website.
Thus, my suggestion is to use the server logs to detect if the website is currently used by many people. It gives you the ability to:
See what sort of requests are being made. It's quite easy to tell humans from crawlers, and with some experience it's also possible to see if a human is currently doing something critical (such as writing a comment on a website, editing a document, or typing her credit card number and ordering something) or not (such as browsing).
See who is making those requests. For example, if Google is crawling your website, it is a very bad idea to go offline, unless your search ranking doesn't matter to you. On the other hand, if a bot has been trying for two hours to crack your website by making requests to different pages, you can go offline for sure.
Note: if a website has some critical areas (for example, while writing this long answer, I would be angry if Stack Overflow went offline a few seconds before I submitted it), you can also send regular AJAX requests to the server while the user stays on the page. Of course, you must be careful when implementing such a feature, and take into account that it will increase the bandwidth used and will not work if the user has JavaScript disabled.
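Server-side, such a heartbeat can be a trivial handler that does nothing but touch the session, which the page would poll with XMLHttpRequest every minute or so (Heartbeat.ashx is an invented name for the sketch):

```csharp
using System;
using System.Web;
using System.Web.SessionState;

// The page pings this periodically so the server knows the tab is still
// open even when no real requests are being made.
public class Heartbeat : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // Touching the session slides its expiration window.
        context.Session["LastSeen"] = DateTime.UtcNow;
        context.Response.StatusCode = 204; // nothing to send back
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```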
You can run the netstat command and see how many active connections exist to your website's ports.
Default port for http is *:80.
Default port for https is *:443.
