I just used a great PDF converter, but I noticed that it imposes a 30-minute wait between conversions (to win paying customers). So I got curious as to how the restriction might be implemented; as far as I can tell, it doesn't seem to be (solely?) cookie-based.
An IP address doesn't seem likely (wouldn't that block entire NATted organizations collectively?), and using the filename would be too blunt. Can JavaScript generate hardware-unique info these days? What other ways are there? What is secure, what is easy to implement, and what is just rotten?
I think the problem here is to uniquely identify a client's browser.
Can JavaScript generate hardware-unique info these days? What other ways are there?
A simple solution I can imagine (it may not be exhaustive) is to consider not just the cookie or the IP address, but a combination of parameters, like:
cookies
IP address
browser details
Flash cookies, and
information that can be pulled from a client's browser via JavaScript (which is enabled in most browsers and needed by most sites, like the one you mentioned), such as the plugins installed and their versions.
With all this information combined, one can identify a machine on the internet uniquely to a great extent.
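As a rough illustration, here is a minimal client-side sketch of combining a few such attributes (this is my own toy example, not the converter's actual method; real implementations, such as the Panopticlick study linked below, gather far more entropy):

// Minimal fingerprint sketch: concatenate a few browser attributes.
// Real fingerprinting code gathers many more entropy sources.
function buildFingerprint() {
  const parts = [
    navigator.userAgent,                 // browser details
    navigator.language,                  // locale
    screen.width + 'x' + screen.height,  // display size
    screen.colorDepth,                   // color depth
    new Date().getTimezoneOffset(),      // timezone
  ];
  return parts.join('||');
}

// The server would typically hash this string together with the
// client's IP address to form the identifier.
console.log(buildFingerprint());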
What is secure, what is easy to implement and what is just rotten?
Personally, I have never implemented this, but it seems quite doable.
Some interesting links that I found during this short but interesting bit of research:
Peter Eckersley. 2010. How unique is your web browser?. In Proceedings of the 10th international conference on Privacy enhancing technologies (PETS'10), Mikhail J. Atallah and Nicholas J. Hopper (Eds.). Springer-Verlag, Berlin, Heidelberg, 1-18.
How unique and trackable is your browser?
Is browser fingerprinting a viable technique for identifying anonymous users?
How do I uniquely identify computers visiting my web site?
Browser fingerprinting code snippet
Flash Cookies, a Little-Known Privacy Threat
This question comes up because of very specific HIPAA requirements. A Covered Entity (CE), e.g., a doctor, can't use a cloud storage provider (CSP) unless they have a Business Associate Agreement (BAA) with the CSP, even if the data are encrypted and the CSP has no access. I'm not a security expert, but most web hosts' security would, IMO, satisfy HIPAA if there were a BAA.
There's a conduit exception for video, ISPs, and other electronic equivalents of the USPS that do not store electronic Protected Health Information (e-PHI).
I don't know why, but the web hosts who will sign a BAA charge $100-300/mo for very basic hosting that other hosts charge $5-15/mo for. I think they're preying on CE ignorance and the perception that there's lots of money sloshing around; that's true for radiology, but not for primary care.
G-Suite will execute a BAA, which makes it a reasonably-priced solution for gathering Protected Health Information (PHI) from patients while keeping the CE compliant with HIPAA.
It's worth noting that "HIPAA compliance" is ONLY a property of CEs and Electronic Medical Records, not other software or sites. Any other product or service claiming "HIPAA compliance" is misrepresenting itself.
I find Google Sites not as user-friendly as most web hosts. There's less hand-holding for things like installing WP plug-ins or adding SSL certificates. Or maybe Google just does a terrible job of explaining how to actually DO something with a site hosted there. In any case, it seems easier to run a website on a web host that's set up to manage software and WP plug-ins for amateurs.
I'm willing to be educated on this. (24 hours later: I did a lot of self-education; see the answer below.)
The basic HIPAA privacy requirements are rather simple:
CEs can use PHI to treat and carry out essential functions, but must not share it with anyone not entitled to it.
The basic HIPAA security requirements are also simple:
Perform a security risk analysis.
Implement reasonable security measures, and
Document why the various measures were taken or not.
Some elements are required, others must simply be addressed, evaluated and documented.
For example, 2FA is "addressable" as is data encryption, but making an analysis, having physical security and employee training are required.
So my question is whether a G-Suite form embedded in a website on another web host stores any data on that web host, or does it all go back to G-Suite (e.g., G-Drive), where it's secure and covered by a BAA?
The problem when you know very little about a topic is that you don't know what to ask. I know a bunch about HIPAA, but not much about HTML. I did a lot more research, and there are at least two answers.
The short answer is NO: the embedded frame is an iframe linked via HTTPS to G-Suite.
The form in the iframe is a window into docs.google.com, so the data never leaves docs.google.com, where it's covered by G-Suite's BAA. The host site is, in effect, a conduit.
<iframe src="https://docs.google.com/forms/..."></iframe>
Note the https.
Embedding the form does not create a HIPAA violation.
The second answer is that G-Suite has its own content management system and website builder, which requires very little technical skill. There's no need to install WordPress or anything else; you just drag and drop to create a site. All the back-end stuff is done for you. Duh. And they execute a BAA, all for $6 a month. So G-Suite is much simpler, in fact so simple even a child can do it. Their help pages still leave much to be desired.
Bottom line: for small covered entities, G-Suite is a very economical website solution that doesn't create a HIPAA violation. Wish I'd known this yesterday!
FYI: HIPAA compliant Cloud Services
We have a WordPress website that sells and ships products all over the world, including European countries. We have modified the UK-Cookie-Consent plugin to our needs. We currently display the following warning at the top of the page, where clicking on "Find out more" takes the user to our privacy page:
At the same time, we do not display cookie warnings on continents other than Europe. We also have several third-party tracking cookies, such as Facebook, Google Analytics, and Klaviyo, that we use for various tracking purposes.
When I scanned our website for GDPR compliance via various web scanners, such as Cookiebot, cookieserve.com, gdprcookiescan.eu, and ezigdpr.com, the website shows up as non-compliant.
My question is: as a WordPress developer, what additional steps, if any, can I take to make the website GDPR compliant?
My additional question is whether the results of the GDPR scans from the aforementioned scanners should be taken seriously, and whether there are other, more respected scanners out there that are recommended for ensuring GDPR compliance.
Some background info first:
This is important, since there is a lot of misinformation and confusion about this topic out there. I'll do my best to clarify it. There are two different laws (a regulation and a directive) that come into play here.
ePrivacy Directive: This is the directive responsible for the cookie banners. It was actually implemented back in 2003 and had its last amendment in 2009; it's currently being reworked again. Since it's a directive and not a regulation, each EU member country is responsible for implementing its own "version" of it. This has resulted in different requirements depending on the country. (I know, not helpful.) Some countries required an opt-out for cookies, others just an informational banner, which is what you see most of the time.
GDPR (General Data Protection Regulation):
The new buzz in the industry. It doesn't actually explicitly deal with cookies; it deals with the processing of personal data and "personally identifiable information" (PII), which is any data that can be used to identify an individual. Examples: name, email address, phone number, credit card number, IP address (under certain conditions). According to the GDPR, you need a so-called legal basis (why am I legally permitted to process the data) for processing any personal data. There are six of these; you can look them up here: Lawfulness of processing
So what does all that mean and how does it fit together?
You need to show a cookie banner because of the ePrivacy Directive, and you need a legal basis for processing data retrieved via cookies because of the GDPR; which basis applies can vary depending on what the cookie is used for. There are three legal bases that will probably be relevant for your website: legitimate interests, consent, and contract (to process the customer's purchase).
IMPORTANT: According to the GDPR, you are required to provide your users with information about which data is processed and under which legal basis, as well as the purpose of the processing. This needs to go into your privacy policy.
So when can I set which types of cookies?
Strictly necessary cookies: Can be set without explicit consent (you are still required to inform your users, via the banner, that you use cookies). These are cookies your website requires in order to operate, like your customer's login session and shopping cart.
Statistics: Assuming your site uses some kind of analytics service that doesn't share any data with an ad network, you could argue that you have a legitimate interest, in this case something like "improving the website by analyzing website usage". I would definitely at least provide an opt-out for this type.
Targeting/marketing cookies: Here it's difficult to argue that you have a "legitimate interest", since users are being tracked and profiled. For these, opt-in is a must; if a user opts in, your legal basis is consent. The Facebook pixel, for example, should be opt-in.
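To make the opt-in concrete, here is a minimal sketch of loading a tracking script only after consent (the 'consent-button' element ID and the use of localStorage to remember the choice are my own assumptions, not from any particular consent plugin):

// Sketch: inject a tracking script only after explicit consent.
// 'consent-button' is a placeholder element ID.
function loadTrackingScript(src) {
  const s = document.createElement('script');
  s.async = true;
  s.src = src;
  document.head.appendChild(s);
}

const PIXEL_SRC = 'https://connect.facebook.net/en_US/fbevents.js';

document.getElementById('consent-button').addEventListener('click', () => {
  localStorage.setItem('trackingConsent', 'granted'); // remember the choice
  loadTrackingScript(PIXEL_SRC);
});

// On later page loads, only inject if consent was already granted.
if (localStorage.getItem('trackingConsent') === 'granted') {
  loadTrackingScript(PIXEL_SRC);
}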
Answers:
My question is: as a WordPress developer, what additional steps, if any, can I take to make the website GDPR compliant?
You need to do a lot more than just handle the cookies properly; that is only a small aspect of GDPR compliance. You need to determine your processing purposes for all types of personal data you collect from your customers/users. These need to be included in your privacy policy, not forgetting the legal basis for each kind of processing. You need to be able to inform your users/customers (via the privacy policy) about the following whenever you collect personal data: GDPR Article 13
My additional question is whether the results of the GDPR scans from the aforementioned scanners should be taken seriously, and whether there are other, more respected scanners out there that are recommended for ensuring GDPR compliance.
I would not rely on scanners in general, except maybe to figure out which types of cookies your site is setting that you may have overlooked. These scanners cannot tell you whether your site is GDPR compliant; in the best case, they can tell you whether your cookie consent dialogue is working, for example by finding only "strictly necessary" cookies. That banner you have provides implicit consent, by the way; that would have been OK in most cases before the GDPR, but it is no longer OK. If you are setting cookies like Facebook's before the user clicks "I consent", then that is probably why the scanners are saying you are not compliant.
Hope I didn't freak anyone out ;) Everyone is in the same boat of not being entirely sure of some aspects, even the big enterprises. There are a lot of aspects of the GDPR where the text is not entirely clear, leaving room for interpretation.
Side note:
We built a solution for some customers that continuously auto-generates the privacy policy, keeping it aligned with the website, with central updates for policy changes, as well as managing the privacy controls for cookies, social media, etc. We're in the process of turning it into a generic solution that anyone can use, and we're looking for pilot customers to work with to develop it further. You can check it out here: TRUENDO
You may use this Cookie Consent Solution for the GDPR; it will automatically block cookies prior to consent. It works on all platforms, like WordPress, Drupal, etc.
I found several programs on the internet that can grab your website and download the whole site to your PC. How can one secure a website against these programs?
Link: http://www.makeuseof.com/tag/save-and-backup-websites-with-httrack/
You have to tell whether the visitor is a human or a bot in the first place. This is no easy task; see, e.g.: Tell bots apart from human visitors for stats?
Then, once you have detected which bot it is, you can decide whether you want to give it your website content or not. Legitimate bots (like Googlebot) will conveniently identify themselves via their user-agent string; malicious bots/web crawlers may disguise themselves as common browser programs.
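Since the user-agent string alone is trivially spoofed, Google recommends forward-confirmed reverse DNS to verify a self-declared Googlebot. A minimal Node.js sketch of that check (the IP below is just an example):

// Sketch: verify a claimed Googlebot via reverse DNS, then
// forward-confirm that the hostname resolves back to the same IP.
const dns = require('dns').promises;

async function isRealGooglebot(ip) {
  try {
    const [hostname] = await dns.reverse(ip); // e.g. crawl-66-249-66-1.googlebot.com
    if (!/\.(googlebot|google)\.com$/.test(hostname)) return false;
    const { address } = await dns.lookup(hostname); // forward-confirm
    return address === ip;
  } catch {
    return false; // unresolvable -> treat as unverified
  }
}

isRealGooglebot('66.249.66.1').then(console.log);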
There is no 100% solution, anyway.
If your content is really sensitive, you may want to add a captcha or user authentication.
I am building an application that needs to interact with users without accounts and keep track of them. I know OpenID is great and easy, and I've used it in almost all my apps, but accounts are not an option, not even the kind a user is likely to already have, like a Facebook, Google, or Yahoo account.
Any coding language is acceptable (but ASP.NET, JavaScript, or Flash would be best, or a combination).
So my plan is to use cookies... but cookies are so easily removed (I really don't count them as a reliable identifier).
IP address: well, this is effective even through proxies, but if someone uses a dynamic IP, as my whole country does, this also becomes unreliable.
Flash cookies are fine, but I recently read an article describing how Mozilla Firefox's history-cleaning system gets rid of them too; I'd like confirmation of this.
Browser fingerprinting: I don't know how reliable it is, since anyone who knows a little of any language that can send HTTP requests can spoof it (the user-agent string, at least).
If anyone knows of any other methods besides the ones I listed, or wants to correct something in my list, feel free to reply.
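To illustrate what I have in mind with cookies, here is a rough sketch of storing the same ID redundantly in a cookie and in localStorage and re-syncing whichever copy survives (the "evercookie" idea; a user who clears all site data still removes it, so it only helps against casual deletion):

// Sketch: redundant client ID in cookie + localStorage.
function getClientId() {
  const match = document.cookie.match(/(?:^|; )clientId=([^;]+)/);
  let id = (match && match[1]) || localStorage.getItem('clientId');
  if (!id) {
    // Prefer a proper UUID where the API is available.
    id = window.crypto && crypto.randomUUID
      ? crypto.randomUUID()
      : Math.random().toString(36).slice(2);
  }
  // Re-write both copies so deleting one does not lose the ID.
  localStorage.setItem('clientId', id);
  document.cookie = 'clientId=' + id + '; max-age=31536000; path=/';
  return id;
}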
I build ASP.NET websites (usually hosted under IIS 6, often with SQL Server backends and forms authentication).
Clients sometimes ask if I can check whether there are people currently browsing (and/or users currently logged in to) their website at a given moment, usually so they can safely do a deployment (they want a hotfix, for example).
I know the web is basically stateless, so I can't be sure whether someone has closed their browser window, but I imagine there'd be some count of not-yet-timed-out sessions or something, and surely of logged-in users...
Is there a standard and/or easy way to check this?
Jakob's answer is correct but does rely on installing and configuring the Membership features.
A crude but simple way of tracking users online would be to store a counter in the Application object. This counter could be incremented/decremented upon their sessions starting and ending. There's an example of this on the MSDN website:
Session-State Events (MSDN Library)
Because the default Session Timeout is 20 minutes the accuracy of this method isn't guaranteed (but then that applies to any web application due to the stateless and disconnected nature of HTTP).
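The linked MSDN page shows the ASP.NET version. Purely as an illustration of the same counting idea on another stack, a minimal Node/Express sketch might look like this (the 20-minute cutoff mirrors the default session timeout mentioned above):

// Sketch: count sessions seen within the last 20 minutes.
const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({ secret: 'replace-me', resave: false, saveUninitialized: true }));

const lastSeen = new Map();         // sessionID -> timestamp of last request
const TIMEOUT_MS = 20 * 60 * 1000;  // mirror the 20-minute session timeout

app.use((req, res, next) => {
  lastSeen.set(req.sessionID, Date.now());
  next();
});

app.get('/online', (req, res) => {
  const cutoff = Date.now() - TIMEOUT_MS;
  let online = 0;
  for (const [id, ts] of lastSeen) {
    if (ts >= cutoff) online++;
    else lastSeen.delete(id);       // prune expired sessions
  }
  res.json({ online });
});

app.listen(3000);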
I know this is a pretty old question, but I figured I'd chime in. Why not use Google Analytics and view its real-time dashboard? It requires only minor code modifications (i.e., a single script import) and will do everything you're looking for...
You may be looking for the Membership.GetNumberOfUsersOnline method, although I'm not sure how reliable it is.
Sessions, suggested by other users, are a basic way of doing things, but not too reliable. They work well in some circumstances, but not in others.
For example, if users are downloading large files, watching videos, or listening to podcasts, they may stay on the same page for hours (unless the requests for the binary data are tracked by ASP.NET too), but they are still using your website.
Thus, my suggestion is to use the server logs to detect whether the website is currently being used by many people. This gives you the ability to:
See what sorts of requests are made. It's quite easy to tell humans from crawlers, and with some experience it's also possible to see whether a human is currently doing something critical (such as writing a comment on a website, editing a document, or typing her credit card number to order something) or not (such as just browsing).
See who is making those requests. For example, if Google is crawling your website, it is a very bad idea to go offline, unless search ranking doesn't matter to you. On the other hand, if a bot has been trying for two hours to crack your website by making requests to different pages, you can go offline for sure.
Note: if a website has some critical areas (for example, having written this long answer, I would be angry if Stack Overflow went offline a few seconds before I submitted it), you can also send regular AJAX requests to the server while the user stays on the page. Of course, you must be careful when implementing such a feature, and take into account that it will increase the bandwidth used and will not work if the user has JavaScript disabled.
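A minimal sketch of such a heartbeat (the /heartbeat endpoint is an assumption; any lightweight URL that shows up in your server logs would do):

// Sketch: ping the server once a minute while the user stays on the page,
// so the session shows up as active in the logs.
setInterval(() => {
  fetch('/heartbeat', { method: 'POST', keepalive: true })
    .catch(() => { /* ignore transient network errors */ });
}, 60 * 1000);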
You can run the netstat command and see how many active connections exist to your website's ports.
The default port for HTTP is *:80.
The default port for HTTPS is *:443.
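For example, a quick sketch that counts established connections to port 80 by shelling out to netstat from Node (assumes a Unix-like netstat on the PATH; the exact output format varies by OS):

// Sketch: count established connections to port 80 via netstat.
const { execSync } = require('child_process');

const output = execSync('netstat -an', { encoding: 'utf8' });
const active = output
  .split('\n')
  .filter(line => line.includes(':80 ') && line.includes('ESTABLISHED'))
  .length;

console.log('Active connections on port 80: ' + active);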