Should I keep order history on Woocommerce? GDPR [closed] - wordpress

I'm currently going through the process of making sure my WooCommerce websites are compliant with the GDPR legislation coming into effect on May 25th. By default, WooCommerce stores every order in the database so customers are able to view their previous orders and admins can process them.
My question is: should I introduce a way for customers to delete their own orders? Or set a maximum amount of time I hold onto orders before automatically deleting them?
Is there an industry standard for this?
Thanks in advance

What you're looking at is the right to have data that is no longer relevant erased. Keep in mind this is different from the right to be forgotten. This does not need to be a programmatic thing. Sites like Facebook and Google give a set of admin controls to do this so they don't need to process hundreds of thousands of users individually. The rules state 30 days from the request.
A note in your site terms giving an email address to contact to have your data deleted really should suffice. Again, keep in mind it is legal to keep sales data; only specific data may be requested to be destroyed. This is paramount in an e-commerce environment.
There are WP plugins that allow users to delete their own account, but this may cause issues with WooCommerce.
A good place to start is WooCommerce's own blog post on the issue:
https://woocommerce.com/2017/12/gdpr-compliance-woocommerce/
For full details of the right to erasure, check here:
https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-erasure/

Related

Meteor Users collection vs additional collection [closed]

I'm trying to figure out what the best practice is for building collections associated with a user's data (in terms of reactivity, query speed, or other factors).
For example, what's better?
Meteor.Users.profile: { friends, likes, previous orders, locations, favorites, etc. }
Or create an additional collection to keep this data, for example:
Meteor.UserInfo.user: { friends, locations, previous orders, etc. }
Thanks.
Use the Users collection to store information about that user that isn't related to other collections. Typically this should be at the top level of the user document, not inside the profile. The only thing I'd expect to see in the profile is profile information (and not, for instance, a list of previous orders).
Things like previous orders shouldn't be there since you can just query the Orders collection to find them. For performance reasons it is sometimes useful to denormalise this data, but this should be an exception, not the rule.
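To make the split concrete, here is a minimal sketch, assuming a Meteor + MongoDB app with TypeScript typings; the `Orders` collection, its fields, and the `favorites` field are illustrative names, not part of any existing schema:

```typescript
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Illustrative Orders collection: one document per order, keyed by userId.
interface Order {
  _id?: string;
  userId: string;
  items: string[];
  total: number;
  createdAt: Date;
}

export const Orders = new Mongo.Collection<Order>('orders');

// App-level user data lives at the top of the user document, not in profile.
interface AppUser extends Meteor.User {
  favorites?: string[];
}
const Users = Meteor.users as Mongo.Collection<AppUser>;

// Server-side example: favorites go at the top level; profile stays "profile-ish".
export function recordFavorite(userId: string, listingId: string): void {
  Users.update(userId, { $addToSet: { favorites: listingId } });
}

// Previous orders are not embedded in the user document; query them instead.
export function previousOrders(userId: string): Order[] {
  return Orders.find({ userId }, { sort: { createdAt: -1 } }).fetch();
}
```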

Is web-scraping legal for scientific purposes? [closed]

I am writing a research paper on a service-ranking algorithm, and I want to prove its performance and accuracy by running it on public data, let's say App Store data, Google Play, Expedia, etc. Can I parse their data from HTML and use it in my research, or would I be performing an illegal act (web scraping)?
And should I mention explicitly in my research that the data is used only for scientific reasons?
I've read about web scraping and the controversies about its legality, but I did not find any article about whether it's different when used for scientific purposes only.
Thanks in advance
There is nothing inherently illegal about web-scraping a site.
However, I would suggest that you pay attention to the particular site's "Terms of Use" to see if it is something which they expressly forbid. For example, the Expedia Terms of Use here http://www.expedia.ie/p/support/termsofuse outline:
you may not visit or make available the website or any part of the web pages of the website by automatic means, such as by using crawlers or shop bots to systematically retrieve or copy information or connect the content of the website functionally to another website via links
That being said, as long as you don't exert an unreasonable load on the site, or republish their content as your own, I don't expect you will run into any problems.
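To illustrate the "don't exert an unreasonable load" point, here is a deliberately naive sketch in TypeScript (Node 18+, built-in fetch); the contact address, delay, and robots.txt handling are assumptions for the example, and a real crawler should use a proper robots.txt parser library:

```typescript
// Naive robots.txt check plus rate-limited fetching, for illustration only.
const USER_AGENT = 'research-bot (contact: you@example.org)'; // hypothetical contact

async function disallowedPaths(origin: string): Promise<string[]> {
  const res = await fetch(`${origin}/robots.txt`);
  if (!res.ok) return [];
  const lines = (await res.text()).split('\n').map((l) => l.trim());
  const paths: string[] = [];
  let appliesToAll = false;
  for (const line of lines) {
    // Only collect Disallow rules in the "User-agent: *" group (simplistic).
    if (/^user-agent:/i.test(line)) appliesToAll = /:\s*\*\s*$/.test(line);
    else if (appliesToAll && /^disallow:/i.test(line))
      paths.push((line.split(':')[1] ?? '').trim());
  }
  return paths.filter((p) => p.length > 0);
}

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function politeFetch(urls: string[], delayMs = 2000): Promise<string[]> {
  const pages: string[] = [];
  const origin = new URL(urls[0]).origin;
  const disallowed = await disallowedPaths(origin);
  for (const url of urls) {
    const path = new URL(url).pathname;
    if (disallowed.some((p) => path.startsWith(p))) continue; // skip disallowed paths
    const res = await fetch(url, { headers: { 'User-Agent': USER_AGENT } });
    pages.push(await res.text());
    await sleep(delayMs); // throttle: one request every couple of seconds
  }
  return pages;
}
```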

Legality of Mining Crowdsourced Data [closed]

I have a project idea for which I want to mine publicly available data on another website, data that the site received through crowd-sourcing. This is so I have initial data for my own project. To reiterate, I want to write a robot to grab data that is displayed on another website and use it for my own website. Does anyone know the legality of this sort of thing? Does the original website own the data that was given to it by the crowd? Even if so, can I use it?
Web scraping is a legally complicated issue.
The hassles of legal action and enforceability often keep scrapers from getting in trouble.
Outright duplication is considered actionable, although courts have ruled that "duplication of facts" is permitted (US).
I advise you read up here: http://en.wikipedia.org/wiki/Web_scraping#Legal_issues
Legally, you should be fine as long as the data is made available and the people have consented: you aren't hacking, and the other site has permission to share. Check for a license on the other site; if there isn't one, inquire, or be prepared for access to be denied at some point. And even though the data is publicly available, that doesn't mean the other site wants it to be.
Also, double-check and make sure that you don't inadvertently publish private data as well.

Google Analytics - Track every property separately [closed]

I'm creating a real estate site where providers can promote their properties. Since it's nice for providers to see some statistics about clicks, views, etc., I thought I'd use Google Analytics to track this.
In Blogger, you can enable Google Analytics and show statistics just for your blog. I'm wondering if it is possible to do something similar for the properties, so that property providers can see separate statistics for each of their properties.
Does Google Analytics have such functionality? Or is Blogger just an exception because it belongs to Google?
Thx!
You have 2 options, at least in my mind:
Allow the user to input their own account number so that they can get data into their account for just their pages. If you do this, you'll need to make sure that you use different namespaces (see: tracker names) in order to allow for this.
Create profiles based off of the URL (if your URLs show what provider the property is for.) This is a lot more limited than option 1 because there is a hard limit to the number of profiles you can have and it requires you to manually add individuals to each profile.
If I were you, I'd go for option 1 (see the sketch below). Option 2 is a "last resort" option in my book, or something for when there are very few providers (e.g. just a handful of friends as providers).
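For option 1, here is a rough sketch of what the page-side tracking could look like, assuming the classic analytics.js (`ga`) library is already loaded on the page; the tracking IDs, tracker name, and function are illustrative:

```typescript
// Named trackers keep your site-wide hits and each provider's hits separate.
declare const ga: (...args: unknown[]) => void; // global provided by analytics.js

// providerTrackingId would come from the account number the provider entered
// in your admin UI (the values below are placeholders).
function trackPropertyView(providerTrackingId: string, propertyPath: string): void {
  // Site-wide tracker (your own GA property), default tracker name.
  ga('create', 'UA-00000000-1', 'auto');
  ga('send', 'pageview', propertyPath);

  // Separately named tracker, so data also lands in the provider's own
  // Google Analytics account without colliding with yours.
  ga('create', providerTrackingId, 'auto', 'providerTracker');
  ga('providerTracker.send', 'pageview', propertyPath);
}

trackPropertyView('UA-11111111-1', '/properties/lakeside-villa');
```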

How to handle flagged content in a community? [closed]

On a multi-lingual community with almost only user-generated content, is there a commonly used way to treat flagged content (profanity, racism, generally illegal stuff, etc.)?
As there will be a lot of non-English content, the only way to handle the flagging itself is crowdsourcing by the community itself and somehow automatically hiding/deleting the flagged content at a threshold. But what method could be used to stop abuse? E.g. "I don't like him, let's all report this and get it deleted".
First of all, it depends on your content.
But in general, I would start by hiding/deleting flagged content at a threshold (see the sketch further below).
When the community grows I would add crowdsourcing and strike a balance between the two.
I would also do a general scan on all posts to search for keywords which might point to or contain bad content.
Also, you will need to allow some tolerance, as some posts might contain a reference to illegal stuff but be intended for good reasons,
e.g. "don't take drugs".
If the community builds well, I would mostly rely on it.
Another option you might consider is to allow your users to "hide" other users, i.e. not see the content of hidden users.
This allows people to "remove" other users that they don't feel contribute to the community.
You could also allow users to report bad posts, and allow a human to decide whether or not to hide or delete the post. You would have to have community rules for this to be effective.
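Building on the threshold idea above, here is one possible way to dampen "let's all report this" pile-ons by weighting each flag by the flagger's past accuracy; every name and number in this sketch is made up for the illustration:

```typescript
// Hide a post once the weighted sum of flags crosses a threshold, where each
// flagger's weight reflects how often their past flags were upheld by moderators.
interface Flag {
  flaggerId: string;
  flaggerAccuracy: number; // fraction of this user's past flags later upheld (0..1)
}

const HIDE_THRESHOLD = 3.0; // arbitrary example threshold

function shouldHide(flags: Flag[]): boolean {
  // New or unreliable flaggers count less, so a pile-on from one clique
  // is worth less than a few flags from historically accurate users.
  const weighted = flags.reduce((sum, f) => sum + Math.max(0.1, f.flaggerAccuracy), 0);
  return weighted >= HIDE_THRESHOLD;
}

// Example: five low-accuracy flaggers do not hide the post, but three reliable ones do.
console.log(shouldHide(Array(5).fill({ flaggerId: 'x', flaggerAccuracy: 0.2 }))); // false (weight 1.0)
console.log(shouldHide(Array(3).fill({ flaggerId: 'y', flaggerAccuracy: 1.0 }))); // true  (weight 3.0)
```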
