R Shiny - is uploaded data safe and secure?

I'm building a shiny app where users upload transaction data to get access to an analytics dashboard. Can I assure these people that their data is secure from sniffers/hackers and will be removed from the shiny server when their session expires? How does this actually work in Shiny? (Note that I'll be hosting my app on shinyapps.io)

This isn't really about Shiny itself; it's about whatever server you're storing the data on, how you're using encryption/hashing, and what application security measures you've put in place to protect against specific vulnerabilities.
Having said that, here's the (rather minimal, IMHO) security statement for shinyapps.io:
shinyapps.io is secure-by-design. Each Shiny application runs in its
own protected environment and access is always SSL encrypted. Standard
and Professional plans offer user authentication, preventing anonymous
visitors from being able to access your applications.
I would say that the burden will heavily fall on you to use good encryption and data storage practices.
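On the specific point of data being removed when the session expires: Shiny writes each upload to a temporary file (its path is available as input$file$datapath), but if you want to be able to promise cleanup, delete that file, and anything you derive from it, yourself when the session closes. Here's a minimal sketch, assuming a simple CSV upload; the scratch file and the explicit cleanup in onSessionEnded are my own illustrative pattern, so you avoid relying on whatever cleanup the host may or may not do:

library(shiny)

ui <- fluidPage(
  fileInput("file", "Upload transaction data (CSV)"),
  tableOutput("preview")
)

server <- function(input, output, session) {
  # A scratch file of our own for anything we persist, tracked so we can delete it later.
  scratch <- tempfile(fileext = ".rds")

  uploaded <- reactive({
    req(input$file)
    df <- read.csv(input$file$datapath)  # Shiny has stored the upload in a temp file
    saveRDS(df, scratch)                 # anything we write ourselves, we also clean up
    df
  })

  output$preview <- renderTable(head(uploaded()))

  # Delete our copies when the session ends, rather than relying on the host to do it.
  session$onSessionEnded(function() {
    unlink(scratch)
    path <- isolate(input$file$datapath)
    if (!is.null(path)) unlink(path)
  })
}

shinyApp(ui, server)

None of this changes the transport picture (shinyapps.io serves apps over SSL, per the statement above); it just makes the cleanup explicit instead of assumed.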
There are many official and unofficial guidelines you can look to for guidance on data storage. One which big companies, particularly companies going public, must follow is Sarbanes-Oxley.
From grtcorp.com:
The Sarbanes-Oxley Act (SOX Act) was passed by Congress and signed
into law in 2002 in response to major cases of financial fraud, of
which the rise and collapse of Enron is the best known. The overall
focus of the measure is on financial reporting responsibilities, and
ensuring that financial audits are genuinely independent.
However, SOX also includes provisions that relate to the security and
preservation of financial data. And the standards set out for its
implementation "recognized that senior management can't just certify
controls ON the system, these controls also have to control the way
financial information is generated, accessed, collected, stored,
processed, transmitted, and used through the system."
Senior management is thus held ultimately responsible for financial
data security, including putting in place appropriate controls and
procedures to ensure this data security. The good news is that
powerful tools, including data discovery and Data Masking, are
available to meet these standards.
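To make the "Data Masking" idea concrete: one lightweight approach is to pseudonymize identifiers with a keyed hash before transaction data is written anywhere persistent, so you can still group and join on accounts without storing the raw values. A rough sketch in R; the column names and the MASKING_KEY environment variable are assumptions for illustration, not anything SOX itself prescribes:

library(digest)

# Replace raw account identifiers with a keyed (HMAC-SHA256) hash.
# The key lives outside the data, e.g. in an environment variable.
mask_ids <- function(ids, key = Sys.getenv("MASKING_KEY")) {
  vapply(as.character(ids),
         function(x) digest::hmac(key, x, algo = "sha256"),
         character(1), USE.NAMES = FALSE)
}

transactions <- data.frame(
  account = c("4111-1111", "4111-2222"),  # made-up example values
  amount  = c(120.50, 87.25)
)
transactions$account <- mask_ids(transactions$account)
head(transactions)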
I would also encourage you to familiarize yourself with OWASP's list of the top 10 major web app vulnerabilities:
https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

Related

When a G-Suite form is embedded on external website, does any form data get stored on the host site?

This question comes up because of very specific HIPAA requirements. A Covered Entity (CE), e.g. a doctor, can't use a cloud storage provider (CSP) unless they have a Business Associate Agreement (BAA) with the CSP, even if the data are encrypted and the CSP has no access. I'm not a security expert, but most web hosts' security would IMO satisfy HIPAA, IF there were a BAA.
There's a "conduit exception" for video, ISPs, and other electronic equivalents of the USPS that do not store electronic Protected Health Information (e-PHI).
I don't know why, but the web hosts who will sign a BAA charge $100-300/mo for very basic hosting that other sites charge $5-15/mo for. I think they're preying on CE ignorance and the perception that there's lots of money sloshing around (true for radiology, but not for primary care).
G-Suite will execute a BAA, which makes G-Suite a reasonably-priced solution for gathering Protected Health Information (PHI) patient input, while keeping the CE compliant with HIPAA.
It's worth noting that "HIPAA compliance" is ONLY a property of CEs and Electronic Medical Records, not other software or sites. Any other product or service claiming "HIPAA compliance" is misrepresenting itself.
I find Google Sites not as user-friendly as most web hosts. There's less hand-holding for doing things like installing WP add-ins, or adding SSL certificates. Or maybe Google just does a terrible job of explaining how to actually DO something with a site hosted there. In any case, it seems easier to run a website on a web host that's set up to manage software and WP plug-ins for amateurs.
I'm willing to be educated on this. (24 hours later: I did a lot of self-education; see my answer below.)
The basic HIPAA privacy requirements are rather simple:
CEs can use PHI to treat and carry out essential functions, but must
not share it with anyone not entitled to it.
The basic HIPAA security requirements are also simple:
Make a security risk analysis.
Implement reasonable security measures and
Document why various measures were taken or not.
Some elements are required, others must simply be addressed, evaluated and documented.
For example, 2FA is "addressable" as is data encryption, but making an analysis, having physical security and employee training are required.
So my question is whether a G-Suite form embedded in a website on another web host stores any data on that web host, or does it all go back to G-Suite, eg G-Drive, where it's secure and covered by a BAA?
The problem when you know very little about a topic is that you don't know what to ask. I know a bunch about HIPAA, not much about HTML. I did a lot more research, and there are at least two answers.
The short answer is NO: the embedded frame is an iframe, HTTPS-linked to G-Suite.
The form in the iframe is a window into docs.google.com, so data never gets off docs.google.com, where it's covered by G-Suite's BAA. The host site is in effect a conduit.
<iframe src="https://docs.google.com/forms/…">…</iframe>
Note the https.
Embedding the form does not create a HIPAA violation.
The second answer is that G-Suite has its own content management system and website builder, which requires very little technical skill. There's no need to install WordPress or anything else; you just drag-and-drop to create a site, and all the back-end stuff is done for you. Duh. And they execute a BAA, all for $6 a month. So G-Suite is much simpler, in fact so simple that even a child can do it. Their help pages, though, leave much to be desired.
Bottom line: for small covered entities, G Suite is a very economical website solution that doesn't create a HIPAA violation. Wish I'd known this yesterday!
FYI: HIPAA compliant Cloud Services

Deploying an app with Crashlytics to the Apple App Store - do I need a privacy policy?

I am about to submit an app, built in Swift, to the Apple App Store that uses Crashlytics to capture crash information. As users of Crashlytics know, some information about usage, duration, crashes, etc. is captured and stored on the Crashlytics servers. My application does not ask for, store, or attempt to capture any user data.
My question is about the privacy policy for my application. Since I don't capture any user data, I want to state that in my privacy policy, but I'm not sure that's factual since I am using Crashlytics. Any feedback from people who have used Crashlytics in their app and have an actual privacy policy?
Thanks
--Vinny
Quick answer: yes, you need that privacy policy. There are ways to get it done fast, too.
Longer answer:
Third parties (here Crashlytics)
When dealing with a third-party service like this, a quick look into their legal documents will often help (here, Crashlytics, as described in your question).
(...) At all times during the term of this Agreement, Developer shall
maintain a privacy policy (a) that is readily accessible to users from
its website or within its online service (as applicable), (b) that
fully and accurately discloses to its users what information is
collected about its users and (c) that states that such information is
disclosed to and processed by third party providers like Crashlytics
in the manner contemplated by the Services, including, without
limitation, disclosure of the use of technology to track users’
activity and otherwise collect information from users. (...)
And
Developer shall at all times comply with all applicable laws, rules
and regulations relating to data collection, privacy and security,
including, without limitation, the Children’s Online Privacy
Protection Act (“COPPA”). Crashlytics may, at its sole discretion from
time to time during the Term of this Agreement, audit Developer Data
to verify compliance.
Crashlytics is actually being unusually vocal about this topic.
The App Store
At the time of writing (and since iOS 8), Apple requires privacy policies for five categories:
Kids Category, HomeKit, HealthKit, Apple Pay, and Keyboard Extensions. They also require privacy policies for apps with user registration. I can't tell whether any of the above applies to your app. Apple also states in its App Store Review Guidelines that you need to comply with all applicable laws, which brings us to the third and most important reason.
Privacy related regulations
All of the above exists because of global privacy regulations; these companies would most likely not care otherwise. As soon as you work with user data, you are generally under an obligation to disclose that fact. This covers personal data like names and addresses, as well as the tracking of user behaviour. It has been written about at length why analytics services need privacy policies, and it matters even more as soon as you share data with third-party services. In most cases, disclosure or some form of consent is the condition for compliant usage.
If you are interested in reading more about the matter in the context of mobile apps I'd suggest any of these documents:
ICO UK
Ireland
USA/California
Canada
Australia
Hope this helps.
(For proper disclosure: I do some work for iubenda, a tool that helps create privacy policies for apps and websites.)
Vinny, I think it's not mandatory (I've seen apps using Crashlytics without a privacy policy), but it's recommended for the sake of transparency in your communication with users.
Crashlytics already has a privacy policy, so you can reference that policy and add a statement saying that you are not collecting any sensitive information from the user, such as email addresses or phone numbers.

iTunes Connect: Is your app designed to use cryptography?

I am submitting an app that uses the Dropbox SDK to upload photos from the iPhone to a specified folder in Dropbox. I am stuck on a question, as I don't know how/what/if the Dropbox SDK uses cryptography. Can you help me answer the following questions?
Is your app designed to use cryptography or does it contain or incorporate cryptography? (Select Yes even if your app is only utilizing the encryption available in iOS or OS X.)
If so,
Does your app qualify for any of the exemptions provided in Category 5, Part 2 of the U.S. Export Administration Regulations?
Make sure that your app meets the criteria of the exemption listed here. You are responsible for the proper classification of your product. Incorrectly classifying your app may lead to you being in violation of U.S. export laws and could make you subject to penalties, including your app being removed from the App Store. Read the FAQ thoroughly before answering the questions.
You can select Yes for question #2 if the encryption of your app is:
(a) Specially designed for medical end-use
(b) Limited to intellectual property and copyright protection
(c) Limited to authentication, digital signature, or the decryption of data or files
(d) Specially designed and limited for banking use or "money transactions"; or
(e) Limited to "fixed" data compression or coding techniques
You can also select Yes if your app meets the descriptions provided in Note 4 for Category 5, Part 2 of the U.S. Export Administration Regulations.
If not,
Does your app implement one or more encryption algorithms that are proprietary or yet to be accepted as standard by international standard bodies (such as, the IEEE, IETF, ITU, and so on)?
Etc.
I work for the Dropbox API team. I'm not a lawyer, nor familiar with the App Store process. Presumably it asks this question of everyone submitting an app, and many apps already approved use the Dropbox SDK.
That said, reading through the question ISTM that the Dropbox SDK qualifies under (b) and (c). In the SDK that links with your app we use OAuth and SSL for authentication, SSL for keeping your users' files safe from prying eyes, and either digital signatures or cryptographic hashes to safeguard against data corruption and to detect duplicates.
For more info on this topic see also a recent thread on the Dropbox forum: https://forums.dropbox.com/topic.php?id=114805

Is Firebase an all-purpose database?

I've been reading about Firebase and playing with it for a short while. The idea (BaaS) and the implementation are impressive, and having programmed in JavaScript, it seems a viable choice. Not having to deal with scaling and other server-side concerns makes it even more attractive.
My question is: generally speaking, is Firebase a first class back-end candidate for any average data-based application? e.g. billing, CRM, e-commerce, social, location based, etc. I do not include super light or heavy extremes such as a basic chat, or a nuclear plant monitor...
The answer may not be a clear yes/no, but was it built to support the general application space, or just stand out as a real-time read/write data service?
Would appreciate answers based on experience and existing production applications.
Thanks
Yes, Firebase is intended to be a first class back-end for any data based Web, iOS or Android application. The service offers real-time data reads and writes, but also comes with a powerful and flexible security system that allows you to write secure client-only apps, without needing any server code to enforce data boundaries.
There are several production apps listed on the customer showcase page at https://firebase.google.com/customers/
Firebase is now more capable and can be considered a full stand-alone back-end, especially after the introduction of Cloud Functions: https://firebase.google.com/docs/functions/
Firebase may not have support for transactions spanning multiple business objects.
For example, when a sales order is booked it needs to update inventory for multiple items, update billing in receivables, give sales credit to multiple salespeople, and so on.
The Firebase team is expected to come up with a database trigger option that would make all of this possible.

Database Authentication for Intranet Applications

I am looking for a best practice for End to End Authentication for internal Web Applications to the Database layer.
The most common scenario I have seen is to use a single SQL account with the permissions set to what is required by the application. This account is used by all application calls. Then, when people require access to the database via query tools or the like, a separate group is created with query access and people are added to that group.
The other scenario I have seen is to use complete Windows Authentication end to end. The users themselves are added to groups which have the appropriate permissions, so a user is able to update and change data outside the parameters of the application. This normally involves locking people down to the appropriate stored procedures so they aren't updating the tables directly.
The first scenario seems relatively easy to maintain, but raises the concern that if there is a security hole in the application, the whole database is compromised.
The second scenario seems more secure, but has the opposite problem of putting too much business logic in stored procedures on the database, which seems to limit the use of some really cool technologies like NHibernate and LINQ. However, in this day and age, where people can use data in so many ways we don't foresee (mash-ups, etc.), is this the best approach?
Dale - That's it exactly. If you want to provide access to the underlying data store to those users then do it via services. And in my experience, it is those experienced computer users coming out of Uni/College that damage things the most. As the saying goes, they know just enough to be dangerous.
If they want to automate part of their job, and they can demonstrate they have the requisite knowledge, then go ahead: grant their domain account access to the back end. That way anything they do via their little VBA automation is tied to their account, and you know exactly who to go look at when the data gets hosed.
My basic point is that the database is the proverbial holy grail of the application. You want as few fingers in that particular pie as possible.
As a consultant, whenever I hear that someone has allowed normal users into the database, my eyes light up because I know it's going to end up being a big paycheck for me when I get called to fix it.
Personally, I don't want normal end users in the database. For an intranet application (especially one which resides on a Domain) I would provide a single account for application access to the database which only has those rights which are needed for the application to function.
Access to the application would then be controlled via the user's domain account (turn off anonymous access in IIS, etc.).
IF a user needs, and can justify, direct access to the database, then their domain account would be given access to the database, and they can log into the DBMS using the appropriate tools.
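For what it's worth, here's roughly what those two access patterns look like from a client's point of view, sketched as ODBC connection strings via R's DBI/odbc purely for illustration (the driver, server, database, table, and account names are all made-up placeholders; the same split applies whatever client stack you use):

library(DBI)
library(odbc)

# 1) Application access: one SQL login with only the rights the app needs.
#    This string would live in the web application's protected configuration.
app_con <- dbConnect(odbc(), .connection_string = paste0(
  "Driver={ODBC Driver 17 for SQL Server};",
  "Server=sql01.corp.local;Database=IntranetDB;",
  "Uid=AppSvcAccount;Pwd=", Sys.getenv("APP_DB_PWD"), ";"
))

# 2) Direct access for a vetted user: integrated Windows authentication,
#    so everything they run is tied to their own domain account.
analyst_con <- dbConnect(odbc(), .connection_string = paste0(
  "Driver={ODBC Driver 17 for SQL Server};",
  "Server=sql01.corp.local;Database=IntranetDB;",
  "Trusted_Connection=Yes;"
))

# Example query against a hypothetical table.
dbGetQuery(analyst_con, "SELECT TOP 10 * FROM dbo.Orders")
dbDisconnect(app_con)
dbDisconnect(analyst_con)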
I've been responsible for developing several internal web applications over the past year.
Our solution was using Windows Authentication (Active Directory or LDAP).
Our purpose was merely to allow a simple login using an existing company ID/password. We also wanted to make sure that the existing department would still be responsible for verifying and managing access permissions.
While I can't speak to the NHibernate or LINQ argument, unless you have a specific killer feature those tools would give you, Active Directory or LDAP is simple enough to implement and maintain that it's worth trying.
I agree with Stephen Wrighton. Domain security is the way to go. If you would like to use mashups and what-not, you can expose parts of the database via a machine-readable RESTful interface. SubSonic has one built in.
Stephen - Keeping normal end users out of the database is nice, but I am wondering whether, in this day and age, with so many experienced computer users coming out of university/college, this is the right path. If someone wants to automate part of their job, which includes a VBA update to a database that I allow them to make via the normal application, are we losing gains by restricting their access in this way?
I guess the other path implied here is that you could open up the application via services, secure those services via groups, and still keep the users separated from the database.
Then via delegation you can allow departments to control access to their own accounts via the groups as per Jonathan's post.
