data collection for statistics: from web to a database - r

I'm a statistician by trade and I'd like some recommendations on how to set up a website that can collect data into a database. For personal use, I use Google Forms to collect data, and everything gets populated into a spreadsheet. However, this may not be appropriate in a more professional setting, especially when we have multiple pages/forms. I imagine two uses:
A website where I can send the link to others so they can fill it out, similar to Google Forms.
A website where only authorized users can log in to enter data. Think of a setting where patients are followed periodically in a research study. It'd be cool to have the clinician enter the data directly into the database as they fill out the forms, as opposed to having another data analyst transcribe the written forms into the database afterwards.
The obvious solution would be to hire a web developer. However, I like doing things myself when they are manageable. I imagine a web developer would have to know HTML, PHP, and databases (e.g., MySQL or PostgreSQL). My experience with these is limited to setting up a WordPress blog on my Linux server. My experience with HTML is also limited, as I use Emacs Org-mode to generate it from plain text. I hope to hear about solutions with a minimal learning curve. My preference of course would be free, open-source, Linux-based software, but I'd like to hear all available solutions (our data manager is a Windows user).
I recently read a post on Linux Journal that mentions REDCap, but it seems you have to get institutional permission to use it.
I also tagged "R" on this post as I'd like to hear what R users are doing about data collection. I'll ultimately analyze the data with R, but all data analysis begins with the scientific question and data collection.
Thanks!
UPDATE 10/4/2010: Thanks everyone for the responses so far. It appears most of the third-party solutions proposed so far have the data housed in a database hosted by the vendor. I'd like to house all data in our SQL Server. That is, data entry from the web enters the database in real time, ready for data analysis.
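For a sense of what the DIY route involves, here is a minimal sketch of a web form whose submissions land directly in a database table, ready for analysis. It uses Python with Flask and SQLite purely as stand-ins (the table and field names are made up); a real deployment would swap in your SQL Server and add authentication and HTTPS:

```python
# Minimal sketch: a web form that writes each submission straight into a
# database table. Flask and SQLite are stand-ins; a production setup would
# use a real server, a login layer, and e.g. PostgreSQL or SQL Server.
import sqlite3
from flask import Flask, request

app = Flask(__name__)
DB = "study.db"  # hypothetical database file

FORM = """
<form method="post">
  Patient ID: <input name="patient_id"><br>
  Measurement: <input name="measurement"><br>
  <input type="submit" value="Save">
</form>
"""

def init_db():
    with sqlite3.connect(DB) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS visits
                       (patient_id TEXT, measurement REAL,
                        entered_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

@app.route("/", methods=["GET", "POST"])
def entry_form():
    if request.method == "POST":
        # Parameterized INSERT: the row is in the database the moment
        # the clinician hits Save, ready for analysis.
        with sqlite3.connect(DB) as con:
            con.execute(
                "INSERT INTO visits (patient_id, measurement) VALUES (?, ?)",
                (request.form["patient_id"], request.form["measurement"]),
            )
    return FORM

if __name__ == "__main__":
    init_db()
    app.run()
```

The same pattern extends to multiple pages/forms: one table (or one INSERT) per form, with the authorized-users requirement handled by whatever login layer sits in front.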

Maybe the limesurvey.org project is of interest ...

It sounds to me like you've got yourself a med study. A plethora of concerns come to mind just from what you've described you want to do, not the least of which is privacy. Where is the data going to be hosted? Have you received consent from the patients to collect and transmit their information electronically? What data are you storing, if any, that could be combined to reveal their identity?
Personally, I steer clear of DIY online data collection tools. I pay a firm, like Ipsos or Research Now/E-Rewards, to program and manage data collection using questionnaires that I have designed. The reason is that knowing how to design research and analyze data is one thing, but if you've been trained in statistics, I can safely argue that you "don't know shit" about data collection. Sure, you may know a bunch about sampling theory, but when it comes to getting data in, it's best to leave it to the pros.
There are a number of "industrial quality" online data collection tools available.
Confirmit (Pretty much the gold standard for online data collection)
DASH (Smaller following, but incredibly flexible)
There are also purely web-based solutions, some of which are free (not that I would recommend using them):
QuestionPro
SurveyMonkey
Zoomerang
Although, unless you're doing a study with more than 50 patients, I would just recommend getting the physicians or their assistants to fill out Excel sheets and send them to your company.
Also, it's unlikely that you'll need to set up a username/password system. What you want is referred to as an "open link": respondents click a link and enter information, and any identifying info can be added by the respondent. You don't need a password because people can only INPUT information, not read it.
Most of the systems I mentioned above work on the idea of emailing a respondent (a clinician) with a link to a web-based survey, which could easily be adapted to your specific needs and act as a reminder to the clinician to fill out the form.
If your question types are simple, I'm sure you could hire a programmer to put together a website that has the forms you need behind an authorized front end. PHP/MySQL would likely do the trick. But I would double-check the privacy laws in your jurisdiction surrounding medical research before going ahead.

I have conducted medical research using an online form (actually two of them). My questions were quite discrete and particular to the disease I was researching.
Previously, in a related project, I had created two- or three-page questionnaires that were printed; subjects and surgeons filled out the forms, and our research coordinator entered them into our database. It was a lot of work with lots of room for error. I did not like it. Online forms were much better.
I used SurveyGizmo and was happy with it. I looked at lots of options about two years ago; Google Forms did not exist at that time. I went with SurveyGizmo primarily because they had a statement (attestation) that they were compliant with HIPAA. I could not ensure security such as SSL connections with the other websites. However, in order to get that capability (HTTPS connections) I had to buy the enterprise level, even though on every other capability I could have used the free service. Also, SurveyGizmo offered a 50% discount for non-profits, which our research institute qualified for.
SurveyGizmo was easy to design in and put into production without having to program myself. It was easy to download the data in CSV format and read that straight into R, although I had some weird issues that I needed help with: I had to use the "old" export format so that it came as a straightforward CSV. Furthermore, the CSV file had the odd feature of the first TWO rows being header rows, but I solved that problem with the help of Stack Overflow.
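For what it's worth, the two-header-row quirk is a one-line fix once you know it's there. A sketch in Python with pandas (the filename is made up; in R you can get the same effect by reading the header row separately and skipping the first two lines):

```python
# The first row holds the real column names; the second row is a redundant
# extra header, and skiprows=[1] drops exactly that one line.
import pandas as pd

df = pd.read_csv("surveygizmo_export.csv", skiprows=[1])  # hypothetical filename
print(df.head())
```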
SurveyGizmo has fantastic logic and piping that enabled me to ask only relevant questions, which avoided wasting my respondents' time and, even more importantly, meant there were no irrelevant questions to confuse respondents.
Finally, I was able to use SurveyGizmo in such a way that I could also track our (research staff) fulfillment and logistics. For instance, we got notifications when new potential subjects were interested in participating, and we were able to note FedEx tracking numbers along with the records of the corresponding subjects.
Basically it worked well.

The safest platform for collecting confidential survey data is Confirmit. There is a learning curve involved here: you will be coding in VisualSQL, which is only used in Confirmit. The survey responses export to CSV files, from which you can analyze your results in R.
If you are collecting any confidential data, or data where respondents need unique access links so they can only see their own version of the survey, you will want to use Confirmit. The data is housed in Confirmit's data center, but it is much more secure there than with other vendors (i.e., a third party will not be able to hack into your survey and see an individual's responses, or intercept the data being sent from your respondent to Confirmit).


Firebase - Machine Learning and interest tracking to create an algorithm for sorting posts

One of my applications includes user-generated posts and functions in a similar way to Instagram. When a user opens the app they see a feed of posts sorted by date. This works when there's just one small demographic using the app, but as the user base becomes more diverse, not everyone is interested in the same posts. This is why apps like TikTok and Instagram have algorithms to decide which posts to show to a user. Where do I even start with this? I understand that there need to be tags on each post describing what it is about (this is where I think I can use machine learning), and then each user's information needs to include their interests (I'm not sure what can be used to update these as they like or dislike posts). Is there a simple pre-built way of doing this, or any examples? It seems to be a pretty big secret that mostly big tech companies understand and use.
You could use Google's Cloud Vision API (for images): https://cloud.google.com/vision and the Video Intelligence API (for videos): https://cloud.google.com/video-intelligence/docs.
The Video Intelligence API can handle images too, from a byte stream.
Build a Firebase function that analyses posted media with these APIs.
Build the rest of the logic from there: find a way to detect each user's interests from the posts they engage with, and save those interests.
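A rough sketch of the tagging step, in Python for illustration (the same shape works in a JavaScript Cloud Function); the collection and field names here are hypothetical, while google-cloud-vision and firebase-admin are the official client libraries:

```python
# Sketch: label an uploaded image with the Cloud Vision API, then store the
# labels on the post document so the feed can match posts to user interests.
import firebase_admin
from firebase_admin import firestore
from google.cloud import vision

firebase_admin.initialize_app()  # uses default credentials
db = firestore.client()
vision_client = vision.ImageAnnotatorClient()

def tag_post(post_id: str, image_bytes: bytes) -> list[str]:
    # Ask Vision for content labels ("dog", "sunset", ...).
    response = vision_client.label_detection(image=vision.Image(content=image_bytes))
    labels = [label.description.lower() for label in response.label_annotations]
    # Save the labels as the post's tags (collection/field names are made up).
    db.collection("posts").document(post_id).update({"tags": labels})
    return labels
```

A simple starting point for the interest side is then to keep per-tag counters on each user document and increment them whenever the user likes a post carrying that tag. That is far simpler than the recommender systems the big platforms run, but it is enough to get a personalized feed off the ground.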

guidance needed for Corda application design

I have a web background, mostly with JavaScript. I have recently started learning Corda for a project implementation and need guidance in this regard.
Our application is web-based: a user signs up under a school name, creates question papers, and then wants to share part or all of a paper with teachers at another school on our platform. They can make changes and assign it back to the creator, and the process goes back and forth until the paper is finally signed and finalized; once finalized, it cannot be changed by anyone. I need to store these transactions in a Corda application and am not sure how to go about it. I did try replicating it using the negotiation application in the corda/kotlin samples, but got stuck on a bug when trying to send a list of objects.
I do have the following questions in mind:
Should I use the Enterprise edition or go with open source? I think I need schema design for this; the web DB is in PostgreSQL.
As far as I have seen, each node is predefined in the config with a username and password. Is there a way to create a node when a user signs up?
I have schools, and teachers within each school. Do I need a separate node for each school and then create states in each node (not sure if a node can be set up at runtime)? Or do I use the accounts library to create an account for each teacher? If so, is there a way to use passwords with it? I'm unable to find a password field.
How do I send an array of objects to the state? Or should I create a separate state for each question? Different questions can be assigned to different teachers, but multiple questions can also be assigned to the same teacher.
These are a few of the questions on my mind; any help is much appreciated. Most of the examples give IOU samples or states with ints and strings. Please guide me in the right direction.
Alessandro has good advice here; definitely look at the samples repos for inspiration on how to build what you're looking for.
Start with open source; it's easier to prototype with, and switching to Enterprise later won't be an issue for you.
This depends on design. You wouldn't really want to create a new Corda node per person; you might want Corda accounts that run on a single node instead. See the accounts SDK here: https://github.com/corda/accounts
What you might do is make a Corda node for each school and then accounts per teacher, like you were already thinking. That would mean only a couple of nodes, based on the number of schools you have.
As long as your state is marked with @CordaSerializable you won't have problems sending arrays of data. I send an array in a state in this sample here: https://github.com/corda/samples-java/blob/master/Advanced/secretsanta-cordapp/contracts/src/main/java/net/corda/samples/secretsanta/states/SantaSessionState.java#L24
https://github.com/corda/samples-java
https://github.com/corda/samples-kotlin

drupal creating database queried pages

I'm starting to learn Drupal, and hopefully this is an easy newbie question for you to answer. My firm basically has a very large data set that I want to present to the public.
We do research on firms and have a database with each company's name and all the data we have about the firm (it's mainly numbers/estimates). So is there a way to create a view that does this for us?
Creating individual pages is not very practical since we have several thousand companies we have studied over the years.
If there isn't anything easy, then is it possible to create a PHP page that takes the company name from the URL, queries the database, and presents all the data to users?
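The pattern behind that fallback idea is small, sketched below in Python rather than PHP, with hypothetical table and column names; the one thing that matters either way is a parameterized query, since the company name arrives from the URL:

```python
# Sketch: one route serves a page per company by reading the name from the
# URL and querying the database; table and column names are hypothetical.
import sqlite3
from flask import Flask, abort

app = Flask(__name__)

@app.route("/company/<name>")
def company_page(name):
    con = sqlite3.connect("research.db")  # stand-in for the real database
    con.row_factory = sqlite3.Row
    rows = con.execute(
        "SELECT metric, value FROM firm_data WHERE company = ?", (name,)
    ).fetchall()
    con.close()
    if not rows:
        abort(404)
    items = "".join(f"<li>{r['metric']}: {r['value']}</li>" for r in rows)
    return f"<h1>{name}</h1><ul>{items}</ul>"
```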
Take a look at the Views module. It can take a bit of work to customize it for a specific database, but it is very flexible once you do.

Lotus Smartsuite to "something newer"

I shall try and keep my scenario as brief as possible and to the point.
The office I’m currently working for uses Lotus Smartsuite on Windows 98 / XP, using lots of Lotus Script to tie together Lotus 123 and Lotus Word Pro documents. They also make heavy use of the Lotus Object Linking functions. I shall describe its behaviour below:
You can fill rows and columns in a 123 Spreadsheet with data galore, style it and format it any way you like and define it as a range (nothing unique here). However, you can then copy that range and paste it as a link in a Lotus Word Pro document. This link is then categorised by its range name, so expanding the range back in the 123 file causes the table in the Word Pro Document to expand. This link also carries with it all the formatting and styling of the cells in the 123 Spreadsheet. As I imagine you are now aware, this link is completely live, you can double click anywhere in the object and it opens up the 123 file for editing, and all changes go backward and forward between the two documents. Most of the data retrieved from testing equipment is stored in these 123 spreadsheets and then parts of that are linked into a final Lotus Word Pro report document sent to the customer.
Note: Just to be clear, this is NOT the same as a DDE link in OpenOffice, which seems to allow a non-defined range of cells to be copied and imported into a document where all formatting is lost and editing back and forth is not straightforward. It also behaves differently from an OLE object, which seems to import only the entire spreadsheet rather than a small subsection of it.
However, in recent years, supporting this older software (Lotus) has become more difficult, especially with regard to sending customers documents (Lotus Word Pro files are generally unsupported by more modern office tools), and technical support for Lotus Smartsuite seems to be practically non-existent these days. Also, with ongoing development happening in a scripting language no longer practised by mainstream IT technicians, continued development and support seem futile. Once the guys who wrote it move on to other things, we will be left with spaghetti script in a language nobody can help us with.
So, we have this goal of "modernising" our IT system by the end of the year. Linux is becoming a very viable option too (no doubt Debian or a derivative), but OpenOffice doesn't seem to have the linking capability mentioned above. The reason this linking is so important is that the veterans of the office are so used to working this way: storing data in the spreadsheet and linking back to it later in their Word Pro documents. I think they are more than keen to keep this practice going, and we have found no equivalent of it in modern office tools (as was requested of me). I can see, as a software engineer (fluent in many languages), how this practice is not the safest or best way of using and storing data (databases spring to mind), but I was wondering if someone could give me a few other good reasons why this is bad practice in the workplace (I have always believed that you should keep your data away from your reporting and formatting; the two should never be entwined, and this looks like spreadsheet hell to me) ... or why this is a good thing to keep doing!?
So, for those of you still with me, I guess what I am asking is:
Is this practice of storing data, formatting it in spreadsheets and importing it directly back and forth between word documents good or bad, and what can be done about it? I guess I'll need to prove my case either way on this.
Are there ANY modern alternatives to this linking method (regardless of whether it is good or bad practice) out there for Linux or Windows? The link MUST carry formatting as well as dynamic range sizes (DDE links don't seem to be the answer).
What would your solution be if you had to start from scratch? Store everything in databases and use SQL to simply ask for the data you need in your word documents? How would you do this? What software would you use?
Any help with this scenario would be more than helpful, or if you know anywhere I should go to ask for advice, that would be appreciated too.
Thank-you for reading!
My suggestion is to first take a step back. What is the benefit to the way things are done now? Is it just a habit that is tough to break? Is there any reason the documents and spreadsheets need to be maintained and linked the way they are, or is it just a requirement because 'that's how it was done before'?
If you can remove that requirement, you have a lot more options and you're building a system that's easier to understand and maintain.
Regarding question 1, I believe there's nothing wrong with storing data in spreadsheets, especially if the end-users need to create and maintain them and development staff is limited. Some questions are whether that data needs to be secured, is related between spreadsheets, is duplicated across the company, or should be shared in a better way across the company. If any of those are true then a centralized database would make more sense. Personally I'd want any valuable data safely stored in a database where it can be managed, access to it can be controlled, it can be easily backed-up, etc.
Regarding question 2, you can do the same thing in Microsoft Office. You can either link the documents, so that the data stays in the source Excel doc but appears in the Word doc, or you can embed the Excel spreadsheet within the Word doc.
You might want to look at Microsoft Access for storing the data and generating reports. Or you could build an application using a relational database back-end and reporting front-end. The possibilities are wide-open. It really depends on where the expertise lies within the company.
If it were me I'd probably use a SQL Express back-end (it's free) and a custom ASP.NET MVC application for generating the reports, but that's just where my expertise lies.

Application for graphing lots of web related data

I know this isn't programming related, but I hope for some feedback to help me out of this misery.
We actually have lots of different data from our web applications, dating back years.
For example, we have:
Apache logfiles
Daily statistics files from our tracking software (CSV)
Another set of daily statistics, from nation-wide advertisement rankings (CSV)
.. and I can probably produce new data from other sources, too.
Some of the data records start in 2005, some in 2006, etc. However, at some point in time we have data from all of them.
What I'm drea^H^H^H^Hsearching for is an application to understand all the data: something that lets me load it, compare individual data sets and timelines (graphically), compare different data sets within the same time span, and apply filters (especially to the Apache logfiles); and of course this should all be interactive.
Just the BZ2 compressed Apache logfiles are already 21GB in total, growing weekly.
I've had no real success with things like AWStats, Nihuo Web Log Analyzer or similar tools. They can only produce static reports, but I need to interactively query the information, apply filters, overlay other data sets, etc.
I've also tried data mining tools in the hope they could help me, but didn't really succeed in using them (i.e., they're over my head), e.g. RapidMiner.
Just to be sure: it can be a commercial application. But I have yet to find something that is really useful.
Somehow I get the impression I'm searching for something that does not exist, or I have the wrong approach. Any hints are very welcome.
Update:
In the end, it was a mixture of the following things:
I wrote bash and PHP scripts to parse and manage the parsing of the log files, including lots of filtering capabilities.
I generated plain old CSV files to read into Excel. I'm lucky to be using Excel 2007, and its graphical capabilities, albeit still limited to a fixed set of data, helped a lot.
I used Amazon EC2 to run the scripts and send me the CSV via email. I had to crawl through around 200GB of data and thus used one of the large instances to parallelize the parsing. I had to execute numerous parsing attempts to get the data right; the overall processing duration was 45 minutes. I don't know what I could have done without Amazon EC2. It was worth every buck I paid for it.
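The core of those scripts was nothing exotic. A compressed sketch of the same idea in Python (the regex assumes Apache's common/combined log format, and the filenames are made up):

```python
# Sketch: stream bz2-compressed Apache logs, keep only matching requests,
# and write a CSV that Excel (or R) can chart. Assumes the combined log format.
import bz2
import csv
import re

LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

with open("requests.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["ip", "timestamp", "method", "path", "status"])
    for logfile in ["access.log.1.bz2", "access.log.2.bz2"]:  # hypothetical names
        with bz2.open(logfile, "rt", errors="replace") as f:
            for line in f:
                m = LINE.match(line)
                # Example filter: successful page views only, no static assets.
                if m and m["status"] == "200" and not m["path"].endswith((".css", ".js")):
                    writer.writerow([m["ip"], m["ts"], m["method"], m["path"], m["status"]])
```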
Splunk is a product for this sort of thing.
I have not used it myself, though.
http://www.splunk.com/
The open-source data mining and web mining software RapidMiner can import both Apache web server log files and CSV files, and it can also import and export Excel sheets. Rapid-I offers a lot of training courses for RapidMiner, some also on web mining and web usage mining.
In the interest of full disclosure, I've not used any commercial tools for what you're describing.
Have you looked at LogParser? It might be more manual than what you're looking for, but it will allow you to query many different structured formats.
As for the graphical aspect, there are some basic charting capabilities built in, but you're likely to get much more mileage by piping the LogParser output into a tabular/delimited format and loading it into Excel. From there you can chart/graph just about anything.
As for cross-joining different data sources, you can always pump all the data into a database, where you'll have a richer language for querying it.
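For example, once each source's CSV has been loaded into its own table, the cross-source questions become plain SQL joins. A sketch with SQLite and made-up table and column names:

```python
# Sketch: join daily request counts from the parsed logs against the
# tracking software's daily stats, once both live in the same database.
import sqlite3

con = sqlite3.connect("webstats.db")  # hypothetical database file
rows = con.execute("""
    SELECT l.day, l.requests, t.visits
      FROM log_daily l
      JOIN tracking_daily t ON t.day = l.day
     ORDER BY l.day
""").fetchall()
for day, requests, visits in rows:
    print(day, requests, visits)
con.close()
```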
What you're looking for is a "data mining framework", i.e. something which will happily eat gigabytes of somewhat random data and then let you slice'n'dice it in as-yet-unknown ways to find the gold nuggets buried deep inside the static.
Some links:
CloudBase: "CloudBase is a high-performance data warehouse system built on top of Map-Reduce architecture. It enables business analysts using ANSI SQL to directly query large-scale log files arising in web site, telecommunications or IT operations."
RapidMiner: "RapidMiner aleady is a full data mining and business intelligence engine which also covers many related aspects ranging from ETL (Extract, Transform & Load) over Analysis to Reporting."
