I am attempting to use rvest to scrape data from a public health website. When I checked whether the site could be scraped:
robotstxt::paths_allowed('https://cityofcambridge.shinyapps.io/COVID19/')
returned FALSE. It would make my colleague's daily copy/paste routine much less error-prone and tedious if I could get her the data she needs in R. Unfortunately, some of the daily data she needs is not retrievable from the site after the day has passed.
Is there a way of getting around the web scraping restriction? This is a public health website with a stated purpose of providing data to the public, run by the city of Cambridge public health department. I don't see an ethical concern here, but perhaps I am not understanding this.
I am somewhat experienced in R, especially the tidyverse, and new to web scraping. Thanks for any help.
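For context, a minimal sketch of how the underlying rules can be inspected (assuming only the robotstxt package; the domain is taken from the URL above):

library(robotstxt)

# Look at the raw Allow/Disallow rules behind the FALSE result
rt <- robotstxt(domain = "cityofcambridge.shinyapps.io")
rt$permissions

# Re-check a specific path rather than the full app URL
paths_allowed(paths = "/COVID19/", domain = "cityofcambridge.shinyapps.io")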
I am making a search engine, and I want to know how Google scrapes all the data on Stack Overflow.
My intuition:
Do they save all the Stack Overflow data in a CSV file, and when a user types a coding question, use some algorithm to recommend results to the user?
Or is it something else?
Thank you for the help.
Storing all the data in a CSV and running a search over it would probably take hours to retrieve a result.
Google Search works in three stages: crawling, indexing, and serving.
The algorithms working in these three stages are what make Google so powerful. Fine-tuned over years of optimization and learning, they can accurately index each webpage and analyze it without plainly storing everything, which is what you suggested.
Reference: How Search Works
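As a toy illustration of why indexing beats scanning a raw dump, here is a minimal sketch in R (the documents and query are made up): build an inverted index once, then answer a query by lookup and intersection instead of reading every record.

# Toy corpus: three made-up question titles keyed by an id
docs <- c(q1 = "how to reverse a string in python",
          q2 = "reverse a linked list in java",
          q3 = "read a csv file in python")

# Build the inverted index: each word maps to the ids of the docs containing it
tokens <- strsplit(tolower(docs), "\\s+")
index  <- split(rep(names(docs), lengths(tokens)), unlist(tokens))

# Serve a query by intersecting the posting lists instead of scanning all docs
search <- function(query) {
  words <- strsplit(tolower(query), "\\s+")[[1]]
  Reduce(intersect, index[words])
}

search("reverse python")   # returns "q1"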
I started using Hukkster.com a few days ago. It is really fast and accurate.
The Hukkster bookmarklet always fetches the correct price from the product page.
This happens for all the featured merchants it supports.
I was really curious to know what technology stack they might be using for such a fast and accurate response.
I have tried to search for everything I could on Google. I found nothing other than the Hukkster success story, Hukkster in the news, etc.
There was nothing related to the technology used by Hukkster.
It is Mozenda.
Found it. Here it is:
http://blogs.wsj.com/venturecapital/2012/08/29/the-founders-creators-of-new-shopping-app-hukkster-definitely-not-brogrammers/
The co-founders believed in their idea. There was just one problem–neither one knew how to code. They didn’t let that stop them. They developed a “paper prototype” that they could run without coding. They built a crawler using a data extraction service called Mozenda, and did the rest of Hukkster’s legwork with spreadsheets, emails and phones.
http://www.mozenda.com/
I'm a statistician by trade and I'd like some recommendations on how to set up a website that can collect data into a database. For personal use, I use Google Forms to collect data, and everything gets populated into a spreadsheet. However, this may not be appropriate in a more professional setting, especially when we have multiple pages/forms. I imagine two uses:
A website where I can send a link to others so they can fill out the form, similar to Google Forms.
A website where only authorized users can log in to fill out data. Think of a setting where patients are followed periodically in a research study. It'd be cool to have the clinician enter the data directly into the database as he/she fills out the forms, as opposed to having another data analyst transcribe the written forms into the database.
The obvious solution would be to hire a web developer. However, I like doing things myself when they are manageable. I imagine a web developer would have to know HTML, PHP, and databases (e.g., MySQL or PostgreSQL). My experience with these is limited to setting up a WordPress blog on my Linux server. My experience with HTML is also limited, as I use emacs org-mode to generate it from plain text. I hope to hear about solutions with a minimal learning curve. My preference of course would be free open-source software and Linux-based, but I'd like to hear all available solutions (our data manager is a Windows user).
I recently read a post on Linux Journal that mentions REDCap, but it seems you have to get institutional permission to use it.
I also tagged "R" on this post as I'd like to hear what R users are doing about data collection. I'll ultimately analyze the data with R, but all data analysis begins with the scientific question and data collection.
Thanks!
UPDATE 10/4/2010: Thanks everyone for the responses so far. It appears most of the third-party solutions proposed so far have data housed in a database hosted by the vendor. I'd like to house all data in our SQL Server. That is, data entry from the web enters the database in real time, ready for data analysis.
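A minimal sketch of one pure-R way to get web entry straight into the database (assuming the shiny, DBI, and odbc packages and a reachable SQL Server; the server, table, and column names below are placeholders):

library(shiny)
library(DBI)

ui <- fluidPage(
  textInput("subject_id", "Subject ID"),
  numericInput("value", "Measurement", value = NA),
  actionButton("submit", "Submit")
)

server <- function(input, output, session) {
  observeEvent(input$submit, {
    # Placeholder connection details for a SQL Server instance
    con <- dbConnect(odbc::odbc(), Driver = "SQL Server",
                     Server = "research-db", Database = "study")
    on.exit(dbDisconnect(con), add = TRUE)
    # Append one row per submission, in real time
    dbWriteTable(con, "responses",
                 data.frame(subject_id = input$subject_id,
                            value      = input$value,
                            entered_at = Sys.time()),
                 append = TRUE)
  })
}

shinyApp(ui, server)

Only authorized data entry (use case 2) would still need an authentication layer in front of this.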
Maybe the limesurvey.org project is of interest ...
It sounds to me like you've got yourself a med study. There is a plethora of concerns that come to mind just from what you've described you want to do, not the least of which is privacy. Where is it going to be hosted? Have you received consent from the patients to collect and transmit their information electronically? What data are you storing, if any, that could combine to reveal their identity?
Personally, I steer clear of DIY online data collection tools. I pay a firm, like Ipsos, Research Now/E-Rewards, to program and manage data collection using questionnaires that I have designed. The reason is, knowing how to design research and analyze data is one thing. But if you've been trained in statistics - I can safely argue that you "don't know shit" about data collection. Sure you may know a bunch about sampling theory, but when it comes to getting data in - it's best to leave it to the pros.
There are a number of "industrial quality" online data collection tools available.
Confirmit (Pretty much the gold standard for online data collection)
DASH (Smaller following, but incredibly flexible)
There are also purely web based solutions, some of which are free (not that I would recommend using them)
QuestionPro
SurveyMonkey
Zoomerang
Although, unless you're doing a study with over 50 patients, I would just recommend getting the physicians or their assistants to fill out Excel sheets and send them to your co.
Also, it's unlikely that you'll need to set up a username/password system. What you want is referred to as an "open link": respondents click a link and enter information, and identifier info can be added by the respondent. You don't need a password because people can only INPUT information, not read it.
Most of the systems I mentioned above work on the idea of emailing a respondent (a clinician) a link to a web-based survey, which could easily be adapted to your specific needs and act as a reminder to the clinician to fill out the form.
If your question types are simple, I'm sure you could hire a programmer to put together a website that has the forms you need behind an authorized front end. PHP/MySQL would likely do the trick. But I would double-check the privacy laws in your jurisdiction surrounding medical research before going ahead.
I have conducted medical research using an online form (actually two of them). My questions were quite discrete and particular to the disease I was researching.
Previously in a related project, I had created two or three page questionnaires which were printed and then subjects and surgeons filled out the forms and our research coordinator would enter them into our database. It was a lot of work with lots of room for error. I did not like it. Online forms were much better.
I used SurveyGizmo and was happy with it. I looked at lots of options about two years ago. Google Forms did not exist at that time. I went with SurveyGizmo primarily because they had a statement (attestation) that they were compliant with HIPAA. I could not ensure security such as SSL connections with the other websites. However, in order to get that capability (HTTPS connections) I had to buy the enterprise level, even though for every other capability I could have used the free service. Also, SurveyGizmo offered a 50% reduction for non-profits, which our research institute qualified for.
SurveyGizmo was easy to design with and put into production without having to program myself. It was easy to download the data in CSV format and read it straight into R, although I had some weird issues that I needed help with. I had to use the "old" export format so that it came as a straightforward CSV. Furthermore, the CSV file had the odd feature that the first TWO rows were header rows, but I solved that problem with the help of Stack Overflow.
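For the record, a minimal sketch of one way to handle such a file in R (the file name is made up; not necessarily the exact fix I used): keep the first row as column names and skip both header rows when reading the data.

# Row 1 supplies the column names; rows 1-2 are then skipped for the data
col_names <- names(read.csv("export.csv", nrows = 1))
dat <- read.csv("export.csv", skip = 2, header = FALSE, col.names = col_names)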
SurveyGizmo has fantastic logic and piping that enabled me to ask only relevant questions and thereby not waste my respondents' time; even more importantly, there were no irrelevant questions to confuse respondents.
Finally, I was able to use SurveyGizmo in such a way that I could also track our (research staff) fulfillment and logistics. For instance we got notification when there were new potential subjects who were interested in participating. We were able to note FedEx tracking numbers along with the records of the corresponding subjects.
Basically it worked well.
The safest platform for collecting confidential survey data is Confirmit. There is a learning curve involved here: you will be coding in VisualSQL, which is only used in Confirmit. The survey responses export to CSV files, so you can analyze your results in R.
If you are collecting any confidential data, or data where respondents need unique access links so they can only see their own version of the survey, you will want to use Confirmit. The data is housed in Confirmit's data center, but it is much more secure than with other vendors (i.e., a third party will not be able to hack into your survey and see an individual's responses, or intercept the data being sent from your respondent to Confirmit).
Some time ago I came across a site online whose sole purpose was the collection of various data sets: location data, district census data, or whatever sets community members were interested in maintaining.
My question is, do you know the site that I'm thinking of, or can you suggest any other sites that perform a similar service?
I'll suggest GeoNames, a great source for zip/postal codes, lat/long coordinates, and lots of other geographical information.
Take a look at these websites:
IBM Many Eyes
Swivel
Data 360
Here are some others out of my bookmarks:
DatabaseAnswers.org
Discogs
Freebase
Another that was just shown to me is sig.ma, a search tool for retrieving ontologies. If you're building web 2.0 services, this would be a great tool for bootstrapping.
I use the Common Data Hub a lot:
http://www.commondatahub.com/home
especially for tables of standardized data.
buzzdata: http://buzzdata.com/ seems like a pretty good way to publish and share large data sets to me.
What about sports? The baseball archive has stats about everything baseball since 1871...
http://baseball1.com/statistics - a great source of data if one wants to practice some statistics.
The most comprehensive dataset for music is MusicBrainz http://musicbrainz.org/
Search
Edit: I don't mean search generally, I mean to say: Amazon is hosting a few selected public domain data sets for all to use. You can find out what data sets they have available by searching.
I know this isn't programming related, but I'm hoping for some feedback to help me out of this misery.
We actually have lots of different data from our web applications, dating back years.
For example, we have:
Apache logfiles
Daily statistics files from our tracking software (CSV)
Another set of daily statistics from nation-wide advertisement rankings (CSV)
... and I can probably produce new data from other sources, too.
Some of the data records started in 2005, some in 2006, etc. At some point, however, we have data from all of them.
What I'm drea^H^H^H^Hsearching for is an application that understands all the data, lets me load it, compare individual data sets and timelines (graphically), compare different data sets within the same time span, and allows me to filter (especially the Apache logfiles); and of course this should all be interactive.
The BZ2-compressed Apache logfiles alone are already 21 GB in total, growing weekly.
I've had no real success with things like awstats, Nihu Web Log Analyzer or similar tools. They can only produce static reports, but I would need to query the information interactively, apply filters, overlay other data, etc.
I've also tried data mining tools in the hope that they could help me, but didn't really succeed in using them (i.e., they're over my head), e.g. RapidMiner.
Just to make it clear: it can be a commercial application. But I have yet to find something which is really useful.
Somehow I get the impression that I'm searching for something which does not exist, or that I have the wrong approach. Any hints are very welcome.
Update:
In the end it was a mixture of the following things:
I wrote bash and PHP scripts to parse and manage the parsing of the log files, including lots of filtering capabilities (a rough sketch of the parsing idea is below).
I generated plain old CSV files to read into Excel. I'm lucky to use Excel 2007, and its graphical capabilities, albeit still working on a fixed set of data, helped a lot.
I used Amazon EC2 to run the scripts and send me the CSV via email. I had to crawl through around 200 GB of data and thus used one of the large instances to parallelize the parsing. I had to run numerous parsing attempts to get the data right; the overall processing duration was 45 minutes. I don't know what I could have done without Amazon EC2. It was worth every buck I paid for it.
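The actual scripts were bash and PHP; purely as an illustration, here is a rough R sketch of the same parsing idea (the file name and regex grouping are assumptions): pull each combined-log line apart with a regular expression, filter, and write a CSV for Excel.

# Read a bzip2-compressed access log (file name is a placeholder)
lines <- readLines(bzfile("access.log.bz2"))

# Capture ip, timestamp, request, status and size from each combined-log line
pattern <- '^(\\S+) \\S+ \\S+ \\[([^]]+)\\] "([^"]*)" (\\d{3}) (\\S+)'
m <- regmatches(lines, regexec(pattern, lines))

# Keep only lines that matched (full match + 5 groups = 6 elements)
access <- do.call(rbind, lapply(m[lengths(m) == 6], function(x)
  data.frame(ip = x[2], time = x[3], request = x[4],
             status = as.integer(x[5]), bytes = x[6])))

# Example filter: successful GET requests only, exported for Excel
hits <- subset(access, status == 200 & grepl("^GET ", request))
write.csv(hits, "hits.csv", row.names = FALSE)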
Splunk is a product for this sort of thing.
I have not used it myself though.
http://www.splunk.com/
The open source data mining and web mining software RapidMiner can import both Apache web server log files and CSV files, and it can also import and export Excel sheets. Rapid-I offers a lot of training courses for RapidMiner, some of them on web mining and web usage mining.
In the interest of full disclosure, I've not used any commercial tools for what you're describing.
Have you looked at LogParser? It might be more manual than what you're looking for, but it will allow you to query many different structured formats.
As for the graphical aspect of it, there are some basic charting capabilities built in, but you're likely to get much more mileage by piping the LogParser output into a tabular/delimited format and loading it into Excel. From there you can chart/graph just about anything.
As for cross-joining different data sources, you can always pump all the data into a database, where you'll have a richer language for querying it.
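A minimal sketch of that last idea in R, using an in-memory SQLite database via DBI (the RSQLite package and all table/column names and values here are made up for illustration):

library(DBI)
con <- dbConnect(RSQLite::SQLite(), ":memory:")

# Two toy data sources: daily Apache hit counts and daily ad rankings
dbWriteTable(con, "apache_hits",
             data.frame(day = c("2010-01-01", "2010-01-02"),
                        hits = c(1200, 1350)))
dbWriteTable(con, "ad_rankings",
             data.frame(day = c("2010-01-01", "2010-01-02"),
                        ad_rank = c(14, 12)))

# Cross-join the two sources on date with plain SQL
dbGetQuery(con, "
  SELECT a.day, a.hits, r.ad_rank
  FROM apache_hits a JOIN ad_rankings r ON a.day = r.day
")

dbDisconnect(con)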
What you're looking for is a "data mining framework", i.e. something which will happily eat gigabytes of somewhat random data and then let you slice'n'dice it in as-yet-unknown ways to find the gold nuggets buried deep inside the static.
Some links:
CloudBase: "CloudBase is a high-performance data warehouse system built on top of Map-Reduce architecture. It enables business analysts using ANSI SQL to directly query large-scale log files arising in web site, telecommunications or IT operations."
RapidMiner: "RapidMiner already is a full data mining and business intelligence engine which also covers many related aspects ranging from ETL (Extract, Transform & Load) over Analysis to Reporting."