I am trying to download some sound files through R (mostly mp3). I've started off using download.file() as below. However, the sound files downloaded this way sound horrible, as if they're playing way too fast. Any ideas?
download.file("http://www.mfiles.co.uk/mp3-downloads/frederic-chopin-piano-sonata-2-op35-3-funeral-march.mp3","test.mp3")
Better yet, is there a way to download files without having to specify the extension? Sometimes I only have the redirecting page.
Thanks!
Try explicitly setting binary mode with mode="wb":
download.file("http://www.mfiles.co.uk/mp3-downloads/frederic-chopin-piano-sonata-2-op35-3-funeral-march.mp3",
tf <- tempfile(fileext = ".mp3"),
mode="wb")
(You can view the filename with cat(tf).)
Related
I'm wondering if this is possible.
I've got a link to a pdf file and I need to get its MD5 sum (my earlier question). For that I'll need to download it first, and I want to do it in the most efficient way. I'm guessing that since the files are small and RAM is faster than disk, the obvious path would be to store the file in RAM, calculate the MD5 sum, and throw it away.
So far I'm using this:
link <- "xxxxx.pdf"
download.file(link, "temp.pdf", quiet = TRUE, mode = "wb")
result <- md5sum("temp.pdf")
unlink("temp.pdf")
I just read about tempfile() and that I could use it instead of "temp.pdf", but it looks like I'd just be adding an extra line to the code.
Are there better methods to get the files?
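One way to skip the disk entirely is to download the file body into memory and hash the raw bytes directly. A sketch using the httr and digest packages (the URL here is a placeholder for your pdf link):

```r
library(httr)    # for GET() and content()
library(digest)  # for digest()

# Fetch the file body into memory as a raw vector -- no temp file on disk
r <- GET("https://example.com/some.pdf")  # placeholder URL
bytes <- content(r, "raw")

# Hash the raw bytes as-is; serialize = FALSE tells digest() not to
# run R's serialization first, so the result matches md5sum() on a file
# with identical contents
result <- digest(bytes, algo = "md5", serialize = FALSE)
```

This trades the temp-file bookkeeping for an in-memory raw vector, which fits your small-file case.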
I'm trying to download a long list of podcasts, but when I use the download.file command in R it corrupts the audio file into a bunch of crackling noises.
Could any of you recommend a dedicated audio-downloading package, or a download.file method better suited for downloading audio? I went through the ones listed in the help file, but none worked ("auto", "internal", "wininet", "libcurl", "wget" and "curl").
The downloading portion of the code looks similar to this:
url <- "http://play.podtrac.com/npr-510289/npr.mc.tritondigital.com/NPR_510289/media/anon.npr-mp3/npr/pmoney/2016/06/20160603_pmoney_podcast.mp3?orgId=1&d=1121&p=510289&story=480606726&t=podcast&e=480606726&siteplayer=true&dl=1"
download.file(url = url, destfile = "test.mp3")
I attempted different audio files from different sites and had similar results.
Edit: In response to the question by VC.One, this is a url to the initial section of the hex code. I added in more than the couple of lines he requested because the first section looked like file information, which may or may not be relevant:
Try mode = "wb" in download.file(). I had the same issue you mentioned and this solved it for me.
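Applied to the snippet above, only one argument changes:

```r
url <- "http://play.podtrac.com/npr-510289/npr.mc.tritondigital.com/NPR_510289/media/anon.npr-mp3/npr/pmoney/2016/06/20160603_pmoney_podcast.mp3?orgId=1&d=1121&p=510289&story=480606726&t=podcast&e=480606726&siteplayer=true&dl=1"

# mode = "wb" forces a binary transfer; the default text mode can alter
# bytes (e.g. line-ending translation on Windows), which corrupts audio
download.file(url = url, destfile = "test.mp3", mode = "wb")
```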
I'm no R programmer (I only started learning it because of this problem); I normally use Python. For a forecasting task I got a dataset, signalList.rdata, of a phenomenon called partial discharge.
I tried some commands to load, open, and view it, but hardly got a glimpse:
my_data <- get(load('C:/Users/Zack-PC/Desktop/Study/Data Sets/pdCluster/signalList.Rdata'))
But since I lack deep knowledge of R, I want to convert it into a csv file, or any format I can deal with in Python,
or explore it and copy-paste manually.
So I'm asking for any solution, whether using R, Python, or any other tool, to get at what's in the .rdata file.
Have you managed to load the data successfully into your working environment?
If so, write.csv is the function you are looking for.
If not,
setwd("C:/Users/Zack-PC/Desktop/Study/Data Sets/pdCluster/")
# load() returns the *names* of the loaded objects, not the data itself,
# so wrap it in get() to fetch the actual object
signalList <- get(load("signalList.Rdata"))
write.csv(signalList, "signalList.csv")
should do the trick.
If you would like to remove signalList from your working environment afterwards,
rm(signalList)
will accomplish this.
Note: changing your working directory isn't necessary; I just feel it makes the example easier to read. You may also specify another path for saving your csv within the second argument of write.csv.
I am downloading raw html websites with R's download.file function.
For storage efficiency, I would like to know if it is possible to download the html-files without contained images.
Something rings a bell, as if I've heard of something like that before, but I can neither remember it nor find any evidence/instructions.
I would be very grateful for some help.
Greetings,
Marcel
url_list <- c("http://www.spiegel.de/",
              "http://www.faz.net/")
dest_list <- c("test1.html",
               "test2.html")
download.file(url_list,
              dest_list,
              method = "libcurl",
              quiet = FALSE)
Is it possible to load ftp files directly into the R workspace without downloading them?
I have 700+ files, each around 1.5 GB, and I want to extract approx 0.1 % of the information from every file and append it to a single dataframe.
I had a look at Download .RData and .csv files from FTP using RCurl (or any other method), but could not get it to work.
Edit: After some reading, I managed to get the files into R:
library(httr)
r <- GET("ftp://ftp.ais.dk/ais_data/aisdk_20141001.csv", write_memory())
When I try to read the body, I use
content(r, "text")
but the output is gibberish. It might be because of the encoding, but how do I know which encoding the server uses? Any ideas on how to get the original data from the ftp?
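If the encoding is the culprit, httr lets you pass it explicitly instead of letting content() guess. A sketch, where ISO-8859-1 is an assumption (it's common for Danish data; try "UTF-8" if the result still looks wrong):

```r
library(httr)

# Fetch the csv body into memory
r <- GET("ftp://ftp.ais.dk/ais_data/aisdk_20141001.csv", write_memory())

# Decode with an explicit encoding rather than relying on the guess
txt <- content(r, "text", encoding = "ISO-8859-1")

# Parse the in-memory text as csv, no file involved
df <- read.csv(text = txt, stringsAsFactors = FALSE)
```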
I found a solution, which is very simple, but works nonetheless:
library(data.table)
r <- fread("ftp://ftp.ais.dk/ais_data/aisdk_20141001.csv")
This blog post was helpful.
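To scale this up to the 700+ files, one approach is to read each file, filter it down immediately so memory stays small, and bind the pieces at the end. A sketch, where the file-name pattern, date range, and row filter are all placeholders for your actual selection logic:

```r
library(data.table)

# Hypothetical list of daily files on the ftp server
dates <- seq(as.Date("2014-10-01"), as.Date("2014-10-03"), by = "day")
urls  <- sprintf("ftp://ftp.ais.dk/ais_data/aisdk_%s.csv",
                 format(dates, "%Y%m%d"))

extract <- function(u) {
  dt <- fread(u)
  # Placeholder filter: keep roughly the first 0.1 % of rows.
  # Replace with whatever condition selects your data of interest.
  dt[seq_len(max(1L, nrow(dt) %/% 1000L))]
}

# Filter each file right after reading, then combine into one table
result <- rbindlist(lapply(urls, extract))
```

Filtering inside the loop keeps only the small extracts in memory, rather than all 700+ full files.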