Incomplete wav file from load.wave, saved by save.wave

If you record and save a wave file in R using the audio package, an error occurs when trying to load it again. The objective is to store the audio files for later retrieval.
library(audio)
k = 3 # three seconds
x <- rep(NA_real_, 44100*2*k)
# record and save wav file
record(x, 44100, 2)
wait(k) # wait k seconds for the recording to finish
play(x)
save.wave(x, "test.wav")
# load file again
y <- load.wave("test.wav")
After the last command we get:
Error in load.wave("test.wav") : incomplete file
A previous post suggested a problem with extra data, but why wouldn't a file format be internally consistent within a single package?

It looks like version 0.1-5 on CRAN (https://cran.r-project.org/web/packages/audio/index.html) does not include the last commit/fix from 2014 (http://www.rforge.net/audio/git.html).
When I clone the git project and rebuild the audio package, I can successfully run the following (which is broken with the CRAN version):
save.wave(audioSample(sin(1:48000/10), 48000), "test.wav")
play(load.wave("test.wav"))
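If you prefer not to build from a clone by hand, one option (a sketch, assuming the author's GitHub mirror s-u/audio carries the same fix) is to install the development version directly:
install.packages("remotes")
remotes::install_github("s-u/audio")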
Apart from this, there are two other potentially useful packages on CRAN, sound and tuneR, which have been published more recently.
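For example, a minimal round-trip sketch using tuneR instead (sine() just generates a one-second 440 Hz test tone here; adjust to your recording setup):
library(tuneR)
w <- sine(440, samp.rate = 44100, bit = 16)
writeWave(w, "test_tuneR.wav")
w2 <- readWave("test_tuneR.wav")  # loads back without the "incomplete file" error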

Related

unable to open .dat files on R even with haven installed

So I use SGA tools for processing my images, which gives back results in .dat files. To work on this data in R, I tried to import the .dat file using the haven package. I installed haven and loaded its library, but I am still not able to import the data, and it gives this error message.
Error: Failed to parse C:/Users/QuRana/Desktop/SGA Tools/Plate_Image_Example (1).dat: This version of the file format is not supported.
When I run install.packages("haven"), haven installs, but when I load the library using library(haven) nothing appears on my console except this:
> library(haven)
Then when I use this code:
datatrial1 <- read_dta("C:/Users/QuRana/Desktop/SGA Tools/Plate_Image_Example (1).dat")
It gives me the error mentioned above. When I try converting my .dat file to a .csv file and loading my data, the imported data has additional "t" values before the values in every column except the first, like this:
Flags: S - Colony spill or edge interference C - Low colony circularity
# row\tcol\tsize\tcircularity\tflags
1\t1\t4355\t0.9053\t
1\t2\t4456\t0.8401\t
1\t3\t3439\t0.8219\t
1\t4\t3215\t0.8707\t
All the t's before the numeric values are not what I want. Another issue I am facing is that I cannot install the gitter package on my R version, which is 4.2.2.
haven's read_dta() is for Stata .dta files; your .dat file is plain tab-separated text, which is why haven reports an unsupported file format. The "t"s you see are literal tab (\t) separators. You can read your tab-separated file like so:
read.delim("file_path", header = TRUE, sep = "\t")
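A slightly fuller sketch, assuming the layout shown above (the first line is the flag legend, the second holds the column names; the path is the one from your question):
path <- "C:/Users/QuRana/Desktop/SGA Tools/Plate_Image_Example (1).dat"
dat <- read.delim(path, sep = "\t", header = TRUE, skip = 1)
head(dat)  # the \t runs you saw are tab separators, not part of the data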

Error reading hdf file in R

I have two HDF4 files, file 1: "MYD04_L2.A2011001.2340.006.2014078044212.hdf" and file 2: "MYD04_L2.A2011031.mosaic.006.AOD_550_DT_DB_Combined.hdf". The first is a raw data file with 72 sub-datasets and the second is the file I obtained after ordering (i.e. post-processed). For the first file, the following R code works:
layer_name <- getSds("MYD04_L2.A2011001.2340.006.2014078044212.hdf",method="mrt")
layer_name$SDSnames[66:68]
[1] "AOD_550_Dark_Target_Deep_Blue_Combined"
[2] "AOD_550_Dark_Target_Deep_Blue_Combined_QA_Flag"
[3] "AOD_550_Dark_Target_Deep_Blue_Combined_Algorithm_Flag"
It works OK with method = "gdal" as well. However, when I try to read file 2, a window pops up saying gdalinfo.exe has stopped working (method = "gdal"). The same kind of problem arises with mrt, which reports that sdslist.exe has stopped working. I get the following error message:
Error in sds[[i]] <- substr(sdsRaw[i], 1, 11) == "SDgetinfo: " :
attempt to select less than one element in integerOneIndex
Is the single layer the issue here? The first file has 72 sub-datasets while the second has only one (an assumption based on the file name, as I couldn't read it), so has R failed to read the data file? Can anyone propose a solution for reading such data files? If the ncdf4 package with HDF4 enabled is the solution, can anyone explain, step by step, how to enable HDF4 and build ncdf4 on Windows?

I am trying to make an interactive map using the leafletR package, following a ZevRoss blog post, but there is an error in the code

The ZevRoss blog post is as follows:
http://zevross.com/blog/2014/04/11/using-r-to-quickly-create-an-interactive-online-map-using-the-leafletr-package/
The code with error is:
# ----- Write data to GeoJSON
leafdat<-paste(downloaddir, "/", filename, ".geojson", sep="")
writeOGR(subdat, leafdat, layer="", driver="GeoJSON")
And the error is:
Error in writeOGR(subdat, leafdat, layer = "", driver = "GeoJSON") :
GDAL Error 3: Cannot open file 'd:/Leaflet/County_2010Census_DP1.geojson'
As I am new to R, I searched this problem a lot but didn't find any good answer.
I am using RStudio with R version 3.1.1 (2014-07-10) on Windows 7 32-bit.
My rgdal version is 0.9-1.
The rest of the code in the blog runs successfully; this line seems to be the only sticking point.
You could create the GeoJSON using the leafletR package:
library('leafletR')
Your_GeoJSON <- toGeoJSON(data=YourData, dest=getwd())
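A sketch of how this slots into the rest of the blog's workflow (argument names follow the leafletR documentation of that era; YourData stands in for the blog's subdat):
library(leafletR)
gj <- toGeoJSON(data = YourData, name = "mydata", dest = getwd())
map <- leaflet(data = gj, dest = getwd(), title = "mymap")
browseURL(map)  # leaflet() returns the path to the generated HTML map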
I've tried to find a solution to this mysterious error for some time.
Eventually I found a post on the GDAL bug tracker that clarified the problem and gave a solution.
Basically the problem is in the interface between rgdal and GDAL (GDAL changed the way it works and the latest version of rgdal hasn't caught up yet):
writeOGR() calls ogrCheckExists("foo.geojson") to check first if the file exists before creating a new dataset.
In version 1.11 the OGR GeoJSON driver emits an error message that the file doesn't exist, whereas previous versions didn't.
rgdal catches this error as a fatal one and never gets to the writing step. This should be fixed in rgdal.
Meanwhile there is an easy workaround: add check_exists = FALSE as a parameter to writeOGR().
Therefore the following code will work:
writeOGR(spDf,'foo.geojson','spDf', driver='GeoJSON',check_exists = FALSE)
Of course if there is already a geojson file with the chosen name at the location writeOGR still fails.
Assuming you already have a 'd:' drive on your computer and permission to write to it, try the following:
leafdat <- paste(downloaddir, "/", ".geojson", sep="")
leafdat
# [1] "d:/Leaflet/.geojson"
writeOGR(subdat, leafdat, layer="", driver="GeoJSON")
You should then get a ".geojson" file in "d:/Leaflet". Rename the file from ".geojson" to "County_2010Census_DP1.geojson".
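The rename can also be done from R (paths follow the example above):
file.rename("d:/Leaflet/.geojson", "d:/Leaflet/County_2010Census_DP1.geojson")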

Using SRTM tif file in R

I'm trying to import an SRTM dataset into R. I've downloaded the data as a .tif file; however, I am having trouble reading it in R.
I've tried the following code:
t = readTIFF("srtm_56_06/srtm_56_06.tif", as.is=TRUE)
load('srtm_56_06/srtm_56_06.tif')
read_file<-as.matrix(raster("srtm_56_06/srtm_56_06.tif")
However I am still getting error messages:
load('srtm_56_06/srtm_56_06.tif')
# Error: bad restore file magic number (file may be corrupted) -- no data loaded
# In addition: Warning message:
# file ‘srtm_56_06.tif’ has magic number 'II*'
# Use of save versions prior to 2 is deprecated
library(raster)
t = readTIFF("srtm_56_06/srtm_56_06.tif", as.is=TRUE)
# Error: could not find function "readTIFF"
read_file<-as.matrix(raster("srtm_56_06/srtm_56_06.tif") + min(read_file)
# Error: unexpected symbol in:
# "read_file<-as.matrix(raster("srtm_56_06/srtm_56_06.tif")
# min"
Can anyone help me with the commands to import this data? I'm a novice at R and a little lost.
Just read it with raster, but note that you also depend on rgdal being installed to read a .tif.
library(raster)
library(rgdal)
r <- raster("srtm_56_06/srtm_56_06.tif")
If that works, try
plot(r)
r
If it's really a TIFF then that should be fine; if it's really a GeoTIFF then you'll have a sensible map as well. (If it's something else that GDAL can read, you might get a good result anyway; remember that the extension of a file is not a reliable indicator of its contents.)
The SRTM clue suggests that this is a single band DEM file from the tiled global SRTM data set. If it's somehow a "multi-band image" then you could read that with brick and plot with plotRGB (but I really doubt that is the case here). Note that there is a native binary format for SRTM that raster/rgdal could read as well but either they distributed .tif as well or someone else converted it.
There are a number of misconceptions in your code:
load is for a particular file type created from R (not these .tifs)
readTIFF is not in package raster (it lives in the tiff package)
read_file would be a sensible matrix if you have rgdal installed (which raster must use to load a .tif), but why throw away the spatial metadata? (See the corrected sketch below.)
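For completeness, here is a corrected sketch of the original attempt, with the unbalanced parenthesis fixed and NA cells handled (offsetting by the minimum is assumed to be what was intended):
library(raster)
library(rgdal)
r <- raster("srtm_56_06/srtm_56_06.tif")
read_file <- as.matrix(r)
read_file <- read_file + min(read_file, na.rm = TRUE)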

Package that downloads data from the internet during installation

Is anyone aware of a package that downloads a dataset from the internet during the installation process and then prepares and saves it so that it is available when loading the package with library(packageName)? Are there any drawbacks to this approach (besides the obvious one that package installation will fail if the data source is unavailable or the data format has changed)?
EDIT: Some background. The data is three tab-separated files in a ZIP archive, owned by a federal statistics office and generally freely accessible. I have R code which downloads, extracts and prepares the data; in the end, three data frames are created, which could be saved in .RData format.
I am thinking about creating two packages: A "data" package that provides the data, and a "code" package that operates on it.
I did this mockup before, while you were posting your edit. I presume it would work, but it is not tested. I've commented it so you can see what you would need to change. The idea here is to check whether an expected object is available in the current working environment. If it is not, check whether the .RData file that contains the data is in the current working directory. If that is not found, prompt the user to download the file, then proceed from there.
myFunction <- function(this, that, dataset = NULL) {
# We're giving the user a chance to specify the dataset.
# Maybe they have already downloaded it and saved it.
if (is.null(dataset)) {
# Check to see if the object is already in the workspace.
# If it is not, check to see whether the .RData file that
# contains the object is in the current working directory.
if (!exists("OBJECTNAME", where = 1)) {
if (isTRUE(list.files(
pattern = "^DATAFILE\\.RData$") == "DATAFILE.RData")) {
load("DATAFILE.RData")
# If neither of those are successful, prompt the user
# to download the dataset.
} else {
ans = readline(
"DATAFILE.RData dataset not found in working directory.
OBJECTNAME object not found in workspace. \n
Download and load the dataset now? (y/n) ")
if (ans != "y")
return(invisible())
# I usually use RCurl in case the URL is https
require(RCurl)
baseURL = c("http://some/base/url/")
# Here, we actually download the data
temp = getBinaryURL(paste0(baseURL, "DATAFILE.RData"))
# Here we load the data
load(rawConnection(temp), envir=.GlobalEnv)
message("OBJECTNAME data downloaded from \n",
paste0(baseURL, "DATAFILE.RData \n"),
"and added to your workspace\n\n")
rm(temp, baseURL)
}
}
dataset <- OBJECTNAME
}
TEMP <- dataset
## Other fun stuff with TEMP, this, and that.
}
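A hypothetical call, leaving dataset at its NULL default so the lookup/download logic above runs (OBJECTNAME and DATAFILE remain the placeholders from the mockup):
result <- myFunction(this = 1, that = 2, dataset = NULL)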
Two packages, hosted at Github
Here's another approach, building on the comments between @juba and me. The basic concept is to have, as you describe, one package for the code and one for the data. This function would be part of the package that contains your code. It will:
Check to see if the data package is installed
Check to see if the version of the data package you have installed matches the version at Github, which we will assume is the most up-to-date version.
If it fails any of the checks, it asks the user whether they want to update their installation of the package. For demonstration, I've linked to one of my packages in progress at Github. This should give you an idea of what you need to substitute to get it to work with your own package once you've hosted it there.
CheckVersionFirst <- function() {
# Check to see if installed
if (!"StataDCTutils" %in% installed.packages()[, 1]) {
Checks <- "Failed"
} else {
# Compare version numbers
require(RCurl)
temp <- getURL("https://raw.github.com/mrdwab/StataDCTutils/master/DESCRIPTION")
CurrentVersion <- gsub("^\\s|\\s$", "",
gsub(".*Version:(.*)\\nDate.*", "\\1", temp))
if (packageVersion("StataDCTutils") == CurrentVersion) {
Checks <- "Passed"
}
if (packageVersion("StataDCTutils") < CurrentVersion) {
Checks <- "Failed"
}
}
switch(
Checks,
Passed = { message("Everything looks OK! Proceeding!") },
Failed = {
ans = readline(
"'StataDCTutils is either outdated or not installed. Update now? (y/n) ")
if (ans != "y")
return(invisible())
require(devtools)
install_github("StataDCTutils", "mrdwab")
})
# Some cool things you want to do after you are sure the data is there
}
Try it out with CheckVersionFirst().
Note: this will succeed only if you religiously remember to update the version number in your DESCRIPTION file every time you push a new version of the data to Github!
So, to clarify/recap/expand, the basic idea would be to:
Periodically push the updated version of your data package to Github, being sure to change the version number of the data package in its DESCRIPTION file when you do so.
Integrate this CheckVersionFirst() function as an .onLoad event in your code package (see the sketch after this list; obviously modify the function to match your account and package name).
Change the commented line that reads # Some cool things you want to do after you are sure the data is there to reflect the cool things you actually want to do, which would probably start with library(YOURDATAPACKAGE) to load the data.
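For step 2, a minimal sketch of the wiring (this would go in, e.g., zzz.R of your code package):
.onLoad <- function(libname, pkgname) {
CheckVersionFirst()
}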
This method may not be efficient, but it is a good workaround. If you are making a package that needs regularly updated data, first make a package which contains that data. It does not need any functions, but I like the concept of a setter (which you might not need in this case) and getter.
Then, when you make your main package, have the 'data' package as a dependency. This way, whenever someone installs your package, they will always have the latest data.
On your part, you'll just have to swap out the data in your 'data' package, and upload it to the repo you want.
If you don't know how to build a package, see ?package.skeleton, R CMD build and R CMD check.
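For instance, a sketch of creating the bare 'data' package from the three prepared data frames (all names here are placeholders):
package.skeleton(name = "myDataPkg", list = c("df1", "df2", "df3"))
# then, from a shell:
#   R CMD build myDataPkg
#   R CMD check myDataPkg_1.0.tar.gz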
