I am using the rredis package as an R client for Redis, and I am building a neural network with the neural-redis module.
I perform all pre-processing of the data in R and send the variables from R to the Redis instance using the redisCmd command. I am using the iris dataset.
When I pass literal values, the module accepts the input. If I pass variables instead, it says the input is an invalid neural network input.
NR.OBSERVE works if the values are given individually:
library(rredis)
redisConnect("localhost",6379)
create <- redisCmd('NR.CREATE','net', 'REGRESSOR', '2','3', '->', '1','NORMALIZE','DATASET','50','TEST','10')
obs<- redisCmd('NR.OBSERVE', 'net','1','2','->','3')
NR.OBSERVE does not work, however, if I give it variables containing the values:
library(rredis)
redisConnect("localhost",6379)
create <- redisCmd('NR.CREATE','net', 'REGRESSOR', '2','3', '->', '1','NORMALIZE','DATASET','50','TEST','10')
a<-1
b<-2
c<-3
obs<- redisCmd('NR.OBSERVE', 'net','a','b','->','c')
This throws the following error:
Error in doTryCatch(return(expr), name, parentenv, handler) :
ERR invalid neural network input: must be a valid float precision floating point number
What is the correct way of doing this?
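The likely cause is that 'a', 'b' and 'c' reach Redis as the literal strings "a", "b" and "c" rather than as the values of the R variables, so neural-redis cannot parse them as floats. A minimal sketch of a fix (assuming redisCmd forwards each argument as a string, as in the examples above) is to pass the variables unquoted and converted to character:
library(rredis)
redisConnect("localhost", 6379)
a <- 1
b <- 2
c <- 3
# the variables are now sent as the strings "1", "2" and "3"
obs <- redisCmd('NR.OBSERVE', 'net', as.character(a), as.character(b), '->', as.character(c))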
Trying to do a simple isochrone using the osrm package in R, following this example:
https://rstudio-pubs-static.s3.amazonaws.com/259089_2f5213f21003443994b28aab0a54cfd6.html
iso1 <- osrm::osrmIsochrone(loc=c(-93.2223, 44.8848),breaks=10)
This yields the following output to the console. It occurs whether I call the default server https://routing.openstreetmap.de/ or an internal OSRM server. I assume this is a simple problem, but I haven't seen anything on Google or SO.
Error in doTryCatch(return(expr), name, parentenv, handler): object 'res' not found {repeat 8 more times}
Error in UseMethod("st_as_sf") :
no applicable method for 'st_as_sf' applied to an object of class "NULL"
The first error may be related to a limit in the OSRM server configuration that your query exceeds; see the osrm-routed options, e.g.:
--max-table-size arg (=100)  Max. locations supported in distance table query
Indeed, computing isochrones may require queries for large time/distance tables.
I see maybe two problems:
1. The CRAN version of osrm uses .onAttach() to set the server address and the routing profile. Calling the function via osrm:: does not attach the package, so the server is not set. Use library(osrm) to set the defaults, or pass the osrm.server and osrm.profile arguments directly in the function call.
2. If you tried to use an internal OSRM server, you have probably used something like:
options(osrm.server = "https://routing.openstreetmap.de/", osrm.profile = "car")
In that case, check the osrm.profile argument: in previous versions "driving" was an allowed profile name, but now the only allowed names are "car", "foot" and "bike".
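Putting both points together, a sketch that should avoid both pitfalls (untested; substitute your internal server's URL if you have one):
library(osrm)  # attaching the package sets the default server and profile
# either rely on the defaults...
iso1 <- osrmIsochrone(loc = c(-93.2223, 44.8848), breaks = 10)
# ...or set the server explicitly, with one of the allowed profile names
options(osrm.server = "https://routing.openstreetmap.de/", osrm.profile = "car")
iso2 <- osrmIsochrone(loc = c(-93.2223, 44.8848), breaks = 10)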
I am trying to run the following lines of code:
library(ape)     # read.nexus()
library(geiger)  # treedata()
tree <- read.nexus("~/Dropbox/Billfishes/Analysis/Phylogenies/Fish_12Tax_time_calibrated.tre");
characterTable <- read.csv("~/Dropbox/Billfishes/Analysis/CodingTableThresh95.csv", row.names = 1);
treeWData <- treedata(tree, characterTable, sort = T);
When I ran this code last week, it worked. I then updated all my packages as part of routine maintenance, and now I get this error:
Error in integer(max(oldnodes)) : vector size cannot be infinite
In addition: Warning message:
In max(oldnodes) : no non-missing arguments to max; returning -Inf
I've tried rolling back to previous versions of R (I'm currently running R version 3.4.0 in RStudio 1.0.143; geiger is version 2.0.6), reading the tree in as a Newick, and trying other tree files, always resulting in the same error. When I try using other tree and character datasets, I do not get the error.
Any ideas what this error means, and/or how to get this code to run without throwing this error?
After careful error checking, I discovered that the taxon names in the phylogeny file were separated by underscores, whereas the taxon names in the table used camel case. Thus, the error was thrown because no taxa in the phylogeny mapped to the character table.
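A sketch of one way to reconcile the two naming conventions (this assumes the names differ only in underscores and capitalisation, and that every taxon in the table is present in the tree; check idx for NAs otherwise):
canon <- function(x) tolower(gsub("_", "", x))  # canonical form: no underscores, lower case
idx <- match(canon(rownames(characterTable)), canon(tree$tip.label))
rownames(characterTable) <- tree$tip.label[idx]  # adopt the tree's spelling
treeWData <- treedata(tree, characterTable, sort = TRUE)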
I am trying to create a new factor column on an .xdf data set with the rxDataStep function in RevoScaleR:
rxDataStep(nyc_lab1
, nyc_lab1
, transforms = list(RatecodeID_desc = factor(RatecodeID, levels=RatecodeID_Levels, labels=RatecodeID_Labels))
, overwrite=T
)
where nyc_lab1 is a pointer to an .xdf file. I know that the file is fine because I imported it into a data table and successfully created the new factor column.
However, I get the following error message:
Error in doTryCatch(return(expr), name, parentenv, handler) :
ERROR: The sample data set for the analysis has no variables.
What could be wrong?
First, RevoScaleR has some warts when it comes to replacing data. In particular, overwriting the input file with the output can sometimes cause rxDataStep to fail for unknown reasons.
Even if it works, you probably shouldn't do it anyway. If there is a mistake in your code, you risk destroying your data. Instead, write to a new file each time, and only delete the old file once you've verified you no longer need it.
Second, any object you reference that isn't part of the dataset itself has to be passed in via the transformObjects argument. See ?rxTransform. Basically, the rx* functions are meant to be portable to distributed computing contexts, where the R session that runs the code isn't necessarily the same as your local session. In this scenario, you can't assume that objects in your global environment will exist in the session where the code executes.
Try something like this:
nyc_lab2 <- RxXdfData("nyc_lab2.xdf")
nyc_lab2 <- rxDataStep(nyc_lab1, nyc_lab2,
    transforms = list(
        RatecodeID_desc = factor(RatecodeID, levels = .levs, labels = .labs)
    ),
    transformObjects = list(
        .levs = RatecodeID_Levels,
        .labs = RatecodeID_Labels
    )
)
Or, you could use dplyrXdf which will handle all this file management business for you:
nyc_lab2 <- nyc_lab1 %>% factorise(RatecodeID)
I have a number of large dataframes in R which I was planning to store using Redis. I am totally new to Redis, but have been reading about it today and have been using the R package rredis.
I have been playing around with small data and have saved and retrieved small dataframes using the redisSet() and redisGet() functions. However, when I try to save my larger dataframes (the largest of which has 4.3 million rows and is 365 MB when saved as an .RData file) using redisSet('bigDF', bigDF), I get the following error message:
Error in doTryCatch(return(expr), name, parentenv, handler) :
ERR Protocol error: invalid bulk length
In addition: Warning messages:
1: In writeBin(v, con) : problem writing to connection
2: In writeBin(.raw("\r\n"), con) : problem writing to connection
Presumably this is because the dataframe is too large to save. I know that redisSet writes the dataframe as a string, which is perhaps not the best way to do it for large dataframes. Does anyone know of the best way to do this?
EDIT: I have recreated the error by creating a very large dummy dataframe:
bigDF <- data.frame(
'lots' = rep('lots',40000000),
'of' = rep('of',40000000),
'data' = rep('data',40000000),
'here'=rep('here',40000000)
)
Running redisSet('bigDF',bigDF) gives me the error:
Error in .redisError("Invalid agrument") : Invalid agrument
the first time; running it again immediately afterwards, I get the error:
Error in doTryCatch(return(expr), name, parentenv, handler) :
ERR Protocol error: invalid bulk length
In addition: Warning messages:
1: In writeBin(v, con) : problem writing to connection
2: In writeBin(.raw("\r\n"), con) : problem writing to connection
Thanks
In short: you cannot. Redis can store a maximum of 512 MB of data in a String value, and your serialized demo data frame is bigger than that:
> length(serialize(bigDF, connection = NULL)) / 1024 / 1024
[1] 610.352
Technical background:
serialize is called inside the package's .cerealize function, which redisSet invokes via rredis:::.redisCmd:
> rredis:::.cerealize
function (value)
{
if (!is.raw(value))
serialize(value, ascii = FALSE, connection = NULL)
else value
}
<environment: namespace:rredis>
Off-topic: why would you store such a big dataset in Redis anyway? Redis is meant for small key-value pairs. On the other hand, I had some success storing big R datasets in CouchDB and MongoDB (with GridFS) by adding the compressed RData file there as an attachment.
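That said, if you must keep the data in Redis, one possible workaround (a sketch only; the piece count is an assumption you would tune so each serialized piece stays well under the 512 MB limit) is to split the data frame row-wise across several keys:
n_pieces <- 4
piece_id <- cut(seq_len(nrow(bigDF)), n_pieces, labels = FALSE)
pieces <- split(bigDF, piece_id)  # split.data.frame splits by rows
for (i in seq_along(pieces)) {
    redisSet(paste0("bigDF:", i), pieces[[i]])
}
redisSet("bigDF:n", n_pieces)
# to restore, fetch the pieces and bind them back together
n <- redisGet("bigDF:n")
bigDF2 <- do.call(rbind, lapply(seq_len(n), function(i) redisGet(paste0("bigDF:", i))))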
Yesterday I posted this question on Stats Exchange and, based on the response I got, I decided to do some analysis using R's src() function, which is part of the "sensitivity" package.
I installed the package with no trouble, and then tried the following command:
sens <- src(seminars, REV, rank=TRUE, nboot=100)
Here, sens is a new variable to store the results of the test, seminars is a data frame that I imported from a CSV file using the read.csv() command, and REV is the name of a variable/column in seminars, my desired response variable.
When I ran the command, I got the following error:
Error in data.frame(Y = y, X) : object 'REV' not found
Any thoughts?
From the documentation of src:
y: a vector containing the responses corresponding to the design of experiments (model output variables).
The input needs to be a vector (apparently), and you're attempting to pass in a name (and not even quoting the name, at that). Since REV isn't defined in the global environment (I'm guessing, based on the error message), R doesn't know what to do.
From reading the documentation, it sounds like what you want to do is pass seminars[, -which(colnames(seminars) == "REV")] (just the design matrix; you don't want to include the responses) in as X, and seminars[, "REV"] in as y.
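Concretely, a minimal sketch (assuming seminars contains only the predictor columns plus REV):
library(sensitivity)
X <- seminars[, setdiff(colnames(seminars), "REV")]  # design matrix without the response
y <- seminars[, "REV"]                               # response vector
sens <- src(X, y, rank = TRUE, nboot = 100)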
This error is also linked to the fact that the data frame X = seminars can include factor columns with zero variance, which produce an error while the regression coefficients are constructed. You can remove them first, as they don't contribute to the variance of the output.