I have an R data file, and inside it there is an object called results_NN3 (a list of length 111). I tried to convert results_NN3 to JSON so I can use it in Python, but I got an error. I am trying to do it this way:
> dados_json <- toJSON(results_NN3)
and the result is:
Error in toJSON(results_NN3) : unable to convert R type 6 to JSON
Sorry if this question is somehow wrong; I do not know much R, but I need that file in JSON so I can work with it in Python for a paper. Thanks.
I had success using the force = TRUE argument:
jsonlite::toJSON(results_NN3, force = TRUE)
{"NN3.001":{"rank":[{"AICc":-69.9076,"AIC":-70.7772,"BIC":-63.0499,"logLik":39.3886,"MSE":419053.9795,"NMSE":1.7235,"MAPE":9.4205,"sMAPE":0.0881,"MaxError":1190.4399,"rank.position.sum":1,"_row":"LT"},{"AICc":-154.9789,"AIC":-155.8485,"BIC":-148.1212,"logLik":81.9242,"MSE":419053.9795,"NMSE":1.7235,"MAPE":9.4205,"sMAPE":0.0881,"MaxError":1190.4399,"rank.position.sum":2,"_row":"LT10"},{"AICc":626.1925,"AIC":625.6344,"BIC":631.1848,"logLik":-309.8172,"MSE":421498.4547,"NMSE":1.7335,"MAPE":9.6515,"sMAPE":0.092,"MaxError":1116.7813,"rank.position.sum":3,"_row":"MAS"},{"AICc":816.5476,"AIC":815.2142,"BIC":824.8734,"logLik":-402.6071,"MSE":463819.2847,"NMSE":1.9076,"MAPE":9.9746,"sMAPE":0.0928,"MaxError":1260.0692,"rank.position.sum":4,"_row":"BCT"},{"AICc":816.5476,"AIC":815.2142,"BIC":824.8734,"logLik":-402.6071,"MSE":463819.2847,"NMSE":1.9076,"MAPE":9.9746,"sMAPE":0.0928,"MaxError":1260.0692,"rank.position.sum":5.5,"_row":"original"},{"AICc":816.5476,"AIC":815.2142,"BIC":...
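If the end goal is to hand the data to Python, it may be easier to write the JSON straight to a file instead of keeping it in an R string. A minimal sketch, assuming jsonlite is installed and the working directory is writable:

```r
library(jsonlite)

# force = TRUE coerces unsupported R types (closures, environments, ...)
# to something JSON-representable, dropping whatever cannot be converted
write_json(results_NN3, "results_NN3.json", force = TRUE)
```

On the Python side, the file can then be read with `json.load(open("results_NN3.json"))`.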
I'm working on a project for school that requires me to combine ~600 JSON files into one CSV file. I have minimal coding experience in R, and I keep getting errors that I can't resolve, probably because of that. Here's the code I'm using:
filenames <- list.files(pattern="*.json")
myJson <- lapply(filenames, function(x) fromJSON(file=x))
This returns a list of the JSON contents of all my files (hooray), and it's where things break down. If I use:
myJson <- toJSON(myJson)
to try converting all my list of JSON data into one JSON, I get this error:
Error in toJSON(myJson) : unable to escape string. String is not utf8
If I use unlist(myJson), I lose all the columns and get a useless single column of all my data. Any assistance would be hugely appreciated! Thank you.
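Since the target is a CSV, one way to sidestep toJSON() entirely is to turn each parsed file into a data frame and bind the rows. A sketch, assuming each JSON file parses to a flat record with the same fields (an assumption; files with differing fields would need something like dplyr::bind_rows instead of rbind):

```r
library(jsonlite)

filenames <- list.files(pattern = "*.json")

# parse each file into a one-row data frame, then stack them row-wise
all_rows <- do.call(rbind, lapply(filenames, function(x) {
  as.data.frame(jsonlite::fromJSON(x), stringsAsFactors = FALSE)
}))

write.csv(all_rows, "combined.csv", row.names = FALSE)
```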
I think I have exhausted the entire internet looking for an example or answer to my query about using an h2o MOJO model to predict within RShiny. We have created a bunch of models and wish to predict scores in an RShiny front end where users enter values. However, with the following code to implement the prediction, we get this error:
Warning: Error in checkForRemoteErrors: 6 nodes produced errors; first
error: No method asJSON S3 class: H2OFrame
dataInput <- dfName
dataInput <- toJSON(dataInput)
rawPred <- as.data.frame(h2o.predict_json(model= "folder/mojo_model.zip", json = dataInput, genmodelpath = "folder/h2o-genmodel.jar"))
Can anyone help with some pointers?
Thanks,
Siobhan
This is not a Shiny issue. The error indicates that you're trying to use toJSON() on an H2OFrame (instead of an R data.frame), which will not work because the jsonlite library does not support that.
Instead you can convert the H2OFrame to a data.frame using:
dataInput <- toJSON(as.data.frame(dataInput))
I can't guarantee that toJSON() will generate the correct input for h2o.predict_json() since I have not tried that, so you will have to try it out yourself. Note that the only way this may work is if this is a 1-row data.frame because the h2o.predict_json() function expects a single row of data, encoded as JSON. If you're trying to score multiple records, you'd have to loop over the rows. If for some reason toJSON() doesn't give you the right format, then you can use a function I wrote in this post here to create the JSON string from a data.frame manually.
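Scoring multiple records might then look roughly like this (an untested sketch, reusing the model and genmodel paths from the question, and assuming dataInput is a plain data.frame):

```r
library(jsonlite)

# score one row at a time: h2o.predict_json() expects a single
# JSON-encoded record per call
preds <- lapply(seq_len(nrow(dataInput)), function(i) {
  row_json <- jsonlite::toJSON(as.list(dataInput[i, ]), auto_unbox = TRUE)
  as.data.frame(h2o.predict_json(model = "folder/mojo_model.zip",
                                 json = row_json,
                                 genmodelpath = "folder/h2o-genmodel.jar"))
})

rawPred <- do.call(rbind, preds)
```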
There is a ticket open to create a better version of h2o.predict_json() that will support making predictions from a MOJO on data frames (with multiple rows) without having to convert to JSON first. This will make it so you can avoid dealing with JSON altogether.
An alternative is to use a H2O binary model instead of a MOJO, along with the standard predict() function. The only requirement here is that the model must be loaded into H2O cluster memory.
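With a binary model, the whole JSON step disappears; a sketch, with a hypothetical model path:

```r
library(h2o)
h2o.init()

# load the binary model into cluster memory (path is hypothetical)
model <- h2o.loadModel("folder/my_binary_model")

# predict() works directly on an H2OFrame, any number of rows
rawPred <- as.data.frame(h2o.predict(model, as.h2o(dataInput)))
```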
The following works now, using the JSON formatting from the first two lines and single quotes around the value with spaces:
df <- data.frame(V1 = 1, V2 = 1, CMPNY_EL_IND = 1, UW_REGION_NAME = "'LONDON & SE'")
dfstr <- sapply(1:ncol(df), function(i) paste(paste0('\"', names(df)[i], '\"'), df[1,i], sep = ':'))
json <- paste0('{', paste0(dfstr, collapse = ','), '}')
dataPredict <- as.data.frame(h2o.predict_json(model = "D:\\GBM_model_0_CMP.zip", json = json, genmodelpath = "D:\\h2o-genmodel.jar", labels = TRUE))
I have merged a bunch of csv files but can't get them to export to one file correctly. What am I doing wrong? The data shows up in my console, but I get an error that says:
Error in as.data.frame.default(x[[i]], optional = TRUE) :
  cannot coerce class '"function"' to a data.frame
setwd("c:/users/adam/documents/r data/NBA/DK/TEMP")
filenames <- list.files("c:/users/adam/documents/r data/NBA/DK/TEMP")
do.call("rbind",lapply(filenames, read.csv, header = TRUE))
write.csv(read.csv, file ='Lineups.csv')
You did not assign the result of the do.call function to anything. This is a fairly common R beginner error, and comes from not quite grasping the functional programming paradigm: results need to be assigned to R names or they just get garbage-collected.
The error is actually from the code that you didn't put in a code block:
write.csv(read.csv, file ='Lineups.csv')
The 'read.csv' was presumably your intended name for the result of the do.call operation, except that by default it is a function name rather than what you expected. You could assign the do.call result to the name 'read.csv', but doing so is very poor practice. Choose a more descriptive name like 'TEMP_files_appended'.
TEMP_files_appended <- do.call("rbind",lapply(filenames, read.csv, header = TRUE))
write.csv(TEMP_files_appended, file ='Lineups.csv')
(I will observe that using header=TRUE for read.csv is not needed since that is the default for that function.)
I am using the Alteryx R Tool to sign an Amazon HTTP request. To do so, I need the hmac function from the digest package.
I'm using a text input tool that includes the key and a datestamp.
Key= "foo"
datestamp = "20120215"
Here's the issue. When I run the following script:
the.data <- read.Alteryx("1", mode="data.frame")
write.Alteryx(base64encode(hmac(the.data$key,the.data$datestamp,algo="sha256",raw = TRUE)),1)
I get an incorrect result when compared to when I run the following:
write.Alteryx(base64encode(hmac("foo","20120215",algo="sha256",raw = TRUE)),1)
The difference being when I hardcode the values for the key and object I get the correct result. But if use the variables from the R data frame I get incorrect output.
Does the data frame alter the data in some way? Has anyone come across this when working with the R Tool in Alteryx?
Thanks for your input.
The issue appears to be that when creating the data frame, your character variables are converted to factors. The way to fix this with the data.frame constructor function is
the.data <- data.frame(Key="foo", datestamp="20120215", stringsAsFactors=FALSE)
I haven't used read.Alteryx but I assume it has a similar way of achieving this.
Alternatively, if your data frame has already been created, you can convert the factors back into character:
write.Alteryx(base64encode(hmac(
as.character(the.data$Key),
as.character(the.data$datestamp),
algo="sha256",raw = TRUE)),1)
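A quick way to check whether factors are the culprit (note that on R versions before 4.0, data.frame() defaulted to stringsAsFactors = TRUE, which is what the fix above addresses):

```r
# mimic the old default explicitly
the.data <- data.frame(Key = "foo", datestamp = "20120215",
                       stringsAsFactors = TRUE)

class(the.data$Key)                # "factor", not "character"
class(as.character(the.data$Key))  # "character" - safe to pass to hmac()
```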
I have a text file of 4.5 million rows and 90 columns to import into R. Using read.table I get the cannot allocate vector of size... error message, so I am trying to import using the ff package before subsetting the data to extract the observations which interest me (see my previous question for more details: Add selection crteria to read.table).
So, I use the following code to import:
test<-read.csv2.ffdf("FD_INDCVIZC_2010.txt", header=T)
but this returns the following error message :
Error in read.table.ffdf(FUN = "read.csv2", ...) :
only ffdf objects can be used for appending (and skipping the first.row chunk)
What am I doing wrong?
Here are the first 5 rows of the text file:
CANTVILLE.NUMMI.AEMMR.AGED.AGER20.AGEREV.AGEREVQ.ANAI.ANEMR.APAF.ARM.ASCEN.BAIN.BATI.CATIRIS.CATL.CATPC.CHAU.CHFL.CHOS.CLIM.CMBL.COUPLE.CS1.CUIS.DEPT.DEROU.DIPL.DNAI.EAU.EGOUL.ELEC.EMPL.ETUD.GARL.HLML.ILETUD.ILT.IMMI.INAI.INATC.INFAM.INPER.INPERF.IPO ...
1 1601;1;8;052;54;051;050;1956;03;1;ZZZZZ;2;Z;Z;Z;1;0;Z;4;Z;Z;6;1;1;Z;16;Z;03;16;Z;Z;Z;21;2;2;2;Z;1;2;1;1;1;4;4;4,02306147485403;ZZZZZZZZZ;1;1;1;4;M;22;32;AZ;AZ;00;04;2;2;0;1;2;4;1;00;Z;54;2;ZZ;1;32;2;10;2;11;111;11;11;1;2;ZZZZZZ;1;2;1;4;41;2;Z
2 1601;1;8;012;14;011;010;1996;03;3;ZZZZZ;2;Z;Z;Z;1;0;Z;4;Z;Z;6;2;8;Z;16;Z;ZZ;16;Z;Z;Z;ZZ;1;2;2;2;Z;2;1;1;1;4;4;4,02306147485403;ZZZZZZZZZ;3;3;3;1;M;11;11;ZZ;ZZ;00;04;2;2;0;1;2;4;1;14;Z;54;2;ZZ;1;32;Z;10;2;23;230;11;11;Z;Z;ZZZZZZ;1;2;1;4;41;2;Z
3 1601;1;8;006;05;005;005;2002;03;3;ZZZZZ;2;Z;Z;Z;1;0;Z;4;Z;Z;6;2;8;Z;16;Z;ZZ;16;Z;Z;Z;ZZ;1;2;2;2;Z;2;1;1;1;4;4;4,02306147485403;ZZZZZZZZZ;3;3;3;1;M;11;11;ZZ;ZZ;00;04;2;2;0;1;2;4;1;14;Z;54;2;ZZ;1;32;Z;10;2;23;230;11;11;Z;Z;ZZZZZZ;1;2;1;4;41;2;Z
4 1601;1;8;047;54;046;045;1961;03;2;ZZZZZ;2;Z;Z;Z;1;0;Z;4;Z;Z;6;1;6;Z;16;Z;14;974;Z;Z;Z;16;2;2;2;Z;2;2;4;1;1;4;4;4,02306147485403;ZZZZZZZZZ;2;2;2;1;M;22;32;MN;GU;14;04;2;2;0;1;2;4;1;14;Z;54;2;ZZ;2;32;1;10;2;11;111;11;11;1;4;ZZZZZZ;1;2;1;4;41;2;Z
5 1601;2;9;053;54;052;050;1958;02;1;ZZZZZ;2;Z;Z;Z;1;0;Z;2;Z;Z;2;1;2;Z;16;Z;12;87;Z;Z;Z;22;2;1;2;Z;1;2;3;1;1;2;2;4,21707670353782;ZZZZZZZZZ;1;1;1;2;M;21;40;GZ;GU;00;07;0;0;0;0;0;2;1;00;Z;54;2;ZZ;1;30;2;10;3;11;111;ZZ;ZZ;1;1;ZZZZZZ;2;2;1;4;42;1;Z
I encountered a similar problem related to reading csv into ff objects. On using
read.csv2.ffdf(file = "FD_INDCVIZC_2010.txt")
instead of the implicit positional call
read.csv2.ffdf("FD_INDCVIZC_2010.txt")
I got rid of the error. Passing the file path explicitly by name seems to be required by the ff functions.
You could try the following code:
read.csv2.ffdf("FD_INDCVIZC_2010.txt",
               sep = ";",
               VERBOSE = TRUE,
               first.rows = 100000,
               next.rows = 200000,
               header = TRUE)
Judging by the sample rows above, the file is semicolon-delimited, so sep = ";" should be the right separator.
Sorry, I only came across the question just now. With the VERBOSE option you can actually see how much time each block of data takes to be read. Hope this helps.
If possible, try to filter the data at the OS level, that is, before it is loaded into R. The simplest way to do this from R is to use a combination of pipe() and a grep command:
textpipe <- pipe('grep XXXX file.name')
mutable <- read.table(textpipe)
You can use grep, awk, sed and basically all the machinery of unix command tools to add the necessary selection criteria and edit the csv files before they are imported into R. This works very fast and by this procedure you can strip unnecessary data before R begins to read them from pipe.
This works well under Linux and Mac; on Windows you may need to install Cygwin to make it work, or use some other Windows-specific utilities.
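For instance, with the semicolon-delimited file shown above, a pre-filter on the first column might look like the following sketch (the value 1601 is just an example criterion taken from the sample rows; adjust to your own selection):

```r
# keep only rows whose first ;-separated field equals 1601,
# filtering in awk before R ever sees the data
textpipe <- pipe("awk -F';' '$1 == \"1601\"' FD_INDCVIZC_2010.txt")
mutable  <- read.table(textpipe, sep = ";")
```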
Perhaps you could try the following code:
read.table.ffdf(x = NULL, file = 'your/file/path', sep = ';')