I used Rblpapi to retrieve historical data; the result is a list of data.frames. I now want to write that list into a MySQL database. I tried
dbWriteTable(con, name = "db_all", data, header = T)
which gives me:
"unable to find an inherited method for function ‘dbWriteTable’ for
signature ‘"MySQLConnection", "character", "list"’"
I suspect the problem is the list of data.frames. Is there an easy way to insert this data from R into MySQL?
Did you try:
df <- do.call("rbind", data)
dbWriteTable(con, name = "db_all", value = df)  # note: header= is a read.table argument, not used by dbWriteTable
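If the data.frames in the list don't all share exactly the same columns, rbind() will fail; a hedged alternative is data.table::rbindlist(), which can pad missing columns with NA (fill = TRUE is an assumption about what you want):

library(data.table)
df <- rbindlist(data, fill = TRUE)   # pads columns missing from some frames with NA
dbWriteTable(con, name = "db_all", value = as.data.frame(df))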
I am trying to import DynamoDB data into R using the paws.database library. I can successfully retrieve the required attribute into R (using the scan operation), but the imported data comes back as a nested list, i.e. in [[]] form. I want to format the imported DynamoDB attribute into a data.frame and later plot it with ggplot2. I have tried options such as
df <- ldply(list_a, data.frame)
df <- data.frame(matrix(unlist(list_a), nrow = length(list_a), byrow = TRUE),
                 stringsAsFactors = FALSE)
df <- as.data.frame(do.call(cbind, list_a))
so far, and have been unable to convert the data into a proper data.frame. The final error I get in ggplot is:
Error: data must be a data frame, or other object coercible by fortify(), not a list
Could anyone please help?
See this similar issue.
I'm also using paws. Here's what I did to work with a small DynamoDB table:
library(data.table)

dyna <- paws::dynamodb()
Table <- dyna$scan(TableName = "table_name")
newtable <- rbindlist(Table$Items, fill = TRUE)
Then I create a new data.frame by using unlist() on each column of newtable.
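For example, assuming each column unlists cleanly to a vector of the same length (DynamoDB wraps every value in a one-element list), something like:

# Flatten each list-column into an atomic vector, then rebuild the
# data.frame; column names are whatever scan() returned.
df <- as.data.frame(lapply(newtable, unlist), stringsAsFactors = FALSE)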
I want to import data in JSON format from MongoDB into R. I am using the mongolite package to connect MongoDB to R, but when I use mongo$find('{}') the data is returned as a data.frame. Please check my R code:
library(mongolite)

mongo <- mongolite::mongo(collection = "Attributes", db = "Test",
                          url = "mongodb://IP:PORT", verbose = TRUE)
df1 <- mongo$find('{}')
df1 is stored as a data.frame, but I want the data in JSON format only. Please give me your suggestions.
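One direction I have been experimenting with (I am not sure it preserves the exact structure) is mongolite's iterate() interface, which returns each document as a plain list rather than a flattened data.frame:

it <- mongo$iterate('{}')
doc <- it$one()                                    # first document as a nested list
json <- jsonlite::toJSON(doc, auto_unbox = TRUE)   # back to JSON text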
Edit:
Actual JSON structure converted into a list.
But when I load data from MongoDB into R using the mongolite package, the data is stored as a data.frame, and if I then convert it to a list, the structure changes and a few extra columns are inserted into the list.
Please let me know how to solve this issue.
Thanks
SJB
I think I have exhausted the entire internet looking for an example / answer to my query about using an h2o MOJO model to predict within RShiny. We have created a number of models and wish to predict scores in an RShiny front end where users enter values. However, with the following code to run the prediction we get this error:
Warning: Error in checkForRemoteErrors: 6 nodes produced errors; first
error: No method asJSON S3 class: H2OFrame
dataInput <- dfName
dataInput <- toJSON(dataInput)
rawPred <- as.data.frame(h2o.predict_json(model = "folder/mojo_model.zip",
                                          json = dataInput,
                                          genmodelpath = "folder/h2o-genmodel.jar"))
Can anyone help with some pointers?
Thanks,
Siobhan
This is not a Shiny issue. The error indicates that you're trying to use toJSON() on an H2OFrame (instead of an R data.frame), which will not work because the jsonlite library does not support that.
Instead you can convert the H2OFrame to a data.frame using:
dataInput <- toJSON(as.data.frame(dataInput))
I can't guarantee that toJSON() will generate the correct input for h2o.predict_json(), since I have not tried it, so you will have to test it yourself. Note that this can only work with a 1-row data.frame, because h2o.predict_json() expects a single row of data encoded as JSON. If you're trying to score multiple records, you'd have to loop over the rows. If for some reason toJSON() doesn't give you the right format, you can use a function I wrote in this post to create the JSON string from a data.frame manually.
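As a rough illustration of that loop (a sketch, untested; note that toJSON() wraps a single row as [{...}], so the surrounding brackets may need stripping before h2o.predict_json() will accept it):

# Score each row of df separately; the paths are placeholders.
preds <- lapply(seq_len(nrow(df)), function(i) {
  json <- jsonlite::toJSON(df[i, , drop = FALSE])
  json <- gsub("^\\[|\\]$", "", json)   # unwrap the one-element JSON array
  h2o.predict_json(model = "folder/mojo_model.zip", json = json,
                   genmodelpath = "folder/h2o-genmodel.jar")
})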
There is a ticket open to create a better version of h2o.predict_json() that will support making predictions from a MOJO on data frames (with multiple rows) without having to convert to JSON first. This will make it so you can avoid dealing with JSON altogether.
An alternative is to use a H2O binary model instead of a MOJO, along with the standard predict() function. The only requirement here is that the model must be loaded into H2O cluster memory.
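A minimal sketch of that binary-model route (the model path here is an assumption; the model must first have been saved with h2o.saveModel()):

library(h2o)
h2o.init()
model <- h2o.loadModel("folder/GBM_model_binary")          # saved binary model
pred  <- as.data.frame(h2o.predict(model, as.h2o(dfName)))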
The following works now, using the JSON formatting from the first two lines and single quotes around the variable with spaces.
df <- data.frame(V1 = 1, V2 = 1, CMPNY_EL_IND = 1, UW_REGION_NAME = "'LONDON & SE'")
dfstr <- sapply(1:ncol(df), function(i) paste(paste0('"', names(df)[i], '"'), df[1, i], sep = ':'))
json <- paste0('{', paste0(dfstr, collapse = ','), '}')
dataPredict <- as.data.frame(h2o.predict_json(model = "D:\\GBM_model_0_CMP.zip",
                                              json = json,
                                              genmodelpath = "D:\\h2o-genmodel.jar",
                                              labels = TRUE))
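Wrapped as a small helper for reuse in the Shiny server (a sketch of the same two formatting lines above, parameterised by row):

# Build the one-row JSON string h2o.predict_json() expects.
row_to_json <- function(df, row = 1) {
  pairs <- sapply(names(df), function(nm) paste0('"', nm, '":', df[row, nm]))
  paste0('{', paste0(pairs, collapse = ','), '}')
}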
I am using R version 3.1.2 in RStudio with the XLConnect package to load, read, and write multiple xlsx files. I can do this by duplicating code and creating multiple objects, but I am trying to do it in a single loop (all files are in the same folder). Please see the examples below.
I can do this by listing each file, but want to do it using a loop:
tstA <- loadWorkbook("\\\\FS01\\DEPARTMENTFOLDERS$\\tst\\2015\\Apr\\DeptA.xlsx")
tstB <- loadWorkbook("\\\\FS01\\DEPARTMENTFOLDERS$\\tst\\2015\\Apr\\DeptB.xlsx")
This is the way I'm trying to do it, but I get an error:
dept <- c("DeptA", "DeptB", "DeptC")
for (dp in 1:length(dept)) {
  dept[dp] <- loadWorkbook("\\\\FS01\\DEPARTMENTFOLDERS$\\tst\\2015\\Apr\\", dept[dp], ".xlsx")
}
After this I want to use the readWorksheet function from XLConnect.
Apologies for the lame question, but I am struggling to work out how best to do this.
Thanks
You can read all the files into a list in one operation as follows (adjust pattern and sheet as needed to get the files/sheets you want):
library(XLConnect)

path <- "\\\\FS01\\DEPARTMENTFOLDERS$\\tst\\2015\\Apr\\"
df.list <- lapply(list.files(path, pattern = "xlsx$"), function(i) {
  readWorksheetFromFile(paste0(path, i), sheet = "YourSheetName")
})
If you want to combine all of the data frames into a single data frame, you can do this:
df = do.call(rbind, df.list)
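If you also need to know which file each block of rows came from, naming the list before the rbind keeps that information in the row names (same path and pattern assumptions as above):

files <- list.files(path, pattern = "xlsx$")
names(df.list) <- files            # row names of df will now encode the source file
df <- do.call(rbind, df.list)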
I have to load many files and transform their data. Each file contains only one data.table, but the tables have various names.
I would like to run a single script over all of the files. To do so, I must assign the unknown data.table to a common name, say blob.
What is the R way of doing this? At present, my best guess (which feels like a hack, but works) is to load the data.table into a new environment and then: assign('blob', get(objects(envir = newEnv)[1], envir = newEnv)).
In a reproducible context this is:
newEnv <- new.env()
assign('a', 1:10, envir = newEnv)
assign('blob', get(objects(envir = newEnv)[1], envir = newEnv))
Is there a better way?
The R way is to create a single object, i.e. a single list of data tables.
Here is some pseudocode that contains three steps:
1. Use list.files() to create a list of all files in a folder.
2. Use lapply() and read.csv() to read your files and create a list of data frames. Replace read.csv() with read.table() or whatever is appropriate for your data.
3. Use lapply() again, this time with as.data.table(), to convert the data frames to data tables.
The pseudocode:
library(data.table)

filenames <- list.files("path/to/files", full.names = TRUE)
dat <- lapply(filenames, read.csv)
dat <- lapply(dat, as.data.table)
Your result should be a single list, called dat, containing a data table for each of your original files.
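As a side note, data.table's fread() reads straight into data.tables, which would collapse the last two steps into one (same assumed folder as above):

dat <- lapply(filenames, fread)   # each element is already a data.table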
I assume that you saved the data.tables using save(), something like this:
library(data.table)

d1 <- data.table(value = 1:10)
save(d1, file = "data1.rdata")
and your problem is that when you load the file you don't know the name (here: d1) that you used when saving the file. Correct?
I suggest you use instead saveRDS() and readRDS() for saving/loading single objects:
d1 <- data.table(value=1:10)
saveRDS(d1, file="data1.rds")
blob <- readRDS("data1.rds")
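And if you have a whole folder of such files, you can read them all into one named list (the path and pattern here are assumptions):

files <- list.files("path/to/files", pattern = "\\.rds$", full.names = TRUE)
blobs <- lapply(files, readRDS)    # one data.table per file
names(blobs) <- basename(files)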