How to index data using index_create() in the elastic package in R

This is my code in R to index iris data.
library(elastic)
iris <- datasets::iris
body <- list(data = list(iris))
index_create(index = 'iris', body = body)
but it gives the following error:
Error: 400 - Failed to parse content to map.
Please explain how to pass data in the body of index_create().

elastic maintainer here. index_create() only creates an index, as the function name indicates; it does not also insert data into the index. From your example you probably want:
index_create(index = "iris")
docs_bulk(iris, "iris")
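A fuller sketch of that flow, assuming a local Elasticsearch node on localhost:9200 and a recent elastic version (>= 1.0.0), where each call takes an explicit connection object:

```r
library(elastic)

x <- connect(host = "127.0.0.1", port = 9200)   # connection to the local node

if (index_exists(x, "iris")) index_delete(x, "iris")  # start from a clean slate
index_create(x, index = "iris")                  # create the (empty) index
docs_bulk(x, datasets::iris, index = "iris")     # then bulk-insert the rows

count(x, index = "iris")   # document count for the index (150 rows of iris)
```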

Related

RSQLite only pretends to write to a database table with a bracketed name

I want to write data to a table called [foo/bar] inside a SQLite database created through RSQLite. I wrote the following function to save answers to disk.
library(RSQLite)

save_to_disk = function(data) {
  con = dbConnect(SQLite(), "responses.db")
  path = "[foo/bar]"
  message("Writing the data now!")
  message("The database file size was: ", file.size("responses.db"))
  dbWriteTable(con, path, data, append = TRUE)
  message("Table append complete!")
  message("The database file size is now: ", file.size("responses.db"))
  dbDisconnect(con)
}
However, when I try to pass this function data I see:
Writing the data now!
The database file size was: 8192
Table append complete!
The database file size is now: 8192
If I change the table name to dummy and repeat the process, then instead I see:
Writing the data now!
The database file size was: 8192
Table append complete!
The database file size is now: 12288
It seems as though RSQLite doesn't like my table name for some reason. My understanding was that this was a valid table name since it was wrapped in [...]. In fact, outside of that function I am perfectly able to write to such a table. In other words, this works from the REPL:
test = data.frame(submitted = as.integer(Sys.time()),
                  respondent = "Alice",
                  subject = "Alice",
                  question = "name",
                  part = "first",
                  order = 1L,
                  answer = "Alice")
dbWriteTable(con, "[foo/bar]", test, append = TRUE)
After that I can use dbReadTable to see what I had just entered. Why doesn't it work the same way in a function?
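One thing worth ruling out: the append may in fact be succeeding even when the file size does not move, because SQLite grows its file in whole pages and can reuse free space within existing pages, so file.size() is a poor success indicator. A small self-contained sketch (in-memory database, hypothetical one-column data frame) that checks the row count instead:

```r
library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), ":memory:")   # throwaway in-memory database
df  <- data.frame(answer = "Alice")

dbWriteTable(con, "[foo/bar]", df)                 # create the table
dbWriteTable(con, "[foo/bar]", df, append = TRUE)  # append a second row

dbListTables(con)                     # shows how the table name was stored
nrow(dbReadTable(con, "[foo/bar]"))   # 2, so the append itself worked
dbDisconnect(con)
```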

How to scrape from investing.com using 'rusquant' package in R

I posted a similar question before (it is closed now, and I deleted it). From that I came to know about the 'rusquant' package, and I thank the person who introduced me to it. I tried the following code in several unsuccessful attempts to scrape stock data from investing.com:
library(rusquant)
all_stocks <- getSymbolList(src = "Investing", country = "Bangladesh")
head(all_stocks, 4)
from_date <- as.Date("2021-01-01")
grameenphone <- getSymbols('GRAE', src = 'Investing', from = from_date, auto.assign = FALSE)
grameenphone <- getSymbols.Investing('GRAE', from = from_date, auto.assign = FALSE)
Now, the getSymbolList function works, but when I try to scrape a particular stock, following the method from https://github.com/arbuzovv/rusquant, I get the following error:
grameenphone <- getSymbols('GRAE', src = 'Investing', from = from_date, auto.assign = F)
‘getSymbols’ currently uses auto.assign=TRUE by default, but will
use auto.assign=FALSE in 0.5-0. You will still be able to use
‘loadSymbols’ to automatically load data. getOption("getSymbols.env")
and getOption("getSymbols.auto.assign") will still be checked for
alternate defaults.
This message is shown once per session and may be disabled by setting
options("getSymbols.warning4.0"=FALSE). See ?getSymbols for details.
Error in curl::curl_fetch_memory(url, handle = handle) :
Unrecognized content encoding type. libcurl understands deflate, gzip content encodings.
Then I tried the getSymbols.Investing function, but I get the following error:
grameenphone <- getSymbols.Investing('GRAE', from = from_date, auto.assign = F)
Error in missing(verbose) : 'missing' can only be used for arguments
Please help me out here. I'm new to coding, and I apologize if anything silly happened here. Thanks in advance.
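A hedged diagnostic for the curl error: "Unrecognized content encoding type" often means the server replied with an encoding (e.g. brotli) that the local libcurl build does not support. Since getSymbols does not expose the curl handle, one way to confirm this theory is a raw request that restricts Accept-Encoding to the encodings libcurl says it understands (the URL here is just investing.com's front page, for illustration):

```r
library(curl)

h <- new_handle()
# Ask the server for only the encodings this libcurl build can decode
handle_setheaders(h, "Accept-Encoding" = "gzip, deflate")

res <- curl_fetch_memory("https://www.investing.com", handle = h)
res$status_code   # if this succeeds, the encoding mismatch is the culprit
```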

Batch-reading mesh3D objects with the 'file2mesh' function from the 'Morpho' package

I am trying to batch-read a series of ply meshes (as mesh3d objects) in order to slide semilandmarks with 'slider3d'. However, when I try to use a loop to read those files, I am told that the object 'Mesh' could not be found. This suggests that a mesh object must already exist before it can be filled in a loop. How do I solve this?
Is there a simple function in the 'rgl' package that I overlooked?
Or is there an alternative to read all 3D-meshes in one folder, and create a list that I can use to match files downstream?
library(Morpho)
FilesPLY <- list.files("HumerusPLY", pattern = "*.ply")
for (j in 1:length(FilesPLY)) {
  Mesh[j] <- file2mesh(paste("HumerusPLY/", FilesPLY[j], sep = ""), clean = TRUE, readcol = FALSE)
}
Error: Object 'Mesh' could not be found.
One way to solve the problem is to create a series of placeholder objects first and then read the meshes into them. Oddly enough, the first read loop results in an error, but it sets things up so that the second read works. I don't understand the problem behind it, but it works. Here is the temporary solution:
library(Morpho)
# Read ply list from subfolder "HumerusPLY/"; create a series of Mesh objects and fill them
FilesPLY <- list.files("HumerusPLY/", pattern = "*.ply")
for (i in 1:length(FilesPLY)) {
  assign(paste("Mesh", i, sep = ""), i)
}
meshlist <- c(1:length(FilesPLY))
for (i in 1:length(meshlist)) {
  meshlist[i] <- paste("Mesh", meshlist[i], sep = "")
}
meshlist <- noquote(meshlist)
ls()
## Read ply files; the second loop fixes an error, but does not work without the first
for (j in 1:length(meshlist)) {
  meshlist[j] <- file2mesh(paste("HumerusPLY/", FilesPLY[j], sep = ""), clean = TRUE, readcol = FALSE)
}
for (j in 1:length(meshlist)) {
  meshlist[[j]] <- file2mesh(paste("HumerusPLY/", FilesPLY[j], sep = ""), clean = TRUE, readcol = FALSE)
}
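For what it's worth, the placeholder dance can be avoided entirely. A simpler sketch, assuming each .ply file loads as a single mesh3d object: lapply() builds the list directly, so nothing needs to exist beforehand.

```r
library(Morpho)

# full.names = TRUE returns ready-to-use paths, so no paste() is needed
files    <- list.files("HumerusPLY", pattern = "\\.ply$", full.names = TRUE)
meshlist <- lapply(files, file2mesh, clean = TRUE, readcol = FALSE)
names(meshlist) <- basename(files)   # keep file names for downstream matching
```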

Using for loop to write data frame as dta file in R

I have data frames in a list a, and I want to use a loop to save each of them both as an .rda file and as a Stata .dta file. I don't understand why I get the error message that the object cannot be found:
for (f in a) {
  for (name in 1:length(filenames)) {
    save(as.data.frame(f), file = paste("~/Dropbox/Data_Insert/Panels/", name, end_rda, sep = ""))
    write.dta(as.data.frame(f), file = paste("~/Dropbox/Data_Insert/Panels/", name, end_dta, sep = ""))
  }
}
Error in save(as.data.frame(f), file = paste("~/Dropbox/Data_Insert/Panels/", :
object ‘as.data.frame(f)’ not found
So f would be indexing the data frames in the list? I used as.data.frame(f) because when I only used f, I got the message:
The object "dataframe" must have class data.frame
I changed the code to for (f in a), but it still returns an error saying that as.data.frame(f) is not found.
save() stores objects by name, so it cannot accept an expression like as.data.frame(f); assign the result to a variable first. I think this is what you are trying to do, assuming a is a list of data frames and filenames is a character vector of the same length.
library(foreign)  # provides write.dta

for (i in 1:length(a)) {
  to_save = as.data.frame(a[[i]])
  save(to_save, file = paste0("~/Dropbox/Data_Insert/Panels/", filenames[i], end_rda))
  write.dta(to_save, file = paste0("~/Dropbox/Data_Insert/Panels/", filenames[i], end_dta))
}
Note that save() preserves the name of the R object, so when you load any of these files it will appear in the workspace under the name to_save, which is probably not what you want. For individual R objects I would strongly encourage you to use saveRDS() and create .rds files instead of save(). See, e.g., Ricardo's answer to this question for an example of Rda vs RDS.
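To illustrate the saveRDS() route, here is a minimal self-contained sketch (a hypothetical two-element list and a temporary directory): readRDS() returns the object directly, so you choose the variable name at load time rather than inheriting whatever name was saved.

```r
a   <- list(first = data.frame(x = 1:3), second = data.frame(x = 4:6))
dir <- tempdir()

# one .rds file per list element, named after the element
for (nm in names(a)) {
  saveRDS(a[[nm]], file = file.path(dir, paste0(nm, ".rds")))
}

# readRDS hands the object back; no surprise names appear in the workspace
first_again <- readRDS(file.path(dir, "first.rds"))
identical(first_again, a$first)  # TRUE
```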

Adding rows to a Google Sheet using the R Package googlesheets

I'm using the googlesheets package (CRAN version, but also available here: https://github.com/jennybc/googlesheets) to read data from a Google Sheet in R, but would now like to add rows. Unfortunately, every time I use gs_add_row on an existing sheet I get the following error:
Error in gsheets_POST(lf_post_link, XML::toString.XMLNode(new_row)) :
client error: (405) Method Not Allowed
I followed the tutorial on Github to create a sheet and add rows as follows:
library(googlesheets)
library(dplyr)
df.colnames <- c("Project Short Name","Project Start Date","Proj Stuff")
my.df <- data.frame(a = "cannot be empty", b = "cannot be empty", c = "cannot be empty")
colnames(my.df) <- df.colnames
## Create a new workbook populated by this data.frame:
mynewSheet <- gs_new("mynewsheet", input = my.df, trim = TRUE)
## Append Element
mynewSheet <- mynewSheet %>% gs_add_row(input = c("a","b","c"))
mynewKey <- mynewSheet$sheet_key
Rows are added successfully; I even get the cheery message Row successfully appended.
I now provide mynewKey to gs_key, as I would for a sheet I was picking up to work with, and attempt to add a new row using gs_add_row (note: before evaluating these lines, I navigate to the Google Sheet and make it public to the web):
myExistingWorkbook <- gs_key(mynewKey, visibility = "public")
## Attempt to gs_add_row
myExistingWorkbook <- myExistingWorkbook %>% gs_add_row(input = c("a","b","c"), ws="Sheet1", verbose = TRUE)
Error in gsheets_POST(lf_post_link, XML::toString.XMLNode(new_row)) :
client error: (405) Method Not Allowed
Things that I have tried:
1) Publishing the Google Sheet to the web (as per https://github.com/jennybc/googlesheets/issues/126#issuecomment-118751652)
2) Making the sheet publicly editable
Notes
In my actual example, I have an existing Google Sheet with many worksheets within it that I would like to add rows to. I have tried to use a minimal example here to understand my error, I can also provide a link to the specific worksheet that I would like to update as well.
I have raised an issue on the package's GitHub page: https://github.com/jennybc/googlesheets/issues/168
googlesheets::gs_add_row() and googlesheets::gs_edit_cells() make POST requests to the Sheets API. This requires that the visibility be set to "private".
Above, when you register the Sheet by key, please do so like this:
gs_key(mynewKey, visibility = "private")
If you want this to work even for Sheets you've never visited in the browser, then add lookup = FALSE as well:
gs_key(mynewKey, lookup = FALSE, visibility = "private")
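Putting the fix into the question's flow, a sketch (it assumes an authorised googlesheets session, so it cannot run non-interactively):

```r
library(googlesheets)

# Register with private visibility so the POST requests behind
# gs_add_row() are allowed, even for a never-visited sheet
myExistingWorkbook <- gs_key(mynewKey, lookup = FALSE, visibility = "private")
myExistingWorkbook <- gs_add_row(myExistingWorkbook, ws = "Sheet1",
                                 input = c("a", "b", "c"))
```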