Error in fstRead in R

I have been using the new 'fst' package in R for a few weeks to write and read tables in the .fst format. Sometimes I cannot read a table that I have just written, and get the following message:
> tab=read.fst("Tables R/tab.fst",as.data.table=TRUE)
Error in fstRead(fileName, columns, from, to) :
Unknown type found in column.
Do you know why this happens? Is there another way to retrieve the table?
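One way to narrow this down (not a full answer) is to compare what fst actually wrote into the file with the column classes of the table you tried to save. A minimal diagnostic sketch, assuming a recent fst version where metadata_fst() is available and that tab is the table you wrote:
library(fst)
# Column names and storage types as recorded in the file itself:
metadata_fst("Tables R/tab.fst")
# Before writing, check for column classes the format may not support (e.g. list columns):
sapply(tab, class)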


How to append an R data frame into Redshift?

I'm trying to upload a data frame into SQL, but I keep receiving errors. I've tried the following function
DBI::dbWriteTable(rposgre_rsql, name = "my_schema.the_table", value = base, row.names = FALSE, append=TRUE)
That function returns the error
RPosgreSQL error: could not Retrieve the result : ERROR: syntax error at or near "STDIN"
so I tried:
insert <- "INSERT INTO my_schema.the_table VALUES base"
M2_results <- RPostgreSQL::dbSendQuery(conn = rposgre_rsql, statement = insert)
but it returns
RPosgreSQL error: could not Retrieve the result : ERROR: syntax error at or near "base"
I'm positive the connection works, since I can both select from the table and use "dbExistsTable.R", but I don't understand why it doesn't work with INSERT INTO. The connection is to a corporate environment, so maybe it's a permission issue? Also, I don't quite understand what "STDIN" is.
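One thing to note: base is an R object, so it cannot appear literally inside the SQL string; the values have to be written into the statement text (and the dbWriteTable failure mentioning "STDIN" is likely the driver attempting COPY ... FROM STDIN, which Redshift does not support). Below is a minimal sketch of building the INSERT by hand, assuming base is small and its values contain no embedded quotes; the quoting is deliberately naive and the table name is taken from the question:
# Turn each data frame row into a quoted SQL tuple, e.g. ('a', '1', '2.5');
# Postgres/Redshift will cast quoted literals to the column types.
one_row <- function(r) paste0("(", paste0("'", r, "'", collapse = ", "), ")")
values  <- paste(apply(base, 1, one_row), collapse = ",\n")
insert  <- paste("INSERT INTO my_schema.the_table VALUES", values)
res <- RPostgreSQL::dbSendQuery(conn = rposgre_rsql, statement = insert)
DBI::dbClearResult(res)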

Error while using colSums with tab-delimited file

I'm new to R and I'm currently trying to get some statistical data from a file. It is a large data set in a tab-delimited txt file. Importing the file gave me no problems and all of the data is shown correctly as a table in RStudio. However, when I try to make any sort of calculation using colSums, I receive an error:
> colSums("Wages and salaries")
Error in colSums("Wages and salaries") : 'x' must be an array of at
least two dimensions
"Wages and Salaries" is the name of the column I'm trying to get the sum of.
Using V1 or any other column name that was created by R gives me another error
> colSums(V2)
Error in is.data.frame(x) : object 'V2' not found
The way I'm importing the file is
rm(list=ls())
filename <- read.delim("~/filename.txt", header=FALSE)
> is.data.frame(filename)
[1] TRUE
This gives me a matrix-like data table with rows and columns, the same way Excel would show the data.
The reason I'm trying to get the sum of all the numbers in a column is to later get the sums of several different columns.
I'm very new to R and I could not find an answer to my question, as most of the examples use just a very small set of data created directly in R.
In R you can access a column in 2 ways:
filename["Wages and salaries"]
or
filename$`Wages and salaries`
So, please try:
colSums(filename["Wages and salaries"])
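Note also that the file was read with header=FALSE, so R names the columns V1, V2, ... and "Wages and salaries" is not a column name at all; it only becomes one if the header row is read in. A small sketch, assuming the column headers are in the first row of the file (check.names=FALSE keeps the space in the name instead of turning it into Wages.and.salaries):
filename <- read.delim("~/filename.txt", header=TRUE, check.names=FALSE)
colSums(filename["Wages and salaries"])   # one-column data frame, accepted by colSums
sum(filename$`Wages and salaries`)        # or sum() directly on the column vector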

Bloomberg data retrieval in R: Invalid override field id specified error

I would like to retrieve power hedging data using the Rbbg Bloomberg package in R, and I know this formula works in Excel:
=BDH("VATT SS Equity","BI_%_ELECTRIC_POWER_HEDGED","01/01/2000","","GEOGRAPHIC_LOCATION_OVERRIDE=EUCN","BI_CONTRACT_MATURITY_OVERRIDE=CY12","FUND_PER=Q")
But when I try this in R:
conn<-blpConnect(log.level="off")
data<-bdh(conn,"VATT SS Equity","BI_PER_ELECTRIC_POWER_HEDGED","20000101","","GEOGRAPHIC_LOCATION_OVERRIDE=EUCN","BI_CONTRACT_MATURITY_OVERRIDE=CY12","FUND_PER=Q")
I get the following error message:
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl, :
org.findata.blpwrapper.WrapperException: response error: Invalid override field id specified [nid:217]
What should I change in the formula to make it work?
Thanks
Edit: Indeed the field is BI_PCT_ELECTRIC_POWER_HEDGED; however, the problem does not come from there but from the overrides.
This returns an empty variable for me, but it doesn't throw an error, so it might get you on the right track.
The way you specify options seems to be different in the current version.
data <- bdh(conn, "VATT SS Equity", "BI_PER_ELECTRIC_POWER_HEDGED", "20000101", "",
            override_fields = c("GEOGRAPHIC_LOCATION_OVERRIDE",
                                "BI_CONTRACT_MATURITY_OVERRIDE"),
            override_values = c("EUCN", "CY12"),
            option_names = "periodicitySelection",
            option_values = "QUARTERLY")
The doc where I found the correct syntax is here: RBloomberg. It was written in 2010 for the predecessor package (before Bloomberg complained about using their name), but I guess it still works! The convention of listing the option names and then the option values separately is odd compared to the OPTION=VALUE form you assumed, but there you go.

R: Check if R object exists before creating it

I am trying to skip the steps that load data from large files if this has already been done earlier. Since the data ends up in (for example) mydf, I thought I could do:
if( !exists(mydf) )
{
#... steps to do loading here.
}
I got this from How to check if object (variable) is defined in R? and https://stat.ethz.ch/R-manual/R-devel/library/base/html/exists.html
However RStudio simply complains with
Error in exists(mydf) : object 'mydf' not found
Why does it complain instead of just returning 'true' or 'false'? Any tips appreciated.
You should use exists("mydf") instead of exists(mydf). exists() takes the name of the object as a character string, so the object itself is never evaluated; passing mydf directly makes R look the object up first, which is exactly what fails.
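A minimal sketch (the file name is only a placeholder for your loading steps):
if( !exists("mydf") )
{
  mydf <- read.csv("large_file.csv")  # placeholder for the expensive loading steps
}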

How to read data from Excel in R?

I am trying to prepare data for cluster analysis. That's why I have prepared data tables in Excel, with the headers "id","name","crime_type","crime_date","gender","age".
Then I convert the Excel file into .csv format.
Then I run the following commands:
>crime <- read.csv("crime_data.csv",header=T)
>crime # I print it, and it prints
# now I will cluster with kmeans()
>kmeans.result <- kmeans(crime,3)
But it shows errors. The error is as follows:
Error in do_one(nmeth) : NA/NaN/Inf in foreign function call (arg 1)
In addition: Warning message:
In kmeans(crime, 3) : NAs introduced by coercion
What am I doing wrong here?
I can't speak to your specific problem without knowing what your data looks like, but it could be as simple as giving the xlsx package a try. I think it handles NaNs better.
install.packages("xlsx")
library(xlsx)
yourdata <- read.xlsx("YOURDATASHEET.xlsx", sheetName="THESHEETNAME")
It seems like you are asking two questions. For the first, you can also try reading directly from the clipboard (beware of large tables though, but so far I have had good results with 40k rows, 30 columns):
d1<-read.table(file="clipboard",sep="\t",header=FALSE,stringsAsFactors=FALSE)
Set header to TRUE if you want to name your columns. You can also use what was suggested above to open Excel sheets directly, but this might not be practical if you have non-standard tables.
For the second part, perhaps you should convert the non-numeric columns using sapply() and/or suppressWarnings(), for example as sketched below.
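A minimal sketch of that second part: kmeans() needs an all-numeric matrix, and columns like "name", "crime_type" or "gender" become NA when coerced, which is where the "NAs introduced by coercion" warning and the NA/NaN/Inf error come from. The column handling below is an assumption about your data, not the only possible fix:
crime <- read.csv("crime_data.csv", header=TRUE)
# Coerce every column to numeric; character columns turn into NA
crime_num <- suppressWarnings(as.data.frame(sapply(crime, function(col) as.numeric(as.character(col)))))
# Keep only the columns that survived the coercion without introducing NAs
crime_num <- crime_num[, colSums(is.na(crime_num)) == 0]
kmeans.result <- kmeans(crime_num, 3)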
