I have a question about using sqlSave. How does RODBC map data frame columns to the database table columns?
If I have a table with columns X and Y and a data frame with columns X and Y, RODBC puts X into X and Y into Y (I found this out by trial and error). But can I explicitly tell R how to map data.frame columns to database table columns, e.g. put A into X and B into Y?
I'm rather new to R and find the RODBC manual a bit cryptic, and I can't find an example on the internet.
I'm now doing it this way (maybe that's also what you meant):
colnames(dat) <- c("A", "B")
sqlSave(channel, dat, tablename = "tblTest", rownames=FALSE, append=TRUE)
It works for me. Thanks for your help.
You should find the fine R manuals of great help as you start to explore R, and its help facilities are very good too.
If you start with
help(sqlSave)
you will see the colNames argument. Supplying a vector such as c("A", "B") would put your first data.frame column into table column A, and so on.
I was having massive problems using sqlSave with an IBM DB2 database. I avoided them by using sqlQuery to create the table with the correct formatting first, and then sqlSave with append=TRUE to force my R data frame into the database table. This resolves a lot of problems, such as date formats and floating point numbers (instead of doubles).
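Something along these lines (a minimal sketch; the DSN, table name and column types below are made-up placeholders, not my actual setup):
library(RODBC)
channel <- odbcConnect("myDB2dsn")  # hypothetical DSN name
# create the table with explicit column types first, so the database controls the formats
sqlQuery(channel, "CREATE TABLE tblTest (ID INTEGER, DOS DATE, VAL DOUBLE)")
# then append the data frame into the pre-built table
sqlSave(channel, dat, tablename = "tblTest", rownames = FALSE, append = TRUE)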
I am very new to R and would appreciate any advice. I come from a Stata background, so I am learning to think in R. I am trying to produce tables of percentages for my 20 binary variables. I have tried a for loop, but I am not sure where I am going wrong, as there is no warning message.
for (i in 1:ncol(MAAS1r[varbinary])) {
  varprop <- varbinary[i]
  my.table <- table(MAAS1r[varprop])
  my.prop <- prop.table(my.table)
  cbind(my.table, my.prop)
}
Many thanks
I made one with an example extracted from mtcars.
These are two binary (0 or 1) variables, called VS and AM:
mtcarsBivar<- mtcars[,c(8,9)]
get names of the columns:
varbinary <- colnames(mtcarsBivar)
use dplyr to do it:
library(dplyr)
make an empty list to populate
Binary_table <- list()
now fill it with the loop:
for (i in 1:length(varbinary)) {
  # proportion of 1s in the i-th binary column
  Binary_table[[i]] <- summarise(mtcarsBivar, percent_1 = sum(mtcarsBivar[, i] == 1) / nrow(mtcarsBivar))
}
Transform it to a data frame
Binary_table <- do.call("cbind", Binary_table)
give the columns the names from varbinary:
colnames(Binary_table) <- varbinary
This only works if all your variables are binary.
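As a side note, since the mean of a 0/1 column is just the proportion of 1s, the same numbers can be had in one line (a sketch, again assuming every column is strictly 0/1):
colMeans(mtcarsBivar)  # proportion of 1s in each binary column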
I am new to R and trying to use R to run the report I am currently doing in Excel. Most of the topics here have been very helpful to me in translating Excel formulas to R code; however, I am struggling to write code for the Excel IF statement below:
=IF(AND(G2="SEA",OR(F2="FCL",F2="BCN")),W2*40,IF(G2="AIR",X2/1000*66,""))
Column G corresponds to Container/Product.
Column F corresponds to Transport Mode.
Columns AI and AJ correspond to the volumes associated with each transport mode.
Appreciate all the help. Thanks
We can use a nested ifelse after reading the dataset:
df1 <- read.csv("yourfile.csv", stringsAsFactors=FALSE)
ifelse(df1[,7]=="SEA" & df1[,6] %in% c("FCL", "BCN"),
       df1[,35]*40, ifelse(df1[,7]=="AIR", df1[,36]/1000*66, NA))
NOTE: Here we refer to the columns by numeric index, as a reproducible example was not shown.
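If you prefer named columns, the same logic can be written with dplyr's case_when (a sketch; the column names product, mode, sea_vol and air_vol are hypothetical, so substitute your own):
library(dplyr)
df1 %>%
  mutate(result = case_when(
    product == "SEA" & mode %in% c("FCL", "BCN") ~ sea_vol * 40,
    product == "AIR" ~ air_vol / 1000 * 66,
    TRUE ~ NA_real_   # Excel's "" becomes NA in R
  ))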
I'm (very!) new to R and MySQL, and I have been struggling with and researching this problem for days. So I would really appreciate ANY help.
I need to compute a mathematical expression from two variables in two different tables. Essentially, I'm trying to figure out how old a subject was (DOB is in one table) when they were serviced (date of service is in another table). I have an identifying variable that is the same in both.
I have tried merging these:
age <- merge("tbl1", "tbl2", by = c("patient_id"), all = TRUE)
this returns:
Error in fix.by(by.x, x) : 'by' must specify a uniquely valid column
I have tried subsetting where I just keep the variables of interest, but it is not working, because I believe subsetting only works for numbers, not characters... right?
Again, I would appreciate any help. Thanks in advance
Since you are new to databases, I think you should use dplyr here. It is an abstraction layer over many database back ends, so you will not have to deal with database-specific problems. Here I show you a simple example where I:
read the tables from MySQL
merge the tables, assuming they share a unique ID variable
The code:
library(dplyr)
library(RMySQL)
## create a connection
SDB <- src_mysql(host = "localhost", user = "foo", dbname = "bar", password = getPassword())
# reading tables
tbl1 <- tbl(SDB, "TABLE1_NAME")
tbl2 <- tbl(SDB, "TABLE2_NAME")
## merge : this step can be done using dplyr also
age <- merge(tbl1, tbl2, all= TRUE)
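The merge itself can also stay in dplyr, as the comment above says (a sketch; it assumes the shared ID column is called patient_id, as in the question):
age <- full_join(tbl1, tbl2, by = "patient_id")  # keeps rows from both tables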
I hope I can get everything together for this problem; it's my first time posting and it's a little bit tricky to describe.
I want to add some attributes to a dbf file and save it afterwards for use in QGIS. It's about elections, and the data are the votes for the 11 parties in absolute and relative values. I use the shapefiles package for this, but I also tried it simply with foreign.
My system: RStudio 0.97.311, R 2.15.2, shapefiles 0.7, foreign 0.8-52, Ubuntu 12.04.
try #1 => no problems
shpDistricts <- read.shapefile(filename)
shpDataDistricts <- shpDistricts$dbf[[1]]
shpDataDistricts <- shpDataDistricts[, -c(3, 4, 5)] # delete some columns
shpDistricts$dbf[[1]] <- shpDataDistricts
write.shapefile(shpDistricts, filename)
try #2 => "error in get("write.dbf", "package:foreign")(dbf$dbf, out.name) : cannot handle matrix/array columns"
shpDistricts <- read.shapefile(filename)
shpDataDistricts <- shpDistricts$dbf[[1]]
shpDataDistricts <- shpDataDistricts[, -c(3, 4, 5)] # delete some columns
shpDataDistricts <- cbind(shpDataDistricts, votesDistrict[, 2]) # add a new column
names(shpDataDistricts)[5] <- "SPOE"
shpDistricts$dbf[[1]] <- shpDataDistricts
write.shapefile(shpDistricts, filename)
The write function returns "error in get("write.dbf", "package:foreign")(dbf$dbf, out.name) : cannot handle matrix/array columns".
So by simply adding an (integer) column to the data.frame, the write.dbf function isn't able to write out anymore. I have now been debugging this simple issue for 3 hours. I tried it with the shapefiles package, opening the shapefile and the dbf file; the problem is the same every time.
The same happens when I use the foreign package directly (read.dbf).
If I save the dbf file without the voting data (only with the small adaptations from tries #1 and #2), there is no problem. It must have to do with the merge with the voting data.
I got the same error message ("error in get("write.dbf", ...) : cannot handle matrix/array columns") while working with shapefiles in R using rgdal. I added a column to the shapefile's attribute table as a data frame, then tried to save the output and got the error. When I converted the column to a factor via as.factor(), the error went away.
shapefile$column <- as.factor(additional.column)
writePolyShape(shapefile, filename)
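To spot the offending column before writing, a check along these lines helps (a sketch; it assumes an sp object, whose attribute table lives in the @data slot). Columns with non-NULL dimensions, i.e. matrices or embedded data frames, are the ones write.dbf rejects:
sapply(shapefile@data, function(col) !is.null(dim(col)))  # TRUE marks problem columns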
The problem is that write.dbf cannot write a data frame column into an attribute table, so I tried changing it to character data.
My initial wrong code was:
d1 <- data.frame(as.character(data1))
colnames(d1) <- c("county")  # rbind needs both to have the same column name
d2 <- data.frame(as.character(data2))
colnames(d2) <- c("county")
county <- rbind(d1, d2)
dbfdata$county <- county  # this assigns a data frame as a column
write.dbf(dbfdata, "PANY_animals_84.dbf")  ## doesn't work
## Error in write.dbf(dbfdata, "PANY_animals_84.dbf") : cannot handle matrix/array columns
Then I changed everything to character, and it works! The right code is:
d1 <- as.character(data1)
d2 <- as.character(data2)
county <- c(d1, d2)
dbfdata$county <- county
write.dbf(dbfdata, "filename")
Hope it helps!
I have successfully added information to shapefiles before (see my post at http://rusergroup.swansea.ac.uk/Healthmap.ashx?HL=map ).
However, I just tried to do it again with a slightly different shapefile (the new Local Health Boards for Wales) and the code fails at spCbind with a "row names not identical" error:
o <- match(wales.lonlat$NEW_LABEL, wds$HB_CD)
wds.xtra <- wds[o,]
wales.ncchd <- spCbind(wales.lonlat, wds.xtra)
My rows did have different names before, and that didn't cause any problems. I relabelled the column in wds.xtra to match "NEW_LABEL", and that doesn't help.
The labels, and the order of the labels, match exactly between wales.lonlat and wds.xtra.
(I'm using Revolution R 5.0, which is built on R 2.13.2)
I use match to merge data into the sp object's data slot based on row names (or any other common ID). This avoids needing maptools for the spCbind function.
# Based on row names
sdata@data = data.frame(sdata@data, new.df[match(rownames(sdata@data), rownames(new.df)), ])
# Based on a common ID
sdata@data = data.frame(sdata@data, new.df[match(sdata@data$ID, new.df$ID), ])
# where sdata is your sp object and new.df is the data.frame you want to merge into sdata
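One caveat worth adding to this approach (my note, not part of the original answer): match() returns NA for IDs in sdata that have no counterpart in new.df, which silently produces all-NA rows in the merged attribute table, so a quick sanity check is worthwhile:
sum(is.na(match(sdata@data$ID, new.df$ID)))  # should be 0 if every ID matches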
I had the same error and could resolve it by deleting all the other data that were not actually to be added. I suppose they confused spCbind, because the matching wanted to match all row elements, not only the one given. In my example, I used
xtra2 <- data.frame(xtra$ID_3, xtra$COMPANY)
to extract the relevant fields, and fed them to spCbind afterwards:
gadm <- spCbind(gadm, xtra2)