I can use sqlSave to create a new table and to append data to that table, but I want the table to have some additional columns (for instance, an autoincrementing "ID" column; I am manually adding these columns after creating the table and confirming that I can save to it and append to it). As soon as I add those columns, I get an error when trying to use sqlSave to append more data:
Error in odbcUpdate... missing columns in 'data'
So I added an ID column to my data frame (so its columns would match my database table) and tried setting it to "NULL", NULL, and "". I keep getting the same error.
Any ideas?
Thanks,
Aerik
P.S. I'm using RODBC with MySQL ODBC driver version 5.1
Ah, I got it. The sqlSave function seems to lowercase everything. I'm not sure what checks it's doing behind the scenes, but if I make an "id" column it works, whereas an "ID" column does not.
Try the case="nochange" argument in odbcConnect. I'm using RODBC (1.3-10) with MySQL ODBC driver version 5.2 and it works for me.
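For example, a minimal sketch (the DSN name, credentials and data frame are placeholders; adjust them to your setup):
library(RODBC)
# case = "nochange" passes column names through unchanged, so an "ID" column stays "ID"
channel <- odbcConnect("my-mysql-dsn", uid = "user", pwd = "pass", case = "nochange")
sqlSave(channel, mydata, tablename = "mytable", append = TRUE)
odbcClose(channel)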
As the title says, I have opened an RDS file in RStudio and need to know the field names within that table.
But I don't know the correct command or syntax to follow this:
UK_2001 <- readRDS("D:/Census_History/Postcodes/2001_05_MAY_AFPD.rds")
Any guidance would be gratefully received.
Thanks in advance.
You can display the structure of any R object using str(), which will give you the object's type (e.g. data.frame) as well as its column names and column types.
str(UK_2001)
If you are just after the names of the columns, colnames() will do.
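For example, using the object from the question:
colnames(UK_2001)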
I am trying to export a data frame from R to MS Access but it seems to me that there is no package available to do this task. Is there a way to export a data frame directly to Access? Any help will be greatly appreciated.
The following works for medium-sized datasets, but may fail if MyRdataFrame is too large for Access's 2 GB limit, or if there are data type conversion errors.
library(RODBC)
db <- "C:Documents/PreviouslySavedBlank.accdb"
Mycon <- odbcConnectAccess2007(db)
sqlSave(Mycon, MyRdataFrame)
There is the ImportExport package.
The database has to already exist (at least in my case). So you have to create it first.
It has to be an Access 2000 format database with the extension .mdb.
Here is an example:
ImportExport::access_export("existing_database.mdb", as.data.frame(your_R_data),
                            tablename = "bob")
with "bob" the name of the table you want to create in the database. Choose your own name of course and it has to be a non already existing table
It will also add a first column called rownames, which is just an index column.
Note that creating a .accdb file and then changing the extension to .mdb won't work; you really have to open it and save it as .mdb. I added as.data.frame(), but if your data is already a data frame there is no need.
There might be a way to handle .accdb files by using sqlSave directly (which is used internally by ImportExport) and specifying the driver from the RODBC package. This is in the link in the comment from @BenJacobson. But the solution above worked for me, and it was only one line.
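For what it's worth, a rough sketch of that sqlSave route (untested here; the file path is an assumption, and the Access ODBC driver must be installed on the machine):
library(RODBC)
# connect to an .accdb file via the Access ODBC driver rather than a saved DSN
con <- odbcDriverConnect(
  "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:/Documents/existing_database.accdb")
sqlSave(con, as.data.frame(your_R_data), tablename = "bob", rownames = FALSE)
odbcClose(con)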
I have a local file in PostgreSQL format that I would like to read into R in chunks and export as .csv.
I know this might be a simple question but I'm not at all familiar with PostgreSQL or SQL. I've tried different things using R libraries like RPostgreSQL, RSQLite and sqldf but I couldn't get my head around this.
If your final goal is to create a csv file, you can do it directly using PostgreSQL.
You can run something similar to this:
COPY my_table TO 'C:\my_table.csv' DELIMITER ',' CSV HEADER;
Sorry if I misunderstood your requirement.
The requirement is to programmatically create a very large .csv file from scratch and populate it from data in a database? I would use the approach below (a rough R sketch follows the steps).
Step 1 - isolate the database data into a single table with an auto incrementing primary key field. Whether you always use the same table or create and drop one each time depends on the possibility of concurrent use of the program.
Step 2 - create the .csv file with your programming code. It can either be empty, or have column headers, depending on whether or not you need column headers.
Step 3 - get the minimum and maximum primary key values from your table.
Step 4 - set up a loop in your programming code using the values from Step 3. Inside the loop:
query the table to get x rows
append those rows to your file
increment the variables that control your loop
Step 5 - Do whatever you have to do with the file. Don't try to read it with your programming code.
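A rough R sketch of Steps 1-5, assuming the data has already been loaded into a PostgreSQL table my_table with an auto-incrementing integer primary key id (the table, column and connection details are placeholders):
library(RPostgreSQL)
con <- dbConnect(PostgreSQL(), dbname = "mydb", user = "user", password = "pass")

# Step 2: create the .csv file with column headers only
header <- dbGetQuery(con, "SELECT * FROM my_table LIMIT 0")
write.table(header, "my_table.csv", sep = ",", row.names = FALSE)

# Step 3: get the minimum and maximum primary key values
rng <- dbGetQuery(con, "SELECT min(id) AS lo, max(id) AS hi FROM my_table")
lo <- as.integer(rng$lo)
hi <- as.integer(rng$hi)

# Step 4: loop over the key range, x rows at a time, appending each chunk to the file
x <- 10000L
for (start in seq(lo, hi, by = x)) {
  chunk <- dbGetQuery(con, paste0(
    "SELECT * FROM my_table WHERE id >= ", start,
    " AND id < ", start + x, " ORDER BY id"))
  write.table(chunk, "my_table.csv", sep = ",", row.names = FALSE,
              col.names = FALSE, append = TRUE)
}

dbDisconnect(con)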
We hit an error with RODBC and the sqlSave command. We are a bit confused about what to do, since the same sqlSave command works when the data we are trying to save to the Sybase database is small (under ~10,000 rows). When trying to save bigger data (~200,000 rows), the saving process starts without any problems but crashes after a few thousand rows are saved. Then we hit this error message: "unable to append to table..."
We use this kind of code:
library(RODBC)
channel <- odbcConnect("linfo-test", uid="DBA", pwd="xxxxxx", believeNRows=FALSE)
sqlSave(channel=channel, dat=matkat, tablename = "testitaulu", append = TRUE)
odbcClose(channel)
If someone has any idea why this happens only with bigger data and how we could fix this, we would be extremely grateful. We are lacking ideas ourselves.
sqlSave with append=TRUE pretty much never works. You will have to explicitly write an SQL INSERT INTO statement, which is unfortunate. Sorry for the bad news.
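For example, a minimal sketch of what that could look like with the data from the question (the column names col1 and col2 are made up, and real data would also need proper quoting/escaping of values):
library(RODBC)
channel <- odbcConnect("linfo-test", uid="DBA", pwd="xxxxxx", believeNRows=FALSE)
# insert row by row with explicit INSERT INTO statements instead of sqlSave(append = TRUE)
for (i in seq_len(nrow(matkat))) {
  sqlQuery(channel, sprintf("INSERT INTO testitaulu (col1, col2) VALUES ('%s', %s)",
                            matkat$col1[i], matkat$col2[i]))
}
odbcClose(channel)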
sqlSave does work, but you have to be really careful with it.
You need to remember:
the column names of your R data frame and the database table have to match EXACTLY; remember that R is case sensitive, and even a leading or trailing space can make a difference
make sure that the data types in R and in the SQL table match
make sure that you are not missing any column to insert (even if it has a default value in SQL)
the connection has to be set up properly; if the SQL user doesn't have permission to read and write the destination table (used in sqlSave), it can also fail
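A quick way to check the first two points from R (a sketch, reusing the channel, table and data frame names from the question):
# columns present in the data frame but not in the database table, and vice versa
db_cols <- sqlColumns(channel, "testitaulu")$COLUMN_NAME
setdiff(names(matkat), db_cols)
setdiff(db_cols, names(matkat))
# and compare the R-side column types against the table definition
str(matkat)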
I have an Excel file that I am trying to load into R using the odbcConnectExcel and sqlQuery commands from the RODBC package. One of the columns has numerical values with plus or minus signs, such as '5+ or '3-. However, if I do something like
conn <- odbcConnectExcel("file.xls")
sqlQuery(conn, "SELECT * FROM `Sheet1$`")
then the column with the plus and minus signs will be returned as a numerical column with those symbols stripped. Is there a way to have this column read in as a factor in which the signs are maintained? I would prefer to not have to convert the file to another format first.
Thanks.
Data like this becomes a factor if you use the xlsReadWrite (http://www.swissr.org/software/xlsreadwrite) package to read the file:
library(xlsReadWrite)
x <- read.xls(file="file.xls")
However, note that you need to do more than just install.packages("xlsReadWrite") to get this package to run; you also need to download an additional file, though I forget the details.
This doesn't directly address your question, but hopefully it will help:
This is the best summary of options for connecting to Excel that I have seen: Export Data Frames To Multi-worksheet Excel File. While it deals generally with exporting, importing is also possible with most of these approaches.
My favorite is actually the RDCOMClient because it provides total control over Excel as an application.
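A rough sketch of what reading a column through RDCOMClient might look like (Windows only; the sheet name, range and file path are assumptions, and the exact calls may need adjusting):
library(RDCOMClient)
xl <- COMCreate("Excel.Application")
wb <- xl$Workbooks()$Open("C:/file.xls")
sheet <- wb$Worksheets("Sheet1")
# Value() returns the cell contents as a list; text cells such as "5+" keep their signs
vals <- unlist(sheet$Range("A2:A100")$Value())
wb$Close(FALSE)
xl$Quit()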