How to get SQL translation of some functions from the DBI package - r

For example, if I want to create a table that stores the mtcars data set in a remote database, I can do the following with DBI:
dbWriteTable(database_connection, "MTCARS", mtcars)
I think behind the scenes, DBI (or perhaps dbplyr?) generates some SQL and sends it to the database to complete the task. How can I get that SQL so that I can tweak it to better suit my use case?

The APIs from DBI (and other R SQL packages) do not necessarily correspond to just one SQL operation. From the documentation for DBI, dbWriteTable does the following:
Writes, overwrites or appends a data frame to a database table, optionally converting row names to a column and specifying SQL data types for fields.
That is, depending on how you call dbWriteTable and which arguments you pass (such as append and overwrite), it may issue more than one statement behind the scenes, for example a DROP TABLE and/or CREATE TABLE followed by the INSERTs, rather than mapping onto a single query.
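If you want to inspect that SQL, DBI also exposes the helpers sqlCreateTable and sqlAppendTable, which produce the CREATE TABLE and INSERT statements that most backends build on, so you can tweak them and send them yourself. A minimal sketch, using an in-memory SQLite connection as a stand-in for your remote database:

library(DBI)
library(RSQLite)

# In-memory connection used for illustration; substitute your own connection.
con <- dbConnect(RSQLite::SQLite(), ":memory:")

# Roughly the CREATE TABLE statement dbWriteTable() would issue,
# with column types inferred from the data frame:
create_sql <- sqlCreateTable(con, "MTCARS", mtcars)
print(create_sql)

# Roughly the INSERT statement for the rows (first few rows shown):
insert_sql <- sqlAppendTable(con, "MTCARS", head(mtcars))
print(insert_sql)

# After tweaking, the statements can be sent manually:
dbExecute(con, create_sql)
dbExecute(con, insert_sql)

dbDisconnect(con)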

Related

Can we create a temporary table, getting a result in oracle through R and delete table afterwards?

I have a .sql file that basically creates a temporary table using multiple joins to form the data needed. Then I manually copy the result, paste it into Excel and delete the table in the Oracle DB afterwards.
As the query is big, I don't think it would be a good idea to paste the whole Oracle query into R.
Is there any way by which I can directly run that .sql file through Rstudio and store the result in the data frame?
I don't know R.
However, consider moving code you have into a stored procedure. You'd then - in a single line (hopefully) - call that procedure from R. It would do its job (populate the table) and you'd just use its contents in R.
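If you do want to run the .sql file from R directly, here is a rough sketch with DBI; the driver, DSN, file name and table names are placeholders, and it has not been tested against Oracle:

library(DBI)

# Hypothetical connection; adjust the driver/DSN for your Oracle setup.
con <- dbConnect(odbc::odbc(), dsn = "my_oracle_dsn")

# Read the whole .sql file into one string (file name is made up here).
query <- paste(readLines("create_temp_table.sql"), collapse = "\n")

# If the file contains a single SELECT, this returns the result as a data frame.
# Note that DBI runs one statement at a time, so a file holding several
# statements would need to be split (e.g. on ";") and run with dbExecute() one by one.
result <- dbGetQuery(con, query)

# With the stored-procedure approach suggested above, the call collapses to:
# dbExecute(con, "BEGIN my_schema.populate_temp_table; END;")
# result <- dbGetQuery(con, "SELECT * FROM my_temp_table")

# Clean up afterwards and close the connection.
# dbExecute(con, "DROP TABLE my_temp_table")
dbDisconnect(con)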

Writing date and time into SQL database in R

I am trying to create a SQL database from a data set with a column that has both date and time in it. The problem I am running into is that when the data is written into a SQL database and read back into R, the date and time column ends up with a numeric structure rather than a POSIXct structure, or does not show the dates and times correctly.
I have been using the RSQLite and DBI packages to work between the two. I have just started working with SQL; is there an appropriate way to read date and time columns into a SQL database?
Thank you for your time.
SQLite does not support date and time types. Here are some options:
convert the date/time fields back to R classes yourself. You could write a separate wrapper for each table that reads it in and does the conversion transparently, or, if you control the database itself, you could adopt a naming convention for the columns so that a single function can perform the conversion according to the naming rules. Another way to implement a naming convention, other than writing your own function, is to use the sqldf package: if you call sqldf("select ...", dbname = "mydb", connection = con, method = "name__class") it will convert every column whose name contains two underscores to the class named after them (note that the method name name__class itself contains two underscores).
the dbmisc package (also see this) can perform conversions. You must prepare a schema, i.e. layout specification, for each table, as described there, in order to use it.
Use a different database that does support date/time types. I usually use the H2 database in such cases. The RH2 package includes the entire H2 database software right in the R driver package in a similar manner to RSQLite.
As per a comment below, the latest version of RSQLite has support for time and date fields; however, note that this happens on the R side, like the other solutions above (except using H2), and does not change the fact that SQLite itself has no such support. For example, if you use SQL to modify such a field, say adding 1 to get the next date, it will no longer be of the same type.
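As a small sketch of the first option (converting the fields back yourself), using an in-memory RSQLite database; whether the column really comes back as numeric depends on your RSQLite version and connection options:

library(DBI)
library(RSQLite)

con <- dbConnect(RSQLite::SQLite(), ":memory:")

# Illustrative data frame with a date-time column.
dat <- data.frame(id = 1:2,
                  measured_at = as.POSIXct(c("2021-01-01 10:00:00",
                                             "2021-01-02 11:30:00"), tz = "UTC"))
dbWriteTable(con, "readings", dat)

# Read back: the date-time column typically arrives as numeric
# (seconds since the epoch), because SQLite has no date/time type.
out <- dbReadTable(con, "readings")
str(out)

# Convert it back to POSIXct yourself.
out$measured_at <- as.POSIXct(out$measured_at, origin = "1970-01-01", tz = "UTC")
str(out)

dbDisconnect(con)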

How to import a data frame in RSQLite with specifying column's constrains?

I am trying to put a large data frame into a new table of a database. It could be done simply via:
dbWriteTable(conn=db,name="sometablename",value=my.data)
However, I want to specify the primary keys, foreign keys and the column types, like NUMERIC, TEXT and so on.
Is there any thing I can do? Should I create a table with my columns first and then add the data frame into it?
RSQLite assumes your data.frame is already all set before you write it to disk; there is not much you can specify in the writing call. So I see two ways: polish the table either before firing the query that writes it, or after. I usually write the table from R to disk, then polish it using dbGetQuery to alter the table attributes. The only problem with this workflow is that SQLite has very limited features for altering tables.
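As a hedged sketch of the "create the table first, then append" route (table, column and constraint names are made up for illustration; dbWriteTable also accepts a field.types argument for column types, but keys and other constraints still need a hand-written CREATE TABLE):

library(DBI)
library(RSQLite)

con <- dbConnect(RSQLite::SQLite(), ":memory:")

# Define the tables yourself, with types and constraints.
dbExecute(con, "CREATE TABLE groups (id INTEGER PRIMARY KEY, name TEXT)")
dbExecute(con, "
  CREATE TABLE sometablename (
    id       INTEGER PRIMARY KEY,
    group_id INTEGER,
    label    TEXT,
    value    NUMERIC,
    FOREIGN KEY (group_id) REFERENCES groups(id)
  )")

# Then append the data frame into the existing table instead of
# letting dbWriteTable() create it.
dbExecute(con, "INSERT INTO groups VALUES (1, 'demo')")
my.data <- data.frame(id = 1:3, group_id = 1L,
                      label = c("a", "b", "c"), value = c(1.5, 2.5, 3.5))
dbWriteTable(con, "sometablename", my.data, append = TRUE)

dbReadTable(con, "sometablename")
dbDisconnect(con)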

Refer database tables in R

I have a database named Team which has 40 tables. How can I connect to that database and refer to a particular table without using an SQL query, just by using R data structures?
I am not sure what you mean by "connect to that database and refer to a particular table without using an SQL query".
I am not aware of a way to "see" DB tables as R data frames or arrays without first importing the tuples through some sort of (SQL) query - this seems to be the most practical way to use R with DB data (short of going through the hassle of exporting the tables as .csv files first and re-reading them in R).
There are a couple of ways to import data from a DB into R so that the result of a query becomes an R data structure (including proper type conversion, ideally).
Here is a short guide on how to do that with SQL-R
A similar brief introduction to the DBI family
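A small sketch of the usual DBI workflow (SQLite is used for illustration and the table name is a placeholder; for another DBMS, swap in the matching driver). A query is still issued under the hood by dbReadTable, but you never write SQL yourself:

library(DBI)
library(RSQLite)

# Connect to the database file; for e.g. MySQL or Postgres you would pass
# host/user/password to the corresponding driver instead.
con <- dbConnect(RSQLite::SQLite(), "Team.sqlite")

# See which tables exist, then pull one of them into a data frame.
dbListTables(con)
one_table <- dbReadTable(con, "some_table_name")  # table name is hypothetical
head(one_table)

dbDisconnect(con)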

What is the best way to export data from FileMaker Pro 6 to SQL Server?

I'm migrating/consolidating multiple FMP6 databases to a single C# application backed by SQL Server 2008. The problem I have is how to export the data to a real database (SQL Server) so I can work on data quality and normalisation, which will be significant: there are a number of repeating fields that need to be normalised into child tables.
As I see it there are a few different options, most of which involve either connecting to FMP over ODBC and using an intermediary to copy the data across (either custom code or MS Access linked tables), or exporting to a flat file format (CSV with no header, or XML) and either using Excel to generate INSERT statements or writing some custom code to load the file.
I'm leaning towards writing some custom code to do the migration (like this article does, but in C# instead of Perl) over ODBC, but I'm concerned about the overhead of writing a migrator that will only be used once (as soon as the new system is up, the existing DBs will be archived)...
a few little joyful caveats: in this version of FMP there's only one table per file, and a single column may have multi-value attributes, separated by hex 1D, which is the ASCII group separator, of course!
Does anyone have experience with similar migrations?
I have done this in the past, but using MySQL as the backend. The method I use is to export as CSV or merge format and then use the LOAD DATA INFILE statement.
SQL Server may have something similar; maybe BULK INSERT would help.
