I am trying to connect to a table that is not in the sys schema. The code below works if sys.tablea exists.
library(MonetDB.R)
conn <- dbConnect(dbDriver("MonetDB"), "monetdb://localhost/demo")
frame <- monet.frame(conn, "tablea")
If I define tablea in a different schema, e.g. xyz.tablea, then I get the error message
Server says 'SELECT: no such table 'tablea'' [#NA]
The account used to connect has rights to the table.
In a related question, is it possible to use camel-case from MonetDB.R? When I change the table name to TableA, the server again responds with
Server says 'SELECT: no such table 'tablea'' [#NA]
where the table name is all lower-case.
Using tables in other schemata is not possible with the current constructor of monet.frame. However, you can work around the issue as follows:
frame <- monet.frame(conn,"select * from xyz.tablea")
This trick also works with CamelCased table names.
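For example, a hedged sketch of why this works (assuming standard MonetDB behavior of folding unquoted identifiers to lower case, so TableA and tablea name the same table):
# the server folds the unquoted CamelCase name to lower case on its own
frame <- monet.frame(conn, "SELECT * FROM xyz.TableA")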
For the next version, I am planning to fix the issue.
I have an Ionic app using SQLite, and the implementation itself works fine. The issue is that I need to import an SQL file using SQLitePorter to populate the database with configuration info. But the same database also holds user info, so my question is: every time the app starts, will importing the SQL file repopulate the database and overwrite my user data as well, since it all lives in the same database?
I assume you could just as well initialize your tables with string queries inside your code; the problem is not that you are importing a .sql file, right?
As https://www.sqlitetutorial.net/sqlite-create-table/ shows, you can always create a table with the optional [IF NOT EXISTS] clause. Writing a query like:
CREATE TABLE [IF NOT EXISTS] [schema_name].table_name (
column_1 data_type PRIMARY KEY);
lets SQLite decide whether the table needs to be created at all, without the risk of overwriting an existing one. You can trust SQLite to be smart enough not to overwrite any information, especially if you wrap the statements in a BEGIN TRANSACTION ... COMMIT block.
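For example, a minimal sketch of that pattern (the table and its columns are made up for illustration):
BEGIN TRANSACTION;
CREATE TABLE IF NOT EXISTS user_data (
    id   INTEGER PRIMARY KEY,
    name TEXT
);
COMMIT;
On the first run this creates user_data; on every later run the statement is a no-op and existing rows are left alone.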
This answer assumes that the imported configuration data and the user data live in distinct tables, so you can control what gets repopulated and what doesn't. Is that right?
What I usually do, is to have a sql file like this:
DROP TABLE IF EXISTS configuration_a;
DROP TABLE IF EXISTS configuration_b;
CREATE TABLE configuration_a (...);
INSERT INTO configuration_a (...);
CREATE TABLE configuration_b (...);
INSERT INTO configuration_b (...);
CREATE TABLE IF NOT EXISTS user_data (...);
This means that every time the app starts, I update the configuration tables with whatever configuration data I have at that time (which is why, in the future, we use http.get to fetch the current configuration file from a remote repo), and I create the user data table only if it is not already there (hopefully only on the initial start).
Conclusion: In my opinion, it is always good practice to trust the database product and let it handle any operation that would be risky to implement yourself in your code, since it provides tools for exactly that. For example, the [IF NOT EXISTS] clause is always safer than implementing a table checker yourself.
I hope that helps.
PS: In case you are referring to the create-database step: when SQLite connects to a database file that does not exist, it creates it. For someone comfortable on the SQLite command line, typing
sqlite3 /home/user/db/configuration.db
will connect you to this database and, if the file is not there, create it.
There's a bug-like phenomenon in the odbc library that has been a known issue for years in the older, slower RODBC library; however, the work-around solutions for RODBC do not seem to work with odbc.
The problem:
Very often a person may wish to create a SQL table from a 2-dimensional R object, in this case with SQL Server (i.e. T-SQL). The account used to authenticate, e.g. "sysadmin-account", may be different from the owner and creator of the database that will house the tables being created, but the account has full read/write permissions for the targeted DB.
The odbc call to do so goes like this, and runs "successfully":
library(odbc)
# DSN that authenticates as "sysadmin-account"
db01 <- odbc::dbConnect(odbc::odbc(), "UserDB_odbc_name")
# intended target: the dbo schema of the UserDB database
odbc::dbWriteTable(db01, "UserDB.dbo.my_table", r_data)
This connects and creates a table, but instead of creating the table in the intended location of UserDB.dbo.my_table, it gets created in UserDB.sysadmin-account.dbo.my_table.
Technically, dbo is a child of the UserDB database. What this call does is create a new child object of UserDB called sysadmin-account, with a child dbo of its own, and then create the table in there.
With RODBC and some other libraries/languages we found that a work-around solution was to change the reference to the target table location in the call to ".dbo.my_table" or in some cases "..dbo.my_table". Also I think running a query to use UserDB sometimes used to help with RODBC.
None of these solutions seems to have any effect with odbc.
Updates:
- Tried the DBI library as a potential substitute, to no avail.
- Found a work-around: send the data to a global temp table, then use a SQL statement to copy it from the temp table to the intended location.
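For what it's worth, here is a sketch of another route that has worked in comparable setups, assuming your DBI version supports DBI::Id(): spell out catalog, schema, and table as separate components so the dotted string is not treated as one long table name.
library(DBI)
library(odbc)
db01 <- dbConnect(odbc(), "UserDB_odbc_name")
# name the catalog/schema/table parts explicitly so the driver quotes
# each one, instead of nesting the whole dotted string under the
# login's default schema
target <- Id(catalog = "UserDB", schema = "dbo", table = "my_table")
dbWriteTable(db01, target, r_data)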
With the release of dplyr 0.7.0, it is now supposedly easy to connect to Oracle using the odbc package. However, I am running into a problem accessing tables that are not inside the default schema (for me, my username). For example, suppose there is the table TEST_TABLE in schema TEST_SCHEMA. Then example SQL syntax to get data would be: select * from TEST_SCHEMA.TEST_TABLE.
To do the same in dplyr, I am trying the following:
# make database connection using odbc
oracle_con <- DBI::dbConnect(odbc::odbc(), "DB")
# attempt to get table data
tbl(oracle_con, 'TEST_SCHEMA.TEST_TABLE')
Now, this leads to an error message:
Error: <SQL> 'SELECT *
FROM ("TEST_SCHEMA.TEST_TABLE") "zzz12"
WHERE (0 = 1)'
nanodbc/nanodbc.cpp:1587: 42S02: [Oracle][ODBC][Ora]ORA-00942: table or view does not exist
I think the problem here is the double quotation marks, as:
DBI::dbGetQuery(oracle_con, "select * from (TEST_SCHEMA.TEST_TABLE) where rownum < 100;")
works fine.
I struggled with this for a while until I found the solution at the bottom of the introduction to dbplyr. The correct syntax to specify the schema and table combo is:
tbl(oracle_con, dbplyr::in_schema('TEST_SCHEMA', 'TEST_TABLE'))
As an aside, I think the issue with quotation marks is lodged here: https://github.com/tidyverse/dplyr/issues/3080
There are also the following alternate work-arounds that may be suitable depending on what you wish to do. Since the connection used DBI, one can alter the schema via:
DBI::dbSendQuery(oracle_con, "alter session set current_schema = TEST_SCHEMA")
after which tbl(oracle_con, 'TEST_TABLE') will work.
Or, if you have create view privileges, you can create a "shortcut" in your default schema to any table you are interested in:
DBI::dbSendQuery(oracle_con, "CREATE VIEW TEST_TABLE AS SELECT *
FROM TEST_SCHEMA.TEST_TABLE")
Note that the latter may be more suitable for applications where you wish to copy local data to the database for a join, but do not have write access to the table's original schema.
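To illustrate that last point, here is a hypothetical sketch (my_local_df and the ID column are made-up names) of copying local data to the database and joining it against the view:
library(dplyr)
# upload a small local data frame into a temporary table on the server
lookup <- copy_to(oracle_con, my_local_df, "MY_LOOKUP", temporary = TRUE)
# join it against the view that points at TEST_SCHEMA.TEST_TABLE
tbl(oracle_con, "TEST_TABLE") %>%
  inner_join(lookup, by = "ID") %>%
  collect()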
I'm trying to use RJDBC to connect to a SAP HANA database and query a temporary table, which is stored with a #-prefix:
test <- dbGetQuery(jdbcConnection,
"SELECT * FROM #CONTROL_TBL")
# Error in [...]: invalid table name: Could not find table/view #CONTROL_TBL in schema USER
If I execute the SQL statement directly in HANA, it works perfectly fine. I'm also able to query permanent tables, so I assume R doesn't pass the hashtag through. Inserting escapes like "SELECT * FROM \\#CONTROL_TBL", however, didn't solve the problem.
It's not possible to query the data of a local or global temporary table from a different session, since that data is by definition session-specific. In the case of a global temporary table, one can query the table's metadata, because the metadata is shared across sessions.
Source: Tutorial for HANA temporary tables
You have to double-quote the table name because it contains special characters; see SAP Help, identifiers, for details.
test <- dbGetQuery(jdbcConnection,
'SELECT * FROM "#CONTROL_TBL"')
See also related discussion on stackoverflow.
Ok, local temporary tables are only ever visible to the session in which they were defined, while global temporary tables are visible just like normal tables but keep their data private to each session.
So, if you created the local temp. table (name starts with #) in a different session, then no wonder it cannot be found.
For your example, the question is: why do you need a temporary table in the first place?
Instead of that, you could e.g. define a view or a table function to select data from.
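For instance, a hedged sketch with RJDBC (SOME_SOURCE_TABLE is a placeholder for wherever the control data actually comes from):
library(RJDBC)
# create an ordinary view instead of a session-local temporary table
dbSendUpdate(jdbcConnection,
             'CREATE VIEW "CONTROL_VW" AS SELECT * FROM "SOME_SOURCE_TABLE"')
# unlike #CONTROL_TBL, the view is visible from any session
test <- dbGetQuery(jdbcConnection, 'SELECT * FROM "CONTROL_VW"')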
I have an Access database with a couple of tables, and I want to work with just one of them. I am using the RODBC library. Let's say the table I want to work with is called dtsample, and my Access database is called database.accdb.
Here is my code:
library(RODBC)
dataconnect <- odbcConnectAccess2007("database.accdb")
data <- sqlQuery(dataconnect, "SELECT * dtsample columb1, columb2 ...")
but it does not work. How can I define the table in Access that I want to work with?
Your solution is not really a solution; you just got around learning about SELECT:
data <- sqlQuery(dataconnect, "SELECT * FROM dtsample WHERE Columb1 = 'a' OR Columb1 = 'b'")
My suggestion if you are not fluent in SQL: use the query designer in Access and, when the query works, get the generated SQL code (View: SQL) and paste it into R.
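For example, the SQL that the designer generates typically looks like the query below and can be passed to sqlQuery verbatim (the column names here are just the ones from the question):
data <- sqlQuery(dataconnect,
                 "SELECT dtsample.columb1, dtsample.columb2
                  FROM dtsample
                  WHERE dtsample.columb1 IN ('a', 'b');")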