There's a bug-like phenomenon in the odbc library that has been a known issue for years with the older, slower RODBC library; however, the workaround solutions for RODBC do not seem to work with odbc.
The problem:
Very often a person may wish to create a SQL table from a 2-dimensional R object. In this case I'm doing so with SQL Server (i.e. T-SQL). The account used to authenticate, e.g. "sysadmin-account", may be different from the owner and creator of the database that will house the tables being created, but the account has full read/write permissions for the targeted DB.
The odbc() call to do so goes like this and runs "successfully":
library(odbc)
db01 <- odbc::dbConnect(odbc::odbc(), "UserDB_odbc_name")
odbc::dbWriteTable(db01, "UserDB.dbo.my_table", r_data)
This connects and creates a table, but instead of landing in the intended location of UserDB.dbo.my_table, the table gets created as UserDB.sysadmin-account.dbo.my_table.
Technically, dbo is a child of the UserDB database. What this call is doing is creating a new child object of UserDB called sysadmin-account, with a child dbo of its own, and then creating the table within there.
With RODBC and some other libraries/languages we found that a workaround was to change the reference to the target table location in the call to ".dbo.my_table" or, in some cases, "..dbo.my_table". Also, I think running a query to USE UserDB sometimes used to help with RODBC.
None of these solutions seems to have any effect with odbc().
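For concreteness, here is a sketch of what those attempted variants look like with odbc (the exact names are illustrative):
# None of these variants lands the table in UserDB.dbo (sketch only):
odbc::dbWriteTable(db01, ".dbo.my_table", r_data)    # schema-prefixed reference
odbc::dbWriteTable(db01, "..dbo.my_table", r_data)   # double-dot variant
DBI::dbExecute(db01, "USE UserDB")                   # RODBC-era trick: switch DB first
odbc::dbWriteTable(db01, "my_table", r_data)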
Updates
Tried the DBI library as a potential substitute, to no avail.
Found a workaround: send the data to a global temp table, then use a SQL statement to copy from the temp table to the intended location.
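A sketch of that workaround, assuming SQL Server global temp table semantics (the staging name ##my_table_stage and the copy statement are illustrative):
# Stage the data in a global temp table; the "##" prefix makes it visible
# across sessions, so a follow-up statement can read it
odbc::dbWriteTable(db01, "##my_table_stage", r_data)
# Copy into the intended schema-qualified location, then drop the stage
DBI::dbExecute(db01, "SELECT * INTO UserDB.dbo.my_table FROM ##my_table_stage")
DBI::dbExecute(db01, "DROP TABLE ##my_table_stage")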
I have an Ionic App using SQLite. I don't have any problems with implementation.
The issue is that I need to import an SQL file using SQLitePorter to populate the database with configuration info.
But also, on the same database I have user info, so my question is:
Every time I start the app, will it import the SQL file, fill the database, and probably overwrite my user data too, since it is all in the same database?
I assume that you can always init your tables using string queries inside your code. The problem is not that you are importing a .sql file, right?
According to https://www.sqlitetutorial.net/sqlite-create-table/ you can always create a table with the IF NOT EXISTS clause (the square brackets in the tutorial's syntax diagram just mark the clause as optional). Writing a query like:
CREATE TABLE IF NOT EXISTS schema_name.table_name (
    column_1 data_type PRIMARY KEY
);
you let SQLite decide whether the table needs to be created, without the risk of overwriting an existing one. You can trust that SQLite is smart enough not to overwrite any information, especially if you wrap the statements in a BEGIN TRANSACTION ... COMMIT block.
I give my answer assuming that you keep the imported data and the user data in distinct tables, so you can control what you populate and what you don't. Is that right?
What I usually do is have an SQL file like this:
DROP TABLE configuration_a;
DROP TABLE configuration_b;
CREATE TABLE configuration_a (...);
INSERT INTO configuration_a (...);
CREATE TABLE configuration_b (...);
INSERT INTO configuration_b (...);
CREATE TABLE IF NOT EXISTS user_data (...);
This means that every time the app starts, I update the configuration tables with whatever configuration data I have at that time (that is why we use http.get, to fetch the current configuration file from a remote repo in the future), and create the user data table only if user_data is not there (hopefully only on the initial start).
Conclusion: In my opinion it is always good practice to trust the database product and let it handle any operation that would be risky to implement yourself in your code, since it provides tools for exactly that. For example, the IF NOT EXISTS keyword is always safer than implementing a table checker yourself.
I hope that helps.
PS: In case you are referring to the database creation procedure: SQLite connects to a database file, and if the file doesn't exist, it creates it. For someone comfortable with the sqlite command line, typing
sqlite3 /home/user/db/configuration.db
will connect you to this db, and if the file is not there, it will create it.
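For what it's worth, the same auto-create behavior can be reproduced from R with RSQLite (a sketch; the file path is illustrative):
library(DBI)
# Connecting to a file that doesn't exist yet creates it, just like the CLI does
con <- dbConnect(RSQLite::SQLite(), "/home/user/db/configuration.db")
dbDisconnect(con)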
I'm trying to use RJDBC to connect to a SAP HANA database and query for a temporary table, which is stored with a #-prefix:
test <- dbGetQuery(jdbcConnection,
"SELECT * FROM #CONTROL_TBL")
# Error in [...]: invalid table name: Could not find table/view #CONTROL_TBL in schema USER
If I execute the SQL statement directly in HANA, it works perfectly fine. I'm also able to query permanent tables. Therefore I assume that R doesn't pass the hashtag through. Inserting escapes like "SELECT * FROM \\#CONTROL_TBL", however, didn't solve my problem.
It's not possible to query for the data of a local or global temporary table from a different session, since they are by definition session-specific. In the case of a global temporary table one can query for the metadata of the table because they are shared across sessions.
Source: Tutorial for HANA temporary tables
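For instance, the shared metadata can be inspected from another session (a sketch; M_TEMPORARY_TABLES is HANA's monitoring view for temporary tables):
# Lists temporary tables visible in the system, even from another session
meta <- dbGetQuery(jdbcConnection, "SELECT * FROM M_TEMPORARY_TABLES")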
You have to double-quote the table name because it contains special characters; see SAP Help, identifiers, for details.
test <- dbGetQuery(jdbcConnection,
'SELECT * FROM "#CONTROL_TBL"')
See also related discussion on stackoverflow.
Ok, local temporary tables are always only visible to the session in which they've been defined, while global temporary tables are visible just like normal tables, but their data is session-private.
So, if you created the local temp. table (name starts with #) in a different session, then no wonder it cannot be found.
For your example, the question is: why do you need a temporary table in the first place?
Instead of that, you could e.g. define a view or a table function to select data from.
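A sketch of the view alternative (the view name, base table, and filter are hypothetical; RJDBC's dbSendUpdate is used because the DDL statement returns no result set):
# Define a view once; unlike a local temp table, any session can select from it
RJDBC::dbSendUpdate(jdbcConnection,
    'CREATE VIEW "CONTROL_VW" AS SELECT * FROM "BASE_TBL" WHERE FLAG = 1')
test <- dbGetQuery(jdbcConnection, 'SELECT * FROM "CONTROL_VW"')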
I have been using DBExpress connections to various databases (mostly MSSQL, Sybase SQL) with:
SQLConnection -> SQLDataSet -> DataSetProvider -> ClientDataSet.
I need to connect to the databases in a fashion that does NOT write changes back to the tables.
So, the DataSetProvider has ResolveToDataSet:=false, and the ClientDataSet has LogChanges:=false (for performance).
In use I connect the SQLConnection, open the ClientDataSet, and then close the SQLConnection.
I can then manipulate the ClientDataSet without fear of changing the underlying table.
I'm new to FireDAC (XE8), and I'm looking to establish the same sort of scenario - load data into memory from a SQL query, and safely manipulate this data in memory without accidentally updating the source table(s).
I'm currently using:
FDConnection -> FDQuery and a FDMemTable
The FDQuery has CachedUpdates := true and I perform:
FDQ.Open;
FDQ.FetchAll;
FDMemT.CloneCursor(FDQ,true,false);
FDQ.Close;
I think this is pretty much equivalent - I end up with the data in an FDMemTable such that editing the data will not be able to "write back" to tables.
One other issue - in the dbExpress scenario, I often add InternalCalc Fields to the ClientDataSet. It isn't clear to me that I can do that (and have persistent field names) if I'm performing a CloneCursor operation.
Is there a simpler way of ensuring the data never updates the database? Setting the FDQuery to read-only doesn't work - I often have to modify records (but do not wish to persist these changes).
TIA.
EdB
There is a much easier way: use the FDMemTable's CopyDataSet method. This will copy both the data and the metadata. Changes to the FDMemTable will not be written to the underlying dataset, and InternalCalc fields (and calculated fields) will be copied as well, though you'll have to wire up the OnCalcFields event handler.
FDMemTable1.CopyDataSet( FDQuery1, [coStructure, coRestart, coAppend]);
I am trying to connect to a table that is not in the sys schema. The code below works if sys.tablea exists.
conn <- dbConnect(dbDriver("MonetDB"), "monetdb://localhost/demo")
frame <- monet.frame(conn,"tablea")
If I define tablea in a different schema, e.g. xyz.tablea, then I get the error message
Server says 'SELECT: no such table 'tablea'' [#NA]
The account used to connect has rights to the table.
In a related question, is it possible to use camel-case from MonetDB.R? When I change the table name to TableA, the server again responds with
Server says 'SELECT: no such table 'tablea'' [#NA]
where the table name is all lower-case.
Using tables in other schemata is not possible with the current constructor of monet.frame. However, you can work around the issue as follows:
frame <- monet.frame(conn,"select * from xyz.tablea")
This trick also works with CamelCased table names.
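For example (a sketch; MonetDB folds unquoted identifiers to lower case, so double-quoting the name inside the query preserves its case):
frame <- monet.frame(conn, 'select * from xyz."TableA"')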
For the next version, I am planning to fix the issue.
I need to be able to run a query such as
SELECT * FROM atable WHERE MyFunc(afield) = "some text"
I've written MyFunc in a VB module but the query results in "Undefined function 'MyFunc' in expression." when executed from .NET
From what I've read so far, functions in Access VB modules aren't available in .NET due to security concerns. There isn't much information on the subject, but this avenue seems like a dead end.
The other possibility is through the CREATE PROCEDURE statement which also has precious little documentation: http://msdn.microsoft.com/en-us/library/bb177892%28v=office.12%29.aspx
The following code does work and creates a query in Access:
CREATE PROCEDURE test AS SELECT * FROM atable
However I need more than just a simple select statement - I need several lines of VB code.
While experimenting with the CREATE PROCEDURE statement, I executed the following code:
CREATE PROCEDURE test AS
Which produced the error "Invalid SQL statement; expected 'DELETE', 'INSERT', 'PROCEDURE', 'SELECT', or 'UPDATE'."
This seems to indicate that there's a SQL 'PROCEDURE' statement, so then I tried
CREATE PROCEDURE TEST AS PROCEDURE
Which resulted in "Syntax error in PROCEDURE clause."
I can't find any information on the SQL 'PROCEDURE' statement - maybe I'm just reading the error message incorrectly and there's no such beast. I've spent some time experimenting with the statement but I can't get any further.
In response to the suggestions to add a field to store the value, I'll expand on my requirements:
I have two scenarios where I need this functionality.
In the first scenario, I needed to enable the user to search on the soundex of a field and, since there's no soundex SQL function in Access, I added a field to store the soundex value for every field in every table where the user wants to be able to search for a record that "sounds like" an entered value. I update the soundex value whenever the parent field value changes. It's a fair bit of overhead, but I considered it necessary in this instance.
For the second scenario, I want to normalize the spacing of a space-concatenation of field values and optionally strip out user-defined characters. I can come very close to achieving the desired value with a combination of TRIM and REPLACE functions. The value would only differ if three or more spaces appeared between words in the value of one of the fields (an unlikely scenario). It's hard to justify the overhead of an extra field on every field in every table where this functionality is needed. Unless I get specific feedback from users about the issue of extra spaces, I'll stick with the TRIM & REPLACE value.
My application is database agnostic (or just not very religious... I support 7). I wrote a UDF for each of the other 6 databases that does the space normalization and character stripping much more efficiently than the built-in database functions. It really annoys me that I can write the UDF in Access as a VB macro and use that macro within Access but I can't use it from .NET.
I do need to be able to index on the value, so pulling the entire column(s) into .NET and then performing my calculation won't work.
I think you are running into the ceiling of what Access can do (and trying to go beyond it). Access really doesn't have the power to run the kind of complex T-SQL statements you are attempting. However, there are a couple of ways to accomplish what you are looking for.
First, if the results of MyFunc don't change often, you could create a function in a module that loops through each record in atable and runs your MyFunc against it. You could either store that data in the table itself (in a new column) or you could build an in-memory dataset that you use for whatever purposes you want.
The second way of doing this is to do the manipulation in .NET since it seems you have the ability to do so. Do the SELECT statement and pull out the data you want from Access (without trying to run MyFunc against it). Then run whatever logic you want against the data and either use it from there or put it back into the Access database.
Why don't you want to create an additional field in your atable, such that atable.afieldX = MyFunc(atable.afield)? All you need is to run an UPDATE command once.
You should try writing a SQL Server function MyFunc. This way you will be able to run the same query in SQL Server and in Access.
A few useful links so you can get started:
MSDN article about user-defined functions: http://msdn.microsoft.com/en-us/magazine/cc164062.aspx
SQL Server user-defined functions: http://www.sqlteam.com/article/intro-to-user-defined-functions-updated
SQL Server string functions: http://msdn.microsoft.com/en-us/library/ms181984.aspx
What version of JET (now called Ace) are you using?
I mean, it should come as no surprise that if you're going to use some Access VBA code, then you need the VBA library and a copy of MS Access loaded and running.
However, in Access 2010 we now have table triggers and stored procedures. These stored procedures do NOT require VBA and in fact run at the engine level. I have a table trigger and soundex routine here that shows how this works:
http://www.kallal.ca/searchw/WebSoundex.htm
The above means that if Access, or VB.NET, or even FoxPro via ODBC modifies a row, the table trigger code will fire, run, and save the soundex value in a column for you. This feature also works if you use the new web publishing feature in Access 2010. So, while the above article is written from the point of view of using Access web services (available in Office 365 and SharePoint), the above soundex table trigger will also work in a standalone, Access/JET (ACE)-only application.