MS Access: pass-through query with complex criteria, including SELECT statements and VBA functions (ODBC)

I currently have multiple queries that query data from a few tables linked through ODBC, and some temporary tables that are edited through the user interface. I have complex criteria in my queries such as:
SELECT * FROM ThingsData
WHERE Thing IN (SELECT Thing FROM ListOfThings) AND getThingFlag() = True;
In this case Thing is a field and ListOfThings is a temporary table that the user defines from the user interface. Basically, the user puts together a list of Thing values to filter the data on, and I want to query only the rows whose Thing matches an entry in that list. Currently, the data I am querying is in the linked ODBC table, the temp table ListOfThings is just a regular local table, and everything works peachy. I want to get rid of the linked table and use a pass-through query instead. However, when I do that, unless the criteria are incredibly simple, I get an error:
"ODBC--Call Failed. Invalid object name ListOfThings."
If I don't have any criteria, it works fine.
Long story short: in a pass-through query, how do I apply criteria that include SELECTs and functions from my modules, and basically filter the pass-through table based on data from my local tables?

What is at the other end of that ODBC link? In a pass-through query you will have to honor the syntax required by the database server, not Access syntax. I would first suspect that you can't have mixed case table names and I would try listofthings as the name.
If you have a tool that can be used to test queries directly against the database server, get the query working there and then simply cut and paste it into an Access pass-through query.
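Keep in mind that the server cannot see local Access objects at all: ListOfThings lives in your Access file and getThingFlag() is a VBA function, so neither exists on the server, which is why it reports an invalid object name. One approach (just a sketch, assuming SQL Server-style syntax at the other end) is to resolve the criteria locally first and send only literal values in the pass-through SQL, for example by building the query string in VBA before saving it to the pass-through query:
SELECT * FROM ThingsData
WHERE Thing IN ('Widget', 'Gadget', 'Gizmo');  -- hypothetical values read from the local ListOfThings table
The getThingFlag() test would likewise be evaluated in VBA, and the WHERE clause included or omitted accordingly before the string is sent.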

Related

Include a hashtag in dbGetQuery()

I'm trying to use RJDBC to connect to a SAP HANA database and query for a temporary table, which is stored with a #-prefix:
test <- dbGetQuery(jdbcConnection,
"SELECT * FROM #CONTROL_TBL")
# Error in [...]: invalid table name: Could not find table/view #CONTROL_TBL in schema USER
If I execute the SQL statement in HANA directly, it works perfectly fine. I'm also able to query permanent tables. Therefore I assume that R doesn't pass the hashtag through. However, inserting escapes like "SELECT * FROM \\#CONTROL_TBL" didn't solve my problem.
It's not possible to query for the data of a local or global temporary table from a different session, since they are by definition session-specific. In the case of a global temporary table one can query for the metadata of the table because they are shared across sessions.
Source: Tutorial for HANA temporary tables
You have to double-quote the table name because it contains special characters; see SAP Help, identifiers, for details.
test <- dbGetQuery(jdbcConnection,
'SELECT * FROM "#CONTROL_TBL"')
See also related discussion on stackoverflow.
OK, local temporary tables are only visible to the session in which they've been defined, while global temporary tables are visible just like normal tables, but their data is session-private.
So, if you created the local temp. table (name starts with #) in a different session, then no wonder it cannot be found.
For your example, the question is: why do you need a temporary table in the first place?
Instead of that, you could e.g. define a view or a table function to select data from.
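A sketch of the table-function route (SQLScript; the function name, column list, and source query are hypothetical stand-ins for whatever currently fills #CONTROL_TBL):
CREATE FUNCTION GET_CONTROL_TBL()
RETURNS TABLE (ID INTEGER, NAME NVARCHAR(100))
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
  -- the same SELECT that currently populates the temporary table
  RETURN SELECT ID, NAME FROM CONTROL_SOURCE;
END;
From R you would then query it like an ordinary object: dbGetQuery(jdbcConnection, 'SELECT * FROM GET_CONTROL_TBL()').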

Providing default value for unmapped column in SQL Compare

Is it possible to supply a default value, or a query that produces one, for an unmapped column in the target table using Redgate SQL Data Compare?
To explain the scenario: I have a configuration database that holds settings data for several database instances. The data is all the same shape, but the config database has an additional InstanceID field in most tables. This allows me to filter my compare to only the rows whose InstanceID relates to the source instance database. However, if I generate insert scripts they fail, because the target InstanceID fields are non-nullable. I want to provide a default value that is then used in the insert scripts. Is this doable?
SQL Data Compare doesn't have an easy way of doing this I'm afraid.
There is one way to do it - you could create a view that selects everything from the source table along with a computed column, which just provides the "default value" that you want to insert. Then you can map the view to the table in the target database and compare them, deploying from the result.
I hope this helps.

Accessing a TEMP TABLE in a TRIGGER on a VIEW

I need to parameterize a view, and I am doing so by creating a TEMP TABLE which has the parameters for the view.
CREATE TEMP TABLE parms (parm1 INTEGER, parm2 INTEGER);
CREATE VIEW tableview AS ...
The VIEW is rather complex, but it basically uses these two parameters to kick start a recursive CTE, and there isn't any other way that I have found to express the view without these parameters.
The parameters must be stored in a temporary table because each connection should be able to have its own view with different parameters.
In any case, this works fine for creating the view itself, so long as I create the same TEMP TABLE at the start of any queries that use the view, e.g.:
CREATE TEMP TABLE parms (parm1 INTEGER, parm2 INTEGER);
INSERT INTO parms (parm1,parm2) VALUES (5,66);
SELECT * FROM tableview;
I am able to do the same thing to create a trigger to allow inserts on the view:
CREATE TEMP TABLE parms (parm1 INTEGER, parm2 INTEGER);
CREATE TRIGGER tableinsert INSTEAD OF INSERT ON tableview ...
However, when I try to do an actual INSERT (re-creating the TEMP TABLE first as before) I get an error:
no such table: main.parms
If I create a non-temporary table, I do not get this error, but then I have the problem that different connections can't have their own separate views.
I have reviewed the documentation for triggers; it mentions caveats about using temporary triggers on a non-temporary table, but I don't see anything regarding the reverse.
I did find a reference elsewhere indicating that "the table... must exist in the same database as the table or view to which the trigger is attached". I thought a temporary table was part of the current database; is this not true? Is there some way to make it true?
I also tried accessing the parms table as temp.parms in the TRIGGER, but got the error:
qualified table names are not allowed on INSERT, UPDATE, and DELETE statements within triggers
If I can't use a temporary table, is there some way to work around it to accomplish the same thing?
Update: Ok, so it seems to be an SQLite limitation. After digging around a bit in the SQLite source code, it seems to be pretty trivial to allow SELECT access to a temporary table in a trigger. However, allowing UPDATE access appears to be a lot harder.
Temporary objects are created in a separate database named temp, so they are not accessible from triggers in other databases.
The remaining mechanism to get a connection-specific value into a trigger is to use a user-defined function.
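A sketch of that idea (the base table, its columns, and get_parm1() are hypothetical; the function would be registered on each connection via sqlite3_create_function or your driver's equivalent):
CREATE TRIGGER tableinsert INSTEAD OF INSERT ON tableview
BEGIN
  -- get_parm1() is an application-defined function registered on this
  -- connection, so each connection can supply its own parameter value
  INSERT INTO basetable (parent, value) VALUES (get_parm1(), NEW.value);
END;
Because the function is registered per connection, it replaces the role of the temp parms table without the trigger ever touching the temp database.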

Teradata: Is there a way to generate DDL from a view or select statement?

I am using a global application user account to access database A. This user account does not have permissions to modify database A's schema (ie, create tables, modify tables, etc). This user also has access to database B, but only views. I need to run SQL to feed data from a view in database B into a table in database A.
In a perfect world, I would be able to use this SQL:
create table database_a.mytable as (select * from database_b.myview) with no data
However, the user can't create tables in database A. If I could get the DDL of the select statement then I could log in under my personal account (which doesn't have any access to database B) and run the DDL in database A to create the table.
The only other option is to manually write the SQL, but I don't want to do that, especially since this view I am wanting to copy has many columns of varying data types and sizes.
Edit: I may be getting closer. I just experimented with this:
show (select * from database_b.myview)
However, it generated the DDL of every single table that is used in the view itself, as well as the definition of the view. This doesn't really help me, since I just want the schema of the select statement itself. In other words, I need what would be generated if I were to use the create table as statement mentioned above.
Edit for Rob: Perhaps "DDL" was the wrong term to use. Using show view db.myview just shows the definition of the view, not the schema it represents. In my above example of create table as, I show how you can create a table that mimics the schema of a result set returned in a select. It generates a DDL on the back end for creating a table and then executes that DDL to actually create the table. You can then say show table db.newtable and see the new table's DDL. I want to get that DDL directly from a select statement so that I can copy it, log out of the app account, into my personal account, and then execute the DDL to create the table.
This is only to save me the headache of typing out the DDL by hand and to reduce typing errors, especially since the source view has so many columns. That said, I think hitting up the DBA or writing some snazzy stored procedure to do dynamic stuff would be a bit over the top for my needs. I think there has to be a way to get the DDL for creating a table schema directly from a select statement.
Generate DDL Statements for objects:
SHOW TABLE {DatabaseB}.{Table1};
SHOW VIEW {DatabaseB}.{View1};
Breakdown of columns in a view:
HELP VIEW {DatabaseB}.{View1};
However, without the ability to create the object in the target database DatabaseA, you don't have much leverage. Obviously, if the object already existed, INSERT INTO ... SELECT ... FROM DatabaseB.Table1 or MERGE INTO would be options that you have already explored.
Alternative Solution
Would it be possible to have a stored procedure created that dynamically created the table based on the view name that is provided? The global application account would simply need privilege to execute the procedure. Generally the user creating the stored procedure would need the permissions to perform the actions contained within the stored procedure. (You have some additional flexibility with this in Teradata 13.10.)
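A sketch of such a procedure (all names are hypothetical; the procedure's creator, not the caller, needs the CREATE TABLE right, and DBC.SysExecSQL runs the dynamically built DDL):
CREATE PROCEDURE UtilDB.CreateTableFromView
  (IN SourceView VARCHAR(256), IN TargetTable VARCHAR(256))
BEGIN
  -- Build the CTAS dynamically; the global application account only
  -- needs EXECUTE privilege on this procedure.
  CALL DBC.SysExecSQL('CREATE TABLE ' || :TargetTable ||
    ' AS (SELECT * FROM ' || :SourceView || ') WITH NO DATA;');
END;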
There are some caveats with this approach. You are attempting to materialize views that could reference anywhere from hundreds to billions of records. These aren't simple 1:1 views that are put on top of the target tables. Trying to determine the required space in the target database to materialize the view will be difficult. Performance can and will vary depending on the complexity of the view and the data volumes. This will not be a fast-path or data block optimized operation.
As a DBA, I would be concerned with this approach being taken on by a global application account without fully understanding the intent. I trust you have an open line of communication with the DBA(s) involved for supporting this system. I'm sure there are reasons for your madness that can't be disclosed here.
Possible Solution - VOLATILE TABLE
Unless the implicit privilege for CREATE TABLE has been revoked from the global application account this solution should work.
Volatile tables do not require perm space. Their table definitions persist for the duration of the session, and any data inserted into them relies on the spool space of the user who instantiated it.
CREATE VOLATILE TABLE {Global Application UserID}.{TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
SHOW TABLE {Global Application UserID}.{TableA_Copy};
I opted to use a Teradata 13.10 feature called NO PRIMARY INDEX. By default, CREATE TABLE AS will take the first column of the SELECT statement and make it the PRIMARY INDEX of the table. This could lead to skewing and perm space issues in your testing depending on the data demographics. You can specify an explicit PRIMARY INDEX on your own as you understand the underlying data. (See the DDL manuals for details on the syntax if you're uncertain.)
The use of ON COMMIT PRESERVE ROWS for the intent of this example is probably extraneous. But in reality if you popped any data into that table for testing this clause would be beneficial in Teradata mode as the data would otherwise be lost immediately after the CREATE TABLE or any other data manipulation was performed against the volatile table.
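For instance (a sketch reusing the hypothetical names above), you could copy the data into the volatile table and confirm it survives the implicit commit:
INSERT INTO TableA_Copy
SELECT * FROM {DatabaseB}.{TableA};  -- data lands in spool, not perm space
SELECT COUNT(*) FROM TableA_Copy;    -- rows persist because of ON COMMIT PRESERVE ROWS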

The field is too small to accept the amount of data you attempted to add

This is odd because I'm not inserting data; I'm pulling data with a query.
I'm trying to run
SELECT DISTINCT description FROM products;
which fails with the error "The field is too small to accept the amount of data you attempted to add."
However, running the following doesn't produce the error:
SELECT description FROM products;
So I'm confused as to what the issue would be.
I'm using OleDbDataReader and taking data out of an mdb database file.
This might be related to: http://support.microsoft.com/kb/896950/us
This problem occurs because when you set the UniqueValues query property to Yes, a DISTINCT keyword is added to the resulting SQL statement. The DISTINCT keyword directs Access to perform a comparison between records. When Access performs a comparison between two Memo fields, Access treats the fields as Text fields that have a 255-character limit. Sometimes Memo field data that is larger than 255 characters will generate the error message that is mentioned in the "Symptoms" section. Sometimes only 255 characters are returned from the Memo field.
Workaround:
To work around this problem, modify the original query by removing the Memo field. Then, create a second query that is based on both the table and the original query. This new query uses all the fields from the original query, and this new query uses the Memo field from the table. When you run the second query, the first query runs. Then, this data is used to run the second query. This behavior returns the Memo field data based on the returned data of the first query.
