I've been stuck on the issue below and have been searching for an answer. I've also opened a ticket with the Tableau support team and am waiting for them to get back to me.
My Question:
I have some volatile tables that I created in Teradata.
In Tableau, with a live connection, I create the volatile tables in the Initial SQL, and in my Custom SQL I'm just doing a SELECT * from a volatile table, which works completely fine.
Now I have to declare a parameter in the Initial SQL (this is where a user enters a value in a parameter field on the dashboard and gets the corresponding data).
Can this be accomplished? Appreciate your responses.
Related
I can run the 'guts' of my stored procedure as a giant query.. just fine from SQL Management Studio. Furthermore, I can even right click and 'execute' the stored procedure - .. y'know.. run it as a stored procedure - from SQL Management Studio.
When my ASP.NET MVC app goes to run this stored procedure, I get issues..
System.Data.SqlClient.SqlException: Invalid object name '#AllActiveOrders'.
Does the impersonation account that ASP.NET runs under need special permissions? That can't be it.. even when I run it locally from my Visual Studio (under my login account) I also get the temp table error message.
EDIT: Furthermore, it seems to work fine when called from one ASP.NET app (which is using a WCF service / ADO.NET to call the stored procedure) but does not work from a different ASP.NET app (which calls the stored proc directly using ADO.NET)
FURTHERMORE: The MVC app that doesn't crash, does pass in some parameters to the stored procedure, while the crashing app runs the Stored Proc with default parameters (doesn't pass any in). FWIW - when I run the stored procedure in SQL Mgt. Studio, it's with default parameters (and it doesn't crash).
If it's of any worth, I did have to fix a 'String or Binary data would be truncated' issue just prior to this situation. I went into this massive query and fixed the temptable definition (a different one) that I knew to be the problem (since I had just edited it a day or so ago). I was able to see the 'String/Binary truncation' issue in SQL Mgt. Studio / as well as resolve the issue in SQL Mgt Studio.. but, I'm really stumped as to why I cannot see the 'Invalid Object name' issue in SQL Mgt. Studio
Stored procedures and temp tables generally don't mix well with strongly typed implementations of database objects (ADO, DataSets, and I'm sure there are others).
If you change your #temp table to a table variable, that should fix your issue.
(Apparently) this works in some cases:
IF 1=0 BEGIN
SET FMTONLY OFF
END
Although according to http://msdn.microsoft.com/en-us/library/ms173839.aspx, the functionality is considered deprecated.
An example of how to change from a temp table to a table variable:
create table #tempTable (id int, someVal varchar(50))
to:
declare @tempTable table (id int, someVal varchar(50))
There are a few differences between temp and var tables you should consider:
What's the difference between a temp table and table variable in SQL Server?
When should I use a table variable vs temporary table in sql server?
Ok. Figured it out with the help of my colleague who did some better Google-fu than I had done prior..
First, we CAN indeed make SQL Management Studio puke on my stored procedure by adding the FMTONLY option:
SET FMTONLY ON;
EXEC [dbo].[My_MassiveStackOfSubQueriesToProduceADigestDataSet]
GO
Now, on to my two competing ASP.NET applications... why did one of them work while the other didn't? Under the covers, both essentially used an ADO.NET System.Data.SqlClient.SqlDataAdapter to go get the data and each performed a .Fill(DataSet1)
However, the one that was crashing was trying to get the schema in advance of the data, instead of just deriving the schema after the fact.. so, it was this line of code that was killing it:
da.FillSchema(DataSet1, SchemaType.Mapped)
If you're struggling with this same issue that I've had, you may have come across forums like this from MSDN which are all over the internets - which explain the details of what's going on quite adequately. It had just never occurred to me that when I called "FillSchema" that I was essentially tripping over this same issue.
Now I know!!!
Following on from bkwdesign's answer, which found that the problem was due to ADO.NET DataAdapter.FillSchema using SET FMTONLY ON, I had a similar problem. This is how I dealt with it:
I found the simplest solution was to short-circuit the stored proc, returning a dummy recordset FillSchema could use. So at the top of the stored proc I added something like:
IF 1 = 0
BEGIN;
SELECT CAST(0 as INT) AS ID,
CAST(NULL AS VARCHAR(10)) AS SomTextCol,
...;
RETURN 0;
END;
The columns of the select statement are identical in name, data type and order to the schema of the recordset that will be returned from the stored proc when it executes normally.
The RETURN ensures that FillSchema doesn't look at the rest of the stored proc, and so avoids problems with temp tables.
I am having an issue with an SQLite database. I am using the SQLite ODBC driver from http://www.ch-werner.de/sqliteodbc/. I installed the 64-bit version and created the ODBC DSN with these settings:
I open my Access database and link to the datasource. I can open the table, add records, but cannot delete or edit any records. Is there something I need to fix on the ODBC side to allow this? The error I get when I try to delete a record is:
The Microsoft Access database engine stopped the process because you and another user are attempting to change the same data at the same time.
When I edit a record I get:
The record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made.
Saving the record is disabled; only 'Copy to Clipboard' or 'Drop Changes' is available.
My initial attempt to recreate your issue was unsuccessful. I used the following on my 32-bit test VM:
Access 2010
SQLite 3.8.2
SQLite ODBC Driver 0.996
I created and populated the test table [tbl1] as documented here. I created an Access linked table and when prompted I chose both columns ([one] and [two]) as the Primary Key. When I opened the linked table in Datasheet View I was able to add, edit, and delete records without incident.
The only difference I can see between my setup and yours (apart from the fact that I am on 32-bit and you are on 64-bit) is that in the ODBC DSN settings I left the Sync.Mode setting at its default value of NORMAL, whereas yours appears to be set to OFF.
Try setting your Sync.Mode to NORMAL and see if that makes a difference.
Edit re: comments
The solution in this case was the following:
One possible workaround would be to create a new SQLite table with all the same columns plus a new INTEGER PRIMARY KEY column which Access will "see" as AutoNumber. You can create a unique index on (what are currently) the first four columns to ensure that they remain unique, but the new "identity" (ROWID) column is what Access would use to identify rows for CRUD operations.
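A minimal sketch of that workaround, with made-up table and column names (col1 through col4 stand in for whatever the existing four key columns are):

-- New table: same columns plus an INTEGER PRIMARY KEY, which is an alias for
-- SQLite's ROWID and is what Access sees as an AutoNumber
CREATE TABLE mytable_new (
    id   INTEGER PRIMARY KEY,
    col1 TEXT,
    col2 TEXT,
    col3 TEXT,
    col4 TEXT
);

-- Keep the original four columns unique
CREATE UNIQUE INDEX ux_mytable_new ON mytable_new (col1, col2, col3, col4);

-- Copy the existing data across
INSERT INTO mytable_new (col1, col2, col3, col4)
SELECT col1, col2, col3, col4 FROM mytable;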
I had this problem too. I have a table with a primary key on a VARCHAR(30) (TEXT) field.
Adding an INTEGER PRIMARY KEY column didn't help at all. After lots of testing I found the issue was with a DATETIME field I had in the table. I removed the DATETIME field and I was able to update record values in MS-Access datasheet view.
So now any DATETIME fields I need in SQLite, I declare as VARCHAR(19) so they come into Access via ODBC as text. Not perfect, but it works. (And of course SQLite doesn't have a real DATETIME field type anyway, so TEXT is just fine and will convert OK.)
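For instance (an illustrative table and column):

-- Declared as text so the ODBC driver hands Access a plain string such as '2014-01-01 12:01:02'
CREATE TABLE log_entries (logged_at VARCHAR(19));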
I confirmed it's a number conversion issue. With an empty DATETIME field, I can add a time of 01-01-2014 12:01:02 via Access's datasheet view; if I then look at the value in SQLite, the seconds have been rounded off:
sqlite> SELECT three from TEST where FLoc='1020';
2014-01-01 12:01:00.000
Sync.Mode should also be NORMAL, not OFF.
Update:
If you have any text fields with a defined length (e.g. foo VARCHAR(10)) and the field contents contains more characters than the field definition (which SQLite allows) MS-Access will also barf when trying to update any of the fields on that row.
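For example (a made-up table; SQLite treats the declared length as advisory and does not enforce it):

CREATE TABLE t (foo VARCHAR(10));
-- SQLite happily stores 26 characters in a VARCHAR(10) column...
INSERT INTO t (foo) VALUES ('abcdefghijklmnopqrstuvwxyz');
-- ...but Access, seeing a 10-character field through ODBC, will refuse to update that row.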
I've searched all similar posts as I had a similar issue with SQLite linked via ODBC to Access. I had three tables, two of them allowed edits, but the third didn't. The third one had a DATETIME field, and when I changed the data type to a TEXT field in the original SQLite database and relinked to Access, I could edit the table. So for me it was confirmed as an issue with the DATETIME field.
After running into this problem, not finding a satisfactory answer, and wasting a lot of time trying other solutions, I eventually discovered that what others have mentioned about DATETIME fields is accurate but another solution exists that lets you keep the proper data type. The SQLite ODBC driver can convert Julian day values into the ODBC SQL_TIMESTAMP / SQL_TYPE_TIMESTAMP types by looking for floating point values in the column, if you have that option enabled in the driver. Storing dates in this manner gives the ODBC timestamp value enough precision to avoid the write conflict error, as well as letting Access see the column as a date/time field.
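For example, with that driver option enabled, a date stored as a Julian day value (a REAL column written with SQLite's julianday() function) keeps enough precision for Access; the table and column names here are made up:

CREATE TABLE events (id INTEGER PRIMARY KEY, occurred_at REAL);

-- Store the date/time as a Julian day number rather than a text string
INSERT INTO events (occurred_at) VALUES (julianday('2014-01-01 12:01:02'));

-- Convert back to a readable form when querying from the SQLite side
SELECT datetime(occurred_at) FROM events;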
Even storing sub-second precision in the date string doesn't work, which is possibly a bug in the driver because the resulting TIMESTAMP_STRUCT contains the same values, but the fractional seconds must be lost elsewhere.
I am using a global application user account to access database A. This user account does not have permissions to modify database A's schema (ie, create tables, modify tables, etc). This user also has access to database B, but only views. I need to run SQL to feed data from a view in database B into a table in database A.
In a perfect world, I would be able to use this SQL:
create table database_a.mytable as (select * from database_b.myview) with no data
However, the user can't create tables in database A. If I could get the DDL of the select statement then I could log in under my personal account (which doesn't have any access to database B) and run the DDL in database A to create the table.
The only other option is to manually write the SQL, but I don't want to do that, especially since this view I am wanting to copy has many columns of varying data types and sizes.
Edit: I may be getting closer. I just experimented with this:
show (select * from database_b.myview)
However, it generated the DDL of every single table that is used in the view itself, as well as the definition for the view. This doesn't really help me since I just want the schema of the select statement itself. In other words, I need what would be generated if I were to use the create table as statement mentioned above.
Edit for Rob: Perhaps "DDL" was the wrong term to use. Using show view db.myview just shows the definition of the view, not the schema it represents. In my above example of create table as, I show how you can create a table that mimics the schema of a result set returned in a select. It generates a DDL on the back end for creating a table and then executes that DDL to actually create the table. You can then say show table db.newtable and see the new table's DDL. I want to get that DDL directly from a select statement so that I can copy it, log out of the app account, into my personal account, and then execute the DDL to create the table.
This is only to save me the headache of having to type out the DDL manually by hand to save time and reduce typing errors, especially since the source view has so many columns. That said, I think hitting up the DBA or writing some snazzy stored procedure to do dynamic stuff would be a bit over the top for my needs. I think there has to be a way to get the DDL for creating a table schema directly from a select statement.
Generate DDL Statements for objects:
SHOW TABLE {DatabaseB}.{Table1};
SHOW VIEW {DatabaseB}.{View1};
Breakdown of columns in a view:
HELP VIEW {DatabaseB}.{View1};
However, without the ability to create the object in the target database DatabaseA you don't have much leverage. Obviously, if the object already existed, INSERT INTO SELECT ... FROM DatabaseB.Table1 or MERGE INTO would be options that you have already explored.
Alternative Solution
Would it be possible to have a stored procedure created that dynamically creates the table based on the view name that is provided? The global application account would simply need the privilege to execute the procedure. Generally the user creating the stored procedure would need the permissions to perform the actions contained within the stored procedure. (You have some additional flexibility with this in Teradata 13.10.)
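A rough sketch of what such a procedure might look like (the names are placeholders, the creating user would need the underlying CREATE TABLE rights, and DBC.SysExecSQL is used to run dynamic SQL inside a Teradata stored procedure):

REPLACE PROCEDURE {DatabaseA}.CopyViewShape
    (IN src_view  VARCHAR(256),
     IN tgt_table VARCHAR(256))
BEGIN
    -- Build and run the CREATE TABLE ... AS ... WITH NO DATA statement dynamically
    CALL DBC.SysExecSQL(
        'CREATE TABLE {DatabaseA}.' || tgt_table ||
        ' AS (SELECT * FROM {DatabaseB}.' || src_view || ')' ||
        ' WITH NO DATA NO PRIMARY INDEX;');
END;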
There are some caveats with this approach. You are attempting to materialize views that could reference anywhere from hundreds to billions of records. These aren't simple 1:1 views that are put on top of the target tables. Trying to determine the required space in the target database to materialize the view will be difficult. Performance can and will vary depending on the complexity of the view and the data volumes. This will not be a fast-path or data block optimized operation.
As a DBA, I would be concerned with this approach being taken on by a global application account without fully understanding the intent. I trust you have an open line of communication with the DBA(s) involved for supporting this system. I'm sure there are reasons for your madness that can't be disclosed here.
Possible Solution - VOLATILE TABLE
Unless the implicit privilege for CREATE TABLE has been revoked from the global application account this solution should work.
Volatile tables do not require perm space. Their table definitions persist for the duration of the session, and any data inserted into them relies on the spool space of the user who instantiated them.
CREATE VOLATILE TABLE {Global Application UserID}.{TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
SHOW TABLE {Global Application UserID}.{TableA_Copy};
I opted to use a Teradata 13.10 feature called NO PRIMARY INDEX. By default, CREATE TABLE AS will take the first column of the SELECT statement and make it the PRIMARY INDEX of the table. This could lead to skewing and perm space issues in your testing depending on the data demographics. You can specify an explicit PRIMARY INDEX on your own as you understand the underlying data. (See the DDL manuals for details on the syntax if you're uncertain.)
The use of ON COMMIT PRESERVE ROWS for this example is probably extraneous. But in reality, if you put any data into that table for testing, this clause would be beneficial in Teradata mode, as the data would otherwise be lost immediately after the CREATE TABLE or any other data manipulation was performed against the volatile table.
I would like to monitor 10 tables with 1000 records per table. I need to know when a record, and which record changed.
I have looked into SQL Dependencies, however it appears that SQL Dependencies would only be able to tell me that the table changed, and not which record changed. I would then have to compare all the records in the table to find the modified record. I suspect this would be a problem for me as the records constantly change.
I have also looked into SQL triggers; however, I am not sure if triggers would work for monitoring which record changed.
Another thought I had, is to create a "Monitoring" table which would have records added to it via the application code whenever a record is modified.
Do you know of any other methods?
EDIT:
I am using SQL Server 2008
I have looked into Change Data Capture, which is available in SQL 2008 and was suggested by Martin Smith. Change Data Capture appears to be a robust, easy-to-implement, and very attractive solution. I am going to roll out CDC on my database.
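For reference, enabling CDC comes down to a couple of system procedure calls (the schema and table names below are placeholders for the actual tables):

-- Enable CDC at the database level (requires sysadmin)
EXEC sys.sp_cdc_enable_db;

-- Enable CDC for each table to be monitored
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',   -- placeholder table name
    @role_name     = NULL;         -- NULL = no gating role required to read the changes

-- Changed rows (including before/after images for updates) can then be read
-- from the generated cdc.fn_cdc_get_all_changes_dbo_MyTable function.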
You can add triggers and have them add rows to an audit table. They can audit the primary key of the rows that changed, and even additional information about the changes. For instance, in the case of an UPDATE, they can record the columns that changed.
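A minimal sketch of that kind of trigger (the Orders table and its columns are invented for illustration):

-- Audit table recording which row changed, how, and when
CREATE TABLE OrderAudit
(
    AuditId    INT IDENTITY(1,1) PRIMARY KEY,
    OrderId    INT      NOT NULL,
    ChangeType CHAR(1)  NOT NULL,              -- 'I', 'U' or 'D'
    ChangedAt  DATETIME NOT NULL DEFAULT GETDATE()
);
GO

CREATE TRIGGER trg_Orders_Audit
ON Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Inserted and updated rows
    INSERT INTO OrderAudit (OrderId, ChangeType)
    SELECT i.OrderId, CASE WHEN d.OrderId IS NULL THEN 'I' ELSE 'U' END
    FROM inserted i
    LEFT JOIN deleted d ON d.OrderId = i.OrderId;

    -- Deleted rows
    INSERT INTO OrderAudit (OrderId, ChangeType)
    SELECT d.OrderId, 'D'
    FROM deleted d
    LEFT JOIN inserted i ON i.OrderId = d.OrderId
    WHERE i.OrderId IS NULL;
END;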
Before you write/implement your own, take a look at AutoAudit:
AutoAudit is a SQL Server (2005, 2008) Code-Gen utility that creates Audit Trail Triggers with:
Created, CreatedBy, Modified, ModifiedBy, and RowVersion (incrementing INT) columns to table
Insert event logged to Audit table
Updates old and new values logged to Audit table
Delete logs all final values to the Audit table
view to reconstruct deleted rows
UDF to reconstruct Row History
Schema Audit Trigger to track schema changes
Re-code-gens triggers when Alter Table changes the table
What version and edition of SQL Server? Is Change Data Capture available? – Martin Smith
I am using SQL 2008 which supports Change Data Capture. Change Data Capture is a very robust method for tracking data changes as I would like to. Thanks for the answer.
Here's an idea. You can have a flag column on each table that is filled with the current datetime every time a record is created or updated. Then, when you notice that a record has changed, set its flag to null again. Thus unchanged records have null in their flag field, and you can query the not-null values to see which records have been changed or created, and when (and then set their flags to null again).
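A rough illustration of that idea (table and column names invented):

-- Add a "changed at" flag column to each monitored table
ALTER TABLE MyTable ADD LastChanged DATETIME NULL;

-- The application stamps the row whenever it inserts or updates it
UPDATE MyTable SET LastChanged = GETDATE() WHERE Id = 1234;

-- The monitor reads the changed rows, then resets the flags
SELECT Id, LastChanged FROM MyTable WHERE LastChanged IS NOT NULL;
UPDATE MyTable SET LastChanged = NULL WHERE LastChanged IS NOT NULL;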
I've got this query
UPDATE linkeddb...table SET field1 = 'Y' WHERE column1 = '1234'
This takes 23 seconds to select and update one row
But if I use openquery (which I don't want to) then it only takes half a second.
The reason I don't want to use openquery is so I can add parameters to my query securely and be safe from SQL injections.
Does anyone know of any reason for it to be running so slowly?
Here's a thought as an alternative. Create a stored procedure on the remote server to perform the update and then call that procedure from your local instance.
/* On remote server */
create procedure UpdateTable
@field1 char(1),
@column1 varchar(50)
as
update table
set field1 = @field1
where column1 = @column1
go
/* On local server */
exec linkeddb...UpdateTable @field1 = 'Y', @column1 = '1234'
If you're looking for the why, here's a possibility from Linchi Shea's Blog:
To create the best query plans when you are using a table on a linked server, the query processor must have data distribution statistics from the linked server. Users that have limited permissions on any columns of the table might not have sufficient permissions to obtain all the useful statistics, and might receive a less efficient query plan and experience poor performance. If the linked server is an instance of SQL Server, to obtain all available statistics, the user must own the table or be a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role on the linked server.
(Because of Linchi's post, this clarification has been added to the latest BooksOnline SQL documentation).
In other words, if the linked server is set up with a user that has limited permissions, then SQL can't retrieve accurate statistics for the table and might choose a poor method for executing a query, including retrieving all rows.
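If that turns out to be the cause, one option (assuming the linked-server login maps to a database user on the remote side; the user name here is invented) is to grant that mapped user one of the roles mentioned above on the remote database:

/* Run on the remote (linked) server, in the relevant database */
EXEC sp_addrolemember 'db_ddladmin', 'LinkedServerUser';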
Here's a related SO question about linked server query performance. Their conclusion was: use OpenQuery for best performance.
Update: some additional excellent posts about linked server performance from Linchi's blog.
Is column1 the primary key? Probably not. Try to select the records for the update using the primary key (where PK_field = xxx); otherwise (sometimes?) all records will be read to find the PK for the records to update.
Is column1 a varchar field? Is that why you are surrounding the value 1234 with single quotation marks? Or is that simply a typo in your question?