I am using Teradata.NET in SQL Assistant to connect, as I have some issues with my ODBC connection.
The problem I am facing is that I am able to create a volatile table, but when I do a SELECT * it says the table doesn't exist. I am not sure if that has anything to do with the connection, as I have never used a Teradata.NET connection before. Below is my table syntax:
CREATE MULTISET VOLATILE TABLE VT
(
COL1 VARCHAR(100)
)
ON COMMIT PRESERVE ROWS;
Can anyone help me here?
Regards,
Amit
Check that you specified ON COMMIT PRESERVE ROWS; otherwise the rows are deleted as soon as the transaction ends.
Check whether SELECT SESSION; returns the same number in the window that created the table and the window that queries it; a volatile table is visible only inside the session that created it.
Restarting Teradata SQL Assistant helped me.
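A quick way to verify the session point, using only built-in Teradata commands (nothing specific to this setup is assumed):

SELECT SESSION;        -- note the session number in each query window
HELP VOLATILE TABLE;   -- lists the volatile tables that exist in this session
SELECT * FROM VT;      -- succeeds only in the session that created VT

If the session numbers differ between windows, that is the cause: each connection gets its own session, and a volatile table disappears with its session.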
Related
I have a volatile table in Teradata that I created with the code below:
CREATE VOLATILE TABLE Temp
(
ID VARCHAR(30),
has_cond INT
) ON COMMIT PRESERVE ROWS;
I want to insert records from a SELECT statement that I have created. It is a pretty big SQL statement and definitely requires a row lock before proceeding:
INSERT INTO Temp
(ID ,has_cond)
SELECT * FROM....
Can anyone tell me how to safely lock the rows so I can insert the records into my volatile table? These are production tables, and I don't want to lock out some ETL that might be happening in the background.
I don't think you can apply a row lock to an INSERT ... SELECT unless you put the SELECT in a view; a sketch of that approach follows below.
Or you switch to LOCK TABLE, but don't forget to include all the tables involved.
But in most production environments there's a database of 1:1 views that include LOCKING ROW FOR ACCESS; you can use those (or you might be doing so already; check the EXPLAIN output).
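A minimal sketch of the view approach, assuming a hypothetical source table SourceTable; the LOCKING ROW FOR ACCESS modifier inside the view makes the SELECT read with an access lock, so it neither blocks nor is blocked by concurrent writers:

REPLACE VIEW V_Source AS
LOCKING ROW FOR ACCESS
SELECT ID, has_cond
FROM SourceTable;

INSERT INTO Temp (ID, has_cond)
SELECT ID, has_cond
FROM V_Source;

The trade-off is that an access lock permits dirty reads, i.e. you may see rows another session is in the middle of changing, which is usually acceptable for this kind of staging insert.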
I am getting this error while saving my data into the table. I have already created a product_Design table in my database. I am using SQL Server 2008. Everything works fine on localhost but not on the server. I also tried to insert data into different tables and that works, but I am just not able to insert data into this product_Design table. I really need help with this.
Here is my SQL query:
insert into z3ctjholo.dbo.product_Design values(@prodID, @productName, @designName, @designPath, @finalDesign, @front, @cont, @divHeight, GETDATE(), 0, 1)
I also tried this query:
insert into product_Design values(@prodID, @productName, @designName, @designPath, @finalDesign, @front, @cont, @divHeight, GETDATE(), 0, 1)
Both queries generate the error. Please help me out.
Thanks.
So finally I found the problem. If you ever face this kind of problem, check in SQL Server whether your table belongs to a schema other than dbo. Use these statements to find out:
use yourDatabaseName
Then
SELECT * FROM INFORMATION_SCHEMA.TABLES
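If the listing is long, you can narrow the same system view down to the table in question:

SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'product_Design'

The TABLE_SCHEMA column tells you exactly which schema to qualify the table with.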
After that, if you find that your table belongs to a schema other than dbo, qualify the table name with that schema in any statement, like this:
select * from schemaName.tableName
(e.g. my schema name is z3ctjholo and my table name is product_Design)
So my statement would look like this:
select * from z3ctjholo.product_Design
What I was doing wrong: I was using a three-part name, z3ctjholo.dbo.product_Design, which treats z3ctjholo as the database and dbo as the schema, while the table actually lives in the z3ctjholo schema.
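So the corrected insert from the question only changes the qualified name (the values are the same as before):

insert into z3ctjholo.product_Design values(@prodID, @productName, @designName, @designPath, @finalDesign, @front, @cont, @divHeight, GETDATE(), 0, 1)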
I hope it will help someone.
Thanks.
There are two reasons I can find so far.
1. The connection settings in web.config are incorrect.
2. Your database uses a case-sensitive collation, so check the name's case. Maybe you created the table as Product_Design and are trying to insert into product_Design; in that case, too, the command will not work.
Please check both points.
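For point 2, you can check the collation with a built-in function (a generic check, nothing specific to this database); collation names containing CS are case sensitive:

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation')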
I set up a SQLite database connection via the ColdFusion Administrator after installing the JDBC driver, and after setting it up I got a successful connection message. I also know that I connected successfully because a simple SELECT query doesn't fail, and a CFDump of the result shows the proper columns. To test that SELECT further: if I change the table name, it does fail. So it's not a connection issue.
I am simply trying to insert records into a table and then check to see if those records were added. These are the queries I am using:
<cfquery datasource="fooDB" name="foo">
INSERT INTO FooTable
(FooColumn)
VALUES
('Test')
</cfquery>
<cfquery datasource="fooDB" name="checkIfwasSuccessful">
SELECT *
FROM FooTable
</cfquery>
This is my SQLite table definition:
CREATE TABLE FooTable (
id INTEGER PRIMARY KEY,
FooColumn TEXT,
OtherColumn1 TEXT,
OtherColumn2 TEXT
);
The CFDump of the checkIfwasSuccessful query is an empty result.
Any ideas?
Thank you in advance!
Use cftransaction to verify that your query is being committed.
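A minimal sketch of that idea, assuming the datasource from the question: wrapping the insert in an explicit transaction guarantees the commit has happened before the follow-up SELECT runs.

<cftransaction>
    <cfquery datasource="fooDB">
        INSERT INTO FooTable (FooColumn)
        VALUES ('Test')
    </cfquery>
</cftransaction>

<cfquery datasource="fooDB" name="checkIfwasSuccessful">
    SELECT * FROM FooTable
</cfquery>

If the rows show up with this version, the original problem was an uncommitted connection (auto-commit disabled on the datasource) rather than the insert itself.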
Have you tried either a) supplying an id with the insert, or b) using the AUTOINCREMENT keyword after PRIMARY KEY?
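A sketch of option b, assuming the table can simply be recreated:

CREATE TABLE FooTable (
id INTEGER PRIMARY KEY AUTOINCREMENT,
FooColumn TEXT,
OtherColumn1 TEXT,
OtherColumn2 TEXT
);

For what it's worth, in SQLite an INTEGER PRIMARY KEY column is already an alias for the rowid and gets a value automatically on insert; AUTOINCREMENT additionally prevents old ids from being reused.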
I am running a SQLite database in memory and I am attempting to drop a table with the following command.
DROP TABLE 'testing' ;
But when I execute the SQL statement, I get this error
SQL logic error or missing database
Before I run the DROP TABLE query, I check that the table exists in the database with the query below, so I am pretty sure the table exists and I have a connection to the database.
SELECT count(*) FROM sqlite_master WHERE type='table' and name='testing';
This database is loaded into memory from a file database, and after I attempt to drop the table, the database is saved from memory back to the file system. I can then use a third-party SQLite utility to view the SQLite file and check whether the "testing" table exists; it does. Using the same third-party utility, I am able to run the DROP TABLE statement without error.
I am able to create/update tables without any problems.
My questions:
Is there a difference between a memory database and a file database in SQLite when dropping a table?
Is there a way to disable the ability to drop a table in SQLite that I may have accidentally turned on somehow?
Edit: It appears to have something to do with a locked table. Still investigating.
You should not have quotes in your DROP TABLE command. Use this instead:
DROP TABLE testing
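If the name really needs quoting (a reserved word, unusual characters), use double quotes, the standard SQL identifier-quoting style that SQLite follows:

DROP TABLE "testing"

Single quotes denote a string literal rather than an identifier, which is the likely reason the original statement misbehaves.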
I had the same problem when using SQLite with the Xerial JDBC driver, version 3.7.2, and JRE 7.
I first listed all the tables with the select command as follows:
SELECT name FROM sqlite_master WHERE type='table'
And then tried to delete a table like this:
DROP TABLE IF EXISTS TableName
I was working on a database stored on the file system, so that does not seem to affect the outcome.
I used the IF EXISTS clause to avoid having to list all the tables from the master table first, but I needed the complete table list anyway.
For me the solution was simply to change the order of the SELECT and the DROP.
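In other words, this ordering worked (the same two statements from above, just swapped):

DROP TABLE IF EXISTS TableName;
SELECT name FROM sqlite_master WHERE type='table';

Running the DROP before the SELECT on sqlite_master avoided the error; the earlier SELECT apparently left the schema table locked as far as the driver was concerned.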
I've got this query:
UPDATE linkeddb...table SET field1 = 'Y' WHERE column1 = '1234'
This takes 23 seconds to select and update one row.
But if I use OPENQUERY (which I don't want to), it takes only half a second.
The reason I don't want to use OPENQUERY is so that I can add parameters to my query securely and be safe from SQL injection.
Does anyone know of any reason why it runs so slowly?
Here's a thought as an alternative: create a stored procedure on the remote server to perform the update, and then call that procedure from your local instance.
/* On the remote server */
create procedure UpdateTable
    @field1 char(1),
    @column1 varchar(50)
as
update [table]   -- substitute the real table name here
set field1 = @field1
where column1 = @column1
go

/* On the local server */
exec linkeddb...UpdateTable @field1 = 'Y', @column1 = '1234'
If you're looking for the why, here's a possibility from Linchi Shea's Blog:
To create the best query plans when you are using a table on a linked server, the query processor must have data distribution statistics from the linked server. Users that have limited permissions on any columns of the table might not have sufficient permissions to obtain all the useful statistics, and might receive a less efficient query plan and experience poor performance. If the linked server is an instance of SQL Server, to obtain all available statistics, the user must own the table or be a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role on the linked server.
(Because of Linchi's post, this clarification has been added to the latest Books Online SQL Server documentation.)
In other words, if the linked server is set up with a user that has limited permissions, then SQL Server can't retrieve accurate statistics for the table and might choose a poor plan for executing the query, including retrieving all rows.
Here's a related SO question about linked server query performance; their conclusion was to use OPENQUERY for best performance.
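If the injection risk is what rules out OPENQUERY (which cannot take parameters), note that EXEC ... AT a linked server does accept ? parameter placeholders, provided the RPC Out option is enabled for that server. A sketch, reusing the placeholder names from the question:

exec ('update dbo.[table] set field1 = ? where column1 = ?', 'Y', '1234') at linkeddb

This keeps the work on the remote side, like OPENQUERY, while still letting you pass values safely.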
Update: some additional excellent posts about linked server performance from Linchi's blog.
Is column1 the primary key? Probably not. Try selecting the records for the update via the primary key (WHERE PK_field = xxx); otherwise (sometimes?) all records may be read just to find the PKs of the rows to update.
Is column1 a varchar field? Is that why you are surrounding the value 1234 with single quotation marks? Or is that simply a typo in your question?