I want to know the equivalent of the Teradata BTEQ "CREATE SET TABLE" statement in Snowflake SQL. I'm working on query conversion from BTEQ to Snowflake. Is there any direct syntax? If not, how can I create a SET table (one that allows only unique rows)?
Snowflake doesn't have this functionality, and I don't know any database other than Teradata that does.
You can try to emulate it, e.g. by always loading data through a temporary staging table and then a MERGE or INSERT .. SELECT that explicitly avoids duplicates (on the loading side), or by accessing the data through a view that does SELECT DISTINCT * FROM table (see the sketch below).
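A minimal sketch of the loading-side approach in Snowflake, assuming a hypothetical target table t with columns c1 and c2 and a staging table t_stage with the same shape:

-- Load new rows into the staging table first, then merge,
-- inserting only rows that do not already exist in the target.
MERGE INTO t
USING (SELECT DISTINCT c1, c2 FROM t_stage) s
  ON t.c1 = s.c1 AND t.c2 = s.c2
WHEN NOT MATCHED THEN
  INSERT (c1, c2) VALUES (s.c1, s.c2);

-- Alternatively, expose the table through a deduplicating view:
CREATE OR REPLACE VIEW t_dedup AS
SELECT DISTINCT * FROM t;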
I would like to load text from a field in a SQLite table and run it as a SQLite query, all done within a SQLite query itself. No external string operations or command-line operations are possible; pure SQLite only.
Let's say that I would create a table command_table with the rows:
COMMAND_NAME   COMMAND
command1       SELECT * FROM table1
command2       SELECT * FROM table1 WHERE table1.row1 = '1'
The desired SQLite command would be able to load the COMMAND and interpret it.
The commands can be arbitrarily complex, so using some generic comparison like WHERE table1.row1 = command_table.command1 is not an option.
SQLite is designed as an embedded database, i.e., to be used together with a 'real' programming language. Therefore, it does not have any mechanism to execute dynamic SQL statements from within SQL itself.
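In pure SQL you can only retrieve the stored statement text; actually executing it has to happen in the host application. A minimal sketch, assuming the command_table layout from the question:

-- Fetch the stored statement text; the host program must then
-- prepare and execute the returned string as a new query.
SELECT COMMAND
FROM command_table
WHERE COMMAND_NAME = 'command1';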
The only thing I don't have an automated tool for when working with Oracle is a program that can create INSERT INTO scripts.
I don't desperately need it so I'm not going to spend money on it. I'm just wondering if there is anything out there that can be used to generate INSERT INTO scripts given an existing database without spending lots of money.
I've searched through Oracle with no luck in finding such a feature.
It exists in PL/SQL Developer, but it errors out on BLOB fields.
Oracle's free SQL Developer will do this:
http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html
You just find your table, right-click on it and choose Export Data->Insert
This will give you a file with your insert statements. You can also export the data in SQL*Loader format.
You can do that in PL/SQL Developer v10.
1. Click on the table that you want to generate the script for.
2. Click Export data.
3. Check that the table you want to export data for is selected.
4. Click on the SQL inserts tab.
5. Add a where clause if you don't need the whole table.
6. Choose the file where the SQL script will be written.
7. Click Export.
Use a SQL function (I'm the author):
https://github.com/teopost/oracle-scripts/blob/master/fn_gen_inserts.sql
Usage:
select fn_gen_inserts('select * from tablename', 'p_new_owner_name', 'p_new_table_name')
from dual;
where:
p_sql – the dynamic query that will be used to export the metadata rows
p_new_owner_name – the owner name that will be used in the generated INSERT statements
p_new_table_name – the table name that will be used in the generated INSERT statements
p_sql in this sample is 'select * from tablename'
You can find original source code here:
http://dbaora.com/oracle-generate-rows-as-insert-statements-from-table-view-using-plsql/
Ashish Kumar's script generates individually usable insert statements instead of a SQL block, but supports fewer datatypes.
I have been searching for a solution for this and found it today. Here is how you can do it.
Open Oracle SQL Developer Query Builder
Run the query
Right-click on the result set and choose Export (screenshot: http://i.stack.imgur.com/lJp9P.png).
You might execute something like this in the database:
select "insert into targettable(field1, field2, ...) values(" || field1 || ", " || field2 || ... || ");"
from targettable;
Something more sophisticated is here.
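Note that character and date columns have to be wrapped in quotes inside the generated statements. A hedged sketch of the same idea, assuming field1 is numeric and field2 is a VARCHAR2 column (adjust to your own columns):

-- Character values are wrapped in single quotes,
-- with any embedded quotes doubled.
select 'insert into targettable (field1, field2) values ('
       || field1 || ', '
       || '''' || replace(field2, '''', '''''') || ''');'
from targettable;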
If you have an empty table, the Export method won't work. As a workaround, I used the table view of Oracle SQL Developer, clicked on Columns, sorted by Nullable so that NO was on top, and then selected the non-nullable columns using Shift+click for the range.
This allowed me to do one base insert, so that Export could then prepare a proper all-columns insert.
If you have to load a lot of data into tables on a regular basis, check out SQL*Loader or external tables. They should be much faster than individual INSERTs; a sketch is shown below.
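A minimal external-table sketch, assuming a hypothetical comma-separated file targettable.csv in an Oracle directory object called data_dir (both names are illustrative):

-- The external table reads the flat file in place; the real table
-- can then be loaded with a single INSERT .. SELECT.
CREATE TABLE targettable_ext (
  field1 NUMBER,
  field2 VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('targettable.csv')
);

INSERT INTO targettable SELECT * FROM targettable_ext;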
You can also use MyGeneration (a free tool) to write your own SQL generation scripts. There is an "insert into" script for SQL Server included with MyGeneration, which can easily be changed to run against Oracle.
Put simply, can I use an ADO NET Source task to query a Teradata VOLATILE TABLE?
For context: using Teradata SQL Assistant, I can easily create a Teradata VOLATILE TABLE, insert data into it and select data from it. In Visual Studio, using SSIS SQL Tasks, I am also able to create and insert data into a Teradata VOLATILE TABLE. However, because the table does not actually exist yet, it appears we cannot use a separate ADO NET Source task to select data from it, meaning we also cannot map the columns. We get the error "[Teradata Database][3807] Object 'TABLE_NAME' does not exist."
If the data in a VOLATILE TABLE, and more accurately the VOLATILE TABLE column definitions, are only available at run time, can an ADO NET Source task be used to query a Teradata VOLATILE TABLE? If so, how?
This is really old and I'm not sure if it will work, but you can turn off design-time validation (for example, set the source component's ValidateExternalMetadata property to False); that might do what you are wanting.
I wanted to add a constraint to an existing column in my SQLite database. However, I read that it is not possible to do so.
I tried the solution from How do I rename a column in a SQLite database table?, but it seems to be missing the copying of all the metadata.
I pretty much want an exact copy of a given table, except for the new constraints.
What does the INSERT command look like to copy all the metadata, so that the indexes will increase correctly, for example?
I'm not a heavy user of sqlite3, but you can use the command line to get the data plus the "create table" and "create index" commands. In this example I am using the 'History' DB from the Google Chrome browser, which has a table called "visits". The '.mode insert visits' command tells sqlite3 to format query output as INSERT statements that can be used to re-load the data into the "visits" table. The '.schema visits' command shows the 'create table' and 'create index' statements for that table. The 'select ...' statement gives you the data. The database I used doesn't seem to have any foreign key constraints, but they would be part of the 'create table' output if your DB has any.
sqlite3 History
.mode insert visits
.schema visits
select * from visits;
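Once you have the original schema, the usual way to add a constraint is to rebuild the table. A minimal sketch, assuming a hypothetical table t with columns id and name and a new UNIQUE constraint on name:

-- 1. Create the new table with the extra constraint.
CREATE TABLE t_new (
  id   INTEGER PRIMARY KEY,
  name TEXT UNIQUE
);
-- 2. Copy the data, keeping the primary key values.
INSERT INTO t_new (id, name) SELECT id, name FROM t;
-- 3. Swap the tables, then recreate any indexes and triggers from the .schema output.
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;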
I am running a SQLite database in memory and I am attempting to drop a table with the following command.
DROP TABLE 'testing' ;
But when I execute the SQL statement, I get this error
SQL logic error or missing database
Before I run the "DROP TABLE" query, I check that the table exists in the database with the query below, so I am pretty sure that the table exists and that I have a connection to the database.
SELECT count(*) FROM sqlite_master WHERE type='table' and name='testing';
This database is loaded into memory from a file database, and after I attempt to drop this table the database is saved from memory back to the file system. I can then use a third-party SQLite utility to view the SQLite file and check whether the "testing" table exists; it does. Using the same third-party SQLite utility I am able to run the "DROP TABLE" SQL statement without error.
I am able to create/update tables without any problems.
My questions:
Is there a difference between a memory database and a file database in SQLite when dropping a table?
Is there a way to disable the ability to drop a table in SQLite that I may have accidentally turned on somehow?
Edit: It appears to have something to do with a locked table. Still investigating.
You should not have quotes in your DROP TABLE command. Use this instead:
DROP TABLE testing
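If the table name does need quoting (for example, it is a reserved word or contains special characters), use identifier quotes rather than string quotes:

DROP TABLE "testing"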
I had the same problem when using SQLite with the Xerial JDBC driver, version 3.7.2, and JRE 7.
I first listed all the tables with the select command as follows:
SELECT name FROM sqlite_master WHERE type='table'
And then tried to delete a table like this:
DROP TABLE IF EXISTS TableName
I was working on a database stored on the file system, so that does not seem to affect the outcome.
I used the IF EXISTS clause to avoid having to list all the tables from the master table first, but I needed the complete table list anyway.
For me the solution was simply to change the order of the SELECT and DROP.
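A sketch of the working order (the open result set from the earlier SELECT apparently kept the database locked until it was fully consumed):

-- Drop first, then list the remaining tables.
DROP TABLE IF EXISTS TableName;
SELECT name FROM sqlite_master WHERE type='table';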