In Tableau Prep I have an output table that I want to import into an Oracle database.
In that output table, there's a column (file_date) with a date value (e.g. '2021-01-01'). The date value is the same for all rows.
Output table:
[screenshot: output table with file_date = '2021-01-01' in every row]
I need to write a custom SQL query (in Tableau Prep) that checks whether my Oracle table already has any rows where the date = '2021-01-01'. If so, all matching rows need to be deleted before I import the new data.
Table_1:
[screenshot: Table_1, whose first row has date_column = '2021-01-01']
Something like:
DELETE FROM table_1 WHERE date_column = '2021-01-01';
After checking, it should find that the first row has date = '2021-01-01' and delete that row.
Table_1 after:
[screenshot: Table_1 with the first row removed]
As the date changes every time new files arrive, manually entering the date in the query is not an option. Is there any way to use a value from my table in a custom SQL query?
I'm aware that Tableau Desktop allows for the creation of parameters, but that's not available in Tableau Prep.
If the output rows all share the same date, then that must also be true of the input data, so there is no reason to check for existing values first. Since a DELETE that finds no matching rows does not throw an error, checking is unnecessary: just delete.
DELETE FROM table_1
WHERE date_column = (SELECT date_column
                     FROM import_table_1
                     FETCH FIRST 1 ROW ONLY);
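If the staged data could ever contain more than one distinct date, a sketch of the same idea with an IN subquery (using the same assumed table names) would cover that case as well:
DELETE FROM table_1
WHERE date_column IN (SELECT DISTINCT date_column
                      FROM import_table_1);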
I have an external table partitioned on the Timestamp column, which is of datetime type. The external table definition looks like this:
.create external table external_mytable (mydata:dynamic,Timestamp:datetime)
kind=blob
partition by bin(Timestamp,1d)
dataformat=json
(
h@'https://<mystorage>.blob.core.windows.net/<mycontainer>;<storagekey>'
)
The source table for the export is mytable, which has a bunch of columns, but I am only interested in the column mydata, which holds the actual payload, and the columns year, month & day, which are required to drive partitioning.
My export looks like this:
.export async to table external_mytable <| mysourcetable | project mydata,Timestamp=make_datetime(year,month,day)
Now, in this case I don't ideally want the Timestamp column to be part of the actual exported JSON data. I am forced to specify it because this column drives the partitioning logic. Is there any way to avoid Timestamp appearing in the exported data while still using it to determine partitioning?
Thanks for the ask, Dhiraj; this is on our backlog. Feel free to open similar asks on our UserVoice, where we can post an update once it is complete.
I imported a SQLite DB into MS-Access using ODBC and it was successful except for one column. One of the columns in the SQLite DB was of 'Integer' datatype but it contained a string, since datatypes are flexible in SQLite. But in the imported MS-Access DB, the corresponding column is blank. I assume that Access was expecting an integer and would not accept a string. The following is the table structure:
CREATE TABLE test
(
num INTEGER,
name TEXT,
script INTEGER NOT NULL
);
I have a problem with the column 'script'. I am not allowed to change the datatype here, so is there any solution that I can work out in Access?
You can create a SQLite VIEW named "forimport" to CAST the troublesome column as TEXT:
CREATE VIEW forimport AS SELECT num, name, CAST(script AS TEXT) AS script FROM test;
and then import the view (instead of the table) from SQLite into Access.
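To sanity-check the view before importing it, you can confirm that the cast column now reports text (a quick check against the test table above):
SELECT typeof(script) FROM forimport;
-- every row should return 'text'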
When I import a CSV file into an SQLite database, it imports numbers as strings into an integer column. How can I fix this? A line from my CSV file looks like this:
31,c,BB ROSE - 031,c31,,9,7,0,"142,000",0
CSV files do not have data types; everything is a string.
To convert all values in a column into a number, use something like this:
UPDATE MyTable SET MyColumn = CAST(MyColumn AS INTEGER)
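One caveat worth knowing: CAST in SQLite converts only the longest numeric prefix of the string, so a quoted value like "142,000" from the sample line above becomes 142, not 142000. You can check what a cast will produce first:
SELECT CAST('142,000' AS INTEGER);  -- returns 142; the comma stops the conversion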
When importing CSV files, SQLite assumes all fields are text fields, so you need to perform some extra steps in order to set the correct data types.
However, it is my understanding that you cannot use the ALTER TABLE statement to modify a column in SQLite. Instead, you will need to rename the table, create a new table, and copy the data into the new table.
https://www.techonthenet.com/sqlite/tables/alter_table.php
So suppose I have an employees.csv file I want to import into SQLite database with the correct data types.
employee_id,last_name,first_name,hire_date
1001,adams,john,2010-12-12
1234,griffin,meg,2000-01-01
2233,simpson,bart,1990-02-23
First, create a SQLite database called mydb.sqlite and import employees.csv into a SQLite table called employees.
# create sqlite database called mydb.sqlite
# import data from 'employees.csv' into a SQLite table called 'employees'
# unfortunately, sqlite assumes all fields are text fields
$ sqlite3 mydb.sqlite
sqlite> .mode csv
sqlite> .import employees.csv employees
sqlite> .quit
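Optionally, you can confirm that every field came in as text before going further (a quick check using SQLite's typeof() function):
$ sqlite3 mydb.sqlite
sqlite> SELECT typeof(employee_id), typeof(hire_date) FROM employees LIMIT 1;
text|text
sqlite> .quit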
Since everything is text at this point, let's get the employees schema from the database and save it to employees.sql. We can use this to create a new script that renames the table, creates a new table, and copies the data into it.
$ sqlite3 mydb.sqlite
sqlite> .once employees.sql
sqlite> .schema employees
sqlite> .quit
You should now have employees.sql with the following schema:
CREATE TABLE employees(
"employee_id" TEXT,
"last_name" TEXT,
"first_name" TEXT,
"hire_date" TEXT
);
Let's now create a SQL file called alterTable.sql that renames the table, creates a new table, and copies the data into the new table.
alterTable.sql
PRAGMA foreign_keys=off;
BEGIN TRANSACTION;
ALTER TABLE employees RENAME TO _employees_old;
CREATE TABLE employees
( "employee_id" INTEGER,
"last_name" TEXT,
"first_name" TEXT,
"hire_date" NUMERIC
);
INSERT INTO employees ("employee_id", "last_name", "first_name", "hire_date")
SELECT "employee_id", "last_name", "first_name", "hire_date"
FROM _employees_old;
COMMIT;
PRAGMA foreign_keys=on;
Finally, we can execute the SQL in alterTable.sql and drop the old renamed table:
$ sqlite3 mydb.sqlite
sqlite> .read alterTable.sql
sqlite> drop table _employees_old;
At this point, the imported employee data should have the correct data types instead of the default text field.
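You can run the same typeof() check to confirm the new column affinities took effect. Note that hire_date stays text: a date string is not a well-formed number, so NUMERIC affinity leaves it alone.
$ sqlite3 mydb.sqlite
sqlite> SELECT typeof(employee_id), typeof(hire_date) FROM employees LIMIT 1;
integer|text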
If you do it this way, you don't have to worry about the header in the CSV file being imported as data. Other methods might require you to delete the header either before or after importing the CSV file.
You just need to create the table first with the correct types, and then the CSV import will keep those types because the table already exists.
Here is a sample:
create table table1(name TEXT, wert INT);
.mode csv
.separator ";"
.import "d:/temp/test.csv" table1
If you need to delete an imported header-line then use something like this after the import:
delete from table1 where rowid=1;
or use this in case you already did multiple imports into the same table:
delete from [table1] where "name"='name'; -- better: match on an INT column, where real data rows can never equal the header text
At the end, you can check that the import is correct like this:
.header ON
select * from table1 order by wert;
In SQLite, you cannot change the type affinities of columns. Therefore you should create your table first and then .import your CSV file into it. If your CSV file has a header, that header will be treated as data upon import. You can either delete the header before importing (in the CSV file) or delete it after import (in the table). Since the typeof all the header fields will be TEXT, you can easily find this header row in a table where some columns have numeric type affinities.
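For instance, assuming a table like the employees example above where employee_id was declared INTEGER, the header row is the only one whose value is stored as text, so it can be deleted with:
DELETE FROM employees WHERE typeof(employee_id) = 'text';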
Import the CSV file into SQLite (these steps assume a GUI tool such as DB Browser for SQLite).
Go to Database Structure and select the imported table.
Select Modify Table from the tab.
Select field one and change its name to the desired column name.
Next, select the desired data type from the drop-down menu. You can now change from Text to Integer or Numeric, depending on the data you are working with.
I am using SQLite 3.39.4, and I would do it as follows:
As suggested above, create a new table 'newtable' with the right types; then, to import the data from 'mycsvtable.csv', type:
.mode csv
.import --skip 1 mycsvtable.csv newtable
The --skip 1 option skips the first row, in case your CSV has a header.
I have a SQLite3 table that has typeless columns, as in this example:
CREATE TABLE foo(
Timestamp INT NOT NULL,
SensorID,
Value,
PRIMARY KEY(Timestamp, SensorID)
);
I have specific reasons not to declare the type of the columns SensorID and Value.
When inserting rows with numeric SensorID and Value data, I notice that the values are written as plain text into the .db file.
When I change the CREATE TABLE statement to...
CREATE TABLE foo(
Timestamp INT NOT NULL,
SensorID INT,
Value REAL,
PRIMARY KEY(Timestamp, SensorID)
);
...then the values seem to be written in some binary format to the .db file.
Since I need to write several millions of rows to the database, I have concerns about the file size this data produces and so would like to avoid value storage in plain text form.
Can I force SQLite to use a binary representation in its database file without explicitly typing the columns?
Note: Rows are currently written with PHP::PDO using prepared statements.
The example in section 3.4 of the SQLite docs about datatypes demonstrates inserting a number as an int into a column without an explicit type declaration. I guess the trick is leaving out the quotes around the number; with quotes it would be treated as a string (which, in the case of typed columns, would be coerced back into a number).
Section 2 on the page linked above also provides a lot of info about the type conversions that take place.
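As a minimal illustration of that section, assuming the foo table above: unquoted numeric literals keep their numeric storage class even in columns with no declared type, while quoted literals are stored as text.
INSERT INTO foo(Timestamp, SensorID, Value) VALUES (1, 2, 3.25);     -- stored as integer / real
INSERT INTO foo(Timestamp, SensorID, Value) VALUES (2, '2', '3.25'); -- stored as text
SELECT Timestamp, typeof(SensorID), typeof(Value) FROM foo;
-- 1|integer|real
-- 2|text|text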