How to set serial data type starting point in Postgres? (postgresql-9.1)

How to set starting point for serial data type in Postgres?
A serial column takes its values from a sequence, so you restart the underlying sequence rather than the column (the ALTER TABLE ... ALTER COLUMN ... RESTART form is for identity columns, which do not exist in 9.1):
ALTER SEQUENCE employee_details_emp_id_seq RESTART WITH 1001;
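If you are unsure of the sequence name (PostgreSQL defaults to table_column_seq for serial columns), you can look it up with pg_get_serial_sequence; the table and column names below are the ones from the question:
SELECT pg_get_serial_sequence('employee_details', 'emp_id');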

Related

How to create a Kudu table in the Cloudera QuickStart VM

I have been trying to create a Kudu table in Impala on the Cloudera QuickStart VM, following this example:
https://kudu.apache.org/docs/quickstart.html
CREATE TABLE sfmta
PRIMARY KEY (report_time, vehicle_tag)
PARTITION BY HASH(report_time) PARTITIONS 8
STORED AS KUDU
AS SELECT
UNIX_TIMESTAMP(report_time, 'MM/dd/yyyy HH:mm:ss') AS report_time,
vehicle_tag,
longitude,
latitude,
speed,
heading
FROM sfmta_raw;
I get the following error:
ERROR: AnalysisException: Table property 'kudu.master_addresses' is required when the impalad startup flag -kudu_master_hosts is not used.
The VM used is cloudera-quickstart-vm-5.13.0-0-virtualbox. Thanks in advance for your help.
From the documentation:
If the -kudu_master_hosts configuration property is not set, you can
still associate the appropriate value for each table by specifying a
TBLPROPERTIES('kudu.master_addresses') clause in the CREATE TABLE
statement or changing the TBLPROPERTIES('kudu.master_addresses') value
with an ALTER TABLE statement.
So your table creation should look like this:
CREATE TABLE sfmta
PRIMARY KEY (report_time, vehicle_tag)
PARTITION BY HASH(report_time) PARTITIONS 8
STORED AS KUDU
TBLPROPERTIES ('kudu.master_addresses'='localhost:7051')
AS SELECT
UNIX_TIMESTAMP(report_time, 'MM/dd/yyyy HH:mm:ss') AS report_time,
vehicle_tag,
longitude,
latitude,
speed,
heading
FROM sfmta_raw;
7051 is the default port for the Kudu master.
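The same documentation notes you can also set the property after the fact with an ALTER TABLE statement, for example (using the same default address; adjust the host for your setup):
ALTER TABLE sfmta
SET TBLPROPERTIES ('kudu.master_addresses' = 'localhost:7051');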

Max date between two dates in Teradata

I am running the following PROC SQL to pull out the max date.
Proc sql;
Connect to TERADATA (login details);
Create table dates as
Select * from connection to TERADATA
( select max (date1,'2011-12-31') from table1
);
Quit;
Error:
Syntax error: expected something between the word 'date1' and ','
Can someone help me see where I am going wrong?
In most flavors of SQL, max is an aggregate function: it takes a single argument (a column, or whatever expression is passed to it) and returns the maximum value across that column's rows.
SAS is different in that it overloads max to also work as a row-level function.
To use that, pull the raw column into SAS and take the row-level max there:
Proc sql;
Connect to TERADATA (login details);
Create table dates as
/* '31dec2011'd is the SAS date literal for 2011-12-31 */
Select max(date1, '31dec2011'd) from connection to TERADATA
( select date1 from table1
);
Quit;
This pulls the data out of Teradata and into SAS, where that usage is legal.
You can do this in-database (push-down optimization) in Teradata if you use the GREATEST function and cast the dates to INTEGER:
Proc sql;
Connect to TERADATA (login details);
Create table dates as
Select * from connection to TERADATA
( select GREATEST (CAST(date1 AS INTEGER), CAST(CAST('2011-12-31' AS DATE) AS INTEGER)) from table1
);
Quit;
Note: I double-cast the second parameter to be on the safe side, even though it is passed to Teradata in an implicit ANSI date format. If your date column (date1) is nullable, GREATEST will return NULL for those rows, so you may need to wrap the arguments in COALESCE.
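If you want the result back as a date rather than an integer, one possible sketch (assuming your Teradata version allows casting the internal integer form back to DATE):
Proc sql;
Connect to TERADATA (login details);
Create table dates as
Select * from connection to TERADATA
( select CAST(GREATEST(CAST(date1 AS INTEGER), CAST(CAST('2011-12-31' AS DATE) AS INTEGER)) AS DATE) AS max_date from table1
);
Quit;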

Can I set my column name by referencing a column from another table?

I am using SQLite3 on my Raspberry Pi. I have a view and a table; the schemas for both are below:
CREATE VIEW [PivotTemps1hr] AS
SELECT timeslot,strftime('%Y-%m-%d.%H:%M:%S',timeslot,'localtime'),
AVG(CASE WHEN sensor_id = '28-000005e31c72' THEN value END) AS Server_Cab,
AVG(CASE WHEN sensor_id = '28-000005ea2eea' THEN value END) AS Study,
AVG(CASE WHEN sensor_id = '28-000005eb3986' THEN value END) AS Master_Bed
FROM TempsSlot1hr JOIN sensors USING (sensor_id)
GROUP BY timeslot;
CREATE TABLE sensors (sensor_id text,sensor_name text);
Note how in my PivotTemps1hr view I have hard-coded the column names, for example Server_Cab and Study.
Can I somehow make this dynamic by reading the sensor_name field from the sensors table (the join between the two already exists)? That way, if I move a sensor from one room to another, I only have to update the sensors table and everything updates automatically.
SQLite is an embedded database, designed to be used from within a 'real' programming language.
Therefore, it has no mechanism for creating dynamic SQL commands.
You have to recreate the view from outside SQLite.
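A minimal SQL-only sketch of the idea: use the sensors table to generate the CASE expressions, then have your host program splice the output into a DROP VIEW / CREATE VIEW pair (this assumes the sensor_name values are valid column identifiers):
-- Emits one pivot column per sensor; paste the output into the view body.
SELECT 'AVG(CASE WHEN sensor_id = ''' || sensor_id ||
       ''' THEN value END) AS ' || sensor_name || ','
FROM sensors;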

Extracting data files for different dates from database table

I am on Windows, on Oracle 11.0.2.
I have a table TEMP_TRANSACTION consisting of transactions for 6 months or so. Each record has a transaction date and other data with it.
Now I want to do the following:
1. Extract data from the table for each transaction date
2. Create a flat file with a name of the transaction date;
3. Output the data for this transaction date to the flat file;
4. Move on to the next date and then do the steps 1-3 again.
I created a simple SQL script that spools the data out for one transaction date, and it works. Now I want to put this in a loop (or something like it) so that it iterates over each transaction date.
I know this is asking for something from scratch but I need pointers on how to proceed.
I have PowerShell and Java at hand, and no access to Unix.
Please help!
Edit: Removed the powershell tag, as my primary goal is to get this out of Oracle (PL/SQL); only if that fails will I explore PowerShell or Java.
-Abhi
I was finally able to achieve what I was looking for. Below are the steps (maybe not the most efficient, but they worked :) ).
First, I created a SQL script which spools the data I was looking for (for a single day):
set colsep '|'
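REM new_val captures the selected spoolname value into &spoolname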
column spoolname new_val spoolname;
select 'TRANSACTION_' || substr(&1,0,8) ||'.txt' spoolname from dual;
set echo off
set feedback off
set linesize 5000
set pagesize 0
set sqlprompt ''
set trimspool on
set headsep off
set verify off
spool &spoolname
Select
local_timestamp || '|' || Field1 || '|' || field2
from << transaction_table >>
where local_timestamp = &1;
select 'XX|'|| count(1)
from <<source_table>>
where local_timestamp = &1;
spool off
exit
Next, I created a file named content.txt, populated with the local timestamp values (i.e. the transaction-date timestamps), such as:
20141007000000
20140515000000
20140515000000
Finally, I used a loop in PowerShell that picks one value at a time from content.txt and calls the SQL script from step 1, passing the value as the parameter:
PS C:\TEMP\data> $content = Get-Content C:\TEMP\content.txt
PS C:\TEMP\data> foreach ($line in $content){sqlplus user/password '@C:\temp\ExtractData.sql' $line}
And that is it!
I still have to refine a few things, but at least the idea of splitting the data is working :)
Hope this helps others who are looking for something similar.
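As one possible refinement, you could generate content.txt itself from the table, so the list of dates never goes stale. A sketch, assuming local_timestamp can be rendered with TO_CHAR (adjust if it is stored as a number or string):
set pagesize 0
set feedback off
set trimspool on
spool C:\TEMP\content.txt
select distinct to_char(local_timestamp, 'YYYYMMDDHH24MISS')
from << transaction_table >>
order by 1;
spool off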

Teradata ALTER TABLE command to modify an existing column's data type from VARCHAR to CHAR with the same length

Within Teradata, when executing an ALTER TABLE command to modify the data type of an existing column from VARCHAR(10) to CHAR(10), I receive a 3558 error indicating that the specified attribute cannot be altered. Is there an alternate way to code this, or does the column need to be dropped and re-created in order to change the data type?
You can't modify the data type when the internal storage changes, and that is the case for VARCHAR <-> CHAR.
Instead of ADD a CHAR column -> UPDATE it from the VARCHAR (which needs a huge Transient Journal) -> DROP the VARCHAR, you are better off with: create a new table -> INSERT/SELECT (no TJ) -> DROP/RENAME.
Edit: As Rob Paller suggested, using MERGE INTO instead of INSERT SELECT will avoid spooling the source table.
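A minimal sketch of that copy-and-swap approach (the table and column names are hypothetical; carry over your real indexes and constraints):
CREATE TABLE employee_new (
    emp_id INTEGER NOT NULL,
    emp_code CHAR(10)      -- was VARCHAR(10)
) PRIMARY INDEX (emp_id);

INSERT INTO employee_new
SELECT emp_id, emp_code
FROM employee;

DROP TABLE employee;
RENAME TABLE employee_new TO employee;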
