I want to know if it's possible to query SAS EG data from the SSMS interface in a new query and store the results (if any) in a temp table.
I've seen a lot of material on connecting SAS EG to SSMS, but not much on connecting SSMS to SAS EG. I primarily use SSMS and want to pass SAS code (or something equivalent) from SSMS and retrieve the data.
BEGIN
    -- Pseudocode: some function or procedure callable from SSMS that runs
    -- SAS code and returns its result set
    SELECT * INTO #GetSasData FROM (
        SomeFunctionOrProcedureThatPullsSASData('proc sql; create table A as select ... ; quit;')
    ) AS [A];

    SELECT * FROM #GetSasData;
END
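The closest real mechanism I can imagine is OPENQUERY over a linked server, if a SAS provider (e.g. the SAS ODBC/OLE DB provider for SAS/SHARE) can be registered as one. A hedged sketch; the linked-server name SAS_SVR and the SAS table below are made up:

-- assumes a linked server SAS_SVR pointing at SAS has already been configured;
-- the pass-through text is sent to SAS as-is
SELECT *
INTO #GetSasData
FROM OPENQUERY(SAS_SVR, 'SELECT * FROM sashelp.class');

SELECT * FROM #GetSasData;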
I am working on a system where I need to create a view. I have two databases:
1. CDR_DB
2. EMS_DB
I want to create the view on EMS_DB using a table from CDR_DB. I am trying to do this via a dblink.
The dblink is created at runtime, i.e. the DB name is decided when the user installs the database, and the dblink is derived from that dbname.
My issue is that I am trying to create a view from a table whose name is decided at runtime. Please see the query below:
select count(*)
from (SELECT CONCAT('cdr_log#', alias) db_name
FROM ems_dbs a,
cdr_manager b
WHERE a.db_type = 'CDR'
and a.ems_db_id = b.cdr_db_id
and b.op_state = 4 ) db_name;
In this query, cdr_log#"db_name" is the runtime table name (db_name gets created at runtime).
When I run the above query, I am not getting the desired result: the result of the above query is '1'.
When running only the sub-query from the above query:
SELECT CONCAT('cdr_log#', alias) db_name
FROM ems_dbs a,
cdr_manager b
WHERE a.db_type = 'CDR'
and a.ems_db_id = b.cdr_db_id
and b.op_state = 4;
I'm getting the desired result, i.e. cdr_log#cdrdb01,
but when I run the full query, I get the result '1'.
Also, when I run:
select count(*) from cdr_log#cdrdb01;
I get the result '24', which is correct.
The expected result is the same output as the query:
select count(*) from cdr_log#cdrdb01;
---24
But the full query mentioned initially returns '1' instead.
Please let me know a way to solve this problem. I found a way to do it via a procedure, but I'm not sure how to invoke that procedure.
Can this be done as part of a sub-query, as I have used above?
You're not going to be able to create a view that dynamically references an object over a database link unless you do something like create a pipelined table function that builds the SQL dynamically. (Incidentally, your original query returns '1' because the outer count(*) counts the rows of the subquery, which is a single row containing the string 'cdr_log#cdrdb01', not the rows of the table that the string names.)
If the database link is created and named dynamically at installation time, it would probably make the most sense to create any objects that depend on the database link (such as the view) at installation time too. Dynamic SQL tends to be much harder to write, maintain, and debug than static SQL, so it makes sense to minimize the amount of dynamic SQL you need. If you can dynamically create the view at installation time, that's likely the easiest option.

Even better than directly referencing the remote object in the view, particularly if there are multiple objects that need to reference the remote object, would probably be to have the view reference a synonym and create the synonym at install time. Something like
create synonym cdr_log_remote
  for cdr_log#<<dblink name>>;

create or replace view view_name
as
select *
  from cdr_log_remote;
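Since the dblink name is only known at install time, the synonym itself would be created dynamically by the installer. A hedged sketch, reusing the lookup query from the question:

declare
  l_dblink_name varchar2(100);
begin
  select alias
    into l_dblink_name
    from ems_dbs a,
         cdr_manager b
   where a.db_type = 'CDR'
     and a.ems_db_id = b.cdr_db_id
     and b.op_state = 4;

  -- DDL must go through dynamic SQL inside PL/SQL
  execute immediate 'create synonym cdr_log_remote for cdr_log#' || l_dblink_name;
end;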
If you don't want to create the synonym/ view at installation time, you'd need to use dynamic SQL to reference the remote object. You can't use dynamic SQL as the SELECT statement in a view so you'd need to do something like have a view reference a pipelined table function that invokes dynamic SQL to call the remote object. That's a fair amount of work but it would look something like this
-- Define an object that has the same set of columns as the remote object
create type typ_cdr_log as object (
col1 number,
col2 varchar2(100)
);
create type tbl_cdr_log as table of typ_cdr_log;
create or replace function getAllCDRLog
return tbl_cdr_log
pipelined
is
l_rows tbl_cdr_log;  -- must be the collection type to be a BULK COLLECT target
l_sql varchar2(1000);
l_dblink_name varchar2(100);
begin
SELECT alias db_name
INTO l_dblink_name
FROM ems_dbs a,
cdr_manager b
WHERE a.db_type = 'CDR'
and a.ems_db_id = b.cdr_db_id
and b.op_state = 4;
-- each row must be constructed as an instance of the object type so it
-- can be bulk collected into the collection
l_sql := 'SELECT typ_cdr_log(col1, col2) FROM cdr_log#' || l_dblink_name;
execute immediate l_sql
bulk collect into l_rows;
for i in 1 .. l_rows.count
loop
pipe row( l_rows(i) );
end loop;
return;
end;
create or replace view view_name
as
select *
from table( getAllCDRLog );
Note that this will not be a particularly efficient way to structure things if there are a large number of rows in the remote table since it reads all the rows into memory before starting to return them back to the caller. There are plenty of ways to make the pipelined table function more efficient but they'll tend to make the code more complicated.
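For illustration, a hedged sketch of one such refinement: fetch through a ref cursor in fixed-size batches instead of collecting everything at once (same hypothetical names as above):

create or replace function getAllCDRLog_batched
return tbl_cdr_log
pipelined
is
  l_rows        tbl_cdr_log;
  l_sql         varchar2(1000);
  l_dblink_name varchar2(100);
  l_cursor      sys_refcursor;
begin
  select alias
    into l_dblink_name
    from ems_dbs a,
         cdr_manager b
   where a.db_type = 'CDR'
     and a.ems_db_id = b.cdr_db_id
     and b.op_state = 4;

  l_sql := 'SELECT typ_cdr_log(col1, col2) FROM cdr_log#' || l_dblink_name;

  open l_cursor for l_sql;
  loop
    fetch l_cursor bulk collect into l_rows limit 100;  -- 100 rows per batch
    exit when l_rows.count = 0;
    for i in 1 .. l_rows.count
    loop
      pipe row( l_rows(i) );
    end loop;
  end loop;
  close l_cursor;
  return;
end;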
My Teradata query creates a volatile table that is used to join to existing views. When linking the query to Excel, the following error pops up: "Teradata: [Teradata Database] [3932] Only an ET or null statement is legal after a DDL Statement". Is there a workaround for someone who does not have write permissions in Teradata to create a real view or table? I want to avoid linking to Teradata in SQL and running an open query to pull in the data needed.
This is for Excel 2016 64bit and using Teradata version 15.10.1.12
Normally this error will occur if you are using ANSI mode or have issued a BT (Begin Transaction) in BTET mode.
Here are a few workarounds to try:
1. Issue an ET; statement (commit) after the CREATE VOLATILE TABLE statement. If you are using ANSI mode, use COMMIT; instead of ET;. If you are unsure, try each one in turn; only one will be valid, but both do the same thing. Make sure your volatile table includes ON COMMIT PRESERVE ROWS (see the sketch after this list).
2. Try using BTET mode (a.k.a. Teradata mode) when establishing the session. I do not remember where, but there will be a setting in the ODBC configuration for this.
3. Try using a Global Temporary table. These work similarly to volatile tables, except you define them once and the definition sticks around. That is, you can create it in, say, BTEQ or SQL Assistant. The definition is common to all users and sessions (i.e. your Excel session), but the content is transient and unique to each session (like a volatile table). A sketch of this also follows the list.
4. Move the select part of your insert into the volatile table into the query that selects the data from the volatile table. See the simple example below.
5. If you do not have CREATE Global Temporary table permissions, ask your DBA.
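For point 1, a minimal sketch (assuming a Teradata/BTET-mode session):

create volatile table tmp (id integer)
on commit preserve rows;
et;  -- commit the DDL; use COMMIT; instead under ANSI mode

For point 3, someone with the privilege creates the definition once, and each session then fills and reads it privately (table and column names here are hypothetical):

create global temporary table gtt_tmp (id integer)
on commit preserve rows;

-- any later session, e.g. the one Excel opens:
insert into gtt_tmp
select customer_number
from customer
where X = Y and yr = 2019;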
Here is a simple example to illustrate point 4.
Current:
create volatile table tmp (id Integer)
ON COMMIT PRESERVE ROWS;
insert into tmp
select customer_number
from customer
where X = Y and yr = 2019
;
select a,b,c
from another_tbl A join TMP T ON
A.id = T.id
;
Becomes:
select a,b,c
from another_tbl A join (
select customer_number as id
from customer
where X = Y and yr = 2019
) AS T
ON
A.id = T.id
;
Or better yet, just join your tables directly.
Note: the first sequence (create table, insert into, and select) is a three-statement series. It will return 3 "result sets": the first two will be row counts, and the last will be the actual data. Most programs (including, I think, Excel) cannot process multiple-result-set responses. This is one of the reasons it is difficult to use Teradata macros with client tools like Excel.
The latter solution (a single select) avoids this potential problem.
The only thing I don't have an automated tool for when working with Oracle is a program that can create INSERT INTO scripts.
I don't desperately need it so I'm not going to spend money on it. I'm just wondering if there is anything out there that can be used to generate INSERT INTO scripts given an existing database without spending lots of money.
I've searched through Oracle with no luck in finding such a feature.
It exists in PL/SQL Developer, but it errors on BLOB fields.
Oracle's free SQL Developer will do this:
http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html
You just find your table, right-click on it and choose Export Data->Insert
This will give you a file with your insert statements. You can also export the data in SQL*Loader format.
You can do that in PL/SQL Developer v10.
1. Click on the table that you want to generate the script for.
2. Click Export data.
3. Check that the table you want to export data for is selected.
4. Click on the SQL Inserts tab.
5. Add a where clause if you don't need the whole table.
6. Select the file where your SQL script will be saved.
7. Click Export.
Use a SQL function (I'm the author):
https://github.com/teopost/oracle-scripts/blob/master/fn_gen_inserts.sql
Usage:
select fn_gen_inserts('select * from tablename', 'p_new_owner_name', 'p_new_table_name')
from dual;
where:
p_sql – dynamic query which will be used to export metadata rows
p_new_owner_name – owner name which will be used for generated INSERT
p_new_table_name – table name which will be used for generated INSERT
p_sql in this sample is 'select * from tablename'
You can find original source code here:
http://dbaora.com/oracle-generate-rows-as-insert-statements-from-table-view-using-plsql/
Ashish Kumar's script generates individually usable insert statements instead of a SQL block, but supports fewer datatypes.
I have been searching for a solution for this and found it today. Here is how you can do it:
1. Open Oracle SQL Developer Query Builder
2. Run the query
3. Right-click on the result set and export
Screenshot: http://i.stack.imgur.com/lJp9P.png
You might execute something like this in the database:
select "insert into targettable(field1, field2, ...) values(" || field1 || ", " || field2 || ... || ");"
from targettable;
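For character columns, the generated values must themselves be quoted and escaped. A hedged sketch (the emp table and its columns are hypothetical):

select 'insert into emp (empno, ename) values ('
       || empno || ', '
       || '''' || replace(ename, '''', '''''') || ''');'
  from emp;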
Something more sophisticated is here.
If you have an empty table, the Export method won't work. As a workaround, I used the Table view of Oracle SQL Developer and clicked on Columns, sorted by Nullable so NO was on top, and then selected the non-nullable columns using Shift+click for the range.
This allowed me to do one base insert, so that Export could then prepare a proper all-columns insert.
If you have to load a lot of data into tables on a regular basis, check out SQL Loader or external tables. Should be much faster than individual Inserts.
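For instance, a hedged sketch of an external table over a CSV file (the directory object, file name, and columns are all hypothetical):

create table customer_ext (
  id   number,
  name varchar2(100)
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('customers.csv')
);

-- then load the real table with one set-based statement
insert into customer select id, name from customer_ext;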
You can also use MyGeneration (a free tool) to write your own SQL generation scripts. There is an "insert into" script for SQL Server included in MyGeneration, which can easily be changed to run against Oracle.
I want to know the equivalent of the Teradata BTEQ "CREATE SET TABLE" statement in Snowflake SQL. I'm working on query conversion from BTEQ to Snowflake. Is there any direct syntax? If not, how can I create a SET table (one that allows only unique rows)?
Snowflake doesn't have this functionality, and I don't know any database other than Teradata that does.
You can try to emulate it, e.g. by always inserting data through a temporary staging table and then a MERGE or INSERT ... SELECT that explicitly avoids duplicates (on the loading side), or by accessing the data through a view that does SELECT DISTINCT * FROM table.
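A minimal sketch of both approaches (table and column names are hypothetical):

-- loading side: insert only those staged rows not already in the target
insert into target_table (id, val)
select distinct s.id, s.val
from staging_table s
where not exists (
  select 1
  from target_table t
  where t.id = s.id
    and t.val = s.val
);

-- reading side: hide duplicates behind a view
create or replace view target_table_dedup as
select distinct * from target_table;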
I am trying to fetch a set of records from the database part by part.
I tried to use LIMIT and FETCH, but they do not seem to work with Oracle 11g. Is there an alternative way to do this? I have tried many of the suggestions in Google results, but nothing works properly.
You can use this query to do what you want:
SELECT T.*
  FROM (SELECT A.*, ROWNUM ROWNUMBER
          FROM Table1 A
         WHERE ROWNUM <= :to_row) T
 WHERE ROWNUMBER > :from_row;
:from_row is the number to start after and :to_row is the number to stop at.
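For example, to fetch rows 11 through 20 with a stable order (ROWNUM is assigned before ORDER BY, so the ordered query needs its own inner layer; the id column is hypothetical):

SELECT T.*
  FROM (SELECT A.*, ROWNUM ROWNUMBER
          FROM (SELECT * FROM Table1 ORDER BY id) A
         WHERE ROWNUM <= 20) T
 WHERE ROWNUMBER > 10;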
A sound application is based on sound design. Check whether you are trying to meet a procedural requirement with plain SQL; if so, it is better to use PL/SQL instead. The approach, with a sketch below:
1. Create a cursor using the required SQL, without any limits.
2. Create an associative array type to hold the batch of records.
3. Declare an associative array using the type created above.
4. Open the cursor and loop over it, fetching in batches:
FETCH created_cursor BULK COLLECT INTO created_associative_array LIMIT <batch size>;
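A minimal sketch of that pattern (table and column names are hypothetical):

declare
  cursor c_data is
    select id, name
      from some_table
     order by id;

  type t_row_tab is table of c_data%rowtype index by pls_integer;
  l_rows t_row_tab;
begin
  open c_data;
  loop
    fetch c_data bulk collect into l_rows limit 100;  -- 100 rows per batch
    exit when l_rows.count = 0;

    for i in 1 .. l_rows.count
    loop
      null;  -- process l_rows(i) here
    end loop;
  end loop;
  close c_data;
end;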
Hope this helps.