Create a view with the identifier as a variable in U-SQL

I would like to create a view using U-SQL. The name of the view can only be defined at compile time using a DECLARE statement.
Visual Studio throws a syntax error when I try to use the variable in the CREATE VIEW statement.
Is there a workaround for this?

U-SQL does not support dynamic U-SQL at this point. If you feel this is an important missing feature, you could post a feature request here:
https://feedback.azure.com/forums/327234-data-lake
It looks like there is already a similar request you can vote for here.
As a workaround, you could generate the U-SQL dynamically and then run the script separately, either manually or using one of the SDKs, e.g. PowerShell or .NET. A simple example:
// Dynamic U-SQL
DECLARE @viewName string = "dbo.vw_yourViewName";
// Create dynamic U-SQL
@usql =
    SELECT *
    FROM ( VALUES
        ( "USE DATABASE yourDatabase;" ),
        ( String.Format("DROP VIEW IF EXISTS {0};", @viewName) ),
        ( String.Format("CREATE VIEW IF NOT EXISTS {0} AS EXTRACT col1 int, col2 string, col3 string, col4 string, col5 string FROM \"/input/input44.txt\" USING Extractors.Csv();", @viewName) )
    ) AS x (usql);
// Output the statements in the correct sort order
OUTPUT @usql
TO "/output/dynamic.usql"
USING Outputters.Text(delimiter:' ', quoting:false);
Other examples of dynamic U-SQL (or more precisely, U-SQL generated dynamically) are here and here.

Related

How to Pass Parameter Values at Runtime to an Informatica Mapping Parameter

I have a scenario where we need to load data from a source file to a target table starting from a particular date [like LOAD_DATE], so I'll create a mapping parameter for LOAD_DATE and pass that in the Source Qualifier query. My query looks like this:
SELECT * FROM my_TABLE where DATE >= '$$LOAD_DATE'
So here I need to pass parameter values for '$$LOAD_DATE' from another external database. I know that I need to pass the values from the parameter file.
But my requirement is not to hardcode the values in the parameter file but to feed them in at runtime from another database. I will appreciate your help and thoughts on this.
You don't have to hardcode it.
You can do it like this:
Option 1: Create a mapping that generates the param file in the required format.
Read from the other DB.
In an expression transformation, create the port below, which generates the actual param string. Please note we need to add a new line so it is recognized as an actual param file.
out_str = '[<<name of folder . name of workflow or session>>]' || CHR(10) ||
'$$LOAD_DATE=' || CHR(39) || <<date value from another DB>> || CHR(39)
Then link the above port to a flat file target. Name the output file session_param.txt or whatever is suitable. Please make sure the parameter is generated correctly; an example of the expected file content is shown below.
Use the above file as the parameter file in your actual workflow.
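For illustration, the generated file would contain something like the following (the folder, workflow, and session names are placeholders, and the date value is an invented example, not from the original answer):
[MyFolder.WF:wf_load_target.ST:s_m_load_target]
$$LOAD_DATE='2020-01-01'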
Option 2: You can join the other table with the original table flow. This can be difficult and needs changes to the existing mapping.
Join the other table from the other DB with the main table based on a dummy condition. Make sure you get distinct values of LOAD_DATE from the other table, and make sure you always get exactly one value from that DB.
Once you have the LOAD_DATE field from the other table, you can use it in a filter transformation to filter the data.
After this point you can add your original mapping logic.
The whole mapping should look like this:
SQ_MAIN_TABLE ----------------------->|
sq_ANOTHER_TABLE --DISTINCT_LOAD_DT-->JNR--FIL on LOAD_DT --><<your mapping logic>>

Using an expression as a table name in SQLite

I am trying to check whether a table exists before sending a SELECT query to that table.
The table name ends with a two-letter language code. When I build the full table name from the user's language, I don't know whether that language is actually supported by my database, i.e. whether the table for that language really exists.
SELECT name FROM sqlite_master WHERE name = 'mytable_zz' OR name = 'mytable_en' ORDER BY ( name = 'mytable_zz' ) DESC LIMIT 1;
and then
SELECT * FROM table_name_returned_by_first_query;
I could have a first query to check the existence of the table, like the one above, which returns mytable_zz if that table exists or mytable_en if it doesn't, and then make a second query using the result of the first as the table name.
But I would rather have it all in one single query that returns the expected results from either the user's language table or the English one if their language is not supported, without throwing a "table mytable_zz doesn't exist" error.
Does anyone know how I could handle this?
Is there a way to use the result of the first query as a table name in the second?
Edit: I don't have control over the database itself, which is generated automatically, and I don't want to get involved in a complex process of manually updating every new database that I get. Plus this query is called multiple times, and having to retrieve the result of a first query before launching a second one takes too long. I use plain text queries that I send through a SQLite wrapper. I guess the simplest approach would be to check once in my program whether the user's language is supported, store a string with either the user's language code or "en" if it is not supported, and use that string to compose my table name(s). I am going to pick that solution unless someone has a better idea.
Here is a simple MRE :
CREATE TABLE IF NOT EXISTS `lng_en` ( key TEXT, value TEXT );
CREATE TABLE IF NOT EXISTS `lng_fr` ( key TEXT, value TEXT );
INSERT INTO `lng_en` ( key , value ) VALUES ( 'question1', 'What is your name ?');
INSERT INTO `lng_fr` ( key , value ) VALUES ( 'question1', 'Quel est votre nom ?');
SELECT `value` FROM lng_%s WHERE `key` = 'question1';
where %s is to be replaced by the two-letter language code. This example works if the provided code is 'en' or 'fr', but throws an error if the code is 'zh'; in that case I would like the same result returned as with 'en'.
Not in SQL, without executing it dynamically. But if it is your front end that is running this SQL, then it doesn't matter so much: because your table name came out of the DB, there isn't really any opportunity for SQL injection with it:
var tabName = db.ExecuteScalar("SELECT name FROM sqlite_master WHERE name = 'mytable_zz' OR name = 'mytable_en' ORDER BY ( name = 'mytable_zz' ) DESC LIMIT 1;");
var results = db.ExecuteQuery("SELECT * FROM " + tabName);
Yunnosch's comment is quite pertinent; you're essentially storing in a table name information that really should be in a column. You could consider making a single table and then a bunch of views like mytable_zz, each defined as SELECT * FROM mytable WHERE lang = 'zz' etc., and adding INSTEAD OF triggers if you want to cater for a legacy app that you cannot change; the legacy app would select from / insert into the views thinking they are tables, but in reality your data is in a single table and easier to manage.
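As a concrete sketch of the fallback approach described in the question's edit, assuming the Microsoft.Data.Sqlite ADO.NET provider and the lng_xx naming from the MRE (the method and connection handling are illustrative, not from the original post):
using Microsoft.Data.Sqlite;

// Resolve the language table once at startup, falling back to English
// when the table for the user's language does not exist in this database.
static string ResolveLanguageTable(SqliteConnection conn, string userLang)
{
    using var cmd = conn.CreateCommand();
    cmd.CommandText =
        "SELECT name FROM sqlite_master WHERE type = 'table' AND name = $name;";
    cmd.Parameters.AddWithValue("$name", "lng_" + userLang);
    // ExecuteScalar returns null when no such table exists.
    return cmd.ExecuteScalar() != null ? "lng_" + userLang : "lng_en";
}
The resolved name can then be concatenated into later queries; since it is taken from the tables that actually exist in sqlite_master, the usual injection concern does not apply.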

Need to get data from a table using database link where database name is dynamic

I am working on a system where I need to create a view. I have two databases:
1. CDR_DB
2. EMS_DB
I want to create the view on EMS_DB using a table from CDR_DB. I am trying to do this via a dblink.
The dblink is created at runtime, i.e. the DB name is decided at the time the user installs the database, and the dblink name is derived from that DB name.
My issue is that I am trying to create a query like the one below to create a view over a table whose name is decided at run time. Please see the query below:
select count(*)
from (SELECT CONCAT('cdr_log#', alias) db_name
FROM ems_dbs a,
cdr_manager b
WHERE a.db_type = 'CDR'
and a.ems_db_id = b.cdr_db_id
and b.op_state = 4 ) db_name;
In this query, cdr_log#<db_name> is the runtime table name (db_name gets created at runtime).
When I try to run the above query, I don't get the desired result; the result of the above query is '1'.
When running only the sub-query from the above query:
SELECT CONCAT('cdr_log#', alias) db_name
FROM ems_dbs a,
cdr_manager b
WHERE a.db_type = 'CDR'
and a.ems_db_id = b.cdr_db_id
and b.op_state = 4;
I get the desired result, i.e. cdr_log#cdrdb01,
but when I try to run the full query, I get '1'.
Also, when I run
select count(*) from cdr_log#cdrdb01;
I'm getting the result as '24' which is correct.
The expected result is that I should get the same output as from the query:
select count(*) from cdr_log#cdrdb01;
---24
But the actual result of the full query mentioned initially is '1'.
Please let me know a way to solve the above problem. I found a way to do it via a procedure, but I'm not sure how I can invoke that procedure.
Can this be done as part of a sub-query like the one I have used above?
Your current query just counts the rows returned by the subquery (one row containing the string 'cdr_log#cdrdb01'); it never queries the remote table that the string names. You're not going to be able to create a view that dynamically references an object over a database link unless you do something like create a pipelined table function that builds the SQL dynamically.
If the database link is created and named dynamically at installation time, it would probably make the most sense to create any objects that depend on the database link (such as the view) at installation time too. Dynamic SQL tends to be much harder to write, maintain, and debug than static SQL, so it makes sense to minimize the amount of dynamic SQL you need. If you can dynamically create the view at installation time, that's likely the easiest option. Even better than directly referencing the remote object in the view, particularly if there are multiple objects that need to reference the remote object, would be to have the view reference a synonym and create the synonym at install time. Something like
create synonym cdr_log_remote
  for cdr_log#<<dblink name>>;
create or replace view view_name
as
select *
  from cdr_log_remote;
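If the installation is driven from .NET, a minimal sketch of issuing that DDL at install time with ODP.NET might look like the following (the connection handling, link name, and object names are assumptions for illustration, not part of the original answer):
using Oracle.ManagedDataAccess.Client;

// Run once at installation time, after the database link name is known.
static void CreateRemoteViewObjects(string connectionString, string dbLinkName)
{
    using var conn = new OracleConnection(connectionString);
    conn.Open();

    // Identifiers cannot be bound, so the link name is concatenated here;
    // it should come from trusted installer input, never from end users.
    string[] ddl =
    {
        "create or replace synonym cdr_log_remote for cdr_log#" + dbLinkName,
        "create or replace view view_name as select * from cdr_log_remote"
    };

    foreach (var statement in ddl)
    {
        using var cmd = new OracleCommand(statement, conn);
        cmd.ExecuteNonQuery();
    }
}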
If you don't want to create the synonym/view at installation time, you'd need to use dynamic SQL to reference the remote object. You can't use dynamic SQL as the SELECT statement in a view, so you'd need to do something like have the view reference a pipelined table function that uses dynamic SQL to query the remote object. That's a fair amount of work, but it would look something like this:
-- Define an object that has the same set of columns as the remote object
create type typ_cdr_log as object (
col1 number,
col2 varchar2(100)
);
create type tbl_cdr_log as table of typ_cdr_log;
create or replace function getAllCDRLog
return tbl_cdr_log
pipelined
is
l_rows tbl_cdr_log;   -- collection type, required for BULK COLLECT
l_sql varchar2(1000);
l_dblink_name varchar2(100);
begin
SELECT alias db_name
INTO l_dblink_name
FROM ems_dbs a,
cdr_manager b
WHERE a.db_type = 'CDR'
and a.ems_db_id = b.cdr_db_id
and b.op_state = 4;
l_sql := 'SELECT col1, col2 FROM cdr_log#' || l_dblink_name;
execute immediate l_sql
bulk collect into l_rows;
for i in 1 .. l_rows.count
loop
pipe row( l_rows(i) );
end loop;
return;
end;
create or replace view view_name
as
select *
from table( getAllCDRLog );
Note that this will not be a particularly efficient way to structure things if there are a large number of rows in the remote table since it reads all the rows into memory before starting to return them back to the caller. There are plenty of ways to make the pipelined table function more efficient but they'll tend to make the code more complicated.

Oracle SQL UPDATE passed as a parameter string (into a stored procedure) from .NET

I would like to know how to accomplish this task. I've looked at CASE, DECODE, and IF conditions and I'm not able to make them work. My goal is to pass a block of predefined column/value pairs constructed from ASP.NET data to my Oracle stored procedure. I am trying to update only certain columns out of many, to preserve the columns not needing updates. So here's my setup:
Stored procedure:
UpdateSelectedColumns(myValuePairString, updatedBy)
-- Passed variable from ASP.NET, myValuePairString = 'col1 = 10,col2 = 'Dog''
-- update statement final
UPDATE MyTable
SET
col1 = 10,
col2 = 'Dog',
col3 = 'john';
COMMIT;
Thank you in advance...
Ricky
For once I'm going to advise not using a stored proc: there is no point here in using a stored procedure.
As it is, your stored procedure would blindly accept its arguments and execute the update without adding any value. Furthermore, by using this procedure, you preclude the use of bind variables and expose yourself to bugs (whenever you encounter a value containing a quote '), a performance hit, and a SQL injection vulnerability.
The advantages of PL/SQL (simple transparent binding, transparent use and reuse of cursors, strict static SQL parsing, and metadata dependency) are all lost if you take an arbitrary string as an argument and put it in a dynamic cursor.
You'll be better off using your language's native cursors with bind variables, as in the sketch below.
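For example, a minimal client-side sketch with ODP.NET bind variables (the table and column names are taken from the question; the WHERE clause, the placeholder connection string, and the managed driver are assumptions):
using Oracle.ManagedDataAccess.Client;

// Placeholder connection string; replace with your real data source.
var connectionString = "User Id=scott;Password=tiger;Data Source=orcl";

using var conn = new OracleConnection(connectionString);
conn.Open();

using var cmd = conn.CreateCommand();
cmd.BindByName = true;
// Only the columns that actually changed are listed; the values are bound,
// so quotes inside the data and SQL injection are no longer a concern.
cmd.CommandText =
    "UPDATE MyTable SET col1 = :p_col1, col2 = :p_col2, col3 = :p_updated_by " +
    "WHERE id = :p_id";
cmd.Parameters.Add("p_col1", OracleDbType.Int32).Value = 10;
cmd.Parameters.Add("p_col2", OracleDbType.Varchar2).Value = "Dog";
cmd.Parameters.Add("p_updated_by", OracleDbType.Varchar2).Value = "john";
cmd.Parameters.Add("p_id", OracleDbType.Int32).Value = 1;
cmd.ExecuteNonQuery();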
If you really want to use PL/SQL, replace your single argument with a couple of PL/SQL tables (collections): one for the column names, one for the values. You could then use DBMS_SQL to parse the statement and use appropriate bind variables. You'll need some convention to be able to pass date, number, and character values, and you'll need to read metadata from the database to check the datatypes. This would be a lot of code for very little value.

ASP.NET DataSet from Oracle Stored Procedure

I have read so many questions and articles from this web site, but I am getting tired of looking for what I want to do.
In SQL Server, I used to call procedures like "EXEC Some_Procedure_name arg1, 'arg2', arg3, 'arg4'".
When input parameters are numeric, I wouldn't use single quotation marks.
But in Oracle, do I really need to write something using explicit input and output parameters?
Let's say the procedure is the one below:
CREATE OR REPLACE PROCEDURE GET_JOB
(
p_JOB_ID IN varchar2,
outCursor OUT MYGEN.sqlcur
)
IS
BEGIN
OPEN outCursor FOR
SELECT *
FROM JOB
WHERE JOB_ID = p_JOB_ID;
END GET_JOB;
/
Then I must specify the input parameter's name in my C# code like below:
var userNameParameter = command.Parameters.Add("p_JOB_ID", Job_ID);
userNameParameter.Direction = ParameterDirection.Input;
Can't I just call it like "Execute GET_JOB 'j208';"?
To return datasets from a stored procedure in Oracle, you need to use a "REF CURSOR".
This is explained in detail, with code examples for .NET, here:
http://www.oracle.com/technetwork/articles/dotnet/williams-refcursors-092375.html
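For completeness, a short sketch of calling the GET_JOB procedure above with ODP.NET and filling a DataSet (the placeholder connection string and the managed-driver namespace are assumptions; adjust for your setup):
using System.Data;
using Oracle.ManagedDataAccess.Client;

// Placeholder connection string; replace with your real data source.
var connectionString = "User Id=scott;Password=tiger;Data Source=orcl";

using var conn = new OracleConnection(connectionString);
using var cmd = new OracleCommand("GET_JOB", conn)
{
    CommandType = CommandType.StoredProcedure
};
cmd.Parameters.Add("p_JOB_ID", OracleDbType.Varchar2).Value = "j208";
cmd.Parameters.Add("outCursor", OracleDbType.RefCursor).Direction = ParameterDirection.Output;

// OracleDataAdapter opens the connection if needed and maps the ref cursor
// result set into a DataTable inside the DataSet.
var ds = new DataSet();
using var adapter = new OracleDataAdapter(cmd);
adapter.Fill(ds);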
