We currently use Oracle PL/SQL, with Oracle Forms Web as the user interface.
The thing is that we have decided to migrate the UI from Forms to another technology (probably HTML5/Angular...).
Our system architecture is layered in a way that the batch code will remain untouched; all we have to do is access the GUI Façade from the new technology (still to be chosen). The problem is that all the data this GUI Façade provides (currently to Oracle Forms) is structured in collections like:
TYPE tp_rc_cod IS RECORD(
  -- Return code
  cd_return NUMBER(2),
  -- Name
  cd_name   some_table.name%TYPE
);
TYPE tp_table_rc_cod IS TABLE OF tp_rc_cod INDEX BY PLS_INTEGER;
So, is there any way to quickly convert the returns of our current GUI Façade from record tables to XML or JSON?
We thought about building a wrapper between the new UI and the current GUI Façade; however, the system is not small, so it could become hard to build and might have performance issues.
I already know that Oracle JDBC drivers do not support calling arguments or return values of PL/SQL RECORD or BOOLEAN types, or tables with non-scalar element types; they only support PL/SQL index-by tables of scalar element types. If that is the case, how does Oracle Forms, for instance, do it? Does it build a wrapper itself?
Any suggestions?
If your types are actual Oracle SQL types (and not package types), you can convert them to a CLOB containing the XML output with code similar to:
declare
  -- assumes tp_rc_cod / tp_table_rc_cod are schema-level SQL types
  -- (created with CREATE TYPE), not the package-level types in the question
  l_tab tp_table_rc_cod := tp_table_rc_cod();
  -- new variables
  l_rc         SYS_REFCURSOR;
  l_xmlctx     NUMBER;
  l_new_retval CLOB;
begin
  l_tab.extend(1);
  l_tab(1) := tp_rc_cod(1, 'testname');
  -- TABLE RETURN
  -- return l_tab;
  -- XML RETURN
  open l_rc for select * from table(l_tab);
  l_xmlctx := SYS.DBMS_XMLGEN.NEWCONTEXT(l_rc);
  l_new_retval := DBMS_XMLGEN.getXML(l_xmlctx);
  DBMS_XMLGEN.closeContext(l_xmlctx);
  -- return l_new_retval;
end;
/
But as you can see, it is still some effort, and there are other DBMS_XMLGEN options you would probably want to set.
I also think Oracle 12c removes the "Oracle type" requirement, but I am not sure.
I'm not sure this exactly answers your question, but I hope it helps.
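For reference, here is a minimal sketch of the schema-level types the block above assumes (the names mirror the package types from the question; the column datatype and size are illustrative, since %TYPE cannot be used in a SQL type):
CREATE TYPE tp_rc_cod AS OBJECT (
  -- Return code
  cd_return NUMBER(2),
  -- Name (%TYPE is not allowed in SQL types, so pick a concrete datatype)
  cd_name   VARCHAR2(100)
);
/
CREATE TYPE tp_table_rc_cod AS TABLE OF tp_rc_cod;
/
With the types declared at schema level, TABLE() and DBMS_XMLGEN can see them from plain SQL, which is what the conversion above relies on.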
I have a package in Oracle 10g with multiple procedures and functions.
Is there any way to track execution time for each procedure and function in the package without using dbms_utility.get_time in each one of them?
You can use the Oracle PL/SQL profiler to do this.
CREATE OR REPLACE PROCEDURE proc_a IS
BEGIN
dbms_lock.sleep(seconds => 3);
END proc_a;
/
Start and stop the profiler around your code.
DECLARE
  v_run_number BINARY_INTEGER;
BEGIN
  dbms_profiler.start_profiler(run_number => v_run_number);
  proc_a;
  dbms_profiler.stop_profiler;
END;
/
-- 3 seconds
You will get data in the plsql_profiler_% tables. A run is a single execution of the profiler; units are the various procedures, functions and packages; and data holds the run time, number of occurrences, etc. for each line of each unit.
SELECT *
FROM plsql_profiler_runs r
ORDER BY r.run_date DESC,
r.runid DESC;
-- runid = 3006
-- plsql_profiler_data
-- plsql_profiler_units
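To drill into a particular run, you can join the units and data tables; a minimal sketch, using the runid from above:
SELECT u.unit_type,
       u.unit_name,
       d.line#,
       d.total_occur,
       d.total_time   -- times are stored in nanoseconds
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data  d
    ON d.runid = u.runid
   AND d.unit_number = u.unit_number
 WHERE u.runid = 3006
 ORDER BY d.total_time DESC;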
We use Allround Automations' PL/SQL Developer (great tool, by the way), but I believe Oracle SQL Developer and Quest Toad also support viewing this data (and perhaps even handle starting and stopping the profiler for you; I know the tool we use does).
For a simple SQL-based way to see the data, I would recommend the following Metalink article. It provides a script that you run with your run ID and that generates a nice HTML report of your program(s).
Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Doc ID 243755.1)
I'm a developer on the Microsoft stack (C#, SQL Server, EF, etc.) who has inherited a WebForms app that connects to an Oracle 11g database. The app is currently laden with inline SQL statements which I'd like to convert to parameterized stored procedures. However, being accustomed to T-SQL, I'm finding the move to PL/SQL a fairly steep learning curve.
Most of the SQL statements are fairly simple statements which return filtered datasets from the base table:
select field1, field2, fieldn
from foo
where field1 = 'blah'
In T-SQL, this would be fairly straightforward:
create procedure fooproc
    @filter varchar(100)
as
begin
    select field1, field2, field3
    from foo
    where field1 = @filter
end
Unfortunately, it doesn't seem to be this straightforward in PL/SQL. Upon searching, I've found answers which include:
Use a function instead of a procedure (which leads me to wonder if procedures in SQL Server map one-to-one to procedures in Oracle)
Create a "package" for the procedure (still not quite sure what that is)
Use a cursor or for loop (which seems unholy and just wrong)
In addition, most of the examples I've found online of Oracle stored procedures return a scalar value or no value at all. I'd think this to be a fairly common task that many people want to perform, but my google-fu must not be very strong on this one. So if anyone can help me translate, I'd be appreciative.
Thanks
A SQL Server stored procedure that just returns a result set would most naturally translate into an Oracle stored function that returns a cursor. Something like:
CREATE OR REPLACE FUNCTION fooFunc( p_field1 IN foo.field1%type )
RETURN sys_refcursor
IS
l_rc sys_refcursor;
BEGIN
OPEN l_rc
FOR SELECT field1, field2, field3
FROM foo
WHERE field1 = p_field1;
RETURN l_rc;
END;
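From SQL*Plus you could then test it along these lines (a quick sketch; the table and filter value are from your example):
variable rc refcursor
exec :rc := fooFunc('blah');
print rc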
In Oracle 12.1, there is some syntactic sugar for implicit results to make conversions from SQL Server easier by allowing procedures to return ref cursors implicitly, but your question indicates that you're still on 11g, so that probably isn't an option.
You could also have a procedure that has an out parameter of type sys_refcursor. Normally, though, you should use functions for objects that merely return results and procedures for objects that modify the data.
Normally, all of your Oracle procedures and functions would be wrapped up into packages that group together bits of related functionality. If you have half a dozen functions that let you query foo using different criteria, you'd want to put all of those functions in a single package just to keep things organized.
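As a rough sketch of what that packaging might look like (the package and function names are made up for illustration):
CREATE OR REPLACE PACKAGE foo_pkg
AS
  FUNCTION get_by_field1( p_field1 IN foo.field1%type ) RETURN sys_refcursor;
END foo_pkg;
/

CREATE OR REPLACE PACKAGE BODY foo_pkg
AS
  FUNCTION get_by_field1( p_field1 IN foo.field1%type ) RETURN sys_refcursor
  IS
    l_rc sys_refcursor;
  BEGIN
    OPEN l_rc
     FOR SELECT field1, field2, field3
           FROM foo
          WHERE field1 = p_field1;
    RETURN l_rc;
  END get_by_field1;
END foo_pkg;
/
The function body is the same as above; the package just gives related query functions a common home.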
I can run the 'guts' of my stored procedure as a giant query just fine from SQL Management Studio. Furthermore, I can even right-click and 'execute' the stored procedure (y'know, run it as a stored procedure) from SQL Management Studio.
When my ASP.NET MVC app goes to run this stored procedure, I get issues:
System.Data.SqlClient.SqlException: Invalid object name '#AllActiveOrders'.
Does the impersonation account that ASP.NET runs under need special permissions? That can't be it: even when I run it locally from Visual Studio (under my login account), I also get the temp table error message.
EDIT: Furthermore, it seems to work fine when called from one ASP.NET app (which uses a WCF service / ADO.NET to call the stored procedure) but does not work from a different ASP.NET app (which calls the stored proc directly using ADO.NET).
FURTHERMORE: The MVC app that doesn't crash does pass in some parameters to the stored procedure, while the crashing app runs the stored proc with default parameters (doesn't pass any in). FWIW, when I run the stored procedure in SQL Management Studio, it's with default parameters (and it doesn't crash).
If it's of any worth, I did have to fix a 'String or binary data would be truncated' issue just prior to this situation. I went into this massive query and fixed the temp table definition (a different one) that I knew to be the problem (since I had just edited it a day or so ago). I was able to see and resolve the 'String/binary truncation' issue in SQL Management Studio, but I'm really stumped as to why I cannot see the 'Invalid object name' issue there.
Stored procedures and temp tables generally don't mix well with strongly typed implementations of database objects (ADO, DataSets, and I'm sure there are others).
If you change your #temp table to a @table variable, that should fix your issue.
(Apparently) this works in some cases:
IF 1=0 BEGIN
SET FMTONLY OFF
END
Although according to http://msdn.microsoft.com/en-us/library/ms173839.aspx, the functionality is considered deprecated.
An example of how to change from a temp table to a table variable:
create table #tempTable (id int, someVal varchar(50))
to:
declare @tempTable table (id int, someVal varchar(50))
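Inside the procedure the usage then changes very little; a quick sketch with a hypothetical source table:
declare @tempTable table (id int, someVal varchar(50))

insert into @tempTable (id, someVal)
select id, someVal
from dbo.SomeSourceTable   -- hypothetical source table
where someVal like 'A%'

select id, someVal
from @tempTable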
There are a few differences between temp tables and table variables you should consider:
What's the difference between a temp table and table variable in SQL Server?
When should I use a table variable vs temporary table in sql server?
OK, figured it out with the help of my colleague, who did some better Google-fu than I had done prior.
First, we CAN indeed make SQL Management Studio puke on my stored procedure by adding the FMTONLY option:
SET FMTONLY ON;
EXEC [dbo].[My_MassiveStackOfSubQueriesToProduceADigestDataSet]
GO
Now, on to my two competing ASP.NET applications: why did one of them work and one of them not? Under the covers, both essentially used an ADO.NET System.Data.SqlClient.SqlDataAdapter to go get the data, and each performed a .Fill(DataSet1).
However, the one that was crashing was trying to get the schema in advance of the data, instead of just deriving the schema after the fact. So it was this line of code that was killing it:
da.FillSchema(DataSet1, SchemaType.Mapped)
If you're struggling with the same issue that I've had, you may have come across forums like this one from MSDN, which are all over the internets and explain the details of what's going on quite adequately. It had just never occurred to me that when I called FillSchema I was essentially tripping over this same issue.
Now I know!!!
Following on from bkwdesign's answer, which found the problem was due to ADO.NET DataAdapter.FillSchema using SET FMTONLY ON, I had a similar problem. This is how I dealt with it:
I found the simplest solution was to short-circuit the stored proc, returning a dummy recordset that FillSchema could use. So at the top of the stored proc I added something like:
IF 1 = 0
BEGIN
    SELECT CAST(0 AS INT) AS ID,
           CAST(NULL AS VARCHAR(10)) AS SomeTextCol,
           ...;
    RETURN 0;
END;
The columns of the select statement are identical in name, data type and order to the schema of the recordset that will be returned from the stored proc when it executes normally.
The RETURN ensures that FillSchema doesn't look at the rest of the stored proc, and so avoids problems with temp tables.
I created the following code in SQL; however, I need to use it in SQLite (PhoneGap, specifically).
INSERT INTO actions(Action) VALUES ('Go to the pub');
SET @aid = LAST_INSERT_ID();
INSERT INTO statements(statement, Language) VALUES ('Have a pint', 'English');
SET @sid = LAST_INSERT_ID();
INSERT INTO Relationships(SID,AID) VALUES (@sid,@aid);
The issue we are having, however, is how to declare the variables in SQLite.
LAST_INSERT_ID() will become last_insert_rowid(); however, what is the SQLite version of SET @aid = ?
SQLite does not have variables.
In an embedded database such as SQLite, there is no separate server machine or even process, so it would not make sense to add a programming language to the DB engine when the same control flow and processing logic could be just as well done in the application itself.
Just use three separate INSERT statements.
(In WebSQL, the result object has the insertId property.)
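So, as a sketch, the SQLite version becomes three plain INSERTs, with the key capture moved into the application (e.g. reading insertId from each statement's result in WebSQL):
INSERT INTO actions(Action) VALUES ('Go to the pub');
-- application reads the generated key (insertId) here -> aid
INSERT INTO statements(statement, Language) VALUES ('Have a pint', 'English');
-- application reads the generated key (insertId) here -> sid
INSERT INTO Relationships(SID, AID) VALUES (?, ?); -- bind sid and aid from the application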
I am a .NET web developer working with a legacy Oracle database. In the past I have worked with ORM tools like NHibernate, but all database communication here is required to be done via stored procedures. Our DBA is asking us to pass a bunch of administrative info to every procedure we call, including the username/domain/IP of the end user. This data is then used to call another stored procedure that logs usage info each time a procedure is called.
I am not that well versed in Oracle or PL/SQL, and I am trying to write my .NET code in a clean way that meets best practices whenever possible. It seems to me that this process of passing extra data through to every procedure is messy and tedious on both the .NET and Oracle ends.
Does anyone know of a better way to accomplish the DBA's goal without all the overhead? Or is this a standard way of doing things that I should get used to?
I'd use a context rather than passing additional parameters to every stored procedure call. A context is a convenient place to store arbitrary session-level state data that the stored procedures can all reference.
For example, I can create a context MYAPP_CTX for my application and create a simple package that lets me set whatever values I want in the context.
SQL> create context myapp_ctx using ctx_pkg;
Context created.
SQL> create package ctx_pkg
2 as
3 procedure set_value( p_key in varchar2, p_value in varchar2 );
4 end;
5 /
Package created.
SQL> create package body ctx_pkg
2 as
3 procedure set_value( p_key in varchar2, p_value in varchar2 )
4 as
5 begin
6 dbms_session.set_context( 'MYAPP_CTX', p_key, p_value );
7 end;
8 end;
9 /
Package body created.
When the application gets a connection from the connection pool, it would simply set all the context information once.
SQL> begin
2 ctx_pkg.set_value( 'USERNAME', 'JCAVE' );
3 ctx_pkg.set_value( 'IP_ADDRESS', '192.168.17.34' );
4 end;
5 /
PL/SQL procedure successfully completed.
Subsequent calls and queries in the same session can then just ask for whatever values are stored in the context.
SQL> select sys_context( 'MYAPP_CTX', 'USERNAME' )
2 from dual;
SYS_CONTEXT('MYAPP_CTX','USERNAME')
--------------------------------------------------------------------------------
JCAVE
Realistically, you'd almost certainly want to add a clear_context procedure to the package that called dbms_session.clear_context( 'MYAPP_CTX' ) to clear whatever values had been set in the context when a connection was returned to the connection pool to avoid inadvertently allowing context information from one session to bleed over into another. You would probably also design the package with separate procedures to set and to get at least the common keys (username, ip address, etc.) rather than having 'USERNAME' hard-coded multiple places. I used a single generic set_value method just for simplicity.
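A minimal sketch of those additions, reusing the same context and package names (the getter and clear procedure below are illustrative, not part of the original example):
create or replace package ctx_pkg
as
  procedure set_value( p_key in varchar2, p_value in varchar2 );
  function  get_value( p_key in varchar2 ) return varchar2;
  procedure clear_context;
end;
/

create or replace package body ctx_pkg
as
  procedure set_value( p_key in varchar2, p_value in varchar2 )
  as
  begin
    dbms_session.set_context( 'MYAPP_CTX', p_key, p_value );
  end;

  function get_value( p_key in varchar2 ) return varchar2
  as
  begin
    return sys_context( 'MYAPP_CTX', p_key );
  end;

  procedure clear_context
  as
  begin
    dbms_session.clear_context( 'MYAPP_CTX' );
  end;
end;
/
The logging procedure can then call sys_context('MYAPP_CTX', 'USERNAME') (or ctx_pkg.get_value) instead of taking the username as a parameter, and the application calls ctx_pkg.clear_context before handing the connection back to the pool.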