MS Access CREATE PROCEDURE Or use Access Macro in .NET - asp.net

I need to be able to run a query such as
SELECT * FROM atable WHERE MyFunc(afield) = "some text"
I've written MyFunc in a VB module, but when the query is executed from .NET it fails with "Undefined function 'MyFunc' in expression."
From what I've read so far, functions in Access VB modules aren't available to .NET due to security concerns. There isn't much information on the subject, but this avenue seems like a dead end.
The other possibility is through the CREATE PROCEDURE statement which also has precious little documentation: http://msdn.microsoft.com/en-us/library/bb177892%28v=office.12%29.aspx
The following code does work and creates a query in Access:
CREATE PROCEDURE test AS SELECT * FROM atable
However, I need more than just a simple SELECT statement; I need several lines of VB code.
While experimenting with the CREATE PROCEDURE statement, I executed the following code:
CREATE PROCEDURE test AS
Which produced the error "Invalid SQL statement; expected 'DELETE', 'INSERT', 'PROCEDURE', 'SELECT', or 'UPDATE'."
This seems to indicate that there's a SQL 'PROCEDURE' statement, so then I tried
CREATE PROCEDURE TEST AS PROCEDURE
Which resulted in "Syntax error in PROCEDURE clause."
I can't find any information on a SQL 'PROCEDURE' statement; maybe I'm just reading the error message incorrectly and there's no such beast. I've spent some time experimenting with the statement but I can't get any further.
In response to the suggestions to add a field to store the value, I'll expand on my requirements:
I have two scenarios where I need this functionality.
In the first scenario, I needed to let the user search on the soundex of a field. Since there's no soundex SQL function in Access, I added a field to store the soundex value for every field in every table where the user wants to be able to search for a record that "sounds like" an entered value, and I update the soundex value whenever the parent field value changes. It's a fair bit of overhead, but I considered it necessary in this instance.
For the second scenario, I want to normalize the spacing of a space-concatenation of field values and optionally strip out user-defined characters. I can come very close to achieving the desired value with a combination of TRIM and REPLACE functions; the value would only differ if three or more spaces appeared between words in the value of one of the fields (an unlikely scenario). It's hard to justify the overhead of an extra field on every field in every table where this functionality is needed, so unless I get specific feedback from users about the issue of extra spaces, I'll stick with the TRIM & REPLACE value.
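For what it's worth, a common pure-SQL trick collapses any run of spaces exactly, using three nested REPLACE calls and a marker pair that must not occur in the data (a sketch against the question's atable/afield names):
SELECT TRIM(REPLACE(REPLACE(REPLACE(afield, ' ', '<>'), '><', ''), '<>', ' ')) AS normalized
FROM atable;
Each space becomes '<>', every adjacent pair then cancels via '><', and the one surviving '<>' turns back into a single space.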
My application is database-agnostic (or just not very religious... I support 7). I wrote a UDF for each of the other 6 databases that does the space normalization and character stripping much more efficiently than the built-in database functions. It really annoys me that I can write the UDF in Access as a VB macro and use that macro within Access, but I can't use it from .NET.
I do need to be able to index on the value, so pulling the entire column(s) into .NET and then performing my calculation won't work.

I think you are running into the ceiling of what Access can do (and trying to go beyond it). Access simply doesn't have the power to run the kind of complex T-SQL-style statements you are attempting. However, there are a couple of ways to accomplish what you are looking for.
First, if the results of MyFunc don't change often, you could create a function in a module that loops through each record in atable and runs MyFunc against it. You could either store that data in the table itself (in a new column) or build an in-memory dataset to use for whatever purpose you want.
The second way is to do the manipulation in .NET, since it seems you have the ability to do so. Run the SELECT statement and pull the data you want out of Access (without trying to run MyFunc against it). Then run whatever logic you want against the data and either use it from there or put it back into the Access database.

Why not create an additional field in your atable, where atable.afieldX = MyFunc(atable.afield)? All you need to do is run an UPDATE command once.
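A sketch of that one-off update, assuming the new column is named afieldX and the statement is run from inside Access, where MyFunc is visible:
UPDATE atable SET afieldX = MyFunc(afield);
From then on, queries (including those issued from .NET) can filter and index on afieldX directly.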

You should try writing MyFunc as a SQL Server function. That way you will be able to run the same query in SQL Server and in Access.
A few useful links so you can get started:
MSDN article about user-defined functions: http://msdn.microsoft.com/en-us/magazine/cc164062.aspx
SQL Server user-defined functions: http://www.sqlteam.com/article/intro-to-user-defined-functions-updated
SQL Server string functions: http://msdn.microsoft.com/en-us/library/ms181984.aspx
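A minimal T-SQL sketch of such a scalar function; the body here is only a placeholder, and the real logic would mirror your VB MyFunc:
CREATE FUNCTION dbo.MyFunc (@input NVARCHAR(255))
RETURNS NVARCHAR(255)
AS
BEGIN
    -- placeholder logic: trim surrounding whitespace
    RETURN LTRIM(RTRIM(@input));
END
The query then becomes SELECT * FROM atable WHERE dbo.MyFunc(afield) = 'some text'.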

What version of JET (now called ACE) are you using?
I mean, it should come as no surprise that if you're going to use some Access VBA code, then you need the VBA library and a copy of MS Access loaded and running.
However, in Access 2010 we now have table triggers and stored procedures. These stored procedures do NOT require VBA and in fact run at the engine level. I have a table trigger and soundex routine here that shows how this works:
http://www.kallal.ca/searchw/WebSoundex.htm
The above means that if Access, VB.NET, or even FoxPro via ODBC modifies a row, the table trigger code will fire and save the soundex value in a column for you. This feature also works if you use the new web publishing feature in Access 2010. So, while the above article is written from the point of view of using Access web services (available in Office 365 and SharePoint), the soundex table trigger will also work in a standalone Access and JET (ACE) only application.

Related

How to implement INSERT where not exists for ORACLE in Mule4

I am trying to implement a use case in Mule4 where a tour needs to be assigned to a user if it has not already been assigned.
I was hoping I could implement it using the Mule db:insert component and an INSERT WHERE NOT EXISTS SQL script, as below.
INSERT INTO TL_MAPPING_TOUR (TOURNO, TLID, SYSTEM)
SELECT :tourno, :tlid, :system FROM DUAL
WHERE NOT EXISTS (SELECT * FROM TL_MAPPING_TOUR
                  WHERE TOURNO = :tourno AND TLID = :tlid AND SYSTEM = :system)
However, this is resulting in Mule Exception
Message : ORA-01722: invalid number
Error type : DB:BAD_SQL_SYNTAX
TL_MAPPING_TOUR table has an id column (Primary Key), but that is auto-generated by a sequence.
The same script, modified for running directly in SQL developer, as shown below, is working fine.
INSERT into TL_MAPPING_TOUR(TOURNO,TLID,SYSTEM)
select 'CLLO001474','123456789','AS400'
from DUAL
where not exists(select * from TL_MAPPING_TOUR where (TOURNO='CLLO001474' and TLID='123456789' and SYSTEM='AS400'));
Clearly the Mule db:insert component doesn't like the syntax, but it's not clear to me what is wrong here. I can't find any INSERT WHERE NOT EXISTS example implementation for the Mule4 Database component either.
The stackoverflow page https://stackoverflow.com/questions/54910330/insert-record-into-sql-server-when-it-does-not-already-exist-using-mule leads to a page not found.
Any idea what is wrong here, and how to implement this in Mule4 without putting another db:select component before db:insert?
I don't know Mule4, but this:
Message : ORA-01722: invalid number
doesn't mean that the syntax is wrong (as you already tested it: the same statement works OK in another tool).
Cause: You executed a SQL statement that tried to convert a string to a number, but it was unsuccessful.
Resolution:
The option(s) to resolve this Oracle error are:
Option #1: Only numeric fields or character fields that contain numeric values can be used in arithmetic operations. Make sure that all expressions evaluate to numbers.
Option #2: If you are adding to or subtracting from dates, make sure that you added/subtracted a numeric value from the date.
In other words, it seems that one of the columns is declared as NUMBER, while you passed something that is a string. Oracle performed an implicit conversion when you tested the statement in SQL Developer, but it seems that Mule4 didn't, and hence the error.
The most obvious cause (based on what you posted) is putting '123456789' into TLID, as the other values are obviously strings. Therefore, pass 123456789 (a number, with no single quotes around it) and see what happens. It should work.
SQL Developer is too forgiving. It will convert strings to numbers and vice versa automatically when it can. And it can do a lot.
The Mulesoft DB connector tries the same, but it is not as successful as native tools. Quite often it fails to convert, especially on dates, but that is not your case.
In short: do not put too much trust in Mulesoft's data sense. If it works, great! Otherwise, try to eliminate any intelligence from it and do all conversions in the query, preferably from strings. Usually a plain number works fine, but if it doesn't, use the TO_NUMBER function to state explicitly that the value is a number.
More about this here: https://simpleflatservice.com/mule4/AvoidCoversionsOrMakeThemNative.html
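Putting that advice into the question's statement, a sketch with the conversion made explicit (this assumes TLID is the NUMBER column; adjust to whichever column is actually numeric):
INSERT INTO TL_MAPPING_TOUR (TOURNO, TLID, SYSTEM)
SELECT :tourno, TO_NUMBER(:tlid), :system FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM TL_MAPPING_TOUR
                  WHERE TOURNO = :tourno
                    AND TLID = TO_NUMBER(:tlid)
                    AND SYSTEM = :system)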

CustTableListPage filtering is too slow

When I try to filter on the CustAccount field on CustTableListPage, it takes too long. On the other fields there is no latency. I'm filtering on just part of the account number, like "*123".
I have rebuilt the indexes for CustTable and also updated statistics, but there is no appreciable difference at all.
When I add the list page's query to a view, it filters the CustAccount field normally, like the other fields.
Any suggestion?
Edit:
Our version is AX 2012 R2 CU8. It is not a user-specific problem; it occurs for every user. The interaction class has some customizations, but just for setting some buttons' enabled/disabled properties, etc. I tried to look at the query execution; what I found is not clear, something like FETCH_API_CURSOR_000000..x
Record a trace of this execution and locate the bottleneck.
Keep in mind that wildcards (such as *) have to be used with care. Using a filter string that starts with a wildcard kills performance because the SQL indexes cannot be used.
Using a wildcard at the end
Imagine that you have a dictionary and have to list all the words starting with 'Foo'. You can skip all entries before 'F', then all those before 'Fo', then all those before 'Foo', and start your result list from there.
Similarly, asking the underlying SQL engine to list all CustAccount entries starting with '123' (= filter string '123*') allows using an index on CustAccount to quickly skip to the relevant data.
Using a wildcard at the start
Imagine that you still have that dictionary and have to list all the words ending with 'ing'. You would have no choice but to go through the entire dictionary and check the ending of every word (due to the alphabetical sorting).
This explains why asking the SQL engine to list all CustAccount entries ending with '123' (= filter string '*123') means that all CustAccount values must be examined. So the AOS loops through all the entries and uses a SQL cursor to do it. That is the FETCH_API_CURSOR statement you see at the SQL level.
Possible solutions
Educate your end users that using a wildcard at the beginning of a filter string will always be slow on a large table.
Step up the SQL server hardware / allocated resources (faster CPU, more RAM, faster disk, ...).
Create a full text index on CustAccount (not a fan of this one and performance impact should be thoroughly investigated).
I've solved the problem. The CustTableListPage query had a sort on the DirPartyTable.Name field. When I removed this sorting, filtering with a wildcard worked like a charm.

Is there a way to query Oracle DB server name and use in conditional compilation?

I got bitten maintaining code packages that run on two different Oracle 11g2 systems when a line of code that needed to be changed slipped by me. We develop on one system with a specific data set and then test on another system with a different data set.
The differences aren't tremendous, but they include needing to change a single field name in two different queries in two different packages for the packages to run. On one system we use one field; on the other system, a different one. The databases have the same schema name, object names, and field names, but the hosting systems' server names are different.
The change is literally as simple as
INSERT INTO PERSON_HISTORY
( RECORD_NUMBER,
UNIQUE_ID,
SERVICE_INDEX,
[... 140 more fields... ]
)
SELECT LOD.ID RECORD_NUMBER ,
-- for Mgt System, use MD5 instead of FAKE_SSN
-- Uncomment below, and comment out Dev system statement
-- MD5 UNIQUE_ID ,
-- for DEV system, use below
'00000000000000000000' || LOD.FAKE_SSN UNIQUE_ID ,
null SERVICE_INDEX ,
[... 140 more fields... ]
FROM LEGACY_DATE LOD
WHERE (conditions follow)
;
I missed one of the field name changes in one of the queries, and our multi-day run is crap.
For stupid reasons I won't go into, I wind up maintaining all of the code, including having to translate and reprocess developer changes manually between versions, then transfer and update the required changes between systems.
I'm trying to reduce the repetitive input I have to provide to swap out code -- I want to automate this step so I don't overlook it again.
I wanted to implement conditional compilation, pulling the name of the database system from Oracle and having the single line swap automatically. But Oracle conditional compilation requires a static package constant (a boolean in this case), and I can't use the sys_context function to populate that value; Oracle won't let me pull data from sys_context, evaluate it conditionally, and assign the result to a constant. DB_DOMAIN, DB_NAME, or SERVER_HOST might work to differentiate the systems, but I can't find a way to USE the information.
An option is to create a global constant that I set manually when I move the code to the other system, but at this point, I have so many steps to do for a transfer that I'm worried that I'd even screw that up. I would like to make this independent of other packages or my own processes.
Is there a good way to do this?
-------- edit
I will try the procedure and try to figure out the view over the weekend. Ultimately, the project will be turned over to a customer who expects to "just run it", so they won't understand what any switches are meant to do, or why I have "special" code in a package. And, they won't need to... I don't even know if they'll look at the comments.
Thank you
As Mat says in the comments, for this specific example you can solve it with a view; however, there are other ways for more complex situations.
If you're compiling from a filesystem or using any automated system, you can create a separate PL/SQL block/procedure which you execute in the same session prior to compilation. I'd do something like this:
declare
  l_db varchar2(30) := sys_context('userenv', 'instance_name');
begin
  -- set the ccflag only on the instance that needs the alternate code path
  if l_db = 'MY_DB' then
    execute immediate 'alter session set plsql_ccflags = ''my_db:true''';
  end if;
end;
/
One important point: conditional compilation does not involve a "package static constant" but a session-level setting. So you need to ensure that your compilation flags are identical/unique across packages/sessions.
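A minimal sketch of how the flag could then drive the single-line swap from the question (a hypothetical procedure, truncated to three columns; the flag name my_db matches the block above, and an unset flag evaluates as false):
create or replace procedure load_person_history is
begin
$if $$my_db $then
  -- Mgt system: use MD5
  insert into person_history (record_number, unique_id, service_index)
  select lod.id, md5, null from legacy_date lod;
$else
  -- Dev system: use FAKE_SSN
  insert into person_history (record_number, unique_id, service_index)
  select lod.id, '00000000000000000000' || lod.fake_ssn, null from legacy_date lod;
$end
end;
/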

What is Better for Mimicking PL/SQL Returning SQL in Interactive Reports: Collection or Pipelined-Function

The worst aspect of the Interactive Report (IR) is that you cannot create one from a PL/SQL function returning a SQL statement. I have gotten around this using two methods:
1) APEX_COLLECTION.CREATE_COLLECTION in the Before Header Process, which takes a SQL statement (that is constructed in PL/SQL in the process), and have the IR's source be select c001 alias1, c002 alias2 ... from apex_collections a where collection_name = '...'
2) Make a badass pipeline function with a parameter list as long as you need and then have the IR's source be select * from table(package_name.pipelined_function_name(:P1_parameter1, :P1_Parameter2))
Is there a performance difference? I originally used the first method, but then ran into a case where it gave me a bug, so I tried the pipelined function. I found I just liked it better and have tended to use pipelined functions ever since, unless it was inappropriate to do so (namely when there is a large number of items to pass as parameters).
The first method gives you the opportunity to cache data by re-creating the collection only when you need it. Using the n00X and d00X columns will give you some additional performance and the right column types for the report definition. You can also create a view based on that collection, with type casting and column aliases, to add more convenience:
create or replace view apx_my_report
as
select n001 id, c001 data, d001 some_date
from apex_collections
where collection_name = 'MY_REPORT'
/
In that case, your report source will look like this:
select id, data, some_date from apx_my_report
/
On the other hand, when you need to execute an ad-hoc query every time the page is rendered, it leads to the unavoidable re-creation of the collection, so performance goes down because of the unwanted transaction overhead: undo, redo, etc.
So, it depends.
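For reference, a minimal sketch of the pipelined-function approach from method 2 (the type, table, and item names here are illustrative):
create or replace type t_report_row as object (id number, data varchar2(400));
/
create or replace type t_report_tab as table of t_report_row;
/
create or replace function get_report (p_filter in varchar2)
  return t_report_tab pipelined
is
begin
  -- pipe each matching row back to the caller as it is produced
  for r in (select id, data from some_table where data like p_filter || '%') loop
    pipe row (t_report_row(r.id, r.data));
  end loop;
  return;
end;
/
The IR source would then be select * from table(get_report(:P1_FILTER)).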

System level trigger on DML Command in plsql

Suppose there are n tables in the database. Any insert, update, or delete that happens on any table in the database has to be captured in a table called "Audit_Trail", which has the columns below.
Server_Name
AT_date
AT_time
Table_name
Column_name
Action
Old_value
New_Value
The server, table, column, date, and time need to be captured. The "Action" column tracks whether the action was an insert, update, or delete, and we have to capture the old value and the new value as well.
So what is the best way to do this? Can we create a database-level trigger that fires on any insert, update, or delete?
The best way would be to use Oracle's own auditing functionality.
AUDIT ALL ON DEFAULT BY ACCESS;
http://docs.oracle.com/cd/E11882_01/network.112/e36292/auditing.htm#DBSEG392
In response to comment ...
There is nothing unusual in wanting to audit every change made to tables in the database; hence there is already functionality provided in the system for doing exactly that. It is better than using triggers because it cannot be bypassed as easily. However, if you want to use this pre-supplied, robust, simple-to-use functionality, you might have to compromise a little on your specific requirements, but the payoff will be a superior solution that uses code and configuration in common with thousands of other Oracle systems.
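If you do decide to roll your own with triggers instead, note that each audited table needs its own row-level trigger and each column has to be handled explicitly. A minimal sketch for one hypothetical table (emp) and one column (ename), writing to the Audit_Trail layout from the question:
create or replace trigger trg_audit_emp
after insert or update or delete on emp
for each row
declare
  v_action varchar2(6);
begin
  v_action := case when inserting then 'INSERT'
                   when updating  then 'UPDATE'
                   else 'DELETE' end;
  insert into audit_trail
    (server_name, at_date, at_time, table_name, column_name,
     action, old_value, new_value)
  values
    (sys_context('userenv', 'server_host'), trunc(sysdate),
     to_char(sysdate, 'hh24:mi:ss'), 'EMP', 'ENAME',
     v_action, :old.ename, :new.ename);
end;
/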
