I have a requirement to update the BUSINESS_UNIT field in the voucher staging table (VCHR_LINE_STG). I'm writing the SQL in a SQL action in an Application Engine program. After the update statement runs, the results are returning null values.
The whole process does not run. How can we handle the null values?
Will PeopleCode work for this scenario? If so, what code will handle the null values?
Table name: VCHR_LINE_STG,
Field name: BUSINESS_UNIT.
Thanks
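A minimal sketch of one way such an update could guard against nulls in an App Engine SQL action is below. The literal value ('SHARE'), the %Table meta-SQL usage, and the note about blank character fields are assumptions for illustration, not taken from the original post.
UPDATE %Table(VCHR_LINE_STG)
   SET BUSINESS_UNIT = 'SHARE'   -- placeholder value; substitute the real source of the business unit
 WHERE BUSINESS_UNIT IS NULL
    OR BUSINESS_UNIT = ' ';      -- PeopleSoft character fields are often stored as a single space rather than NULL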
In my Oracle APEX 19.3 application I have a SQL statement that needs to be used on several pages and that changes slightly based on the user who is logged in. So that I do not need to duplicate this code over and over on each page, I generate this statement as an application item called QUERY_BASED_ON_USER.
An application computation then statically sets it to SELECT j.* FROM table(pkg_jobstatus.report()) j WHERE j.id IN (:USERIDS)
(USERIDS is a separate application item)
I wish to use the application item QUERY_BASED_ON_USER as the SQL statement for a table. When setting the data source to PL/SQL and using the following code,
BEGIN
return :QUERY_BASED_ON_USER;
END;
I get this error: PL/SQL function body did not return a value.
I tried debugging this by setting a static page region to &QUERY_BASED_ON_USER. and it outputs the query correctly.
My assumption is that the code editor does not evaluate the application computation and thus it returns an empty string, which it then refuses to validate or save. But I do not know how to validate this or how to work around this.
How can I use the application item as the sql statement?
You need to set "Use Generic Column Names" to true and specify the number of columns your query will return.
The query is then not parsed until runtime, when the item value is available.
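For reference, with that setting enabled the function body from the question should work unchanged; this is a restatement of the code above, not a new API.
BEGIN
    -- Parsed at runtime once "Use Generic Column Names" is enabled,
    -- so the application item value is available by then.
    RETURN :QUERY_BASED_ON_USER;
END;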
I'm new to PolyBase. I have linked my SQL Server 2019 instance to a third party's Azure Cosmos DB and I am able to query data out of my collection. I get an error when I try to query date fields, though. In the documents the dates are defined as:
"created" : {
"$date" : 1579540834768
},
In my external table I have the column defined as:
[created] DATE,
I have tried to create the column as INT and NVARCHAR(128), but schema detection rejects it each time. (I have also tried to create a field created_date, but schema detection likewise rejects that.)
When I try a query that returns any of the date fields I get this error:
Msg 105082, Level 16, State 1, Line 8
105082;Generic ODBC error: [Microsoft][Support] (40460) Fractional data truncated while performing conversion. .
OLE DB provider "MSOLEDBSQL" for linked server "(null)" returned message "Unspecified error".
Msg 7421, Level 16, State 2, Line 8
Cannot fetch the rowset from OLE DB provider "MSOLEDBSQL" for linked server "(null)". .
This still happens if I try to exclude null values in my query, even when filtering to specific records where the date is populated (validated using the Azure portal interface).
Is there something I should be doing to handle the integer date from the JSON records, or another type I can use to get my external table to work?
I found a solution. SQL Server recommends the wrong type for MongoDB dates in the schema. Using DATETIME2 resolved the issue. I found this on a PolyBase type mapping page on MSDN.
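For illustration, a minimal sketch of how the column definition might look with that type; the table name, the other column, the location, and the data source name are placeholders, not from the original post.
CREATE EXTERNAL TABLE [dbo].[MyCollection] (
    [_id]     NVARCHAR(24),
    [created] DATETIME2(3)    -- MongoDB $date values map to DATETIME2 rather than DATE
)
WITH (
    LOCATION    = 'MyDatabase.MyCollection',   -- placeholder database.collection
    DATA_SOURCE = MyCosmosDataSource           -- assumed external data source name
);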
I'm working with the Microsoft Parallel Data Warehouse appliance and attempting to use Flyway to handle table migrations in that environment. The issue I'm running into is that the default script for establishing the schema_version table fails.
Here is the default script that, as far as I can tell, is executed upon calling baseline():
CREATE TABLE [dbo].[dbresult_migration] (
[installed_rank] INT NOT NULL,
[version] NVARCHAR(50),
[description] NVARCHAR(200),
[type] NVARCHAR(20) NOT NULL,
[script] NVARCHAR(1000) NOT NULL,
[checksum] INT,
[installed_by] NVARCHAR(100) NOT NULL,
[installed_on] DATETIME NOT NULL DEFAULT GETDATE(),
[execution_time] INT NOT NULL,
[success] BIT NOT NULL
);
ALTER TABLE [dbo].[dbresult_migration] ADD CONSTRAINT [dbresult_migration_pk] PRIMARY KEY ([installed_rank]);
CREATE INDEX [dbresult_migration_s_idx] ON [dbo].[dbresult_migration] ([success]);
Specifically, the Microsoft Parallel Data Warehouse (MS PDW, or APS as it is now known) doesn't support expressions in default constraints.
Msg 104338, Level 16, State 1, Line 1
An expression cannot be used with a default constraint. Specify only constants for a default constraint.
This causes an error when GETDATE() is used as the default for the installed_on column.
The ALTER TABLE statement will also fail, as primary keys and indexes are managed differently in that environment.
Is there a way to override the default initialization script for the schema_version?
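As a hedged sketch only, given the restrictions described above, a PDW-friendly version of the create statement might look something like the following; the distribution option is an assumption for illustration and is not part of Flyway's default script.
CREATE TABLE [dbo].[dbresult_migration] (
    [installed_rank] INT NOT NULL,
    [version] NVARCHAR(50),
    [description] NVARCHAR(200),
    [type] NVARCHAR(20) NOT NULL,
    [script] NVARCHAR(1000) NOT NULL,
    [checksum] INT,
    [installed_by] NVARCHAR(100) NOT NULL,
    [installed_on] DATETIME NOT NULL,    -- no GETDATE() default; the value would have to come from the INSERT
    [execution_time] INT NOT NULL,
    [success] BIT NOT NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN);       -- assumed distribution; no separate ALTER TABLE for the primary key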
UPDATE
Further investigation reveals that the next failure occurs when attempting to insert records into the schema_version table. Specifically, the current implementation attempts to identify the current user based on a call to dbSupport.getCurrentUserFunction(). For SQL Server this function is SUSER_SNAME(). While this function is available on both standard SQL Server and the Parallel Data Warehouse, the current implementation of the Parallel Data Warehouse does not allow function calls within the VALUES portion of an INSERT statement. As such, the following error is returned:
Insert values statement can contain only constant literal values or variable references.
The query that is attempted is logged as:
INSERT INTO [dbo].[dbresult_migration] ([installed_rank],[version],[description],[type],[script],[checksum],[installed_by],[execution_time],[success]) VALUES (#P0, #P1, #P2, #P3, #P4, #P5, SUSER_SNAME(), #P6, #P7)
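A hedged sketch of a possible workaround, since the error message says variable references are allowed in the VALUES clause: capture the user name in a variable first. The literal values below are placeholders, and this is not how flyway-core currently builds the statement.
DECLARE @installed_by NVARCHAR(100) = SUSER_SNAME();   -- evaluate the function outside the INSERT

INSERT INTO [dbo].[dbresult_migration]
    ([installed_rank], [version], [description], [type], [script], [checksum], [installed_by], [execution_time], [success])
VALUES
    (1, '1', '<< Flyway Baseline >>', 'BASELINE', '<< Flyway Baseline >>', NULL, @installed_by, 0, 1);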
UPDATE 2
I now have a fork of flyway-core that correctly identifies whether you are connecting to SQL Server or SQL Server Parallel Data Warehouse. Another issue that I have identified is that SQL Server PDW does not allow DDL within transactions, so an attempt to baseline fails because this appears to be attempted from within a transaction template. Ultimately this is evolving from a question about how to modify an initialization script into a need for support of a new database platform. I've submitted this as a new issue on the Flyway repo on GitHub here.
I'm working on an ASP.NET Dynamic Data Entities Web Application.
When trying to insert a new entry into one of the tables, with some value in the column_name field, I get the following message:
Cannot insert the value NULL into column 'column_name', table 'DATABASE.dbo.table_name'; column does not allow nulls. INSERT fails.
The column properties are :
Entity Key : True
Nullable : False
Type : String
I believe Dynamic Data is trying to send a null value to Entity Framework for some reason, but I don't know why.
Do you know why Dynamic Data is behaving that way?
Or do you have any idea how to debug the insert process?
Thanks
I found where the problem comes from. A table in the database the model is based on contains a trigger. For some reason the model creates an association between the two tables involved in the trigger.
I have an ASP.NET GridView that handles insert operations into a SQL database. Records are only permitted to be inserted if they meet a uniqueness criterion, and this constraint is enforced using unique indexes in SQL Server. If the user attempts to insert a record that already exists, an error message is displayed.
I'm wondering what the best practice is for implementing this.
Check if the record exists SQL side, using IF EXISTS and locking hints (UPDLOCK, HOLDLOCK, etc.). Return an error code to ASP.NET depending on whether the record was inserted.
Perform the INSERT operation inside a SQL Server TRY/CATCH block, relying on the unique index to prevent the insert from occurring if the record exists. Return an error code depending on whether an exception was thrown.
Perform the INSERT operation SQL side, but without SQL TRY/CATCH. Handle the PK violation exception inside ASP.NET instead.
Normally I'd consider using exceptions to handle valid operations to be bad practice, i.e. software should not throw exceptions unless something is broken. However, if the unique index on the table in SQL is going to implement the desired constraint, why bother performing a manual check for the existence of the record? A sketch of that TRY/CATCH approach is shown below.
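A minimal sketch of the TRY/CATCH approach from the second option, relying on the unique index to reject duplicates; the table, columns, and result codes are placeholders, not from the original post.
BEGIN TRY
    INSERT INTO dbo.MyTable (UniqueCol, OtherCol)
    VALUES (@UniqueCol, @OtherCol);
    SELECT 0 AS ResultCode;                 -- inserted successfully
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() IN (2601, 2627)       -- duplicate key on a unique index / unique constraint
        SELECT 1 AS ResultCode;             -- record already exists
    ELSE
        THROW;                              -- re-raise anything unexpected
END CATCH;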
I would make a separate call to check if the record already exists. If yes, show a message to the user; if no, make the insert. The reason I would do it this way is that I prefer keeping all the business logic in the application.
If you insist on making just one stored proc call:
I would check before I insert. I would also add an output parameter to the stored proc that returns a message if the insert was unsuccessful. In my application, if I see a message in the output parameter, I display it to the user.
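A hedged sketch of that single-call pattern, checking first and reporting back through an output parameter; the procedure, table, and column names are placeholders.
CREATE PROCEDURE dbo.InsertRecord
    @UniqueCol NVARCHAR(50),
    @OtherCol  NVARCHAR(50),
    @Message   NVARCHAR(200) OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SET @Message = NULL;                    -- no message means the insert succeeded

    BEGIN TRANSACTION;

    IF EXISTS (SELECT 1 FROM dbo.MyTable WITH (UPDLOCK, HOLDLOCK)
               WHERE UniqueCol = @UniqueCol)
        SET @Message = 'Record already exists.';
    ELSE
        INSERT INTO dbo.MyTable (UniqueCol, OtherCol)
        VALUES (@UniqueCol, @OtherCol);

    COMMIT TRANSACTION;
END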