Intershop: How To Delete a Channel After Orders Have Been Created

I am able to delete a channel from the back office UI and run the DeleteDomainReferences job in SMC to clear the reference and be able to create a new channel again with the same id.
However, once an order has been created, the above-mentioned process won't work.
I heard that we can run some stored procedures against the database for situations like this.
Question: what are the stored procedures and steps to take to be able to clean any reference in Intershop so that I can create a channel with the same id again?
Update 9/26:
I configured a new job in SMC to call the DeleteDomainReferencesTransaction pipeline with the ToBeRemovedDomainID attribute set to the domain id that I am trying to clean up.
The job ran without error in the log file. The job finished almost instantly, though.
Then I ran the DeleteDomainReferences job in SMC. This is the job I normally run after deleting a channel when there is no order in that channel. This job failed with the following exception in the log file.
ORA-02292: integrity constraint (INTERSHOP.BASKETADDRESS_CO001) violated - child record found
ORA-06512: at "INTERSHOP.SP_DELETELINEITEMCTNRBYDOMAIN", line 226
ORA-06512: at line 1
Then I checked the BASKETADDRESS table and did see records for that domain id. This is, I guess, the reason why the DeleteDomainReferences job failed.
I also executed SP_BASKET_OBSERVER with that domain id, but it didn't seem to make a difference.
Is there something I am missing?

sp_deleteLineItemCtnrByDomain
-- Description : This procedure deletes basket/order related stuff.
-- Input : domainID The domain id of the domain to be deleted.
-- Output : none
-- Example : exec sp_deleteLineItemCtnrByDomain(domainid)
This stored procedure should delete the orders. Look up the domainid that you want to delete in the domaininformation table and call this procedure.
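For example, a minimal sketch of those two steps (the channel name is a placeholder, and the column names are from memory and may differ by version):
-- Find the domain id of the channel to clean up
select domainid from domaininformation where domainname = 'MyChannel';
-- Pass that id to the cleanup procedure
exec sp_deleteLineItemCtnrByDomain('<domainid from the query above>')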
You can also call the pipeline DeleteDomainReferencesTransaction. Set up an SMC job that calls this pipeline with the domain id you want to clean up as a parameter. It also calls a second stored procedure that cleans up the payment data, so it is actually the better approach.
Update 9/27
I tried this out on my local 7.7 environments. The DeleteDomainReferences job also removes the orders from the isorder table, so there is no need to run sp_deleteLineItemCtnrByDomain separately. After recreating the channel I see no old orders. I'm guessing that you discovered a bug in the version you are running, maybe related to the address table being split into different tables. Open a ticket for support to have them look at this.

With assistance from Intershop support, it has been determined that, in IS 7.8.1.4, sp_deleteLineItemCtnrByDomain.sql has an issue.
Lines 117 and 118 in 7.8.1.4:
delete from staticaddress_av where ownerid in (select uuid from staticaddress where lineitemctnrid = i.uuid);
delete from staticaddress where lineitemctnrid = i.uuid;
should be replaced by
delete from basketaddress_av where ownerid in (select uuid from basketaddress where basketid = i.uuid);
delete from basketaddress where basketid = i.uuid;
After making the stored procedure update, the DeleteDomainReferences job finishes without error and I was able to re-create the same channel again.
I was told the fix will become available in a 7.8.2 hotfix.

Related

MariaDB SELECT not failing on lock

I’m trying to cause a ‘SELECT’ query to fail if the record it is trying to read is locked.
To simulate it I have added a trigger on UPDATE that sleeps for 20 seconds and then in one thread (Java application) I’m updating a record (oid=53) and in another thread I’m performing the following query:
“SET STATEMENT max_statement_time=1 FOR SELECT * FROM Jobs j WHERE j.oid =53”.
(Note: Since my mariadb server version is 10.2 I cannot use the “SELECT … NOWAIT” option and must use “SET STATEMENT max_statement_time=1 FOR ….” instead).
I would expect the SELECT to fail since the record is in the middle of an UPDATE and should be read/write locked, but the SELECT succeeds.
Only if I add 'for update' to the SELECT does the query fail. (But this is not a good option for me.)
I checked the INNODB_LOCKS table during this time and it was empty.
In the INNODB_TRX table I saw the transaction with isolation level – REPEATABLE READ, but I don’t know if it is relevant here.
Any thoughts, how can I make the SELECT fail without making it 'for update'?
Normally consistent (and dirty) reads are non-locking; they just read some sort of snapshot, depending on what your transaction isolation level is. If you want to make the read wait for a concurrent transaction to finish, you need to set the isolation level to SERIALIZABLE and turn off autocommit in the connection that performs the read. Something like
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET STATEMENT max_statement_time=1 FOR ...
should do it.
Relevant page in MariaDB KB
Side note: my personal preference would be to use innodb_lock_wait_timeout=1 instead of max_statement_time=1. Both will make the statement fail, but innodb_lock_wait_timeout will cause an error code more suitable for the situation.
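For completeness, a rough sketch combining both suggestions for the scenario in the question (assuming the same Jobs table and oid=53; SET STATEMENT should accept innodb_lock_wait_timeout as well):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET STATEMENT innodb_lock_wait_timeout=1 FOR
  SELECT * FROM Jobs j WHERE j.oid = 53;
-- If the row is locked by the concurrent UPDATE, this should fail with
-- error 1205 (lock wait timeout) rather than a generic statement timeout.
COMMIT;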

BizTalk 2013 file receive location trigger on non-event

I have a file receive location which is scheduled to run at a specific time of day. I need to trigger an alert or mail if the receive location is unable to find any file at that location.
I know I can create custom components or I can use BizTalk 360 to do so. But I am looking for some out-of-the-box BizTalk feature.
BizTalk is not very good at triggering on non-events. Non-events are things that did not happen, but still represent a certain scenario.
What you could do is:
Insert the filename of any file triggering the receive location in a custom SQL table.
Once per day (scheduled task adapter or polling via stored procedure) you would trigger a query on the SQL table, which would only create a message in case no records were made that day.
Also think about cleanup: that approach will require you to delete any existing records.
Another option could be a scheduled task with a custom C# program which would create a file only if there were no input files, etc.
The sequential convoy solution should work, but I'd be concerned about a few things:
It might consume the good message when the other subscriber is down, which might cause you to miss what you'd normally consider a subscription failure
Long running orchestrations can be difficult to manage and maintain. It sounds like this one would be running all day/night.
I like Pieter's suggestion, but I'd expand on it a bit:
Create a table, something like this:
CREATE TABLE tFileEventNotify
(
    ReceiveLocationName VARCHAR(255) NOT NULL PRIMARY KEY,
    LastPickupDate      DATETIME NOT NULL,
    NextExpectedDate    DATETIME NOT NULL,
    NotificationSent    BIT NULL,
    CONSTRAINT CK_FileEventNotify_Dates CHECK (NextExpectedDate > LastPickupDate)
);
You could also create a procedure for this, which should be called every time you receive a file on that location (from a custom pipeline or an orchestration), something like
CREATE PROCEDURE usp_Mrg_FileEventNotify
(
    @rlocName VARCHAR(255),
    @LastPickupDate DATETIME,
    @NextExpectedDate DATETIME
)
AS
BEGIN
    IF EXISTS (SELECT 1 FROM tFileEventNotify WHERE ReceiveLocationName = @rlocName)
    BEGIN
        UPDATE tFileEventNotify
        SET LastPickupDate = @LastPickupDate, NextExpectedDate = @NextExpectedDate
        WHERE ReceiveLocationName = @rlocName;
    END
    ELSE
    BEGIN
        INSERT tFileEventNotify (ReceiveLocationName, LastPickupDate, NextExpectedDate)
        VALUES (@rlocName, @LastPickupDate, @NextExpectedDate);
    END
END
And then you could create a polling port that had the following Polling Data Available statement:
SELECT 1 FROM tFileEventNotify WHERE NextExpectedDate < GETDATE() AND (NotificationSent IS NULL OR NotificationSent = 0)
And write up a procedure to produce a message from that table that you could then map to an email sent via SMTP port (or whatever other notification mechanism you want to use). You could even add columns to tFileEventNotify like EmailAddress or SubjectLine etc. You may want to add a field to the table to indicate whether a notification has already been sent or not, depending on how large you make the polling interval. If you want it sent every time you can ignore that part.
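A rough sketch of what that notification procedure could look like, under the assumptions above (the procedure name is illustrative, and it flips NotificationSent so repeated polls don't resend the same alert):
CREATE PROCEDURE usp_Get_MissedFileNotifications
AS
BEGIN
    -- Return one row per receive location that is overdue and not yet notified
    SELECT ReceiveLocationName, LastPickupDate, NextExpectedDate
    FROM tFileEventNotify
    WHERE NextExpectedDate < GETDATE()
      AND (NotificationSent IS NULL OR NotificationSent = 0);

    -- Mark those rows so the next poll doesn't pick them up again
    UPDATE tFileEventNotify
    SET NotificationSent = 1
    WHERE NextExpectedDate < GETDATE()
      AND (NotificationSent IS NULL OR NotificationSent = 0);
END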
One option is to set up a BAM Alert to trigger if no file is received during the day.
Here's one mostly out of the box solution:
BizTalk Server: Detecting a Missing Message
Basically, it's an Orchestration that listens for any message from that Receive Port and resets a timer. If the timer expires, it can do something.

Is MLOAD executed in a single transaction?

I have an MLOAD job that inserts data from an Oracle database into a Teradata database. One of the things it does is drop the destination table and recreate it. Our production website populates a dropdown list based on what's in the destination table.
If the MLOAD script is not on a single transaction then it's possible that the dropdown list could fail to populate properly if the binding occurs during the MLOAD job. If it is transactional, however, it would be a seamless process because the changes would not show until the transaction is committed.
I checked the dbc.DBQLogTbl and dbc.DBQLQryLogsql views after running the MLOAD job and it appears there are several transactions occurring within the job, so it would seem that the entire job is not done in a single transaction. However, I wanted to verify that this is indeed the case before I make assumptions.
A transaction in Teradata cannot include multiple DDL statements; each DDL statement must be committed separately.
An MLoad is treated logically as a single transaction even if you see multiple transactions in DBQL; these are steps to prepare and clean up.
When your application tries to select from the target table everything will be ok (unless it's doing a dirty read using LOCKING ROW FOR ACCESS).
Btw, there might be another error message "table doesn't exist" when the application tries to select. Why do you drop/recreate the table instead of a simple DELETE?
Another solution would be loading a copy of the table and using view switching:
mload tab2;
replace view v as select * from tab2;
delete from tab1;
The next load will do:
mload tab1;
replace view v as select * from tab1;
delete from tab2;
And so on. Of course your load job needs to implement the switching logic.

Handling Constraint SqlException in Asp.net

Suppose I have a user table that has strong relationships (enforced foreign key constraints) with many additional tables, such as an orders table.
If we try to delete a user who has orders, a SqlException will be raised. How can I catch this exception and handle it properly?
Which strategy is appropriate here?
1) First try the delete action and, if an exception occurs, handle it?
2) Or, before the delete action, use code to check for child records throughout the database and alert accordingly? That is quite a piece of work...
So how should I do it?
--Edit:
The goal is not to delete the records from the db! The goal is to inform the user that this record has referencing records. Do I need to let SQL execute the delete command and try to catch the SqlException? And if so, how do I detect that it is a REFERENCE constraint SqlException?
Or should I write some code that detects whether there are referencing records before the delete command? The last approach gives me more control, but it's a lot of pain to implement this kind of verification for each entity.
Thanks
Do you even really want to actually delete User records? Instead I'd suggest having a "deleted" flag in your database, so when you "delete" a user through the UI, all it does is update that record to set the flag to 1. After all, you wouldn't want to delete users that had orders etc.
Then, you just need to support this flag in the appropriate areas (i.e. don't show "deleted" users in the UI).
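As a rough illustration of that approach (the IsDeleted column name is purely illustrative):
ALTER TABLE [User] ADD IsDeleted BIT NOT NULL DEFAULT 0;

-- "Deleting" a user just flips the flag
UPDATE [User] SET IsDeleted = 1 WHERE UserId = @YourUserId;

-- UI queries then filter deleted users out
SELECT * FROM [User] WHERE IsDeleted = 0;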
Edit:
"...but just for the concept, assume that i do want delete the user how do i do that?"
You'd need to delete the records from the other tables that reference that user first, before deleting the user record (i.e. delete the referencing records first then delete the referenced records). But to me that doesn't make sense as you would be deleting e.g. order data.
Edit 2:
"And if so, how to detect that is REFERENCE constraint SqlException?"
To detect this specific error, you can just check the SqlException.Number - I think for this error, you need to check for 547 (this is the error number on SQL 2005). Alternatively, if using SQL 2005 and above, you could handle this error entirely within SQL using the TRY...CATCH support:
BEGIN TRY
DELETE FROM User WHERE UserId = @MyUserId
END TRY
BEGIN CATCH
IF (ERROR_NUMBER() = 547)
BEGIN
-- Foreign key constraint violation. Handle as you wish
END
END CATCH
However, I'd personally perform a pre-check like you suggested, to save the exception. It's easily done using an EXISTS check like this:
IF NOT EXISTS(SELECT * FROM [Orders] WHERE UserId=@YourUserId)
BEGIN
-- User is not referenced
END
If there are more tables that reference a User, then you'd need to also include those in the check.
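For example, a sketch extending the check to a second referencing table (the [Invoices] table here is purely illustrative):
IF NOT EXISTS(SELECT * FROM [Orders] WHERE UserId=@YourUserId)
   AND NOT EXISTS(SELECT * FROM [Invoices] WHERE UserId=@YourUserId)
BEGIN
    -- User is not referenced by any of the checked tables; safe to delete
    DELETE FROM [User] WHERE UserId = @YourUserId
END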

How to find out which package/procedure is updating a table?

Is it possible to find out which package, or which procedure within a package, is updating a table?
A certain project was handed over without proper documentation (the person who handed it over has since left), and data that we know we have updated always goes back to some strange source point.
We are guessing that this could be a database job or scheduler that is running the update command without our knowledge. I am hoping there is a way to find out where the update is coming from, by putting a trigger on the table that we are monitoring to record the source.
Any ideas?
Thanks.
UPDATE: I poked around and found out how to trace a statement back to its owning PL/SQL object.
In combination with what Tony mentioned, you can create a logging table and a trigger that looks like this:
CREATE TABLE statement_tracker
( SID NUMBER
, serial# NUMBER
, date_run DATE
, program VARCHAR2(48) null
, module VARCHAR2(48) null
, machine VARCHAR2(64) null
, osuser VARCHAR2(30) null
, sql_text CLOB null
, program_id number
);
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/
In order for the trigger above to compile, you'll need to grant the owner of the trigger these permissions, when logged in as the SYS user:
grant select on V_$SESSION to <user>;
grant select on V_$SQL to <user>;
You will likely want to protect the insert statement in the trigger with some condition that only makes it log when the change you're interested in is occurring - on my test server this statement runs rather slowly (1 second), so I wouldn't want to be logging all these updates. Of course, in that case, you'd need to change the trigger to be a row-level one so that you could inspect the :new or :old values. If you are really concerned about the overhead of the select, you can change it to not join against v$sql, and instead just save the SQL_ADDRESS column, then schedule a job with DBMS_JOB to go off and update the sql_text column with a second update statement, thereby offloading the update into another session and not blocking your original update.
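As a rough sketch of that row-level variant (the column name some_col is purely illustrative; add NULL handling if the column is nullable):
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
FOR EACH ROW
BEGIN
  -- Only log when the column of interest actually changes
  IF :new.some_col <> :old.some_col THEN
    INSERT
      INTO statement_tracker
    SELECT ss.SID
         , ss.serial#
         , sysdate
         , ss.program
         , ss.module
         , ss.machine
         , ss.osuser
         , sq.sql_fulltext
         , sq.program_id
      FROM v$session ss
         , v$sql sq
     WHERE ss.sql_address = sq.address
       AND ss.SID = USERENV('sid');
  END IF;
END;
/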
Unfortunately, this will only tell you half the story. The statement you're going to see logged is going to be the most proximal statement - in this case, an update - even if the original statement executed by the process that initiated it is a stored procedure. This is where the program_id column comes in. If the update statement is part of a procedure or trigger, program_id will point to the object_id of the code in question - you can resolve it thusly:
SELECT * FROM all_objects where object_id = <program_id>;
In the case when the update statement was executed directly from the client, I don't know what program_id represents, but you wouldn't need it - you'd have the name of the executable in the "program" column of statement_tracker. If the update was executed from an anonymous PL/SQL block, I'm not sure how to track it back - you'll need to experiment further.
It may be, though, that the osuser/machine/program/module information may be enough to get you pointed in the right direction.
If it is a scheduled database job then you can find out what scheduled database jobs exist and look into what they do. Other things you can do are:
look at the dependencies views e.g. ALL_DEPENDENCIES to see what packages/triggers etc. use that table. Depending on the size of your system that may return a lot of objects to trawl through.
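For example (a rough sketch; restrict by owner if it returns too much):
select type, name, owner
from all_dependencies
where referenced_name = 'MYTABLE'
and referenced_type = 'TABLE';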
Search all the database source code for references to the table like this:
select distinct type, name
from all_source
where lower(text) like lower('%mytable%');
Again that may return a lot of objects, and of course there will be some "false positives" where the search string appears but isn't actually a reference to that table. You could even try something more specific like:
select distinct type, name
from all_source
where lower(text) like lower('%insert into mytable%');
but of course that would miss cases where the command was formatted differently.
Additionally, could there be SQL scripts being run through "cron" jobs on the server?
Just write an "after update" trigger and, in this trigger, log the results of "DBMS_UTILITY.FORMAT_CALL_STACK" in a dedicated table.
The purpose of this function is exactly to give you the complete call stack of all the stored procedures and triggers that have been fired to reach your code.
I am writing from the mobile app, so I can't give you more detailed examples, but if you google for it you'll find many of them.
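As a minimal sketch of that idea (the log table, trigger name, and monitored table are all illustrative):
CREATE TABLE call_stack_log
( logged_at  DATE
, call_stack VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER my_table_callstack_trg
AFTER UPDATE
ON my_table
DECLARE
  v_stack VARCHAR2(4000);
BEGIN
  -- Record who/what reached this trigger, via the PL/SQL call stack
  v_stack := DBMS_UTILITY.FORMAT_CALL_STACK;
  INSERT INTO call_stack_log (logged_at, call_stack)
  VALUES (sysdate, v_stack);
END;
/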
A quick and dirty option if you're working locally, and are only interested in the first thing that's altering the data, is to throw an error in the trigger instead of logging. That way, you get the usual stack trace and it's a lot less typing and you don't need to create a new table:
CREATE OR REPLACE TRIGGER table_of_interest_trg  -- trigger name is illustrative
AFTER UPDATE ON table_of_interest
BEGIN
  RAISE_APPLICATION_ERROR(-20001, 'something changed it');
END;
/
