BizTalk send/receive - does it wait for completion of a called stored procedure?

I've set up a BizTalk design that chains a couple of send/receives to a SQL stored procedure (which inserts the data into the relevant tables).
It's organised in a specific sequence: data goes into Table A first, and the stored procedures for the following tables check that the data exists in Table A (a simple IF EXISTS check).
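For illustration, a dependent stored procedure of the kind described might look like this minimal T-SQL sketch (the procedure, table, and column names are invented):

CREATE PROCEDURE dbo.usp_InsertStudentProgramme
    @PersonId INT,
    @ProgrammeCode VARCHAR(20)
AS
BEGIN
    -- Only insert if the Person row created earlier in the flow already exists
    IF EXISTS (SELECT 1 FROM dbo.Person WHERE PersonId = @PersonId)
        INSERT INTO dbo.StudentProgramme (PersonId, ProgrammeCode)
        VALUES (@PersonId, @ProgrammeCode);
END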
I've noticed, though, that the flow isn't consistent further down the chain, almost as if SQL Server is executing the stored procedure that inserts/updates the record more slowly than the BizTalk orchestration is progressing.
I've made sure that my BizTalk design uses send/receive, as I assumed the transaction wouldn't progress until BizTalk received a response from the stored procedure (which would indicate SQL Server has finished inserting the required data).
For example, the process writes data to the Person table, which is later relied upon by the Student Programme/Student Module steps. Occasionally, the orchestration will dehydrate on the Programme or Module stored procedure (from what I can tell, because those stored procedures are checking whether the Person record created at the start of the flow exists).
Can anyone please confirm whether send/receive will wait for a SQL stored procedure to finish executing before progressing the BizTalk transaction further through the orchestration?

BizTalk orchestrations have some smarts built into them: if the next shapes have no dependency on the response, then no, the orchestration might not wait for the response before executing the next shapes. What you can try is setting Delivery Notification to Transmitted in the logical send port settings.

Related

Locking transactions (SQL Server + EF5)

I am trying to debug a performance issue in an ASP.NET application using .NET 4.5, EF5 (with a 2nd level cache and lazy loaded navigation properties) and SQL Server 2014. We are experiencing a number of wait locks in the SQL server. When I look at the locking transactions, they contain a very quick UPDATE, and then a very large SELECT. The UPDATE is ostensibly a necessary one, but I am confused as to why the SELECT is being run in the same transaction (and why anything is being selected at all). The fundamental issue is that the table referenced in the UPDATE statement is locked for the duration of the SELECT statement.
We use the repository pattern for getting data from the db, and DbContext.SaveChanges() for committing changes. I cannot figure out how it is possible that EF produces a transaction containing both a write and a read, and searching Google has not turned up relevant results.
We have a number of interfaces into the system, and a couple of console applications working on the database as well, but they all go through the same setup/versions of .NET and EF.
I figure that it must be through SaveChanges, since this is (AFAIK) the only time that things are written to the database.
Does anyone here have a hint as to how these locking transactions might be produced?
The fundamental issue is that the table referenced in the UPDATE statement is locked for the duration of the SELECT statement.
The answer is in your question:
the SELECT is being run in the same transaction
An X (exclusive) lock is always held until the end of the transaction, i.e. until it commits or rolls back. So if a long SELECT follows your quick UPDATE in the same transaction, everything the UPDATE locked in your table remains locked until the SELECT ends.
You can separate the UPDATE and the SELECT into different transactions if your business rules permit, you can add an appropriate index on the updated table so that only some rows are locked rather than the whole table, or you can optimize the SELECT to execute faster.
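To make the locking behaviour concrete, here is a minimal T-SQL sketch of the pattern (table and column names are invented):

BEGIN TRANSACTION;

-- The UPDATE takes exclusive (X) locks on the rows it touches
UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE AccountId = 42;

-- A long-running SELECT in the same transaction keeps it open,
-- so the X locks from the UPDATE are still held while it runs
SELECT COUNT(*) FROM dbo.AuditLog;

COMMIT; -- only here are the X locks released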

How to save a table when it is created in Oracle 11g?

I ran a CREATE TABLE command and an INSERT to put values into the table, but the data is not saved. The next time I log in, the table is there but the inserted data is not; "no rows selected" is the message shown on screen.
Please help me save the changes in Oracle 11g.
Solution
First, set your Oracle Database client application to always implicitly commit or roll back transactions before termination (the behaviour recommended by Oracle). Then learn about database transactions, and either set your Oracle Database client application to autocommit mode or explicitly commit your DML transactions (with the COMMIT statement), because you should not rely on the commit behaviour of a client application.
Summary on database transactions
Database transactions are a fundamental concept of database management systems. The essential property of a transaction is its atomicity: a transaction is either
committed (database changes are all made permanent, that is, stored on disk); or
rolled back (database changes are all discarded).
Lexically, a transaction consists of one or more SQL statements of these kinds:
data definition language (DDL) statements (CREATE, DROP, ALTER, GRANT, REVOKE, …) that change the structure of the database;
data manipulation language (DML) statements (INSERT, DELETE, UPDATE, SELECT, …) that retrieve or change the contents of the database.
Database client applications can operate in two different modes:
autocommit mode, where implicit transactions are implicitly started before DDL and DML statements and, after those statements, implicitly committed on success or rolled back on failure, while explicit transactions are explicitly started with BEGIN statements, committed with COMMIT statements, and rolled back with ROLLBACK statements;
non-autocommit mode, where mixed transactions are implicitly started before DDL or DML statements and explicitly committed with COMMIT statements or rolled back with ROLLBACK statements.
In both modes, transactions are implicitly committed before the database client application terminates normally and implicitly rolled back before it terminates abnormally. For database management systems that do not support rolling back DDL statements (MariaDB, Oracle Database), transactions are also implicitly committed after DDL statements.
Explanation
When you issued your CREATE TABLE statement, since it is a DDL statement and Oracle Database does not support rolling back DDL statements, the transaction that created the table was implicitly committed. That is why the table structure was still there after you started a new session. But when you issued an INSERT statement, since it is a DML statement and you were not in autocommit mode, the transaction that populated the table was not implicitly committed. And since your Oracle Database client application was not set to implicitly commit transactions before termination, the table contents were gone after you started a new session.
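A minimal SQL*Plus illustration of this behaviour (the table name is invented):

CREATE TABLE t (id NUMBER);  -- DDL: implicitly committed; the table survives the session
INSERT INTO t VALUES (1);    -- DML: opens a transaction; the row is not yet permanent
COMMIT;                      -- without this, the row is lost when the session ends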

How to transfer data from SQL Server to Informix and vice versa

I want to transfer table data from SQL Server to Informix and vice versa.
The transfer should run on a schedule, and sometimes when the user performs a specific action.
I currently do this operation with delete and insert transactions, and over the web it takes a long time, between 15 and 30 minutes.
How can I do this operation in an easy way, taking performance into consideration?
Say I have:
a Vacation table in SQL Server, and I want to transfer all the updated data to the Vacation table in Informix; and
a Permission table in Informix, and I want to transfer all the updated data to the Permission table in SQL Server.
DISCLAIMER: I am not an SQL Server DBA. However, I have been an Informix DBA for over ten years and can make some recommendations as to its performance.
Disclaimer aside, it sounds like you already have a functional application, but the performance is a show-stopper and that is where you are mainly looking for advice.
There are some technical pieces of information that would be helpful to know, but in their absence, I'm going to make the following assumptions about your environment and application. Please comment or edit your question if I am wrong on any of these.
Database server versions. From the tags, it appears you are using SQL Server 2012. However, I cannot determine the Informix server and version, so I will assume you are running IDS 11.50 or greater.
How the data is being exchanged currently. Are you connecting directly from your .NET application to Informix? I would assume that is the case with SQL Server and will make the same assumption for your Informix connection as well.
Table structures. I assume you have proper indexing on the tables. On the Informix side, dbschema -d dbname -t tablename will give the basic schema.
If you haven't tried exporting the data to CSV, and as long as you don't have any compliance concerns with doing so, I would suggest loading the data from a delimited file. (Informix normally deals with pipe-delimited files, so you'll either need to change the delimiter on the SQL Server side to a pipe, |, or adjust it on the Informix import side.) On the Informix end, this would be:
LOAD FROM 'source_file_from_sql_server' DELIMITER '|' INSERT INTO vacation (field1, field2, ...)
For reusability, I would recommend putting this in a stored procedure; just wrap the load statement inside BEGIN WORK; and COMMIT WORK; to keep your transactional integrity. Michał Niklas suggested some ways to track changes. If there is any correlation between the transfer of data to the vacation table in Informix and the permission table back in SQL Server, I would propose another option: adding a trigger to the vacation table so that all new values are also written to a staging table (see the sketch below).
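A minimal sketch of such a trigger in Informix SQL (the staging table name and columns are assumptions):

-- Copy each newly inserted row into a staging table for later transfer
CREATE TRIGGER trg_vacation_ins
    INSERT ON vacation
    REFERENCING NEW AS n
    FOR EACH ROW
    (INSERT INTO vacation_staging (field1, field2)
        VALUES (n.field1, n.field2));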
With the import logic in a stored procedure, you can fire the import on demand:
EXECUTE PROCEDURE vacation_import();
You also mentioned the need to schedule the import, which can be accomplished with Informix's "dbcron". Using this feature, you'll create a scheduled task that executes vacation_import() periodically as well. If you haven't used this feature before, the OpenAdmin Tool (OAT) will be helpful. You will also want to do some housekeeping with the CSV files; this can be addressed with the system() call, which you can make from stored procedures in Informix.
Some ideas:
Add a was_transferred column to the source tables, setting its default value to 0 (you can use 0/1 instead of false/true).
From the source table, select rows with was_transferred = 0.
After transferring the data, update each selected source row, setting its was_transferred to 1.
Make a syncro_info table with fields like date_start and date_stop. If you discover a record with date_stop IS NULL, it means a transfer is already in progress; this protects you against synchronizing the same data twice. A sketch of this scheme follows.
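A minimal SQL sketch of this change-tracking scheme, using the vacation table as the example (column types are assumptions; on Informix, the date fields would be DATETIME YEAR TO SECOND):

-- Track which rows still need transferring
ALTER TABLE vacation ADD was_transferred SMALLINT DEFAULT 0;

-- Select only the rows that have not been transferred yet
SELECT * FROM vacation WHERE was_transferred = 0;

-- After a successful transfer, mark those rows as done
UPDATE vacation SET was_transferred = 1 WHERE was_transferred = 0;

-- Guard against two synchronizations running at once
CREATE TABLE syncro_info (
    date_start DATETIME,
    date_stop  DATETIME
);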

Microsoft AX Dynamics Process Integration through Outbound Ports

I would like to know the process integration steps through outbound ports.
If any event occurs in AX Dynamics, we want to be notified of those events in the form of XML (process integration).
Example: Sales Order Creation, Customer Creation, Purchase Order Creation.
Outbound ports are only useful for asynchronous communication.
See AX 2012 Export Data with Outbound ports for an example (using the file system).
The steps to initiate sending data are in AIF_SendCustomer.
As this is no lightweight operation, you may consider logging the records which need integration in a custom integration table, then doing the processing in batch.
This is done in the insert and/or update (and maybe delete) methods.
Deletes require you to store the RecId field value in the external system, to be used for delete requests. The following does not cover deletes.
For the logged table, make the following method:
void syncRecord()
{
    // Record the identity of the changed record in the custom log table
    XXXRecordLog log;

    log.RefTableId = this.TableId; // which table the record belongs to
    log.RefRecId   = this.RecId;   // which record changed
    log.insert();
}
Then call this.syncRecord() in the insert and update methods.
In the query for the outbound service, be sure to use an exists join between your table and the log table. This way only changed records are exported.
Make a batch job to do the transfer using the AIF_SendCustomer as a template.
After a synchronous (AifSendMode::Sync) transfer of the records, delete the log records (or mark them transferred).
Finally call AIFoutboundProcessingService to flush the file:
new AIFoutboundProcessingService().run();
Try to keep things simple. It might be simpler to do a comma-file export of the changed records!

When I use Fixture with SqlAlchemy in my unit tests, why am I unable to confirm changes to the database during test?

I am testing a message processor that uses SqlAlchemy (v0.7.4). In my test, I am using Fixture (v1.4) with SQLite to set up and tear down a temporary database. My fixture data includes a file table with a status field that should get updated when the processor runs.
I have confirmed that the test, the processor being tested, and the fixture are all sharing the same database session.
I query the status field on the file record before the processor is run and afterwards. The value should change (from an int representing "Processing" to "Complete"). I have added debug code within the processor to verify that the field is being updated with the correct new status value. I am also able to independently verify that the processor runs successfully by checking the contents of an output file it produces. However, when I query the status at the end of my test using my test's database session, it is always the same as the value at the beginning.
I have tried explicitly committing and flushing the session before the final status query. Nothing works. Any ideas?
The issue here was twofold: 1) my test was using a temporary SQLite database in memory, and 2) within my functional test, the processor was being spawned in a new process.
So even though I had hacked the processor class to use the same database session as the test itself, since the processor and the database were in separate memory spaces, the database updates the processor was making were invisible to the test code trying to verify the results.
Solution: set up the temporary SQLite database as a file on disk rather than in memory.
Additionally, I discovered that when Fixture does its teardown, it throws an error if your fixture data isn't in the same state it was set up in. But that was a separate issue.
