BPEL transactions getting stuck often in WebLogic SOA

We are really having a tough time manually recovering BPEL transactions that frequently get stuck in SOA. What could be the possible reasons? Why do they get stuck so often? We are using 11g.

Are you using BPEL database adapters? Perhaps there are problems with the datasource.
Or are you accessing web services?
Have you tried using BPEL fault policies?
http://docs.oracle.com/cd/E23943_01/dev.1111/e10224/bp_faults.htm#SOASE478
It provides 6 options:
ora-retry
ora-terminate
ora-human-intervention
ora-rethrow-fault
ora-replay-scope
ora-java
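As a rough sketch, a fault policy that retries remote faults and falls back to human intervention might look like the fragment below. The element names follow the 11g fault-policy schema from the linked documentation, but the policy id, fault name, and retry counts here are illustrative assumptions, not values from the question:

```xml
<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <faultPolicy version="2.0.1" id="DefaultPolicy">
    <Conditions>
      <!-- Catch remote faults (e.g. an unreachable partner service) -->
      <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
                 name="bpelx:remoteFault">
        <condition>
          <action ref="ora-retry"/>
        </condition>
      </faultName>
    </Conditions>
    <Actions>
      <!-- Retry 3 times, 5 seconds apart; escalate if all retries fail -->
      <Action id="ora-retry">
        <retry>
          <retryCount>3</retryCount>
          <retryInterval>5</retryInterval>
          <retryFailureAction ref="ora-human-intervention"/>
        </retry>
      </Action>
      <!-- Park the instance for manual recovery in Enterprise Manager -->
      <Action id="ora-human-intervention">
        <humanIntervention/>
      </Action>
    </Actions>
  </faultPolicy>
</faultPolicies>
```

A policy like this is bound to a composite or component via a companion fault-bindings file; check the linked documentation for the exact binding syntax for your deployment.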

Related

Migrating to Axon Server

I am currently running Axon Framework with PostgreSQL as the event store. I am investigating how to scale this horizontally, but adding nodes via the Kubernetes HPA functionality regularly results in errors like:
org.axonframework.modelling.command.ConcurrencyException: An event for aggregate [x] at sequence [y] was already inserted
I read that Axon Server enables this type of scaling, but I cannot find anything on migrating the current event store to Axon Server. Is this possible?
Well, it seems my Google skills are lacking. I found the process here:
https://docs.axoniq.io/reference-guide/axon-server/migration/non-axon-server-to-axon-server
This only covers migration to Axon Server EE, though, so I will have to dig more, I guess.
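For context on the ConcurrencyException above: an Axon event store enforces a unique (aggregate identifier, sequence number) pair, so when two nodes concurrently handle commands for the same aggregate, the second append at the same sequence is rejected. This is a minimal sketch of that invariant, not Axon code; the class and message are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Toy event store illustrating the uniqueness guarantee that produces
// Axon's ConcurrencyException under concurrent appends. Not Axon code.
class TinyEventStore {
    // Highest sequence number stored per aggregate (-1 means none yet).
    private final Map<String, Integer> lastSequence = new HashMap<>();

    // Append an event claiming a specific sequence number for an aggregate.
    synchronized void append(String aggregateId, int sequence, String event) {
        int expected = lastSequence.getOrDefault(aggregateId, -1) + 1;
        if (sequence != expected) {
            // In a real store this is a unique-constraint violation.
            throw new IllegalStateException(
                "An event for aggregate [" + aggregateId + "] at sequence ["
                + sequence + "] was already inserted (or is out of order)");
        }
        lastSequence.put(aggregateId, sequence);
    }
}
```

Two HPA-scaled replicas both loading the aggregate at sequence y and appending y+1 hit exactly this conflict, which is why routing commands for one aggregate to one node (what Axon Server does) avoids it.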

BizTalk SQL Adapter or a .NET SqlClient

My application has a BizTalk orchestration that needs to do a basic insert into a table. Which is the better way of doing it:
using a SQL adapter that calls a stored procedure (with just one insert statement) on SQL Server, or
including a method call in the orchestration, where the method uses SqlClient.SqlCommand.ExecuteNonQuery with the SQL stored procedure on the GetCommand.
I want to know the best way to insert data into a database from a BizTalk orchestration.
The correct and best way to integrate with SQL Server in a BizTalk app is the WCF-SQL adapter.
Do not use the SQL Client in code: you have a greater chance of making things worse for operations, maintenance, and performance than by using the built-in tools.
Never guess or make assumptions about performance. Without knowing exactly what to address, through testing and measurement, you will either a) spend time 'fixing' a problem that doesn't exist, b) make things worse by implementing something less optimized than the base product, or c) both.
Use the WCF-SQL adapter, and if you measure a specific gap against an SLA, let us know; we can help you with that. 99.99% of the time, the solution will not involve using the SQL Client directly.
You should always prefer using the SQL adapter:
It will be tracked in the Group Hub.
You'll get better diagnostics and tracking options around the port.
Retry logic is built in and configurable.
The biggest downside is performance: it will create another persistence point in your orchestration at the send shape, whereas an inline SQL call would avoid that.

Where is the optimal place to implement SQL retry logic when using ASP.NET TableAdapters?

I'm using Visual Studio 2010, TableAdapters, and writing my code in C#. I'm currently debugging my code after having written some data-population logic, and I'm looking through the SQL issues I'm encountering. At this point I want to implement some SQL retry code to handle retrying queries if they time out for various reasons. I could implement it in my business logic layer, but then I'd be duplicating that code in every BLL method that calls into my TableAdapters. I believe I need to do this at the TableAdapter level, or even somewhere higher, as I have on the order of 30 different TableAdapters corresponding to different data. Does anyone have any sample code for logic that will be run by all TableAdapters? I have seen sample code on extending TableAdapters, but I'm looking for something even more centralized for retry logic.
You can avoid this problem by setting the command's timeout property to 0 (no timeout):
SqlCommand.CommandTimeout
Alternatively, you can put the retry logic in the DAL layer and make 2-3 attempts.
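The DAL-level retry suggested above can be centralized in a single generic helper that every data-access call is routed through, so the 30-odd TableAdapters don't each need their own copy. This is a sketch in Java rather than C# (the helper name, attempt count, and backoff are assumptions, not from the question), but the shape translates directly:

```java
import java.util.concurrent.Callable;

// Hypothetical centralized retry wrapper for a data-access layer.
// Every DAL call goes through withRetry(), so retry policy lives in one place.
class RetryHelper {
    static <T> T withRetry(Callable<T> action, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();           // the actual query
            } catch (Exception e) {
                last = e;                       // remember the failure
                Thread.sleep(100L * attempt);   // simple linear backoff
            }
        }
        throw last;  // all attempts failed; surface the final error
    }
}
```

In the C# equivalent you would wrap the TableAdapter call in a `Func<T>` and rethrow after the final attempt; ideally the wrapper would also retry only on transient errors (timeouts, deadlocks) rather than on every exception.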

LINQ to SQL: First call

I'm using LINQ to SQL to access the database (SQL Server 2005). The first call takes up to 10 seconds to retrieve the data; a second call takes less than a second.
What can be done to improve the performance of the first call to the database?
The database action happens in the controller of an ASP.NET MVC application.
Thanks
I believe what you are experiencing is SQL Server caching the query, which is normal. If the original 10 seconds is too much, you need to capture the SQL query (I would suggest Profiler) and then review it. In the past I would run the SQL in the management console with "Show Actual Execution Plan" selected. There are resources on the web explaining how to read it, but it should help you find the bottleneck. HTH
Edit
I meant to say it is normal for long-running queries to speed up after they have been run once, since SQL Server caches the query (the execution plan, to be exact) for later use.
Wade
I'm not sure this kind of delay is LINQ- or ASP.NET-related. Do you also notice it when accessing the database with plain ADO.NET?
I doubt very much that LINQ to SQL is the culprit here. Can you post the T-SQL L2S is generating, along with row counts and information on indexing?
I think what you're experiencing is the ASP.NET compilation process the first time the page is loaded, not a performance problem with LTS. One way to measure performance is to profile it with the LINQ to SQL Profiler. It will tell you exactly what query is being generated, as well as execution times for both the query and your code.

designing a distributed (over many servers) error logging feature, WCF or?

I am designing an error-logging feature so our servers (each doing different things) can have a central data store for logging errors.
Would it be a good idea to have the various applications write to the error log using a WCF service, or is that a bad idea?
They could do it with plain ADO.NET to the database, which I think is the simpler route.
How about having a look at syslog? It was made for exactly that purpose.
I'd say just log to your local data store. The advantages are:
Speed - it's pretty rapid to just dump your chosen error report to an existing data connection.
Traceability - what happens if you have an error in your logging service? You lose all ability to chase down errors on all servers.
Simplicity - if you change the endpoint for your error service, you have to update every other application that uses it.
Reporting - do you really want to trawl through error reports from tens/hundreds of applications in one place when you could easily find them in the data store local to the app?
Of course, any of these points could be viewed from the other side, these are just my opinions.
We're looking at a similar approach, except for audit logging as well as error handling.
We're looking at using WCF over netTcp, and also at using the event log, but that seems to require high-trust settings and may have performance issues.
I'm not convinced by ZombieSheep's objections:
Speed - it's pretty rapid to dump your chosen error report over an existing WCF connection. Seriously. Plus, you can do it async/queued. Not a key factor for me.
Traceability - you log to the central service and the local store. When the error service comes back online, you poll your machines for events since the last timestamp. Problem solved.
Simplicity - use a DNS alias and don't change the path; that's the way you should do internal addressing anyway, IMO.
Reporting - what if you have multiple apps on a single machine? What if you want to see the timing of errors across multiple apps?
