I am running a GlassFish 3.1 server that uses distributed transactions against an Oracle database (via an Oracle XA datasource) and a JMS hub (ActiveMQ).
When looking at active transactions, I see hundreds of transactions that have a global transaction ID but show as having no transaction and are in an unknown state. I cannot see anything in the logs explaining why this happens, and I would like to know how to clear these down.
My concern is that these transactions in this weird state may start to block other transactions.
Any help would be most appreciated. I am a support person, not a developer, so I have no idea what the code is doing.
I believe this is a bug in GlassFish that causes transactions to go into an in-doubt state if the transaction monitor is turned on.
To get the entries to purge themselves, set the property:
server-config.transaction-service.property.purge-cancelled-transactions-after = 0
This has fixed our issue.
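For reference, a minimal way to apply this (a sketch, assuming the asadmin CLI against the default server-config target; verify the dotted name on your version first) is:

    asadmin set server-config.transaction-service.property.purge-cancelled-transactions-after=0

You may need to restart the domain for the transaction service to pick the property up.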
There is an issue in a topology of 3 hosts.
The primary has a scheduled OS task (every hour) that deletes archive logs older than 3 hours with RMAN. The archivelog deletion policy is configured to "APPLIED ON ALL STANDBY".
There are 2 remote log_archive_dest entries: a physical standby and a downstream capture database. Every day the warning "RMAN-08120: WARNING: archived log not deleted, not yet applied by standby" appears in the task's logs, and then it resolves itself within 2-3 hours.
I've checked V$ARCHIVED_LOG during the issue and found that the redo logs were not applied on the downstream server. I have not yet caught the issue live on the downstream server, but during the "good" periods all the apply processes are enabled, yet the DBA_APPLY_PROGRESS view tells me that the apply_time of the messages is 01-JAN-1970.
The DBA_CAPTURE view shows the capture processes' status as ENABLED, with status_change_time approximately the time the RMAN issue resolved.
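For reference, the checks I ran look roughly like this (a sketch against the standard V$ARCHIVED_LOG and DBA_APPLY_PROGRESS views; adjust the filters for your destination IDs):

    -- Which archived logs has each destination not yet applied?
    SELECT dest_id, sequence#, applied, completion_time
    FROM v$archived_log
    WHERE applied <> 'YES'
    ORDER BY dest_id, sequence#;

    -- How far has each apply process progressed?
    SELECT apply_name, apply_time, applied_message_create_time
    FROM dba_apply_progress;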
I'm new to GoldenGate, Streams, and downstream capture. I've read the Oracle reference docs, but couldn't find anything about a schedule for capture or apply processes.
Could someone please help me figure out what else to check, or what to read?
Grateful for every response.
Thanks.
I'm trying to allow my search indexer to connect to a Cosmos DB account behind a VNet. When adding a shared private link, the provisioning state is set to "failed" without giving me an explanation. I already have a private endpoint set up on the Cosmos DB side. How do I make this work?
I had the same issue occur on the same day you reported it.
I had been setting up this connection via Azure Pipelines the day before, but suddenly the same pipeline stopped working.
I raised it as an issue with Microsoft. It was quickly reproduced by first-line support and passed on to the escalation engineers, who confirmed there had been a recent change in the Fluent SDK they use: when an ARM deployment is initiated for Shared Private Link resources, it ends up specifying both the template link and the template ID (incorrectly, by accident). As a result, customers creating SPL resources fail with a 500.
I am told there is a worldwide fix being rolled out, to be completed on Monday, Pacific time.
We have been seeing the following warnings in the event log of our BizTalk machine since upgrading to BizTalk Server 2006. They seem to occur randomly, 6 or 8 times per day.
Does anyone know what this means and what needs to be done to clear it up?
We have only one BizTalk server, running on a single machine.
I am new to BizTalk, so I am unable to find out how many tracking host instances are running for the BizTalk server. Also, can you please let me know whether only one instance can be configured per server/machine?
Source: BAM EventBus Service
Event: 5
Warning Details:
Execute batch error. Exception information: TDDS failed to batch execution of streams. SQLServer: bizprod, Database: BizTalkDTADb.
Cannot insert duplicate key row in object 'dta_MessageFieldValues' with unique index 'IX_MessageFieldValues'.
The statement has been terminated.
I see you got a partial answer in your MSDN post.
Go to the BizTalk Admin Console and check under Platform Settings -> Hosts; in the list of hosts on the right, confirm that only a single host has the Tracking column marked as Yes.
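If you'd rather check from SQL than the console, here is a hedged sketch against the management database (assuming the usual BizTalkMgmtDb name; the adm_Host table and its HostTracking flag may vary by version, so verify before relying on this):

    -- List hosts and whether each one is a tracking host (HostTracking = 1).
    SELECT Name, HostTracking
    FROM BizTalkMgmtDb.dbo.adm_Host;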
As to your other question: yes, you can run a single host instance on a single server, although once the server starts to come under load you may want to set up more instances so you can balance the workload better.
We have a long request that does a catalog search, calculates some information, and then stores it in another document. A call is made to index the document after the store.
In the logs we're getting errors such as:

INFO ZPublisher.Conflict ConflictError at /pooldb/csvExport/runAgent: database conflict error (oid 0x017f1eae, class BTrees.IIBTree.IISet, serial this txn started with 0x03a30beb4659b266 2013-11-27 23:07:16.488370, serial currently committed 0x03a30c0d2ca5b9aa 2013-11-27 23:41:10.464226) (19 conflicts (0 unresolved) since startup at Mon Nov 25 15:59:08 2013)

and the transaction is getting aborted and restarted.
All the documentation I've read says that, thanks to MVCC in ZODB, ReadConflictErrors no longer occur. Since this is written in RestrictedPython, putting in a savepoint is not a simple option (but it's likely my only choice, I'm guessing).
Is there another way to avoid this conflict? If I do need to use savepoints, can anyone think of a security reason why I shouldn't whitelist the savepoint transaction method for use in PythonScripts?
The answer is that there really is no way to do a large transaction that involves writes while other small transactions are hitting the same objects at the same time. BTrees are supposed to have special conflict-resolution code, but it doesn't seem to work in the cases I'm dealing with.
The only way I can think to manage this is with something like the code below, which breaks the reindex into smaller transactions to reduce the chance of a conflict. There really should be something like this built into Zope/Plone.
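A minimal sketch of the batching idea (my own illustration, not the exact code; it assumes trusted filesystem code rather than RestrictedPython, and catalog, paths, and batch_size are placeholders for your own objects):

    import transaction

    def reindex_in_batches(catalog, paths, batch_size=100):
        # Commit every batch_size objects so each transaction touches
        # fewer BTree buckets, shrinking the window for a conflict.
        for count, path in enumerate(paths, 1):
            obj = catalog.unrestrictedTraverse(path)  # resolve the object
            catalog.catalog_object(obj, path)         # reindex it
            if count % batch_size == 0:
                transaction.commit()
        transaction.commit()  # commit the final partial batch

Inside RestrictedPython the equivalent would be periodic savepoints (transaction.savepoint(optimistic=True)) once the method is whitelisted, since scripts can't normally commit on their own.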
After moving most of our BizTalk applications from a BizTalk 2009 environment to BizTalk 2010, we began removing old applications and unused hosts. In this process we ended up with a zombie host instance.
This resulted in the bts_CleanupDeadProcesses job starting to fail with the error "Executed as user: RH\sqladmin. Could not find stored procedure 'dbo.int_ProcessCleanup_ProcessLabusHost'. [SQLSTATE 42000] (Error 2812). The step failed."
After looking at the CleanupDeadProcesses process, I found the zombie host instance in the BTMsgBox.ProcessHeartBeats table, with dtNextHeartbeatTime set to the time the host was removed.
(I'm assuming that the host instance processes no longer exist among your services, and that it's the SQL Agent job that fails.)
From looking at the source of the [dbo].[bts_CleanupDeadProcesses] job, it loops through the dbo.ProcessHeartbeats table with a cursor (btsProcessCurse, lol) looking for 'dead' heartbeats.
Each host process has its own cleanup sproc, int_ProcessCleanup_[HostName], and a sproc for the heartbeat watchdog to call, viz. bts_ProcessHeartbeat_[HostName] (although FWIW the sproc calls it @ApplicationName), filtered by WHERE (s.dtNextHeartbeatTime < @dtCurrentTime).
It is thus tempting to just delete the record for your deleted/zombie host (or, if you aren't feeling that brave, to simply update dtNextHeartbeatTime on the heartbeat record for the dead host instance to sometime next century). Either way, the SQL Agent job should then skip the dead instance. A sketch of both options follows.
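For illustration, a hedged and untested sketch of both options (the MsgBox database is usually deployed as BizTalkMsgBoxDb, the host name 'ProcessLabusHost' comes from the error message above, and the name column is a guess; inspect ProcessHeartbeats and back up the MsgBox before touching anything):

    -- Option 1: delete the orphaned heartbeat row for the zombie host.
    DELETE FROM BizTalkMsgBoxDb.dbo.ProcessHeartbeats
    WHERE applicationName = 'ProcessLabusHost';  -- hypothetical column name

    -- Option 2: push the next heartbeat a century out so the
    -- bts_CleanupDeadProcesses cursor never selects this row.
    UPDATE BizTalkMsgBoxDb.dbo.ProcessHeartbeats
    SET dtNextHeartbeatTime = '2101-01-01'
    WHERE applicationName = 'ProcessLabusHost';  -- hypothetical column name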
An alternative could be to re-create the host and its instances with the same names through the Admin Console, just so you can delete them (properly) again. This might, however, cause additional problems, as BizTalk won't be able to create the two sprocs above because of the undeleted objects.
Obviously, though, I wouldn't do this on your prod environment until you've confirmed it works in a trial run first.
It looks like someone else got stuck with a similar situation here
And there is also a good dive into the details of how the heartbeat mechanism works by XiaoDong Zhu here
Have you tried BTSTerminator? That works for one-off cleanups.
http://www.microsoft.com/en-us/download/details.aspx?id=2846