ORPOS transaction queue is getting stuck

ORPOS transaction queue is getting stuck because "the customer name that is read from the magnetic stripe of the card comes in as 50+ characters that look like an encryption code".

The details you provide point to a fault with the card and/or the validation system. But regarding the transaction queue itself, make sure the connection is robust, because having the BO database or the store server offline leaves transactions queued up at the POS.

Related

How to write a flow where the Oracle service signs transactions in parallel with Peers in Corda

From the docs, I understand that there is a way to allow for parallel signing between peers and Oracle but don't see how this works functionally in the given flow on the Corda docs:
The creator of the transaction that depends on the interest rate asks for the current rate. They can abort at this point if they want to.
They insert a command with that rate and the time it was obtained into the transaction.
They then send it to the oracle for signing, along with everyone else, potentially in parallel. The oracle checks that the command has the correct data for the asserted time, and signs if so.
Wasn't the command provided by the Oracle in the first place, so that the Oracle effectively signs the transaction in step one? I understand that the aim here is to avoid a first sign from the Oracle, but how does the above flow facilitate this? How can the rate be obtained if not via the Oracle, and wouldn't the Oracle have to sign on that first provision? Is this a case where the Oracle attests twice: once when providing the fact to the requestor, and again when the requestor re-inserts the fact as a command to be validated by both the Oracle and the remaining peers?
As of Corda 3, there is no way to request signatures in parallel. This feature will likely be added in a future release. For now, you have to request the signatures in a specific order.
The oracle does not sign the command it provides. Instead:
The creator of the transaction receives the command from the oracle and includes it in the transaction
Once the transaction is fully built, the creator of the transaction sends the transaction back to the oracle
The oracle decides whether to sign:
If the data in the command is correct, the oracle should sign the entire transaction
If the data in the command is incorrect, the oracle should refuse to sign
This approach prevents signed oracle data from being reused across transactions. Since each transaction has a unique hash, a signature needs to be requested for each individual use of the oracle data, allowing the oracle to charge per-use and have a viable business model.
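To make the two-step interaction concrete, here is a minimal Python model of the pattern (the real Corda API is Kotlin/Java and works on filtered transactions; everything here, from the key to the rate value, is illustrative): the oracle hands out the fact unsigned, and only signs a hash of the finished transaction after re-checking the embedded command.

```python
import hashlib
import hmac

# Stand-in for the oracle's signing key (illustrative only).
ORACLE_KEY = b"oracle-secret"

def oracle_query(time):
    # Step 1: the oracle provides the fact, unsigned.
    return {"rate": "1.5%", "time": time}

def oracle_sign(tx_bytes, command):
    # Step 2: the oracle signs only after seeing the finished transaction,
    # and only if the embedded command matches its own data.
    if command != oracle_query(command["time"]):
        raise ValueError("command data is incorrect; refusing to sign")
    tx_hash = hashlib.sha256(tx_bytes).digest()
    # Because the signature covers the transaction hash, it cannot be
    # replayed on any other transaction.
    return hmac.new(ORACLE_KEY, tx_hash, hashlib.sha256).hexdigest()

# The creator fetches the unsigned fact, builds the transaction around it...
command = oracle_query("2018-01-01T12:00")
tx = ("pay floating leg at %(rate)s as of %(time)s" % command).encode()
# ...and only then requests the oracle's signature (alongside the peers').
print(oracle_sign(tx, command))
```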

How to make BizTalk only take one message at a time from the MSMQ

I have a BizTalk orchestration that is picking up messages from an MSMQ. It processes the message and sends it on to another system.
The thing is, whenever a message is put on the queue, BizTalk dequeues it immediately even if it is still processing the previous message. This is a real pain because if I restart the orchestration then all the unprocessed messages get deleted.
Is there any way to make BizTalk only take one message at a time, so that it completely finishes processing the message before taking the next one?
Sorry if this is an obvious question, I have inherited a BizTalk system and can't find the answer online.
There are three properties of the BizTalk MSMQ adapter you could try to play around with:
batchSize
Specifies the number of messages that the adapter will take off the queue at a time. The default value is 20.
This may or may not help you. Even when set to 1, I suspect BTS will still try to consume the remaining "single" messages concurrently, as it always attempts parallel processing, but I may be wrong about that.
serialProcessing
Specifies messages are dequeued in the order they were enqueued. The default is false.
This is more likely to help, because guaranteeing ordered processing fundamentally limits you to single-threaded processing. However, I'm not sure whether this will be enough on its own, or whether it will only govern the order in which messages are delivered to the message box database. You may need to enable ordered delivery throughout the BTS application too, which can only be done at design time (i.e. it requires code changes).
transactional
Specifies that messages will be sent to the message box database as part of a DTC transaction. The default is false.
This will likely help with your other problem, where messages are "getting lost". If the queue is non-transactional and, moreover, not enlisted in a larger transaction scope that reaches down to the message box DB, messages that are dequeued but never processed will be lost. By making the whole process atomic, any messages not committed to the message box will be rolled back onto the queue.
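BizTalk and MSMQ arrange this atomicity for you through DTC when transactional is enabled; purely to illustrate the pattern, here is a Python sketch with a hypothetical queue class in which a message only leaves the queue when the transaction commits:

```python
import collections

class TransactionalQueue:
    # Hypothetical stand-in for a transactional MSMQ queue.
    def __init__(self, items=()):
        self._items = collections.deque(items)

    def receive(self):
        # Peek-style receive: the message is removed only on commit.
        return self._items[0]

    def commit(self):
        self._items.popleft()

    def abort(self):
        pass  # the message was never removed, so there is nothing to undo

def process(message):
    print("processing", message)  # placeholder for the real work

q = TransactionalQueue(["order-1", "order-2"])
msg = q.receive()
try:
    process(msg)
    q.commit()   # the message leaves the queue only now
except Exception:
    q.abort()    # the message stays on the queue for a retry
```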
Sources:
https://msdn.microsoft.com/en-us/library/aa578644.aspx
While you can process the messages in order by using Ordered Delivery, there is no way to serialize them in the way you're asking.
However, merely stopping the Orchestration should not delete anything, much less 'all the unprocessed messages'. That seems to be your problem.
You should be able to stop processing without losing anything.
If the Orchestration is going into a Suspended state, then all you need to do is Resume that one orchestration and any messages queued will remain and be processed. This would be the default behavior even if the app was created 'correctly' by accident ;).
When you Stop the Application, you're actually Terminating the existing Orchestration and everything associated with it, including any queued messages.
Here's your potential problem: if the original developer didn't properly handle the Port error, the Orchestration might get stuck in an un-finishable Loop. That would require a (very minor) mod to the Orchestration itself.

Using SQS or DynamoDB to control order status

I am building a system that processes orders. Each order follows a workflow, so an order can be, e.g., booked, accepted, payment approved, cancelled, and so on.
Every time the status of an order changes, I will post the change to SNS. To know whether an order's status has changed, I need to make a request to an external API and compare the result to the last known status.
The question is: What is the best place to store the last known order status?
1. An SQS queue. Every time I read a message from the queue, I check the status using the external API, delete the message, and insert another one with the new status.
2. Use a database (like Dynamo DB) to control the order status.
You should not use the word "store" for stateful facts kept in a queue. Stateful, factual information should be stored -- persisted -- to a database.
The queue messages should be treated as "hints" on what work needs to be done -- a request to consider the reasonableness of a proposed action, and if reasonable, perform the action.
What I mean by this, is that when a queue consumer sees a message to create an order, it should check the database and create the order if not already present. Update an order? Check the database to see whether the order is in a correct status for the update to occur. (Canceling an order that has already shipped would be an example of a mismatched state).
Queues, by design, can't be as precise and atomic in their operation as a database can be. The Two Generals Problem is one of several scenarios that become an issue when dealing with queues (and indeed when designing a queue system) -- messages can be lost or delivered more than once.
What happens in a "queue is authoritative" scenario when a message is delivered (received from the queue) more than once? What happens if a message is lost? There's nothing wrong with using a queue, but I respectfully suggest that in this scenario the queue should not be treated as authoritative.
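As a sketch of the database-as-authoritative approach (the table name, workflow map, and boto3 usage below are illustrative assumptions, not a prescribed design), a DynamoDB conditional write makes the "check the status, then update" step atomic, so a duplicate or out-of-order queue message is simply rejected rather than corrupting state:

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical table: orders(order_id, status). The map encodes which
# transitions the workflow allows.
ALLOWED = {"booked": "accepted", "accepted": "payment approved"}

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")

def apply_transition(order_id, new_status):
    # Find which prior status is allowed to move to new_status.
    expected = [old for old, new in ALLOWED.items() if new == new_status]
    if not expected:
        return False
    try:
        table.update_item(
            Key={"order_id": order_id},
            UpdateExpression="SET #s = :new",
            ConditionExpression="#s = :old",  # check-and-update, atomically
            ExpressionAttributeNames={"#s": "status"},
            ExpressionAttributeValues={":new": new_status, ":old": expected[0]},
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # duplicate or out-of-order message: safely ignored
        raise
```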
I would go with the database option instead of SQS:
1) SQS option:
You have one application that changes the status
It adds the status value to SQS
Another application then checks the messages, sends the notification, and deletes the message
2) DynamoDB option:
Insert the updated status into DynamoDB
Configure a Lambda function to fire on updates of that field
The Lambda function sends the notification
The database option looks cleaner. Additionally, you don't have to worry about maintaining a queue, and you can only read one message from the queue at a time unless you implement parallel readers. With a database, you can update multiple rows, each update triggers the Lambda, and you don't have to worry about any of it.
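A sketch of that option, assuming a DynamoDB Stream is enabled on the orders table with the NEW_IMAGE view and the SNS topic ARN is a placeholder:

```python
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-status"  # hypothetical

def handler(event, context):
    # Invoked by the DynamoDB Stream on the orders table.
    for record in event["Records"]:
        if record["eventName"] != "MODIFY":
            continue  # only status *changes* are interesting here
        image = record["dynamodb"]["NewImage"]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=json.dumps({
                "order_id": image["order_id"]["S"],
                "status": image["status"]["S"],
            }),
        )
```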
Hope that helps

Oracle DB - Lock on cancellation of update query

If I have a very long-running UPDATE query that takes hours and I happen to cancel it in the middle of its run,
I got this message below:
"User requested cancel of current operation"
Will Oracle automatically roll back the transaction?
Will the DB locks still be held if I cancel the query? If so, how do I release them?
How can I check which UPDATE query is locking the database?
Thanks.
It depends.
Assuming that whatever client application you're using properly implemented a query timeout and that the error indicates that the timeout was exceeded, then Oracle will begin rolling back the transaction when the error is thrown. Once the transaction finishes rolling back, the locks will be released. Be aware, though, that it can easily take at least as long to roll back the query as it took to run. So it will likely be a while before the locks are released.
If, on the other hand, the client application hasn't implemented the cancellation properly, the client may not have notified Oracle to cancel the transaction so it will continue. Depending on the Oracle configuration and exactly what the client does, the database may detect some time later that the application was not responding and terminate the connection (going through the same rollback process discussed above). Or Oracle may end up continuing to process the query.
You can see what sessions are holding locks and which are waiting on locks by querying dba_waiters and dba_blockers.
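For example (the credentials and DSN below are placeholders, and you need SELECT access to the DBA views), the following joins dba_waiters to v$session to show who is blocking whom and what SQL the blocker is running:

```python
import oracledb  # the python-oracledb driver; any SQL client works the same

# Placeholder connection details.
conn = oracledb.connect(user="system", password="...", dsn="localhost/XEPDB1")

SQL = """
SELECT w.waiting_session,
       w.holding_session,
       w.lock_type,
       w.mode_held,
       s.sql_id
FROM   dba_waiters w
JOIN   v$session s ON s.sid = w.holding_session
"""
with conn.cursor() as cur:
    for row in cur.execute(SQL):
        print(row)

# To watch a rollback make progress, v$transaction.used_ublk shrinks as the
# undo is applied: SELECT used_ublk FROM v$transaction;
```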

Many small writes to SQLite

I have an application which runs all the time and receives messages (the rate varies from several per second to none for hours). Every message should be put into a SQLite database. What's the best way to do this?
Opening and closing the database on each message doesn't sound good: if there are tens of them per second, it will be extremely slow.
On the other hand, opening the database once and just writing to it can lead to loss of data if the process unexpectedly terminates.
It sounds like whatever you do, you'll have to make a trade-off.
If safety is your top-most concern, then update the database on each message and take the speed hit.
If you want a compromise, then write to the database every so many messages. For instance, maintain a buffer and, on every 100th message, issue an update wrapped in a transaction.
The transaction wrapping is important for two reasons. First, it maximizes speed. Second, it can help you recover from errors if you employ logging.
If you do the batch update above, you can add an additional level of safety by logging each message to a file as it comes in. You reset this log every time a database update is successfully issued. That way, if an update fails, you know it failed for the entire block (since you are using transactions), and your log will contain the information that did not get written. This allows you to re-issue the update, or even to see whether there was a problem with the data that caused the failure. This of course assumes that keeping a log is cheaper than updating the database, which can be the case depending on how you are connecting.
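Here is a sketch of that batch-plus-log compromise (the file names and batch size are illustrative): messages are appended to a plain-text log immediately and flushed to SQLite in one transaction every BATCH_SIZE messages.

```python
import sqlite3

BATCH_SIZE = 100  # illustrative; tune to your message rate

conn = sqlite3.connect("messages.db")
conn.execute("CREATE TABLE IF NOT EXISTS messages (body TEXT)")
buffer = []

def on_message(body, log):
    log.write(body + "\n")  # cheap append: this is the recovery record
    log.flush()
    buffer.append(body)
    if len(buffer) >= BATCH_SIZE:
        flush(log)

def flush(log):
    with conn:  # one transaction for the whole batch
        conn.executemany("INSERT INTO messages VALUES (?)",
                         [(b,) for b in buffer])
    buffer.clear()
    log.seek(0)
    log.truncate()  # the batch is durable; reset the log

# Usage: log = open("messages.log", "w+"), then call on_message(msg, log)
# for each incoming message; replay the log after a crash.
```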
If your top rate is "several per second" then I don't see a real problem with opening and closing the db. This is especially true if it's critical that the data be recorded right away in case of server failure.
We use SQLite in a reporting product, and the best performance we have been able to eke out is recording rows in blocks of several thousand at a time. Our default setting is around 50k: our app waits until 50k rows of data have been collected, then commits them as one transaction.
There is an easy algorithm to adjust your application's behaviour to the message rate:
When you have just written a message, check if there is any new message.
If yes, write that message too, and repeat.
Only when you have run out of immediately available messages, commit the transaction and close the database.
In that manner, every message will be saved immediately, unless the message rate becomes too high for that.
Note: closing the database will not increase data durability (that's what transaction commit is for), it will just free up a little bit of memory.
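A sketch of that algorithm (the queue.Queue here is a stand-in for however your application actually receives messages): commit, the expensive fsync, happens once per burst rather than once per message, so the behaviour adapts to the rate by itself.

```python
import queue
import sqlite3

messages = queue.Queue()  # hypothetical source of incoming messages

def writer_loop():
    # isolation_level=None puts the connection in autocommit mode so we
    # can manage BEGIN/COMMIT ourselves.
    conn = sqlite3.connect("messages.db", isolation_level=None)
    conn.execute("CREATE TABLE IF NOT EXISTS messages (body TEXT)")
    while True:
        body = messages.get()                 # block until a message arrives
        conn.execute("BEGIN")
        while True:
            conn.execute("INSERT INTO messages VALUES (?)", (body,))
            try:
                body = messages.get_nowait()  # another message already waiting?
            except queue.Empty:
                break                         # burst drained
        conn.execute("COMMIT")                # everything so far is durable now
```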
