CyberSource rejects transactions with different values of purchaseTotals_grandTotalAmount - payment-processing

When performing a transaction through CyberSource, some amounts are accepted while others are rejected. For example, if I send the following amount, CyberSource rejects the transaction with error code 203.
request.put("purchaseTotals_grandTotalAmount", "1500");
whereas if I change the amount to either of the following, CyberSource accepts the transaction:
request.put("purchaseTotals_grandTotalAmount", "7676");
request.put("purchaseTotals_grandTotalAmount", "324");
I don't know the reason behind this. Is there some limitation or criterion on the amount?

To simulate various scenarios, CyberSource's test environment returns error codes based on the dollar amounts you pass to the service.
Depending on which simulator your service is tied to, you will get different responses.
https://www.cybersource.com/developers/test_and_manage/testing/simple_order_api/fdiglobal/soapi_fdiglobal_errors.html
See the responses listed at the link above if the Nashville Global simulator is configured as part of your app's CyberSource account.
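If you want to poke at this from code, here is a minimal Java sketch assuming the classic Simple Order API client (com.cybersource.ws.client.Client.runTransaction); the merchant reference is a placeholder and the credentials/endpoint would come from your configuration (e.g. a cybs.properties file):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import com.cybersource.ws.client.Client;

public class AmountTest {
    public static void main(String[] args) throws Exception {
        Map<String, String> request = new HashMap<>();
        request.put("ccAuthService_run", "true");
        request.put("merchantReferenceCode", "amount-test-01"); // placeholder
        request.put("purchaseTotals_currency", "USD");
        // Swap this value to see the simulator's canned responses: for the
        // poster, "1500" is rejected with reason code 203 while "324" and
        // "7676" are accepted. The full amount-to-error mapping is in the
        // simulator docs linked above.
        request.put("purchaseTotals_grandTotalAmount", "1500");

        Properties props = new Properties(); // merchant id, keys, test URL, etc.
        Map reply = Client.runTransaction(request, props);
        System.out.println(reply.get("decision") + " / reasonCode=" + reply.get("reasonCode"));
    }
}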

Related

How to set up webhooks to send a message to Slack when Firebase Functions crash?

I need to actively receive crash notifications for Firebase Functions.
Is there any way to set up Slack webhooks to receive a message when Firebase Functions throw an Error, functions crash, or something like that?
I would love to receive issue messages by velocity, i.e. Firebase Functions crashed 50 times a day.
Thank you so much.
First you have to create a log-based (counter) metric that counts specific error occurrences, and second, you create an alerting policy with a Slack notification channel.
Let's start by finding the logs that appear when the function throws an error. Since I didn't have a function that would crash, I used logs indicating that a function had started.
Next you have to create a log-based metric. Ignore the next screen and go to Monitoring > Alerting. Click on "Create new policy", find your metric, and set the "Rolling window" to whatever time period you need. For testing I used 1 minute. Then set the "Rolling window function" to "mean".
Now configure when the alert has to be triggered; I chose a threshold of over 3 (within the 1-minute window).
On the next screen you select the notification channel. In the case of Slack, it has to be configured first under "Notification Channels".
You can save the policy now.
After a few minutes I had gathered enough data to generate two incidents.
And here is some alerting-related documentation that may help you understand how to use these features.
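If you prefer to script the metric-creation step rather than clicking through the console, a rough sketch with the Cloud Logging Java client could look like the following; the metric name and the filter are assumptions you would adapt to whatever your function actually logs on error:

import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Metric;
import com.google.cloud.logging.MetricInfo;

public class CreateErrorMetric {
    public static void main(String[] args) throws Exception {
        // Hypothetical filter: error-level entries from one Cloud Function.
        String filter = "resource.type=\"cloud_function\""
                + " AND resource.labels.function_name=\"myFunction\""
                + " AND severity>=ERROR";
        try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
            MetricInfo metricInfo = MetricInfo.newBuilder("function-crashes", filter)
                    .setDescription("Counts error logs from myFunction")
                    .build();
            Metric metric = logging.create(metricInfo);
            System.out.println("Created metric: " + metric.getName());
        }
    }
}

The alerting policy and the Slack notification channel still have to be wired up as described above.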

How many clients are connected to my Firestore?

I am working on a Flutter app that fetches 341 documents from Firestore. After 2 days of analysis I found that my read requests are increasing too much. So I made a chart in the Stackdriver Metrics Explorer, from which I learned that my app reads the 341 docs only once; it's the Firebase console that is increasing my reads.
Now, here are the questions that are bothering me:
1) How are reads counted when we view data in the console, and how can I reduce my read requests? Basically there are 341 docs, but it shows more than 600 reads whenever I refresh my console.
2) As you can see in the picture, there are two types of document reads, 'LOOKUP' and 'QUERY'; what's the exact difference between them?
3) I am getting data from Firestore with a single instance, and when I open my app the chart shows 1 active client, which is fine, but in the next 5 minutes the number of active clients starts to increase.
Can anybody please explain to me why this is happening?
For the last question, I tried disabling all the service accounts and then opened my app again, but got the same result.
Firestore.instance.collection("Lectures").snapshots(includeMetadataChanges: true).listen((d) {
  print(d.metadata.isFromCache); // prints false every time
  print(d.documents.length); // 341
  print(d.documentChanges.length); // 341
});
This is the snippet I am using. It runs only once, when the app starts.
I will try to answer your questions:
How reads are considered when we see data on the console and how can I
reduce my read requests? Basically there are 341 docs but it is
showing more than 600 reads whenever I refresh my console.
Reads are counted based on how you query your Firestore database, in addition to your access to the database from the console, so using the Firebase console will incur reads. Even if you leave the console open while doing other things, any new changes to the database will automatically incur reads as well. Any document read from the server is billed; it doesn't matter where the read came from, and the console is included in that.
Check this official documentation; under the "Manage data" title you can see there is a note: "Note: Read, write, and delete operations performed in the console count towards your Cloud Firestore usage."
That said, if you think there is an issue with this, you can contact Firebase support directly to get more detailed answers.
However, if you check the free plan of Firebase, you can see that you have 50K free reads per day.
A workaround that I found for this (thanks to Dependar Sethi):
1. Bookmarking the Usage tab of the Firestore page (so you basically skip the Data tab).
2. Adding a dummy collection in a certain way that ensures it is the first collection (alphabetically), which gets loaded by default on the Firestore page.
You can find his full solution here.
Also, you can optimise your queries to retrieve only the data you want, using the where() method and pagination with Firebase; a sketch follows below.
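As a rough illustration of that last point, here is a hypothetical sketch using the server-side Firestore Java client (the app in the question is Flutter/Dart, but the same where()/limit()/cursor ideas apply there); the "published" field and the page size are invented for the example:

import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.Query;
import com.google.cloud.firestore.QueryDocumentSnapshot;
import java.util.List;

public class PagedRead {
    public static void readPages(Firestore db) throws Exception {
        // Filter to only the documents you need, 50 at a time, instead of
        // reading the whole collection in one go.
        Query base = db.collection("Lectures")
                .whereEqualTo("published", true)
                .orderBy("title")
                .limit(50);

        List<QueryDocumentSnapshot> docs = base.get().get().getDocuments();
        while (!docs.isEmpty()) {
            for (QueryDocumentSnapshot doc : docs) {
                System.out.println(doc.getId());
            }
            // Cursor-based pagination: continue after the last doc of this page.
            docs = base.startAfter(docs.get(docs.size() - 1)).get().get().getDocuments();
        }
    }
}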
As you can see in the picture there are two types of document reads
'LOOKUP' and 'QUERY', what's the exact difference between them?
I guess there is no important difference between them, but "QUERY" is getting the actual data (when you call the data() method) while "LOOKUP" is getting a reference to that data (without calling the data() method).
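Another hedged way to see the distinction in code: these metric labels are often read as "LOOKUP" covering direct fetches by document reference and "QUERY" covering query executions. A hypothetical server-side Java sketch:

import com.google.api.core.ApiFuture;
import com.google.cloud.firestore.DocumentSnapshot;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.QuerySnapshot;

public class ReadTypes {
    public static void show(Firestore db) throws Exception {
        // Direct fetch by reference ("Lectures/lecture-42" is a made-up id):
        // generally understood to surface as a LOOKUP read.
        ApiFuture<DocumentSnapshot> lookup = db.document("Lectures/lecture-42").get();
        System.out.println(lookup.get().exists());

        // Running a query over the collection: generally understood to
        // surface as QUERY reads, one per document returned.
        ApiFuture<QuerySnapshot> query = db.collection("Lectures").get();
        System.out.println(query.get().size());
    }
}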
I am getting data from the firestore with a single instance and when I
open my app the chart shows 1 active client which is cool but in the
next 5 minutes, the number of active clients starts to increase.
For this question, considering the metrics that you chose in Stackdriver, I can see 3 connected clients. As per the description of the "connected clients" metric:
The number of active connections. Each mobile client will have one connection. Each listener in admin SDK will be one connection. Sampled every 60 seconds. After sampling, data is not visible for up to 240 seconds.
So please check how many mobile clients are connected to this instance and how many listeners you have in your app. The sum of all of them is the actual number of connected clients that you are seeing in Stackdriver; the sketch below shows how listeners add up.
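To make the listener count concrete, here is a minimal sketch with the server-side Java client; each addSnapshotListener call registers one listener, i.e. one connection in the metric, and the document path is made up:

import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreOptions;

public class ListenerConnections {
    public static void main(String[] args) throws Exception {
        Firestore db = FirestoreOptions.getDefaultInstance().getService();

        // Listener #1: one connection.
        db.collection("Lectures").addSnapshotListener((snap, e) -> {
            if (e != null || snap == null) return;
            System.out.println("docs: " + snap.size());
        });

        // Listener #2: a second connection ("Lectures/lecture-42" is made up).
        db.document("Lectures/lecture-42").addSnapshotListener((snap, e) -> {
            if (e != null || snap == null) return;
            System.out.println("exists: " + snap.exists());
        });

        Thread.sleep(60_000); // keep the process alive to observe the metric
    }
}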

Flow sessions were not provided for the following transaction participants - Corda 4

I am getting this error in Corda 4: "Flow sessions were not provided for the following transaction participants". I don't want this participant to sign my transaction, so why am I getting this error?
I have two parties defined in my state class and I want only one of them to sign the transaction, so I didn't create a flow session for the other party.
Please help me resolve this.
In Corda 4, you are required to pass FinalityFlow a list of sessions that includes all of the transaction's participants, so that the transaction can be distributed accordingly.
Just because someone is in this list of participants, it does not make them a required signer. The required signers are determined by the public keys listed on the transaction's commands.
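As a rough Java sketch of what this looks like in an initiating flow (the state, contract, and command names here are invented for illustration):

import co.paralleluniverse.fibers.Suspendable;
import java.util.Collections;
import net.corda.core.flows.*;
import net.corda.core.identity.Party;
import net.corda.core.transactions.SignedTransaction;
import net.corda.core.transactions.TransactionBuilder;

@InitiatingFlow
@StartableByRPC
public class IssueFlow extends FlowLogic<SignedTransaction> {
    private final Party otherParty;

    public IssueFlow(Party otherParty) { this.otherParty = otherParty; }

    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        Party notary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);

        // otherParty is a participant on the (hypothetical) state, but only
        // our key is listed on the command, so they are NOT a required signer.
        MyState state = new MyState(getOurIdentity(), otherParty);
        TransactionBuilder builder = new TransactionBuilder(notary)
                .addOutputState(state, MyContract.ID)
                .addCommand(new MyContract.Commands.Create(),
                        getOurIdentity().getOwningKey());

        SignedTransaction signedTx = getServiceHub().signInitialTransaction(builder);

        // Corda 4: a session is still required for every participant, purely
        // so FinalityFlow can distribute the finalised transaction to them.
        // The counterparty's responder flow must call ReceiveFinalityFlow.
        FlowSession session = initiateFlow(otherParty);
        return subFlow(new FinalityFlow(signedTx, Collections.singletonList(session)));
    }
}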

Query limits for day reached

I have been testing an application I am developing using the webapi, and I have started to get the following error message:
GCSP: Hello error: [1010] The Gracenote ODP 15822 [Name: *registered-name*]
[App: *registered-app*] application has reached its daily lookup limit with
Gracenote. You may try again tomorrow or may contact Gracenote support at
support@gracenote.com.
[Gracenote Error: <ERR>]
The application I am developing looks up track details and cover artwork for songs being streamed from the Mood/Pandora for Business service. It makes approximately one call per song, so roughly 15 searches per hour on average. I may have done more during testing, but not a lot more.
Once completed, I would expect this service to make fewer than 500 searches per day per location, and for it initially to be used at 4 locations.
What are the lookup limits I am running into?
What are my options to get a higher lookup limit?
Thanks

BizTalk TPE continuations and uncompleted activities

Within my BizTalk 2010 solution I have the following orchestration, which is started by the receipt of a courier update message. It then makes a couple of calls to AX's WCF AIF via two solicit-response ports: a Find request and an Update request.
For this application we are meeting audit requirements through use of the tracking database to store the message body. We are able to link to this from references provided in BAM when we use the TPE. The result for the customer is nice: they get a web portal from which they can view BAM data of message timings etc., but they can also click a link to pull up a copy of the message payloads from the tracking db. Although this works well and makes use of out-of-the-box functionality for payload storage, it has led to relatively complex jobs for the archiving of the tracking db (but that's another story!).
My problem relates to continuation. I have created the following Tracking Profile:
I have associated the first of the orchestration's two solicit-response ports with the continuation RcvToOdx, based on the InterchangeId, and this works; I get the following single record written to the Completed activity table:
So, in this case we can assume that an entry was first written on receipt of the inbound message, with the TimeReceivedIntoBts column populated by the physical file receive port. The FindRequestToAx column was then populated by the physical WCF send port. Because this was bound to the RcvToOdx continuation Id and used the same InterchangeId as the previously mentioned file receive message, the update was made to the same activity. Notification of the resulting response was also updated to the same activity record, into the FindResponseFromAx column.
My problem is that I would also like BAM to record a timestamp for the subsequent UpdateRequestToAx. Because this request will have the same interchange Id as the previous messages I would expect to be able to solve this problem by simply binding the AxUpdate send port (both send and receive parts of it) to the same continuation id, as seen in the following screen grab:
I also map the UpdateRequestToAx milestone to the physical Ax_TrackAndTraceUpdate_SendPort (Send) and the OrchestrationCompleted milestone to Ax_TrackAndTraceUpdate_SendPort (Receive)
Unfortunately, when I try this I get the following result:
Two problems can be seen from the above db screen grab:
1. The date for the update send port was inserted into a new activity record.
2. The record was never completed.
I was surprised by this because I'd thought that, since the update port was enlisted to use the same continuation and the single InterchangeId was being used by all ports for the continuation Id, all the data milestones would be applied to a single activity.
In looking for a solution to this problem I came across the following post on Stack Overflow suggesting that the continuation must be closed from the BAM API: BAM Continuation issue with TPE. So, I tried this by calling the following method from an expression shape in my orchestration:
public static void EndBAMContinuation(string continuationId)
{
    // CARRIER_ORDER_ACTIVITY_NAME is a string constant (defined elsewhere)
    // holding the name of the BAM activity whose continuation is being closed.
    OrchestrationEventStream.EndActivity(CARRIER_ORDER_ACTIVITY_NAME, continuationId);
}
I can be sure the method was called ok because I wrapped the call with a log entry from the CAT framework which I could see in debug view:
I checked the RcvToOdx{867… continuation Id against the non-closed activity and confirmed they matched:
This would suggest that perhaps the request to end the continuation is being processed before the milestone of the received message from the UpdateAx call?
When I query the Relationships tables I get the following results:
Could anyone please advise why the UpdateToAx activity is not being completed?
I realise that I may be able to solve the problem using only the BAM API, but I really want to exhaust any possibility of the TPE being fit for purpose first, since the TPE is widely used in other BizTalk solutions in the organisation.
To solve this problem I created a second continuation in the TPE:
a "RcvToOdx" continuation for the Find, and an "OdxToUpdate" continuation for the Update. The source is the InterchangeId on the initial receive port, UPS_TrackAndTrace (the same as for the "RcvToOdx" continuation), and the destination Id is the InterchangeId mapped to the update send port.
