How to manually lock a PNR for testing purposes - sabre

Is there a way to manually lock a PNR in Sabre in order to test error functionality?
We currently have retry logic in our code, but creating a test situation for what to do when a PNR stays locked for longer than 3 retries has proven difficult. Any guidance you could provide would be greatly appreciated.

There is no way to 'lock' a GDS PNR as far as I know. Are you referring to what happens when simultaneous changes are detected? There are ways to put a PNR into that state by using multiple sessions, but your question is a little unclear.
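While the lock itself may be hard to produce on demand, the retry path can still be exercised with a stub client in tests. A minimal, language-agnostic sketch in Python (`SimultaneousChangesError` and the client API are hypothetical stand-ins for your Sabre wrapper):

```python
import time

class SimultaneousChangesError(Exception):
    """Stand-in for the GDS 'simultaneous changes' / busy-PNR error."""

MAX_RETRIES = 3

def update_pnr(client, locator, retries=MAX_RETRIES, delay=0.0):
    """Attempt the update, retrying while the PNR is busy; give up after `retries`."""
    for attempt in range(retries):
        try:
            return client.update(locator)
        except SimultaneousChangesError:
            time.sleep(delay)  # back off before the next attempt
    raise SimultaneousChangesError(
        f"PNR {locator} still locked after {retries} attempts")

class AlwaysLockedClient:
    """Test double that simulates a PNR locked for longer than all retries."""
    def update(self, locator):
        raise SimultaneousChangesError
```

Injecting `AlwaysLockedClient` drives the code down the "locked longer than 3 retries" branch without needing a real locked PNR.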

Related

Schedule a conditional email message with Akka.Net

I need to implement the following logic - I send a message to the user, and if he doesn't reply, I send it again after 12 hours.
I wonder what is the best way to do this? I was thinking about using Akka.NET - after a certain amount of time the actor would check if the user replied to my message and if not, would send it again.
Is there maybe an easier way? If not, I have some questions about Akka.NET:
Do you know any good sources where I can see how this library should be used in ASP.NET Core? The documentation is not clear enough for me.
Where to keep the actors and the logic associated with them? In a separate project? Where can I create an actorSystem?
I'm new to this topic, thank you in advance for all the answers.
In theory you could just use the standard actor system scheduler to send a message ordering an email resend after 12h, but this has a natural problem: if your process crashes, all of its in-memory state (including pending schedules) will be lost.
In practice you could use one of two existing plugins, which give you durable schedules:
Akka.Persistence.Reminders which works on top of Akka.Persistence, so you can use it on top of any akka.net persistence plugin.
Another way is to use Akka.Quartz.Actor which offers dedicated actors on top of Quartz.NET and makes use of Quartz's persistence capabilities.
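The in-memory approach described above can be sketched independently of any actor framework. This Python sketch mirrors the "check after a delay, resend if no reply" logic (all names are illustrative, not Akka.NET API); it also makes the weakness concrete, since the `Timer` dies with the process, which is exactly what the durable plugins above fix:

```python
import threading

class ReminderScheduler:
    """In-memory sketch of 'resend after N hours unless the user replied'.
    Pending reminders live only in this process; a crash loses them all."""

    def __init__(self):
        self.replied = set()

    def mark_replied(self, user_id):
        """Record that the user answered, cancelling any pending resend logic."""
        self.replied.add(user_id)

    def schedule_resend(self, user_id, delay_seconds, send):
        """After `delay_seconds`, call `send(user_id)` only if no reply arrived."""
        def check():
            if user_id not in self.replied:
                send(user_id)  # no reply yet: resend the message
        timer = threading.Timer(delay_seconds, check)
        timer.start()
        return timer
```

In production `delay_seconds` would be 12 hours; in tests you shrink it to milliseconds.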

design the flow of connecting my app to message broker

I want to write a bulk SMS app using a .NET Core Web API and RabbitMQ.
The end user wants to send a message to a huge number of cellphones.
I think it can be done using one of two flows.
Is this correct? Is there another, or better, solution?
I guess the green flow is better because the user waits less than with the red flow?
Maybe you will say both flows are totally wrong and I need another solution. Can anyone help me?
You need to break your problem down before committing to your selected technology.
Question: How much traffic are you required to terminate per second?
Question: How much traffic can the SMS API (SMS provider) handle per second?
Question: How much traceability do you need per message? Is a log enough, or do you need permanent storage?
I believe that after answering these questions you will have a rough idea of the design of the application.
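The first two questions translate directly into queue sizing: the broker absorbs the gap between the rate users submit at and the rate the provider accepts. A small back-of-the-envelope sketch in Python (the numbers in the usage are made up for illustration):

```python
def backlog_after(incoming_per_sec, provider_per_sec, burst_seconds):
    """Messages left sitting in the queue after a burst, when submissions
    outpace what the SMS provider will accept."""
    return max(0, (incoming_per_sec - provider_per_sec) * burst_seconds)

def drain_seconds(backlog, incoming_per_sec, provider_per_sec):
    """How long the queue needs to empty once traffic drops back down."""
    spare = provider_per_sec - incoming_per_sec
    if spare <= 0:
        return float("inf")  # the queue never drains; buy more provider capacity
    return backlog / spare
```

For example, a 60-second burst of 500 msg/s against a 200 msg/s provider leaves `backlog_after(500, 200, 60)` = 18,000 messages queued; if traffic then falls to 50 msg/s, `drain_seconds(18000, 50, 200)` says the queue clears in 120 seconds.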

Performance tip: x% of this request was spent in waiting

When reviewing Application Insights for slow API requests I noticed a message stating: "98.49% of this request was spent in waiting.". I'm finding next to no explanation about this online.
What does this mean? What is it waiting for?
How can I fix it?
Application Insights collects performance details for the different operations in your application. By identifying those operations with the longest duration, you can diagnose potential problems or best target your ongoing development to improve the overall performance of the application.
The Performance Tip at the top of the screen supports the assessment that the excessive duration is due to waiting. Click the waiting link for documentation on interpreting the different types of events.
These are all indications of slow server operations.
You can read more about this here. Also, look for the event that is causing the waiting time and address it accordingly.
Let me know if you need any help fixing the performance issue.

App Insights web tests do not take think times into account

I am executing web tests in App Insights as availability tests.
The problem is that those web tests contain requests with certain think times.
For the tests I am doing, the think time is crucial.
It seems that Application Insights does not take the think time values into account, and I don't see any way to pause between the request calls within a web test.
Is there any way to make think times work in App Insights? Is it foreseen to solve this issue soon? Is there any recommendation or workaround?
This question was answered here on MSDN.
The answer provided there:
"At this point - we do not have plans on supporting arbitrary think times. We ourselves, and some customers, work around this by calling a controller that can take a parameter on the duration it waits before responding, from the web test. Hope this helps."

Concurrent transactional calls resulting in duplicate responses in .net web service

Hi all, I have a question about concurrent stored procedure calls from a .NET web service written on the 3.5 framework.
I am building a web service that is used by many users and needs to return contact information from an Oracle database.
The issue is that when more than one user clicks at the same time, the database returns the same contact info to both. I have written the status update query in the stored procedure, and I am having this issue only when two or more requests read the same record before the status update happens. I have used a transaction and TransactionScope, but that doesn't solve the issue. Can anyone tell me if I am tackling the problem right, or should I be looking at some other way? Any help is greatly appreciated.
Sounds like your stored procedure code is what we term in the trade 'dodgy'.
Generally it should be a single statement along the lines of:
UPDATE table
SET status = 'READ'
WHERE ...
RETURNING col_1, col_2 INTO var1, var2;
RETURN;
It is probably doing a SELECT and then an UPDATE based on the ID, without checking whether the status has been changed by another transaction.
Your "read" operation on the database isn't really a read, since it updates the record in question. Sounds like what you need is for the first read-and-update operation to complete and commit before the second read-and-update operation begins.
I'm not a database/oracle guru, so I apologize if this answer is somewhat vague: is there a way to lock the table in question at the start of the read-and-update operation - perhaps an oracle transaction setting? If so, that should accomplish what you're looking for.
Note that you're essentially single-threading access to that table, which could potentially have performance implications, depending on the number of concurrent users and how often the table is accessed. You might want to consider alternate ways of accomplishing the requirement.
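The core fix in both answers is the same: the check-and-mark must be atomic, so no two requests can read a record between one request's check and its update. This in-process Python sketch shows the pattern (it is an analogue of Oracle's `SELECT ... FOR UPDATE SKIP LOCKED`, not a replacement for it; the class and statuses are illustrative):

```python
import threading

class ContactQueue:
    """Sketch of the claim pattern: check and mark under one lock, so two
    concurrent callers can never be handed the same contact."""

    def __init__(self, contacts):
        self._contacts = {c: "NEW" for c in contacts}
        self._lock = threading.Lock()

    def claim_next(self):
        """Atomically find an unclaimed contact, mark it READ, and return it."""
        with self._lock:  # nobody else can read between the check and the update
            for contact, status in self._contacts.items():
                if status == "NEW":
                    self._contacts[contact] = "READ"
                    return contact
            return None  # everything has been claimed
```

Because the status flip happens inside the same critical section as the check, a second caller arriving "at the same time" sees the record already marked and moves on to the next one, which is exactly the behaviour the original stored procedure needs.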
