Error while calling Sabre BargainFinderMax REST API — ERR.2SG.PROVIDER_TIMEOUT
I am getting occasional errors while calling the Sabre BargainFinderMax REST API with the following request:
{
"OTA_AirLowFareSearchRQ":{
"OriginDestinationInformation":[
{
"DepartureDateTime":"2018-09-22T00:00:00",
"DestinationLocation":{
"LocationCode":"DEL"
},
"OriginLocation":{
"LocationCode":"BOM"
},
"RPH":"0"
},
{
"DepartureDateTime":"2018-09-29T00:00:00",
"DestinationLocation":{
"LocationCode":"BOM"
},
"OriginLocation":{
"LocationCode":"DEL"
},
"RPH":"1"
}
],
"TravelPreferences":{
"ValidInterlineTicket":true,
"CabinPref":[
{
"Cabin":"Y",
"PreferLevel":"Preferred"
}
]
},
"POS":{
"Source":[
{
"PseudoCityCode":"J6UJ",
"RequestorID":{
"CompanyName":{
"Code":"TN"
},
"ID":"REQ.ID",
"Type":"0.AAA.X"
}
}
]
},
"TPA_Extensions":{
"IntelliSellTransaction":{
"RequestType":{
"Name":"50ITINS"
}
}
},
"TravelerInfoSummary":{
"AirTravelerAvail":[
{
"PassengerTypeQuantity":[
{
"Code":"ADT",
"Quantity":1
}
]
}
],
"PriceRequestInformation":{
"CurrencyCode":"USD"
}
}
}
}
Response with error code - ERR.2SG.PROVIDER_TIMEOUT
{
"status":"NotProcessed",
"type":"Transport",
"errorCode":"ERR.2SG.PROVIDER_TIMEOUT",
"timeStamp":"2018-09-08T11:09:38.819-05:00",
"message":"Connection error"
}
Response with error code - ERR.2SG.SEC.MISSING_CREDENTIALS
{
"status":"NotProcessed",
"type":"Validation",
"errorCode":"ERR.2SG.SEC.MISSING_CREDENTIALS",
"timeStamp":"2018-09-08T11:26:06.919-05:00",
"message":"Authentication data is missing"
}
Response with error code - WARN.RAF.APPLICATION
{
"status":"Complete",
"reportingSystem":"RAF",
"timeStamp":"2018-09-08T19:03:26+00:00",
"type":"Application",
"errorCode":"WARN.RAF.APPLICATION",
"instance":"raf-darhlc006.sabre.com-8080",
"message":"{\"OTA_AirLowFareSearchRS\":{\"PricedItinCount\":0,\"BrandedOneWayItinCount\":0,\"SimpleOneWayItinCount\":0,\"DepartedItinCount\":0,\"SoldOutItinCount\":0,\"AvailableItinCount\":0,\"Version\":\"4.2.0\",\"Errors\":{\"Error\":[{\"Type\":\"SCHEDULES\",\"ShortText\":\"DSF server returned an error: unknown BRD airport=BOM\",\"Code\":\"PROCESS\",\"content\":\"\"},{\"Type\":\"SCHEDULES\",\"ShortText\":\"DSF server returned an error: unknown OFF airport=BOM\",\"Code\":\"PROCESS\",\"content\":\"\"},{\"Type\":\"IF2\",\"ShortText\":\"No complete journey can be built in IF2/ADVJR1.\",\"Code\":\"PROCESS\",\"content\":\"\"},{\"Type\":\"WORKERTHREAD\",\"ShortText\":\"4220224579781953781\",\"Code\":\"TRANSACTIONID\",\"MessageClass\":\"I\",\"content\":\"\"},{\"Type\":\"SERVER\",\"ShortText\":\"27033\",\"Code\":\"ASECT2LAPC00015.IDM.SGDCPROD.SABRE.COM\",\"MessageClass\":\"I\",\"content\":\"\"},{\"Type\":\"DRE\",\"ShortText\":\"21728\",\"Code\":\"RULEID\",\"MessageClass\":\"I\",\"content\":\"\"},{\"Type\":\"DEFAULT\",\"ShortText\":\"25238\",\"Code\":\"RULEID\",\"MessageClass\":\"I\",\"content\":\"\"},{\"Type\":\"SCHEDULES\",\"ShortText\":\"NO FLIGHTS FOUND FOR BOM-DEL\",\"Code\":\"MSG\",\"MessageClass\":\"I\",\"content\":\"\"},{\"Type\":\"SCHEDULES\",\"ShortText\":\"NO FLIGHTS FOUND FOR DEL-BOM\",\"Code\":\"MSG\",\"MessageClass\":\"I\",\"content\":\"\"},{\"Type\":\"SCHEDULES\",\"ShortText\":\"NO FLIGHT SCHEDULES FOR QUALIFIERS USED\",\"Code\":\"MSG\",\"MessageClass\":\"I\",\"content\":\"\"},{\"Type\":\"ERR\",\"ShortText\":\"Error during Processing\",\"Code\":\"ERR\",\"content\":\"\"}]}},\"Links\":[{\"rel\":\"self\",\"href\":\"https://api-crt.cert.havail.sabre.com/v4.2.0/shop/flights?mode=live\"},{\"rel\":\"linkTemplate\",\"href\":\"https://api-crt.cert.havail.sabre.com//shop/flights?mode=&limit=&offset=&enabletagging=\"}]}"
}
Response with no error
{
"OTA_AirLowFareSearchRS":{
"PricedItinCount":50,
"BrandedOneWayItinCount":0,
"SimpleOneWayItinCount":0,
"DepartedItinCount":0,
"SoldOutItinCount":0,
"AvailableItinCount":0,
"Version":"4.2.0",
"Success":{
},
"Warnings":{...},
"PricedItineraries":{...},
"TPA_Extensions":{...}
},
"Links":[...]
}
The website is currently using the Sabre test environment for its REST API calls.
What are the possible reasons that the API sometimes returns error codes like the ones above?
Will moving to the production environment eliminate these kinds of errors?
Any help resolving the above issue is appreciated.
While the production environment produces significantly fewer errors, these errors can still occur there, and your application should handle them.
ERR.2SG.PROVIDER_TIMEOUT is a timeout response that can also happen in production, although it is much more common in the test environment.
ERR.2SG.SEC.MISSING_CREDENTIALS can be avoided: it means the request is missing session/authentication details.
WARN.RAF.APPLICATION means there is a validation error, i.e. something in the request is invalid; the details are in the embedded OTA_AirLowFareSearchRS error list.
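For the timeout case, a common mitigation is to retry a limited number of times with a backoff before surfacing the failure. Below is a minimal TypeScript sketch, assuming Node 18+ (or a browser) for fetch; the endpoint URL is taken from the Links section of the error response above, and getSessionToken is a hypothetical helper standing in for however you obtain your Sabre session token:

// Placeholder: replace with your real token retrieval (e.g., Sabre's auth endpoint).
async function getSessionToken(): Promise<string> {
  return process.env.SABRE_TOKEN ?? "";
}

async function searchWithRetry(payload: unknown, maxAttempts = 3): Promise<any> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch("https://api-crt.cert.havail.sabre.com/v4.2.0/shop/flights?mode=live", {
      method: "POST",
      headers: {
        // An expired or absent token is what produces ERR.2SG.SEC.MISSING_CREDENTIALS.
        Authorization: `Bearer ${await getSessionToken()}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(payload),
    });
    const body = await res.json();
    // Only the transport-level timeout is worth retrying; validation errors
    // such as WARN.RAF.APPLICATION need a corrected request instead.
    if (body.errorCode !== "ERR.2SG.PROVIDER_TIMEOUT") return body;
    await new Promise((resolve) => setTimeout(resolve, attempt * 1000)); // linear backoff
  }
  throw new Error("BargainFinderMax still timing out after retries");
}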
Related
Sabre Revalidate Itinerary with Ancillaries
I'm currently working on integrating the API to search, confirm the price, and book. We currently have a problem with the second step: I'm trying to get a revalidate response that also includes all ancillary and baggage information (hand and hold, with fees) so I can build the page that shows the reservation details. I've tried adding the following (to an otherwise successful request):

"TravelPreferences": {
    "AncillaryFees": {
        "Enable": true,
        "Summary": true
    },
    "TPA_Extensions": {
        "VerificationItinCallLogic": {
            "Value": "B"
        }
    }
},

but I'm getting the following error:

AIR EXTRAS SUMMARY REQUEST REQUIRES AT LEAST ONE GROUP CODE
Error during Processing

For the luggage, with this part:

"Baggage": {
    "CarryOnInfo": true,
    "Description": true
},

I get info about baggage but no prices. Any idea? Thank you!
Ingest pipeline is not working over logs obtained from an event hub with Filebeat
I am sending logs to an Azure Event Hub with Serilog (using WriteTo.AzureEventHub(eventHubClient)), and then running a Filebeat process with the azure module enabled, so these logs are sent to Elasticsearch and can be explored with Kibana. The problem I have is that all the information goes into the "message" field; I need to split the information from my logs into different fields to be able to run useful queries. The way I found was to create an ingest pipeline in Kibana and, through a grok processor, separate the fields inside "message" into multiple fields. In filebeat.yml I set the pipeline name, but nothing happens; it seems the pipeline is not being applied.

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "filebeat-otc"

Does anybody know what I am missing? Thanks in advance.

EDIT: I will add an example of my pipeline and my data. In the simulation it works properly:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIME:timestamp}\\s%{LOGLEVEL}\\s{[a-zA-Z]*:%{UUID:CorrelationID},[a-zA-Z]*:%{TEXT:OperationTittle},[a-zA-Z]*:%{TEXT:OriginSystemName},[a-zA-Z]*:%{TEXT:TargetSystemName},[a-zA-Z]*:%{TEXT:OperationProcess},[a-zA-Z]*:%{TEXT:LogMessage},[a-zA-Z]*:%{TEXT:ErrorMessage}}"
          ],
          "pattern_definitions": {
            "LOGLEVEL": "\\[[^\\]]*\\]",
            "TEXT": "[a-zA-Z0-9- ]*"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "15:13:59 [INF] {CorrelationId:83355884-a351-4c8b-af8d-b77c48462f36,OperationTittle:Operation1,OriginSystemName:Fexa,TargetSystemName:Usina,OperationProcess:Testing Log Data,LogMessage:Esto es una buena prueba,ErrorMessage:null}"
      }
    },
    {
      "_source": {
        "message": "20:13:48 [INF] {CorrelationId:8451ee54-efca-40be-91c8-8c8e18e33f58,OperationTittle:null,OriginSystemName:Fexa,TargetSystemName:Donna,OperationProcess:Testing Log Data,LogMessage:null,ErrorMessage:null}"
      }
    }
  ]
}
It seems that when you use a module, Filebeat creates and uses its own ingest pipeline in Elasticsearch, and the pipeline option in the output is ignored. So my solution was to modify index.final_pipeline. For this, in Kibana I went to Stack Management / Index Management, found my index, opened Edit Settings, and set "index.final_pipeline": "the-name-of-my-pipeline". I hope this helps somebody. This was thanks to leandrojmp.
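If you would rather apply this from the Kibana Dev Tools console instead of the UI, the same setting can be set with an index settings update, in the same console style as the simulate call above (the index name here is a placeholder for your own Filebeat index):

PUT /your-filebeat-index/_settings
{
  "index.final_pipeline": "filebeat-otc"
}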
AppSync BatchDeleteItem does not execute properly
I'm working on a React Native application with AppSync, and the following is my schema for the problem:

type JoineeDeletedConnection {
    items: [Joinee]
    nextToken: String
}

type Mutation {
    deleteJoinee(ids: [ID!]): [Joinee]
}

In the request mapping template of the resolver for deleteJoinee, I have the following (following the tutorial at https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html):

#set($ids = [])
#foreach($id in ${ctx.args.ids})
    #set($map = {})
    $util.qr($map.put("id", $util.dynamodb.toString($id)))
    $util.qr($ids.add($map))
#end
{
    "version" : "2018-05-29",
    "operation" : "BatchDeleteItem",
    "tables" : {
        "JoineesTable": $util.toJson($ids)
    }
}

and in the response mapping template of the resolver:

$util.toJson($ctx.result.data.JoineesTable)

The problem is that when I run the query, I get an empty result and nothing is deleted from the database either:

// calling the query
mutation DeleteJoinee {
    deleteJoinee(ids: ["xxxx", "xxxx"]) {
        id
    }
}

// returns
{
    "data": {
        "deleteJoinee": [
            null
        ]
    }
}
I was finally able to solve this puzzle, thanks to the answer mentioned here pointing me in the right direction. I noticed that JoineesTable does have a trusted entity/role in the IAM Roles section, yet it wasn't working for some reason. Looking into this more, I noticed that the existing policy had the following actions by default:

"Action": [
    "dynamodb:DeleteItem",
    "dynamodb:GetItem",
    "dynamodb:PutItem",
    "dynamodb:Query",
    "dynamodb:Scan",
    "dynamodb:UpdateItem"
]

Once I added the following two actions to the list, things started working:

"dynamodb:BatchWriteItem",
"dynamodb:BatchGetItem"

Thanks to Vasileios Lekakis and Ionut Trestian on this AppSync quest.
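For reference, the resulting policy statement looks roughly like the sketch below; the resource ARN is a placeholder, so adjust region, account ID, and table name to your own setup:

{
    "Effect": "Allow",
    "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:UpdateItem",
        "dynamodb:BatchWriteItem",
        "dynamodb:BatchGetItem"
    ],
    "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/JoineesTable"
    ]
}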
INVALID_ARGUMENT (400 error) when calling Stackdriver Error Reporting API
When trying to invoke the Stackdriver Error Reporting API (via the API Explorer or via the client-side JavaScript library), I receive the following error.

Request:

{
    "message": "test"
}

Response:

{
    "error": {
        "code": 400,
        "message": "Request contains an invalid argument.",
        "status": "INVALID_ARGUMENT"
    }
}

The Stackdriver Error Reporting API is enabled and I have Owner rights on the App Engine project. Is the API simply not functional? If I'm doing something wrong, can someone try to help?
The documentation for reporting events says that a ServiceContext is required. If you're only sending a message (not a stack trace / exception), you'll need to include a context with a reportLocation as well. This is noted in the documentation of the message field, but it's not obvious. The following works from the API Explorer:

{
    "context": {
        "reportLocation": {
            "functionName": "My Function"
        }
    },
    "message": "error message",
    "serviceContext": {
        "service": "My Microservice"
    }
}

You might be interested in the docs on How Errors Are Grouped too. FWIW, I work on this product, and I think the error message is too generic. The problem is (I believe) that the serving stack scrubs error messages unless they're annotated as being for public consumption. I'll chase that down.
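If you are calling the REST endpoint directly rather than using the API Explorer, the same payload goes to the projects.events:report method. A rough TypeScript sketch, where the project ID and API key are placeholders for your own values:

const projectId = "my-project"; // placeholder
const apiKey = process.env.GCP_API_KEY; // placeholder

async function reportError(message: string): Promise<void> {
  const res = await fetch(
    `https://clouderrorreporting.googleapis.com/v1beta1/projects/${projectId}/events:report?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        // Both a serviceContext and a reportLocation are needed when there is no stack trace.
        serviceContext: { service: "My Microservice" },
        context: { reportLocation: { functionName: "My Function" } },
        message,
      }),
    }
  );
  if (!res.ok) throw new Error(`Error Reporting call failed: ${res.status}`);
}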
How to reset an error when using Redux REQUEST, SUCCESS, FAILURE actions in an async situation
I am working on some async actions using Redux. In the Redux documentation on GitHub, one suggested approach is to define the following three actions:

{ type: 'FETCH_POSTS_REQUEST' }
{ type: 'FETCH_POSTS_FAILURE', error: 'Oops' }
{ type: 'FETCH_POSTS_SUCCESS', response: { ... } }

Reference: https://github.com/reactjs/redux/blob/master/docs/advanced/AsyncActions.md

In my project, after an error is received, I display a message for a few seconds and then make it disappear by resetting the error to null. The problem I found with the approach above is that it uses verbs like SUCCESS and FAILURE, which makes it weird if I use

{ type: 'FETCH_POSTS_FAILURE', error: '' }

to reset the error, because the naming suggests it should dispatch an error. Another way is to define a separate action:

{ type: 'FETCH_POSTS_ERROR_RESET' }

But with this I will have to introduce two actions, like

{ type: 'FETCH_POST_DATA_EMPTY' }
{ type: 'FETCH_POST_ERROR_RESET' }

Is there any better way to handle this? Is there any philosophy behind using verbs like SUCCESS and FAILURE instead of FETCH_POST_SET_DATA or FETCH_POST_SET_ERROR?
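For context, a reducer handling the three documented actions plus a dedicated reset action (one of the options considered above) might look like the TypeScript sketch below; the state shape and anything beyond the quoted action names are illustrative assumptions, not prescribed by the Redux docs:

interface PostsState {
  loading: boolean;
  error: string | null;
  posts: unknown[];
}

const initialState: PostsState = { loading: false, error: null, posts: [] };

type PostsAction =
  | { type: "FETCH_POSTS_REQUEST" }
  | { type: "FETCH_POSTS_SUCCESS"; response: { posts: unknown[] } }
  | { type: "FETCH_POSTS_FAILURE"; error: string }
  | { type: "FETCH_POSTS_ERROR_RESET" }; // the dedicated reset action

function postsReducer(state: PostsState = initialState, action: PostsAction): PostsState {
  switch (action.type) {
    case "FETCH_POSTS_REQUEST":
      // Starting a new request is also a natural place to clear a stale error.
      return { ...state, loading: true, error: null };
    case "FETCH_POSTS_SUCCESS":
      return { ...state, loading: false, posts: action.response.posts };
    case "FETCH_POSTS_FAILURE":
      return { ...state, loading: false, error: action.error };
    case "FETCH_POSTS_ERROR_RESET":
      return { ...state, error: null };
    default:
      return state;
  }
}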