mediaUpload/uploadedVideo returns 504 Gateway Timeout frequently - LinkedIn

When uploading videos to LinkedIn via the API we get very frequent 504 Gateway Timeout errors. We have added backoff and retry mechanisms, but a large percentage of uploads still fail because of this. If we manually retry the uploads they eventually succeed without us changing anything.
Has anyone else navigated this issue? Is there anything we can do with our uploads to reduce or prevent these errors? Some of our upload code is below:
data = self.session.post(
    url="/assets",
    params={"action": "registerUpload"},
    json={
        "registerUploadRequest": {
            "owner": owner,
            "recipes": ["urn:li:digitalmediaRecipe:feedshare-video"],
            "serviceRelationships": [
                {"identifier": "urn:li:userGeneratedContent", "relationshipType": "OWNER"}
            ],
            "supportedUploadMechanism": ["SYNCHRONOUS_UPLOAD"],
        }
    },
)
upload_url = data["value"]["uploadMechanism"][
    "com.linkedin.digitalmedia.uploading.MediaUploadHttpRequest"
]["uploadUrl"]
response = self.session.put(
    url=upload_url,
    headers={"Content-Type": "application/binary"},
    data=media.file.open("rb"),
    return_response=True,
)
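Our retry wrapper is roughly along these lines (a simplified sketch rather than our exact code; the attempt count and delays are illustrative):

import time

def put_with_backoff(session, upload_url, media, max_attempts=5):
    # Retry the binary PUT on 504 with exponential backoff (illustrative values).
    for attempt in range(max_attempts):
        with media.file.open("rb") as fh:
            response = session.put(
                url=upload_url,
                headers={"Content-Type": "application/binary"},
                data=fh,
                return_response=True,
            )
        if response.status_code != 504:
            return response
        time.sleep(2 ** (attempt + 1))  # 2s, 4s, 8s, ... between attempts
    raise RuntimeError("Upload still returned 504 after {} attempts".format(max_attempts))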

We've been experiencing the same issue, and what worked for us was removing the supportedUploadMechanism option from the registerUploadRequest.
So the updated JSON should look like the following for you:
"registerUploadRequest": {
"owner": owner,
"recipes": ["urn:li:digitalmediaRecipe:feedshare-video"],
"serviceRelationships": [
{"identifier": "urn:li:userGeneratedContent", "relationshipType": "OWNER"}
]
}
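In your Python snippet the register call would then become something like this (a sketch based on the code in the question, with supportedUploadMechanism simply omitted):

data = self.session.post(
    url="/assets",
    params={"action": "registerUpload"},
    json={
        "registerUploadRequest": {
            "owner": owner,
            "recipes": ["urn:li:digitalmediaRecipe:feedshare-video"],
            "serviceRelationships": [
                {"identifier": "urn:li:userGeneratedContent", "relationshipType": "OWNER"}
            ],
            # "supportedUploadMechanism" intentionally left out
        }
    },
)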

Related

Sending messages with attachments via LinkedIn v2 messages API fails

Sending any attachment via https://api.linkedin.com/v2/messages results in an empty message on the receiver's end. It was working fine during development until a few months ago.
I followed the steps as described in the doc here: Messages API.
1. Register upload
POST 'https://api.linkedin.com/v2/assets?action=registerUpload'
Headers: { 'x-li-format': 'json', 'X-Restli-Protocol-Version': '2.0.0', 'Authorization': '<AUTH TOKEN>', 'Content-Type': 'application/json' }
Body:
{
    "registerUploadRequest": {
        "recipes": [
            "urn:li:digitalmediaRecipe:messaging-attachment"
        ],
        "owner": "urn:li:person:tVBKuamGQA",
        "serviceRelationships": [
            {
                "identifier": "urn:li:userGeneratedContent",
                "relationshipType": "OWNER"
            }
        ],
        "supportedUploadMechanism": [
            "SYNCHRONOUS_UPLOAD"
        ]
    }
}
2. Media upload
PUT 'https://api.linkedin.com/mediaUpload/D5606AQEyWs_FuSOhpQ/messaging-attachmentFile/0?ca=vector_messaging&cn=uploads_encrypted&m=AQLwVYaN1VD0FAAAAYZWQ5iaB1P3EvIEqybGym69KocRancNKS12YAhD2A&app=109663&sync=1&v=beta&ut=39P3TAA2AwmGE1'
Headers: Same as above
Body: Image in bytes format
3. Send message with the Attachment URN
POST 'https://api.linkedin.com/v2/messages'
Body:
{
    "recipients": [
        "urn:li:person:eDPvPwsUVS"
    ],
    "subject": null,
    "body": "Body3",
    "messageType": "MEMBER_TO_MEMBER",
    "attachments": [
        "urn:li:digitalmediaAsset:D5606AQEyWs_FuSOhpQ"
    ]
}
4. Check the status of the upload
I was also able to verify that the upload was successful, with the status "AVAILABLE":
GET 'https://api.linkedin.com/v2/assets/D5606AQEyWs_FuSOhpQ'
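For reference, here is a rough Python requests sketch of the first three calls (the token, file path, and header handling are illustrative placeholders, not my real values):

import requests

API = "https://api.linkedin.com/v2"
HEADERS = {
    "Authorization": "Bearer <AUTH TOKEN>",   # placeholder
    "X-Restli-Protocol-Version": "2.0.0",
    "x-li-format": "json",
}

# 1. Register the upload for a messaging attachment
register = requests.post(
    f"{API}/assets?action=registerUpload",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={
        "registerUploadRequest": {
            "recipes": ["urn:li:digitalmediaRecipe:messaging-attachment"],
            "owner": "urn:li:person:tVBKuamGQA",
            "serviceRelationships": [
                {"identifier": "urn:li:userGeneratedContent", "relationshipType": "OWNER"}
            ],
            "supportedUploadMechanism": ["SYNCHRONOUS_UPLOAD"],
        }
    },
).json()

upload_url = register["value"]["uploadMechanism"][
    "com.linkedin.digitalmedia.uploading.MediaUploadHttpRequest"
]["uploadUrl"]
asset_urn = register["value"]["asset"]  # e.g. urn:li:digitalmediaAsset:...

# 2. Upload the raw image bytes to the returned URL
with open("attachment.png", "rb") as fh:  # placeholder file
    requests.put(upload_url, headers=HEADERS, data=fh)

# 3. Send the message referencing the asset URN
requests.post(
    f"{API}/messages",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={
        "recipients": ["urn:li:person:eDPvPwsUVS"],
        "subject": None,
        "body": "Body3",
        "messageType": "MEMBER_TO_MEMBER",
        "attachments": [asset_urn],
    },
)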
All API calls shared here were successful, which makes it even more challenging to figure out why the attachments fail. Sending only text works fine. Any insights or guidance on this issue would be greatly appreciated; it's currently blocking our major release.
I have gone through the Assets API and Images API documentation from LinkedIn, but it doesn't seem relevant for sending attachments via private message.
Assets API
Images API

Sabre Revalidate Itinerary with Ancillaries

I'm currently working on integrating the API so we can search, confirm the price, and book.
Currently we have a problem with the second step:
What I'm trying to get is a revalidate response that also includes all ancillary and baggage information (hand and hold, with fees), so I can build the page that shows the reservation details.
I've tried adding the following (to an otherwise successful request):
"TravelPreferences": {
"AncillaryFees": {
"Enable": true,
"Summary": true
},
"TPA_Extensions": {
"VerificationItinCallLogic": {
"Value": "B"
}
}
},
but I'm getting the following error:
AIR EXTRAS SUMMARY REQUEST REQUIRES AT LEAST ONE GROUP CODE
Error during Processing
For the luggage, with this part
"Baggage": {
"CarryOnInfo": true,
"Description": true
},
I'll get info about baggage but no prices.
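Going by the wording of the error, my guess is that the Air Extras summary request needs at least one ancillary group code next to Enable/Summary; something along these lines is what I would try next (the element names are a guess and need to be checked against the Sabre Revalidate Itinerary schema):

# Hedged guess at the shape; verify field names against the Sabre schema
travel_preferences = {
    "AncillaryFees": {
        "Enable": True,
        "Summary": True,
        # "BG" = baggage group; other group codes may exist
        "Groups": [
            {"Code": "BG"}
        ],
    },
    "TPA_Extensions": {
        "VerificationItinCallLogic": {"Value": "B"}
    },
}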
Any idea?
Thank you!

Ingest pipeline is not working on logs obtained from an Event Hub with Filebeat

I am sending logs to an Azure Event Hub with Serilog (using WriteTo.AzureEventHub(eventHubClient)). After that I run a Filebeat process with the azure module enabled, so these logs are sent to Elasticsearch and I can explore them with Kibana.
The problem I have is that all the information ends up in the "message" field; I need to split the information from my logs into separate fields to be able to run good queries.
The approach I found was to create an ingest pipeline in Kibana and, through a grok processor, split the contents of "message" into multiple fields. In filebeat.yml I set the pipeline name, but nothing happens; it seems the pipeline is not being applied.
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "filebeat-otc"
Does anybody know what I am missing? Thanks in advance.
EDIT: I will add an example of my pipeline and my data. In the simulation it works properly:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIME:timestamp}\\s%{LOGLEVEL}\\s{[a-zA-Z]*:%{UUID:CorrelationID},[a-zA-Z]*:%{TEXT:OperationTittle},[a-zA-Z]*:%{TEXT:OriginSystemName},[a-zA-Z]*:%{TEXT:TargetSystemName},[a-zA-Z]*:%{TEXT:OperationProcess},[a-zA-Z]*:%{TEXT:LogMessage},[a-zA-Z]*:%{TEXT:ErrorMessage}}"
          ],
          "pattern_definitions": {
            "LOGLEVEL": "\\[[^\\]]*\\]",
            "TEXT": "[a-zA-Z0-9- ]*"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "15:13:59 [INF] {CorrelationId:83355884-a351-4c8b-af8d-b77c48462f36,OperationTittle:Operation1,OriginSystemName:Fexa,TargetSystemName:Usina,OperationProcess:Testing Log Data,LogMessage:Esto es una buena prueba,ErrorMessage:null}"
      }
    },
    {
      "_source": {
        "message": "20:13:48 [INF] {CorrelationId:8451ee54-efca-40be-91c8-8c8e18e33f58,OperationTittle:null,OriginSystemName:Fexa,TargetSystemName:Donna,OperationProcess:Testing Log Data,LogMessage:null,ErrorMessage:null}"
      }
    }
  ]
}
It seems that when you use a module, Filebeat creates and uses its own ingest pipeline in Elasticsearch, and the pipeline option in the output is ignored.
So my solution was to modify index.final_pipeline. To do this, in Kibana I went to Stack Management / Index Management, found my index, opened Edit Settings, and set "index.final_pipeline": "the-name-of-my-pipeline".
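The same setting can also be applied through the Elasticsearch index settings API instead of the Kibana UI; a minimal sketch (the index and pipeline names are placeholders for your own):

import requests

# Placeholders: replace with your own index and pipeline names
index = "filebeat-7.10.0-2021.01.01"
pipeline = "the-name-of-my-pipeline"

resp = requests.put(
    f"http://localhost:9200/{index}/_settings",
    json={"index.final_pipeline": pipeline},
)
resp.raise_for_status()
print(resp.json())  # {"acknowledged": true} on success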
I hope this helps somebody.
This was thanks to leandrojmp

Google Measurement Protocol Timing hit type not shown in Real-time report

I'm in the process of adding support for the Google Measurement Protocol in my macOS desktop app. Doing pageviews and events works fine. However, I'm not sure my timing hit type does.
When I send
https://www.google-analytics.com/collect?v=1&t=timing&tid=UA-xxx-1&cid=58xxx&utc=archive&utv=bla&utt=4
I get the usual pixel back, and the Hit Builder also validates the hit fine. However, for pageviews and events I see something in the Real-Time report of Google Analytics; this doesn't happen for the timing hit type.
I've tried the debug URL and got back the following:
{
  "hitParsingResult": [ {
    "valid": true,
    "parserMessage": [ ],
    "hit": "/debug/collect?v=1\u0026t=timing\u0026tid=UA-xxx-1\u0026cid=58xxx\u0026utc=archive\u0026utv=bla\u0026utt=4"
  } ],
  "parserMessage": [ {
    "messageType": "INFO",
    "description": "Found 1 hit in the request."
  } ]
}
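For completeness, this is the same timing hit sent from Python (a sketch using the parameters from the URL above, with tid and cid redacted as before):

import requests

# Same user timing hit as the URL above; tid/cid redacted like in the question
params = {
    "v": "1",            # protocol version
    "t": "timing",       # hit type
    "tid": "UA-xxx-1",   # property ID (redacted)
    "cid": "58xxx",      # client ID (redacted)
    "utc": "archive",    # user timing category
    "utv": "bla",        # user timing variable
    "utt": "4",          # user timing time in milliseconds
}

r = requests.post("https://www.google-analytics.com/collect", data=params)
print(r.status_code)  # 200 with the usual 1x1 pixel body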
What am I doing wrong?

Python 3.6.5: Requests with streaming getting stuck in iter_content even if chunk_size is specified

I have been trying to use requests v2.19.1 in Python 3.6.5 to download a ~2 GB file from a remote URL. However, I keep running into an issue where the code seems to get stuck forever in the for loop while downloading the data.
My code snippet:
with requests.get(self.model_url, stream=True, headers=headers) as response:
    if response.status_code not in [200, 201]:
        raise Exception(
            'Error downloading model({}). Got response code {} with content {}'.format(
                self.model_id,
                response.status_code,
                response.content
            )
        )
    with open(self.download_path, 'wb') as f:
        for chunk in response.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)
Each time I try to run this code, the download seems to stop at different points, and rarely reaches completion.
I have tried playing around with different chunk sizes, but I still keep seeing this issue.
Some additional details:
python -m requests.help
{
  "chardet": {
    "version": "3.0.4"
  },
  "cryptography": {
    "version": "2.3.1"
  },
  "idna": {
    "version": "2.7"
  },
  "implementation": {
    "name": "CPython",
    "version": "3.6.5"
  },
  "platform": {
    "release": "3.10.0-693.11.1.el7.x86_64",
    "system": "Linux"
  },
  "pyOpenSSL": {
    "openssl_version": "1010009f",
    "version": "18.0.0"
  },
  "requests": {
    "version": "2.19.1"
  },
  "system_ssl": {
    "version": "100020bf"
  },
  "urllib3": {
    "version": "1.23"
  },
  "using_pyopenssl": true
}
Has anyone else faced a similar issue? If so, how did you resolve it?
It seems that if there is any interruption to the network during the download, the stream hangs and the connection goes dead. However, because no timeout is specified, the code keeps waiting for more packets to arrive over the dead connection. The best way I have found to handle this is to set a reasonable timeout. Once the timeout expires after the last received packet, the code exits the for loop with an exception, which can then be handled.
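Applied to the snippet from the question, that looks roughly like the following (the connect/read timeout values are illustrative; tune them to your network):

import requests

try:
    # timeout=(connect, read): the read timeout applies to each wait for new data,
    # so a dead connection raises an exception instead of hanging forever
    with requests.get(self.model_url, stream=True, headers=headers, timeout=(5, 30)) as response:
        if response.status_code not in [200, 201]:
            raise Exception(
                'Error downloading model({}). Got response code {} with content {}'.format(
                    self.model_id, response.status_code, response.content
                )
            )
        with open(self.download_path, 'wb') as f:
            for chunk in response.iter_content(chunk_size=1024):
                if chunk:
                    f.write(chunk)
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout,
        requests.exceptions.ChunkedEncodingError) as exc:
    # The download died mid-stream; retry or surface the error as appropriate
    raise Exception('Download of model({}) was interrupted: {}'.format(self.model_id, exc))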
