Following the LinkedIn API documentation, I'm trying to upload a video. Unfortunately, I get a 500 error with no details when I run the PUT request with the binary video file against the endpoint returned by the initialization request.
My video fits the video specifications.
Did I miss something?
I was in the same situation as you a few days ago.
The solution is:
If your file is larger than 4MB, you must split it into chunks.
In the initialize upload response you will get a list of uploadUrls, so use each link with the corresponding part of the file.
Thanks @ARGOUBI Sofien for your answer. I found my mistake: the fileSizeBytes value was wrong, so I was getting only one upload link. With the correct value, I get several endpoints.
So, I'm initializing the upload with this body:
{
  "initializeUploadRequest": {
    "owner": "urn:li:organization:MY_ID",
    "fileSizeBytes": 10903312,
    "uploadCaptions": false,
    "uploadThumbnail": true
  }
}
I got this response:
{
  "value": {
    "uploadUrlsExpireAt": 1657618793558,
    "video": "urn:li:video:C4E10AQEDRhUsYL99HQ",
    "uploadInstructions": [
      {
        "uploadUrl": "https://www.linkedin.com/dms-uploads/C4E10AQEDRhUsYL99HQ/uploadedVideo?sau=aHR0cHM6Ly93d3[...]t3yak1",
        "lastByte": 4194303,
        "firstByte": 0
      },
      {
        "uploadUrl": "https://www.linkedin.com/dms-uploads/C4E10AQEDRhUsYL99HQ/uploadedVideo?sau=aHR0cHM6Ly93d3cub[...]f13yak1",
        "lastByte": 8388607,
        "firstByte": 4194304
      },
      {
        "uploadUrl": "https://www.linkedin.com/dms-uploads/C4E10AQEDRhUsYL99HQ/uploadedVideo?sau=aHR0cHM6Ly93d3cubGlua2V[...]V3yak1",
        "lastByte": 10903311,
        "firstByte": 8388608
      }
    ],
    "uploadToken": "",
    "thumbnailUploadUrl": "https://www.linkedin.com/dms-uploads/C4E10AQEDRhUsYL9[...]mF3yak1"
  }
}
That looks better ✌️
EDIT
After several tests, the upload works when I have only one upload link, but I get no response from the server when I have several upload URLs.
My code:
const uploadPromises: Array<() => Promise<AxiosResponse<void>>> = [];
uploadData.data.value.uploadInstructions.forEach((uploadInstruction: UploadInstructionType) => {
  const bufferChunk: Buffer = videoStream.data.subarray(uploadInstruction.firstByte, uploadInstruction.lastByte + 1);
  const func = async (): Promise<AxiosResponse<void>> =>
    linkedinRestApiRepository.uploadMedia(uploadInstruction.uploadUrl, bufferChunk, videoContentType, videoContentLength);
  uploadPromises.push(func);
});

let uploadVideoResponses: Array<AxiosResponse<void>>;
try {
  uploadVideoResponses = await series(uploadPromises);
} catch (e) {
  console.error(e);
}
Something goes wrong when I have several upload links, but I don't know what 😞
In my case I divided my file buffer into an array of buffers; then you can use map to upload each buffer with the matching uploadUrl, roughly like the sketch below.
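An untested sketch with plain axios (the Content-Type value is an assumption; the key point is that each chunk is PUT to its own uploadUrl and carries that chunk's length, not the whole file's length):

import axios, { AxiosResponse } from 'axios';

type UploadInstruction = { uploadUrl: string; firstByte: number; lastByte: number };

async function uploadVideoChunks(
  videoBuffer: Buffer,
  uploadInstructions: UploadInstruction[],
): Promise<Array<AxiosResponse<void>>> {
  const responses: Array<AxiosResponse<void>> = [];
  for (const { uploadUrl, firstByte, lastByte } of uploadInstructions) {
    // subarray's end index is exclusive, hence lastByte + 1
    const chunk = videoBuffer.subarray(firstByte, lastByte + 1);
    responses.push(
      await axios.put<void>(uploadUrl, chunk, {
        headers: {
          'Content-Type': 'application/octet-stream',
          'Content-Length': chunk.length, // length of this chunk, not of the whole video
        },
        maxBodyLength: Infinity,
      }),
    );
  }
  return responses;
}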
I created a Next.js app that uploads files to Pinata (IPFS).
try {
  const url = `https://api.pinata.cloud/pinning/pinFileToIPFS`;
  // formData is created beforehand (a form-data instance with the file appended)
  const fileRes = await axios.post(url, formData, {
    maxBodyLength: Infinity,
    headers: {
      "Content-Type": `multipart/form-data; boundary=${formData.getBoundary()}`,
      pinata_api_key: pinataApiKey,
      pinata_secret_api_key: pinataSecretApiKey,
    },
  });
  console.log("fileRes", fileRes.data);
  return res.status(200).send(fileRes.data);
} catch (error: any) {
  console.error("error in uploading image", error.response.data);
}
this is the response I get
fileRes {
IpfsHash: 'QmWb63anLitKNrfiZBiyNBLJGbA57vkFqugJUJvUVvvaKi',
PinSize: 35672,
Timestamp: '2022-05-20T17:48:31.784Z'
}
I successfully uploaded the image, and it is in my Pinata files. But I cannot display it; I get this error in the console:
net::ERR_SSL_PROTOCOL_ERROR
When I click on the link, the browser shows an error page as well.
I tried uploading through the Pinata upload page, but I have the same issue.
I tried different images, deleted them and re-uploaded them, but I still get the error.
SOLVED
I changed the DNS server to Cloudflare and it works now.
I am making a call in my SwiftUI app which adds a document to my Cloud Firestore database. The issue is that this takes around 5 seconds to complete, rendering the app useless for those 5 seconds. I think this could also be down to the Cloud Functions that run when the document is added, but I am not sure.
I have tried, rather unsuccessfully so far I might add, to move the work onto other threads, but to no avail. Can anyone provide some advice?
This is how I am calling the function:
Button(action: {
    DispatchQueue.main.async {
        addCard(time: "", text: "\(userSettings.username) Recorded")
    }
}) {
    Text("Another One")
}
This is the actual function that is running.
func addCard(_ card: Card) {
    do {
        _ = try store.collection(path).addDocument(from: card)
    } catch {
        fatalError("Unable to add card: \(error.localizedDescription).")
    }
}
As I think my Cloud Function could also be causing the issue, here it is too:
exports.logActivities = functions.firestore.document('/{collection}/{id}')
  .onCreate((snap, context) => {
    return admin.messaging().send({
      "notification": {
        "title": 'New Record',
        "body": snap.data()['text'] + ' a new "Tell Em Lad"'
      },
      "apns": {
        "payload": {
          "aps": {
            "sound": 'notiSound.wav'
          }
        }
      },
      "topic": 'client'
    });
  });
Any help would be greatly appreciated!
We are getting a "No image present." error while attempting face detection with the Cloud Vision API.
We are using code from the official documentation.
Please see the code below.
const request1 = {
  "requests": [
    {
      "image": {
        "content": imgdatauri // it contains the image data URI
      },
      "features": [
        {
          "type": "FACE_DETECTION",
          "maxResults": 1
        }
      ]
    }
  ]
};
client
  .annotateImage(request1)
  .then(result => {
    console.log(result);
    response.send(result); // `response` here is the outgoing HTTP response object
  })
  .catch(err => {
    console.error(err);
    response.send(err);
  });
Here is the error message.
Error: No image present.
at _coerceRequest (/rbd/pnpm-volume/e40024d2-3d05-4f3d-a435-6d4e6ca96fb0/node_modules/.registry.npmjs.org/@google-cloud/vision/1.1.3/node_modules/@google-cloud/vision/src/helpers.js:69:21)
at ImageAnnotatorClient.<anonymous> (/rbd/pnpm-volume/e40024d2-3d05-4f3d-a435-6d4e6ca96fb0/node_modules/.registry.npmjs.org/@google-cloud/vision/1.1.3/node_modules/@google-cloud/vision/src/helpers.js:224:12)
at PromiseCtor (/rbd/pnpm-volume/e40024d2-3d05-4f3d-a435-6d4e6ca96fb0/node_modules/.registry.npmjs.org/@google-cloud/promisify/1.0.2/node_modules/@google-cloud/promisify/build/src/index.js:71:28)
at new Promise (<anonymous>)
at ImageAnnotatorClient.wrapper [as annotateImage] (/rbd/pnpm-volume/e40024d2-3d05-4f3d-a435-6d4e6ca96fb0/node_modules/.registry.npmjs.org/@google-cloud/promisify/1.0.2/node_modules/@google-cloud/promisify/build/src/index.js:56:16)
We would like to know what we need to do to resolve the issue.
Method 1:
In the case of the Vision API, if the image is stored locally, you must convert it to a base64 string and pass that string as the value of content.
Make sure you convert the image to a base64 string before assigning it to the content field.
There are online services that convert an image to a base64 string, and you can also do the conversion with a small piece of code. Here is a link to one such service:
https://www.browserling.com/tools/image-to-base64
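As a minimal sketch using the Node.js @google-cloud/vision client from the question (the local path ./face.jpg is hypothetical), the conversion and request could look like this; note that content expects the plain base64 string, without a data:image/...;base64, prefix:

import * as fs from 'fs';
import vision from '@google-cloud/vision';

async function detectFaces(): Promise<void> {
  const client = new vision.ImageAnnotatorClient();
  // read the local file and encode it as a plain base64 string
  const base64Image = fs.readFileSync('./face.jpg').toString('base64');
  const [result] = await client.annotateImage({
    image: { content: base64Image },
    features: [{ type: 'FACE_DETECTION', maxResults: 1 }],
  });
  console.log(result.faceAnnotations);
}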
Method 2:
You can provide a public URL of the image to the Vision API.
{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": PUBLIC_URL
        }
      },
      "features": [
        {
          "type": TYPE_OF_DETECTION,
          "maxResults": MAX_NUMBER_OF_RESULTS
        }
      ]
    }
  ]
}
Method 3:
You can create a bucket and put the image there.
Then you can provide that image object's URL or path, as shown in the sketch below.
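For example (a hedged sketch; the bucket and object names are placeholders), image.source.imageUri also accepts a gs:// URI:

const request = {
  image: { source: { imageUri: 'gs://my-bucket/face.jpg' } },
  features: [{ type: 'FACE_DETECTION', maxResults: 1 }],
};
// then call client.annotateImage(request) exactly as in Method 1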
I think this will help you.
Thank you.
I have created a Cloud Function with a Cloud Storage trigger. My function gets triggered (with an event) when I upload an image file, and I can see the event has event.mediaLink and event.selfLink. I tried using both to load the image, but it keeps complaining that no image is present.
Here is the code:
exports.analyzeImage = function (event) {
  const vision = require('@google-cloud/vision');
  const client = new vision.ImageAnnotatorClient();
  console.log('Event', event.mediaLink);
  let image = {
    source: { imageUri: event.mediaLink }
  };
  return client.labelDetection(image).then(resp => {
    let labels = resp[0].labelAnnotations.map(l => {
      return {
        description: l.description,
        score: l.score
      };
    });
    return labels;
    // const dataset = bigquery.dataset('dataset')
    // return dataset.table
  }).catch(err => {
    console.error(err);
  });
};
I am playing around with downloading and serving MP3 files in Meteor.
I am trying to download an MP3 file (https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3) on my Meteor server side (to circumvent CORS issues) and then pass it back to the client to play it in an audio tag.
In Meteor you use the Meteor.call function to call a server method. There is not much to configure, it's just a method call and a callback.
When I run the method I receive this:
content:
"ID3���#K `�)�<H� e0�)������1������J}��e����2L����������fȹ\�CO��ȹ'�����}$A�Lݓ����3D/����fijw��+�LF�$?��`R�l�YA:A��#�0��pq����4�.W"�P���2.Iƭ5��_I�d7d����L��p0��0A��cA�xc��ٲR�BL8䝠4���T��..etc..", data:null,
headers: {
accept-ranges:"bytes",
connection:"close",
content-length:"443926",
content-type:"audio/mpeg",
date:"Mon, 20 Aug 2018 13:36:11 GMT",
last-modified:"Fri, 17 Jun 2016 18:16:53 GMT",
server:"Apache",
statusCode:200
which is the working Mp3 file (the content-length is exactly the same as the file I write to disk on the MeteorJS Server side, and it is playable).
However, the following code doesn't let me convert the response into a Blob:
MeteorObservable.call('episode.download', episode.url.url).subscribe((result: any) => {
  console.log('response', result);
  let URL = window.URL;
  let blob = new Blob([result.content], {type: 'audio/mpeg'});
  console.log('blob', blob);
  let audioUrl = URL.createObjectURL(blob);
  let audioElement: any = document.getElementsByTagName('audio')[0];
  audioElement.setAttribute("src", audioUrl);
  audioElement.play();
})
When I run the code, the Blob has the wrong size and is not playable
Blob(769806) {size: 769806, type: "audio/mpeg"}
size:769806
type:"audio/mpeg"
__proto__:Blob
Uncaught (in promise) DOMException: Failed to load because no supported source was found.
On the backend I just run return HTTP.get(url); in the method, using import { HTTP } from 'meteor/http'.
I have been trying to use btoa or atob, but that doesn't work, and as far as I know it is already a base64-encoded file, right?
I am not sure why the Blob constructor creates a larger file than the source returned from the backend, and I am not sure why it is not playing.
Can anyone point me in the right direction?
Finally found a solution that uses request instead of Meteor's HTTP:
First you need to install request and request-promise-native in order to make it easy to return your result to clients.
$ meteor npm install --save request request-promise-native
Now you just return the promise of the request in a Meteor method:
server/request.js
import { Meteor } from 'meteor/meteor'
import request from 'request-promise-native'

Meteor.methods({
  getAudio (url) {
    return request.get({ url, encoding: null })
  }
})
Notice the encoding: null flag, which causes the result to be returned as binary data. I found this in a comment on an answer about downloading binary data via Node. With this flag the response is not a string but a binary representation of the data (presumably a Node Buffer under the hood).
Now it gets interesting. On your client you won't receive a complex result anymore but either an Error or a Uint8Array, which makes sense because Meteor uses EJSON to send data over the wire with DDP, and the representation of binary data is a Uint8Array, as described in the documentation.
Because you can pass a Uint8Array straight into a Blob, you can now easily create the blob like so:
const blob = new Blob([uint8Array], {type: 'audio/mpeg'})
Summarizing all this into a small template, it could look like this:
client/fetch.html
<template name="fetch">
  <button id="fetchbutton">Fetch Mp3</button>
  {{#if source}}
    <audio id="player" src={{source}} preload="none" content="audio/mpeg" controls></audio>
  {{/if}}
</template>
client/fetch.js
import { Meteor } from 'meteor/meteor'
import { Template } from 'meteor/templating'
import { ReactiveVar } from 'meteor/reactive-var'
import './fetch.html'

Template.fetch.onCreated(function helloOnCreated () {
  // reactive var holding the object URL for the fetched audio
  this.source = new ReactiveVar(null)
})

Template.fetch.helpers({
  source () {
    return Template.instance().source.get()
  },
})

Template.fetch.events({
  'click #fetchbutton' (event, instance) {
    Meteor.call('getAudio', 'https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3', (err, uint8Array) => {
      const blob = new Blob([uint8Array], {type: 'audio/mpeg'})
      instance.source.set(window.URL.createObjectURL(blob))
    })
  },
})
An alternative solution is adding a REST endpoint (using Express) to your Meteor backend.
Instead of Meteor's HTTP we use request and request-progress to send the data in chunks, which helps with large files.
On the frontend I catch the chunks using https://angular.io/guide/http#listening-to-progress-events to show a loader and deal with the response.
I could listen to the download via
this.http.get('the URL to a mp3', { responseType: 'arraybuffer' }).subscribe((res: any) => {
  var blob = new Blob([res], { type: 'audio/mpeg' });
  var url = window.URL.createObjectURL(blob);
  window.open(url);
});
The above example doesn't show progress, by the way; you need to implement the progress events as explained in the Angular article. A rough sketch is below. Happy to update the example with my final code when finished.
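For reference, a rough sketch of the progress listening (not my final code; the HttpClient instance and the endpoint URL are assumed to be available):

import { HttpClient, HttpEventType } from '@angular/common/http';

// `http` is an injected HttpClient; `url` points at the /episodes endpoint below
function downloadWithProgress(http: HttpClient, url: string): void {
  http
    .get(url, { responseType: 'arraybuffer', reportProgress: true, observe: 'events' })
    .subscribe((event) => {
      if (event.type === HttpEventType.DownloadProgress) {
        // event.total is only set when the server sends a Content-Length header
        if (event.total) {
          console.log('progress', Math.round((100 * event.loaded) / event.total), '%');
        }
      } else if (event.type === HttpEventType.Response) {
        const blob = new Blob([event.body as ArrayBuffer], { type: 'audio/mpeg' });
        window.open(window.URL.createObjectURL(blob));
      }
    });
}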
The Express setup on the Meteor Server:
/*
Source:http://www.mhurwi.com/meteor-with-express/
## api.class.ts
*/
import { WebApp } from 'meteor/webapp';
const express = require('express');
const trackRoute = express.Router();
const request = require('request');
const progress = require('request-progress');
export function api() {
  const app = express();

  app.use(function(req, res, next) {
    res.header("Access-Control-Allow-Origin", "*");
    res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
    next();
  });

  app.use('/episodes', trackRoute);

  trackRoute.get('/:url', (req, res) => {
    res.set('content-type', 'audio/mp3');
    res.set('accept-ranges', 'bytes');
    // The options argument is optional so you can omit it
    progress(request(req.params.url), {
      // throttle: 2000,                   // Throttle the progress event to 2000ms, defaults to 1000ms
      // delay: 1000,                      // Only start to emit after 1000ms delay, defaults to 0ms
      // lengthHeader: 'x-transfer-length' // Length header to use, defaults to content-length
    })
      .on('progress', function (state) {
        // The state is an object that looks like this:
        // {
        //   percent: 0.5,           // Overall percent (between 0 to 1)
        //   speed: 554732,          // The download speed in bytes/sec
        //   size: {
        //     total: 90044871,      // The total payload size in bytes
        //     transferred: 27610959 // The transferred payload size in bytes
        //   },
        //   time: {
        //     elapsed: 36.235,      // The total elapsed seconds since the start (3 decimals)
        //     remaining: 81.403     // The remaining seconds to finish (3 decimals)
        //   }
        // }
        console.log('progress', state);
      })
      .on('error', function (err) {
        // Do something with err
      })
      .on('end', function () {
        console.log('DONE');
        // Do something after request finishes
      })
      .pipe(res);
  });

  WebApp.connectHandlers.use(app);
}
and then add this to your meteor startup:
import { Meteor } from 'meteor/meteor';
import { api } from './imports/lib/api.class';
Meteor.startup( () => {
api();
});
Goal
We would like users to be able to upload images to Google Cloud Storage.
Problem
We could achieve this indirectly with our server as a middleman: first, the user uploads to our server, then our privileged server uploads to Cloud Storage.
However, we think this is unnecessarily slow, and instead would like the user to upload directly to Cloud Storage.
Proposed Solution
To achieve a direct upload, we generate a Signed URL on our server. The Signed URL specifies an expiration time, and can only be used with the HTTP PUT verb. A user can request a Signed URL, and then - for a limited time only - upload an image to the path specified by the Signed URL.
Problem with the Solution
Is there any way to enforce a maximum file upload size? Obviously we would like to avoid users attempting to upload 20GB files when we expect <1MB files.
It seems like this is an obvious vulnerability, yet I don't know how to address it while still using SignedURLs.
There seems to be a way to do this using Policy Documents (Stack Overflow answer), but the question is over 2 years old now.
For anyone looking at this answer today, be aware that the header
x-goog-content-length-range: 0,25000
is the way to limit the upload size (here between 0 and 25000 bytes) in Cloud Storage.
X-Upload-Content-Length will not work; you are still able to upload larger files.
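For example, with the Node.js @google-cloud/storage client (a minimal sketch, not an official sample; bucket and file names are placeholders), the header is included in the signed extension headers so Cloud Storage can enforce the range:

import { Storage, GetSignedUrlConfig } from '@google-cloud/storage';

async function getLimitedUploadUrl(bucketName: string, fileName: string): Promise<string> {
  const storage = new Storage();
  const config: GetSignedUrlConfig = {
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    // uploads outside 0..25000 bytes are rejected by Cloud Storage
    extensionHeaders: { 'x-goog-content-length-range': '0,25000' },
  };
  const [url] = await storage.bucket(bucketName).file(fileName).getSignedUrl(config);
  return url;
}

The client then has to send the same x-goog-content-length-range: 0,25000 header with its PUT request, since signed extension headers must be present on the actual upload.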
Policy documents are still the right answer. They are documented here: https://cloud.google.com/storage/docs/xml-api/post-object#policydocument
The important part of the policy document you'll need is:
["content-length-range", <min_range>, <max_range>].
Signing the content-length header should do the trick.
Google Cloud will not allow uploads with a larger file size, even if the content-length header is set to a lower value.
This is how the signed URL options should look:
const writeOptions: GetSignedUrlConfig = {
  version: 'v4',
  action: 'write',
  expires: Date.now() + 900000, // 15 minutes
  extensionHeaders: {
    "content-length": length // desired length in bytes
  }
}
My working code in NodeJS follows https://blog.koliseo.com/limit-the-size-of-uploaded-files-with-signed-urls-on-google-cloud-storage/. You must use version v4.
public async getPreSignedUrlForUpload(
  fileName: string,
  contentType: string,
  size: number,
  bucketName: string = this.configService.get('DEFAULT_BUCKET_NAME'),
): Promise<string> {
  const bucket = this.storage.bucket(bucketName);
  const file = bucket.file(fileName);
  const response = await file.getSignedUrl({
    action: 'write',
    contentType,
    extensionHeaders: {
      'X-Upload-Content-Length': size,
    },
    expires: Date.now() + 60 * 1000, // 1 minute
    version: 'v4',
  });
  const signedUrl = this.maskSignedUrl(response[0], bucketName);
  return signedUrl;
}
On the frontend, we must set the same size value in the X-Upload-Content-Length header:
export async function uploadFileToGCP(
  signedUrl: string,
  file: any
): Promise<any> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.withCredentials = process.env.NODE_ENV === 'production';
    xhr.addEventListener('readystatechange', function () {
      if (this.readyState === 4) {
        // resolve on success, reject on an error status
        if (this.status >= 200 && this.status < 300) {
          resolve(this.responseText);
        } else {
          reject(new Error(`Upload failed with status ${this.status}`));
        }
      }
    });
    xhr.open('PUT', signedUrl, true);
    xhr.setRequestHeader('Content-Type', file.type);
    xhr.setRequestHeader('X-Upload-Content-Length', file.size);
    xhr.send(file);
  });
}
And also don't forget to configure the responseHeader list in the bucket's CORS settings:
gsutil cors get gs://asia-item-images
[{"maxAgeSeconds": 3600, "method": ["GET", "OPTIONS", "PUT"], "origin": ["*"], "responseHeader": ["Content-Type", "Access-Control-Allow-Origin", "X-Upload-Content-Length", "X-Goog-Resumable"]}]
You can use X-Upload-Content-Length instead of Content-Length. See blog post here.
On the server side (Java):
Map<String, String> extensionHeaders = new HashMap<>();
extensionHeaders.put("X-Upload-Content-Length", "" + contentLength);
extensionHeaders.put("Content-Type", "application/octet-stream");

var url =
    storage.signUrl(
        blobInfo,
        15,
        TimeUnit.MINUTES,
        Storage.SignUrlOption.httpMethod(HttpMethod.PUT),
        Storage.SignUrlOption.withExtHeaders(extensionHeaders),
        Storage.SignUrlOption.withV4Signature()
    );
On the client side (TypeScript):
const response = await fetch(url, {
  method: 'PUT',
  headers: {
    'X-Upload-Content-Length': `${file.size}`,
    'Content-Type': 'application/octet-stream',
  },
  body: file,
});
You will need to set up a cors policy on your bucket:
[
  {
    "origin": ["https://your-website.com"],
    "responseHeader": [
      "Content-Type",
      "Access-Control-Allow-Origin",
      "X-Upload-Content-Length",
      "x-goog-resumable"
    ],
    "method": ["PUT", "OPTIONS"],
    "maxAgeSeconds": 3600
  }
]