I'm getting a NOT_FOUND error while adding tasks to Google Cloud Tasks from Firebase Functions. I only tried to add a single task, and I'm not sure why this is happening.
The queue exists, and adding to it from the command line works fine.
Here is the detailed error log:
Error: 5 NOT_FOUND: Requested entity was not found.
at Object.callErrorFromStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client.js:189:52)
at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:365:141)
at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
at /workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78
at processTicksAndRejections (node:internal/process/task_queues:78:11)
Any help would save my day.
Thanks in advance.
Check the service account email you are using. Suppose you have a task object like this:
const task = {
  httpRequest: {
    httpMethod: 'POST',
    url,
    body: Buffer.from(JSON.stringify(payload)).toString('base64'), // required by the Cloud Tasks API
    headers: {
      'Content-Type': 'application/json',
    },
    oidcToken: {
      serviceAccountEmail // check that this is a valid service account email
    }
  },
  scheduleTime: {
    seconds: sendAtSeconds // Unix time (in seconds) at which the task should run
  }
};
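A NOT_FOUND from Cloud Tasks can also be caused by the parent queue path rather than the task itself, so it is worth double-checking the project ID, location, and queue name used to build it (the location must match the region the queue was created in). A minimal sketch of how a task object like the one above is typically submitted (the project, location, and queue values here are placeholders, not taken from the question):

const { CloudTasksClient } = require('@google-cloud/tasks');

const client = new CloudTasksClient();

async function enqueueTask(task) {
  // Placeholder values -- replace with your own project ID, queue location, and queue name.
  const parent = client.queuePath('my-project-id', 'us-central1', 'my-queue');

  // A NOT_FOUND here usually means `parent` does not point at an existing queue.
  const [response] = await client.createTask({ parent, task });
  console.log(`Created task ${response.name}`);
  return response;
}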
I have the following code on my dynamic routing page [id] that I am trying to use with next-i18next translations. However, it throws an error when deployed on Vercel (it works locally). I am trying to use fallback with an empty paths array so that all possible paths are accepted. In the console I get a 500 status code on the GET request and a "Failed to load static props" error.
It works when I specify a specific id in getStaticPaths and navigate to that matching path. However, I can't possibly specify thousands of ids for this to work. Shouldn't the fallback take care of this, or how can I get past this?
export async function getStaticPaths() {
  return {
    paths: [],
    fallback: true,
  }
}

export async function getStaticProps(context) {
  return {
    props: {
      params: context.params,
      ...(await serverSideTranslations(context.locale, ["common"])),
    },
  }
}
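For reference, with fallback: true the page component is expected to handle the fallback render itself while the HTML for a new id is generated on demand. A minimal sketch of that pattern, assuming the page lives at pages/packages/[id].js (the component body is illustrative, not taken from the question):

// pages/packages/[id].js (illustrative sketch)
import { useRouter } from 'next/router';

export default function PackagePage({ params }) {
  const router = useRouter();

  // With fallback: true, the first request for an id not returned by
  // getStaticPaths renders this state while getStaticProps runs.
  if (router.isFallback) {
    return <div>Loading...</div>;
  }

  return <div>Package {params.id}</div>;
}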
Update:
This is the Vercel function log (the xxxxx values are IDs I removed):
[GET] /_next/data/xxxxxxxxxxx-y/en/packages/490713.json
20:26:37:98
2022-02-28T19:26:39.300Z xxxxxxxxxxxxxxxxxxxxx ERROR Error: ENOENT: no such file or directory, scandir '/var/task/public/locales/en'
at Object.readdirSync (fs.js:1047:3)
at getLocaleNamespaces (/var/task/node_modules/next-i18next/dist/commonjs/config/createConfig.js:175:23)
at /var/task/node_modules/next-i18next/dist/commonjs/config/createConfig.js:181:20
at Array.map (<anonymous>)
at getNamespaces (/var/task/node_modules/next-i18next/dist/commonjs/config/createConfig.js:180:44)
at createConfig (/var/task/node_modules/next-i18next/dist/commonjs/config/createConfig.js:221:29)
at _callee$ (/var/task/node_modules/next-i18next/dist/commonjs/serverSideTranslations.js:199:53)
at tryCatch (/var/task/node_modules/regenerator-runtime/runtime.js:63:40)
at Generator.invoke [as _invoke] (/var/task/node_modules/regenerator-runtime/runtime.js:294:22)
at Generator.next (/var/task/node_modules/regenerator-runtime/runtime.js:119:21) {
errno: -2,
syscall: 'scandir',
path: '/var/task/public/locales/en',
page: '/packages/[id]'
}
RequestId: xxxxxxxxxxxxxxxxxxxx Error: Runtime exited with error: exit status 1
Runtime.ExitError
This line was missing in my next-i18next config (next-i18next.config.js):
localePath: path.resolve('./public/locales'),
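For context, a complete next-i18next.config.js with that line looks something like the sketch below (the locale list is an assumption; use whatever locales the project actually defines):

const path = require('path');

module.exports = {
  i18n: {
    defaultLocale: 'en', // assumed
    locales: ['en'],     // assumed; list all locales you ship
  },
  // Resolving the locale folder to an absolute path is what lets the
  // serverless function on Vercel find /public/locales at runtime.
  localePath: path.resolve('./public/locales'),
};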
I have created a model that was working when my backend functions were running on my local machine, but when they run on AWS I get an authorization error when the table is queried:
2022-02-18T08:54:58.149Z 31785a81-ea8c-434b-832f-6dcff583c01c ERROR Unhandled Promise Rejection
{
"errorType": "Runtime.UnhandledPromiseRejection",
"errorMessage": "AccessDeniedException: User: arn:aws:sts::xxxxxxxxx:assumed-role/dev-production-history-role/ppc-backend-functions-dev-queryProductionHistoryItems is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:eu-west-1:xxxxxxxxxxxx:table/dev-production-history-table",
"trace": [
"Runtime.UnhandledPromiseRejection: AccessDeniedException: User: arn:aws:sts::xxxxxxxxx:assumed-role/dev-production-history-role/ppc-backend-functions-dev-queryProductionHistoryItems is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:eu-west-1:xxxxxxxxx:table/dev-production-history-table",
" at process.<anonymous> (/var/runtime/index.js:35:15)",
" at process.emit (events.js:400:28)",
" at processPromiseRejections (internal/process/promises.js:245:33)",
" at processTicksAndRejections (internal/process/task_queues.js:96:32)"
]
}
This is how my model is defined:
const model = dynamoose.model<ProductionHistory>(DatabaseTableNames.productionHistoryTable, {schema});
From looking at possible solutions, it seems that adding {"create": false} to the parameters might solve the issue, but in version 3 of Dynamoose the model function no longer accepts a third options parameter, so this will not work:
const model = dynamoose.model<ProductionHistory>(DatabaseTableNames.productionHistoryTable,
    schema, {"create": false});
Does anyone know how to overcome this problem so that it works with Dynamoose version 3?
I have made the changes that Charlie Fish suggested and I am now getting the following error:
2022-02-18T16:39:39.211Z b00a36b8-c612-4886-b9fc-da7084527bf0 INFO AccessDeniedException: User: arn:aws:sts::874124979428:assumed-role/dev-production-history-role/ppc-backend-functions-dev-queryProductionHistoryItems is not authorized to perform: dynamodb:Query on resource: arn:aws:dynamodb:eu-west-1:874124979428:table/dev-production-history-table
at deserializeAws_json1_0QueryCommandError (/var/task/node_modules/dynamoose/node_modules/@aws-sdk/client-dynamodb/dist-cjs/protocols/Aws_json1_0.js:2984:41)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async /var/task/node_modules/dynamoose/node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24
at async /var/task/node_modules/dynamoose/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:11:20
at async StandardRetryStrategy.retry (/var/task/node_modules/dynamoose/node_modules/@aws-sdk/middleware-retry/dist-cjs/StandardRetryStrategy.js:51:46)
at async /var/task/node_modules/dynamoose/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:6:22
at async main (/var/task/node_modules/dynamoose/dist/aws/ddb/internal.js:6:20)
at async /var/task/node_modules/dynamoose/dist/ItemRetriever.js:105:32
at async Object.queryByDate (/var/task/functions/production-history/query.js:1:1723)
at async Runtime.l [as handler] (/var/task/functions/production-history/query.js:1:1974) {
__type: 'com.amazon.coral.service#AccessDeniedException',
'$fault': 'client',
'$metadata': {
httpStatusCode: 400,
requestId: 'DCB6SNOH9O2NTRAS9LL3OJGEU7VV4KQNSO5AEMVJF66Q9ASUAAJG',
extendedRequestId: undefined,
cfId: undefined,
attempts: 1,
totalRetryDelay: 0
},
'$response': HttpResponse {
statusCode: 400,
headers: {
server: 'Server',
date: 'Fri, 18 Feb 2022 16:39:39 GMT',
'content-type': 'application/x-amz-json-1.0',
'content-length': '331',
connection: 'keep-alive',
'x-amzn-requestid': 'DCB6SNOH9O2NTRAS9LL3OJGEU7VV4KQNSO5AEMVJF66Q9ASUAAJG',
'x-amz-crc32': '2950006190'
},
body: IncomingMessage {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 2,
_maxListeners: undefined,
socket: null,
httpVersionMajor: 1,
httpVersionMinor: 1,
httpVersion: '1.1',
complete: true,
headers: [Object],
rawHeaders: [Array],
trailers: {},
rawTrailers: [],
aborted: false,
upgrade: false,
url: '',
method: null,
statusCode: 400,
statusMessage: 'Bad Request',
client: [TLSSocket],
_consuming: false,
_dumped: false,
req: [ClientRequest],
[Symbol(kCapture)]: false,
[Symbol(RequestTimeout)]: undefined
}
}
}
This is my code now:
const model = dynamoose.model<ProductionHistory>(DatabaseTableNames.productionHistoryTable, schema);
const Table = new dynamoose.Table(DatabaseTableNames.productionHistoryTable, [model], {"create": false, "waitForActive": false});
Any ideas?
Disclaimer: this answer is based on Dynamoose v3.0.0 beta 1. Answers based on beta versions can become outdated quickly, so be sure to check for any updated details for your version of Dynamoose.
In Dynamoose v3, a new class called Table was introduced. It represents a single DynamoDB table. In previous versions of Dynamoose, a Model represented a single DynamoDB table, but based on the API it also loosely represented a specific entity or model in your data structure (for example Movie, Order, User, etc.). This led to complications and confusion, especially around single table design structures.
In terms of code, what this means is the following.
// If you have the following code in v2:
const User = dynamoose.model("User", {"id": String});
// It will be converted to this in v3:
const User = dynamoose.model("User", {"id": String});
const DBTable = new dynamoose.Table("DBTable", [User]);
So basically you create a new Table instance based on your Model. In v3, if you try to use your Model without creating a Table instance based on it, it will throw an error.
Once you do that, you can pass settings as the third parameter of the Table constructor. One of those settings is create, so you can set it to false there.
Your code specifically would look something like:
const model = dynamoose.model<ProductionHistory>(DatabaseTableNames.productionHistoryTable, schema);
const DBTable = new dynamoose.Table(DatabaseTableNames.productionHistoryTable, [model], {"create": false});
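With the Table declared alongside the model, your existing query code should keep working unchanged. A quick sketch of a query against that model (the attribute name and value are placeholders, not taken from the question):

// Inside an async function; use your schema's hash key or an index attribute here.
const results = await model.query("productionDate").eq("2022-02-18").exec();
console.log(results); // array of matching items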
I've been getting the following error from a Cloud function that uses the Cloud Vision API:
Error: 1 CANCELLED: The operation was cancelled.
at Object.callErrorFromStatus (/srv/functions/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Http2CallStream.call.on (/srv/functions/node_modules/@grpc/grpc-js/build/src/client.js:96:33)
at Http2CallStream.emit (events.js:203:15)
at Http2CallStream.EventEmitter.emit (domain.js:466:23)
at process.nextTick (/srv/functions/node_modules/@grpc/grpc-js/build/src/call-stream.js:100:22)
at process._tickCallback (internal/process/next_tick.js:61:11)
code: 1,
details: 'The operation was cancelled.',
metadata:
Metadata {
internalRepr:
Map {
'google.rpc.debuginfo-bin' => [Array],
'grpc-status-details-bin' => [Array] },
options: {} },
note:
'Exception occurred in retry method that was not classified as transient' }
The code is as follows:
const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient();

const [result] = await client.textDetection(`gs://${process.env.GCLOUD_PROJECT}.appspot.com/${fileName}`)
    .catch((err: any) => {
        return db.doc(event.ref.path).update({ status: 'error' });
    });
Not sure if this has to do with the problems Firebase had today?
I resolved this by pinning google-gax to version 1.15.2 and adding the following resolution to package.json:
"resolutions": {
"google-gax": "1.15.2"
},
I am trying to make an HTTP GET request to a database where I do have access and can reproduce the GET result in Postman.
I have created a service in Angular 2 {N} (NativeScript) where I execute this GET, but I get the following error:
JS: EXCEPTION: Error: Uncaught (in promise): Response with status: 200 for URL: null
JS: STACKTRACE:
JS: Error: Uncaught (in promise): Response with status: 200 for URL: null
JS: at resolvePromise (/data/data/org.nativescript.ndemo/files/app/tns_modules/zone.js/dist/zone-node.js:496:32)
JS: at resolvePromise (/data/data/org.nativescript.ndemo/files/app/tns_modules/zone.js/dist/zone-node.js:481:18)
JS: at /data/data/org.nativescript.ndemo/files/app/tns_modules/zone.js/dist/zone-node.js:529:18
JS: at ZoneDelegate.invokeTask (/data/data/org.nativescript.ndemo/files/app/tns_modules/zone.js/dist/zone-node.js:314:38)
JS: at Object.NgZoneImpl.inner.inner.fork.onInvokeTask (/data/data/org.nativescript.ndemo/files/app/tns_modules/@angular/core/src/zone/ng_zone_impl.js:37:41)
JS: at ZoneDelegate.invokeTask (/data/data/org.nativescript.ndemo/files/app/tns_modules/zone.js/dist/zone-node.js:313:43)
JS: at Zone.runTask (/data/data/org.nativescript.ndemo/files/app/tns_modules/zone.js/dist/zone-node.js:214:48)
JS: at drainMicroTaskQueue (/data/data/org.nativescript.ndemo/files/app/tns_modules/zone.js/dist/zone-node.js:432:36)
JS: Unhandled Promise rejection: Response with status: 200 for URL: null ; Zone: angular ; Task: Promise.then ; Value: Response with status: 200 for URL: null
JS: Error: Uncaught
(in promise): Response with status: 200 for URL: null
My service:
import { Headers, Http } from '@angular/http';
import 'rxjs/add/operator/toPromise';

export function createNonJsonResponse(http: Http, fullUrl: string): Promise<string> {
    return http.get(fullUrl)
        .toPromise()
        .then(response => response.text())
        // .catch(this.handleError);
}
I have logged both the URL passed in and the Http instance, and they look fine.
I have no idea why this is happening, and Google couldn't help me find any solutions whatsoever.
It seems, at least for me, that the @angular/http Http module is not working as intended. I switched to NativeScript's http service (https://docs.nativescript.org/cookbook/http) and managed to accomplish what I needed without any problems.
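A rough sketch of that approach with NativeScript's http module, using a placeholder URL:

// NativeScript's built-in http module (not Node's and not Angular's).
var http = require("http");

// getString resolves with the raw response body as a string.
http.getString("http://httpbin.org/get")
    .then(function (result) {
        console.log(result);
    }, function (error) {
        console.error(error);
    });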
Are you injecting the service somewhere? Add @Injectable() above the export. What if you change how you write the service a little and see if you receive the same response?
import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/toPromise';

@Injectable()
export class createNonJsonResponse {
    fullUrl: string = 'http://httpbin.org/get';
    constructor(private http: Http) {}
    getNonJsonResponse() {
        return this.http.get(this.fullUrl)
            .toPromise()
            .then(response => response.text())
            .catch(this.handleErrors);
    }
    // Minimal handler so the .catch above has something to call.
    private handleErrors(error: any) {
        console.error(error);
        return Promise.reject(error);
    }
}
Then import it into your component
import {createNonJsonResponse} from "./yourservicename.service"
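You will also need to register the service as a provider and inject it; a sketch of that wiring (the component name, selector, and template are placeholders):

import { Component } from "@angular/core";
import { createNonJsonResponse } from "./yourservicename.service";

@Component({
    selector: "my-app",                  // placeholder selector
    templateUrl: "app.component.html",   // placeholder template
    providers: [createNonJsonResponse]   // registers the service for injection
})
export class AppComponent {
    // The property name here is what makes `this.createNonJsonResponse`
    // in the call below resolve to the injected service.
    constructor(private createNonJsonResponse: createNonJsonResponse) {}
}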
And finally call it
this.createNonJsonResponse.getNonJsonResponse().then(function (data) {
    alert("here");
    //console.log(data);
    //console.dump(data);
})
This worked for me; I was able to hit my alert.