I am a first time user of Gcloud. When I run the following command:
gcloud beta functions deploy FirstBot --stage-bucket [BUCKET_NAME] --trigger-http
I'm getting this error in my cmd:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function load error: File index.js or function.js that is expected to define function doesn't exist in the root directory.
I have tried 2 index.js files:
Here's no.1:
/*
HTTP Cloud Function.
@param {Object} req Cloud Function request context.
@param {Object} res Cloud Function response context.
*/
exports.FirstBot = function FirstBot (req, res) {
  const response = "This is a sample response from your webhook!"; // Default response from the webhook to show it's working
  res.setHeader('Content-Type', 'application/json'); // Requires application/json MIME type
  res.send(JSON.stringify({
    "speech": response, "displayText": response
    // "speech" is the spoken version of the response, "displayText" is the visual version
  }));
};
Here's the second one:
/*
HTTP Cloud Function.
@param {Object} req Cloud Function request context.
@param {Object} res Cloud Function response context.
*/
exports.helloHttp = function helloHttp (req, res) {
  const response = "This is a sample response from your webhook!"; // Default response from the webhook to show it's working
  res.setHeader('Content-Type', 'application/json'); // Requires application/json MIME type
  res.send(JSON.stringify({
    "speech": response, "displayText": response
    // "speech" is the spoken version of the response, "displayText" is the visual version
  }));
};
The name of my project is FirstBot.
I have created a bucket also.
The path of my FirstBot folder is C:\FirstBot. The index.js file is inside it.
I am following the tutorial at: https://api.ai/docs/getting-started/basic-fulfillment-conversation
Kindly help. I would be grateful!
Is the file in your .gitignore? If a .gcloudignore isn't specified, the .gitignore is used to ignore files.
Adding an empty .gcloudignore should fix this.
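For example, from the project root (a minimal sketch, using Windows cmd as in the question):
cd C:\FirstBot
type nul > .gcloudignore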
The error can also occur if you're not running the deploy command from the same folder where the index.js is located. I.e. it's just "file not found".
I solved it by installing Node.js and npm.
You're exceeding the max deployment size limit (see the Resource Limits documentation).
Check the contents of your project dir; you may have undesired stuff in there.
Adding a bucket to the project via the console solved it for me.
The command gcloud beta functions deploy FirstBot --stage-bucket [BUCKET_NAME] --trigger-http will deploy a function named FirstBot by creating a zip file with the contents of the directory from which you ran the command.
For the deployment to work you should:
Run the command when you're in C:\FirstBot, or
Add --source C:\FirstBot to your gcloud invocation to let gcloud know that your source code is in a different directory.
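For example, the second option might look like this (a sketch; [BUCKET_NAME] is a placeholder as above):
gcloud beta functions deploy FirstBot --source C:\FirstBot --stage-bucket [BUCKET_NAME] --trigger-http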
I was having a similar issue, and the solution for me was pretty easy to apply but hard to find.
In case it helps someone: I noticed that my .gcloudignore had this line
#!include:.gitignore
and my .gitignore these two
# Compiled JavaScript files
**/*.js
**/*.js.map
Therefore .js files were being ignored and gcloud could not find index.js.
As I don't want compiled files to be in the repository, I ended up removing the include from .gcloudignore and adding the specific ignores directly, such as node_modules and sensitive data.
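For illustration, the resulting .gcloudignore might look something like this (a sketch; the exact entries depend on your project):
.git
.gitignore
node_modules
.env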
Using Firebase tools 11.21.0, FIREBASE_STORAGE_EMULATOR_HOST=localhost:9199, and this Maven dependency:
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-storage</artifactId>
<version>2.17.1</version>
</dependency>
I started the Firebase emulator and tried a simple file store:
emulatorStorage = StorageOptions.newBuilder()
.setProjectId(projectId)
.setHost("http://localhost:9199")
.setCredentials(NoCredentials.getInstance())
.build()
.getService();
And tried to save a file:
byte[] compress = "test".getBytes();
Blob blob = emulatorStorage.create(
    BlobInfo.newBuilder(index, filename)
        .setContentType("text/plain")
        .build(),
    compress,
    Storage.BlobTargetOption.doesNotExist());
but even with the content type set I get this every time:
com.google.cloud.storage.StorageException: Failed to parse multipart request body part. Missing content type.
at com.google.cloud.storage.StorageException.translate(StorageException.java:163)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:297)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.create(HttpStorageRpc.java:379)
at com.google.cloud.storage.StorageImpl.lambda$internalCreate$2(StorageImpl.java:208)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:103)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.storage.Retrying.run(Retrying.java:60)
at com.google.cloud.storage.StorageImpl.run(StorageImpl.java:1476)
at com.google.cloud.storage.StorageImpl.internalCreate(StorageImpl.java:205)
at com.google.cloud.storage.StorageImpl.create(StorageImpl.java:151)
and through debugging I know that it is talking to the local emulator:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
POST http://localhost:9199/upload/storage/v1/b/demo-project.appspot.com/o?ifGenerationMatch=0&projection=full&uploadType=multipart
{
"code" : 400,
"message" : "Failed to parse multipart request body part. Missing content type."
}
What am I missing in the save operation? Is the content type wrong, or is this likely a bug in the emulator or a compatibility issue with the cloud-storage libs?
I have found the GitHub issue you raised and followed the steps from there with both Firebase v11.21.0 and v11.19.0, but I can successfully upload video files using firebase emulators:start --project demo-project --debug.
As per our conversation in the comments above, it seems you have mistaken firebase-tools-linux for a command. The doc you followed to set up Firebase for Linux just names the downloadable file firebase-tools-linux; it is only a file name and does not mean that on Linux machines you have to use firebase-tools-linux as the command. If you look at step 3 in the doc you shared, it points to logging in and testing the CLI, where you use firebase login only. Hence, try the firebase emulators:start --project demo-project --debug command.
Steps I have taken
Step 1:
Cloned the source code from the GitHub repo you shared and changed directory to firebase-emulator-debug.
Step 2:
Ran the below command:
`firebase emulators:start --project demo-project --debug`.
Step 3:
Successfully uploaded a 2.3 MB video file via the emulator.
FYI, I also used a Linux machine for the above steps.
As mentioned by @Gridcell Coder, the Cloud Storage for Firebase emulator only supports a very small subset of the Cloud API, and it is intended to only be used via the firebase-admin package. The Admin SDK is not yet supported for Cloud Storage for Firebase.
I know there are a lot of similar threads discussing similar issues, but I still haven't found any solution, maybe because my case is slightly different.
So I have this error when running tests using Mocha against the Firebase emulator:
Error: Bucket name not specified or invalid. Specify a valid bucket name via the storageBucket option when initializing the app, or specify the bucket name explicitly when calling the getBucket() method.
import * as admin from "firebase-admin";
export const app = admin.initializeApp();
const storage = app.storage();
const defaultBucket = storage.bucket(); // it seems the error is here
I am using Cloud Functions, so I assume initializeApp() with no arguments is fine according to the documentation here, but I get that error when I run the test in the emulator.
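For reference, the storageBucket option that the error message refers to would be passed explicitly like this (a minimal sketch; the bucket name is a placeholder for your project's default bucket):
import * as admin from "firebase-admin";

// Sketch only: name the default bucket explicitly instead of relying on
// the FIREBASE_CONFIG environment variable being present in the test run.
export const app = admin.initializeApp({
  storageBucket: "my-project-id.appspot.com", // placeholder
});
const defaultBucket = app.storage().bucket();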
My Mocha test script is like this:
export FIRESTORE_EMULATOR_HOST="localhost:8080" && export FIREBASE_AUTH_EMULATOR_HOST="localhost:9099" && mocha -r ts-node/register src/tests/cloud_function_tests --recursive --extension .test.ts --timeout 60000 --exit
It seems that the error only appears when I perform the testing; if I run the code above in the emulator (without Mocha testing), that error never occurs.
I am using
Node 14
firebase-admin: 9.6.0
firebase-functions: 3.13.2
firebase tools: 9.10.0
So... this morning... I got an email saying:
Our records show that you own projects with App Engine applications or
Cloud Functions that are still calling the pre-GA v0.1 and v1beta1
endpoints of the App Engine and Cloud Functions metadata server.
We’re writing to let you know that these legacy endpoints are
scheduled to be turned down on April 30, 2020. After April 30, 2020,
requests to the v0.1 and v1beta1 endpoints will no longer be
supported, and may return HTTP 404 NOT FOUND responses.
I'm only using Firebase Functions to send messages... and the email went on to identify my sendMessage function as the culprit. But I can't... for the life of me... figure out WHERE I need to update the endpoints. My sendMessage function is as follows:
exports.sendMessage = functions.database.ref('/messages/{receiverUid}/{senderUid}/{msgId}')
.onWrite(async (change, context) => {
const message = change.after.val().body;
const receiverUid = change.after.val().receiverUid;
const senderUid = change.after.val().senderUid;
const msgId = change.after.val().msgId;
if (!change.after.val()) {
return console.log('Sender ', senderUid, 'receiver ', receiverUid, 'message ', message);
}
console.log('We have a new message: ', message, 'for: ', receiverUid);
I've tried following some of the Curl suggestions from this link: https://cloud.google.com/compute/docs/migrating-to-v1-metadata-server
...but every time I try one of them I get:
curl: (6) Couldn't resolve host 'metadata.google.internal'
So... at this point... I have no idea what it is I'm supposed to change or where I'm supposed to look. Any help would be appreciated.
I had this same problem, and didn't see any of the libraries I was using listed here.
In my case, the culprit turned out to be firebase-admin. I was using version 7.3.0, and I found this gem:
$ grep -rni "computeMetadata/" *
firebase-admin/lib/auth/credential.js:30:var GOOGLE_METADATA_SERVICE_PATH = '/computeMetadata/v1beta1/instance/service-accounts/default/token';
So, I updated my Cloud Functions libraries as shown here:
npm install firebase-functions@latest --save
npm install firebase-admin@latest --save-exact
and then, voila!
$ grep -rni "computeMetadata/" *
node_modules/firebase-admin/lib/auth/credential.js:30:var GOOGLE_METADATA_SERVICE_PATH = '/computeMetadata/v1/instance/service-accounts/default/token';
Then I redeployed and problem solved.
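If you want to confirm which versions actually ended up installed (a quick check, not part of the original steps):
$ npm ls firebase-admin firebase-functions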
I searched the https://github.com/firebase/firebase-functions repo at the latest version (3.3.0), and I found the file spec/fixtures/https.ts. Inside this file there are some mock functions which use the old /computeMetadata/v1beta1 endpoint.
This might mean that the firebase-functions package should be updated to use the /computeMetadata/v1 endpoint instead.
Fwiw I found this old dependency in package.json was dragging in other very old packages:
"#google-cloud/functions-emulator": "^1.0.0-beta.6",
In particular it brought in gcs-resumable-upload v 0.10.2, which is below the v 0.13.0 recommended by google (see https://cloud.google.com/compute/docs/migrating-to-v1-metadata-server#apps-to-update). Probably others too.
The fix was to either:
remove @google-cloud/functions-emulator, or
switch to its modern replacement, @google-cloud/functions-framework
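A hedged command-line sketch, assuming the package is listed in your package.json (the first line covers option 1; both lines together cover option 2):
$ npm uninstall @google-cloud/functions-emulator
$ npm install --save-dev @google-cloud/functions-framework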
I have a Meteor app and want to call a server method from the command line, so that I can write a bash script to perform scheduled operations.
Is there any way to either call a method directly, or submit a form which will then trigger server-side code?
I've tried using curl to call a method, but either it's not possible or I'm missing something basic. This doesn't work:
curl "http://localhost:3000/Meteor.call('myMethod')"
nor does:
curl -s -d "http://localhost:3000/imports/api/test.js" > out.html
where test.js:
var test = function(){
console.log('hello');
}
I thought of using a form but I can't think how to create a submit event because the Meteor client uses template events that then call server methods.
I'll be very grateful for any help! This feels like it should be a simple thing but has me stumped.
Edit: I've also tried phantomjs and slimerjs as run through casperjs.
phantomjs is no longer maintained and generates an error:
TypeError: Attempting to change the setter of an unconfigurable property.
https://github.com/casperjs/casperjs/issues/1935
slimerjs errors with Firefox 60 and I can't figure out how to 'downgrade' back to the supported 59, and the option to disable automatic updates of Firefox no longer seems to exist. The error is:
c is undefined
https://github.com/laurentj/slimerjs/issues/694
You could make use of the node ddp package to call the Meteor method from your own js file created for this specific purpose. From there you can pipe all output to wherever you want.
Let's assume the following Meteor method:
Meteor.methods({
'myMethod'() {
console.log("hello console")
return "hello result"
}
})
The upcoming steps will let you call this method from another shell, assuming your Meteor application is running.
1. Install ddp in your global npm directory
$ meteor npm install -g ddp
2. Create the script to call your method in your test directory
$ mkdir -p ddptest
$ cd ddptest
$ touch ddptest.js
Place the ddp script code into the file with the editor or command of your choice.
(The following code is freely taken from the package's readme. Feel free to configure it to your needs.)
ddptest/ddptest.js
var DDPClient = require(process.env.DDP_PATH);
var ddpclient = new DDPClient({
// All properties optional, defaults shown
host : "localhost",
port : 3000,
ssl : false,
autoReconnect : true,
autoReconnectTimer : 500,
maintainCollections : true,
ddpVersion : '1', // ['1', 'pre2', 'pre1'] available
// uses the SockJs protocol to create the connection
// this still uses websockets, but allows to get the benefits
// from projects like meteorhacks:cluster
// (for load balancing and service discovery)
// do not use `path` option when you are using useSockJs
useSockJs: true,
// Use a full url instead of a set of `host`, `port` and `ssl`
// do not set `useSockJs` option if `url` is used
// url: 'wss://example.com/websocket'
});
ddpclient.connect(function(error, wasReconnect) {
// If autoReconnect is true, this callback will be invoked each time
// a server connection is re-established
if (error) {
console.log('DDP connection error!');
console.error(error)
return;
}
if (wasReconnect) {
console.log('Reestablishment of a connection.');
}
console.log('connected!');
setTimeout(function () {
/*
* Call a Meteor Method
*/
ddpclient.call(
'myMethod', // name of the Meteor Method being called
['foo', 'bar'], // parameters to send to Meteor Method
function (err, result) { // callback which returns the method call results
console.log('called function, result: ' + result);
ddpclient.close();
},
function () { // callback which fires when server has finished
console.log('updated'); // sending any updated documents as a result of
console.log(ddpclient.collections.posts); // calling this method
}
);
}, 3000);
});
The code assumes that your app runs on localhost:3000; note that there is no connection close on errors or undesired behavior.
As you can see at the top, the file imports your globally installed ddp package. Now, in order to get its path without using additional tools, just pass an environment variable (process.env.DDP_PATH) and let your shell handle the path resolution.
In order to get the installation path you can use npm root with the global flag.
Finally call your script via:
$ DDP_PATH=$(meteor npm root -g)/ddp meteor node ddptest.js
Which will give you the following output:
connected!
updated
undefined
called function, result: hello result
And logs hello console to the open session that is running your meteor app.
Edit: A note on using this in production
If you want to use this script in production, you have to run the shell commands without the meteor prefix, using your installation of node and npm directly.
If you get in trouble with paths use process.execPath to find your node binary and npm root -g to find your global npm modules.
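A hedged sketch of what that might look like on the server (paths depend on your installation):
$ DDP_PATH=$(npm root -g)/ddp node ddptest.js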
You can check out this documentation: Command Line | meteor shell.
While your meteor app is running, you can execute meteor shell to start an interactive console. In the console, you can do Meteor.call(...).
So if you want to write a script with using meteor shell, you might need to pipe the script file for meteor shell. Like,
$ meteor shell < script_file
See also the answer to "How can I pipe a command into the meteor shell?"
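For example, a script_file could contain just the method call (a minimal sketch reusing the myMethod example above; depending on timing you may need to keep the shell open for the async callback):
Meteor.call('myMethod', function (err, result) {
  console.log(err, result);
});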
So I am running the sample code provided by Google:
package com.neat.backend;
/**
* An endpoint class we are exposing
*/
@Api(
    name = "myApi",
    version = "v1",
    namespace = @ApiNamespace(
        ownerDomain = "backend.neat.com",
        ownerName = "backend.neat.com",
        packagePath = ""
    ),
    issuers = {
        @ApiIssuer(
            name = "firebase",
            issuer = "https://securetoken.google.com/" + PROJECT_ID,
            jwksUri = "https://www.googleapis.com/robot/v1/metadata/x509/securetoken@system.gserviceaccount.com")
    })
public class MyEndpoint {
    @ApiMethod(
        path = "firebase_user",
        httpMethod = ApiMethod.HttpMethod.GET,
        authenticators = {EspAuthenticator.class},
        issuerAudiences = {@ApiIssuerAudience(name = "firebase", audiences = {PROJECT_ID})}
    )
    public Email getUserEmailFirebase(User user) throws UnauthorizedException {
        if (user == null) {
            throw new UnauthorizedException("Invalid credentials");
        }
        Email response = new Email(user.getEmail());
        return response;
    }
}
I am getting a Firebase token from my Android client and trying to send it to the backend with:
curl -H "Authorization: Bearer FIREBASE_JWT_TOKEN" \
-X GET \
http://localhost:8080/_ah/api/echo/v1/firebase_user
The error I see in the logs is the following:
[INFO] java.lang.IllegalStateException: method_info is not set in the request
[INFO] at com.google.api.server.spi.auth.EspAuthenticator.authenticate(EspAuthenticator.java:67)
[INFO] at com.google.api.server.spi.request.Auth.authenticate(Auth.java:100)
[INFO] at com.google.api.server.spi.request.ServletRequestParamReader.getUser(ServletRequestParamReader.java:191)
[INFO] at com.google.api.server.spi.request.ServletRequestParamReader.deserializeParams(ServletRequestParamReader.java:136)
[INFO] at com.google.api.server.spi.request.RestServletRequestParamReader.read(RestServletRequestParamReader.java:123)
[INFO] at com.google.api.server.spi.SystemService.invokeServiceMethod(SystemService.java:350)
[INFO] at com.google.api.server.spi.handlers.EndpointsMethodHandler$RestHandler.handle(EndpointsMethodHandler.java:114)
[INFO] at com.google.api.server.spi.handlers.EndpointsMethodHandler$RestHandler.handle(EndpointsMethodHandler.java:102)
[INFO] at com.google.api.server.spi.dispatcher.PathDispatcher.dispatch(PathDispatcher.java:49)
[INFO] at com.google.api.server.spi.EndpointsServlet.service(EndpointsServlet.java:71)
[INFO] at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
I have tried deploying the exact same code to App Engine and it works perfectly. I have tried debugging EspAuthenticator and it seems that it is expecting the Servlet filters to inject some attributes in the request.
It took me a while and a bit of debugging to realize that the filter that is supposed to inject method_info was not being fired.
I could fix it by modifying the mapping in web.xml adding the following dispatcher tags:
<filter-mapping>
<filter-name>endpoints-api-configuration</filter-name>
<servlet-name>EndpointsServlet</servlet-name>
<dispatcher>REQUEST</dispatcher>
<dispatcher>INCLUDE</dispatcher>
<dispatcher>FORWARD</dispatcher>
</filter-mapping>
Then generate and deploy the OpenAPI configuration file, and run the app:
$ mvn clean package endpoints-framework:openApiDocs -DskipTests
$ gcloud endpoints services deploy target/openapi-docs/openapi.json
$ mvn appengine:run
I got the same error message, and eventually tracked it down to making a request with a trailing / in the URL where my API did not specify one. The trailing slash only causes an error for calls that provide an authorization token.
Apparently ControlFilter (and therefore also GoogleAppEngineControlFilter) did not recognize it as a valid endpoint and therefore did not bother attaching method_info to the request. But then EndpointsServlet thought it was valid and tried to authenticate without all the needed info!
The fix was easy: remove the trailing slash from the URL in my request. Tracking down the problem, however, was not! I see this was not your problem, but maybe this answer will help someone else.
@Kevendra's answer highlights that this issue can be caused if an openapi.json file is missing a reference to the endpoint API method. Firebase may be using this to reference and discover the API method.
From the Google OpenAPI Overview:
Basic structure of an OpenAPI document: An OpenAPI document describes the surface of your REST API and defines information such as the name and description of the API, the individual endpoints (paths) in the API, and how the callers are authenticated.
Follow these steps to regenerate and deploy the openapi.json file:
generate:
$ mvn clean package endpoints-framework:openApiDocs -DskipTests
mvn clean runs a Maven goal to clean your project; package compiles and packages it.
endpoints-framework:openApiDocs generates the OpenAPI documents. This will generate the openapi.json file at the location target/openapi-docs/openapi.json (see generating and deploying an API configuration).
-DskipTests skips running any tests, to avoid any test failure due to the openapi.json not yet being generated.
deploy:
(As a precaution, you can first check the project ID returned by gcloud config list project to make sure the service isn't created in the wrong project.)
$ gcloud endpoints services deploy target/openapi-docs/openapi.json
Deploys the openapi.json file from its generated location (in the 'generate' step above). See the Google docs on Deploying the OpenAPI document
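After deploying, you can check that the service and its configuration are registered (a quick verification, not part of the original steps; [SERVICE_NAME] is your Endpoints service name):
$ gcloud endpoints services list
$ gcloud endpoints configs list --service=[SERVICE_NAME]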