So I am running the sample code provided by Google:
package com.neat.backend;

import com.google.api.server.spi.auth.EspAuthenticator;
import com.google.api.server.spi.auth.common.User;
import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiIssuer;
import com.google.api.server.spi.config.ApiIssuerAudience;
import com.google.api.server.spi.config.ApiMethod;
import com.google.api.server.spi.config.ApiNamespace;
import com.google.api.server.spi.response.UnauthorizedException;

// PROJECT_ID is assumed to be a String constant defined elsewhere in the project.

/**
 * An endpoint class we are exposing
 */
@Api(
    name = "myApi",
    version = "v1",
    namespace = @ApiNamespace(
        ownerDomain = "backend.neat.com",
        ownerName = "backend.neat.com",
        packagePath = ""
    ),
    issuers = {
        @ApiIssuer(
            name = "firebase",
            issuer = "https://securetoken.google.com/" + PROJECT_ID,
            jwksUri = "https://www.googleapis.com/robot/v1/metadata/x509/securetoken@system.gserviceaccount.com")
    })
public class MyEndpoint {
    @ApiMethod(
        path = "firebase_user",
        httpMethod = ApiMethod.HttpMethod.GET,
        authenticators = {EspAuthenticator.class},
        issuerAudiences = {@ApiIssuerAudience(name = "firebase", audiences = {PROJECT_ID})}
    )
    public Email getUserEmailFirebase(User user) throws UnauthorizedException {
        if (user == null) {
            throw new UnauthorizedException("Invalid credentials");
        }
        Email response = new Email(user.getEmail());
        return response;
    }
}
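The Email response bean is not shown in the sample; a minimal hypothetical sketch, just enough to make the snippet compile, could look like this:
/**
 * Hypothetical response bean holding the authenticated user's email
 * (not part of the code posted above).
 */
public class Email {
    private String email;

    public Email() {
    }

    public Email(String email) {
        this.email = email;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }
}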
I am getting a Firebase token from my Android client and trying to send it to the backend with:
curl -H "Authorization: Bearer FIREBASE_JWT_TOKEN" \
-X GET \
http://localhost:8080/_ah/api/echo/v1/firebase_user
The error I see in the logs is the following:
[INFO] java.lang.IllegalStateException: method_info is not set in the request
[INFO] at com.google.api.server.spi.auth.EspAuthenticator.authenticate(EspAuthenticator.java:67)
[INFO] at com.google.api.server.spi.request.Auth.authenticate(Auth.java:100)
[INFO] at com.google.api.server.spi.request.ServletRequestParamReader.getUser(ServletRequestParamReader.java:191)
[INFO] at com.google.api.server.spi.request.ServletRequestParamReader.deserializeParams(ServletRequestParamReader.java:136)
[INFO] at com.google.api.server.spi.request.RestServletRequestParamReader.read(RestServletRequestParamReader.java:123)
[INFO] at com.google.api.server.spi.SystemService.invokeServiceMethod(SystemService.java:350)
[INFO] at com.google.api.server.spi.handlers.EndpointsMethodHandler$RestHandler.handle(EndpointsMethodHandler.java:114)
[INFO] at com.google.api.server.spi.handlers.EndpointsMethodHandler$RestHandler.handle(EndpointsMethodHandler.java:102)
[INFO] at com.google.api.server.spi.dispatcher.PathDispatcher.dispatch(PathDispatcher.java:49)
[INFO] at com.google.api.server.spi.EndpointsServlet.service(EndpointsServlet.java:71)
[INFO] at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
I have tried deploying the exact same code to App Engine and it works perfectly. I have tried debugging EspAuthenticator and it seems that it is expecting the Servlet filters to inject some attributes in the request.
It took me a while and a bit of debugging to realize that the filter that is supposed to inject method_info was not being fired.
I could fix it by modifying the mapping in web.xml, adding the following dispatcher tags:
<filter-mapping>
<filter-name>endpoints-api-configuration</filter-name>
<servlet-name>EndpointsServlet</servlet-name>
<dispatcher>REQUEST</dispatcher>
<dispatcher>INCLUDE</dispatcher>
<dispatcher>FORWARD</dispatcher>
</filter-mapping>
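For reference, that filter-mapping refers to the endpoints-api-configuration filter declared elsewhere in web.xml; a sketch of the matching declaration (class name as in Google's documented Endpoints Framework web.xml sample, so verify it against your own file):
<filter>
    <filter-name>endpoints-api-configuration</filter-name>
    <filter-class>com.google.api.control.ServiceManagementConfigFilter</filter-class>
</filter>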
Generate and deploy the OpenAPI configuration file:
$ mvn clean package endpoints-framework:openApiDocs -DskipTests
$ gcloud endpoints services deploy target/openapi-docs/openapi.json
$ mvn appengine:run
I got the same error message, and eventually tracked it down to making a request with a trailing / in the URL where my API did not specify one. The trailing slash only causes an error for calls that provide an authorization token.
Apparently ControlFilter (and therefore also GoogleAppEngineControlFilter) did not recognize it as a valid endpoint and therefore did not bother attaching method_info to the request. But then EndpointsServlet thought it was valid and tried to authenticate without all the needed info!
The fix was easy: remove the trailing slash from the URL in my request. Tracking down the problem, however, was not! I see this was not your problem, but maybe this answer will help someone else.
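To illustrate with the request from the question (same endpoint, the only difference is the trailing slash):
# fails with "method_info is not set" when an Authorization header is present
curl -H "Authorization: Bearer FIREBASE_JWT_TOKEN" \
-X GET \
http://localhost:8080/_ah/api/echo/v1/firebase_user/
# works: the URL matches the declared path exactly
curl -H "Authorization: Bearer FIREBASE_JWT_TOKEN" \
-X GET \
http://localhost:8080/_ah/api/echo/v1/firebase_user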
@Kevendra's answer highlights that this issue can be caused if an openapi.json file is missing a reference to the endpoint API method. Firebase may be using this to reference and discover the API method.
From the Google OpenAPI Overview, on the basic structure of an OpenAPI document:
An OpenAPI document describes the surface of your REST API, and defines information such as:
- The name and description of the API.
- The individual endpoints (paths) in the API.
- How the callers are authenticated.
Follow these steps to regenerate and deploy the openapi.json file:
generate:
$ mvn clean package endpoints-framework:openApiDocs -DskipTests
- mvn clean - runs a Maven goal to clean your project.
- package - compiles and packages it.
- endpoints-framework:openApiDocs - generates the OpenAPI documents. This creates the openapi.json file at target/openapi-docs/openapi.json - see generating and deploying an API configuration.
- -DskipTests - skips running any tests, to avoid test failures due to the openapi.json not yet being generated.
deploy:
(As a precaution, you can first check the project ID returned by gcloud config list project to make sure the service isn't created in the wrong project.)
$ gcloud endpoints services deploy target/openapi-docs/openapi.json
This deploys the openapi.json file from its generated location (in the 'generate' step above). See the Google docs on deploying the OpenAPI document.
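For orientation, the generated openapi.json should contain an entry for the method above, roughly like this trimmed sketch (field values such as the operationId and the security-definition name are illustrative and will differ in your project):
{
  "paths": {
    "/myApi/v1/firebase_user": {
      "get": {
        "operationId": "MyApiGetUserEmailFirebase",
        "security": [ { "firebase": [] } ]
      }
    }
  },
  "securityDefinitions": {
    "firebase": {
      "type": "oauth2",
      "flow": "implicit",
      "authorizationUrl": "",
      "x-google-issuer": "https://securetoken.google.com/PROJECT_ID",
      "x-google-jwks_uri": "https://www.googleapis.com/robot/v1/metadata/x509/securetoken@system.gserviceaccount.com",
      "x-google-audiences": "PROJECT_ID"
    }
  }
}
If the path for your method is missing from this file, regenerate and redeploy it as described above.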
I was trying the free trial version of the reCAPTCHA solver by Apify on the following page: https://www.google.com/recaptcha/api2/demo
Input :
{
"key": "apify_api_KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK",
"webUrl": "https://www.google.com/recaptcha/api2/demo",
"siteKey": "6Le-wvkSAAAAAPBMRTvw0Q4Muexq9bi0DJwx_mJ-"
}
I tried running it with the above input, but I get the following error:
2022-08-14T10:13:34.970Z ACTOR: Pulling Docker image from repository.
2022-08-14T10:13:35.109Z ACTOR: Creating Docker container.
2022-08-14T10:13:35.168Z ACTOR: Starting Docker container.
2022-08-14T10:13:36.323Z
2022-08-14T10:13:36.327Z WARNING: The npm start script not found in package.json. Using "node main.js" instead. Please update your package.json file. For more information see https://github.com/apifytech/apify-cli/blob/master/MIGRATIONS.md
2022-08-14T10:13:36.330Z
2022-08-14T10:13:38.579Z Solving re-captcha with Anticaptcha: https://www.google.com/recaptcha/api2/demo
2022-08-14T10:13:38.780Z User function threw an exception:
2022-08-14T10:13:38.782Z Account authorization key not found in the system
You need to have an anti-captcha subscription to be able to use it: https://apify.com/petr_cermak/anti-captcha-recaptcha
The "key" input must be your anti-captcha key:
"key": ANTI_CAPTCHA_KEY
You are trying to use your Apify API key to authorize access to the anti-captcha.com service, which is why it fails with the error Account authorization key not found in the system.
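For example, the actor input would look something like this (the key value is a placeholder for an anti-captcha.com API key, not an Apify token):
{
  "key": "YOUR_ANTI_CAPTCHA_KEY",
  "webUrl": "https://www.google.com/recaptcha/api2/demo",
  "siteKey": "6Le-wvkSAAAAAPBMRTvw0Q4Muexq9bi0DJwx_mJ-"
}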
We moved to a new authenticated Nexus to act as a proxy to get dependencies.
I've tried to give SBT (1.1.1) the credentials it needs, in multiple ways, but I always end up getting:
[error] Unable to find credentials for [Sonatype Nexus Repository Manager @ nexus3.company.com]
[debug] CLIENT ERROR: Unauthorized url=https://nexus3.company.com/repository/maven2-proxy-all/org/scala-sbt/actions_2.12/1.1.1/actions_2.12-1.1.1.pom
It's repeated for a lot of dependencies.
I've created a .credentials file in my project as follows:
realm=Sonatype Nexus Repository Manager
host=nexus3.company.com
user=xxxxx
password=xxxxx
Here's what I've tried, based on inputs I got from other threads on the internet:
- Adding the path to this credentials file in the command: -Dsbt.boot.credentials=.credentials
- Adding the path to this credentials file to an environment variable: $SBT_CREDENTIALS = PATH
- Adding the following line in the build.sbt: credentials += Credentials(new File(".credentials"))
- Adding the following line in the build.sbt: credentials += Credentials("Sonatype Nexus Repository Manager", "nexus3.company.com", "xxxxx", "xxxxxx")
- Checking what's going on with a proxy: my requests don't seem to have any authorization header and all come back as HTTP 401
And yet, when I access the URL mentioned from the same machine, with the credentials in the file, there is no issue at all.
I'm running out of ideas here :(
After more attempts, adding a ~/.sbt/1.0/credentials.sbt file containing:
credentials += Credentials("Sonatype Nexus Repository Manager", "nexus3.company.com", USER, PWD)
AND the SBT_CREDENTIALS variable mentioned above seems to do the job.
I also updated the image we use for our pipelines, not sure if it helped.
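For reference, SBT_CREDENTIALS is expected to hold the path to the properties-style file (realm/host/user/password) shown in the question; for example (the path here is just an example, adjust to your setup):
export SBT_CREDENTIALS="$HOME/.sbt/.credentials"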
Hello, I want to use automatic deployment on Bitbucket to the Galaxy server with a deployment token.
For this reason I am creating a deployment token that is committed in the repository.
https://galaxy-guide.meteor.com/deploy-guide.html#deployment-token
To strengthen security, I would like to use repository variables in Bitbucket Pipelines:
https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html
and to store the Meteor deployment token in the variables instead of in a file.
For the deployment we use the following in the command:
METEOR_SESSION_FILE=deployment_token.json
My question is: is there any way to use some variable (string) holding the token, like
METEOR_SESSION_DEPLOYMENT_TOKEN=$METEOR_TOKEN
instead of reading it from a file?
After running into the same problem, some research brought me to this article, which solves the fact that you can't feed Meteor the JSON directly from an env var, in the following simple way:
Add the JSON file content as an env var and then echo it out into a file on deploy.
echo $METEOR_TOKEN_FILE > deploy_token.json
METEOR_SESSION_FILE=deploy_token.json
Thanks to this article I figured it out.
Save the JSON settings as an env variable and then, in the deployment process:
echo $DEPLOY_SESSION_FILE > deployment_token.json
METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy --allow-superuser myApp-staging.meteorapp.com --settings config/staging/settings.json --owner username
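For context, a bitbucket-pipelines.yml sketch of where this fits (the image, branch and app names are placeholders; DEPLOY_SESSION_FILE is assumed to be a secured repository variable holding the token JSON):
image: node:14
pipelines:
  branches:
    master:
      - step:
          script:
            - curl https://install.meteor.com/ | sh
            - echo $DEPLOY_SESSION_FILE > deployment_token.json
            - METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy --allow-superuser myApp-staging.meteorapp.com --settings config/staging/settings.json --owner username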
Is there a way to retrieve an artifact with the maven timestamp as it was originally uploaded by maven?
from jenkins logs:
Uploading: http://artifactory.foo/artifactory/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-20160504.182015-2.tar.gz
Results from artifactory REST api:
$ curl -X GET 'http://artifactory.foo/artifactory/api/search/gavc?g=com.foo&a=foo-web-service&v=1.16.0-SNAPSHOT&c=*&repos=libs-snapshot-local'
{
"results" : [ {
"uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT-sources.jar"
}, {
"uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT.pom"
}, {
"uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT.tar.gz"
}, {
"uri" : "http://artifactory.foo/artifactory/api/storage/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-1.16.0-SNAPSHOT.war"
} ]
}
I'd like to get the same name it was uploaded with, via wget or equivalent...
What I want to achieve:
jenkins uploads foo-web-service-1.16.0-20160504.182015-2.tar.gz to libs-snapshot-local
query REST api to get latest artifact link that includes the timestamps in the name with parameters a=foo-web-service&version=1.16.0&...
wget $artifact_link_with_timestamp
What I currently achieve, which does not satisfy my need:
jenkins uploads foo-web-service-1.16.0-20160504.182015-2.tar.gz to libs-snapshot-local
query REST api via gavc search with parameters a=foo-web-service&version=1.16.0&...
wget $artifact_link
Conclusion: as stated in the accepted answer, the problem was in the Artifactory config itself. To achieve what I wanted, I needed the snapshots to be unique.
As long as your repository is configured to use unique snapshots (or to use client snapshot policy and you use Maven 3 and up), you can always use the Maven timestamp as a version. Replacing it with -SNAPSHOT is a "runtime" trick to make the resolution easier.
If your repository is configured to use non-unique snapshots, the files are actually stored with -SNAPSHOT instead of version and override previous snapshots (don't do that).
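With unique snapshots enabled, one way to resolve the timestamped file name is Artifactory's latest-version search (a sketch using the host, group and artifact from the question; verify the API path against your Artifactory version):
# returns e.g. 1.16.0-20160504.182015-2
latest=$(curl -s 'http://artifactory.foo/artifactory/api/search/latestVersion?g=com.foo&a=foo-web-service&v=1.16.0-SNAPSHOT&repos=libs-snapshot-local')
wget "http://artifactory.foo/artifactory/libs-snapshot-local/com/foo/foo-web-service/1.16.0-SNAPSHOT/foo-web-service-${latest}.tar.gz"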
I am coding a Java project and I'm automating the build and the publishing to JFrog Artifactory using SBT.
Whenever it's time to publish to Artifactory I want to do it using the Ivy directory layout and obviously publish the Ivy XML file along with the jar. I managed to achieve this by using the following lines in the build.sbt file:
crossPaths := false
publishTo := Some("Artifactory Realm" at "http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my")
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
publishMavenStyle := false
However, it only works when anonymous users are allowed to deploy into Artifactory. I realized that sbt is not really passing my credentials to Artifactory but, instead, logging in as anonymous.
My $HOME/.ivy2/.credentials file looks like this:
realm=Artifactory Realm
host=http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my
user=<my user name>
password=<my password>
However, if I change the Artifactory configuration in order to prevent anonymous users from deploying new Artifacts, when I run "sbt publish" I get the following output:
[error] Unable to find credentials for [Artifactory Realm @ <Artifactory IP>].
java.io.IOException: Access to URL http://<Artifactory IP>:<Artifactory Port>/artifactory//org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar was refused by the server: Unauthorized
The Artifactory request.log file then contains:
20160219011657|319|REQUEST|10.0.2.2|anonymous|PUT|/org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar|HTTP/1.1|401|24978
I have also tried passing the credentials manually instead of using a file:
credentials += Credentials("Artifactory Realm", "localhost", "<USERNAME>", "<PASS>")
But I am getting the same result.
Any idea what I might be missing?
Try:
host=<Artifactory IP>
Old answer (doesn't work):
host=<Artifactory IP>:<Artifactory port>
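In other words, the corrected .credentials file would look like this (placeholders as in the question; the host entry is the bare hostname/IP, with no scheme, port or path):
realm=Artifactory Realm
host=<Artifactory IP>
user=<my user name>
password=<my password>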
I had a different problem: I had the wrong realm set in my .credentials file.
Looking at the error output from sbt, I was able to figure out that I should use:
realm=Artifactory Realm
The error shows the expected values for realm and host:
[error] Unable to find credentials for [Artifactory Realm @ myhost].