JSON Logs in Google Cloud Logging using Winston in nodeJS - syslog

I am running a Node.js server on Ubuntu 14.04 in Google Compute Engine. I want to use Google Cloud Logging for my application, so I installed the Google fluentd logging agent as per https://cloud.google.com/logging/docs/agent/installation
I used winston and winston-syslog for writing logs. Here is the code.
var winston = require('winston');
var winstonsyslog = require('winston-syslog').Syslog;

var options = {
  json: true
};

winston.add(winston.transports.Syslog, options);
When I am writing a log using
winston.log('info', "27", { anything: 'This is metadata' });
I am getting
{
metadata: {…}
textPayload: "May 14 10:47:44 localhost node[7633]: {"anything":"This is metadata","level":"info","message":"27"}"
insertId: "..."
log: "syslog.local0.info"
}
How do I get structPayload instead of textPayload, so that the log is displayed as JSON instead of a string?

The Logging agent has its own configuration files, and most of them have format none (see https://github.com/GoogleCloudPlatform/fluentd-catch-all-config). Thus, all log lines go to textPayload.
The solution is to write your own fluentd configuration file and use fluent-plugin-google-cloud as the output. fluent-plugin-google-cloud should be installed as a gem directly, not via the Logging Agent. As long as your entry is valid JSON, you will see that entry in Stackdriver with structPayload set properly. Key/value filters work as well.
I have never used winston, but here is a sample configuration:
<source>
  @type tail
  format apache
  path /var/log/apache/access.log
  tag apache.access
</source>

<match **>
  @type google_cloud
  buffer_chunk_limit 10k
</match>
Notice the buffer_chunk_limit 10k; setting it too high, or leaving it at the default, may result in Google::Apis::ClientError badRequest: Request payload size exceeds the limit.
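For winston specifically, one possible approach (my own sketch, not taken from the answer above) is to have winston write JSON lines to a file with a File transport and tail that file with format json, so each entry reaches Stackdriver as structPayload. The path, pos_file and tag below are assumptions you would adapt to your setup:

<source>
  @type tail
  format json
  # path/pos_file/tag are placeholders - point them at wherever your app writes its JSON log lines
  path /var/log/myapp/app.log
  pos_file /var/lib/google-fluentd/pos/myapp.pos
  tag myapp
</source>

<match myapp>
  @type google_cloud
  buffer_chunk_limit 10k
</match>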

Related

kong error using deck: cannot create or update 'services' entities when not using a database

I have set up Kong in DB-less mode on RHEL by following the documentation below:
https://docs.konghq.com/gateway/latest/install-and-run/rhel/
Kong Gateway started successfully. Below is the configuration I added to the kong.conf file, where the database is turned off and the path to the declarative kong.yml is specified:
declarative_config = /temp/kong/kong.yml
database = off
Also, below is the current .yaml file, where I created a service using the link below:
https://docs.konghq.com/gateway/2.8.x/get-started/comprehensive/expose-services/
_format_version: "1.1"
services:
- host: mockbin.org
  name: example_service
  port: 80
  protocol: http
  routes:
  - name: mocking
    paths:
    - /mock
    strip_path: true
I have also installed decK to sync this declarative configuration.
However, when I use the deck sync command to add this service to Kong, I get the error below:
creating service example_service
Summary:
Created: 0
Updated: 0
Deleted: 0
Error: 1 errors occurred:
while processing event: {Create} service example_service failed: HTTP status 405 (message: "cannot create or update 'services' entities when not using a database")
I'd appreciate ideas on what could be wrong, as I believe we can create a service in DB-less mode, and I also think this is the declarative format that should work. Looking forward to hearing from you. Thanks
You are correct that we can create a service in DB-less mode; however, the approach is different.
If you already have the config file in YAML format, you can load it into Kong using the /config endpoint.
I also think that decK should be process-agnostic and usable in both DB and DB-less mode, but as it stands, loading the YAML config file via the /config endpoint looks like the best option, as sketched below.
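For example, a sketch of loading the file via the Admin API (this assumes the Admin API listens on localhost:8001 and the declarative file is ./kong.yml; adjust both to your setup):

# Assumption: Admin API on localhost:8001, declarative file named kong.yml
curl -X POST http://localhost:8001/config -F config=@kong.yml

A successful POST replaces the currently loaded declarative configuration, which is effectively what deck sync would have done against a database-backed node.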

Deploy on Meteor galaxy server with bitbucket and deployment token as variable

Hello, I want to use automatic deployment from Bitbucket to the Galaxy server with a deployment token.
For this reason I am creating a deployment token that is committed in the repository.
https://galaxy-guide.meteor.com/deploy-guide.html#deployment-token
To strengthen security I would like to use repository variables in Bitbucket Pipelines:
https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html
and to store the Meteor deployment token in those variables instead of in a file.
For the deployment we use the following in the command:
METEOR_SESSION_FILE=deployment_token.json
My question is: is there any way to use a variable (string) for the token, like
METEOR_SESSION_DEPLOYMENT_TOKEN=$METEOR_TOKEN
instead of reading it from a file?
Some research, after having the same problem, brought me to this article, which solves the problem (you can't feed Meteor just the JSON in an env var) in a simple way:
add the JSON file content as an env var and then echo it out into a file on deploy.
echo $METEOR_TOKEN_FILE > deploy_token.json
METEOR_SESSION_FILE=deploy_token.json
Thanks to this article I figured it out.
Save the session JSON as an env variable and then, in the deployment process:
echo $DEPLOY_SESSION_FILE > deployment_token.json
METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy --allow-superuser myApp-staging.meteorapp.com --settings config/staging/settings.json --owner username
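A minimal bitbucket-pipelines.yml sketch of that deployment step, assuming the session JSON is stored in a repository variable named DEPLOY_SESSION_FILE; the image, branch, and app names are placeholders:

# Hypothetical pipeline; adjust image, branch, variable and app names to your project
image: node:14

pipelines:
  branches:
    staging:
      - step:
          script:
            - curl https://install.meteor.com/ | sh
            - echo "$DEPLOY_SESSION_FILE" > deployment_token.json
            - METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy --allow-superuser myApp-staging.meteorapp.com --settings config/staging/settings.json --owner username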

Google Cloud Endpoints + Firebase Auth: method_info is not set

So I am running the sample code provided by Google:
package com.neat.backend;

import com.google.api.server.spi.auth.EspAuthenticator;
import com.google.api.server.spi.auth.common.User;
import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiIssuer;
import com.google.api.server.spi.config.ApiIssuerAudience;
import com.google.api.server.spi.config.ApiMethod;
import com.google.api.server.spi.config.ApiNamespace;
import com.google.api.server.spi.response.UnauthorizedException;

/**
 * An endpoint class we are exposing
 */
@Api(
    name = "myApi",
    version = "v1",
    namespace = @ApiNamespace(
        ownerDomain = "backend.neat.com",
        ownerName = "backend.neat.com",
        packagePath = ""
    ),
    issuers = {
        @ApiIssuer(
            name = "firebase",
            // PROJECT_ID is a String constant defined elsewhere in the project
            issuer = "https://securetoken.google.com/" + PROJECT_ID,
            jwksUri = "https://www.googleapis.com/robot/v1/metadata/x509/securetoken@system.gserviceaccount.com")
    })
public class MyEndpoint {
    @ApiMethod(
        path = "firebase_user",
        httpMethod = ApiMethod.HttpMethod.GET,
        authenticators = {EspAuthenticator.class},
        issuerAudiences = {@ApiIssuerAudience(name = "firebase", audiences = {PROJECT_ID})}
    )
    public Email getUserEmailFirebase(User user) throws UnauthorizedException {
        if (user == null) {
            throw new UnauthorizedException("Invalid credentials");
        }
        // Email is a simple response bean defined elsewhere in the project
        Email response = new Email(user.getEmail());
        return response;
    }
}
I am getting a Firebase token from my Android client and try to send it to the backend by:
curl -H "Authorization: Bearer FIREBASE_JWT_TOKEN" \
-X GET \
http://localhost:8080/_ah/api/echo/v1/firebase_user
The error I see in the logs is the following:
[INFO] java.lang.IllegalStateException: method_info is not set in the request
[INFO] at com.google.api.server.spi.auth.EspAuthenticator.authenticate(EspAuthenticator.java:67)
[INFO] at com.google.api.server.spi.request.Auth.authenticate(Auth.java:100)
[INFO] at com.google.api.server.spi.request.ServletRequestParamReader.getUser(ServletRequestParamReader.java:191)
[INFO] at com.google.api.server.spi.request.ServletRequestParamReader.deserializeParams(ServletRequestParamReader.java:136)
[INFO] at com.google.api.server.spi.request.RestServletRequestParamReader.read(RestServletRequestParamReader.java:123)
[INFO] at com.google.api.server.spi.SystemService.invokeServiceMethod(SystemService.java:350)
[INFO] at com.google.api.server.spi.handlers.EndpointsMethodHandler$RestHandler.handle(EndpointsMethodHandler.java:114)
[INFO] at com.google.api.server.spi.handlers.EndpointsMethodHandler$RestHandler.handle(EndpointsMethodHandler.java:102)
[INFO] at com.google.api.server.spi.dispatcher.PathDispatcher.dispatch(PathDispatcher.java:49)
[INFO] at com.google.api.server.spi.EndpointsServlet.service(EndpointsServlet.java:71)
[INFO] at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
I have tried deploying the exact same code to App Engine and it works perfectly. I have tried debugging EspAuthenticator and it seems that it is expecting the Servlet filters to inject some attributes in the request.
It took me a while and a bit of debugging to realize that the filter that is supposed to inject method_info was not being fired.
I could fix it by modifying the mapping in web.xml adding the following dispatcher tags:
<filter-mapping>
  <filter-name>endpoints-api-configuration</filter-name>
  <servlet-name>EndpointsServlet</servlet-name>
  <dispatcher>REQUEST</dispatcher>
  <dispatcher>INCLUDE</dispatcher>
  <dispatcher>FORWARD</dispatcher>
</filter-mapping>
Then generate and deploy the OpenAPI configuration file and run the app again:
$ mvn clean package endpoints-framework:openApiDocs -DskipTests
$ gcloud endpoints services deploy target/openapi-docs/openapi.json
$ mvn appengine:run
I got the same error message, and eventually tracked it down to making a request with a trailing / in the URL where my API did not specify one. The trailing slash only causes an error for calls that provide an authorization token.
Apparently ControlFilter (and therefore also GoogleAppEngineControlFilter) did not recognize it as a valid endpoint and therefore did not bother attaching method_info to the request. But then EndpointsServlet thought it was valid and tried to authenticate without all the needed info!
The fix was easy: remove the trailing slash from the URL in my request. Tracking down the problem, however, was not! I see this was not your problem, but maybe this answer will help someone else.
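For illustration, using the request from the question (a sketch; the exact URL depends on your API name and path):

# Fails with "method_info is not set" when an auth token is sent (note the trailing slash):
curl -H "Authorization: Bearer FIREBASE_JWT_TOKEN" http://localhost:8080/_ah/api/echo/v1/firebase_user/

# Works once the trailing slash is removed, matching the path declared in @ApiMethod:
curl -H "Authorization: Bearer FIREBASE_JWT_TOKEN" http://localhost:8080/_ah/api/echo/v1/firebase_user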
@Kevendra's answer highlights that this issue can be caused by an openapi.json file that is missing a reference to the endpoint API method. Firebase may be using this to reference and discover the API method.
From the Google OpenAPI Overview, under "Basic structure of an OpenAPI document":

An OpenAPI document describes the surface of your REST API, and defines information such as:
- The name and description of the API.
- The individual endpoints (paths) in the API.
- How the callers are authenticated.
Follow these steps to regenerate and deploy the openapi.json file:
generate:
$ mvn clean package endpoints-framework:openApiDocs -DskipTests
- mvn clean runs a Maven goal to clean your project; package compiles and packages it.
- endpoints-framework:openApiDocs generates the OpenAPI documents. This creates the openapi.json file at target/openapi-docs/openapi.json (see generating and deploying an API configuration).
- -DskipTests skips running any tests, to avoid test failures caused by openapi.json not yet being generated.
deploy:
(As a precaution, first check the project ID returned by gcloud config list project to make sure the service isn't created in the wrong project.)
$ gcloud endpoints services deploy target/openapi-docs/openapi.json
Deploys the openapi.json file from its generated location (in the 'generate' step above). See the Google docs on Deploying the OpenAPI document

SBT not passing credentials when publishing to Artifactory

I am coding a Java project and I'm automating the build and the publishing to JFrog Artifactory using SBT.
Whenever it's time to publish to Artifactory I want to do it using the Ivy directory layout and obviously publish the Ivy XML file along with the jar. I managed to achieve this by using the following lines in the build.sbt file:
crossPaths := false
publishTo := Some("Artifactory Realm" at "http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my")
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
publishMavenStyle := false
However it only works when anonymous users are allowed to deploy into Artifactory. I realized that sbt is not really passing my credentials to Artifactory but, instead, logging in as anonymous.
My $HOME/.ivy2/.credentials file looks like this:
realm=Artifactory Realm
host=http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my
user=<my user name>
password=<my password>
However, if I change the Artifactory configuration in order to prevent anonymous users from deploying new Artifacts, when I run "sbt publish" I get the following output:
[error] Unable to find credentials for [Artifactory Realm # <Artifactory IP>].
java.io.IOException: Access to URL http://<Artifactory IP>:<Artifactory Port>/artifactory//org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar was refused by the server: Unauthorized
The Artifactory request.log file then contains:
20160219011657|319|REQUEST|10.0.2.2|anonymous|PUT|/org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar|HTTP/1.1|401|24978
I have also tried passing the credentials manually instead of using a file:
credentials += Credentials("Artifactory Realm", "localhost", "<USERNAME>", "<PASS>")
But I am getting the same result.
Any idea what I might be missing?
try:
host=<Artifactory IP>
old answer (doesn't work):
host=<Artifactory IP>:<Artifactory port>
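In other words, the host entry should contain only the host name or IP, with no protocol, port, or repository path. A sketch of the complete ~/.ivy2/.credentials file under that assumption (values are placeholders):

realm=Artifactory Realm
# host must be just the host name or IP - no protocol, port, or path
host=<Artifactory IP>
user=<my user name>
password=<my password>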
I had a different problem: I had the wrong realm set on my .credentials file.
Looking at the error output from sbt, I was able to figure out that I should use:
realm=Artifactory Realm
Error shows the expected values for realm and host:
[error] Unable to find credentials for [Artifactory Realm # myhost].

Deploy script for Meteor.com hosting via Codeship

I'm using Meteor's built-in hosting for staging, with Codeship handling the continuous deployment. All tests and notifications succeed as expected in Codeship, but nothing is getting deployed.
My script:
expect -c "set timeout 60; spawn meteor deploy staging.myapp.com; expect “Email:” { send $METEOR_DEPLOY_EMAIL\r; expect eof } expect "Password:" { send $METEOR_DEPLOY_PASSWORD\r; expect eof }"
When that script runs during the build process I see the following:
spawn meteor deploy staging.myapp.com
=> Running Meteor from a checkout -- overrides project version (0.8.1)
To instantly deploy your app on a free testing server, just enter your
email address!
ail:
The ail: isn't a typo...that's what Codeship displays. It appears it eventually times out and moves on, though no errors are shown.
First time setting up a CI server (and using Expect), so thanks in advance for the help!
Figured it out...had two syntax issues:
- Left/right double quotation marks snuck in there (instead of standard quotation marks)
- A missing semicolon
So, for anyone looking for a script to deploy to *.meteor.com using Codeship, here is the working script:
expect -c "set timeout 60; spawn meteor deploy example.com; expect "Email:" { send $METEOR_DEPLOY_EMAIL\r; expect eof }; expect "Password:" { send $METEOR_DEPLOY_PASSWORD\r; expect eof }"
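If your shell still mangles the nested quotes, an equivalent sketch with the inner quotes escaped (same commands, only the quoting differs; untested) would be:

# Same script, with the inner double quotes escaped so they survive the outer quoting
expect -c "set timeout 60; spawn meteor deploy example.com; expect \"Email:\" { send $METEOR_DEPLOY_EMAIL\r; expect eof }; expect \"Password:\" { send $METEOR_DEPLOY_PASSWORD\r; expect eof }"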
