logfile endpoint does not find log path - spring-boot-actuator

We have a Log4j2.xml which configures a RollingFileAppender. Now I want to expose these logs with the logfile actuator endpoint.
If I specify logging.file in the application.yml like this
logging:
  file: target/logs/mylog.log
everything is fine.
However, this does not work (invoking the logfile-endpoint results in a 404):
logging:
  path: /target/logs
How can I get the logfile endpoint to show the mylog-1.log, mylog-2.log,... files?
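For reference, a minimal application.yml sketch that points the actuator at the rolling file explicitly; the management.endpoint.logfile.external-file property and the exposure block are assumptions (they depend on the Spring Boot version in use) and are not part of the original configuration:
management:
  endpoint:
    logfile:
      external-file: target/logs/mylog.log
  endpoints:
    web:
      exposure:
        include: logfile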

Related

Enabling account deletion on nats server

I was trying to prune some users from my nats server by doing:
nsc push --system-account SYS -u nats://localhost:4222 -P
but I got the following error:
server nats-comm-2 responded with error: delete accounts request by SOME_KEY_VALUE failed - delete must be enabled in server config
The meaning of the error is pretty obvious, when I examine the help documentation for nsc push -P:
Only works with nats-resolver enabled nats-server. Mutually exclusive of account-removal/diff
But I'm not sure how to enable this in my nats server config. How do I allow for account pruning?
I found documentation in the resolver section, here, showing that I could add allow_delete: true to the config, but as the YAML format is in camel-case, I had to modify it to be allowDelete: true instead.
nats:
  auth:
    enabled: true
    resolver:
      type: full
      allowDelete: true
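For reference, on a nats-server configured with a plain config file rather than this chart, the equivalent setting (a sketch based on the resolver documentation mentioned above; the dir value is a placeholder) would look something like:
resolver {
  type: full
  dir: "/path/to/jwt"
  allow_delete: true
}
Once the server is running with delete enabled, the original prune command can be retried:
nsc push --system-account SYS -u nats://localhost:4222 -P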

How to log request channel into a log file?

I am currently running a Symfony 5 project in dev environment.
I would like to output requests logs (like 10:01:39 request.INFO Matched route "login_route") into a file.
I have the following config/packages/dev/monolog.yaml file:
monolog:
  handlers:
    main:
      type: stream
      path: "%kernel.logs_dir%/%kernel.environment%.log"
      level: debug
      channels: [event]
With the YAML above, it logs correctly into the file /tmp/dev-logs/dev.log when I execute bin/console cache:clear.
But it does not log anything when I perform requests on the application, no matter whether I set channels: [request], channels: ~, or omit the channels parameter entirely.
How can I edit the settings of that monolog.yaml file in order to log request channel logs?
I have found the answer! This is very specific to my configuration.
In fact, I have two Docker containers (that both mount the project directory as a volume) for development:
one for code editing (with a linter, syntax checker, specific vim configuration, etc.)
one to access the application through HTTP using PHP-FPM (the one that is used when I make HTTP requests on the app)
So, when I run bin/console cache:clear from the first container I use for development, it logs into that container's /tmp/dev-logs/dev.log file; but when I make HTTP requests, it logs into the /tmp/dev-logs/dev.log file of the second container.
I was checking the file in the first container while the requests were being logged to the file in the second container. I was simply not looking at the right file.
Everything works. :)
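For anyone hitting the same thing, a quick way to confirm which file is actually being written is to tail the log inside the container that serves the HTTP requests (the container name below is a placeholder):
docker exec -it my-php-fpm-container tail -f /tmp/dev-logs/dev.log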

How to write airflow logs to Elasticsearch?

I am using Airflow 1.10.5. I can't seem to find complete documentation or a sample of how to set up remote logging using Elasticsearch. I saw the Airflow documentation about logging, but it wasn't helpful. I am trying to write the Airflow (not task) logs to ES.
As far as I understand the docs, the ES log handler can only read from ES. You would have to set up your logging to print to a file, then use something like filebeat to post the file contents to ES, and Airflow can then read them back...
https://airflow.readthedocs.io/en/stable/howto/write-logs.html#writing-logs-to-elasticsearch
Writing Logs to Elasticsearch
Airflow can be configured to read task logs from Elasticsearch and optionally write logs to stdout in standard or json format. These logs can later be collected and forwarded to the Elasticsearch cluster using tools like fluentd, logstash or others.
I was able to achieve this using the filebeat shipper.
Input config section in filebeat.yml
<snip>
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path/to/logs/*.log
</snip>
Output config section in filebeat.yml
<snip>
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "changeme"
</snip>
The documentation above is a good read, especially on getting logs from Airflow into ES.
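Before pointing filebeat at the Airflow logs, it is worth sanity-checking the configuration and the connection to Elasticsearch; recent filebeat versions ship with test subcommands for exactly that:
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml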

How to disable interception of errors by Ingress in a Tectonic kubernetes setup

I have a couple of NodeJS backends running as pods in a Kubernetes setup, with Ingress-managed nginx over it.
These backends are API servers, and can return 400, 404, or 500 responses during normal operations. These responses would provide meaningful data to the client; besides the status code, the response has a JSON-serialized structure in the body informing about the error cause or suggesting a solution.
However, Ingress will intercept these error responses, and return an error page. Thus the client does not receive the information that the service has tried to provide.
There's a closed ticket in the kubernetes-contrib repository suggesting that it is now possible to turn off error interception: https://github.com/kubernetes/contrib/issues/897. Being new to kubernetes/ingress, I cannot figure out how to apply this configuration in my situation.
For reference, this is the output of kubectl get ingress <ingress-name>: (redacted names and IPs)
Name:             ingress-name-redacted
Namespace:        default
Address:          127.0.0.1
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                        Path  Backends
  ----                        ----  --------
  public.service.example.com
                              /     service-name:80 (<none>)
Annotations:
  rewrite-target:         /
  service-upstream:       true
  use-port-in-redirects:  true
Events:  <none>
I have solved this on Tectonic 1.7.9-tectonic.4.
In the Tectonic web UI, go to Workloads -> Config Maps and filter by namespace tectonic-system.
In the config maps shown, you should see one named "tectonic-custom-error".
Open it and go to the YAML editor.
In the data field you should have an entry like this:
custom-http-errors: '404, 500, 502, 503'
which configures which HTTP response codes will be intercepted and shown with the custom Tectonic error page.
If you don't want some of those, just remove them, or clear them all.
It should take effect as soon as you save the updated config map.
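For example, to let 404 and 500 responses from the API pods pass through to clients while keeping the custom pages for gateway errors, the data entry would end up as follows (a sketch; keep whichever codes you still want intercepted):
data:
  custom-http-errors: '502, 503'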
Of course, you could do the same from the command line with kubectl edit:
$> kubectl edit cm tectonic-custom-error --namespace=tectonic-system
Hope this helps :)

Reload nginx config from salt state, only if the configtest passes

I've recently written a salt state which handles the nginx config for a number of servers from some static variables in pillar. I wanted to roll this out to all the servers, but before doing so I wanted to make sure the config is tested before it is applied on a server.
Nginx has an inbuilt configtest which I use frequently on command line, and I found that salt has an nginx module which can be used to run configtest.
I have the following in my state file:
reload-nginx:
  service.running:
    - enabled: True
    - reload: True
    - watch:
      - pkg: nginx
      - file: /etc/nginx/sites-available/*
      - file: /etc/nginx/nginx.conf
This should reload nginx if the config files change, or if the nginx install is upgraded/changed. I believe I can run a config test using the following in my state file (untested):
nginx-config-test:
  module.run:
    - name: nginx.configtest
And I believe that if I add this state to the watch list of the reload-nginx state, it would reload if the configtest passed.
However, I want the reload to happen only if either the config files have changed AND the config test passes, or nginx itself has changed AND the configtest passes. I see I can use onlyif to run a state if ALL of the listed checks are true, and from experience you can't have multiple uses of the same requisite (so I can't have 3 different onlyif's - correct me if I am wrong).
But I don't see any way to reload nginx only if the config files have changed (or nginx has been updated) and the configtest has passed.
Is this possible?
Have the reload state watch the config-test state; have the config-test state watch the config files state and the pkg state. The test will only run if something changes, and the reload will only occur if the test runs and passes.
Caveat: Structurally this will work, but I've never used nginx.configtest, so I can't promise it behaves the way you think.
You will also need to use module.wait rather than module.run; watch statements don't work with .run. Reference here.
So that becomes:
reload-nginx:
  service.running:
    - name: nginx
    - enable: True
    - reload: True
    - watch:
      - module: nginx-config-test

nginx-config-test:
  module.wait:
    - name: nginx.configtest
    - watch:
      - file: /etc/nginx/sites-available/*
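If you also want the test (and therefore the reload) to trigger when nginx.conf changes or the nginx package itself is upgraded, the watch list on the test state can be extended with the same requisites used in the question, for example:
nginx-config-test:
  module.wait:
    - name: nginx.configtest
    - watch:
      - pkg: nginx
      - file: /etc/nginx/sites-available/*
      - file: /etc/nginx/nginx.conf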

Resources