Flyway - are there debug levels?

On every migration, I don't want Flyway to list every file it can read. It does this twice, in two blocks, and with hundreds of files that is far too much unnecessary output:
...(first block, before doing anything, flyway scans the directory and subdirectories)
...(part one: "Found filesystem resource:" hundreds of times)
[DEBUG] Found filesystem resource: sql/Migrations/Released/Release_5.2/V05_02_00_13__#########.sql
...(immediately after)
...(part two: "Filtering out resource:" hundreds of times)
[DEBUG] Filtering out resource: sql/Migrations/Released/Release_5.2/V05_02_00_13__#########.sql (filename: V05_02_00_13__#########.sql)
...
...(real migration actions)
...
...(second block, after ALL things are done)
...(part one: "Found filesystem resource:" hundreds of times)
...(part two: "Filtering out resource:" hundreds of times)
...
I would like to have a few debug levels, so that the file listing only shows up at the deepest/highest debug level and I don't have to see it every time. I don't need it.
A flag/variable would be fine. What do you say? :-)

The
[DEBUG] Found filesystem resource
and
[DEBUG] Filtering out resource
messages are only printed when Flyway is run with the -X parameter:
flyway migrate -X
If you run it without the -X, you will only get the actual migration actions printed out.
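If you do need -X for other debugging output, a simple workaround (plain shell filtering, not a Flyway option) is to drop just those two message types with grep:
flyway migrate -X | grep -v -e 'Found filesystem resource' -e 'Filtering out resource'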

Related

How to log request channel into a log file?

I am currently running a Symfony 5 project in dev environment.
I would like to output requests logs (like 10:01:39 request.INFO Matched route "login_route") into a file.
I have the following config/packages/dev/monolog.yaml file:
monolog:
    handlers:
        main:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%.log"
            level: debug
            channels: [event]
With the YAML above, it logs correctly into the file /tmp/dev-logs/dev.log when I execute bin/console cache:clear.
But, it does not log anything when I perform requests on the application, no matter if I set channels: [request] or channels: ~ or even no channel param at all.
How can I edit the settings of that monolog.yaml file in order to log request channel logs?
I have found the answer! This is very specific to my configuration.
In fact, I have two Docker containers (that both mount the project directory as a volume) for development:
one for code editing (with a linter, syntax checker, a specific vim configuration, etc.)
one to access the application through HTTP using PHP-FPM (the one that is used when I make HTTP requests on the app)
So, when I perform a bin/console cache:clear from the first container I use for development, it logs into the /tmp/dev-logs/dev.log file of that first container; but when I execute HTTP requests, it logs into the /tmp/dev-logs/dev.log file of the second container.
I was checking the first container file only while it was logging into the second container file instead. So, I was simply not checking the right file.
Everything works. :)
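For anyone with a similar setup, the quickest check is to tail the log inside the PHP-FPM container itself rather than in the editing container (the container name here is just a placeholder):
docker exec -it my-php-fpm-container tail -f /tmp/dev-logs/dev.log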

How to control vhost_traffic_status_zone memory in the K8s nginx ingress?

Background
We run a Kubernetes cluster that handles several php/lumen microservices. We started seeing the app's php-fpm/nginx reporting a 499 status code in its logs, and it seems to correspond with the client getting a blank response (curl returns curl: (52) Empty reply from server) while the application logs a 499.
10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3"
My understanding is nginx will return the 499 code when the client socket is no longer open/available to return the content to. In this situation that appears to mean something before the nginx/application layer is terminating this connection. Our configuration currently is:
ELB -> k8s nginx ingress -> application
So my thoughts are it's either the ELB or the ingress, since the application is the one with no socket left to return to. So I started digging into the ingress logs...
Potential core problem?
While looking through the ingress logs I'm seeing quite a few of these:
2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status"
Potential Solution
I imagine that if I gave vhost_traffic_status_zone some more memory, at least that error would go away and I could move on to the next one, but I can't seem to find any configmap value or annotation that would let me control this. I've checked the docs:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
Thanks in advance for any insight / suggestions / documentation I might be missing!
Here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link some info with suggestions on how much memory you should give the zone.
First, get the ingress controller version by checking the image on the deployment:
kubectl -n <namespace> get deployment <deployment-name> -o yaml | grep 'image:'
From there, you can retrieve the code for your version from the following URL. In the following, I will be using version 0.10.2.
https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2
The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code or at /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. In the example case, we find the following line in nginx.tmpl:
vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};
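If you prefer to pull it from a running controller pod instead of the source tree, something like this should find the same line (namespace and pod name are placeholders):
kubectl -n <namespace> exec <ingress-controller-pod> -- grep vhost_traffic_status_zone /etc/nginx/template/nginx.tmpl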
This gives us the config variable to look up in the code. Our next grep, for VtsStatusZoneSize, leads us to these lines in internal/ingress/controller/config/config.go:
// Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes
// https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
// Default value is 10m
VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty"`
This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf on a pod at /etc/nginx/nginx.conf.
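As a sketch (the 20m below is arbitrary; see the sizing advice that follows), the key can be added with a merge patch:
kubectl -n <namespace> patch configmap ingress-nginx-ingress-controller --type merge -p '{"data":{"vts-status-zone-size":"20m"}}'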
When it comes to what size you may want to set the zone to, the docs suggest setting it to more than 2 * usedSize:
If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: curl http://localhost:18080/nginx_status/format/json 2> /dev/null | jq .sharedZones.usedSize
Hope this helps.

NGINX (Operation not permitted) while reading upstream

I have NGINX working as a cache engine and can confirm that pages are being cached as well as being served from the cache. But the error logs are getting filled with this error:
2018/01/19 15:47:19 [crit] 107040#107040: *26 chmod() "/etc/nginx/cache/nginx3/c0/1d/61/ddd044c02503927401358a6d72611dc0.0000000007" failed (1: Operation not permitted) while reading upstream, client: xx.xx.xx.xx, server: *.---.com, request: "GET /support/applications/ HTTP/1.1", upstream: "http://xx.xx.xx.xx:80/support/applications/", host: "---.com"
I'm not really sure what the source of this error could be since NGINX is working. Are these errors that can be safely ignored?
It looks like you are using nginx proxy caching, but nginx does not have permission to manipulate files in its cache directory. You will need to get the ownership/permissions right on the cache directory.
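For example, assuming the worker user is nginx (check the user directive in nginx.conf) and the cache path from the error above, something along these lines:
grep -E '^\s*user' /etc/nginx/nginx.conf
chown -R nginx:nginx /etc/nginx/cache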
Not explained in the original question is that the mounted storage is an Azure file share. So in /etc/fstab I had to include the gid= and uid= options for the desired owner, which removed the need for chown and made chmod unnecessary as well. That got rid of the chmod() error but introduced another.
Then I was getting permission errors on rename(). At this point I scrapped what I was doing, moved to a different type of Azure storage (specifically a disk attached to the VM), and all these problems went away.
So I'm offering this as an answer but realistically, the problem was not solved.
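For reference, the uid=/gid= options go on the CIFS line in /etc/fstab, roughly like this (the storage account, share, mount point and credentials file below are placeholders):
# example Azure Files entry - adjust account, share, credentials and owner to your setup
//mystorageaccount.file.core.windows.net/nginxcache /etc/nginx/cache cifs credentials=/etc/smbcredentials/mystorageaccount.cred,uid=nginx,gid=nginx,dir_mode=0755,file_mode=0644 0 0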
We noticed the same problem. Following the guide from Microsoft at https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv#create-a-storage-class seems to have fixed it.
In our case the nginx process was using a different user for the worker threads, so we needed to find that user's uid and gid and use that in the StorageClass definition.
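A minimal sketch of such a StorageClass, assuming the azure-file provisioner from that guide and a worker uid/gid of 101 (substitute the ids of your own nginx worker user):
kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-nginx
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0755
  - file_mode=0644
  - uid=101   # assumed nginx worker uid, replace with your own
  - gid=101   # assumed nginx worker gid, replace with your own
parameters:
  skuName: Standard_LRS
EOF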

How to input many big static json files into logstash?

My inputs are hundreds of big 1-line json files (~10MB-20MB).
After getting out-of-memory errors with my real setup (with two custom filters), I simplified the setup to isolate the problem.
logstash --verbose -e 'input { tcp { port => 5000 } } output { file { path => "/dev/null" } }'
My test input is a multi-level nested object in json:
$ ls -sh example_fixed.json
9.7M example_fixed.json
If I send the file once, it works fine. But if I do:
$ repeat 50 cat example_fixed.json|nc -v localhost 5000
I get the error message:
Logstash startup completed
Using version 0.1.x codec plugin 'line'. This plugin isn't well supported by the community and likely has no maintainer. {:level=>:info}
Opening file {:path=>"/dev/null", :level=>:info}
Starting stale files cleanup cycle {:files=>{"/dev/null"=>#<IOWriter:0x6f51765 #active=true, #io=#<File:/dev/null>>}, :level=>:info}
Error: Your application used more memory than the safety cap of 500M.
Specify -J-Xmx####m to increase it (#### = cap size in MB).
Specify -w for full OutOfMemoryError stack trace
I have determined that the error triggers if I send the input more than 30 times with a heap size of 500MB. If I increase heap size, this limit goes up accordingly.
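For reference, on Logstash versions of that era the heap is typically raised via the LS_HEAP_SIZE environment variable (newer releases use config/jvm.options instead). A sketch, reusing the test pipeline above:
LS_HEAP_SIZE=2g logstash --verbose -e 'input { tcp { port => 5000 } } output { file { path => "/dev/null" } }'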
However, from documentation I understand logstash should be able to throttle the input when it cannot process the events quickly enough.
In fact, if I do a sleep 0.1 after sending new events, it can handle up to 100 repetitions, but not 1000. So I assume the input is not being throttled properly, and whenever the input rate is higher than the processing rate, it is only a matter of time before the heap fills up and Logstash crashes.
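For reference, the throttled variant is just the same send in a loop with a short pause (bash, using the same example file and port as above):
for i in $(seq 1 100); do
  cat example_fixed.json | nc localhost 5000
  sleep 0.1
done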

Easy way to find failed files using wget -i

I have a list of files (images, PDFs) and I want to download them all via wget (using the -i option). The problem is that I get a different number of files downloaded - some are missing. All files in the list are unique - I already double-checked.
I know that I can compare my list with 'ls' in the folder, but I'm curious whether there is some "wget" way to find the failed operations.
UPDATE:
My problem in steps:
I have list 'list.txt' with URLs
I run command:
wget -i list.txt
For each file I get a long output with a status, e.g.:
--2015-07-07 13:46:34-- http://www.example.com/example.png
Reusing existing connection to www.example.com:80.
HTTP request sent, awaiting response... 200 OK
Length: 261503 (255K) [image/png]
Saving to: 'example.png'
example.png 100%[===================================================>] 255.37K 1.51MB/s in 0.2s
2015-07-07 13:46:34 (1.51 MB/s) - 'example.png' saved [261503/261503]
At the end I get one more message:
FINISHED --2015-07-07 13:46:34--
Total wall clock time: 1m 15s
Downloaded: 109 files, 130M in 1m 11s (1.84 MB/s)
My question is: I have 'list.txt' with 112 lines, but the result tells me it downloaded 109 files, so how can I know which files were not downloaded?
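One low-tech approach (nothing wget-specific, just standard tools, and assuming the on-disk filename is the last path segment of each URL) is to keep the transcript in a log file and then compare the expected names against the download directory:
wget -i list.txt -o wget.log
grep -iE -B3 'error|failed' wget.log
comm -23 <(sed 's|.*/||' list.txt | sort) <(ls | sort)
The grep shows the requests that reported a problem; the comm lists names that are in list.txt but missing from the current directory (it needs bash for the process substitution).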

Resources