Jenkins Artifactory plugin gives "Unexpected character" when trying to upload large artifacts - nginx

I use the Jenkins Artifactory plugin. Artifactory is installed behind an Nginx server. Sometimes, Jenkins returns an error on upload:
[main] ERROR org.jfrog.build.extractor.maven.BuildInfoClientBuilder - Failed while reading the response from: PUT https://XXXX.XXX/XX-XXXXXXX-XXX/com/XXXX/XXXX/xxxxxxxx/xxxxxxx-api/1.0.0-SNAPSHOT/xxxxxxx-api-1.0.0-SNAPSHOT-jar-with-dependencies.jar;build.timestamp=1457104033410;build.name=xxxxxxx-build;build.number=75 HTTP/1.1
org.codehaus.jackson.JsonParseException: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
This error only occurs when the file is larger than a certain size.

This is an Nginx problem. When I issue the same PUT request from other software (e.g. DNC), I get an Nginx error page instead of an Artifactory response, which is why the Artifactory client cannot parse it.
PUT https://XXXX.XXX/XX-XXXXXXX-XXX/com/XXXX/XXXX/xxxxxxxx/xxxxxxx-api/1.0.0-SNAPSHOT/xxxxxxx-api-1.0.0-SNAPSHOT-jar-with-dependencies.jar;build.timestamp=1457104033410;build.name=xxxxxxx-build;build.number=75 HTTP/1.1
Result:
<html>
<head><title>413 Request Entity Too Large</title></head>
<body bgcolor="white">
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.8.1</center>
</body>
</html>
You need to increase the client_max_body_size in your Nginx config file: /etc/nginx/nginx.conf
# set client body size to 500M #
client_max_body_size 500M;
500M represents the maximum size of the artifact you need to upload.
More information here: http://www.cyberciti.biz/faq/linux-unix-bsd-nginx-413-request-entity-too-large/
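For context, here is a minimal sketch of how this might look in the server block that proxies to Artifactory (the listen port and upstream address are assumptions, not the poster's actual config):
server {
    listen 443 ssl;
    ...
    # Allow artifact uploads up to 500M; nginx's built-in default is only 1m.
    client_max_body_size 500M;
    location / {
        proxy_pass http://localhost:8081;  # assumed Artifactory backend
    }
}
Then reload nginx so the new limit takes effect.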

Related

How to log "x-kong-proxy-latency" in custom log formatter in Kong

I would like to log the values of the "x-kong-proxy-latency" and "x-kong-upstream-latency" headers to the Kong log. How can I get access to those values in log_format?
KONG_PROXY_ACCESS_LOG: /dev/stdout custom_formatter
KONG_NGINX_HTTP_LOG_FORMAT: custom_formatter 'xkpl $$x_kong_proxy_latency'
This gets me an error:
2022-11-20 00:13:18 Run with --v (verbose) or --vv (debug) for more details
2022-11-20 00:14:20 Error: could not prepare Kong prefix at /usr/local/kong: nginx configuration is invalid (exit code 1):
2022-11-20 00:14:20 nginx: [emerg] unknown "x_kong_proxy_latency" variable
2022-11-20 00:14:20 nginx: configuration file /usr/local/kong/nginx.conf test failed
Now, what is the correct way to get this data in a variable?
As the error implies, x_kong_proxy_latency is not a variable that nginx knows about by default.
x-kong-proxy-latency and x-kong-upstream-latency are HTTP response headers that Kong sends to the client, indicating Kong latency and upstream latency respectively.
Since these headers are created by Kong and sent to clients, we can use nginx's $sent_http_ prefix to include them in the access_log, for example:
KONG_PROXY_ACCESS_LOG: /dev/stdout latency
KONG_NGINX_HTTP_LOG_FORMAT: latency '$$sent_http_x_kong_proxy_latency $$sent_http_x_kong_upstream_latency'
This will log the values of both headers you're looking for.
If you are looking for more variables, you can check the nginx documentation here
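For context, a rough sketch of the plain-nginx directives that Kong ends up rendering from those settings (the log destination is illustrative, and the doubled $$ is assumed to be your compose file's escaping for a literal $):
# in the http block of the rendered nginx.conf
log_format latency '$sent_http_x_kong_proxy_latency $sent_http_x_kong_upstream_latency';  # any response header is reachable as $sent_http_<name>
access_log /dev/stdout latency;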

How to control vhost_shared_traffic memory in the K8s nginx ingress?

Background
We run a Kubernetes cluster that handles several PHP/Lumen microservices. We started seeing the app's php-fpm/nginx reporting a 499 status code in its logs, and it seems to correspond with the client getting a blank response (curl returns curl: (52) Empty reply from server) while the application logs a 499.
10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3"
My understanding is that nginx returns the 499 code when the client socket is no longer open/available to return the content to. In this situation, that appears to mean something in front of the nginx/application layer is terminating the connection. Our current configuration is:
ELB -> k8s nginx ingress -> application
So my suspicion is either the ELB or the ingress, since the application is the one that has no socket left to return to. So I started digging into the ingress logs...
Potential core problem?
While looking through the ingress logs, I'm seeing quite a few of these:
2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status"
Potential Solution
I imagine that if I gave vhost_traffic_status_zone some more memory, at least that error would go away and I could move on to finding the next one... but I can't seem to find any configmap value or annotation that would allow me to control this. I've checked the docs:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
Thanks in advance for any insight / suggestions / documentation I might be missing!
Here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link some info with suggestions on how much memory you should give the zone.
First, get the ingress controller version by checking the image on the deployment:
kubectl -n <namespace> get deployment <deployment-name> -o yaml | grep 'image:'
From there, you can retrieve the code for your version from the following URL. In the following, I will be using version 0.10.2.
https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2
The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code, or at /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. In the example case, we find the following line in nginx.tmpl:
vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};
This gives us the config variable to look up in the code. Our next grep, for VtsStatusZoneSize, leads us to these lines in internal/ingress/controller/config/config.go:
// Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes
// https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
// Default value is 10m
VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty"`
This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf template on a pod at /etc/nginx/nginx.conf.
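For example, a minimal sketch of adding that key with kubectl (the 20m value and the namespace placeholder are only an illustration; pick a size using the guidance below):
kubectl -n <namespace> patch configmap ingress-nginx-ingress-controller \
  --type merge -p '{"data":{"vts-status-zone-size":"20m"}}'
The controller watches its configmap, so the re-rendered zone size should show up in /etc/nginx/nginx.conf on the pods shortly afterwards.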
As for what size to set the zone to, the docs suggest setting it to more than double usedSize:
If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: curl http://localhost:18080/nginx_status/format/json 2> /dev/null | jq .sharedZones.usedSize
Hope this helps.

nginx+php "The connection was reset" on file upload

I get an error that says "The connection was reset" immediately when I upload a file over a certain size; I think the limit is somewhere around 4 MB.
My web server is running on nginx. I tried setting client_max_body_size to 1G, or even to 0, with no success.
I'd be glad to hear a solution.
Thanks!
I just had to restart the nginx service by using sudo service nginx restart and it solved itself!
In my case, the file was bigger than the size allowed by NGINX via the client_max_body_size setting. To change this setting, open the file /etc/nginx/nginx.conf in your terminal and add the following inside the http section:
http {
...
client_max_body_size 128m; # Any desired size in MB
...
}
In nginx 1.0 and above, this directive is not included by default in the nginx.conf file (the built-in default limit is 1m).
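After editing, it may be worth validating and reloading nginx so the new limit actually takes effect (a sketch assuming a typical Linux install):
sudo nginx -t        # check the edited configuration for syntax errors
sudo nginx -s reload # reload workers with the new client_max_body_size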

Rails 4.2 + NGINX - application root won't load

My server setup works for a Rails 4.0 app, but fails on a 4.2 app. I get this error:
An error occurred.
Sorry, the page you are looking for is currently unavailable.
Please try again later.
If you are the system administrator of this resource then you should check the error log for details.
NGINX config:
server {
listen 80;
server_name localhost;
passenger_enabled on;
rails_env production;
root /home/deploy/myapp/current/public;
}
NGINX error.log:
2014/10/13 16:17:06 [error] 9261#0: *9 upstream prematurely closed connection while reading response header from upstream, client: ***.***.***.***, server: localhost, request: "GET / H$
Rails production.log:
W, [2014-10-13T16:11:57.305892 #10891] WARN -- : Warning. Error encountered while saving cache a4b17298d22d34199795f642dc5b96ec8d58cc6c/orders.css.scssc: can't dump anonymous class #<$
W, [2014-10-13T16:11:57.314170 #10891] WARN -- : Warning. Error encountered while saving cache a4b17298d22d34199795f642dc5b96ec8d58cc6c/pages.css.scssc: can't dump anonymous class #<C$
W, [2014-10-13T16:11:57.319744 #10891] WARN -- : Warning. Error encountered while saving cache a4b17298d22d34199795f642dc5b96ec8d58cc6c/registrations.css.scssc: can't dump anonymous c$
If I manually put an index.html file in the public directory, I can see it. But it fails when I go to the root path of the app. Any ideas?
OK, so this is a bit embarrassing. To troubleshoot further, I started the application in production mode on my local machine, and when I loaded it I got the following error on the web page:
Missing `secret_key_base` for 'production' environment, set this value in `config/secrets.yml`
So that was it. I guess I missed this new security feature by jumping straight from Rails 4.0 to 4.2. Not sure why it didn't show in the logs, but at least I found it eventually.
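For anyone hitting the same error, a minimal sketch of the usual Rails 4.1+ fix (the SECRET_KEY_BASE environment variable name is my assumption; a value can be generated with rake secret):
# config/secrets.yml
production:
  secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>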

How do I allow a PUT file request on Nginx server?

I am using an application which needs to PUT a file on an HTTP server. I am using Nginx as the server but am getting a 405 Not Allowed error back. Here is an example of a test with cURL:
curl -X PUT \
-H 'Content-Type: application/x-mpegurl' \
-d /Volumes/Extra/playlist.m3u8 http://xyz.com
And what I get back from Nginx:
<html>
<head><title>405 Not Allowed</title></head>
<body bgcolor="white">
<center><h1>405 Not Allowed</h1></center>
<hr><center>nginx/1.1.19</center>
</body>
</html>
What do I need to do to allow the PUT?
Any clues would be awesome!
To add HTTP and WebDAV methods like PUT, DELETE, MKCOL, COPY and MOVE, you need to compile nginx with the HttpDavModule (./configure --with-http_dav_module). Check nginx -V first; maybe you already have the HttpDavModule (I installed nginx from the Debian repository and already had the module).
Then change your nginx config like this:
location / {
root /var/www;
dav_methods PUT;
}
You can get more info on the nginx docs entry for the HttpDavModule.
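Building on that, here is a slightly fuller sketch of a PUT-enabled location (create_full_put_path and dav_access are optional ngx_http_dav_module directives shown for illustration, not the exact config from the answer above):
location /upload/ {
    root /var/www;
    dav_methods PUT DELETE MKCOL COPY MOVE;  # enable the WebDAV verbs
    create_full_put_path on;                 # create missing intermediate directories on PUT
    dav_access user:rw group:rw all:r;       # permissions for files nginx creates
    client_max_body_size 100m;               # raise the body limit for larger uploads
}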
Another reason for 405 Not Allowed is that you don't have permission to write files at the destination you're PUTting to. If you have the HttpDavModule and are still getting this error, make sure you've given nginx write permissions wherever you're PUTting the files.
Adding this block solved the problem for me in a Laravel application.
location / {
try_files $uri $uri/ /index.php?$query_string;
}
nginx is mainly a proxy, plus a lot of other things; it shares some features with a full web server, but not all of them.
You may want to check: https://www.nginx.com/resources/wiki/modules/upload/
Better still, put a REST interface behind it and let nginx do the proxying, balancing, buffering, TLS, and so on.
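As a rough sketch of that proxy approach (the upstream address and path are assumptions):
location /uploads/ {
    client_max_body_size 500m;         # allow large PUT bodies through the proxy
    proxy_pass http://127.0.0.1:8080;  # backend REST service that actually handles the PUT
}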
