I'm having issues with SSL certificate verification. When I try to send logs from td-agent-bit (Fluent Bit) to nginx, I get an error message that says:
Feb 14 21:38:53 username td-agent-bit[31178]: [2022/02/14 21:38:53] [error] [tls] /tmp/fluent-bit-1.8.12/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
Feb 14 21:38:53 username td-agent-bit[31178]: [2022/02/14 21:38:53] [error] [output:http:http.0] no upstream connections available to 127.0.0.1:443
Feb 14 21:38:53 username td-agent-bit[31178]: [2022/02/14 21:38:53] [ warn] [engine] failed to flush chunk '31025-1644867441.221825565.flb', retry in 32 seconds: task_id=20, input=storage_backlog.6 > output=http.0 (out_id=0)
Feb 14 21:38:53 username td-agent-bit[31178]: [2022/02/14 21:38:53] [ info] [output:http:http.0] 127.0.0.1:443, HTTP status=200
Feb 14 21:38:53 username td-agent-bit[31178]: {"status":200}
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [error] [tls] /tmp/fluent-bit-1.8.12/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [error] [output:http:http.0] no upstream connections available to 127.0.0.1:443
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [ warn] [engine] failed to flush chunk '31025-1644867401.174594241.flb', retry in 37 seconds: task_id=12, input=storage_backlog.6 > output=http.0 (out_id=0)
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [error] [tls] /tmp/fluent-bit-1.8.12/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [error] [output:http:http.0] no upstream connections available to 127.0.0.1:443
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [ warn] [engine] failed to flush chunk '31025-1644867416.136883568.flb', retry in 12 seconds: task_id=15, input=storage_backlog.6 > output=http.0 (out_id=0)
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [error] [tls] /tmp/fluent-bit-1.8.12/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [error] [output:http:http.0] no upstream connections available to 127.0.0.1:443
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [ warn] [engine] failed to flush chunk '31025-1644867481.167299560.flb', retry in 10 seconds: task_id=28, input=storage_backlog.6 > output=http.0 (out_id=0)
Feb 14 21:38:54 username td-agent-bit[31178]: [2022/02/14 21:38:54] [ info] [output:http:http.0] 127.0.0.1:443, HTTP status=200
Feb 14 21:38:54 username td-agent-bit[31178]: {"status":200}
Feb 14 21:38:55 username td-agent-bit[31178]: [2022/02/14 21:38:55] [error] [tls] /tmp/fluent-bit-1.8.12/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
Feb 14 21:38:55 username td-agent-bit[31178]: [2022/02/14 21:38:55] [error] [output:http:http.0] no upstream connections available to 127.0.0.1:443
Feb 14 21:38:55 username td-agent-bit[31178]: [2022/02/14 21:38:55] [ warn] [engine] failed to flush chunk '31178-1644867522.155353155.flb', retry in 19 seconds: task_id=3, input=tail.2 > output=http.0 (out_id=0)
Feb 14 21:38:55 username td-agent-bit[31178]: [2022/02/14 21:38:55] [ info] [output:http:http.0] 127.0.0.1:443, HTTP status=200
Feb 14 21:38:55 username td-agent-bit[31178]: {"status":200}
CRL, CA, or signature verification fails for some reason, and verification passes only after a certain number of attempts.
How can I fix it?
td-agent-bit.conf:
[SERVICE]
    # Flush
    # =====
    # set an interval of seconds before to flush records to a destination
    flush 5

    # Daemon
    # ======
    # instruct Fluent Bit to run in foreground or background mode.
    daemon Off

    # Log_Level
    # =========
    # Set the verbosity level of the service, values can be:
    #
    # - error
    # - warning
    # - info
    # - debug
    # - trace
    #
    # by default 'info' is set, that means it includes 'error' and 'warning'.
    log_level info

    # Parsers File
    # ============
    # specify an optional 'Parsers' configuration file
    parsers_file parsers.conf

    # Plugins File
    # ============
    # specify an optional 'Plugins' configuration file to load external plugins.
    plugins_file plugins.conf

    # HTTP Server
    # ===========
    # Enable/Disable the built-in HTTP Server for metrics
    http_server Off
    http_listen 0.0.0.0
    http_port 2020

    # Storage
    # =======
    # Fluent Bit can use memory and filesystem buffering based mechanisms
    #
    # - https://docs.fluentbit.io/manual/administration/buffering-and-storage
    #
    # storage metrics
    # ---------------
    # publish storage pipeline metrics in '/api/v1/storage'. The metrics are
    # exported only if the 'http_server' option is enabled.
    #
    # storage.metrics on

    # storage.path
    # ------------
    # absolute file system path to store filesystem data buffers (chunks).
    #
    storage.path /tmp/fluent-bit-storage/

    # storage.sync
    # ------------
    # configure the synchronization mode used to store the data into the
    # filesystem. It can take the values normal or full.
    #
    storage.sync normal

    # storage.checksum
    # ----------------
    # enable the data integrity check when writing and reading data from the
    # filesystem. The storage layer uses the CRC32 algorithm.
    #
    storage.checksum off

    # storage.backlog.mem_limit
    # -------------------------
    # if storage.path is set, Fluent Bit will look for data chunks that were
    # not delivered and are still in the storage layer, these are called
    # backlog data. This option configure a hint of maximum value of memory
    # to use when processing these records.
    #
    storage.backlog.mem_limit 2M
[INPUT]
    name tail
    tag log.development.production
    path /home/username/production.log
    Buffer_Max_Size 2mb
    Refresh_interval 5
    Offset_Key offset
    Path_Key path
    storage.type filesystem
    DB /tmp/production.db
    DB.sync normal
    DB.locking false
    DB.journal_mode wal
    # Read interval (sec) Default: 1
    #interval_sec 1

[INPUT]
    name tail
    tag log.development.nginx
    path /home/username/nginx.log
    Buffer_Max_Size 2mb
    Refresh_interval 5
    Offset_Key offset
    Path_Key path
    storage.type filesystem
    DB /tmp/nginx.db
    DB.sync normal
    DB.locking false
    DB.journal_mode wal
    # Read interval (sec) Default: 1
    #interval_sec 1

[INPUT]
    name tail
    tag log.development.apache
    path /home/username/apache.log
    Buffer_Max_Size 2mb
    Refresh_interval 5
    Offset_Key offset
    Path_Key path
    storage.type filesystem
    DB /tmp/apache.db
    DB.sync normal
    DB.locking false
    DB.journal_mode wal
    # Read interval (sec) Default: 1
    #interval_sec 1

[INPUT]
    name tail
    tag log.development.syslog
    path /home/username/syslog.log
    Buffer_Max_Size 2mb
    Refresh_interval 5
    Offset_Key offset
    Path_Key path
    storage.type filesystem
    DB /tmp/syslog.db
    DB.sync normal
    DB.locking false
    DB.journal_mode wal
    # Read interval (sec) Default: 1
    #interval_sec 1

[INPUT]
    name tail
    tag log.development.postgres
    path /home/username/postgres.log
    Buffer_Max_Size 2mb
    Refresh_interval 5
    Offset_Key offset
    Path_Key path
    storage.type filesystem
    DB /tmp/postgres.db
    DB.sync normal
    DB.locking false
    DB.journal_mode wal
    # Read interval (sec) Default: 1
    #interval_sec 1

[INPUT]
    name tail
    tag log.development.zabbix
    path /home/username/zabbix.log
    Buffer_Max_Size 2mb
    Refresh_interval 5
    Offset_Key offset
    Path_Key path
    storage.type filesystem
    DB /tmp/zabbix.db
    DB.sync normal
    DB.locking false
    DB.journal_mode wal
    # Read interval (sec) Default: 1
    #interval_sec 1

[OUTPUT]
    Name http
    Match *
    Host 127.0.0.1
    Port 443
    http_User fluentbit
    http_Passwd fluentbit
    tls on
    tls.verify on
    tls.debug 4
    tls.ca_file /home/username/cert/ca_1/CA.pem
    tls.crt_file /home/username/cert/ca_1/signed_certificates/server.crt
    tls.key_file /home/username/cert/ca_1/signed_certificates/server.key
    Format json
    Header_tag header_tag_is_here
    Header Location localhost
    Retry_Limit no_limits
nginx.conf:
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl on;
    ssl_certificate /home/username/cert/ca_1/signed_certificates/server.crt;
    ssl_certificate_key /home/username/cert/ca_1/signed_certificates/server.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    server_name _;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
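One way to narrow this down is to check, outside Fluent Bit, whether the CA file actually validates the server certificate. The sketch below first builds a throwaway CA and a certificate it signs, just to show what a passing check looks like; the same `openssl verify` can then be pointed at the real files from the config above.

```shell
# Demo: create a throwaway CA and a certificate it signs, then verify.
dir="$(mktemp -d)"; cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout ca.key -out CA.pem -subj "/CN=demo-ca" 2>/dev/null
openssl req -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -subj "/CN=localhost" 2>/dev/null
openssl x509 -req -days 1 -in server.csr \
    -CA CA.pem -CAkey ca.key -CAcreateserial -out server.crt 2>/dev/null

# Roughly the check Fluent Bit performs with tls.verify on:
openssl verify -CAfile CA.pem server.crt
# prints: server.crt: OK
```

Run the same `openssl verify` against `/home/username/cert/ca_1/CA.pem` and `.../server.crt`, and use `openssl s_client -connect 127.0.0.1:443 -CAfile /home/username/cert/ca_1/CA.pem` to see what chain nginx actually serves and whether it verifies. If either check fails here too, the problem is in the certificates rather than in Fluent Bit.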
I am trying to configure ModSecurity in an NGINX server that has the ngx_http_auth_request_module installed, and I am receiving the following error:
2021/11/30 01:52:58 [info] 7#0: *1 ModSecurity: Warning. Matched "Operator `Rx' with parameter `^0?$' against variable `REQUEST_HEADERS:Content-Length' (Value: `51' ) [file "/usr/local/coreruleset-3.3.2/rules/REQUEST-920-PROTOCOL-ENFORCEMENT.conf"] [line "161"] [id "920170"] [rev ""] [msg "GET or HEAD Request with Body Content"] [data "51"] [severity "2"] [ver "OWASP_CRS/3.3.2"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-protocol"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/210/272"] [hostname "172.17.0.6"] [uri "/api/v1/busquedas/criterios"] [unique_id "1638237178"] [ref "o0,3v0,3v96,2"], client: 172.17.0.1, server: servername, request: "POST /api/v1/busquedas/criterios HTTP/1.1", subrequest: "/auth", host: "localhost", referrer: "http://localhost/clientes/busqueda"
2021/11/30 01:52:58 [debug] 7#0: *1 malloc: 000056265DD6F8B0:4096
2021/11/30 01:52:58 [debug] 7#0: *1 malloc: 000056265DD708C0:4096
2021/11/30 01:52:58 [debug] 7#0: *1 free: 000056265DD6F8B0
2021/11/30 01:52:58 [debug] 7#0: *1 free: 000056265DD708C0
2021/11/30 01:52:58 [error] 7#0: *1 [client 172.17.0.1] ModSecurity: Access denied with code 403 (phase 2). Matched "Operator `Ge' with parameter `5' against variable `TX:ANOMALY_SCORE' (Value: `5' ) [file "/usr/local/coreruleset-3.3.2/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "80"] [id "949110"] [rev ""] [msg "Inbound Anomaly Score Exceeded (Total Score: 5)"] [data ""] [severity "2"] [ver "OWASP_CRS/3.3.2"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "172.17.0.6"] [uri "/api/v1/busquedas/criterios"] [unique_id "1638237178"] [ref ""], client: 172.17.0.1, server: servername, request: "POST /api/v1/busquedas/criterios HTTP/1.1", subrequest: "/auth", host: "localhost", referrer: "http://localhost/clientes/busqueda"
2021/11/30 01:52:58 [debug] 7#0: *1 http finalize request: 403, "/auth?" a:1, c:2
2021/11/30 01:52:58 [debug] 7#0: *1 auth request done s:0
2021/11/30 01:52:58 [debug] 7#0: *1 http special response: 403, "/auth?"
ModSecurity is supposed to execute a SecRule that validates a POST request my web application sends to the server; however, as you can see in the warning above, it is validating a GET request instead.
After some troubleshooting, I discovered that the SecRule checks that a GET request does not carry a Content-Length header with a value other than "" (empty string), which is logical. However, I am also using the ngx_http_auth_request_module to generate subrequests to the following location in my NGINX configuration file:
location /auth {
    internal;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header access_token $http_access_token;
    proxy_pass http://authentication-gwy:8080/cmr-experience-serv-gateway-loggin/v1/auth/token-valid;
}
ModSecurity then also evaluates the GET subrequest, which still carries a Content-Length of 51, even though I am setting it to "" in the location configuration.
Set Content-Length to 0 instead; that fixes the problem.
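A sketch of the adjusted location block under that fix (same paths and upstream as in the question; only the Content-Length line changes):

```nginx
location /auth {
    internal;
    proxy_pass_request_body off;
    # An empty value removes the header from the proxied request, but the
    # subrequest ModSecurity inspects still carries the inherited value
    # (51 here); an explicit 0 satisfies CRS rule 920170 instead.
    proxy_set_header Content-Length 0;
    proxy_set_header access_token $http_access_token;
    proxy_pass http://authentication-gwy:8080/cmr-experience-serv-gateway-loggin/v1/auth/token-valid;
}
```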
I am using the Firebase Realtime Database. I want to delete a node that has a huge number of records, so I cannot delete it in the normal way, and when I try to delete it via REST, that fails too. The default setting is "large", so I have to set defaultWriteSizeLimit to "unlimited":
$ firebase database:settings:set defaultWriteSizeLimit unlimited --instance <dbname>
When I execute the command above, I receive an error message:
"Error: Unexpected error fetching configs at defaultWriteSizeLimit"
Below is firebase-debug.log:
[debug] [2020-12-09T09:24:13.317Z] ----------------------------------------------------------------------
[debug] [2020-12-09T09:24:13.318Z] Command: /usr/local/lib/node_modules/firebase-tools/lib/bin/firebase.js /Users/emrekhan/.cache/firebase/tools/lib/node_modules/firebase-tools/lib/bin/firebase database:settings:set defaultWriteSizeLimit unlimited --instance dbname
[debug] [2020-12-09T09:24:13.318Z] CLI Version: 8.17.0
[debug] [2020-12-09T09:24:13.318Z] Platform: darwin
[debug] [2020-12-09T09:24:13.318Z] Node Version: v12.18.1
[debug] [2020-12-09T09:24:13.319Z] Time: Wed Dec 09 2020 12:24:13 GMT+0300 (GMT+03:00)
[debug] [2020-12-09T09:24:13.319Z] ----------------------------------------------------------------------
[debug] [2020-12-09T09:24:13.319Z]
[debug] [2020-12-09T09:24:13.322Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"]
[debug] [2020-12-09T09:24:13.323Z] > authorizing via signed-in user
[debug] [2020-12-09T09:24:13.324Z] [iam] checking project dbname for permissions ["firebase.projects.get","firebasedatabase.instances.update"]
[debug] [2020-12-09T09:24:13.325Z] > refreshing access token with scopes: ["email","https://www.googleapis.com/auth/cloud-platform","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","openid"]
[debug] [2020-12-09T09:24:13.325Z] >>> HTTP REQUEST POST https://www.googleapis.com/oauth2/v3/token
<request body omitted>
[debug] [2020-12-09T09:24:13.668Z] <<< HTTP RESPONSE 200 {"date":"Wed, 09 Dec 2020 09:24:14 GMT","expires":"Mon, 01 Jan 1990 00:00:00 GMT","pragma":"no-cache","cache-control":"no-cache, no-store, max-age=0, must-revalidate","content-type":"application/json; charset=utf-8","vary":"X-Origin, Referer, Origin,Accept-Encoding","server":"scaffolding on HTTPServer2","x-xss-protection":"0","x-frame-options":"SAMEORIGIN","x-content-type-options":"nosniff","alt-svc":"h3-29=\":443\"; ma=2592000,h3-T051=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"","accept-ranges":"none","transfer-encoding":"chunked"}
[debug] [2020-12-09T09:24:13.682Z] >>> HTTP REQUEST POST https://cloudresourcemanager.googleapis.com/v1/projects/dbname:testIamPermissions
{"permissions":["firebase.projects.get","firebasedatabase.instances.update"]}
[debug] [2020-12-09T09:24:14.782Z] <<< HTTP RESPONSE 200 {"content-type":"application/json; charset=UTF-8","vary":"X-Origin, Referer, Origin,Accept-Encoding","date":"Wed, 09 Dec 2020 09:24:15 GMT","server":"ESF","cache-control":"private","x-xss-protection":"0","x-frame-options":"SAMEORIGIN","x-content-type-options":"nosniff","server-timing":"gfet4t7; dur=768","alt-svc":"h3-29=\":443\"; ma=2592000,h3-T051=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"","accept-ranges":"none","transfer-encoding":"chunked"}
[debug] [2020-12-09T09:24:14.783Z] >>> HTTP REQUEST GET https://firebasedatabase.googleapis.com/v1beta/projects/dbname/locations/-/instances/dbname
[debug] [2020-12-09T09:24:15.499Z] <<< HTTP RESPONSE 200 {"content-type":"application/json; charset=UTF-8","vary":"X-Origin, Referer, Origin,Accept-Encoding","date":"Wed, 09 Dec 2020 09:24:16 GMT","server":"ESF","cache-control":"private","x-xss-protection":"0","x-frame-options":"SAMEORIGIN","x-content-type-options":"nosniff","alt-svc":"h3-29=\":443\"; ma=2592000,h3-T051=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"","accept-ranges":"none","transfer-encoding":"chunked"}
[debug] [2020-12-09T09:24:15.501Z] > refreshing access token with scopes: []
[debug] [2020-12-09T09:24:15.501Z] >>> HTTP REQUEST POST https://www.googleapis.com/oauth2/v3/token
<request body omitted>
[debug] [2020-12-09T09:24:15.766Z] <<< HTTP RESPONSE 200 {"date":"Wed, 09 Dec 2020 09:24:16 GMT","cache-control":"no-cache, no-store, max-age=0, must-revalidate","expires":"Mon, 01 Jan 1990 00:00:00 GMT","pragma":"no-cache","content-type":"application/json; charset=utf-8","vary":"X-Origin, Referer, Origin,Accept-Encoding","server":"scaffolding on HTTPServer2","x-xss-protection":"0","x-frame-options":"SAMEORIGIN","x-content-type-options":"nosniff","alt-svc":"h3-29=\":443\"; ma=2592000,h3-T051=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"","accept-ranges":"none","transfer-encoding":"chunked"}
[debug] [2020-12-09T09:24:15.780Z] >>> [apiv2][query] PUT https://dbname.firebaseio.com/.settings/defaultWriteSizeLimit.json [none]
[debug] [2020-12-09T09:24:15.780Z] >>> [apiv2][body] PUT https://dbname.firebaseio.com/.settings/defaultWriteSizeLimit.json "unlimited"
[debug] [2020-12-09T09:24:16.568Z] <<< [apiv2][status] PUT https://dbname.firebaseio.com/.settings/defaultWriteSizeLimit.json 400
[debug] [2020-12-09T09:24:16.568Z] <<< [apiv2][body] PUT https://dbname.firebaseio.com/.settings/defaultWriteSizeLimit.json {"error":"defaultWriteSizeLimit should be one of {\"large\", \"medium\", \"small\", \"unlimited\"}"}
[debug] [2020-12-09T09:24:16.907Z] FirebaseError: HTTP Error: 400, defaultWriteSizeLimit should be one of {"large", "medium", "small", "unlimited"}
at module.exports (/Users/emrekhan/.cache/firebase/tools/lib/node_modules/firebase-tools/lib/responseToError.js:38:12)
at Client.<anonymous> (/Users/emrekhan/.cache/firebase/tools/lib/node_modules/firebase-tools/lib/apiv2.js:191:27)
at Generator.next (<anonymous>)
at fulfilled (/Users/emrekhan/.cache/firebase/tools/lib/node_modules/firebase-tools/lib/apiv2.js:5:58)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
[error]
[error] Error: Unexpected error fetching configs at defaultWriteSizeLimit
I was able to fix this problem for myself by patching firebase-tools: in database/settings.js I replaced line 23 with return JSON.stringify(input), and the setting then updated successfully.
This has also been reported on the Firebase Tools issue tracker on GitHub, and a fix was merged today in a pull request.
I assume we can expect a version bump of firebase-tools, after which running
firebase database:settings:set defaultWriteSizeLimit unlimited
should work once you have updated to the latest version.
We see timeouts during some calls to an external REST service from within a Spring Boot application. They do not seem to occur when we connect to the REST service directly. Debug logging on org.apache.http has shown us a very peculiar aspect of the failing requests: they contain an inbound log entry << "[read] I/O error: Read timed out" in the middle of sending headers, in the same millisecond the first headers were sent.
How can we see an inbound 'Read timed out' a few milliseconds after sending the first headers? And why does it not immediately interrupt the request/connection with a timeout, but instead wait the full 4500 ms until it fails with an exception?
Here is our production log for a failing request, redacted (newest entries first). Note the 4500 ms delay between lines two and three. My question is about the occurrence of http-outgoing-104 << "[read] I/O error: Read timed out" at 16:55:08.258, not the first one on line 2.
16:55:12.764 Connection released: [id: 104][route: {s}-><<website-redacted>>:443][total kept alive: 0; route allocated: 0 of 2; total allocated: 0 of 20]
16:55:12.763 http-outgoing-104 << "[read] I/O error: Read timed out"
16:55:08.259 http-outgoing-104 >> "<<POST Body Redacted>>"
16:55:08.259 http-outgoing-104 >> "[\r][\n]"
16:55:08.258 http-outgoing-104: set socket timeout to 4500
16:55:08.258 Executing request POST <<Endpoint Redacted>> HTTP/1.1
16:55:08.258 Target auth state: UNCHALLENGED
16:55:08.258 Proxy auth state: UNCHALLENGED
16:55:08.258 Connection leased: [id: 104][route: {s}-><<website-redacted>>:443][total kept alive: 0; route allocated: 1 of 2; total allocated: 1 of 20]
....
16:55:08.258 http-outgoing-104 >> "POST <<Endpoint Redacted>> HTTP/1.1[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Accept: text/plain, application/json, application/*+json, */*[\r][\n]"
16:55:08.258 http-outgoing-104 >> Cookie: <<Redacted>>
16:55:08.258 http-outgoing-104 >> "Content-Type: application/json[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Connection: close[\r][\n]"
16:55:08.258 http-outgoing-104 >> "X-B3-SpanId: <<ID>>[\r][\n]"
16:55:08.258 http-outgoing-104 << "[read] I/O error: Read timed out"
16:55:08.258 http-outgoing-104 >> "X-Span-Name: https:<<Endpoint Redacted>>[\r][\n]"
16:55:08.258 http-outgoing-104 >> "X-B3-TraceId: <<ID>>[\r][\n]"
16:55:08.258 http-outgoing-104 >> "X-B3-ParentSpanId: <<ID>>[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Content-Length: 90[\r][\n]"
16:55:08.258 http-outgoing-104 >> "User-Agent: Apache-HttpClient/4.5.3 (Java/1.8.0_172)[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Cookie: <<Redacted>>"
16:55:08.258 http-outgoing-104 >> "Host: <<Host redacted>>[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Accept-Encoding: gzip,deflate[\r][\n]"
16:55:08.258 http-outgoing-104 >> "X-B3-Sampled: 1[\r][\n
Update 1: a second occurrence:
In another request that timed out, roughly the same behavior occurs, but there the timeout message is logged even before the headers are sent, with the actual timeout arriving later. Note: this request is actually older; after it, I configured requests to include 'Connection: close' to work around a firewall dropping connections held open with 'Keep-Alive'.
19:28:08.102 http-outgoing-36 << "[read] I/O error: Read timed out"
19:28:08.102 http-outgoing-36: Shutdown connection
19:28:08.102 http-outgoing-36: Close connection
19:28:03.598 http-outgoing-36 >> "Connection: Keep-Alive[\r][\n]"
19:28:03.598 http-outgoing-36 >> "Content-Type: application/json;charset=UTF-8[\r][\n]"
...
19:28:03.598 http-outgoing-36 >> "Accept-Encoding: gzip,deflate[\r][\n]"
...
19:28:03.597 http-outgoing-36 >> Cookie: ....
19:28:03.597 http-outgoing-36 >> Accept-Encoding: gzip,deflate
19:28:03.597 http-outgoing-36 >> User-Agent: Apache-HttpClient/4.5.3 (Java/1.8.0_172)
19:28:03.596 Connection leased: [id: 36][route: {s}-><< Site redacted >>:443][total kept alive: 0; route allocated: 1 of 2; total allocated: 1 of 20]
19:28:03.596 http-outgoing-36: set socket timeout to 4500
19:28:03.596 Executing request POST HTTP/1.1
19:28:03.596 Target auth state: UNCHALLENGED
19:28:03.596 http-outgoing-36 << "[read] I/O error: Read timed out"
19:28:03.594 Connection request: [route: {s}-><< Site redacted >>:443][total kept alive: 1; route allocated: 1 of 2; total allocated: 1 of 20]
19:28:03.594 Auth cache not set in the context
Update 2: added HttpClientBuilder configuration
RequestConfig.Builder requestBuilder = RequestConfig.custom()
        .setSocketTimeout(socketTimeout)   // 4500 ms here, per the log above
        .setConnectTimeout(connectTimeout);

CloseableHttpClient httpClient = HttpClientBuilder.create()
        .setDefaultRequestConfig(requestBuilder.build())
        .build();

HttpComponentsClientHttpRequestFactory rf =
        new HttpComponentsClientHttpRequestFactory(httpClient);
return new RestTemplate(rf);
Is there any way to change the response code nginx sends? When the server receives a file that exceeds the client_max_body_size defined in the config, can I have it return a 403 code instead of a 413?
The configuration below works fine for me:
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location @change_upload_error {
            return 403 "File uploaded too large";
        }

        location /post {
            client_max_body_size 10K;
            error_page 413 = @change_upload_error;
            echo "you reached here";
        }
    }
}
Results for posting a 50KB file
$ curl -vX POST -F file=@test.txt vm/post
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 192.168.33.100...
* TCP_NODELAY set
* Connected to vm (192.168.33.100) port 80 (#0)
> POST /post HTTP/1.1
> Host: vm
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Length: 51337
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------67df5f3ef06561a5
>
< HTTP/1.1 403 Forbidden
< Server: openresty/1.11.2.2
< Date: Mon, 11 Sep 2017 17:58:55 GMT
< Content-Type: text/plain
< Content-Length: 23
< Connection: close
<
* Closing connection 0
File uploaded too large%
and nginx logs
web_1 | 2017/09/11 17:58:55 [error] 5#5: *1 client intended to send too large body: 51337 bytes, client: 192.168.33.1, server: , request: "POST /post HTTP/1.1", host: "vm"
web_1 | 192.168.33.1 - - [11/Sep/2017:17:58:55 +0000] "POST /post HTTP/1.1" 403 23 "-" "curl/7.54.0"
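Note that the echo directive comes from the third-party echo-nginx-module (hence the openresty Server header in the response above). On a stock nginx build, a sketch of the same test location could use return instead; this is untested on plain nginx, but the 413 check happens while nginx reads the request headers, before the return executes:

```nginx
location /post {
    client_max_body_size 10K;
    error_page 413 = @change_upload_error;
    # stands in for echo on builds without the echo module
    return 200 "you reached here";
}
```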