Apache Traffic Server: segmentation fault on request transformation - encryption

I'm trying to write a simple encryption/decryption plugin for Apache Traffic Server. The plugin should transform requests/responses in order to encrypt/decrypt them. I decided to use the Lua plugin (ts-lua): https://github.com/portl4t/ts-lua
-- transform handler: currently just passes the data through unchanged
function encrypt(data, eos)
    if (data == '') then
        return data, eos
    end
    if (eos == 1) then
        ts.debug('End of Stream')
    end
    return data, eos
end

function do_remap()
    ts.debug('do_remap')
    -- only hook the request transform for POST requests
    if (ts.client_request.get_method() == 'POST') then
        ts.hook(TS_LUA_REQUEST_TRANSFORM, encrypt)
    end
    ts.http.resp_cache_transformed(0)
    ts.http.resp_cache_untransformed(0)
    return 0
end
Everything works fine for GET and DELETE requests, but when I send a POST chunked-encoded request, ATS crashes almost every time.
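For reference, the kind of request that triggers it is an ordinary chunked POST, which can be sent with curl by forcing chunked transfer encoding (the host, port and file below are just placeholders):

curl -v -X POST -H "Transfer-Encoding: chunked" --data-binary @test.bin http://ats-host:8080/upload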
Here is the stack trace:
[May 20 13:16:28.105] Server {0x7f045a1c1700} DEBUG: (http_redirect) [HttpSM::do_redirect]
[May 20 13:16:28.105] Server {0x7f045a1c1700} DEBUG: (http_redirect) [HttpTunnel::deallocate_postdata_copy_buffers]
NOTE: Traffic Server received Sig 11: Segmentation fault
bin/traffic_server - STACK TRACE:
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f045cd29cb0]
bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x171)[0x5c274f]
bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x883)[0x5c24cf]
bin/traffic_server(_ZN6HttpSM23do_api_callout_internalEv+0x1b7)[0x5ceaef]
bin/traffic_server(_ZN6HttpSM14do_api_calloutEv+0x26)[0x5dc18e]
bin/traffic_server(_ZN6HttpSM14set_next_stateEv+0x12f9)[0x5d6a19]
bin/traffic_server(_ZN6HttpSM32call_transact_and_set_next_stateEPFvPN12HttpTransact5StateEE+0x1ba)[0x5d5718]
bin/traffic_server(_ZN6HttpSM36state_common_wait_for_transform_readEP17HttpTransformInfoMS_FiiPvEiS2_+0x39b)[0x5c1a11]
bin/traffic_server(_ZN6HttpSM37state_request_wait_for_transform_readEiPv+0x1e1)[0x5c1483]
bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0x333)[0x5c5eeb]
bin/traffic_server(_ZN12Continuation11handleEventEiPv+0x68)[0x4f06b2]
bin/traffic_server(_ZN17TransformTerminus12handle_eventEiPv+0x2f6)[0x538d2a]
bin/traffic_server(_ZN12Continuation11handleEventEiPv+0x68)[0x4f06b2]
bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x11e)[0x7537e2]
bin/traffic_server(_ZN7EThread7executeEv+0xc9)[0x753a27]
bin/traffic_server[0x752ca7]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f045cd21e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f045c0383fd]
Segmentation fault (core dumped)
I've already tried using one of the example plugins for request transformation and I still hit the same problem. The only way to make the problem disappear is to avoid request transformation entirely.
Is there something wrong with the way I'm transforming requests? How can I fix this segmentation fault?
Thanks

Related

nginx - connection timed out while reading upstream

I have a Flask server with an endpoint that processes some uploaded .csv files and returns a .zip (in a JSON response, as a base64 string).
This process can take up to 90 seconds.
I've been setting it up for production using gunicorn and nginx, and I'm testing the endpoint with smaller .csv files. They get processed fine, and in a couple of seconds I get the "got blob" log. But nginx doesn't return it to the client and it finally times out. I set a longer fail_timeout of 10 minutes and the client WILL wait 10 minutes, then time out.
The proxy read timeout offered as a solution here is set to 3600s.
The proxy connect timeout is also set, to 75s, according to this.
The timeout for the gunicorn workers is raised as well, according to this (see the gunicorn command sketched after the code below).
The error log says: "upstream timed out ... Connection timed out while reading upstream".
I also see cases where nginx receives an OPTIONS request and, immediately after, the POST request (some CORS weirdness from the client); nginx passes the OPTIONS request but fails to pass the POST request to gunicorn, despite having received it.
Question:
What am I doing wrong here?
Many thanks
http {
    upstream flask {
        server 127.0.0.1:5050 fail_timeout=600;
    }

    # error log
    # 2022/08/18 14:49:11 [error] 1028#1028: *39 upstream timed out (110: Connection timed out) while reading upstream, ...
    # ...

    server {
        # ...
        location /api/ {
            proxy_pass http://flask/;
            proxy_read_timeout 3600;
            proxy_connect_timeout 75s;
            # ...
        }
        # ...
    }
}
# wsgi.py
from main import app

if __name__ == '__main__':
    app.run()

# flask endpoint
import base64
from flask import jsonify, Response

@app.route("/process-csv", methods=['POST'])
def process_csv():
    def wrapped_run_func():
        return blob, export_filename
    # ...
    try:
        blob, export_filename = wrapped_run_func()
        b64_file = base64.b64encode(blob.getvalue()).decode()
        ret = jsonify(file=b64_file, filename=export_filename)
        # return Response(response=ret, status=200, mimetype="application/json")
        print("got blob")
        return ret
    except Exception as e:
        app.logger.exception(f"0: Error processing file: {export_filename}")
        return Response("Internal server error", status=500)
P.S. I'm getting this error from Stack Overflow:
"Your post appears to contain code that is not properly formatted as code. Please indent all code by 4 spaces using the code toolbar button or the CTRL+K keyboard shortcut. For more editing help, click the [?] toolbar icon."
despite the code being perfectly well formatted with language syntax, so I'm sorry I had to post it ugly.
Sadly I got no response.
See the last lines for the "solution" finally implemented.
CAUSE OF ERROR: I believe the problem is that I'm hosting the nginx server on WSL 1.
I tried updating to WSL 2 to see if that fixed it, but I would need to enable some kind of "nested virtualization", as WSL 1 is already running on a VM.
Through conf changes I got it to the point where no error is logged: gunicorn returns the file, then it just stays in the ether; nginx never gets/sends the response.
"SOLUTION":
I ended up changing the code for the client, the server and the nginx.conf file:
the server saves the resulting file and only returns the file name
the client inserts the filename into an href that then displays a link
on click a request is sent to nginx which in turn just sends the file from a static folder, leaving gunicorn alone
I guess this is the optimal way to do it anyway, though it still bugs me I couldn't (for sure) find the reason of the error
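The nginx side of that ended up looking roughly like the following (the location name and path are illustrative placeholders, not the exact config):

location /downloads/ {
    # the flask endpoint saves the generated zip under this directory;
    # nginx serves it directly, so gunicorn is no longer involved in the download
    alias /var/www/exports/;
}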

Error while trying to send logs with rsyslog without local storage

I'm trying to send logs to Datadog using rsyslog. Ideally, I'm trying to do this without having the logs stored on the server hosting rsyslog. I've run into an error in my config that I haven't been able to find out much about. The error occurs on startup of rsyslog.
omfwd: could not get addrinfo for hostname '(null)':'(null)': Name or service not known [v8.2001.0 try https://www.rsyslog.com/e/2007 ]
Here's the portion I've added to the default rsyslog.conf:
module(load="imudp")
input(type="imudp" port="514" ruleset="datadog")

ruleset(name="datadog"){
    action(
        type="omfwd"
        action.resumeRetryCount="-1"
        queue.type="linkedList"
        queue.saveOnShutdown="on"
        queue.maxDiskSpace="1g"
        queue.fileName="fwdRule1"
    )

    $template DatadogFormat,"00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n "
    $DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
    $ActionSendStreamDriver gtls
    $ActionSendStreamDriverMode 1
    $ActionSendStreamDriverAuthMode x509/name
    $ActionSendStreamDriverPermittedPeer *.logs.datadoghq.com
    *.* ##intake.logs.datadoghq.com:10516;DatadogFormat
}
First things first.
The imudp module enables log reception over UDP.
The omfwd module enables log forwarding over TCP, UDP, etc.
So most probably - or at least as far as I can tell - with rsyslog you just want to log messages locally and then send them to Datadog.
I don't know anything about the $ActionSendStreamDriver directives, so I can't help you there. But what jumps out is that in your action you haven't defined where the logs should be sent to.
ruleset(name="datadog"){
    action(
        type="omfwd"
        target="10.100.1.1"
        port="514"
        protocol="udp"
        ...
    )
    ...
}
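Applied to the config in the question, that means pointing the action at the Datadog intake that the legacy *.* line already names; roughly like this (untested sketch, leaving the TLS stream-driver settings aside):

ruleset(name="datadog"){
    action(
        type="omfwd"
        target="intake.logs.datadoghq.com"
        port="10516"
        protocol="tcp"
        template="DatadogFormat"
        action.resumeRetryCount="-1"
        queue.type="linkedList"
        queue.saveOnShutdown="on"
        queue.maxDiskSpace="1g"
        queue.fileName="fwdRule1"
    )
}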

Why doesn't resty.redis work with ngx.timer?

I've asked here but thought I'd post on SO as well:
given this code:
local redis = require('resty.redis')
local client = redis:new()
client:connect(host, port)

ngx.thread.spawn(function()
    ngx.say(ngx.time(), ' ', #client:keys('*'))
end)

ngx.timer.at(2, function()
    ngx.say(ngx.time(), ' ', #client:keys('*'))
end)
I get this error:
---urhcdhw2pqoz---
1611628086 5
2021/01/26 10:28:08 [error] 4902#24159: *4 lua entry thread aborted: runtime error: ...local/Cellar/openresty/1.19.3.1_1/lualib/resty/redis.lua:349: bad request
stack traceback:
coroutine 0:
[C]: in function 'send'
...local/Cellar/openresty/1.19.3.1_1/lualib/resty/redis.lua:349: in function 'keys'
./src/main.lua:20: in function <./src/main.lua:19>, context: ngx.timer
So it seems that threads work with redis but timers don't. Why is that?
There are two errors in your code.
1. It is not possible to pass the cosocket object between Lua handlers (emphasis added by me):
"The cosocket object created by this API function has exactly the same lifetime as the Lua handler creating it. So never pass the cosocket object to any other Lua handler (including ngx.timer callback functions) and never share the cosocket object between different Nginx requests."
https://github.com/openresty/lua-nginx-module#ngxsockettcp
In your case, the reference to the cosocket object is stored in the client table (client._sock).
2. ngx.print/ngx.say are not available in the ngx.timer.* context.
https://github.com/openresty/lua-nginx-module#ngxsay (check the context: section).
You can use ngx.log instead (it writes to the nginx log; set error_log stderr debug; in nginx.conf to print logs to stderr).
The following code works as expected:
ngx.timer.at(2, function()
local client = redis:new()
client:connect('127.0.0.1' ,6379)
ngx.log(ngx.DEBUG, #client:keys('*'))
end)

How can I stop nginx falling over when openresty throws a runtime error deploying a cert?

We are using openresty and the lua-resty-auto-ssl package to generate certificates from Let's Encrypt, but lately the server keeps falling over. I'm guessing it's triggered when a certificate tries to auto-renew, as generating a certificate for the first time works fine ... The error we are seeing is:
2019/05/12 08:25:24 [error] 2623#2623: *1024227 lua entry thread aborted: runtime error: ...sty/luajit/share/lua/5.1/resty/auto-ssl/servers/hook.lua:40: assertion failed!
stack traceback:
coroutine 0:
[C]: in function 'assert'
...sty/luajit/share/lua/5.1/resty/auto-ssl/servers/hook.lua:40: in function 'server'
.../local/openresty/luajit/share/lua/5.1/resty/auto-ssl.lua:99: in function 'hook_server'
content_by_lua(nginx.conf:194):2: in function <content_by_lua(nginx.conf:194):1>, client: 127.0.0.1, server: , request: "POST /deploy-cert HTTP/1.1", host: "127.0.0.1:8999"
From what I can see in the error, it is failing to assert something when trying to deploy the cert, which could be any of these four assertions:
assert(params["domain"])
assert(params["fullchain"])
assert(params["privkey"])
assert(params["expiry"])
I'm a bit stuck as to what I can do; it's no good having the server dropping out on us. That's the last error reported before the server goes offline, so I'm guessing that's the cause, but I'm not 100% sure.
Is there anywhere I can look to find out more about what causes the crash? I'm new to nginx/openresty so I'm fumbling my way around a bit. Has anyone come across a similar issue?
Wrap it all in a function and call it with pcall or xpcall and add some logic to deal with the error.
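A minimal sketch of that idea, assuming params holds the values POSTed to the hook (the real deploy logic is elided):

local function deploy_cert(params)
    -- the assertions that currently abort the handler
    assert(params["domain"])
    assert(params["fullchain"])
    assert(params["privkey"])
    assert(params["expiry"])
    -- ... actual deploy logic ...
end

local ok, err = pcall(deploy_cert, params)
if not ok then
    -- log the failure and return an error status instead of letting the
    -- assertion abort the whole handler
    ngx.log(ngx.ERR, "deploy-cert failed: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end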

Make wget retry original URL after 3XX Redirect

I have a service that redirects users to temporary pre-signed AWS downloads. These are large files, often 5-10 GB. To prevent download sharing, we give them a relatively short (30 second) validity lifespan.
Everything is working, except that on slow internet connections the downloads tend to fail or get interrupted. wget has a feature that automatically retries the download. However, instead of retrying the original URL (e.g. http://service.com/download/file.zip), wget retries the redirected pre-signed URL (e.g. http://service.s3.amazonaws.com/file.zip?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=1468000000).
Since these are large files and the pre-signed lifespan is so short, the temporary URL is no longer valid by the time wget retries, and the user gets a 403 Forbidden result.
Originally, when we noticed the problem, we were using 302 Found temporary redirects. A little research seemed to indicate we SHOULD have been using 307 Temporary Redirect. However, that didn't resolve the problem with wget. For grins and giggles, we tried 303 See Other, but that didn't work either.
Does anyone have any idea how to get wget to retry the original URL instead of the redirected URL?
Below is an example wget log:
--2016-07-06 10:29:51--  https://service.com/download/file.zip
Connecting to service.com (service.com)|10.0.0.1|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://service.s3.amazonaws.com/file.zip?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=1468000000 [following]
--2016-07-06 10:29:52--  https://service.s3.amazonaws.com/file.zip?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=1468000000
Resolving service.s3.amazonaws.com (service.s3.amazonaws.com)... 54.231.12.129
Connecting to service.s3.amazonaws.com (service.s3.amazonaws.com)|54.231.12.129|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2070666907 (1.9G) [application/zip]
Saving to: ‘file.zip’
file.zip  53%[=========>  ]  1.03G  --.-KB/s  in 18m 7s
2016-07-06 10:47:59 (995 KB/s) - Read error at byte 1107205784/2070666907 (The specified session has been invalidated for some reason.). Retrying.
--2016-07-06 10:48:00--  (try: 2)  https://service.s3.amazonaws.com/file.zip?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=1468000000
Connecting to service.s3.amazonaws.com (service.s3.amazonaws.com)|54.231.12.129|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2016-07-06 10:48:01 ERROR 403: Forbidden.
I had a similar issue, and a similar answer to panzerito's, but I broke it up into a script I called loopdone:
#!/bin/bash
until $1; do sleep 1; echo restarting; done
Then I can just run loopdone "wget -c http://my.url/" (including the quotes) to force it to run again and again (and resume, unless the server does not support it) until the exit code is 0 (meaning no error).
Bash code (the first command only needs to fail, so that $? starts out non-zero):
false; until [ "$?" -eq "0" ]; do wget https://example.com/download/file.zip -c; done
