nginx redis/memcached modules gzip_flag - nginx

Both modules seem to use the same code for redis_gzip_flag and memcached_gzip_flag, but neither provides any instructions about this flag or how to set it, and Redis strings don't support flags at all.
So what is this flag?
Where do I set it in Redis?
What number should I choose in the nginx config?

I hadn't heard of this either, but I found an example here; it looks like you set it manually in your location block when you know the data you're going to be requesting from Redis is gzipped.
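For illustration, a minimal sketch of such a location block, assuming a local Redis and the third-party ngx_http_redis module (the key layout and flag value are illustrative, not prescriptive):

location /cached {
    set $redis_key "cache:$uri";   # key under which the gzipped value was stored
    redis_pass 127.0.0.1:6379;
    redis_gzip_flag 1;             # tell nginx the fetched value is gzip-compressed
    default_type text/html;
    gunzip on;                     # optional: decompress for clients that don't accept gzip
}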

How to define https connection in Airflow using environment variables

In Airflow, http (and other) connections can be defined as environment variables. However, it is hard to use an https scheme for these connections.
Such a connection could be:
export AIRFLOW_CONN_MY_HTTP_CONN=http://example.com
However, defining a secure connection is not possible:
export AIRFLOW_CONN_MY_HTTP_CONN=https://example.com
This is because Airflow strips the scheme (https), and in the final connection object the URL gets http as its scheme.
It turns out that there is a possibility to use https by defining the connection like this:
export AIRFLOW_CONN_MY_HTTP_CONN=https://example.com/https
The second https is called schema in the Airflow code (as in DSNs, e.g. postgresql://user:passw@host/schema). This schema is then used as the scheme when the final URL in the connection object is constructed.
I am wondering if this is by design, or just an unfortunate mix-up of scheme and schema.
For those who land on this question in the future: I can confirm that @jjmurre's answer works well for 2.1.3.
In this case we need a URI-encoded string.
export AIRFLOW_CONN_SLACK='http://https%3a%2f%2fhooks.slack.com%2fservices%2f...'
See this post for more details.
Hope this can save other fellows the hour I spent investigating.
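If you need to generate that encoded value, one quick way (assuming python3 is available) is:

python3 -c 'import urllib.parse; print(urllib.parse.quote("https://hooks.slack.com/services/...", safe=""))'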
You should use Connections, and then you can specify the schema.
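For example, with the Airflow 2.x CLI (the connection id and host here are illustrative):

airflow connections add my_http_conn \
    --conn-type http \
    --conn-host example.com \
    --conn-schema https \
    --conn-port 443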
This is what worked for me using bitnami airflow:
.env
MY_SERVER=my-conn-type://xxx.com:443/https
docker-compose.yml
environment:
- AIRFLOW_CONN_MY_SERVER=${MY_SERVER}

GoReplay - replay from .log instead of .gor

I am looking into GoReplay to reproduce part of the production traffic that occurred yesterday.
The traffic I want to reproduce has been recorded with nginx, and I can save it as a .log or .csv file.
From what I can tell from the replay http traffic docs it is possible to reproduce traffic using a command like:
sudo gor --input-file request.gor --output-http="http://localhost:3001"
but this requires a .gor file.
My question is, is the reproduction of traffic (using GoReplay) restricted to .gor files, or could I use nginx .log files to do so?
If this is not possible, and given that I don't have a .gor file describing yesterday's requests, would you recommend creating a conversion script to turn the log files into .gor files, or can you recommend a better approach?
After asking this question on the GoReplay GitHub page, I got the answer that:
* there is no way to reproduce traffic directly from logs;
* you must use .gor files to recreate the traffic.
Thus, the only way to replay this traffic is to write a .log to .gor file converter.
link to official answer: https://github.com/buger/goreplay/issues/668
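As a starting point for such a converter, here is a rough sketch in Python. It assumes nginx's default "combined" log format and the documented .gor layout (a header line of payload type, id, and timestamp in nanoseconds, followed by the raw HTTP payload, with records separated by a 🐵🙈🙉 line). Note that an access log only records the request line, so request bodies and most headers cannot be recovered this way:

#!/usr/bin/env python3
# Rough sketch: convert nginx "combined" access-log entries into a .gor file.
# Usage: convert.py access.log requests.gor target-host
import re
import sys
import uuid
from datetime import datetime

LINE_RE = re.compile(r'\[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) HTTP/[\d.]+"')

def payloads(lines, host):
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue  # skip lines that don't match the combined format
        ts = datetime.strptime(m.group("time"), "%d/%b/%Y:%H:%M:%S %z")
        header = "1 %s %d\n" % (uuid.uuid4().hex, int(ts.timestamp() * 1e9))  # 1 = request
        request = "%s %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (m.group("method"), m.group("path"), host)
        yield header + request

if __name__ == "__main__":
    with open(sys.argv[1]) as src, open(sys.argv[2], "w") as dst:
        for p in payloads(src, sys.argv[3]):
            dst.write(p + "\n🐵🙈🙉\n")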
I've found that I can use another package to replay the logs I have, as-is, locally. At the same time, you can have goreplay listen for that traffic, capture it, and save it to .gor files. Then you can run goreplay with those newly created files, updating the domain and whatever else you need.
Let me know if you want me to provide a step-by-step.
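For illustration, that capture-and-replay loop could look like this (ports and hostnames are assumptions):

# 1. While the old logs are replayed against a local server on :8080,
#    capture the live traffic (goreplay writes chunked files such as requests_0.gor):
sudo gor --input-raw :8080 --output-file requests.gor

# 2. Then replay a captured file against the target:
sudo gor --input-file requests_0.gor --output-http="http://localhost:3001"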

nginx write log to sqlite

Please, could you help me?
Can I make NGINX write its access log directly to an SQLite table that I will create with the same fields as access.log?
I know I can try to do it with Lua, but I do not know how to make nginx trigger a Lua script for every access-log record.
You would use the log_by_lua phase to write access logs, as it runs last and allows you to access variables like upstream_response_time, etc.
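A minimal sketch of that approach under OpenResty, assuming the lsqlite3 Lua binding is installed and the access_log table already exists (opening the database on every request is shown only for brevity; SQLite writes block the worker, so in production you would buffer and flush in batches):

location / {
    proxy_pass http://backend;

    log_by_lua_block {
        local sqlite3 = require("lsqlite3")
        local db = sqlite3.open("/var/log/nginx/access.db")
        local stmt = db:prepare(
            "INSERT INTO access_log (remote_addr, request, status, upstream_time) VALUES (?, ?, ?, ?)")
        stmt:bind_values(ngx.var.remote_addr, ngx.var.request,
                         ngx.var.status, ngx.var.upstream_response_time or "")
        stmt:step()
        stmt:finalize()
        db:close()
    }
}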

patching Meteor default_connection

I want to change the current default DDP connection and reconnect to another URL.
(This is for switching the ELB port when it fails to use websockets, as in this article.)
Since I haven't found a proper way in the documentation, I tried patching it like this:
Meteor.connection = DDP.connect('new server url')
but it seemed to keep using the existing connection.
After trying several approaches in the browser console, I finally got something working:
Meteor.disconnect();
Meteor.default_connection._stream.rawUrl = 'new server url';
Meteor.reconnect();
But I think this is a sort of hack, since it is not documented.
Do you know a better way to change the default URL?
And when and what does DDP_DEFAULT_CONNECTION_URL affect?
P.S. I'm using Meteor 1.3.5.1.
According to the source
Meteor.reconnect({ _forced: 1, url: 'new.url' });
will reconnect to a different URL.
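If you want the switch to happen automatically when the primary endpoint fails, you could combine this with a reactive status check. A sketch (FALLBACK_URL is an assumed constant; _forced and url are the same undocumented options as above):

// Fail over to another DDP endpoint after repeated retries.
Tracker.autorun(function () {
  var status = Meteor.status();
  if (!status.connected && status.retryCount > 3) {
    Meteor.reconnect({ _forced: 1, url: FALLBACK_URL });
  }
});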

BizTalk file send port with a variable path

Is it possible to make a send port change its output location based on a promoted property?
We have an interface that needs to send files to a different location depending on the client. We add clients on a regular basis, so adding a new send port each time (both in the administration console and in the orchestration) would require a lot of maintenance, when the only thing that changes is the directory.
The folders are like this ...
\\server\SO\client1\Out
\\server\SO\client2\Out
\\server\SO\client3\Out
I tried using the SourceFilename to create a file name like client1\Out\filename.xml but this doesn't work.
Is there any way to do this with a single send port?
It is possible to set the OutboundTransportLocation property in context. This property contains the full path/name of the file that will be output by the file adapter. So in your case I guess you could do something along these lines (if it had to be done in a pipeline component):
message.Context.Write(
    OutboundTransportLocation.Name,
    OutboundTransportLocation.Namespace,
    string.Format(@"\\server\SO\{0}\Out", client));
Of course you can do a similar thing in your orchestration.
No need of a dynamic port...
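In an orchestration, the equivalent would be a Message Assignment shape along these lines (a sketch; client is an assumed string variable, and the file name part is illustrative):

// Message Assignment shape
Message_Out(BTS.OutboundTransportLocation) =
    System.String.Format(@"\\server\SO\{0}\Out\output.xml", client);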
