What I'm doing is mildly insane, but since GET requests have a very strict size limit, Solr uses POST requests to the /solr/select URL to do what is "semantically" a GET.
I'm trying to put Varnish in front of Solr to do some caching. I put this in the vcl_recv function:
if (!(req.request == "GET" || req.request == "HEAD" ||
      (req.request == "POST" && req.url == "/solr/select"))) {
    /* We only deal with GET and HEAD by default */
    /* Modified to support POST to /solr/select */
    return (pass);
}
Varnish now tries to handle that request, except that it automatically converts the POST to a GET.
I'm aware all of that is fairly ridiculous and far from any best practices, but in any case, is there an easy way to use varnish this way?
I got it working after reading this tutorial.
What the tutorial doesn't say is that there is a bug in one of the required VMODs when used with Varnish 4.1; its effect is that the first POST request is passed to the backend with a truncated body.
I solved this by using Varnish 5, and it works like a charm.
If you want to give it a try, I have a Dockerfile for this:
Dockerfile:
FROM alpine:3.7
LABEL maintainer lloiacono#*******.com
RUN apk update \
    && apk add --no-cache varnish \
    && apk add git \
    && git clone https://github.com/varnish/varnish-modules.git \
    && apk add automake && apk add varnish-dev \
    && apk add autoconf && apk add libtool \
    && apk add py-docutils && apk add make \
    && cd varnish-modules/ \
    && ./bootstrap && ./configure && make && make install
COPY start.sh /usr/local/bin/docker-app-start
RUN chmod +x /usr/local/bin/docker-app-start
CMD ["docker-app-start"]
start.sh:
#!/bin/sh
set -xe
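# varnishd forks into the background; varnishlog then runs in the
# foreground, keeping the container alive and streaming request logs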
varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m
varnishlog
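To try it out, something like the following should work (a sketch: the image tag is arbitrary, and since the image does not copy a VCL file, default.vcl is mounted from the host):
docker build -t varnish-post .
docker run -d -p 80:80 \
    -v "$PWD/default.vcl:/etc/varnish/default.vcl:ro" \
    varnish-post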
You could try changing the POST into a GET and transforming the POST data into GET parameters (you would probably have to use inline C), then doing a lookup/fetch.
The GET request size limit is not actually mandated by the HTTP spec, and is not necessarily enforced by either Varnish or your back-end server. As you don't depend on intermediate caches and User-Agents outside your control to handle long URLs, you could give it a try.
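If you do get Varnish to cache these requests, a quick way to verify is to compare response headers across two identical requests (host, path, and body below are placeholders):
curl -s -D - -o /dev/null -X POST -d 'q=foo' http://localhost/solr/select
curl -s -D - -o /dev/null -X POST -d 'q=foo' http://localhost/solr/select
# On a cache hit, the second response typically carries a non-zero Age
# header and two transaction IDs in X-Varnish.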
I know that it is possible to use the && (and) operator to run multiple commands from the same alias. However, for long combinations it loses readability. For example:
save = !git status && git add -A && git commit -m \"$1\" && git push --force && git log && :
Is there a multi-line way to write it?
Maybe by wrapping it with {}, for example?
You can use a line escape (\) to break lines like this:
[alias]
    save = !git status \
        && git add -A \
        && git commit -m \"$1\" \
        && git push -f \
        && git log -1 \
        && : # Used to distinguish last command from arguments
You can also put multiple statements inside a function like this:
[alias]
    save = "!f() { \
        git status; \
        git add -A; \
        git commit -m \"$1\"; \
        git push -f; \
        git log -1; \
    }; \
    f; \
    unset f"
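Either way, the alias takes the commit message as its first argument:
git save "Update dependencies"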
See Also: Git Alias - Multiple Commands and Parameters
I'd refrain from writing such extensive aliases in the config file. You can also add new commands by putting an executable file named git-newcommand on your PATH. This could be a Bash script, a Python script, or even a binary, as long as it's executable and named with the prefix "git-".
In the case of scripts, you have to add the proper shebang:
#!/usr/bin/env python
Export the PATH, for example to include a bin directory in your home:
export PATH="${PATH}:${HOME}/bin"
This is more modular, more portable, and easier to debug.
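As a sketch of that approach, the save alias above could become a standalone command (the file name git-save and its location in ~/bin are assumptions; the steps just mirror the alias):
#!/bin/sh
# ~/bin/git-save, run as: git save "commit message"
set -e
git status
git add -A
git commit -m "$1"
git push -f
git log -1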
I have a basic Nginx Docker image, acting as a reverse proxy, that currently uses basic authentication and sits in front of my application server. I'm looking for a way to integrate it with our SSO solution in development, which uses JWT, but all of the documentation says this requires NGINX Plus. So, is it possible to do JWT validation in open-source Nginx, or do I need the paid version?
Sure, there are open-source implementations which you can use and customize for your case (example).
IMHO there are better implementations, which you can use as an "auth proxy" in front of your application. My favorite is keycloak-gatekeeper (you can use it with any OpenID IdP, not only with Keycloak), which provides authentication, authorization, token encryption, a refresh-token implementation, a small footprint, ...
There's also lua-resty-openidc: https://github.com/zmartzone/lua-resty-openidc
lua-resty-openidc is a library for NGINX implementing the OpenID Connect Relying Party (RP) and/or the OAuth 2.0 Resource Server (RS) functionality.
When used as an OpenID Connect Relying Party it authenticates users against an OpenID Connect Provider using OpenID Connect Discovery and the Basic Client Profile (i.e. the Authorization Code flow). When used as an OAuth 2.0 Resource Server it can validate OAuth 2.0 Bearer Access Tokens against an Authorization Server or, in case a JSON Web Token is used for an Access Token, verification can happen against a pre-configured secret/key.
Given that you have a configuration set up without authentication, I found this and got it to work: https://hub.docker.com/r/tomsmithokta/nginx-oss-okta which is entirely based on lua-resty-openidc, as mentioned above. The fact that it was already built was helpful for me, though.
First configure your Okta app in the Okta web GUI, then fill in the proper fields that are not commented out in the example NGINX conf. The only caveat is to uncomment redirect_uri and fill it in, and instead comment out or remove redirect_uri_path, which is a deprecated field. All the other things in the config are parameters you can play with, or just accept as-is.
By default it sends you to a headers page, but if you adjust the proxy_pass field you should be able to pass requests on to your app.
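For reference, a hypothetical way to run that image with your edited config mounted in (the in-container config path is an assumption; check the image's documentation for the real one):
docker pull tomsmithokta/nginx-oss-okta
docker run -d -p 8080:80 \
    -v "$PWD/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf:ro" \
    tomsmithokta/nginx-oss-okta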
Based on this gist: https://gist.github.com/abbaspour/af8dff3b297b0fcc6ba7c625c2d7c0a3
here's how I did it in a Dockerfile (based on buster-slim):
FROM python:3.9-slim as base
FROM base as builder
ENV LANG=en_GB.UTF-8 \
    LANGUAGE=en_GB.UTF-8 \
    PYTHONUNBUFFERED=True \
    PYTHONIOENCODING=UTF-8
RUN apt-get update \
    && apt-get install --no-install-recommends --no-install-suggests -y \
        build-essential \
        patch \
        git \
        wget \
        libssl-dev \
        libjwt-dev \
        libjansson-dev \
        libpcre3-dev \
        zlib1g-dev \
    && wget https://nginx.org/download/nginx-1.18.0.tar.gz \
    && tar -zxvf nginx-1.18.0.tar.gz \
    && git clone https://github.com/TeslaGov/ngx-http-auth-jwt-module \
    && cd nginx-1.18.0 \
    && ./configure --add-module=../ngx-http-auth-jwt-module \
        --with-http_ssl_module \
        --with-http_v2_module \
        --with-ld-opt="-L/usr/local/opt/openssl/lib" \
        --with-cc-opt="-I/usr/local/opt/openssl/include" \
    && make
FROM base
COPY --from=builder /nginx-1.18.0/objs/nginx /usr/sbin/nginx
COPY --from=builder /nginx-1.18.0/conf /usr/local/nginx/conf
ENV LANG=en_GB.UTF-8 \
    LANGUAGE=en_GB.UTF-8 \
    PYTHONUNBUFFERED=True \
    PYTHONIOENCODING=UTF-8
RUN apt-get update && \
    apt-get install --no-install-recommends --no-install-suggests -y \
        libssl-dev \
        libjwt-dev \
        libjansson-dev \
        libpcre3-dev \
        zlib1g-dev
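A quick sanity check after the build is to confirm the JWT module shows up in the compiled-in configure arguments (nginx -V prints them to stderr):
/usr/sbin/nginx -V 2>&1 | grep -- '--add-module=../ngx-http-auth-jwt-module'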
I installed Nginx with a WAF (using Docker):
mkdir -p /usr/src \
    && cd /usr/src/ \
    && git clone --depth 1 -b v3/master --single-branch https://github.com/SpiderLabs/ModSecurity \
    && cd ModSecurity \
    && git submodule init \
    && git submodule update \
    && ./build.sh \
    && ./configure \
    && make -j$(getconf _NPROCESSORS_ONLN) \
    && make install
... previous commands to install nginx from source...
    && cd /usr/src \
    && git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git \
    && cd /usr/src/nginx-$NGINX_VERSION \
    && ./configure --with-compat --add-dynamic-module=../ModSecurity-nginx \
    && make modules \
    && cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules \
    && mkdir /etc/nginx/modsec \
    && wget -P /etc/nginx/modsec/ https://raw.githubusercontent.com/SpiderLabs/ModSecurity/v3/master/modsecurity.conf-recommended \
    && mv /etc/nginx/modsec/modsecurity.conf-recommended /etc/nginx/modsec/modsecurity.conf \
    && sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/nginx/modsec/modsecurity.conf \
    && sed -i 's/SecRequestBodyInMemoryLimit 131072//' /etc/nginx/modsec/modsecurity.conf \
    && sed -i 's#SecAuditLog /var/log/modsec_audit.log#SecAuditLog /var/log/nginx/modsec_audit.log#' /etc/nginx/modsec/modsecurity.conf \
    && mkdir /opt \
    && cd /opt \
    && git clone -b v3.0/master --single-branch https://github.com/SpiderLabs/owasp-modsecurity-crs.git \
    && cd owasp-modsecurity-crs/ \
    && cp /opt/owasp-modsecurity-crs/crs-setup.conf.example /opt/owasp-modsecurity-crs/crs-setup.conf
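For context, these pieces are typically wired together through a single rules file that Nginx loads via the modsecurity_rules_file directive; a sketch (the main.conf name and Include layout follow the usual ModSecurity-nginx convention and are assumptions here):
cat > /etc/nginx/modsec/main.conf <<'EOF'
# Base engine settings, then the CRS setup and rules installed above
Include /etc/nginx/modsec/modsecurity.conf
Include /opt/owasp-modsecurity-crs/crs-setup.conf
Include /opt/owasp-modsecurity-crs/rules/*.conf
EOF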
But then Nginx suddenly began to report this error:
nginx: [emerg] "modsecurity_rules_file" directive Rules error. File: /opt/owasp-modsecurity-crs/crs-setup.conf. Line: 96. Column: 43. SecCollectionTimeout is not yet supported.
In the documentation:
==============
#
# -- [[ Collection timeout ]] --------------------------------------------------
#
# Set the SecCollectionTimeout directive from the ModSecurity default (1 hour)
# to a lower setting which is appropriate to most sites.
# This increases performance by cleaning out stale collection (block) entries.
#
# This value should be greater than or equal to:
# tx.reput_block_duration (see section "Blocking Based on IP Reputation") and
# tx.dos_block_timeout (see section "Anti-Automation / DoS Protection").
#
# Ref: https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#wiki-SecCollectionTimeout
# Please keep this directive uncommented.
# Default: 600 (10 minutes)
SecCollectionTimeout 600
==============
I solved it by adding this line to the command (commenting out the directive):
&& sed -i 's/SecCollectionTimeout 600/# SecCollectionTimeout 600/' /opt/owasp-modsecurity-crs/crs-setup.conf
But I do not know what consequences this has, or whether it is the correct way to fix it. Is there an example that can guide me?
I think that you need to reconfigure the OWASP WAF to resolve that issue; check the link below for that. Last time, my friend resolved that issue by reconfiguring it:
https://support.cloudflare.com/hc/en-us/articles/115000223771-How-do-I-configure-the-WAF-
Answering my own question. Source:
https://github.com/SpiderLabs/ModSecurity/issues/1705
It happens due to the fact that the SecCollectionTimeout directive is not currently configurable in libModSecurity (aka v3), as stated in the reference manual.
Commenting out the SecCollectionTimeout directive in crs-setup.conf solves the problem without side effects.
A change to the parser to avoid the error is underway here; you can also choose to apply this change to the code yourself for now. It's already being merged to main.
The funny thing is that I asked this question on Stack Overflow 20 days ago, and the issue was raised on GitHub 22 days ago; when I searched at the time for anything related to "SecCollectionTimeout", there was nothing.
In short, the code posted above is fully functional. The only thing I did was rebuild the image so that it pulled fresh code from the https://github.com/SpiderLabs/ModSecurity repository, and that was it.
I have a build tool that runs a patch command, and if the patch command returns non-zero, the build fails. I am applying a patch that may or may not already be applied, so I use the -N option to patch, which skips it as it should. However, when it does skip, patch returns non-zero. Is there a way to force it to return 0 even if it skips applying patches? I couldn't find any such capability in the man page.
The accepted answer did not work for me because patch was returning 1 on other types of errors as well (maybe a different version or something).
So instead, in case of an error I check the output for the "Skipping patch" message, ignoring those cases but returning an error on other issues.
OUT="$(patch -p0 --forward < FILENAME)" || echo "${OUT}" | grep "Skipping patch" -q || (echo "$OUT" && false);
I believe that the following recipe should do the trick; it is what I am using in the same situation:
patches: $(wildcard $(SOMEWHERE)/patches/*.patch)
	for patch_file in $^; do \
	    patch --strip=2 --unified --backup --forward --directory=<somewhere> --input=$$patch_file; \
	    retCode=$$?; \
	    [[ $$retCode -gt 1 ]] && exit $$retCode; \
	done; \
	exit 0
This recipe loops over the dependencies (in my case, the patch files) and calls patch for each one. The "trick" I am relying on is that patch returns 1 if the patch has already been applied, and higher numbers for other errors (such as a non-existent patch file). The DIAGNOSTICS section of the patch manual page describes the return-code situation. YMMV.
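The same idea works outside of make as a plain-shell sketch (the patches/ path and --strip level are placeholders):
for patch_file in patches/*.patch; do
    patch --strip=2 --unified --backup --forward --input="$patch_file"
    rc=$?
    # 0 = applied, 1 = already applied (skipped), anything higher = real error
    [ "$rc" -gt 1 ] && exit "$rc"
done
exit 0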
You can also do it as a one-liner:
patch -p0 --forward < patches/patch-babylonjs.diff || true
So if you want to apply the patch and make sure it worked:
(patch -p0 --forward < patches/patch-babylonjs.diff || true) && echo OK
No matter whether the patch has already been applied or not, you'll always get "OK" displayed here.
Below is a script that iterates on the above idea from @fsw and handles removal of .rej files as necessary.
#!/bin/bash
set +x
set -euo pipefail # pipefail requires bash rather than plain sh
bn=$(basename "$0")
patch="$1"; shift
r=$(mktemp /tmp/"$bn".XXXX)
if ! out=$(patch -p1 -N -r "$r" < "$patch")
then
echo "$out" | grep -q "Reversed (or previously applied) patch detected! Skipping patch."
test -s "$r" # Make sure we have rejects.
else
test -f "$r" && ! test -s "$r" # Make sure we have no rejects.
fi
rm -f "$r"
I wish to get a particular set of files, and the only access I have to that box is through its HTTP interface, which I can use via wget. Now the issue is that I want the latest files, and there are multiple files which must have the same timestamp.
wget http://myserver/abc_20090901.tgz
wget http://myserver/xyz_20090901.tgz
wget http://myserver/pqr_20090901.tgz
The catch is that I do not know whether all of the above files exist, and I want to download them only when all three files with the above timestamp exist.
The other issue is that there is another file in a separate folder which I also need to fetch (see below). How do I get these files? Any suggestions?
wget http://myserver/text/myfile_20090901.txt
wget offers some basic resource presence detection with its --spider option. Something like this should do the trick:
wget --spider http://myserver/abc_20090901.tgz &&
wget --spider http://myserver/xyz_20090901.tgz &&
wget --spider http://myserver/pqr_20090901.tgz &&
wget http://myserver/abc_20090901.tgz &&
wget http://myserver/xyz_20090901.tgz &&
wget http://myserver/pqr_20090901.tgz &&
wget http://myserver/text/myfile_20090901.txt
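If the list grows, the same check-then-fetch pattern can be wrapped in a small loop (file names and the timestamp are taken from the question; adjust as needed):
set -e
# First verify that every archive exists ...
for name in abc xyz pqr; do
    wget --spider "http://myserver/${name}_20090901.tgz"
done
# ... then actually download them, plus the extra file.
for name in abc xyz pqr; do
    wget "http://myserver/${name}_20090901.tgz"
done
wget http://myserver/text/myfile_20090901.txt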