I have a basic Nginx Docker image acting as a reverse proxy in front of my application server, and it currently uses basic authentication. I'm looking for a way to integrate it with the SSO solution we have in development, which uses JWT, but all of the documentation says that requires NGINX Plus. So, is it possible to do JWT validation inside open-source Nginx, or do I need the paid version?
Sure, there is open-source code which you can use and customize for your case (example).
IMHO there are better implementations, which you can use as an "auth proxy" in front of your application. My favorite is keycloak-gatekeeper (you can use it with any OpenID Connect IdP, not only Keycloak), which provides authentication, authorization, token encryption and refresh-token handling, all with a small footprint; a rough invocation is sketched below.
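As a rough sketch (flag names can differ between gatekeeper releases, and every hostname, ID, key and secret below is a placeholder), running it in front of your app looks something like this:
keycloak-gatekeeper \
  --listen=:3000 \
  --discovery-url=https://sso.example.com/auth/realms/myrealm \
  --client-id=my-proxy \
  --client-secret=REPLACE_ME \
  --redirection-url=https://proxy.example.com \
  --upstream-url=http://app:8080 \
  --enable-refresh-tokens=true \
  --encryption-key=AgXa7xRcoClDEU0ZDSH4X0XhL5Qy2Z2j \
  --resources="uri=/*"
Requests that pass authentication (and any role checks you add to --resources) are forwarded to the upstream application; everything else is redirected to the IdP.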
There's also lua-resty-openidc: https://github.com/zmartzone/lua-resty-openidc
lua-resty-openidc is a library for NGINX implementing the OpenID Connect Relying Party (RP) and/or the OAuth 2.0 Resource Server (RS) functionality.
When used as an OpenID Connect Relying Party it authenticates users against an OpenID Connect Provider using OpenID Connect Discovery and the Basic Client Profile (i.e. the Authorization Code flow). When used as an OAuth 2.0 Resource Server it can validate OAuth 2.0 Bearer Access Tokens against an Authorization Server or, in case a JSON Web Token is used for an Access Token, verification can happen against a pre-configured secret/key.
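For the Resource Server case, which matches the JWT-validation question here, a minimal OpenResty sketch looks roughly like this (the discovery URL, resolver and upstream are placeholders, and lua-resty-openidc plus its dependencies must be installed):
# nginx.conf (OpenResty) - sketch only
events {}

http {
    resolver 8.8.8.8;
    lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;

    server {
        listen 80;

        location / {
            access_by_lua_block {
                -- validate the incoming Bearer JWT against the IdP's keys
                local opts = {
                    discovery = "https://idp.example.com/.well-known/openid-configuration",
                }
                local res, err = require("resty.openidc").bearer_jwt_verify(opts)
                if err or not res then
                    ngx.log(ngx.ERR, "JWT validation failed: ", err or "no token")
                    ngx.exit(ngx.HTTP_UNAUTHORIZED)
                end
            }
            proxy_pass http://app:8080;
        }
    }
}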
Assuming you have a configuration already set up without authentication, I found this and got it to work: https://hub.docker.com/r/tomsmithokta/nginx-oss-okta. It is entirely based on lua-resty-openidc, as mentioned above, but the fact that it was already built was helpful for me.
First configure your Okta app in the Okta web GUI, then fill in the proper fields (the ones that are not commented out) in the example NGINX conf. The only caveat is to uncomment redirect_uri and fill it in, and to comment out or remove redirect_uri_path, which is a deprecated field. Everything else in the config is a parameter you can play with, or just accept as is.
By default it proxies you to a headers page, but if you adjust the proxy_pass directive you should be able to pass requests on to your app; see the sketch below.
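In other words, the relevant part of the sample conf ends up looking something like this (Lua fragment; the hostname, callback path, IDs and upstream are placeholders from my setup, not the image's exact defaults):
-- inside the access_by_lua_block of the sample nginx conf
local opts = {
    redirect_uri = "https://proxy.example.com/authorization-code/callback",
    -- redirect_uri_path = "/authorization-code/callback",  -- deprecated, leave it commented out or remove it
    discovery = "https://dev-123456.okta.com/oauth2/default/.well-known/openid-configuration",
    client_id = "<okta client id>",
    client_secret = "<okta client secret>",
}
The location's proxy_pass is then pointed at your application instead of the default headers page.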
Based on this gist: https://gist.github.com/abbaspour/af8dff3b297b0fcc6ba7c625c2d7c0a3
here's how I did it in a Dockerfile (based on buster-slim):
FROM python:3.9-slim as base
FROM base as builder
ENV LANG=en_GB.UTF-8 \
    LANGUAGE=en_GB.UTF-8 \
    PYTHONUNBUFFERED=True \
    PYTHONIOENCODING=UTF-8
RUN apt-get update \
&& apt-get install --no-install-recommends --no-install-suggests -y \
build-essential \
patch \
git \
wget \
libssl-dev \
libjwt-dev \
libjansson-dev \
libpcre3-dev \
zlib1g-dev \
&& wget https://nginx.org/download/nginx-1.18.0.tar.gz \
&& tar -zxvf nginx-1.18.0.tar.gz \
&& git clone https://github.com/TeslaGov/ngx-http-auth-jwt-module \
&& cd nginx-1.18.0 \
&& ./configure --add-module=../ngx-http-auth-jwt-module \
--with-http_ssl_module \
--with-http_v2_module \
--with-ld-opt="-L/usr/local/opt/openssl/lib" \
--with-cc-opt="-I/usr/local/opt/openssl/include" \
&& make
FROM base
COPY --from=builder /nginx-1.18.0/objs/nginx /usr/sbin/nginx
COPY --from=builder /nginx-1.18.0/conf /usr/local/nginx/conf
ENV LANG=en_GB.UTF-8 \
    LANGUAGE=en_GB.UTF-8 \
    PYTHONUNBUFFERED=True \
    PYTHONIOENCODING=UTF-8
RUN apt-get update && \
apt-get install --no-install-recommends --no-install-suggests -y \
libssl-dev \
libjwt-dev \
libjansson-dev \
libpcre3-dev \
zlib1g-dev
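To actually use the module you still need an nginx.conf that enables it, and something like CMD ["/usr/sbin/nginx", "-g", "daemon off;"] at the end of the final stage to start NGINX in the foreground. The directives below are taken from the module's README as I remember it at the time (names can change between releases), and the key, login URL and upstream are placeholders:
# /usr/local/nginx/conf/nginx.conf - sketch only
events {}

http {
    server {
        listen 80;

        # shared secret for HS256-signed tokens (placeholder value)
        auth_jwt_key "00112233445566778899AABBCCDDEEFF";
        auth_jwt_loginurl "https://sso.example.com/login";
        auth_jwt_enabled on;
        auth_jwt_algorithm HS256;

        location / {
            proxy_pass http://app:8080;
        }
    }
}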
I am new to setting up the gRPC-Web based client side. Our backend is already up and running on Go with gRPC. I am testing out what it's like to convert the .proto file into TS. I can successfully generate some of the files; however, I am missing the TypeScript "Service" file.
I pretty much followed the instructions from the grpc_tools_node_protoc_ts site.
I set up a script to generate files for 1) the service and 2) the client model:
PROTOC_GEN_TS_PATH="./node_modules/.bin/protoc-gen-ts"
GRPC_TOOLS_NODE_PROTOC_PLUGIN="./node_modules/.bin/grpc_tools_node_protoc_plugin"
GRPC_TOOLS_NODE_PROTOC="./node_modules/.bin/grpc_tools_node_protoc"
OUT_DIR="./_protos_/proto/"
# Generate the JavaScript code
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-grpc="${GRPC_TOOLS_NODE_PROTOC_PLUGIN}" \
--js_out=import_style=commonjs,binary:"${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-ts="${PROTOC_GEN_TS_PATH}" \
--ts_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
What I get in the output is missing the *_grpc_pb.d.ts file. I am under the impression I need this?
I have also tried adding the service option to the flag:
--ts_out="service=grpc-web:${OUT_DIR}" \
This now generates a *_pb_service.d.ts output file, still without the *_grpc_pb.d.ts file. Reading the docs more, I'm thinking service=grpc-web is actually the option I need, since we're not running a Node server.
Does this seem right? This is what I have now:
# Note the ts_out flag "service=grpc-node":
# This does generate the *_grpc_pb.d.ts but not the service files
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-node:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
# Note the ts_out flag "service=grpc-web":
# This does generate the service files, but not the *_grpc_pb.d.ts file
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-web:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
Is it possible to use firebase-functions from my laptop? If not, is firebase-admin the only option remaining?
Here are some examples:
How can I rent and use my own servers for cloud functions?
Listen only to additions to a cloud firestore collection?
Does Firebase Admin SDK perform any caching?
I am able to create an index.js file on my laptop, npm install the firebase-admin module, connect to my Firestore database and make changes to data just fine using admin credentials. But when I also npm install firebase-functions and try to make use of the event triggers onCreate/onWrite/onUpdate/onDelete, they never receive any updates.
To my understanding, the only way to make use of event triggers is by deploying to Cloud Functions, since they need Google's infrastructure and can't run on your local machine, which you can do with the firebase-admin package. You can use the local emulator(?), but it isn't production-ready and isn't meant for that use case(?).
So, in order to listen for new events in my Firestore database using only my laptop (not the Cloud Functions platform or some other server-hosted option), I have to use .onSnapshot() from the firebase-admin package, as in the sketch below.
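For reference, a minimal version of that listener (the service-account path and collection name are placeholders) looks roughly like this:
// listen.js - minimal firebase-admin snapshot listener
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.cert(require('./service-account.json')),
});

admin.firestore()
  .collection('items')
  .onSnapshot((snapshot) => {
    snapshot.docChanges().forEach((change) => {
      if (change.type === 'added') {
        console.log('New document:', change.doc.id, change.doc.data());
      }
    });
  });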
However, that module is unable to cache, and you are left querying the whole Firestore database, downloading every document.
Is this correct? Or is there any way to make firebase-functions work from my laptop using firebase-admin plus admin credentials, almost as if I had uploaded the file to the cloud platform? I don't require this part of the data to be in the cloud, so I want to make changes and adjust the Firestore database from my laptop's terminal.
You will need to balance scalability and what you want to achieve. One approach uses a Parse Server and a Docker container that works with Express. This method's advantage is its flexibility as to where the Parse Server can run: you can run it on your laptop or move it to Google Cloud if you need more processing power. However, it is worth noting that the container cannot access all Firebase trigger types.
I am not sure which Operating System you are using, but for Ubuntu, you can install Docker and run Parse Server like this:
# Update the apt package first
$ sudo apt-get update
# install dependencies
$ sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common \
git
# Add Docker’s GPG key:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add the Docker repository
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
# Update the apt package again
$ sudo apt-get update
# Install Docker
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
$ git clone https://github.com/parse-community/parse-server
$ cd parse-server
$ docker build --tag parse-server .
$ docker run --name my-mongo -d mongo
To run the Parse Server:
$ docker run --name my-parse-server -v config-vol:/parse-server/config \
-p 1337:1337 --link my-mongo:mongo -d parse-server --appId APPLICATION_ID \
--masterKey MASTER_KEY --databaseURI mongodb://mongo/test
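To check that the server is up, you can create a test object through the Parse REST API (the application ID must match the one passed above; the class and fields are just examples):
$ curl -X POST \
  -H "X-Parse-Application-Id: APPLICATION_ID" \
  -H "Content-Type: application/json" \
  -d '{"score": 1337, "playerName": "test"}' \
  http://localhost:1337/parse/classes/GameScore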
To link Firebase and the Docker Parse Server, you will need an adapter. The container above is an example, but it should be enough to get you started running from your laptop.
I created a lot of logging metrics and alert policies in line with the CIS Benchmark requirements, and in testing this, it worked fine, but only for a while. Now, when I go to edit the logging metric, I can see that changes I make are displayed in the results using the filter I have, but when I go to Stackdriver and look at the alert policy, it hasn't picked up anything. I haven't changed anything, to my knowledge.
Here are the commands I've used to create all this:
gcloud beta logging metrics create vpc_firewall_changes \
--description="Actions related to VPC firewall changes (CIS 2.7)" \
--log-filter="resource.type=\"gce_firewall_rule\" AND \
jsonPayload.event_subtype=\"compute.firewalls.patch\" OR \
jsonPayload.event_subtype=\"compute.firewalls.insert\""
gcloud alpha monitoring channels create \
--type=email \
--display-name="VPC Firewall Changes" \
--description="E-mail channel for alerting VPC firewall changes" \
--channel-labels=email_address=[my email]
gcloud alpha monitoring policies create \
--notification-channels=[channel uri] \
--condition-filter="resource.type=\"global\" AND metric.type=\"logging.googleapis.com/user/vpc_firewall_changes\"" \
--aggregation="{
\"alignmentPeriod\": \"60s\", \
\"crossSeriesReducer\": \"REDUCE_COUNT\", \
\"perSeriesAligner\": \"ALIGN_RATE\" \
}" \
--condition-display-name="logging/user/vpc_firewall_changes [COUNT]" \
--duration=60s \
--if="> 0.001" \
--trigger-count=1 \
--combiner=OR \
--display-name="VPC Firewall Changes Alerts" \
--documentation="You are receiving this alert because changes have been made relating to the VPC firewall rules."
Has anyone experienced this?
I am trying to create Airflow connections on Cloud Composer using the gcloud CLI.
I followed this document and ran the following command:
https://cloud.google.com/composer/docs/how-to/managing/connections#creating_new_airflow_connections
gcloud composer environments run my-env \
--project my-project \
--location asia-northeast1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra '{"extra\__google\_cloud\_platform\__project": "my-project", \
"extra\__google\_cloud\_platform\__key_path": "/home/airflow/gcs/data/keys/my-key.json", \
"extra\__google\_cloud\_platform\__scope": "https://www.googleapis.com/auth/cloud-platform"}'
kubeconfig entry generated for asia-northeast1-my-env-44718514-gke.
Executing within the following kubectl namespace: default
W0525 22:51:11.244104 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244246 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244256 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
[2019-05-25 14:51:13,215] {settings.py:176} INFO - setting.configure_orm():
Using pool settings. pool_size=5, pool_recycle=1800
[2019-05-25 14:51:13,598] {default_celery.py:80} WARNING - You have configured a
result_backend of redis://airflow-redis-service:6379/0, it is highly recommended to use an
alternative result_backend (i.e. a database).
[2019-05-25 14:51:13,600] {__init__.py:51} INFO - Using executor CeleryExecutor
[2019-05-25 14:51:13,680] {app.py:51} WARNING - Using default Composer Environment Variables.
Overrides have not been applied.
[2019-05-25 14:51:13,688] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
[2019-05-25 14:51:13,698] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
The connection is successfully created, but the project ID, key file path and scope are empty, so the connection is invalid.
When I create it manually, those properties are not empty. Am I missing something?
Composer image: composer-1.5.0-airflow-1.10.1
I am not able to replicate it. When I run the following command the connection does get added with the extras field:
gcloud composer environments run my-env \
--project my-project \
--location europe-west1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra='{"extra__google_cloud_platform__project": "my-project", "extra__google_cloud_platform__key_path":"/tesf"}
I found a syntax error in how I was escaping the quotation marks. This works fine:
$ CONNECTION_CREATE_COMMAND="gcloud composer environments run $COMPOSER_ENVIRONMENT \
--project $COMPOSER_PROJECT \
--location ${COMPOSER_LOCATION} \
connections -- --add \
--conn_id=${CONN_ID_BASE}_${app}_${c} \
--conn_type=google_cloud_platform \
--conn_extra '{\"extra__google_cloud_platform__project\": \"${BQ_PROJECT}\", \
\"extra__google_cloud_platform__key_path\": \"${KEY_JSON_FILE_PATH}\", \
\"extra__google_cloud_platform__scope\": \"https://www.googleapis.com/auth/cloud-platform\"}'"
$ eval $CONNECTION_CREATE_COMMAND
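You can then confirm from the command line that the connection exists; Airflow 1.10's CLI supports connections --list, which can be invoked the same way:
$ gcloud composer environments run $COMPOSER_ENVIRONMENT \
    --project $COMPOSER_PROJECT \
    --location ${COMPOSER_LOCATION} \
    connections -- --list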
What I'm doing is mildly insane, but since GET requests have a very strict size limit, Solr uses POST requests to the /solr/select URL to do what is "semantically" a GET.
I'm trying to put Varnish in front of Solr to do some caching. I put this in the vcl_recv function:
if (!(req.request == "GET" || req.request == "HEAD" ||
(req.request == "POST" && req.url == "/solr/select"))) {
/* We only deal with GET and HEAD by default */
/* Modified to support POST to /solr/select */
return (pass);
}
and Varnish now tries to handle it, except that it automatically converts the POST into a GET.
I'm aware all of this is fairly ridiculous and far from any best practice, but in any case, is there an easy way to use Varnish this way?
I got it working after reading this tutorial.
What the tutorial doesn't say is that there is a bug in one of the required VMODs when used with Varnish 4.1; it causes the first POST request to be passed to the backend with a truncated body.
I solved this by using Varnish 5, and it works like a charm.
If you want to give it a try, I have a Dockerfile for this:
Dockerfile:
FROM alpine:3.7
LABEL maintainer lloiacono#*******.com
RUN apk update \
&& apk add --no-cache varnish \
&& apk add git \
&& git clone https://github.com/varnish/varnish-modules.git \
&& apk add automake && apk add varnish-dev \
&& apk add autoconf && apk add libtool \
&& apk add py-docutils && apk add make \
&& cd varnish-modules/ \
&& ./bootstrap && ./configure && make && make install
COPY start.sh /usr/local/bin/docker-app-start
RUN chmod +x /usr/local/bin/docker-app-start
CMD ["docker-app-start"]
start.sh
#!/bin/sh
set -xe
varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m
varnishlog
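The VCL itself (the /etc/varnish/default.vcl the start script points at, which you would also COPY into the image) follows the usual caching-POST pattern: buffer the request body, include it in the hash, and switch the method back to POST on the backend side. This is only a sketch, assuming the bodyaccess VMOD built above; the backend, URL and body-size limit are placeholders:
vcl 4.0;

import std;
import bodyaccess;

backend solr {
    .host = "solr";
    .port = "8983";
}

sub vcl_recv {
    unset req.http.X-Body-Len;
    if (req.method == "POST" && req.url == "/solr/select") {
        # buffer the request body so it can be hashed and replayed
        std.cache_req_body(500KB);
        set req.http.X-Body-Len = bodyaccess.len_req_body();
        return (hash);
    }
}

sub vcl_hash {
    # make the POST body part of the cache key
    if (req.http.X-Body-Len) {
        bodyaccess.hash_req_body();
    } else {
        hash_data("");
    }
}

sub vcl_backend_fetch {
    # Varnish rewrites cached POSTs as GETs; switch the method back
    if (bereq.http.X-Body-Len) {
        set bereq.method = "POST";
    }
}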
You could try changing the POST into a GET and transforming the POST data into GET parameters (you would probably have to use inline C), then doing a lookup/fetch.
The GET request size limit from the HTTP spec is not necessarily enforced by either Varnish or your back-end server. Since you don't depend on intermediate caches or user agents outside your control to handle long URLs, you could give it a try.