I am new to setting up the client side of gRPC-web. Our backend is already up and running on Go with gRPC. I am trying out converting the .proto file into TypeScript. I can successfully generate some of the files; however, I am missing the TypeScript "Service" file.
I pretty much followed the instructions from the grpc_tools_node_protoc_ts site.
I set up a script to generate files for 1) the service and 2) the client model:
PROTOC_GEN_TS_PATH="./node_modules/.bin/protoc-gen-ts"
GRPC_TOOLS_NODE_PROTOC_PLUGIN="./node_modules/.bin/grpc_tools_node_protoc_plugin"
GRPC_TOOLS_NODE_PROTOC="./node_modules/.bin/grpc_tools_node_protoc"
OUT_DIR="./_protos_/proto/"
# JavaScript code generating
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-grpc="${GRPC_TOOLS_NODE_PROTOC_PLUGIN}" \
--js_out=import_style=commonjs,binary:"${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-ts="${PROTOC_GEN_TS_PATH}" \
--ts_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
The output is missing the *_grpc_pb.d.ts file, which I am under the impression I need. 🤷🏻‍♂️
I have also tried adding the service option to the flag:
--ts_out="service=grpc-web:${OUT_DIR}" \
This now generates a *_pb_service.d.ts output file, but still no *_grpc_pb.d.ts file. Reading the docs further, I am thinking this service=grpc-web is actually the option I need, since we're not running a Node server.
Does this seem right? This is what I have now:
# Note the ts_out flag "service=grpc-node":
# This does generate the *_grpc_pb.d.ts but not the service files
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-node:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
# Note the ts_out flag "service=grpc-web":
# This does generate the service files, but not the *_grpc_pb.d.ts file
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-web:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
I tried to create a trigger for user deletes (Firebase Auth) that should call my Cloud Run instance, but it seems it doesn't fire even after the ~10 minutes a new trigger can take to become active. This is how I created the trigger:
gcloud eventarc triggers create on-delete-user \
--event-filters="methodName=google.firebase.auth.user.v1.deleted" \
--event-filters="serviceName=identitytoolkit.googleapis.com" \
--event-filters="type=google.cloud.audit.log.v1.written" \
--destination-run-service=grpc-v1-dev \
--destination-run-path=/api.v1.DeleteUserEvent \
--service-account user@projectId.iam.gserviceaccount.com \
--location europe-west4 \
--project projectId \
--format json
What am I missing?
UPDATE:
I also tried to use the method name that I saw in the logs.
gcloud eventarc triggers create on-delete-user-2 \
--event-filters="type=google.cloud.audit.log.v1.written" \
--event-filters="serviceName=identitytoolkit.googleapis.com" \
--event-filters="methodName=google.cloud.identitytoolkit.v1.AccountManagementService.DeleteAccount" \
--destination-run-service=grpc-v1-dev \
--destination-run-path=/api.v1.DeleteUserEvent \
--service-account user@projectId.iam.gserviceaccount.com \
--location europe-west4 \
--project projectId \
--format json
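(A filter like this should surface the exact methodName that audit logging records; the project ID is a placeholder as above:)
gcloud logging read \
'protoPayload.serviceName="identitytoolkit.googleapis.com" AND protoPayload.methodName:"DeleteAccount"' \
--project projectId \
--limit 5 \
--format json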
UPDATE2:
I also tried to create it using the console, but still no events.
UPDATE3:
I do get events from google.cloud.identitytoolkit.v1.AccountManagementService.DeleteAccount in the logs, and my Cloud Run instance receives the call, but the data I receive has nothing to do with the model here:
https://github.com/googleapis/google-cloudevents/blob/master/proto/google/events/firebase/auth/v1/events.proto
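As far as I can tell, that model (AuthEventData) only applies to the direct google.firebase.auth.user.v1.* event types, not to audit-log triggers: a google.cloud.audit.log.v1.written trigger delivers a Cloud Audit Log entry (google.events.cloud.audit.v1.LogEntryData, in that same repository). Roughly, the body arriving at the service should look like this (field values are illustrative):
{
  "protoPayload": {
    "serviceName": "identitytoolkit.googleapis.com",
    "methodName": "google.cloud.identitytoolkit.v1.AccountManagementService.DeleteAccount",
    "request": { "localId": "some-uid" }
  }
}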
Background
I am trying to learn to use Azure Blob Storage by following the Azure docs.
Troubles
I get an error when I run these commands.
blobStorageAccount=probestudent
blobStorageAccountKey=$(az storage account keys list -g ProbeStudent \
-n $blobStorageAccount --query [0].value --output tsv)
az storage container set-permission \ --account-name $blobStorageAccount \ --account-key $blobStorageAccountKey \ --name thumbnails \
--public-access off
These commands are meant to set the container's public access level.
This is my container status. This is my storage account status.
Error
Please remove the character \ if the command is in one line. The symbol \ is just used when the command is too long and need another line.
The below command is working:
az storage container set-permission --account-name $blobStorageAccount --account-key $blobStorageAccountKey --name thumbnails --public-access off
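If you do want the multi-line form, each backslash must be the very last character on its line (no trailing spaces), e.g.:
az storage container set-permission \
    --account-name $blobStorageAccount \
    --account-key $blobStorageAccountKey \
    --name thumbnails \
    --public-access off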
I have a basic Nginx docker image, acting as a reverse-proxy, that currently uses basic authentication sitting in front of my application server. I'm looking for a way to integrate it with our SSO solution in development that uses JWT, but all of the documentation says it requires Nginx+. So, is it possible to do JWT validation inside of open-sourced Nginx, or do I need the paid version?
Sure, there are open-source implementations which you can use and customize for your case (example).
IMHO there are better implementations that you can use as an "auth proxy" in front of your application. My favorite is keycloak-gatekeeper (you can use it with any OpenID IdP, not only Keycloak), which provides authentication, authorization, token encryption, refresh-token handling, a small footprint, ...
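For a flavor of that approach, a minimal keycloak-gatekeeper invocation might look like the following sketch (discovery URL, client credentials, and hostnames are placeholders, not a tested config):
# Gatekeeper authenticates users against the IdP, then proxies valid sessions upstream.
keycloak-gatekeeper \
  --listen=:3000 \
  --discovery-url=https://idp.example.com/auth/realms/myrealm \
  --client-id=my-app \
  --client-secret=REPLACE_ME \
  --redirection-url=https://app.example.com \
  --upstream-url=http://127.0.0.1:8080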
There's also lua-resty-openidc: https://github.com/zmartzone/lua-resty-openidc
lua-resty-openidc is a library for NGINX implementing the OpenID Connect Relying Party (RP) and/or the OAuth 2.0 Resource Server (RS) functionality.
When used as an OpenID Connect Relying Party it authenticates users against an OpenID Connect Provider using OpenID Connect Discovery and the Basic Client Profile (i.e. the Authorization Code flow). When used as an OAuth 2.0 Resource Server it can validate OAuth 2.0 Bearer Access Tokens against an Authorization Server or, in case a JSON Web Token is used for an Access Token, verification can happen against a pre-configured secret/key.
Given that you have a configuration set up without authentication, I found this and got it to work: https://hub.docker.com/r/tomsmithokta/nginx-oss-okta, which is entirely based on lua-resty-openidc, as mentioned above. The fact that it was already built was helpful for me.
First configure your Okta app in the Okta web GUI, then fill in the proper fields that are not commented out in the example NGINX conf. The only caveat is to uncomment redirect_uri and fill it in, and instead comment out or remove redirect_uri_path, which is a deprecated field. All the other things in the config are parameters you can play with, or just accept as is.
By default it passes you to a headers page, but if you adjust the proxy_pass field you should be able to pass requests on to your app.
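In case it helps, running it is just a matter of mounting the filled-in conf into the container; the path below assumes a standard OpenResty layout, so check the image docs:
docker run --rm -p 80:80 \
  -v "$(pwd)/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf:ro" \
  tomsmithokta/nginx-oss-okta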
Based on this gist: https://gist.github.com/abbaspour/af8dff3b297b0fcc6ba7c625c2d7c0a3, here's how I did it in a Dockerfile (based on buster-slim):
FROM python:3.9-slim AS base

FROM base AS builder

ENV LANG=en_GB.UTF-8 \
    LANGUAGE=en_GB.UTF-8 \
    PYTHONUNBUFFERED=True \
    PYTHONIOENCODING=UTF-8

# Fetch the nginx sources and the TeslaGov JWT module, then compile nginx
# with the module statically added in.
RUN apt-get update \
    && apt-get install --no-install-recommends --no-install-suggests -y \
        build-essential \
        patch \
        git \
        wget \
        libssl-dev \
        libjwt-dev \
        libjansson-dev \
        libpcre3-dev \
        zlib1g-dev \
    && wget https://nginx.org/download/nginx-1.18.0.tar.gz \
    && tar -zxvf nginx-1.18.0.tar.gz \
    && git clone https://github.com/TeslaGov/ngx-http-auth-jwt-module \
    && cd nginx-1.18.0 \
    && ./configure --add-module=../ngx-http-auth-jwt-module \
        --with-http_ssl_module \
        --with-http_v2_module \
    && make
FROM base

# Copy only the compiled binary and the default config out of the builder.
COPY --from=builder /nginx-1.18.0/objs/nginx /usr/sbin/nginx
COPY --from=builder /nginx-1.18.0/conf /usr/local/nginx/conf

ENV LANG=en_GB.UTF-8 \
    LANGUAGE=en_GB.UTF-8 \
    PYTHONUNBUFFERED=True \
    PYTHONIOENCODING=UTF-8

# Runtime libraries only; the build toolchain stays behind in the builder stage.
RUN apt-get update \
    && apt-get install --no-install-recommends --no-install-suggests -y \
        libssl-dev \
        libjwt-dev \
        libjansson-dev \
        libpcre3-dev \
        zlib1g-dev \
    && rm -rf /var/lib/apt/lists/*
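To try the result, something like this should work (a sketch: /usr/local/nginx/conf is just the default prefix from the source build above, and since the image keeps the Python base's default command, nginx has to be started explicitly):
docker build -t nginx-jwt .
# Mount your own nginx.conf over the default one and run nginx in the foreground.
docker run --rm -p 8080:80 \
  -v "$(pwd)/nginx.conf:/usr/local/nginx/conf/nginx.conf:ro" \
  nginx-jwt /usr/sbin/nginx -g 'daemon off;'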
The following command does not work, even though there is a proxy named income_prediction:
apigeetool createProduct \
--approvalType "auto" \
--environments ${APIGEE_ENVIRONMENT} \
--proxies income_prediction \
--productName "income_prediction_trial" \
--productDesc "Free trial API for income prediction." \
--quota 10 \
--quotaInterval 1 \
--quotaTimeUnit "day"
I get an error:
Error: Create Product failed with status code 403
You're missing the mandatory --organization parameter (it specifies the organization you're creating the product for). For some reason, apigeetool doesn't prompt for it (it should).
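With the organization supplied (the name is a placeholder; -o is the short form), the command becomes:
apigeetool createProduct \
  --organization ${APIGEE_ORGANIZATION} \
  --approvalType "auto" \
  --environments ${APIGEE_ENVIRONMENT} \
  --proxies income_prediction \
  --productName "income_prediction_trial" \
  --productDesc "Free trial API for income prediction." \
  --quota 10 \
  --quotaInterval 1 \
  --quotaTimeUnit "day"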