I tried to create a trigger for user deletes (Firebase Auth) that should call my Cloud Run service, but it doesn't seem to fire, even after waiting the 10 minutes it can take for a new trigger to start working. This is how I created the trigger:
gcloud eventarc triggers create on-delete-user \
--event-filters="methodName=google.firebase.auth.user.v1.deleted" \
--event-filters="serviceName=identitytoolkit.googleapis.com" \
--event-filters="type=google.cloud.audit.log.v1.written" \
--destination-run-service=grpc-v1-dev \
--destination-run-path=/api.v1.DeleteUserEvent \
--service-account user@projectId.iam.gserviceaccount.com \
--location europe-west4 \
--project projectId \
--format json
What am I missing?
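As a first sanity check, describing the trigger confirms it exists with the expected filters; a minimal sketch using the same names as above:

gcloud eventarc triggers describe on-delete-user \
  --location europe-west4 \
  --project projectId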
UPDATE:
I also tried to use the method name that I saw in the logs.
gcloud eventarc triggers create on-delete-user-2 \
--event-filters="type=google.cloud.audit.log.v1.written" \
--event-filters="serviceName=identitytoolkit.googleapis.com" \
--event-filters="methodName=google.cloud.identitytoolkit.v1.AccountManagementService.DeleteAccount" \
--destination-run-service=grpc-v1-dev \
--destination-run-path=/api.v1.DeleteUserEvent \
--service-account user@projectId.iam.gserviceaccount.com \
--location europe-west4 \
--project projectId \
--format json
UPDATE2:
I also tried to create it using the console, but still no events.
UPDATE3:
I do get events for google.cloud.identitytoolkit.v1.AccountManagementService.DeleteAccount in the log, and my Cloud Run service receives the call, but the data I receive has nothing to do with the model here:
https://github.com/googleapis/google-cloudevents/blob/master/proto/google/events/firebase/auth/v1/events.proto
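Note that an audit-log trigger delivers the audit LogEntry as the event payload (LogEntryData in that same repository), not the Firebase Auth model linked above, which would explain the mismatch. To see the shape that actually arrives, a sketch that reads back a matching raw entry:

gcloud logging read \
  'protoPayload.serviceName="identitytoolkit.googleapis.com" AND protoPayload.methodName="google.cloud.identitytoolkit.v1.AccountManagementService.DeleteAccount"' \
  --limit=1 \
  --format=json \
  --project projectId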
I am new to setting up the gRPC web-based client side. Our backend is already up and running on Go with gRPC. I am testing out what it's like to convert the .proto file into TS. I am able to generate some of the files successfully; however, I am missing the TypeScript "Service" file.
I pretty much followed the instructions from the grpc_tools_node_protoc_ts site.
I set up a script to generate files for 1) the service and 2) the client model:
PROTOC_GEN_TS_PATH="./node_modules/.bin/protoc-gen-ts"
GRPC_TOOLS_NODE_PROTOC_PLUGIN="./node_modules/.bin/grpc_tools_node_protoc_plugin"
GRPC_TOOLS_NODE_PROTOC="./node_modules/.bin/grpc_tools_node_protoc"
OUT_DIR="./_protos_/proto/"
# Generate the JavaScript message classes and gRPC service code
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-grpc="${GRPC_TOOLS_NODE_PROTOC_PLUGIN}" \
--js_out=import_style=commonjs,binary:"${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-ts="${PROTOC_GEN_TS_PATH}" \
--ts_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
What I get as output is missing the *_grpc_pb.d.ts file. I am under the impression I need this?
I have also tried adding the service option to the flag:
--ts_out="service=grpc-web:${OUT_DIR}" \
This now generates a *_pb_service.d.ts output file, still without the *_grpc_pb.d.ts file. Reading the docs further, I am thinking this service=grpc-web is actually the option I need, since we're not running a Node server.
Does this seem right? This is what I have now:
# Note the ts_out flag "service=grpc-node":
# This does generate the *_grpc_pb.d.ts but not the service files
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-node:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
# Note the ts_out flag "service=grpc-web":
# This does generate the service files, but not the *_grpc_pb.d.ts file
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-web:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
Background
I am trying to learn to use Azure Blob Storage by following the Azure docs.
Troubles
I get an error when I run this code:
blobStorageAccount=probestudent
blobStorageAccountKey=$(az storage account keys list -g ProbeStudent \
-n $blobStorageAccount --query [0].value --output tsv)
az storage container set-permission \ --account-name $blobStorageAccount \ --account-key $blobStorageAccountKey \ --name thumbnails \
--public-access off
This code is supposed to set the container's public access.
This is my container status.
This is my storage account status.
Error
Please remove the \ characters if the command is on one line. The \ symbol is only used when the command is too long and needs to continue on another line.
The below command is working:
az storage container set-permission --account-name $blobStorageAccount --account-key $blobStorageAccountKey --name thumbnails --public-access off
The test result:
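To confirm the change from the CLI as well, the permission can be read back; a sketch reusing the same variables:

# Should report publicAccess as off
az storage container show-permission \
  --account-name $blobStorageAccount \
  --account-key $blobStorageAccountKey \
  --name thumbnails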
I created a lot of logging metrics and alert policies in line with the CIS Benchmark requirements, and in testing this, it worked fine, but only for a while. Now, when I edit the logging metric, I can see that the changes I make are reflected in the results for my filter, but when I go to Stackdriver and look at the alert policy, it hasn't picked up anything. I haven't changed anything, to my knowledge.
Here are the commands I've used to create all this:
gcloud beta logging metrics create vpc_firewall_changes \
--description="Actions related to VPC firewall changes (CIS 2.7)" \
--log-filter="resource.type=\"gce_firewall_rule\" AND \
jsonPayload.event_subtype=\"compute.firewalls.patch\" OR \
jsonPayload.event_subtype=\"compute.firewalls.insert\""
gcloud alpha monitoring channels create \
--type=email \
--display-name="VPC Firewall Changes" \
--description="E-mail channel for alerting VPC firewall changes" \
--channel-labels=email_address=[my email]
gcloud alpha monitoring policies create \
--notification-channels=[channel uri] \
--condition-filter="resource.type=\"global\" AND metric.type=\"logging.googleapis.com/user/vpc_firewall_changes\"" \
--aggregation="{
\"alignmentPeriod\": \"60s\", \
\"crossSeriesReducer\": \"REDUCE_COUNT\", \
\"perSeriesAligner\": \"ALIGN_RATE\" \
}" \
--condition-display-name="logging/user/vpc_firewall_changes [COUNT]" \
--duration=60s \
--if="> 0.001" \
--trigger-count=1 \
--combiner=OR \
--display-name="VPC Firewall Changes Alerts" \
--documentation="You are receiving this alert because changes have been made relating to the VPC firewall rules."
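When the alerts go quiet like this, it can help to check, read-only, that the metric definition is unchanged, that fresh log entries still match its filter, and that the policy is still enabled. A sketch using the same names as above (I've added parentheses to the filter for explicitness):

# Confirm the metric still exists with the expected filter
gcloud beta logging metrics describe vpc_firewall_changes

# Confirm recent entries still match the metric's filter
gcloud logging read \
  'resource.type="gce_firewall_rule" AND (jsonPayload.event_subtype="compute.firewalls.patch" OR jsonPayload.event_subtype="compute.firewalls.insert")' \
  --limit=5

# Confirm the policy is still enabled and attached to the metric
gcloud alpha monitoring policies list \
  --filter='displayName="VPC Firewall Changes Alerts"' \
  --format=json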
Has anyone experienced this?
I am trying to create Airflow connections on Cloud Composer using the gcloud CLI.
I followed this document and ran the following command:
https://cloud.google.com/composer/docs/how-to/managing/connections#creating_new_airflow_connections
gcloud composer environments run my-env \
--project my-project \
--location asia-northeast1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra '{"extra\__google\_cloud\_platform\__project": "my-project", \
"extra\__google\_cloud\_platform\__key_path": "/home/airflow/gcs/data/keys/my-key.json", \
"extra\__google\_cloud\_platform\__scope": "https://www.googleapis.com/auth/cloud-platform"}'
kubeconfig entry generated for asia-northeast1-my-env-44718514-gke.
Executing within the following kubectl namespace: default
W0525 22:51:11.244104 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244246 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244256 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
[2019-05-25 14:51:13,215] {settings.py:176} INFO - setting.configure_orm():
Using pool settings. pool_size=5, pool_recycle=1800
[2019-05-25 14:51:13,598] {default_celery.py:80} WARNING - You have configured a
result_backend of redis://airflow-redis-service:6379/0, it is highly recommended to use an
alternative result_backend (i.e. a database).
[2019-05-25 14:51:13,600] {__init__.py:51} INFO - Using executor CeleryExecutor
[2019-05-25 14:51:13,680] {app.py:51} WARNING - Using default Composer Environment Variables.
Overrides have not been applied.
[2019-05-25 14:51:13,688] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
[2019-05-25 14:51:13,698] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
The connection is successfully created, but the project ID, key file path and scope are empty, so the connection is invalid.
When I create the connection manually, those properties are not empty. Am I missing something?
Composer image: composer-1.5.0-airflow-1.10.1
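For what it's worth, you can inspect what actually got stored by listing the connections through the same CLI; a sketch with the same environment values as above:

gcloud composer environments run my-env \
  --project my-project \
  --location asia-northeast1 \
  connections -- --list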
I am not able to replicate it. When I run the following command, the connection does get added with the extras field:
gcloud composer environments run my-env \
--project my-project \
--location europe-west1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra='{"extra__google_cloud_platform__project": "my-project", "extra__google_cloud_platform__key_path":"/tesf"}'
I found a syntax error in my quote escaping. This works fine:
$ CONNECTION_CREATE_COMMAND="gcloud composer environments run $COMPOSER_ENVIRONMENT \
--project $COMPOSER_PROJECT \
--location ${COMPOSER_LOCATION} \
connections -- --add \
--conn_id=${CONN_ID_BASE}_${app}_${c} \
--conn_type=google_cloud_platform \
--conn_extra '{\"extra__google_cloud_platform__project\": \"${BQ_PROJECT}\", \
\"extra__google_cloud_platform__key_path\": \"${KEY_JSON_FILE_PATH}\", \
\"extra__google_cloud_platform__scope\": \"https://www.googleapis.com/auth/cloud-platform\"}'"
$ eval $CONNECTION_CREATE_COMMAND
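If the eval-plus-escaping approach gets brittle, an alternative sketch is to build the extras JSON in a plain variable and call gcloud directly, with no eval involved (same placeholder variables as above):

# Single quotes around the JSON template, double quotes around each
# expanded variable; no eval needed.
CONN_EXTRA='{"extra__google_cloud_platform__project": "'"${BQ_PROJECT}"'", "extra__google_cloud_platform__key_path": "'"${KEY_JSON_FILE_PATH}"'", "extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/cloud-platform"}'

gcloud composer environments run "$COMPOSER_ENVIRONMENT" \
  --project "$COMPOSER_PROJECT" \
  --location "$COMPOSER_LOCATION" \
  connections -- --add \
  --conn_id="${CONN_ID_BASE}_${app}_${c}" \
  --conn_type=google_cloud_platform \
  --conn_extra "$CONN_EXTRA"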
I want to place a file on the salt-master via salt-api. I have configured salt-api using rest_cherrypy and configured a custom hook for it. I wanted to explore the use case where we transfer the file to the salt-master first and then distribute it to the minions. I'm able to achieve the second part, but I haven't been able to POST the file data to the API.
Here is one way to do it using the file.write execution module.
First, log in and save the token to a cookie file (I had to change eauth to ldap; auto didn't work for some reason):
curl -sSk http://localhost:8000/login \
-c ~/cookies.txt \
-H 'Accept: application/x-yaml' \
-d username=USERNAME \
-d password=PASSWORD \
-d eauth=auto
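As an alternative to the cookie jar, the login response also contains the token itself, which can be sent in the X-Auth-Token header on later requests; a sketch that assumes jq is available for parsing the JSON response:

# Grab the token from the login response (requires jq),
# then pass it as a header instead of a cookie.
TOKEN=$(curl -sSk http://localhost:8000/login \
  -H 'Accept: application/json' \
  -d username=USERNAME \
  -d password=PASSWORD \
  -d eauth=auto | jq -r '.return[0].token')

curl -sSk http://localhost:8000 \
  -H "X-Auth-Token: $TOKEN" \
  -H 'Accept: application/x-yaml' \
  -d client=local -d tgt='saltmaster' -d fun=test.ping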
Now run a job to create a file on the salt-master (assuming your salt-master is also running a salt-minion):
curl -sSk http://localhost:8000 \
-b ~/cookies.txt \
-H 'Accept: application/x-yaml' \
-d client=local \
-d tgt='saltmaster' \
-d fun=file.write \
-d arg='/tmp/somefile.txt' \
-d arg='This is some example text
with newlines
A
B
C'
Note that the spacing used in your command will affect how the lines show up in the file; the example above gives the most aesthetically pleasing result.
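To double-check the result, the same API can read the file back via the file.read execution module; a sketch reusing the cookie file from the login step:

curl -sSk http://localhost:8000 \
  -b ~/cookies.txt \
  -H 'Accept: application/x-yaml' \
  -d client=local \
  -d tgt='saltmaster' \
  -d fun=file.read \
  -d arg='/tmp/somefile.txt'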