Apigee command line tool unable to create product for proxy

The following command does not work, even though there is a proxy named income_prediction:
apigeetool createProduct \
--approvalType "auto" \
--environments ${APIGEE_ENVIRONMENT} \
--proxies income_prediction \
--productName "income_prediction_trial" \
--productDesc "Free trial API for income prediction." \
--quota 10 \
--quotaInterval 1 \
--quotaTimeUnit "day"
I get the following error:
Error: Create Product failed with status code 403

You're missing the mandatory --organization parameter, which specifies the organization you're creating the product for. For some reason, apigeetool does not ask for it (it should).
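For reference, here is a sketch of the corrected call; the APIGEE_ORGANIZATION variable is an assumption (use whatever holds your Apigee organization name):
# Same command as above, with the missing --organization flag added.
apigeetool createProduct \
--organization ${APIGEE_ORGANIZATION} \
--approvalType "auto" \
--environments ${APIGEE_ENVIRONMENT} \
--proxies income_prediction \
--productName "income_prediction_trial" \
--productDesc "Free trial API for income prediction." \
--quota 10 \
--quotaInterval 1 \
--quotaTimeUnit "day"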

Related

Protoc does not export the TS file version of *_grpc_pb.js?

I am new to setting up a gRPC web-based client. Our backend is already up and running on Go with gRPC, and I am trying out converting the .proto file into TS. I can successfully generate some of the files; however, I am missing the TypeScript "Service" file.
I pretty much followed the instructions from the grpc_tools_node_protoc_ts site.
I set up a script to generate files for 1) the service and 2) the client model:
PROTOC_GEN_TS_PATH="./node_modules/.bin/protoc-gen-ts"
GRPC_TOOLS_NODE_PROTOC_PLUGIN="./node_modules/.bin/grpc_tools_node_protoc_plugin"
GRPC_TOOLS_NODE_PROTOC="./node_modules/.bin/grpc_tools_node_protoc"
OUT_DIR="./_protos_/proto/"
# Generate the JavaScript code
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-grpc="${GRPC_TOOLS_NODE_PROTOC_PLUGIN}" \
--js_out=import_style=commonjs,binary:"${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-ts="${PROTOC_GEN_TS_PATH}" \
--ts_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
The output is missing the *_grpc_pb.d.ts file. I am under the impression I need this? 🤷🏻‍♂️
I have also tried adding the service option to the flag:
--ts_out="service=grpc-web:${OUT_DIR}" \
This now generates a *_pb_service.d.ts output file, but still no *_grpc_pb.d.ts file. Reading the docs further, I am thinking this service=grpc-web is actually the option I need, since we're not running a Node server.
Does this seem right? This is what I have now:
# Note the ts_out flag "service=grpc-node":
# This does generate the *_grpc_pb.d.ts but not the service files
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-node:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
# Note the ts_out flag "service=grpc-web":
# This does generate the service files, but not the *_grpc_pb.d.ts file
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-web:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto

Azure storage container permission cannot be set

Background
I am trying to learn how to use Azure Blob storage by following the Azure documentation.
Troubles
I got an error when I ran the following commands.
blobStorageAccount=probestudent
blobStorageAccountKey=$(az storage account keys list -g ProbeStudent \
-n $blobStorageAccount --query [0].value --output tsv)
az storage container set-permission \ --account-name $blobStorageAccount \ --account-key $blobStorageAccountKey \ --name thumbnails \
--public-access off
These commands are supposed to set the container's public access level.
This is my container status.
This is my storage account status.
Error
Remove the \ characters if the command is on a single line. The backslash is only used as a line continuation when a command is too long and needs to wrap onto another line.
The command below works:
az storage container set-permission --account-name $blobStorageAccount --account-key $blobStorageAccountKey --name thumbnails --public-access off
The test result:
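If you prefer the multi-line form, the same command works when each backslash is the last character on its line, for example:
# Multi-line form: every backslash must end its line, with nothing after it.
az storage container set-permission \
--account-name $blobStorageAccount \
--account-key $blobStorageAccountKey \
--name thumbnails \
--public-access off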

Stackdriver Alerts Policy has stopped working

I created a lot of logging metrics and alert policies in line with the CIS Benchmark requirements, and in testing this it worked fine, but only for a while. Now, when I edit the logging metric, I can see that the changes I make are reflected in the results for the filter I'm using, but when I go to Stackdriver and look at the alert policy, it hasn't picked up anything. I haven't changed anything, to my knowledge.
Here are the commands I've used to create all this:
gcloud beta logging metrics create vpc_firewall_changes \
--description="Actions related to VPC firewall changes (CIS 2.7)" \
--log-filter="resource.type=\"gce_firewall_rule\" AND \
jsonPayload.event_subtype=\"compute.firewalls.patch\" OR \
jsonPayload.event_subtype=\"compute.firewalls.insert\""
gcloud alpha monitoring channels create \
--type=email \
--display-name="VPC Firewall Changes" \
--description="E-mail channel for alerting VPC firewall changes" \
--channel-labels=email_address=[my email]
gcloud alpha monitoring policies create \
--notification-channels=[channel uri] \
--condition-filter="resource.type=\"global\" AND metric.type=\"logging.googleapis.com/user/vpc_firewall_changes\"" \
--aggregation="{
\"alignmentPeriod\": \"60s\", \
\"crossSeriesReducer\": \"REDUCE_COUNT\", \
\"perSeriesAligner\": \"ALIGN_RATE\" \
}" \
--condition-display-name="logging/user/vpc_firewall_changes [COUNT]" \
--duration=60s \
--if="> 0.001" \
--trigger-count=1 \
--combiner=OR \
--display-name="VPC Firewall Changes Alerts" \
--documentation="You are receiving this alert because changes have been made relating to the VPC firewall rules."
Has anyone experienced this?
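For anyone debugging something similar, a possible first step is to confirm each piece still exists and is wired up. A hedged sketch, reusing the names from the commands above (the --filter expressions are assumptions about how the resources are named):
# Check the metric, recent matching log entries, the channel, and the policy.
gcloud logging metrics describe vpc_firewall_changes
gcloud logging read 'resource.type="gce_firewall_rule"' --limit=5 --freshness=1d
gcloud alpha monitoring channels list --filter='displayName="VPC Firewall Changes"'
gcloud alpha monitoring policies list --filter='displayName="VPC Firewall Changes Alerts"'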

Can I have a conflict in Riak when using a vclock obtained from a GET?

I want to know if I can have a conflict in this scenario:
#!/usr/bin/env bash
curl -XPUT -d '{"bar":"baz"}' \
-H "Content-Type: application/json" \
http://127.0.0.1:8098/riak/obj/1
response=$(curl -I http://127.0.0.1:8098/riak/obj/1 | grep 'X-Riak-Vclock:' | egrep -o ' .*$')
curl -v -XPUT -d '{"bar":"foo"}' \
-H "Content-Type: application/json" \
-H "X-Riak-Vclock: $response" \
http://127.0.0.1:8098/riak/obj/1
In other words:
First, there is no object for key 1, and I put the value {"bar":"baz"} with a PUT through the HTTP API.
Then I read the value with a GET and store the vclock in a variable.
Finally, I put a new value, {"bar":"foo"}, for key 1.
Is there a case where I can still end up with {"bar":"baz"} for key 1? If Riak has a conflict, will it be resolved with the vclock?
Thanks!
It depends how your Riak database is configured, either globally or, if you changed the default configuration, for the bucket you're using. If you keep the default config, your second PUT (with the vclock) might:
- fail, if someone updated the key behind your back (rare) and the vclock data you have is already obsolete. You need to re-read the value and update it; best is to have a retry mechanism.
- fail, if the write consistency constraints you have are too strict and too many nodes are down (rare). Usually the default read and write settings are sane.
- succeed, if the vclock data is still valid for this key (most of the time).
If it succeeds, it might still be that the network topology was in a split-brain situation. In that case, Riak will resolve the issue itself using vclock data.
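Whether concurrent writes are kept as siblings or silently overwritten depends on the bucket's allow_mult property. A quick way to check it, as a sketch against the same node and bucket used above:
# Inspect the bucket properties and look at "allow_mult" in the JSON response.
curl http://127.0.0.1:8098/buckets/obj/props
# allow_mult=true keeps conflicting writes as siblings for the client to resolve;
# allow_mult=false keeps a single value (effectively last write wins).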

How to solve the CSScomb error in PhpStorm?

I wanted to install CSScomb.js in PhpStorm 8.0.1.
I did everything as written on the GitHub page: installed CSScomb both globally and locally (just to be sure) and configured the paths, and...
...on running it, I get this error:
Error running CSScomb: Cannot run program "C:\Users\Kanat\AppData\Roaming\npm\node_modules\csscomb\bin\csscomb" (in directory "D:\OpenServer\domains\LPDevplate\src\scss\modules"): CreateProcess error=193, %1 is not a valid Win32 application
Has anyone faced this error and can help solve it?
I get the same error: "Error running 'CSScomb': Cannot run program"
