Azure storage container permission cannot be set - asp.net

Background
I am trying to learn how to use Azure Blob storage by following the Azure documentation.
Troubles
I get an error when I run the following commands.
blobStorageAccount=probestudent
blobStorageAccountKey=$(az storage account keys list -g ProbeStudent \
-n $blobStorageAccount --query [0].value --output tsv)
az storage container set-permission \ --account-name $blobStorageAccount \ --account-key $blobStorageAccountKey \ --name thumbnails \
--public-access off
These commands are supposed to set the container's public access level.
This is my container status.
This is my storage account status.
Error

Please remove the \ characters if the command is on one line. The backslash is only used to continue a command that is too long onto another line.
The below command is working:
az storage container set-permission --account-name $blobStorageAccount --account-key $blobStorageAccountKey --name thumbnails --public-access off
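For readability, the same command can also be split across several lines, which is the multi-line form of the original command from the question; the backslash then goes only at the very end of each continued line:
az storage container set-permission \
  --account-name $blobStorageAccount \
  --account-key $blobStorageAccountKey \
  --name thumbnails \
  --public-access off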
The test result:

Related

Firestore authorization for Google Compute engine for app on a docker container

I have deployed a Node.js app on a Google compute instance via a Docker container. Is there a recommended way to pass the GOOGLE_APPLICATION_CREDENTIALS to the docker container?
I see the documentation states that GCE has Application Default Credentials (ADC), but these are not available in the docker container. (https://cloud.google.com/docs/authentication/production)
I am a bit new to docker & GCP, so any help would be appreciated.
Thank you!
I found this documentation describing how to inject your GOOGLE_APPLICATION_CREDENTIALS into a Docker container in order to test Cloud Run locally. I know this is not Cloud Run, but I believe the same command can be used to inject your credentials into the container.
Since links and their content can change over time, I will copy the steps needed to inject the credentials here.
Refer to Getting Started with Authentication for instructions
on generating, retrieving, and configuring your Service Account
credentials.
The following Docker run flags inject the credentials and
configuration from your local system into the local container:
Use the --volume (-v) flag to inject the credential file into the
container (assumes you have already set your
GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
Use the --environment (-e) flag to set the
GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
Optionally, use this fully configured Docker run command:
PORT=8080 && docker run \
-p 9090:${PORT} \
-e PORT=${PORT} \
-e K_SERVICE=dev \
-e K_CONFIGURATION=dev \
-e K_REVISION=dev-00001 \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
gcr.io/PROJECT_ID/IMAGE
Note that the path
/tmp/keys/FILE_NAME.json
shown in the example above is a reasonable location to place your
credentials inside the container. However, other directory locations
will also work. The crucial requirement is that the
GOOGLE_APPLICATION_CREDENTIALS environment variable must match the
bind mount location inside the container.
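Since your app is a plain Node.js container on Compute Engine rather than a local Cloud Run emulation, a trimmed-down version of the same command (without the K_* variables) should be enough. This is only a minimal sketch; the key path, port, and image name are placeholders you would replace with your own values:
# On the GCE host: point to the downloaded service account key (placeholder path)
export GOOGLE_APPLICATION_CREDENTIALS=/home/user/keys/my-sa-key.json

# Run the container with the key mounted read-only and the variable set inside it
docker run \
  -p 8080:8080 \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/my-sa-key.json \
  -v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/my-sa-key.json:ro \
  gcr.io/PROJECT_ID/IMAGE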
Hope this works for you.

How to delete all contents of local Artifactory repository via REST API?

I'm using a local repository as a staging repo and would like to be able to clear the whole staging repo via REST. How can I delete the contents of the repo without deleting the repo itself?
Since I have a similar requirement in one of my environments, I'd like to offer a possible solution approach.
It is assumed the JFrog Artifactory instance has a local repository called JFROG-ARTIFACTORY which holds the latest JFrog Artifactory Pro installation RPM(s). For listing and deleting the contents I've created the following script:
#!/bin/bash
# The logged-in user will also be the admin account for the Artifactory REST API
A_ACCOUNT=$(who am i | cut -d " " -f 1)
LOCAL_REPO=$1
PASSWORD=$2
STAGE=$3
URL="example.com"
# Check if a stage was provided; if not, default to the PROD instance
if [ -z "$STAGE" ]; then
STAGE="repository-prod"
fi
# Going to list all files within the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-FileList
curl --silent \
-u"${A_ACCOUNT}:${PASSWORD}" \
-i \
-X GET "https://${STAGE}.${URL}/artifactory/api/storage/${LOCAL_REPO}/?list&deep=1" \
-w "\n\n%{http_code}\n"
echo
# Going to delete all files in the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-DeleteItem
curl --silent \
-u"${A_ACCOUNT}:${PASSWORD}" \
-i \
-X DELETE "https://${STAGE}.${URL}/artifactory/${LOCAL_REPO}/" \
-w "\n\n%{http_code}\n"
echo
So after calling
./Scripts/deleteRepository.sh JFROG-ARTIFACTORY Pa\$\$w0rd! repository-dev
for the development instance, it listed all files in the local repository called JFROG-ARTIFACTORY (the JFrog Artifactory Pro installation RPM(s)), deleted them, and left the local repository itself in place.
You may adapt and enhance the script for your needs; also have a look at How can I completely remove artifacts from Artifactory?
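If you only want to clear a specific folder rather than the whole repository, the same Delete Item endpoint also accepts a path. A minimal sketch, reusing the credentials variables and host naming from the script above; the folder path is a placeholder:
curl --silent \
  -u"${A_ACCOUNT}:${PASSWORD}" \
  -X DELETE "https://repository-dev.example.com/artifactory/JFROG-ARTIFACTORY/some/folder/" \
  -w "\n\n%{http_code}\n"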

How to populate a CosmosDB collection by command line?

I am developing a set of Scripts for Azure and I would like to know how to populate a CosmosDB collection with az.
Currently, I know how to create a Database and Collection but how to initialize the Database?
az cosmosdb create \
--resource-group $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT
az cosmosdb database create \
--resource-group-name $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT \
--db-name $COSMOS_DB_NAME
az cosmosdb collection create \
--resource-group-name $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT \
--collection-name $COSMOS_DB_COLLECTION_NAME \
--db-name $COSMOS_DB_NAME \
--partition-key-path $COSMOS_DB_COLLECTION_PARTITION_KEY
Reading the documentation, I didn't see a solution.
az doesn't provide any data-movement options for Cosmos DB.
For the SQL API, you'll either need to create your own command-line tool, or use the Cosmos DB-supplied Data Migration Tool (Windows-only, unlike az), which provides a command-line interface. For example:
dt /s:JsonFile /s.Files:.\inputdata.json /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<name>;AccountKey=<key>;Database=<db>;" /t.Collection:<collname> /t.CollectionThroughput:<throughput>
This has support for the MongoDB API as well, but you can also use native command-line tools such as mongoimport.
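For the MongoDB API, a mongoimport call against the Cosmos DB Mongo endpoint would look roughly like the sketch below. The account name, key, database, and collection are placeholders, and the exact host, port, and TLS flags should be taken from your account's connection string (newer mongoimport versions use --tls instead of --ssl):
mongoimport \
  --host <account-name>.documents.azure.com:10255 \
  --ssl \
  --username <account-name> \
  --password <account-key> \
  --db <db-name> \
  --collection <collection-name> \
  --file ./inputdata.json \
  --jsonArray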

Stackdriver Alerts Policy has stopped working

I created a lot of logging metrics and alert policies in line with the CIS Benchmark requirements, and in testing this it worked fine, but only for a while. Now, when I go to edit a logging metric, I can see that the changes I make show up in the results using my filter, but when I go to Stackdriver and look at the alert policy, it hasn't picked up anything. I haven't changed anything, to my knowledge.
Here are the commands I've used to create all this:
gcloud beta logging metrics create vpc_firewall_changes \
--description="Actions related to VPC firewall changes (CIS 2.7)" \
--log-filter="resource.type=\"gce_firewall_rule\" AND \
jsonPayload.event_subtype=\"compute.firewalls.patch\" OR \
jsonPayload.event_subtype=\"compute.firewalls.insert\""
gcloud alpha monitoring channels create \
--type=email \
--display-name="VPC Firewall Changes" \
--description="E-mail channel for alerting VPC firewall changes" \
--channel-labels=email_address=[my email]
gcloud alpha monitoring policies create \
--notification-channels=[channel uri] \
--condition-filter="resource.type=\"global\" AND metric.type=\"logging.googleapis.com/user/vpc_firewall_changes\"" \
--aggregation="{
\"alignmentPeriod\": \"60s\", \
\"crossSeriesReducer\": \"REDUCE_COUNT\", \
\"perSeriesAligner\": \"ALIGN_RATE\" \
}" \
--condition-display-name="logging/user/vpc_firewall_changes [COUNT]" \
--duration=60s \
--if="> 0.001" \
--trigger-count=1 \
--combiner=OR \
--display-name="VPC Firewall Changes Alerts" \
--documentation="You are receiving this alert because changes have been made relating to the VPC firewall rules."
Has anyone experienced this?

MUP (meteor) deploy deletes files

Although mup deploy works perfectly, I sadly have a folder called ".uploads" that users can upload files into.
Each deploy deletes the files in that directory. I would like to exclude or protect the folder so the deploy doesn't delete its contents. Any ideas?
I have filed an issue: https://github.com/arunoda/meteor-up/issues/1022
I'm not sure if it's an issue with mup or with my system setup. I use tomitrescak:meteor-uploads and also cfs:file-collection; they both have the same issue.
But from what I can see it should be easy to do: you need to modify the script at https://github.com/arunoda/meteor-up/blob/mupx/templates/linux/start.sh#L26
and add a new volume mapping. You can map multiple volumes, as described in: Mounting multiple volumes on a docker container?
So your script would look like the following (mounting the folder /opt/uploads/myapp on the host machine to the folder /opt/uploads in the container; the second --volume line is the new one):
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--volume=/opt/uploads/myapp:/opt/uploads/ \
--env-file=$ENV_FILE \
--link=mongodb:mongodb \
--hostname="$HOSTNAME-$APPNAME" \
--env=MONGO_URL=mongodb://mongodb:27017/$APPNAME \
--name=$APPNAME \
meteorhacks/meteord:base
Until this issue is resolved and volumes can be mounted via the MUP options, this change has to be made directly in "start.sh".
Also see discussion here: https://github.com/tomitrescak/meteor-uploads/issues/235#issuecomment-228618130
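To confirm that the host directory really is mounted after a redeploy, you can inspect the running container (assuming the container name is the value of $APPNAME, as in the script above):
docker inspect --format '{{ json .Mounts }}' $APPNAME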
