Granular permissions in Yandex Cloud Object Storage

How do I create a service account in Object Storage that has permissions for only one bucket?
I've tried to create a service account via the web console, but I can't find any roles related to Object Storage.

To restrict access for a service account, you need to use an ACL.
You'll also need the YC CLI and the AWS CLI.
Let me explain everything from the beginning, starting with account creation.
# yc iam service-account create --name <account name>
id: <service-account id>
folder_id: <folder id>
created_at: "2019-01-23T45:67:89Z"
name: <account name>
# yc iam access-key create --service-account-name <account name>
access_key:
  id: <access-key id>
  service_account_id: <service-account id>
  created_at: "2019-12-34T56:78:90Z"
  key_id: <key id>
secret: <secret key>
Save the key_id and the secret key. Now configure the AWS CLI according to the instructions in the documentation so that it works as the admin service account.
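For instance, with a named profile (the profile name is just an illustration, not from the original answer; ru-central1 is Yandex Cloud's region):
# aws configure set aws_access_key_id <key id> --profile yc-admin
# aws configure set aws_secret_access_key <secret key> --profile yc-admin
# aws configure set region ru-central1 --profile yc-admin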
Create a bucket and set access to it. To grant access, pass the service_account_id in the id field of the put-bucket-acl command.
# aws --endpoint-url=https://storage.yandexcloud.net s3 mb s3://<bucket_name>
make_bucket: <bucket_name>
# aws --endpoint-url=https://storage.yandexcloud.net s3api put-bucket-acl \
    --bucket <bucket_name> \
    --grant-full-control id=<service_account_id>
P.S. The only problem is that Yandex Object Storage doesn't support the "WRITE" permission, so you can only grant full control to a service account. That means the account can also edit the ACL of its bucket.
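To double-check the resulting ACL, you can read it back (not part of the original answer, but a handy verification step):
# aws --endpoint-url=https://storage.yandexcloud.net s3api get-bucket-acl --bucket <bucket_name>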

Related

AWS Amplify Build Issue - StackUpdateComplete

When running amplify push -y in the CLI, my project errors with this message:
["Index: 0 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]
How do I resolve this error?
The "Resource is not in the state stackUpdateComplete" is the message that comes from the root CloudFormation stack associated with the Amplify App ID. The Amplify CLI is just surfacing the error message that comes from the update stack operation. This indicates that the Amplify's CloudFormation stack may have been still be in progress or stuck.
Solution 1 – “deployment-state.json”:
To fix this issue, go to the S3 bucket containing the project settings and delete the “deployment-state.json” file in the root folder, as this file holds the app's deployment states. The bucket name should end with, or contain, the word “deployment”.
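If you prefer to do this from the CLI, something like the following should work (the bucket name is a placeholder; substitute your actual Amplify deployment bucket):
# aws s3 ls | grep deployment
# aws s3 rm s3://<amplify-deployment-bucket>/deployment-state.json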
Solution 2 – “Requested resource not found”:
Check the status of the CloudFormation stack and see whether it failed with a “Requested resource not found” error. That indicates the DynamoDB table named in the error (“tableID”) is missing; confirm whether you deleted it, possibly accidentally. Manually recreate that DynamoDB table and retry the push.
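A quick way to locate the failing resource is to list the stack's failed events (the stack name is a placeholder for your Amplify app's root stack):
# aws cloudformation describe-stack-events --stack-name <amplify-root-stack> \
    --query "StackEvents[?ResourceStatus=='UPDATE_FAILED']"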
Solution 3A – “@auth directive with 'apiKey'”:
If you receive an error stating that “@auth directive with 'apiKey' provider found, but the project has no API Key authentication provider configured”, it means you defined public authorization in your GraphQL schema without specifying a provider. Public authorization means everyone is allowed to access the API; behind the scenes, the API is protected with an API key, so to use the public API you must have an API key configured.
The @auth directive allows you to override the default provider for a given authorization mode. To fix the issue, specify “iam” as the provider, which uses an "Unauthenticated Role" from Cognito Identity Pools for public access instead of an API key.
Below is sample code for a public authorization rule:
type Todo @model @auth(rules: [{ allow: public, provider: iam, operations: [create, read, update, delete] }]) {
  id: ID!
  name: String!
  description: String
}
After making the above changes, run “amplify update api” and add IAM as an auth provider; the CLI generates scoped-down IAM policies for the "UnAuthenticated" role automatically.
Solution 3B – “Parameters: [AuthCognitoUserPoolId] must have values”:
Another issue can occur here: the default authorization type is API Key when you run “amplify add api” without specifying the API type. To fix this, follow these steps (a sketch of the command sequence follows the list):
Delete the API
Create a new one, specifying “Amazon Cognito User Pool” as the authorization mode
Add IAM as an additional authorization type
Re-enable the @auth directive in the newly created API schema
Run “amplify push”
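A minimal sketch of that sequence (the prompts are interactive; the choices in parentheses are the ones described above):
# amplify remove api
# amplify add api    (choose GraphQL, “Amazon Cognito User Pool” as the default authorization type, and IAM as an additional type)
# amplify push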
Documentation:
Public Authorisation
Troubleshoot CloudFormation stack issues in my AWS Amplify project

How to use AWS SSM parameter for token in provider github?

This is the code snippet in my main.tf file:
provider "github" {
token = var.github_token_ssm
owner = var.owner
}
data "github_repository" "github" {
full_name = var.repository_name
}
The GitHub token is stored in an AWS Secrets Manager secret.
If the token value is a hardcoded GitHub token, it works fine.
If the token value is an AWS Secrets Manager ARN (e.g. arn:aws:secretsmanager:us-east-1:xxxxxxxxxxxx:secret:xxxx-Github-t0UOOD:xxxxxx), it does not work.
I don't want to hardcode the GitHub token in the code. How can I use the Secrets Manager secret for the token above?
As far as I know, Terraform will not resolve an AWS Secrets Manager ARN for you there (but you can use Vault to store secrets).
You can also supply the value through a TF_VAR environment variable:
export TF_VAR_db_username=admin TF_VAR_db_password=adifferentpassword
You can also run a script that pulls the secret from AWS and stores it in an environment variable.
Just remember to secure your state file (the password will exist there in clear text).
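A minimal sketch of that script-based approach for this case (the variable name matches the question; the secret ARN is a placeholder):
variable "github_token_ssm" {
  type      = string
  sensitive = true
}
# In your shell, fetch the token and expose it to Terraform:
export TF_VAR_github_token_ssm=$(aws secretsmanager get-secret-value \
    --secret-id <your-github-secret-arn> --query SecretString --output text)
# terraform plan / terraform apply will now read var.github_token_ssm from the environment.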

Airflow wasb_default config

What is the correct format of the wasb_default config?
I'm trying the following:
Host: https://<blob storage acc>.blob.core.windows.net
Schema: <empty>
login: <empty>
Password: <empty>
Port: <empty>
Extra: {"sas_token": "<blob storage account key1>"}
When I run the DAG, I constantly receive:
ValueError: You need to provide an account name and either an account_key or sas_token when creating a storage service.
When using Airflow to connect to Azure Blob Storage, make sure an Airflow connection of type wasb exists. Authorization can be done by supplying a login (= storage account name) and password (= account key), or a login and SAS token in the Extra field. For more details, please refer to here and here.
For example:
Conn Type: wasb
login: <account name>
password: <account key>
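If you manage connections from the CLI, a minimal sketch (Airflow 2.x syntax; the values are placeholders):
# airflow connections add wasb_default \
    --conn-type wasb \
    --conn-login <account name> \
    --conn-password <account key>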

ARM template download access denied 401

I have contributor access. When I try to download an ARM template from the Azure portal, I get an "access denied: 401" error. What could be the reason, and how can I fix it? Strangely, I can't find anyone else with this issue on Google.
You can also use Azure CLI, Azure PowerShell, or REST API to export ARM templates.
Azure CLI:
echo "Enter the Resource Group name:" &&
read resourceGroupName &&
az group export --name $resourceGroupName
The script displays the template on the console. Copy the JSON and save it as a file.
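Or redirect the output straight to a file (the file name is just an illustration):
az group export --name $resourceGroupName > exported-template.json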
Note: The export template feature doesn't support exporting Azure Data Factory resources.
Azure PowerShell:
To export all resources in a resource group, use the Export-AzResourceGroup cmdlet and provide the resource group name.
$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
Export-AzResourceGroup -ResourceGroupName $resourceGroupName
It saves the template as a local file. More options here.
REST API:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/exportTemplate?api-version=2019-10-01
Captures the specified resource group as a template. API Reference here.
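For completeness, a hedged curl sketch of that call (the request body is an assumption based on the exportTemplate API, which expects a JSON list of the resources to export; "*" exports everything):
curl -X POST \
    -H "Authorization: Bearer $(az account get-access-token --query accessToken -o tsv)" \
    -H "Content-Type: application/json" \
    -d '{"resources": ["*"]}' \
    "https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/exportTemplate?api-version=2019-10-01"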

Authenticating Google Cloud Datastore c# SDK

I am trying to authenticate the Google Datastore C# SDK in a Kubernetes pod running in Google Cloud.
I could not find any way to inject the account.json file into DatastoreDb or DatastoreClient besides using the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Using the GOOGLE_APPLICATION_CREDENTIALS environment variable is problematic, since I do not want to leave the account file exposed.
According to the documentation at https://googleapis.github.io/google-cloud-dotnet/docs/Google.Cloud.Datastore.V1/index.html:
“When running on Google Cloud Platform, no action needs to be taken to authenticate.”
But that does not seem to work.
A push in the right direction will be appreciated (:
I'd suggest using a K8s secret to store the service account key and then mounting it in the pod at run time. See below:
Create a service account for the desired application.
Generate and encode a service account key: generate a .json key for the newly created service account from the previous step, then encode it using base64 -w 0 <key-file-name>. This is important: K8s expects the secret's content to be Base64-encoded.
Create the K8s secret manifest file (see content below) and then apply it.
apiVersion: v1
kind: Secret
metadata:
  name: your-service-sa-key-k8s-secret
type: Opaque
data:
  sa_json: previously_generated_base64_encoding
Mount the secret.
volumes:
- name: service-account-credentials-volume
  secret:
    secretName: your-service-sa-key-k8s-secret
    items:
    - key: sa_json
      path: secrets/sa_credentials.json
Now all you have to do is set GOOGLE_APPLICATION_CREDENTIALS to point at secrets/sa_credentials.json under the volume's mount path.
Hope this helps. Sorry for the formatting (in a hurry).
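Putting it together in the pod spec (a minimal sketch; the container name, image, and mountPath are assumptions):
containers:
- name: your-app            # placeholder container name
  image: your-image         # placeholder image
  volumeMounts:
  - name: service-account-credentials-volume
    mountPath: /etc/gcp
    readOnly: true
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /etc/gcp/secrets/sa_credentials.json   # mountPath + the secret item's path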
This is how it can be done:
var credential = GoogleCredential.FromFile(@"/path/to/google.credentials.json")
    .CreateScoped(DatastoreClient.DefaultScopes);
var channel = new Grpc.Core.Channel(DatastoreClient.DefaultEndpoint.ToString(),
    credential.ToChannelCredentials());
DatastoreClient client = DatastoreClient.Create(channel, settings: DatastoreSettings.GetDefault());
DatastoreDb db = DatastoreDb.Create(YOUR_PROJECT_ID, client: client);
// Do Datastore stuff...
// Shutdown the channel when it is no longer required.
await channel.ShutdownAsync();
Taken from: https://github.com/googleapis/google-cloud-dotnet/blob/master/apis/Google.Cloud.Datastore.V1/Google.Cloud.Datastore.V1/DatastoreClient.cs
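On newer versions of the library, the client builder can take a credentials path directly, which avoids the manual channel plumbing (a sketch assuming a recent Google.Cloud.Datastore.V1; check the builder's properties against your version):
using Google.Cloud.Datastore.V1;

// Build a DatastoreDb straight from a key file on disk.
DatastoreDb db = new DatastoreDbBuilder
{
    ProjectId = "your-project-id",  // placeholder project id
    CredentialsPath = "/path/to/google.credentials.json"
}.Build();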
