What is the correct format for the wasb_default connection config?
I'm trying the following:
Host: https://<blob storage acc>.blob.core.windows.net
Schema: <empty>
login: <empty>
Password: <empty>
Port: <empty>
Extra: {"sas_token": "<blob storage account key1>"}
When I run the DAG I constantly receive:
ValueError: You need to provide an account name and either an account_key or sas_token when creating a storage service.
When using Airflow to connect to Azure Blob Storage, please make sure an Airflow connection of type wasb exists. Authorization can be done by supplying a login (= storage account name) and password (= account key), or a login and a SAS token in the extra field. For more details, please refer to here and here
For example
Conn Type : wasb
login : <account name>
password: <account key>
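If you want to stay with the SAS token route from the question, the same connection can carry it in the Extra field instead of the password; a minimal sketch (placeholder values are yours to fill in, and the token must be an actual SAS token, not the account key):
Conn Type : wasb
login     : <account name>
Extra     : {"sas_token": "<sas token>"}
The error in the question comes from leaving login empty while putting the account key into sas_token: the hook needs the account name in login either way, plus either an account key or a real SAS token.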
I am using the Remote URL option, which reaches out to a remote web server to retrieve the needed data. Simply using this, https://rundeck:test@myserver.com, works. However, I would like to pass the password in a secure way, so...
Option 1 uses 'Secure pass input' and the pass is retrieved from key storage; however, the password is then not added to the remote URL in Option 2, which uses Remote URL: https://rundeck:${option.password.value}@myserver.com. My remote server receives the password as ${option.password.value} and not the actual password value retrieved in Option 1. I understand that Secure Remote Authentication can't be used in Options, however I don't believe I have seen restrictions on what I want to do with Secure Password input in Rundeck's docs.
Lastly, typing the password into the Secure Password input option does add the password to the URL mentioned above. I have tested and verified that the ${option.password.value} value can be passed in a job's step; that part works. However, it does not appear to work in cascading options.
Currently, secure option values are not expanded as part of remote options; you can suggest it here (similar to this). Alternatively, you can create a specific custom plugin for that.
Another approach is to design a workflow that uses the HTTP Workflow Step Plugin (passing your secure password as part of the authentication in the URL) to access the web service, plus a JQ Filter to generate the desired data; then in another step you can get that data using data variables.
Like this:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 7f34f7ff-c4a3-4616-a2aa-0df491450366
  loglevel: INFO
  name: HTTP
  nodeFilterEditable: false
  options:
  - name: mypass
    secure: true
    storagePath: keys/password
    valueExposed: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - configuration:
        authentication: None
        checkResponseCode: 'false'
        method: GET
        printResponse: 'true'
        printResponseToFile: 'false'
        proxySettings: 'false'
        remoteUrl: https://user:${option.mypass}@myserver.com
        sslVerify: 'true'
        timeout: '30000'
      nodeStep: false
      plugins:
        LogFilter:
        - config:
            filter: .
            logData: 'true'
            prefix: result
          type: json-mapper
      type: edu.ohio.ais.rundeck.HttpWorkflowStepPlugin
    - exec: echo "name 2 is ${data.Name2}"
    keepgoing: false
    strategy: node-first
  uuid: 7f34f7ff-c4a3-4616-a2aa-0df491450366
How to create a service account in Object Storage that has permissions only for one bucket?
I've tried to create a service account via the Web Console, but I can't find any roles related to Object Storage.
To restrict access for a service account, you need to use an ACL.
You'll also need the YC CLI and the AWS CLI.
Let me explain everything from the beginning, starting with account creation.
# yc iam service-account create --name <account name>
id: <service-account id>
folder_id: <folder_id>
created_at: "2019-01-23T45:67:89Z"
name: <account name>
# yc iam access-key create --service-account-name <account name>
access_key:
  id: <operation id>
  service_account_id: <service-account_id>
  created_at: "2019-12-34T56:78:90Z"
  key_id: <key id>
secret: <secret key>
Save the key_id and the secret key. Now configure the AWS CLI according to the instructions in the documentation to work from the admin service account.
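For reference, a minimal sketch of that AWS CLI setup using the static key from the previous step (ru-central1 is the region the Yandex documentation uses; adjust if yours differs):
# aws configure
AWS Access Key ID [None]: <key_id>
AWS Secret Access Key [None]: <secret key>
Default region name [None]: ru-central1
Default output format [None]: json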
Create a bucket and set access to it. To grant access, you pass the service_account_id in the id field of the put-bucket-acl command.
# aws --endpoint-url=https://storage.yandexcloud.net s3 mb s3://<bucket_name>
make_bucket: <bucket_name>
# aws --endpoint-url=https://storage.yandexcloud.net \
    s3api put-bucket-acl \
    --bucket <bucket_name> \
    --grant-full-control id=<service_account_id> \
    --grant-read
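To double-check that the grant took effect, you can read the ACL back (a sketch, using the same endpoint and bucket as above):
# aws --endpoint-url=https://storage.yandexcloud.net \
    s3api get-bucket-acl \
    --bucket <bucket_name>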
P.S. The only problem is that Yandex Object Storage doesn't support the "WRITE" permission, so you can only grant full control to a service account. That means the account can also edit the ACL on its bucket.
I am trying to authenticate the Google Datastore C# SDK in a k8s pod running in Google Cloud.
I could not find any way to inject the account .json file into DatastoreDb or DatastoreClient besides using the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Using the GOOGLE_APPLICATION_CREDENTIALS environment variable is problematic since I do not want to leave the account file exposed.
According to the documentation at https://googleapis.github.io/google-cloud-dotnet/docs/Google.Cloud.Datastore.V1/index.html:
When running on Google Cloud Platform, no action needs to be taken to
authenticate.
But that does not seem to work.
A push in the right direction will be appreciated (:
I'd suggest using a K8s secret to store the service account key and then mounting it in the pod at run time. See below:
Create a service account for the desired application.
Generate and encode a service account key: just generate a .json key for the newly created service account from the previous step and then encode it using base64 -w 0 key-file-name. This is important: K8S expects the secret's content to be Base64 encoded.
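A minimal sketch of that step, assuming the service account from step 1 is named your-app-sa in project your-project (both names are placeholders):
# Create and download a .json key for the service account
gcloud iam service-accounts keys create key-file-name \
    --iam-account=your-app-sa@your-project.iam.gserviceaccount.com
# Base64-encode it for the K8s secret manifest
base64 -w 0 key-file-name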
Create the K8s secret manifest file (see content below) and then apply it.
apiVersion: v1
kind: Secret
metadata:
  name: your-service-sa-key-k8s-secret
type: Opaque
data:
  sa_json: previously_generated_base64_encoding
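To apply it, assuming the manifest above was saved as sa-key-secret.yaml (the file name is arbitrary):
kubectl apply -f sa-key-secret.yaml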
Mount the secret.
volumes:
- name: service-account-credentials-volume
  secret:
    secretName: your-service-sa-key-k8s-secret
    items:
    - key: sa_json
      path: secrets/sa_credentials.json
Now all you have to do is set GOOGLE_APPLICATION_CREDENTIALS to secrets/sa_credentials.json, relative to wherever the volume is mounted in the container.
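Putting it together on the container side, a sketch of the pod spec (the container name, image, and the /etc/gcp mount path are assumptions; the env value is the mount path plus the secret item's path from above):
containers:
- name: your-app
  image: your-app-image
  volumeMounts:
  - name: service-account-credentials-volume
    mountPath: /etc/gcp
    readOnly: true
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /etc/gcp/secrets/sa_credentials.json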
Hope this helps. Sorry for the formatting (in a hurry).
This is how it can be done:
using Google.Apis.Auth.OAuth2;
using Google.Cloud.Datastore.V1;
using Grpc.Auth; // for ToChannelCredentials()

var credential = GoogleCredential.FromFile(@"/path/to/google.credentials.json")
    .CreateScoped(DatastoreClient.DefaultScopes);
var channel = new Grpc.Core.Channel(DatastoreClient.DefaultEndpoint.ToString(),
    credential.ToChannelCredentials());
DatastoreClient client = DatastoreClient.Create(channel, settings: DatastoreSettings.GetDefault());
DatastoreDb db = DatastoreDb.Create(YOUR_PROJECT_ID, client: client);
// Do Datastore stuff...
// Shut down the channel when it is no longer required.
await channel.ShutdownAsync();
Taken from: https://github.com/googleapis/google-cloud-dotnet/blob/master/apis/Google.Cloud.Datastore.V1/Google.Cloud.Datastore.V1/DatastoreClient.cs
Is there a way to configure user roles with SaltStack for MongoDB 3? I see that the mongodb module has relevant role management functions, but the mongodb_user state does not refer to roles anywhere.
Yes, there certainly is!
You'll want to use the mongodb execution module and call it from a state using module.run.
So, for example, if you want to manage the roles of a user 'TestUser', you'd create 'manage_mongo_roles.sls', and it would contain states like the following:
manage_mongo_roles:
  module.run:
    - name: mongodb.user_grant_roles
    - m_name: TestUser
    - roles: ["admin"]
    - database: admin
    - user: admin
    - password: ''
    - host: localhost
    - port: 27017
The 'name' parameter for the module MUST be prefixed with m_, so that the state knows to pass it to the module rather than use it as the name of the module to be executed.
Also note that the role MUST be in the format
["role"]
The documentation indicates that, if run from the salt CLI, it should be wrapped in single quotes, like so:
'["role"]'
but doing so in the module.run state WILL cause it to fail and return a less than descriptive error message.
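For comparison, the equivalent one-off call from the salt CLI, a sketch based on the module's documented signature (note the single quotes around the roles list here, unlike in the state; the remaining arguments are passed as keyword arguments):
salt '*' mongodb.user_grant_roles TestUser '["admin"]' database=admin user=admin password='' host=localhost port=27017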
In my Symfony2 web app I'm supposed to send two kinds of emails: instant and bulk. The instant emails should be sent right away, while the bulk emails should be sent using the spool. With the default configuration of Swiftmailer in Symfony2 it is impossible to do that, because there is only one mailer service.
A similar question has been asked here on SO ( How to spool emails (in a task) and send normal emails in the moment in the other controllers? ) with no luck, but according to this GitHub thread ( https://github.com/symfony/SwiftmailerBundle/issues/6 ) it is possible to create a second mailer service that can be configured completely differently from the default one. Someone there (stof) recommended, as a possible solution, following the configuration found in the SwiftmailerBundle ( https://github.com/symfony/SwiftmailerBundle/blob/master/Resources/config/swiftmailer.xml ) to create this new service, but I don't know exactly how to do that.
Does anyone know how to create an additional mailer service that I can configure as a spool while keeping the default mailer service for sending regular (instant) emails?
I found the solution here
This is how I implemented it:
First, I configured the default mailer service to work as a spool to send bulk emails.
(config.yml)
swiftmailer:
    transport: %mailer_transport%
    encryption: %mailer_encryption%
    auth_mode: %mailer_auth_mode%
    host: %mailer_host%
    username: %mailer_user%
    password: %mailer_password%
    spool:
        type: file
        path: "%kernel.root_dir%/spool"
Then, inside one of my bundles (CommonBundle) I registered a new service called "instant_mailer" that maps to the Swiftmailer class.
(service.yml)
instant_mailer:
    class: %swiftmailer.class%
    arguments: ["@?swiftmailer.transport.real"]
Finally, in my controller, whenever I want to send an email using the spool I just do:
$mailer = $this->get('mailer');
And to send an instant email:
$mailer = $this->get('instant_mailer');
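Either way, the mailer you get back is used the same way; a minimal sketch with hypothetical addresses (Swiftmailer 5-era API, matching Symfony2):
$message = \Swift_Message::newInstance()
    ->setSubject('Hello')
    ->setFrom('noreply@example.com')
    ->setTo('user@example.com')
    ->setBody('This goes through whichever mailer you fetched above.');
$mailer->send($message);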