When using Puppet, I used to do something like the following:
class common_vars {
  $some_var = calculate_some_var()
}

class A {
  Class[common_vars] -> Class[A]
  do_something_with($common_vars::some_var)
}

class B {
  Class[common_vars] -> Class[B]
  do_something_else_with($common_vars::some_var)
}
I'm now looking for something similar in SaltStack.
I used this, for example, for setting up network services which are bound to a specific network address: I first set up networking, then calculate some common addresses, then set up the network services.
Instead of calculating these addresses (where different services may share the same address) over and over again in each state file, I'd like to calculate them once and reuse them later on.
I would do this in Salt's Pillar.
Here's a walkthrough: http://docs.saltstack.com/topics/tutorials/pillar.html
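For illustration, here is a minimal sketch of that approach; the pillar file names, keys, and the bind-address computation are assumptions of mine, not part of the walkthrough:

# pillar/common_vars.sls -- computed once, shared by all states
{% set bind_ip = grains['fqdn_ip4'][0] %}
common_vars:
  bind_address: {{ bind_ip }}

# pillar/top.sls
base:
  '*':
    - common_vars

# states/service_a.sls -- reuses the shared value instead of recomputing it
service_a_config:
  file.managed:
    - name: /etc/service_a.conf
    - contents: "listen {{ pillar['common_vars']['bind_address'] }}"

Every state file can then read pillar['common_vars']['bind_address'] (or salt['pillar.get']('common_vars:bind_address')), so the address is calculated in exactly one place.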
This probably goes beyond Salt's YAML renderer. Instead, you could write your SLS in pure Python, giving you the ability to write functions, classes, etc. and return the state data. For example:
#!py
# the #!py shebang selects Salt's pure Python renderer

# computed once at render time, reused by both functions below
some_var = lambda: 1 + 2

def do_something():
    return some_var() + 5   # 8

def do_something_else():
    return some_var() + 10  # 13

def run():
    # the py renderer calls run() and expects the state data as a dict
    return {
        '/home/user/somefile.txt': {
            'file.append': [
                {'text': do_something()},
            ]
        },
        '/home/user/some_other.txt': {
            'file.append': [
                {'text': do_something_else()},
            ]
        },
    }
I want to know if Karate supports the Neo4j database. If yes, I would like an example feature file, which would be helpful.
Karate can call any Java code, so indirectly you should be able to do anything you want.
Please look at this JDBC example, which will get you started: dogs.feature. You will need to write a little bit of Java code (one time only), so if you don't have that skill, please ask someone to help.
# use jdbc to validate
* def config = { username: 'sa', password: '', url: 'jdbc:h2:mem:testdb', driverClassName: 'org.h2.Driver' }
* def DbUtils = Java.type('com.intuit.karate.demo.util.DbUtils')
* def db = new DbUtils(config)
# since the DbUtils returns a Java Map, it becomes normal JSON here !
# which means that you can use the full power of Karate's 'match' syntax
* def dogs = db.readRows('SELECT * FROM DOGS')
* match dogs contains { ID: '#(id)', NAME: 'Scooby' }
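Since the question is specifically about Neo4j: Neo4j ships a JDBC driver, so the same pattern should apply. This is a hypothetical sketch only; the driver class, URL format, and Cypher pass-through are assumptions that depend on your neo4j-jdbc version, and are not covered by the Karate demo:

# assumes the Neo4j JDBC driver is on the classpath; the driver class and
# URL format below are assumptions -- check your neo4j-jdbc version
* def config = { username: 'neo4j', password: 'secret', url: 'jdbc:neo4j:bolt://localhost:7687', driverClassName: 'org.neo4j.jdbc.Driver' }
* def DbUtils = Java.type('com.intuit.karate.demo.util.DbUtils')
* def db = new DbUtils(config)
# queries stay Cypher rather than SQL
* def dogs = db.readRows('MATCH (d:Dog) RETURN d.name AS NAME')
* match dogs contains { NAME: 'Scooby' }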
I need to switch the Symfony cache adapter depending on ENV conditions. Like if some variable is set, use "cache.adapter.apcu" or use "cache.adapter.filesystem" otherwise.
Is it possible somehow? The documentation is not really helpful with it.
P.S.: It is not possible for us to do this via the creation of a whole new environment.
Here is a basic example of a cache adapter which has other adapters fed into it and picks one based on a parameter (or, alternatively, an env var):
<?php

namespace App\Cache;

use Psr\Cache\CacheItemInterface;
use Psr\Cache\InvalidArgumentException;
use Psr\Container\ContainerInterface;
use Symfony\Component\Cache\Adapter\AdapterInterface;
use Symfony\Component\Cache\CacheItem;
use Symfony\Contracts\Service\ServiceSubscriberInterface;
use Symfony\Contracts\Service\ServiceSubscriberTrait;

class EnvironmentAwareCacheAdapter implements AdapterInterface, ServiceSubscriberInterface
{
    use ServiceSubscriberTrait;

    private string $environment;

    public function __construct(string $environment)
    {
        $this->environment = $environment;
    }

    public function getItem($key)
    {
        return $this->container->get($this->environment)->getItem($key);
    }

    public function getItems(array $keys = [])
    {
        return $this->container->get($this->environment)->getItems($keys);
    }

    // ...
}
This is how you would configure it:
services:
    App\Cache\EnvironmentAwareCacheAdapter:
        arguments:
            $environment: '%kernel.environment%'
        tags:
            - { name: 'container.service_subscriber', key: 'dev', id: 'cache.app' }
            - { name: 'container.service_subscriber', key: 'prod', id: 'cache.system' }
It's not the most elegant solution and is missing error handling and possibly a fallback. Basically, by adding tags with an appropriately named key and the alias to an existing cache as id, you can then refer to that cache with the key in your own adapter. So, depending on your environment you will pick either one. You can replace the key and the constructor argument with anything else you like. I hope that helps.
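To actually use the adapter, you would then point the framework's app cache at the new service; a sketch, assuming the service definition above:

framework:
    cache:
        app: App\Cache\EnvironmentAwareCacheAdapter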
It seems like you cannot set up your cache configuration to use an environment variable like so:
framework:
    cache:
        app: '%env(resolve:CACHE_ADAPTER)%'
This is a constraint of the FrameworkBundle, which provides the cache service, and it will not be "fixed" (see "Using environment variables at compile time", issue #25173).
To make it possible, you need to write your own cache provider that simply passes all calls through to the needed cache provider. You have access to environment variables at runtime, so you can use your provider as a proxy that knows which underlying provider to use.
I'm learning Airflow and am planning to set some variables to use across different tasks. These are in my dags folder, saved as configs.json, like so:
{
    "vars": {
        "task1_args": {
            "something": "This is task 1"
        },
        "task2_args": {
            "something": "this is task 2"
        }
    }
}
I get that we can go to Admin --> Variables and upload the file. But I have two questions:
What if I want to adjust some of the variables while Airflow is running? I can adjust my code easily and it updates in real time, but this doesn't seem to work for variables.
Is there a way to just auto-import this specific file on startup? I don't want to have to add it every time I'm testing my project.
I don't see this mentioned in the docs but it seems like a pretty trivial thing to want.
What you are looking for is: "With code, how do you update an Airflow variable?"
Here's an untested snippet that should help:
from airflow.models import Variable
Variable.set(key="my_key", value="my_value")
So basically, you can write a bootstrap Python script to do this setup for you.
In our team, we use such scripts to set up all Connections and Pools too.
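A minimal sketch of such a bootstrap script, assuming the configs.json layout from the question (the file path is an assumption):

import json

from airflow.models import Variable

# load the file from the question (the path is an assumption)
with open("dags/configs.json") as f:
    config = json.load(f)

# store each entry under "vars" as its own Airflow Variable, serialized
# as JSON so tasks can read it back with Variable.get(key, deserialize_json=True)
for key, value in config["vars"].items():
    Variable.set(key=key, value=value, serialize_json=True)

For the auto-import question, Airflow's CLI can also load a file of variables directly (airflow variables import <file> in Airflow 2.x), though it expects a flat key-to-value JSON object rather than the nested layout above.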
In case you are wondering, here's the set(..) method from the source:
@classmethod
@provide_session
def set(
    cls,
    key: str,
    value: Any,
    serialize_json: bool = False,
    session: Session = None
):
    """
    Sets a value for an Airflow Variable with a given Key

    :param key: Variable Key
    :param value: Value to set for the Variable
    :param serialize_json: Serialize the value to a JSON string
    :param session: SQLAlchemy Session
    """
    if serialize_json:
        stored_value = json.dumps(value, indent=2)
    else:
        stored_value = str(value)
    Variable.delete(key, session=session)
    session.add(Variable(key=key, val=stored_value))
    session.flush()
I have been exploring the profile list feature of the kubespawner, and am presented with a list of available notebooks when I login. All good. Now I have the use case of User A logging in and seeing notebooks 1 and 2, with User B seeing notebooks 2 and 3.
Is it possible to assign certain profiles to specific users?
I do not think JupyterHub enables you to do that, based on this: https://zero-to-jupyterhub.readthedocs.io/en/latest/user-environment.html
I think a way to achieve this would be to have multiple JupyterHub instances configured with different lists of notebook images. Based on something like an AD group, you redirect each user to the required instance so they get the specific image options.
You can dynamically configure the profile_list to provide users with different image profiles.
Here's a quick example:
from tornado import gen

import requests


@gen.coroutine
def get_profile_list(spawner):
    """Hook function that is called before the spawner is started.

    Args:
        spawner (jupyterhub.spawner.Spawner): the spawner instance

    Returns:
        list: list of profiles for this user
    """
    # the auth_state is available here if your authenticator stores extra info
    auth_state = yield spawner.user.get_auth_state()
    if spawner.user.name:
        # fetch this user's profile list from your own API;
        # .json() already decodes the response into Python objects,
        # so no string round-tripping is needed
        api_url = ...  # endpoint that returns the profile list for spawner.user.name
        return requests.get(url=api_url, verify=False).json()
    return []  # default: no profiles


c.KubeSpawner.profile_list = get_profile_list
And you can have your API return some kind of configuration similar to this:
[
    {
        "display_name": "Google Cloud in us-central1",
        "description": "Compute paid for by funder A, closest to dataset X",
        "spawner_override": {
            "kubernetes_context": "<kubernetes-context-name-for-this-cluster>",
            "ingress_public_url": "http://<ingress-public-ip-for-this-cluster>"
        }
    },
    {
        "display_name": "AWS on us-east-1",
        "description": "Compute paid for by funder B, closest to dataset Y",
        "spawner_override": {
            "kubernetes_context": "<kubernetes-context-name-for-this-cluster>",
            "ingress_public_url": "http://<ingress-public-ip-for-this-cluster>",
            "patches": {
                "01-memory": """
                    kind: Pod
                    metadata:
                      name: {{key}}
                    spec:
                      containers:
                        - name: notebook
                          resources:
                            requests:
                              memory: 16Mi
                """,
            }
        }
    },
]
Credit: https://multicluster-kubespawner.readthedocs.io/en/latest/profiles.html
I've got a two-dimensional set of configuration variables:
$environments = [
  {
    'name' => 'foo',
    'port' => '1234',
  },
  {
    'name' => 'bar',
    'port' => '4321',
  },
]
Is it possible to iterate over the array and use the variables from the inner hashes? E.g. I want to create a user account for every name.
# How to get each name?
user { $environment:
  ensure => 'present',
}
Puppet 4 provides built-in functions for iterating over aggregate values, and new, Ruby-like syntax to go with them. These are also available in recent enough versions of Puppet 3 when the future parser is enabled. If you are using such a Puppet, you could approach the problem like this:
each($environments) |$e| {
  foo { $e['name']: port => $e['port'] }
}
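Applied to the question's actual goal of creating a user account per name, a sketch (note that the user resource has no port parameter, so only supported attributes are passed):

each($environments) |$e| {
  user { $e['name']:
    ensure => present,
  }
}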
I never really worked with iterations in Puppet.
But to create multiple resources from a hash (note: hash, not array) you can use the create_resources() function.
The documentation has a good example; a sketch follows below.
Your hash cannot contain parameters that the resource does not understand, though. In your example, port would not work with the user resource, as it does not understand that parameter.
Hope this helps a bit anyway.
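For illustration, a minimal sketch of that approach, assuming the data is reshaped into a hash keyed by name with only user-supported attributes:

$users = {
  'foo' => { 'ensure' => 'present' },
  'bar' => { 'ensure' => 'present' },
}
create_resources(user, $users)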