Merge/Combine top level of dictionary

I am trying to build a dictionary out of the servers stored in different environment variables in Ansible.
What I currently have is:
env_loadbalancer_vservers2: "{{ hostvars[inventory_hostname] | dict2items | selectattr('key', 'match', 'env_.*_loadbalancer_vservers(?![_.])') | list | items2dict }}"
Which will:
get all variables in Ansible for the specific host,
convert the dict to an items list,
match only the keys I want with a regex (key and value are now easy to access),
turn the result back into a list,
and convert it back to a dict.
The problem is that the output looks like this:
{
  "env_decision_manager_loadbalancer_vservers": {
    "decision_central": {
      "ip_or_dns": "ip",
      "port": "port",
      "protocol": "SSL",
      "ssl": true,
      "timeout": 600
    }
  },
  "env_ftp_loadbalancer_vservers": {
    "ftp_1": {
      "ip_or_dns": "ip",
      "port": "port",
      "protocol": "FTP",
      "ssl": false,
      "timeout": 9010
    }
  },
  "env_jboss_loadbalancer_vservers": {
    "jboss": {
      "ip_or_dns": "ip",
      "port": "port",
      "protocol": "SSL",
      "ssl": true,
      "timeout": 600
    },
    "jboss_adm": {
      "ip_or_dns": "som_other_ip",
      "port": "rando_number",
      "protocol": "SSL",
      "ssl": true,
      "timeout": 86410
    }
  }
}
While my desired output should look like:
{
  "decision_central": {
    "ip_or_dns": "ip",
    "port": "port",
    "protocol": "SSL",
    "ssl": true,
    "timeout": 600
  },
  "ftp_1": {
    "ip_or_dns": "ip",
    "port": "port",
    "protocol": "FTP",
    "ssl": false,
    "timeout": 9010
  },
  "jboss": {
    "ip_or_dns": "ip",
    "port": "port",
    "protocol": "SSL",
    "ssl": true,
    "timeout": 600
  },
  "jboss_adm": {
    "ip_or_dns": "som_other_ip",
    "port": "rando_number",
    "protocol": "SSL",
    "ssl": true,
    "timeout": 86410
  }
}
So, practically, I need to remove the top-level key tier and merge the values underneath. I've spent quite some time on this without any real progress and would be happy for any advice :)
PS: The solution should be "clean", without any custom modules or extra tasks; ideally it would just add a few filters to the pipeline above so that it produces the dict in the correct format.
Thank you :)

Select the attribute values
regexp: 'env_.*_loadbalancer_vservers(?![_.])'
l1: "{{ hostvars[inventory_hostname]|
        dict2items|
        selectattr('key', 'match', regexp)|
        map(attribute='value')|
        list }}"
gives the list
l1:
  - decision_central:
      ip_or_dns: ip
      port: port
      protocol: SSL
      ssl: true
      timeout: 600
  - ftp_1:
      ip_or_dns: ip
      port: port
      protocol: FTP
      ssl: false
      timeout: 9010
  - jboss:
      ip_or_dns: ip
      port: port
      protocol: SSL
      ssl: true
      timeout: 600
    jboss_adm:
      ip_or_dns: som_other_ip
      port: rando_number
      protocol: SSL
      ssl: true
      timeout: 86410
Combine the items of the list
d1: "{{ {}|combine(l1) }}"
gives the dictionary you're looking for
d1:
  decision_central:
    ip_or_dns: ip
    port: port
    protocol: SSL
    ssl: true
    timeout: 600
  ftp_1:
    ip_or_dns: ip
    port: port
    protocol: FTP
    ssl: false
    timeout: 9010
  jboss:
    ip_or_dns: ip
    port: port
    protocol: SSL
    ssl: true
    timeout: 600
  jboss_adm:
    ip_or_dns: som_other_ip
    port: rando_number
    protocol: SSL
    ssl: true
    timeout: 86410
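Put together into a single expression, a minimal self-contained sketch (host pattern and variable name are placeholders, and it assumes an Ansible version whose combine filter accepts a list of dictionaries, as the d1 step above already relies on) could look like this:
- hosts: all
  gather_facts: false
  vars:
    regexp: 'env_.*_loadbalancer_vservers(?![_.])'
    env_loadbalancer_vservers2: "{{ {} | combine(hostvars[inventory_hostname]
                                        | dict2items
                                        | selectattr('key', 'match', regexp)
                                        | map(attribute='value')
                                        | list) }}"
  tasks:
    # Print the merged dictionary to verify the result.
    - debug:
        var: env_loadbalancer_vservers2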

Related

How to add new field to JSON with jq based on other values

Assume I have the following JSON:
{
  "Address": "myaddress1.com",
  "Port": 6379
},
{
  "Address": "myaddress2.com",
  "Port": 6379
}
I want to concatenate the port and address, and also prepend and append some text, achieving this solely with jq. For example, I want the new output to be:
{
  "Address": "myaddress1.com",
  "Port": 6379,
  "FullAddress": "redis://myaddress1.com:6379"
},
{
  "Address": "myaddress2.com",
  "Port": 6379,
  "FullAddress": "redis://myaddress2.com:6379"
}
Is this possible with just JQ or do I need to use a scripting language?
Assuming that your input file is actually an array of objects, then the following might work for you:
$ jq 'map(. + { "FullAddress": "redis://\(.Address):\(.Port)" })' input.json
[
  {
    "Address": "myaddress1.com",
    "Port": 6379,
    "FullAddress": "redis://myaddress1.com:6379"
  },
  {
    "Address": "myaddress2.com",
    "Port": 6379,
    "FullAddress": "redis://myaddress2.com:6379"
  }
]
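If the input file is actually a stream of separate top-level objects (as pasted in the question) rather than an array, a small variant that should also work is to slurp the inputs into an array first with -s:
$ jq -s 'map(. + { "FullAddress": "redis://\(.Address):\(.Port)" })' input.json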

How to add ECR and DynamoDB permissions in CloudFormation?

I already have the Elastic Beanstalk (multicontainer) template. In addition to that, I want to pull a Docker image from ECR; I already have ****.dkr.ecr.ap-south-1.amazonaws.com/traveltouch:latest in my ECR repo (creating a new repo would also be fine). I reference that image in my Dockerrun.json, and I also need DynamoDB. How do I attach those permissions and add my ECR repo and a new DynamoDB table to MyInstanceProfile and the CloudFormation template?
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  sampleApplication:
    Type: AWS::ElasticBeanstalk::Application
    Properties:
      Description: AWS Elastic Beanstalk Sample Application
  sampleApplicationVersion:
    Type: AWS::ElasticBeanstalk::ApplicationVersion
    Properties:
      ApplicationName:
        Ref: sampleApplication
      Description: AWS ElasticBeanstalk Sample Application Version
      SourceBundle:
        S3Bucket: !Sub "elasticbeanstalk-ap-south-1-182107200133"
        S3Key: TravelTouch/Dockerrun.aws.json
  sampleConfigurationTemplate:
    Type: AWS::ElasticBeanstalk::ConfigurationTemplate
    Properties:
      ApplicationName:
        Ref: sampleApplication
      Description: AWS ElasticBeanstalk Sample Configuration Template
      OptionSettings:
      - Namespace: aws:autoscaling:asg
        OptionName: MinSize
        Value: '2'
      - Namespace: aws:autoscaling:asg
        OptionName: MaxSize
        Value: '6'
      - Namespace: aws:elasticbeanstalk:environment
        OptionName: EnvironmentType
        Value: LoadBalanced
      - Namespace: aws:autoscaling:launchconfiguration
        OptionName: IamInstanceProfile
        Value: !Ref MyInstanceProfile
      SolutionStackName: 64bit Amazon Linux 2018.03 v2.26.0 running Multi-container Docker 19.03.13-ce (Generic)
  sampleEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName:
        Ref: sampleApplication
      Description: AWS ElasticBeanstalk Sample Environment
      TemplateName:
        Ref: sampleConfigurationTemplate
      VersionLabel:
        Ref: sampleApplicationVersion
  MyInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - ec2.amazonaws.com
          Action:
          - sts:AssumeRole
      Description: Beanstalk EC2 role
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier
      - arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker
      - arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier
  MyInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
      - !Ref MyInstanceRole
and my Dockerrun.json:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [{
      "environment": [{
          "name": "POSTGRES_USER",
          "value": "admin"
        },
        {
          "name": "POSTGRES_PASSWORD",
          "value": "postgres"
        },
        {
          "name": "POSTGRES_DB",
          "value": "traveldb"
        }
      ],
      "essential": true,
      "image": "postgres:12-alpine",
      "memory": 300,
      "mountPoints": [{
        "containerPath": "/var/lib/postgresql/data/",
        "sourceVolume": "postgres_data"
      }],
      "name": "db",
      "portMappings": [{
        "containerPort": 5432,
        "hostPort": 5432
      }]
    },
    {
      "essential": true,
      "links": [
        "db"
      ],
      "name": "web",
      "image": "****.dkr.ecr.ap-south-1.amazonaws.com/traveltouch:latest",
      "memory": 300,
      "portMappings": [{
        "containerPort": 80,
        "hostPort": 80
      }]
    }
  ],
  "volumes": [{
    "host": {
      "sourcePath": "postgres_data"
    },
    "name": "postgres_data"
  }]
}
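A minimal sketch of one approach, assuming the AWS managed policies AmazonEC2ContainerRegistryReadOnly and AmazonDynamoDBFullAccess are acceptable and using placeholder names for the table, would be to extend MyInstanceRole's ManagedPolicyArns and add a DynamoDB table resource:
  MyInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - ec2.amazonaws.com
          Action:
          - sts:AssumeRole
      Description: Beanstalk EC2 role
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier
      - arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker
      - arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier
      # Assumed additions: pull access to ECR images and full DynamoDB access.
      - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      - arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
  # Hypothetical table; the name and key schema are placeholders.
  MyDynamoTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: traveltouch-table
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
      KeySchema:
      - AttributeName: id
        KeyType: HASH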

Flutter Firebase: connect to emulator from a real device

Hi, I am currently using Firebase's local emulator with Flutter. However, I am using a real device and not a simulator, so I do not know how to connect to my laptop's localhost. I am currently using this code:
// [Firestore | localhost:8080]
FirebaseFirestore.instance.settings = const Settings(
  host: "localhost:8080",
  sslEnabled: false,
  persistenceEnabled: false,
);
// [Authentication | localhost:9099]
await FirebaseAuth.instance.useEmulator("http://localhost:9099");
FirebaseFunctions.instance.useFunctionsEmulator(
  origin: "http://localhost:5001"
);
// [Storage | localhost:9199]
await FirebaseStorage.instance.useEmulator(
  host: "localhost",
  port: 9199,
);
OK, I fixed the problem with these two steps:
firebase.json:
{
  ...
  "emulators": {
    "auth": {
      "host": "0.0.0.0",   <--- Adding host
      "port": 9099
    },
    "functions": {
      "host": "0.0.0.0",
      "port": 5001
    },
    "firestore": {
      "host": "0.0.0.0",
      "port": 8080
    },
    "storage": {
      "host": "0.0.0.0",
      "port": 9199
    },
    "ui": {
      "enabled": true
    }
  },
  ...
}
flutter main.dart:
const String localIp = "Your local IP goes here";
FirebaseFirestore.instance.settings = const Settings(
  host: localIp + ":8080",
  sslEnabled: false,
  persistenceEnabled: false,
);
await FirebaseAuth.instance.useEmulator("http://" + localIp + ":9099");
FirebaseFunctions.instance.useFunctionsEmulator(
  origin: "http://" + localIp + ":5001"
);
await FirebaseStorage.instance.useEmulator(
  host: localIp,
  port: 9199,
);
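As a side note (not part of the original answer), for an Android device attached over USB another option is adb reverse port forwarding, which lets the app keep pointing at localhost; a sketch assuming the default emulator ports used above:
adb reverse tcp:9099 tcp:9099   # Auth
adb reverse tcp:5001 tcp:5001   # Functions
adb reverse tcp:8080 tcp:8080   # Firestore
adb reverse tcp:9199 tcp:9199   # Storage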

Kubernetes Cluster APP DNS

I have the domain name www.domain.com registered and a fresh Kubernetes cluster up and running. I have successfully launched Deployments and Services to expose what is needed.
The Service creates a LoadBalancer on my GCE cluster, and when I access my app through the external IP it works.
But this is what I ideally wanted to achieve:
route all the traffic for my apps as www.app.domain.com, www.app2.domain.com. From my research I found that I need an Ingress controller, preferably an NGINX server; I have been trying to do this and failing miserably.
This is the Service JSON exposing my deployments:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "node-js-srv"
  },
  "spec": {
    "type": "LoadBalancer",
    "label": {
      "app": "node-js-srv"
    },
    "ports": [
      {
        "targetPort": 8080,
        "protocol": "TCP",
        "port": 80,
        "name": "http"
      },
      {
        "protocol": "TCP",
        "port": 443,
        "name": "https",
        "targetPort": 8080
      }
    ],
    "selector": {
      "app": "node-js"
    }
  }
}
GCE/GKE already has an Ingress controller and you could use that one.
You must specify your Service as type NodePort and create a resource of type Ingress.
See:
https://kubernetes.io/docs/user-guide/ingress/
You can find an example for GCE here: https://github.com/kubernetes/ingress/tree/master/examples/deployment/gce
Service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "node-js-srv"
  },
  "spec": {
    "type": "NodePort",
    "label": {
      "app": "node-js-srv"
    },
    "ports": [
      {
        "targetPort": 8080,
        "protocol": "TCP",
        "port": 80,
        "name": "http"
      },
      {
        "protocol": "TCP",
        "port": 443,
        "name": "https",
        "targetPort": 8080
      }
    ],
    "selector": {
      "app": "node-js"
    }
  }
}
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: www.app.domain.com
    http:
      paths:
      - backend:
          serviceName: node-js-srv
          servicePort: 80
  - host: www.app2.domain.com
    http:
      paths:
      - backend:
          serviceName: xyz
          servicePort: 80
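As a side note, the extensions/v1beta1 Ingress API has since been removed from current Kubernetes versions; a roughly equivalent manifest under networking.k8s.io/v1, keeping the same hostnames and Service names, would look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: www.app.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: node-js-srv
            port:
              number: 80
  - host: www.app2.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: xyz
            port:
              number: 80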

Slow static IP assignment to Kubernetes Load Balancer on Google Cloud Platform

When I create a Kubernetes Load Balancer Service using the following specification:
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "a1"
  },
  "spec": {
    "selector": {
      "app": "a1"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 80,
        "name": "http"
      },
      {
        "port": 443,
        "targetPort": 443,
        "name": "https"
      }
    ],
    "type": "LoadBalancer"
  }
}
I have to wait between 1 and 2 minutes until I get an EXTERNAL_IP.
I thought of reserving static IPs beforehand and assigning them on Service creation:
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "a1"
  },
  "spec": {
    "selector": {
      "app": "a1"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 80,
        "name": "http"
      },
      {
        "port": 443,
        "targetPort": 443,
        "name": "https"
      }
    ],
    "type": "LoadBalancer",
    "loadBalancerIP": "130.211.64.237"
  }
}
But I still get the same delay of 1 to 1.5 minutes:
$ kubectl get svc
NAME   CLUSTER_IP       EXTERNAL_IP      PORT(S)          SELECTOR   AGE
a1     10.127.248.248   130.211.64.237   80/TCP,443/TCP   app=a1     1m
Does anyone know why this delay happens and if there is a way to shorten it?
The delay is unfortunately just caused by latency in the Compute Engine APIs for creating the components of a load balanced service, and there's no real way to avoid it.
The Kubernetes master, when instructed to create a load balancer, has to create a static IP address, a target pool, a forwarding rule, and a firewall rule. These resources can take some time to fully initialize and be ready for use, so a wait of a minute or two is to be expected for now.
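If the goal is simply to block until the address appears, a small usage sketch (not from the original answer; the service name a1 comes from the question) is to watch the Service or poll its status:
$ kubectl get service a1 --watch
# prints the external IP once it has been assigned
$ kubectl get service a1 -o jsonpath='{.status.loadBalancer.ingress[0].ip}'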
