I am trying to parse JSON and am getting stuck when it comes to iterating over an array.
[
  {
    "severity": "low",
    "name": "AWS IAM policy attached to users",
    "rule": "$.resource[*].aws_iam_policy_attachment[*].*[*].users exists and $.resource[*].aws_iam_policy_attachment[*].*[*].users[*] is not empty",
    "files": [
      "main.tf"
    ],
    "id": "1903f355-b68f-4d9c-84dd-c46abe4f8673"
  },
  {
    "severity": "medium",
    "name": "AWS VPC allows unauthorized peering",
    "rule": "$.resource[*].aws_vpc_peering_connection[*].*[*].peer_vpc_id does not equal $.resource[*].aws_vpc_peering_connection[*].*[*].vpc_id",
    "files": [
      "vpcpeering.tf",
      "main.tf"
    ],
    "id": "59356130-d856-470d-a08e-b2a0ba2a4ac7"
  }
]
. += [{"severity": "Severity","name": "Name","files": "Files","id": "0"}, {"severity": "--------","name": "------------------------------------","files": "--------","id": "01"}] | sort_by(.id) | .[] | ["| " + .severity, "| " + .name, "| " + (.files | join (",")) , "| " ]
jq: error (at <stdin>:131): Cannot iterate over string ("Files")
exit status 5
https://jqplay.org/s/KzNXa7NqKq
I am trying to print the results in tabular format.
Severity   Name                                  Files
low        AWS IAM policy attached to users      main.tf
medium     AWS VPC allows unauthorized peering   vpcpeering.tf, main.tf
Since the number of items in the .files array is variable, the following focuses on producing TSV (tab-separated values). You can easily adapt this solution to produce a table with a variable number of columns.
First, ignoring the headers, note that the following filter, when used in conjunction with the -r command-line option, produces the output shown below:
map({id, severity, name, files})
| sort_by(.id)[]
| [.severity, .name, .files[]]
| @tsv
Output
low AWS IAM policy attached to users main.tf
medium AWS S3 object versioning is disabled cloudtrail.tf.json cloudtrail.tf
medium AWS VPC NACL allows traffic from blocked ports SG.tf
medium AWS security group allow egress traffic from blocked ports - 21,22,135,137-139,445,69 securitygroup22.tf
medium AWS Access logging not enabled on S3 buckets cloudtrail.tf.json cloudtrail.tf
medium AWS VPC allows unauthorized peering vpcpeering.tf
medium AWS IAM password policy does not have a minimum of 14 characters iampassword.tf
medium AWS security group allows traffic from blocked ports securitygroup22.tf
high AWS Security Groups allow internet traffic to SSH port (22) securitygroup22.tf
medium AWS EC2 instance have SSH port open to internet securitygroup22.tf
medium AWS IAM password policy allows password reuse iampassword.tf
medium AWS VPC NACL allow egress traffic from blocked ports SG.tf
high AWS Security Groups with Inbound rule overly permissive to All Traffic securitygroup22.tf
Headers
A modular way to deal with the header row is to define headers, e.g. along these lines:
def headers:
  ["Severity","Name","Files"],
  ["--------","----","-----"];
With this def, we essentially just add headers to the previous jq program:
headers,
(map({id, severity, name, files})
| sort_by(.id)[]
| [.severity, .name, .files[]])
| @tsv
Using a different "join" character for .files
Instead of the line:
[.severity, .name, .files[]]
you might wish to use a different "join" character for the files, e.g.
[.severity, .name, (.files|join(";"))]
Season to taste.
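Putting it all together, a complete invocation might look like this (a sketch, assuming the input shown at the top is saved as input.json; the filename is illustrative):

jq -r '
  def headers:
    ["Severity","Name","Files"],
    ["--------","----","-----"];
  headers,
  (map({id, severity, name, files})
   | sort_by(.id)[]
   | [.severity, .name, (.files|join(";"))])
  | @tsv
' input.json

The -r option is what makes jq emit raw (unquoted) lines, so the tabs produced by @tsv survive intact.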
Related
We have configured OpenSearch in AWS. We need to send two different application logs to two different indexes in OpenSearch using Fluent Bit. We are using tail as the INPUT and es as the OUTPUT. Please find the Fluent Bit configuration below -
INPUT -
[INPUT]
    name              tail
    path              /var/log/messages
    Refresh_Interval  1
    Tag               messages
    Path_Key          On
    read_from_head    true

[INPUT]
    name              tail
    path              /var/log/secure
    Refresh_Interval  1
    Tag               secure
    Path_Key          On
    read_from_head    true
OUTPUT -
[OUTPUT]
    Name             es
    Match            *
    Host             opensearch-url
    Port             443
    HTTP_User        admin
    HTTP_Passwd      ************
    tls              On
    tls.verify       Off
    Include_Tag_Key  On
    Tag_Key          tag
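For reference, one way to route each log to its own index is to replace the single Match * block with one [OUTPUT] block per tag and set the es output's Index option. This is a sketch, not a tested configuration; the index names app-messages and app-secure are hypothetical:

[OUTPUT]
    Name         es
    Match        messages
    Host         opensearch-url
    Port         443
    HTTP_User    admin
    HTTP_Passwd  ************
    tls          On
    tls.verify   Off
    Index        app-messages

[OUTPUT]
    Name         es
    Match        secure
    Host         opensearch-url
    Port         443
    HTTP_User    admin
    HTTP_Passwd  ************
    tls          On
    tls.verify   Off
    Index        app-secure

Each Match value corresponds to the Tag set in the matching [INPUT] block above.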
I am working on "Patch an AMI and update an Auto Scaling group" and followed the AWS documentation to configure it, but I am stuck at "Task 3: Create a runbook, patch the AMI, and update the Auto Scaling group" with the error below. To fix it, I added user data when starting the instance (startInstances). As it accepts only base64, I converted the script and provided the base64 value (UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==).
I tried to execute with the user data below, but neither attempt worked; I even tried adding a new step with the same commands, but it still failed to patch the AMI.
I tried the script below:
<powershell> powershell.exe -Command Start-Service -Name AmazonSSMAgent </powershell> <persist>true</persist>
I also tried starting and restarting the SSM agent:
Restart-Service AmazonSSMAgent
base64: UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
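That base64 decodes to the bare command "Restart-Service AmazonSSMAgent"; on Windows, EC2 user data typically has to include the <powershell> wrapper inside the encoded payload. A sketch of producing such a value (the file name is illustrative):

cat > userdata.txt <<'EOF'
<powershell>
Restart-Service AmazonSSMAgent
</powershell>
<persist>true</persist>
EOF
base64 -w0 userdata.txt    # GNU coreutils; on macOS use: base64 -i userdata.txt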
YAML sample:
mainSteps:
  - name: startInstances
    action: 'aws:runInstances'
    timeoutSeconds: 1200
    maxAttempts: 1
    onFailure: Abort
    inputs:
      ImageId: '{{ sourceAMIid }}'
      InstanceType: m3.large
      MinInstanceCount: 1
      MaxInstanceCount: 1
      SubnetId: '{{ subnetId }}'
      UserData: UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
Still, I am seeing the error below.
Step timed out while step is verifying the SSM Agent availability on the target instance(s). SSM Agent on Instances: [i-xxxxxxxx] are not functioning. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.
Your suggestions/solutions would help me a lot. Thank you.
I have troubleshot and fixed the issue.
The issue was a missing security group on the instance. For the SSM service's SendCommand API to communicate with the SSM agent, the instance needs a security group that allows HTTPS (port 443). After I attached a security group allowing port 443, the SSM agent could communicate with the EC2 instance.
The EC2 instance's IAM role should also have an SSM access policy attached to it.
The same issue can occur when the SSM agent is not running on the EC2 instance; in that case, provide user data or add a new step to the Systems Manager document (YAML or JSON).
If you are working on a Windows instance, use the script below to start the SSM agent. If it is a Linux server, use the equivalent Linux commands.
{
  "schemaVersion": "2.0",
  "description": "Start SSM agent on instance",
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "runPowerShellScript",
      "inputs": {
        "runCommand": [
          "Start-Service AmazonSSMAgent"
        ]
      }
    }
  ]
}
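For completeness, a sketch of registering and running this document with the AWS CLI; the document name and file name are illustrative:

# Register the JSON above as a Command document
aws ssm create-document \
    --name "StartSSMAgent" \
    --document-type "Command" \
    --content file://start-ssm-agent.json

# Run it against the stuck instance
aws ssm send-command \
    --document-name "StartSSMAgent" \
    --instance-ids i-xxxxxxxx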
I would like to set up Ansible on my Mac. I've done something similar in GNS3 and it worked, but here there are more factors I need to take into account. I have Ansible installed, I added hostnames to /etc/hosts, and I can ping using the hostnames I provided there.
I have created an ansible folder which I am going to use, and put ansible.cfg inside:
[defaults]
hostfile = ./hosts
host_key_checking = false
timeout = 5
inventory = ./hosts
In the same folder I have the hosts file:
[tp-lab]
lab-acc0
When I try to run the following command: ansible tx-edge-acc0 -m ping
I am getting the following errors:
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
[WARNING]: Unhandled error in Python interpreter discovery for host tx-edge-acc0: unexpected output from Python interpreter discovery
[WARNING]: sftp transfer mechanism failed on [tx-edge-acc0]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: scp transfer mechanism failed on [tx-edge-acc0]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: Platform unknown on host tx-edge-acc0 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
tx-edge-acc0 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"module_stderr": "Shared connection to tx-edge-acc0 closed.\r\n",
"module_stdout": "\r\nerror: unknown command: /bin/sh\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
Any idea what might be the problem here? Much appreciated.
At first glance, it seems that your Ansible controller does not load the configuration files (especially ansible.cfg) when the playbook is fired.
(From the documentation) Ansible searches for configuration files in the following order, processing the first file it finds and ignoring the rest (a quick way to check which file won is shown after the list):
$ANSIBLE_CONFIG if the environment variable is set.
ansible.cfg if it’s in the current directory.
~/.ansible.cfg if it’s in the user’s home directory.
/etc/ansible/ansible.cfg, the default config file.
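Both of the following are standard Ansible commands for checking this:

# Prints "config file = ..." among the version details
ansible --version

# Shows only the settings that differ from the defaults, with their origin
ansible-config dump --only-changed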
Edit: For peace of mind, it is good to use full paths.
EDIT: Based on comments
$ cat /home/ansible/ansible.cfg
[defaults]
host_key_checking = False
inventory = /home/ansible/hosts # <-- use full path to inventory file
$ cat /home/ansible/hosts
[servers]
server-a
server-b
Command & output:
# Supplying inventory host group!
$ ansible servers -m ping
server-a | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
server-b | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
After installing devstack, I followed the docs to set security rules:
$ openstack security group rule create --proto icmp --dst-port 0 default
More than one security_group exists with the name 'default'.
It claims I have multiple security groups named 'default', but when I check with nova secgroup-list:
$ nova secgroup-list
WARNING: Command secgroup-list is deprecated and will be removed after Nova 15.0.0 is released. Use python-neutronclient or python-openstackclient instead.
+--------------------------------------+---------+------------------------+
| Id | Name | Description |
+--------------------------------------+---------+------------------------+
| e5466481-f656-46fa-ac72-56f7ab118c70 | default | Default security group |
+--------------------------------------+---------+------------------------+
So there is only one security group named 'default'...
Could anyone give me some suggestions about that?
Yes, that's a common problem when you have multiple tenants, each with a "default" group. The solution I have applied before is to use the group ID instead of the group name when it comes to the "default" group.
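For example, using the ID from the nova output above:

# List the groups first if you need to find the right ID across tenants
openstack security group list

# Then refer to the "default" group by ID rather than by its ambiguous name
openstack security group rule create --proto icmp e5466481-f656-46fa-ac72-56f7ab118c70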
Following the "OpenStack installation guide for Ubuntu 12.04 LTS", I have configured a network of virtual machines with keystone and swift services. I configured swift to use the keystone service for authentication, and everything worked perfectly. The problem arose when I tried to configure a second swift service on keystone. On keystone I created two swift services, called "swift" and "swift2", both with the type property set to "object-store". I set the endpoints for both services: "http://proxyserver1:8080/v1/AUTH_%(tenant_id)s" for the first swift service and "http://proxyserver2:8080/v1/AUTH_%(tenant_id)s" for the second. For each swift service I created a user with the role "admin", both belonging to the same tenant "service". When I try to authenticate a user on keystone using:
curl -s -d '{"auth": {"tenantName": "service", "passwordCredentials": {"username": "swift", "password": "PASSWORD"}}}' -H 'Content-type: application/json' http://identity-manager:5000/v2.0/tokens
I receive a response with an incorrect serviceCatalog array. It contains only two entries: the endpoints for keystone itself and one for an object-store service, so one object-store service endpoint is missing. Moreover, the endpoint for the only object-store service returned is wrong, because it mixes properties of the two object-store services:
{
  "endpoints": [
    {
      "adminURL": "http://proxyserver2:8080",
      "region": "regionOne",
      "internalURL": "http://proxyserver1:8080/v1/AUTH_74eb7b8a36f64105a7d77fe00a2b6d41",
      "id": "0d30428e6a2d4035baf1c84401c8ff1b",
      "publicURL": "http://proxyserver1:8080/v1/AUTH_74eb7b8a36f64105a7d77fe00a2b6d41"
    }
  ],
  "endpoints_links": [],
  "type": "object-store",
  "name": "swift2"
}
My question is whether it is possible to configure two swift clusters on keystone. If the answer is yes, where could I have gone wrong?
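For comparison, this is roughly what the second service's registration would look like with the keystone CLI of that era; the service-id placeholder is illustrative, and checking keystone endpoint-list against these values may reveal the mixed-up row:

keystone service-create --name swift2 --type object-store \
    --description "Second object storage service"

keystone endpoint-create \
    --region regionOne \
    --service-id <swift2-service-id> \
    --publicurl 'http://proxyserver2:8080/v1/AUTH_%(tenant_id)s' \
    --internalurl 'http://proxyserver2:8080/v1/AUTH_%(tenant_id)s' \
    --adminurl 'http://proxyserver2:8080'

keystone endpoint-list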