How to use the kafkacat -f option?

I am trying to consume Kafka messages using the options below (the format tokens are a requirement):
kafkacat -C -b localhost:9092 -t test-topic -p 0 -f 'Topic %t [%p] at offset %o: key %k: %s\n' -o -1  -e | jq .
But I am getting the error below:
Error: file/topic list only allowed in producer(-P)/kafkaconsumer(-G) mode
Usage: <path> <options> [file1 file2 .. | topic1 topic2 ..]]
kcat - Apache Kafka producer and consumer tool
https://github.com/edenhill/kcat
If I run the above command without the -f option it works, but I want the formatted output. What could be the issue?

This has worked for me
[root@vm-10-75-112-163 cloud-user]# kcat -b mybroker:9092 -t test1 -f 'Topic %t[%p], offset: %o, key: %k, payload: %S bytes: %s\n' -C
Topic test1[0], offset: 0, key: , payload: 1 bytes: 1
Topic test1[0], offset: 1, key: , payload: 1 bytes: 2
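kcat performs the token substitution itself; purely as an illustration of what each -f token expands to, here is the question's format string rendered with printf and hypothetical record values (%t=topic, %p=partition, %o=offset, %k=key, %s=payload):

```shell
# Illustration only: emulate kcat's -f token expansion for one record.
# The values below are made up; kcat fills them in from the real message.
topic="test-topic"; partition=0; offset=42; key="key1"; payload="hello"
printf 'Topic %s [%s] at offset %s: key %s: %s\n' \
    "$topic" "$partition" "$offset" "$key" "$payload"
```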

Related

wso2 API Manager AutoScaling Group Not Creating

I've been trying to use the stock templates from the wso2 website to deploy wso2 to AWS. The CloudFormation stack fails because the Auto Scaling group fails to create.
I checked the EC2 instances and the actual instance is running and healthy.
I SSH'ed to the instance and ran:
grep -ni 'error\|failure' $(sudo find /var/log -name cfn-init\* -or -name cloud-init\*)
to check the log files for errors or failures. I didn't find any.
I then tried to run:
/usr/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}
from the correct instance. I filled in the correct information manually when I ran the command on the instance. I pulled this command from the YAML file from the wso2 website. This command returned an Access Denied error for the stack.
Any help would be greatly appreciated. I feel like I'm overlooking something simple. I've included the LaunchConfiguration and the template for the Auto Scaling group below in case they're useful. Happy to provide other information.
WSO2MINode1LaunchConfiguration:
  Type: 'AWS::AutoScaling::LaunchConfiguration'
  Properties:
    ImageId: !FindInMap
      - WSO2APIMAMIRegionMap
      - !Ref 'AWS::Region'
      - !Ref OperatingSystem
    InstanceType: !Ref WSO2InstanceType
    BlockDeviceMappings:
      - DeviceName: /dev/sda1
        Ebs:
          VolumeSize: '20'
          VolumeType: gp2
          DeleteOnTermination: 'true'
    KeyName: !Ref KeyPairName
    SecurityGroups:
      - !Ref WSO2MISecurityGroup
    UserData: !Base64
      'Fn::Sub': |
        Content-Type: multipart/mixed; boundary="//"
        MIME-Version: 1.0
        --//
        Content-Type: text/cloud-config; charset="us-ascii"
        MIME-Version: 1.0
        Content-Transfer-Encoding: 7bit
        Content-Disposition: attachment; filename="cloud-config.txt"
        #cloud-config
        cloud_final_modules:
          - [scripts-user, always]
        --//
        Content-Type: text/x-shellscript; charset="us-ascii"
        MIME-Version: 1.0
        Content-Transfer-Encoding: 7bit
        Content-Disposition: attachment; filename="userdata.txt"
        #!/bin/bash
        exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
        export PATH=~/.local/bin:$PATH
        if [[ ${OperatingSystem} == "Ubuntu1804" ]]; then
          export DEBIAN_FRONTEND=noninteractive
          apt-get update
          apt install -y puppet nfs-common
          apt install -y python-pip
          apt install -y python3-pip
          pip3 install boto3
          pip install boto3
          sed -i '/\[main\]/a server=puppet' /etc/puppet/puppet.conf
        fi
        if [[ ${OperatingSystem} == "CentOS7" ]]; then
          yum install -y epel-release zip unzip nfs-utils
          yum install -y python-pip
          pip install boto3
          rpm -Uvh https://yum.puppetlabs.com/puppet5/puppet5-release-el-7.noarch.rpm
          yum install -y puppet-agent
          echo $'[main]\nserver = puppet\ncertname = agent3\nenvironment = production\nruninterval = 1h' > /etc/puppetlabs/puppet/puppet.conf
        fi
        pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
        export PuppetmasterIP=${PuppetMaster.PrivateIp}
        echo "$PuppetmasterIP puppet puppetmaster" >> /etc/hosts
        export MI_HOST=${WSO2APIMLoadBalancer.DNSName}
        export MI_PORT=8290
        service puppet restart
        sleep 150
        export FACTER_profile=mi
        if [[ ${OperatingSystem} == "Ubuntu1804" ]]; then
          puppet agent -vt >> /var/log/puppetlog.log
        fi
        if [[ ${OperatingSystem} == "CentOS7" ]]; then
          /opt/puppetlabs/bin/puppet agent -vt >> /var/log/puppetlog.log
        fi
        sleep 30
        service puppet stop
        sh /usr/lib/wso2/wso2am/4.1.0/wso2mi-4.1.0/bin/micro-integrator.sh start
        if [[ ${OperatingSystem} == "Ubuntu1804" ]]; then
          echo "/usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}" >> /home/ubuntu/cfn-signal.txt
          /usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}
        fi
        if [[ ${OperatingSystem} == "CentOS7" ]]; then
          /usr/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WSO2MINode1AutoScalingGroup --region ${AWS::Region}
        fi
        echo 'export HISTTIMEFORMAT="%F %T "' >> /etc/profile.d/history.sh
        cat /dev/null > ~/.bash_history && history -c
  DependsOn:
    - WSO2MISecurityGroup
    - WSO2APIMSecurityGroup
    - PuppetMaster
WSO2MINode1AutoScalingGroup:
  Type: 'AWS::AutoScaling::AutoScalingGroup'
  Properties:
    LaunchConfigurationName: !Ref WSO2MINode1LaunchConfiguration
    DesiredCapacity: 1
    MinSize: 1
    MaxSize: 1
    VPCZoneIdentifier:
      - !Ref WSO2APIMPrivateSubnet1
      - !Ref WSO2APIMPrivateSubnet2
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} WSO2MIInstance
        PropagateAtLaunch: 'true'
  CreationPolicy:
    ResourceSignal:
      Count: 1
      Timeout: PT30M
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MaxBatchSize: '2'
      MinInstancesInService: '1'
      PauseTime: PT10M
      SuspendProcesses:
        - AlarmNotification
      WaitOnResourceSignals: false
  DependsOn:
    - WSO2APIMNode1AutoScalingGroup
    - WSO2APIMNode2AutoScalingGroup
Thank you!

Ansible async task with poll=0 living beyond timeout

For context: ansible 2.7.9
I'm experimenting with Ansible asynchronous actions, working with this playbook:
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - name: This is "long task", sleeping for {{ secondsTaskLong }} seconds
      shell: "[ -f testFile ] && rm testFile; sleep {{ secondsTaskLong }}; touch testFile"
      async: "{{ timeoutSeconds }}"
      poll: 0
    - stat:
        path: testFile
      register: checkTestFile
    - name: Check the test file exists
      assert:
        that: checkTestFile.stat.exists
...
test 1 :
ansible-playbook async.yml --extra-var "secondsTaskLong=2 timeoutSeconds=3"
The assertion fails, but if I check the directory contents, ls reveals that the test file ./testFile is there.
test 2 :
Testing with a 5s duration :
ansible-playbook async.yml --extra-var "secondsTaskLong=5 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"
The final watch part reveals that the test file is created a few seconds AFTER the end of the playbook execution.
test 3 :
Now with a 9s duration (i.e. way beyond the timeout) :
ansible-playbook async.yml --extra-var "secondsTaskLong=9 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"
The test file is still created.
test 4 :
Now trying with 10 seconds :
ansible-playbook async.yml --extra-var "secondsTaskLong=10 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"
The test file is NOT created.
What is going on exactly? What allows this "long task" to live for 9 seconds, beyond the timeout? What kills it at 10 seconds?
EDIT:
I can add an explicit connection timeout > 10s and still observe the same behavior:
ansible-playbook async.yml -T 20 --extra-var "secondsTaskLong=9 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"
ansible-playbook async.yml -T 20 --extra-var "secondsTaskLong=10 timeoutSeconds=3"; watch -n 1 -d "ls -l testFile"
What is going on exactly? What allows this "long task" to live for 9 seconds, beyond the timeout? What kills it at 10 seconds?
When poll > 0, Ansible uses the async value as the timeout; otherwise the connection timeout (default 10 seconds) is used. Refer to the details on connection timeout here and the poll value here.
For example, if you change the task as below and use timeoutSeconds=3 and secondsTaskLong=4, you will get a timeout error after 3 seconds and the assert will fail. However, for timeoutSeconds=3 and secondsTaskLong=2, the assert will succeed.
- name: This is "long task", sleeping for {{ secondsTaskLong }} seconds
  shell: "[ -f testFile ] && rm testFile; sleep {{ secondsTaskLong }}; touch testFile"
  async: "{{ timeoutSeconds }}"
  poll: 1
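When you do want poll: 0 but still need the result later in the play, a common pattern is to register the background job and wait on it explicitly with the async_status module (a sketch; the retry count and delay are arbitrary):

```yaml
- name: Start the long task in the background
  shell: "sleep {{ secondsTaskLong }}; touch testFile"
  async: "{{ timeoutSeconds }}"
  poll: 0
  register: long_job

- name: Wait for the background job to complete
  async_status:
    jid: "{{ long_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 1
```

With this in place, the stat/assert tasks from the question run only after the file has actually been created.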

Mapping ACPI events for brightness buttons on Lenovo Yoga X1 v2

I installed Ubuntu Gnome 17.04 on my new Lenovo Yoga X1 (version 2) and the brightness buttons don't work out of the box. I've gone through the steps (below) I thought necessary to map these keys to xrandr calls, but nothing happens, even though I log the key-mapping event being caught successfully. If I manually run the logged command, brightness changes appropriately. What am I missing in the ACPI route?
First see what ACPI events the brightness buttons are sending
$ acpi_listen
video/brightnessdown BRTDN 00000087 00000000
video/brightnessup BRTUP 00000086 00000000
Then create the event definitions
$ cat yoga-brightness-up
event=video/brightnessup BRTUP 00000086
action=/etc/acpi/yoga-brightness.sh up
$ cat yoga-brightness-down
event=video/brightnessdown BRTDN 00000087
action=/etc/acpi/yoga-brightness.sh down
Define the action script
$ cat /etc/acpi/yoga-brightness.sh
#!/bin/sh
# Where the backlight brightness is stored
BR_DIR="/sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1/intel_backlight/"
test -d "$BR_DIR" || exit 0
MIN=0
MAX=$(cat "$BR_DIR/max_brightness")
VAL=$(cat "$BR_DIR/brightness")
if [ "$1" = down ]; then
    VAL=$((VAL-71))
else
    VAL=$((VAL+71))
fi
if [ "$VAL" -lt $MIN ]; then
    VAL=$MIN
elif [ "$VAL" -gt $MAX ]; then
    VAL=$MAX
fi
PERCENT=`echo "$VAL / $MAX" | bc -l`
#export XAUTHORITY=/home/ivo/.Xauthority # CHANGE "ivo" TO YOUR USER
#export DISPLAY=:0.0
export XAUTHORITY=/home/jorvis/.Xauthority
export DISPLAY=:0
echo "xrandr --output eDP-1 --brightness $PERCENT" > /tmp/yoga-brightness.log
xrandr --output eDP-1 --brightness $PERCENT
echo $VAL > "$BR_DIR/brightness"
Restart acpid
$ sudo /etc/init.d/acpid reload
Success should write to the brightness log
$ rm /tmp/yoga-brightness.log
[ hit brightness down button three times ]
$ sudo cat /tmp/yoga-brightness.log
xrandr --output eDP-1 --brightness .76603773584905660377
Log is written correctly, as is the brightness value:
$ cat /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1/intel_backlight/brightness
759
Nothing happens on the actual display, though. It DOES work if I manually run the command that was logged.
$ xrandr --output eDP-1 --brightness .76603773584905660377
And after reading through this post I noticed the ENV part again and double-checked it. The problem was the setting of $XAUTHORITY in the script. It was fine for it to point to my ~/.Xauthority (which didn't exist by default), but I needed to do this:
$ ln -s /run/user/1000/gdm/Xauthority ~/.Xauthority
After that the brightness buttons worked.

How do I deploy an artifact with a Maven layout using the REST API?

I can do a normal deploy using the below command
curl -i -X PUT -u $artifactoryUser:$artifactoryPassword -T /path/to/file/file.zip http://localhost/artifactory/simple/repo/groupId/artifactId/version/file.zip
However, this does not resolve or update the Maven layout on the artifact. Is there a way I can upload without using the artifactory-maven plugin?
I found a solution to this question I had posted.
Syntax used:
curl -i -X PUT -K $CURLPWD "http://localhost/artifactory/$REPO/$groupId/$artifactId/$versionId/$artifactId-$versionId.$fileExt"
I ended up writing a script so that the md5 & sha1 values are uploaded with the file; otherwise I had to go into Artifactory and fix them manually.
#!/bin/bash
usage() {
    echo "Please check the usage of the script; not enough parameters were supplied."
    echo "Usage: ArtifactoryUpload.sh localFilePath Repo GroupID ArtifactID VersionID"
    exit 1
}

if [ -z "$5" ]; then
    usage
fi

localFilePath="$1"
REPO="$2"
groupId="$3"
artifactId="$4"
versionId="$5"
ARTIFAC=http://localhost/artifactory

if [ ! -f "$localFilePath" ]; then
    echo "ERROR: local file $localFilePath does not exist!"
    exit 1
fi

which md5sum || exit $?
which sha1sum || exit $?

md5Value="$(md5sum "$localFilePath")"
md5Value="${md5Value:0:32}"
sha1Value="$(sha1sum "$localFilePath")"
sha1Value="${sha1Value:0:40}"
fileName="$(basename "$localFilePath")"
fileExt="${fileName##*.}"

echo "$md5Value $sha1Value $localFilePath"
echo "INFO: Uploading $localFilePath to $ARTIFAC/$REPO/$groupId/$artifactId/$versionId/$artifactId-$versionId.$fileExt"

curl -i -X PUT -K $CURLPWD \
     -H "X-Checksum-Md5: $md5Value" \
     -H "X-Checksum-Sha1: $sha1Value" \
     -T "$localFilePath" \
     "$ARTIFAC/$REPO/$groupId/$artifactId/$versionId/$artifactId-$versionId.$fileExt"
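The checksum extraction in the script can be sanity-checked locally without touching Artifactory (md5sum/sha1sum from GNU coreutils; the "hello" sample file is only for illustration):

```shell
# Verify the substring extraction used above against a known input.
tmpfile=$(mktemp)
printf 'hello' > "$tmpfile"
md5Value="$(md5sum "$tmpfile")";  md5Value="${md5Value:0:32}"    # first 32 hex chars
sha1Value="$(sha1sum "$tmpfile")"; sha1Value="${sha1Value:0:40}" # first 40 hex chars
echo "$md5Value"
echo "$sha1Value"
rm -f "$tmpfile"
```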

How can I send an email through the UNIX mailx command?

How can I send an email through the UNIX mailx command?
An example:
$ echo "something" | mailx -s "subject" recipient@somewhere.com
To send an attachment:
$ uuencode file file | mailx -s "subject" recipient@somewhere.com
And to send an attachment AND write the message body:
$ (echo "something\n" ; uuencode file file) | mailx -s "subject" recipient@somewhere.com
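Note that echo "something\n" emits a literal \n on many shells; printf builds a multi-line body portably (illustration only, nothing is mailed here):

```shell
# Build a two-line body with printf instead of relying on echo's \n handling.
body=$(printf 'something\n\nsee attached file')
printf '%s\n' "$body"
```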
Here you are:
echo "Body" | mailx -r "FROM_EMAIL" -s "SUBJECT" "To_EMAIL"
P.S. The body and subject should be kept within double quotes.
Remove the quotes from FROM_EMAIL and To_EMAIL when substituting real email addresses.
mailx -s "subject_of_mail" abc@domain.com < file_name
Through the mailx utility we can send a file from UNIX to a mail recipient.
In the above command:
the first parameter is -s "subject of mail",
the second parameter is the mail ID, and the last parameter is the file whose contents become the message body (the file is redirected to standard input; it is not sent as an attachment).
mail [-s subject] [-c ccaddress] [-b bccaddress] toaddress
-c and -b are optional.
-s : Specify the subject; if the subject contains spaces, use quotes.
-c : Send carbon copies to a list of users separated by commas.
-b : Send blind carbon copies to a list of users separated by commas.
Hope my answer clarifies your doubt.
It's faster with the mutt command:
echo "Body Of the Email" | mutt -a "File_Attachment.csv" -s "Daily Report for $(date)" -c cc_mail@g.com to_mail@g.com -y
-c : cc email list
-s : subject
-y : send the mail
From the man page:
Sending mail
To send a message to one or more people, mailx can be invoked with arguments which are the names of people to whom the mail will be sent. The user is then expected to type in his message, followed by a ‘control-D’ at the beginning of a line.
In other words, mailx reads the content to send from standard input and can be redirected to like any normal command. E.g.:
ls -l $HOME | mailx -s "The content of my home directory" someone@email.adr
echo "Sending emails ..."
NOW=$(date +"%F %H:%M")
echo "$NOW Running service" >> open_files.log
header="Service Restarting: $NOW"
mail -s "$header" abc.xyz@google.com \
     cde.mno@yahoo.com < open_files.log
Customizing FROM address
MESSAGE="SOME MESSAGE"
SUBJECT="SOME SUBJECT"
TOADDR="u@u.com"
FROM="DONOTREPLY"
echo $MESSAGE | mail -s "$SUBJECT" $TOADDR -- -f $FROM
Here is a multi-purpose function to handle mail sending with several attachments:
enviaremail() {
    values=$(echo "$@" | tr -d '\n')
    listargs=()
    listargs+=($values)
    heirloom-mailx $( attachment=""
        for (( a = 5; a < ${#listargs[@]}; a++ )); do
            attachment=$(echo "-a ${listargs[a]} ")
            echo "${attachment}"
        done) -v -s "${titulo}" \
        -S smtp-use-starttls \
        -S ssl-verify=ignore \
        -S smtp-auth=login \
        -S smtp=smtp://$1 \
        -S from="${2}" \
        -S smtp-auth-user=$3 \
        -S smtp-auth-password=$4 \
        $5 < ${cuerpo}
}
function call:
enviaremail "smtp.mailserver:port" "from_address" "authuser" "'pass'" "destination" "list of attachments separated by spaces"
Note: remove the double quotes in the call.
In addition, remember to define $titulo (the subject) and $cuerpo (a file containing the body) externally prior to using the function.
If you want to send to more than one person or a DL:
echo "Message Body" | mailx -s "Message Title" -r sender@someone.com receiver1@someone.com,receiver_dl@someone.com
here:
-s = subject or mail title
-r = sender mail or DL
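The example above passes the recipients as a single comma-separated argument; one way to build such a list from a whitespace-separated variable (the addresses are placeholders):

```shell
# Turn a space-separated list of addresses into the comma-separated
# form used on the mailx command line above.
recipients="receiver1@someone.com receiver_dl@someone.com"
to_list=$(printf '%s' "$recipients" | tr ' ' ',')
echo "$to_list"
```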
