jq command executes differently - one succeeds and the other does not

I am running this command:
aws ec2 describe-availability-zones --region ca-central-1 | jq '.AvailabilityZones[]|(.ZoneName)'
on two identical macOS machines and one Amazon Linux machine.
The macOS machine in question shows this error:
parse error: Invalid numeric literal at line 1, column 18
However, the Amazon Linux machine and the other macOS machine show the correct output.
Please help me! This is driving me crazy.

This error message indicates that the input piped to jq wasn't valid JSON. Since this input comes directly from the output of the aws ec2 describe-availability-zones command, it looks like the aws command isn't emitting JSON, or is emitting other text as well as JSON.
The fastest way to diagnose this is going to be for you to find out what text the aws command is emitting.
One possible cause could be that you have it configured via an environment variable (AWS_DEFAULT_OUTPUT) or configuration file (e.g. ~/.aws/config) to output YAML or text or tables. (In fact, I consider this probable. I can reproduce the error message exactly down to the column number if I set it to output YAML.) You could rule this out by explicitly specifying --output json.
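For example, forcing JSON output for this one invocation would rule out any YAML/text/table default (same pipeline as in the question, with the output format pinned explicitly):
aws ec2 describe-availability-zones --region ca-central-1 --output json | jq '.AvailabilityZones[]|(.ZoneName)'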
Beyond that, I suggest you compare these machines to each other. For example, try this on each machine and see what's different on the odd machine:
echo Versions:
aws --version
jq --version
echo Environment:
env | grep '^AWS_'
echo AWS configuration:
aws configure list
echo AWS config file:
cat ~/.aws/config


send dummy logs to my docker fluentbit agent

I have installed a Fluent Bit agent in Docker and exposed port 24224 to my localhost.
How do I send a dummy log to my Fluent Bit Docker agent?
From what I understand, when I send a log to :24224, the agent will do some processing on it and forward it to localhost:8006,
which should be captured in my otel-collector.
I have done all the setup; all I am missing is a dummy log to test the scenario.
Thanks in advance!
docker run --log-driver=fluentd -t ubuntu echo "Testing a log message"
The command is mostly self-explanatory for Docker users; I have added a description for ease of understanding.
The command simply follows the docker run syntax:
docker run [OPTIONS] IMAGE [COMMAND]
IMAGE=ubuntu
This command simply uses the ubuntu docker image.
COMMAND= echo "Testing a log message"
runs the basic echo command.
The main helpful portion is the log driver.
Docker supports several log drivers, and this list includes fluentd, which will automatically send the message to localhost:24224 without a port being specified!
There might be other tools that can be configured to send logs to localhost:24224; this is just one of many solutions that came in handy!
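If the agent is not listening on the default localhost:24224, the target can be set explicitly via the fluentd-address log option (a sketch; adjust the host and port to your setup):
docker run --log-driver=fluentd --log-opt fluentd-address=localhost:24224 -t ubuntu echo "Testing a log message"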

GET command not found

I am in a student job where I am required to work with a DB, but it really isn't my domain.
The documentation says to enter the line
GET /_cat/health?v
This returns the error
-bash: GET: command not found
It also proposes that I copy as curl. Then the command that works is
curl -XGET 'localhost:9200/_cat/health?v&pretty'
How can I make the command "GET /_cat/health?v" work?
GET is a request method of the HTTP protocol. If you aren't writing HTTP server or client software, you don't have to deal with it explicitly.
The command line
curl -XGET 'localhost:9200/_cat/health?v&pretty'
tells curl to request the URL http://localhost:9200/_cat/health?v&pretty using the GET request method.
GET is the default method, you don't need to specify it explicitly.
Also, the second argument you provide to curl is not a URL. curl is nice and completes it to a correct URL, but other programs that expect URLs might not work the same way (for various reasons). It's better to always specify complete URLs to get the behaviour you expect.
Your command line should be:
curl 'http://localhost:9200/_cat/health?v&pretty'
The single quotes around the URL are required because it contains a character that is special to the shell (&). A string enclosed in single quotes tells the shell not to interpret any special characters inside it.
Without the quotes, the shell thinks the curl command ends at the & and that pretty is a different command, and the result is not what you expect.
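To illustrate, this is roughly how the shell parses the unquoted command line (shown for explanation, not something to run):
curl http://localhost:9200/_cat/health?v &    # the & sends curl to the background
pretty                                        # the shell then looks for a command named "pretty"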
Behind the scenes, curl uses HTTP to connect to the server localhost on port 9200 and sends it this HTTP request:
GET /_cat/health?v&pretty
When you start working with Elasticsearch, one of the first things they ask you to do to test your install is a GET /_cat/health?v, as shown in their documentation.
They fail to tell you that this will not work in a terminal, as Ravi Sharma has explained above. Maybe the Elasticsearch team should clarify this a bit. At least they supply a Copy as cURL link. It is just frustrating for someone new at this.
The GET command is in the package libwww-perl:
sudo apt install libwww-perl
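Once installed, that GET command expects a full URL rather than a bare path (a sketch, assuming Elasticsearch on the default port):
GET 'http://localhost:9200/_cat/health?v'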

robot framework: telnet execute command "Prompt is not set"

Trying to run this piece of code, but a "Prompt is not set" error keeps occurring at the Execute Command line.
*** Settings ***
Library    Telnet
Library    Telnet    ${out}
Library    Collections
Library    Collections    ${y}
Library    Collections    ${x}

*** Variables ***
${ip}      0.0.0.0
${port}    0

*** Test Cases ***
telnet to server
    Open Connection    ${ip}    ${port}

verify something
    ${out}=    Execute Command    ls
    ${y}=    Get From List    ${out}    0
    Should Match Regexp    ${y}    /^ID$/
    Exit Test
    Close All Connections
I have also tried deleting the "Library Telnet ${out}" line and replacing the "${out}= Execute Command ls" line with the following, but I receive the same error.
Write    ls
Set Prompt    ${out}
${out}=    Read Until Prompt
Is there a problem with the syntax? Or is the usage of the "prompt" completely wrong? (If so, how can I fix this?)
(Note: this is a first attempt at Robot Framework, so please feel free to comment on any other problems!)
Everything is in the Telnet docs. I use RED Robot Editor, which can show me the docs for the Telnet library and its keywords by hovering over the Telnet entry in the editor; the docs can also be generated via the command line:
python -m robot.libdoc Telnet show
There is a part about Prompt:
== Prompt ==
Often the easiest way to read the output of a command is reading all
the output until the next prompt with `Read Until Prompt`. It also makes
it easier, and faster, to verify did `Login` succeed.
Prompt can be specified either as a normal string or a regular expression.
The latter is especially useful if the prompt changes as a result of
the executed commands. Prompt can be set to be a regular expression
by giving ``prompt_is_regexp`` argument a true value (see `Boolean
arguments`).
Examples:
| `Open Connection` | lolcathost | prompt=$ |
| `Set Prompt` | (> |# ) | prompt_is_regexp=true |
Check Telnet docs for more help and examples.
PS: I don't see a reason to import Telnet with a parameter:
Library    Telnet    ${out}
Although I am too late to answer, I did not see the exact expected answer, hence answering now.
You need to set two things in Open Connection:
prompt_is_regexp=yes
prompt=<your expected prompt>
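A minimal sketch of what that can look like (the prompt pattern below is an assumption; use whatever prompt your server actually shows):
Open Connection    ${ip}    port=${port}    prompt=(>|#|\$) ?    prompt_is_regexp=yes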

aws configure command giving [Errno 5] Input/output error

I am configuring the AWS CLI.
I run the following command:
[bharthan#pchirmpc007 ~]$ aws configure
AWS Access Key ID [None]: adfasdfadfasdfasdfasdf
AWS Secret Access Key [None]: adfasdfasdfasdfasdfasdfasd
Default region name [None]: us-east-1
Default output format [None]: json
It is giving me following error:
[Errno 5] Input/output error
Any suggestions as to what may be the reason?
You may have some bad sectors on the target HDD.
To check the sda1 volume for bad sectors in Linux, run fsck -c /dev/sda1. For drive C: in Windows it would be chkdsk c: /f /r.
IMHO the chkdsk way is more suitable, as it remaps bad blocks on the HDD, while Linux fsck simply marks such blocks as unusable in the current file system.
Quote from man fsck.ext2:
-c This option causes e2fsck to use badblocks(8) program to do a read-only scan of the device in order to find any bad blocks. If any bad blocks are found, they are added to the bad block inode to prevent them from being allocated to a file or directory. If this option is specified twice, then the bad block scan will be done using a non-destructive read-write test.
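One practical caveat (an assumption about your setup; adjust the device name): the filesystem should not be mounted while it is scanned, so run the check from a live/rescue system or unmount it first:
umount /dev/sda1
fsck -c /dev/sda1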

Why does OpenMPI use a different server given a different -n setting?

I am testing out OpenMPI, provided and compiled by another user (I am using soft links to his directories for bin, include, etc. - all the mandatory directories), but I ran into this weird thing:
First of all, if I run mpirun with an -n setting <= 10, I can run the command below. testrunmpi.py simply prints out "run." from each core.
# I am in serverA.
bash-3.2$ /home/karl/bin/mpirun -n 10 ./testrunmpi.py
run.
run.
run.
run.
run.
run.
run.
run.
run.
run.
However, when I try running with -n more than 10, I run into this:
bash-3.2$ /home/karl/bin/mpirun -n 24 ./testrunmpi.py
karl@serverB's password: Could not chdir to home directory /home/karl: No such file or directory
bash: /home/karl/bin/orted: No such file or directory
--------------------------------------------------------------------------
A daemon (pid 19203) died unexpectedly with status 127 while attempting
to launch so we are aborting.
There may be more information reported by the environment (see above).
This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
--------------------------------------------------------------------------
bash-3.2$
bash-3.2$
Permission denied, please try again.
karl@serverB's password:
Permission denied, please try again.
karl@serverB's password:
I see that the work is dispatched to serverB while I am on serverA. I don't have an account on serverB. But if I invoke mpirun with -n <= 10, the work stays on serverA.
This is strange, so I checked out /home/karl/etc/openmpi-default-hostfile and tried setting the following:
serverA slots=24 max_slots=24
serverB slots=0 max_slots=32
But the problem persists and the same error message as above still appears. What must I do in order to have my program run on serverA only?
The default hostfile in Open MPI is system-wide, i.e. its location is determined while the library is being built and installed and there is no user-specific version of it. The actual location can be obtained by running the ompi_info command like this:
$ ompi_info --param orte orte | grep orte_default_hostfile
MCA orte: parameter "orte_default_hostfile" (current value: <LOOK HERE>, data source: default value)
You can override the list of hosts in several different ways. First, you can provide your own hostfile via the -hostfile option to mpirun. If so, you don't have to put hosts with zero slots inside it - simply omit machines that you have no access to. For example:
localhost slots=10 max_slots=10
serverA slots=24 max_slots=24
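With that saved somewhere under your control (say, ~/my_hostfile - the name is just an example), the job from the question could be launched like this:
/home/karl/bin/mpirun -hostfile ~/my_hostfile -n 24 ./testrunmpi.py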
You can also change the path to the default hostfile by setting the orte_default_hostfile MCA parameter:
$ mpirun --mca orte_default_hostfile /path/to/your/hostfile -n 10 executable
Instead of passing the --mca option each time, you can set the value in an exported environment variable called OMPI_MCA_orte_default_hostfile. This could be set in your shell's dot-rc file, e.g. in .bashrc if using Bash.
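For example, using the hypothetical path from above:
export OMPI_MCA_orte_default_hostfile=/path/to/your/hostfile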
You can also specify the list of nodes directly via the -H (or -host) option.
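For example, to keep all ranks on serverA only (a sketch; depending on the Open MPI version, launching more ranks than listed slots may require repeating the host or allowing oversubscription):
/home/karl/bin/mpirun -H serverA -n 10 ./testrunmpi.py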
