How can I get the OpenStack Swift CLI to label the columns in the output of swift list, e.g. swift list --lh?
I currently see the following
user#server1:~$ swift list --lh
164006 30G 2020-03-28 02:32:33 backups1
54637 8.1G 2019-10-09 03:00:02 backups2
46549 1.8G 2020-02-26 21:30:03 backups3
258K 40G
I'd be interested in any official documentation that would state the columns as well.
The openstack container list command produces different output that potentially has what you want:
$ openstack container list --long
+------------+-------+-------+
| Name | Bytes | Count |
+------------+-------+-------+
| container0 | 0 | 0 |
+------------+-------+-------+
You can ask for output in CSV, JSON, YAML, or raw values (using the -f option), and you can request specific columns using -c:
$ openstack container list --long -f yaml -c Name -c Count
- Count: 0
Name: container0
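For example, CSV output includes the column names as a header row, which also answers the question about what the columns are. Illustrative output along these lines (exact quoting can vary between client versions):
$ openstack container list --long -f csv
"Name","Bytes","Count"
"container0",0,0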
According to the manual, select needs a parameter boolean_expression.
I always wonder what exactly is meant by this in jq.
To take full advantage of the select filter, it would be nice to have a clear definition.
Can someone give this missing precise definition?
The following collection of unusual examples looks a bit strange and counterintuitive to me:
jq -n '1,2 | select(null)' outputs nothing
jq -n '1,2 | select(empty)' outputs nothing
jq -n '1,2 | select(42)' outputs 1 2
jq -n '1,2 | select(-1.23)' outputs 1 2
jq -n '1,2 | select({a:"strange"})' outputs 1 2
jq -n '1,0,-1,null,false,42 | select(.)' outputs 1 0 -1 42
It seems to me that everything that is not false and not null is considered true.
In the examples, the constants are to be understood as placeholders for the result of an arbitrary expression.
Yes, null and false are indeed considered falsy, and all other values are truthy. This notion is (somewhat unfortunately) explained in the if-then-else section of the manual.
Therefore jq -n '1,2 | select(null)' will produce nothing, as would jq -n '1,2 | select(false)'.
In the case of jq -n '1,2 | select(empty)', the empty just eats up all the results, so there is nothing to output.
All other cases are truthy, therefore the input is propagated.
Note that none of your examples considers the actual input for evaluation. All selects have a constant argument.
To filter based on the input, the argument of select has to somehow process it (as opposed to constants which simply ignore it), e.g. jq -n '1,2 | select(.%2 == 0)' outputs just 2.
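If a precise definition helps: select behaves as if it were defined with if-then-else (this is essentially how jq's own builtins define it), which explains both the truthiness rule and the empty case. A quick sketch using a hypothetical my_select:
$ jq -n 'def my_select(f): if f then . else empty end; 1,2 | my_select(. > 1)'
2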
I've inherited a JS code base with Jasmine unit tests. The testing framework uses Karma and istanbul-combine to get code coverage. It seems istanbul-combine isn't working with present node modules, and besides is no longer maintained: the recommended replacement is nyc. I'm having trouble replacing istanbul-combine with nyc in the Makefile.
I succeeded in merging my separate coverage results (json) files into a single coverage-final.json file (this SO question), but now I need to generate the summary report.
How do I generate a summary report from a coverage.json file?
One problem here, I think, is that I have no .nyc_output directory with intermediate results, since I'm not using nyc to generate coverage data. All my coverage data is in a coverage directory and its child directories.
I've tried specifying a filename:
npx nyc report --include coverage-final.json
Also tried specifying the directory:
npx nyc report --include coverage
Neither works; both just give an empty report:
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
The CLI help documentation says
--temp-dir, -t directory to read raw coverage information from
But when I use that to point to the coverage directory (viz., npx nyc report -t coverage), I get the same unsatisfactory result. nyc is apparently fairly rigid about the formats in which it will accept this data.
Here's the original Makefile line that I'm replacing:
PATH=$(PROJECT_HOME)/bin:$$PATH node_modules/istanbul-combine/cli.js \
-d coverage/summary -r html \
coverage/*/coverage-final.json
Using this line in my Makefile worked:
npx nyc report --reporter html --reporter text -t coverage --report-dir coverage/summary
It grabs the JSON files from the coverage directory and puts them all together into an HTML report in the coverage/summary subdirectory. (I didn't need the nyc merge command from my previous question/answer.)
I'm not sure why the -t option didn't work before. It may be I was using the wrong version of nyc (15.0.0 instead of 14.1.1, fwiw).
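For reference, the corresponding Makefile recipe ends up looking roughly like this (a sketch; I kept the PATH prefix from the original line, though npx may not strictly need it):
PATH=$(PROJECT_HOME)/bin:$$PATH npx nyc report \
    --reporter html --reporter text \
    -t coverage --report-dir coverage/summary
It mirrors the istanbul-combine invocation above: an HTML report into coverage/summary, built from the per-suite JSON files under coverage/.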
After trying multiple nyc commands to produce the report from JSON with no luck, I found an interesting behavior of nyc: You have to be in the parent directory of the instrumented code when you are generating a report. For example:
Say the code I instrumented is in /usr/share/node/** and the merged coverage.json result is in the /tmp directory. If I run nyc report --temp-dir=/tmp --reporter=text under /tmp, I won't get anything:
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
But if I run the same command under /usr/share/node or /, I'm able to get the correct output with coverage numbers.
Not sure if it's a weird permission issue in nyc or just expected behavior.
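For reference, the sequence that produced real numbers in that scenario looks like this (using the example paths above):
cd /usr/share/node     # any directory above the instrumented sources works
npx nyc report --temp-dir=/tmp --reporter=text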
I am working on a cluster where a dataset is kept in HDFS in a distributed manner. Here is what I have:
[hmi#bdadev-5 ~]$ hadoop fs -ls /bdatest/clm/data/
Found 1840 items
-rw-r--r-- 3 bda supergroup 0 2015-08-11 00:32 /bdatest/clm/data/_SUCCESS
-rw-r--r-- 3 bda supergroup 34404390 2015-08-11 00:32 /bdatest/clm/data/part-00000
-rw-r--r-- 3 bda supergroup 34404062 2015-08-11 00:32 /bdatest/clm/data/part-00001
-rw-r--r-- 3 bda supergroup 34404259 2015-08-11 00:32 /bdatest/clm/data/part-00002
....
....
The data is of the form:
[hmi#bdadev-5 ~]$ hadoop fs -cat /bdatest/clm/data/part-00000|head
V|485715986|1|8ca217a3d75d8236|Y|Y|Y|Y/1X||Trimode|SAMSUNG|1x/Trimode|High|Phone|N|Y|Y|Y|N|Basic|Basic|Basic|Basic|N|N|N|N|Y|N|Basic-Communicator|Y|Basic|N|Y|1X|Basic|1X|||SAM|Other|SCH-A870|SCH-A870|N|N|M2MC|
So, what I want to do is to count the total number of lines in the original data file data. My understanding is that the distributed chunks like part-00000, part-00001 etc have overlaps. So just counting the number of lines in part-xxxx files and summing them won't work. Also the original dataset data is of size ~70GB. How can I efficiently find out the total number of lines?
More efficiently, you can use Spark to count the number of lines. The following code snippet counts the lines:
text_file = sc.textFile("hdfs://...")  # sc is the SparkContext available in the pyspark shell
count = text_file.count()
print(count)
This prints the number of lines.
Note: The data in different part files will not overlap
Using hdfs dfs -cat /bdatest/clm/data/part-* | wc -l will also give you the answer, but it streams all of the data to the local machine and takes longer.
The best solution is to use MapReduce or Spark. MapReduce will take longer to develop and execute. If Spark is installed, it is the best choice.
If you just need to find the number of lines in the data, you can use the following command:
hdfs dfs -cat /bdatest/clm/data/part-* | wc -l
You can also write a simple MapReduce program with an identity mapper that emits its input as output. Then you check the job counters and look at the mapper's input records; that will be the number of lines in your data.
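A sketch of that idea with Hadoop streaming: run a map-only job with cat as the identity mapper, then read the "Map input records" counter from the job's final output (the output directory is a placeholder and must not already exist):
hadoop jar ${HADOOP_HOME}/hadoop-streaming.jar \
    -Dmapred.reduce.tasks=0 \
    -input /bdatest/clm/data \
    -output /tmp/linecount-out \
    -mapper /bin/cat
The job still writes a copy of the input to the output directory; only the counter matters, so the output can be deleted afterwards.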
Hadoop one liner:
hadoop fs -cat /bdatest/clm/data/part-* | wc -l
Source: http://www.sasanalysis.com/2014/04/10-popular-linux-commands-for-hadoop.html
Another approach would be to create a MapReduce job where the mapper emits a 1 for each line and the reducer sums the values. See the accepted answer of Writing MapReduce code for counting number of records for the solution.
This is such a common task that I wish there were a subcommand in fs to do it (e.g. hadoop fs -wc -l inputdir) to avoid streaming all the content to one machine that performs the "wc -l" command.
To count lines efficiently, I often use hadoop streaming and unix commands as follows:
hadoop jar ${HADOOP_HOME}/hadoop-streaming.jar \
-Dmapred.reduce.tasks=1 \
-input inputdir \
-output outputdir \
-mapper "bash -c 'paste <(echo "count") <(wc -l)'" \
-reducer "bash -c 'cut -f2 | paste -sd+ | bc'"
Every mapper will run "wc -l" on the parts it has and then a single reducer will sum up the counts from all the mappers.
If you have a very big file whose lines all have roughly the same content (I am thinking of JSON records or log entries), and you don't care about precision, you can estimate the count.
Example: I store raw JSON in a file:
Size of the file: 750 MB
Size of the first line: 752 chars (i.e. 752 bytes)
Lines => about 1,020,091
Running cat | wc -l gives 1,018,932
Not so bad ^^
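Scripted against the HDFS data from the question, that rough estimate might look like this (a sketch; it just divides the total size by the length of the first line):
total=$(hadoop fs -du -s /bdatest/clm/data | awk '{print $1}')
linesize=$(hadoop fs -cat /bdatest/clm/data/part-00000 | head -n 1 | wc -c)
echo $(( total / linesize ))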
You can use hadoop streaming for this problem.
This is how you run it:
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-streaming-2.6.0-cdh5.11.0.jar -input <dir> -output <dir> -mapper counter_mapper.py -reducer counter_reducer.py -file counter_mapper.py -file counter_reducer.py
counter_mapper.py
#!/usr/bin/env python
# mapper: count the lines received on stdin
import sys
count = 0
for line in sys.stdin:
    count = count + 1
print(count)
counter_reducer.py
#!/usr/bin/env python
# reducer: sum the per-mapper counts received on stdin
import sys
count = 0
for line in sys.stdin:
    count = count + int(line)
print(count)
I would like a dead-simple way to query my gps location from a usb dongle from the unix command line.
Right now, I know I've got a functioning software and hardware system, as evidenced by the success of the cgps command in showing me my position. I'd now like to be able to make short requests for my GPS location (lat, long in decimal degrees) from the command line. My USB serial device's path is /dev/ttyUSB0, and I'm using a Global Sat dongle that outputs generic NMEA sentences.
How might I accomplish this?
Thanks
telnet 127.0.0.1 2947
?WATCH={"enable":true}
?POLL;
gives you your answer, but you still need to separate the wheat from the chaff. It also assumes the GPS is not coming up from a cold start.
A short script could be called, e.g.:
#!/bin/bash
exec 2>/dev/null
# get positions
gpstmp=/tmp/gps.data
# capture up to 40 sentences of gpsd JSON in the background, then stop gpspipe
gpspipe -w -n 40 >$gpstmp"1"&
ppid=$!
sleep 10
kill -9 $ppid
# pull the latitude and longitude (numbers with nine decimal places) from the first line that has them
cat $gpstmp"1"|grep -om1 "[-]\?[[:digit:]]\{1,3\}\.[[:digit:]]\{9\}" >$gpstmp
size=$(stat -c%s $gpstmp)
if [ $size -gt 10 ]; then
cat $gpstmp|sed -n -e 1p >/tmp/gps.lat
cat $gpstmp|sed -n -e 2p >/tmp/gps.lon
fi
rm $gpstmp $gpstmp"1"
This will output 40 sentences, grep the lat/lon into temporary files, and then clean up.
Or, from the GPS3 GitHub repository, place the alpha gps3.py in the same directory as the following Python 2.7-3.4 script and execute it.
from time import sleep
import gps3
the_connection = gps3.GPSDSocket()
the_fix = gps3.DataStream()
try:
    for new_data in the_connection:
        if new_data:
            the_fix.refresh(new_data)
            if not isinstance(the_fix.TPV['lat'], str):  # check for valid data
                speed = the_fix.TPV['speed']
                latitude = the_fix.TPV['lat']
                longitude = the_fix.TPV['lon']
                altitude = the_fix.TPV['alt']
                print('Latitude:', latitude, 'Longitude:', longitude)
                sleep(1)
except KeyboardInterrupt:
    the_connection.close()
    print("\nTerminated by user\nGood Bye.\n")
If you want it to close after one iteration, also import sys and then replace sleep(1) with sys.exit().
A much easier solution:
$ gpspipe -w -n 10 | grep -m 1 lon
{"class":"TPV","device":"tcp://localhost:4352","mode":2,"lat":11.1111110000,"lon":22.222222222}
You can use my script gps.sh, which returns "x,y":
#!/bin/bash
x=$(gpspipe -w -n 10 |grep lon|tail -n1|cut -d":" -f9|cut -d"," -f1)
y=$(gpspipe -w -n 10 |grep lon|tail -n1|cut -d":" -f10|cut -d"," -f1)
echo "$x,$y"
sh gps.sh
43.xx4092000,6.xx1269167
Putting a few of the bits of different answers together with a bit more jq work, I like this version:
$ gpspipe -w -n 10 | grep -m 1 TPV | jq -r '[.lat, .lon] | @csv'
40.xxxxxx054,-79.yyyyyy367
Explanation:
(1) use grep -m 1 after invoking gpspipe, as used in @eadmaster's answer, because grep will exit as soon as the first match is found. This gets you results faster instead of having to wait for 10 lines (or using two invocations of gpspipe).
(2) use jq to extract both fields simultaneously; the @csv formatter is more readable. Note the use of jq -r (raw output), so that the output is not put in quotes. Otherwise the output would be "40.xxxx,-79.xxxx" - which might be fine or better for some applications.
(3) Search for the TPV field by name for clarity. This is the "time, position, velocity" record, which is the one we want for extracting the current lat & lon. Just searching for "lat" or "lon" risks getting confused by the GST object that some GPSes may supply, and in that object, 'lat' and 'lon' are the standard deviation of the position error, not the position itself.
Improving on eadmaster's answer, here is a more elegant solution:
gpspipe -w -n 10 | jq -r '.lon' | grep "[[:digit:]]" | tail -1
Explanation:
Ask gpsd for 10 sentences of data
Parse the received JSON with jq
We want only numeric values, so filter using grep
We want the last received value, so use tail for that
Example:
$ gpspipe -w -n 10 | jq -r '.lon' | grep "[[:digit:]]" | tail -1
28.853181286
How can I loop through the contents of a file within Robot Framework?
My file contents would be like this:
1001
1002
1003
1004
I want to read the contents one by one, assign it to a variable and then do some operations with it.
Robot Framework has several standard libraries that add a lot of functionality. Two that you can use for this task are the OperatingSystem library and the String library.
You can use the keyword Get File from the OperatingSystem library to read the file, and you can use the Split to Lines keyword from the String library to convert the file contents to a list of lines. Then it's just a matter of looping over the lines using a for loop.
For example:
*** Settings ***
| Library | OperatingSystem
| Library | String
*** Test Cases ***
| Example of looping over the lines in a file
| | ${contents}= | Get File | data.txt
| | @{lines}= | Split to lines | ${contents}
| | :FOR | ${line} | IN | @{lines}
| | | log | ${line} | WARN
This solved my issue, which was just like yours:
${File}= Get File Path\\FileName.txt
@{list}= Split to lines ${File}
:FOR ${line} IN @{list}
\ Log ${line}
\ ${Value}= Get Variable Value ${line}
\ Log ${Value}
I am reading from a text file, and Get Variable Value is part of the BuiltIn library. Thanks!
Below is a list of different examples of how to use FOR and WHILE loops in your Robot Framework test cases.
http://robotframework.googlecode.com/svn/tags/robotframework-2.5.3/atest/testdata/running/for.txt
My strategy, which I've used successfully with .csv files, would be to create a Python-based keyword that grabs the nth item in a file. The way I did it involved importing Python's csv library, so to give a more complete answer I'd need to know what file type you're trying to read from.