hjson: why does close brace have to be on a separate line?

This works: (update: but not as I was thinking! it actually sets b = "c, d: e")
a: [
{ b: c, d: e
}
]
and this works:
a: [
{ "b": "c", "d": "e" }
]
But this doesn't work. What about the hjson definition disallows the closing brace at the end of the line?
a: [
{ b: c, d: e }
]
Found ']' where a key name was expected
(check your syntax or use quotes if the key name
includes {}[],: or whitespace): line 3 column 1 (char 23)

In Hjson a string without quotes is terminated by the newline, so your closing brace gets eaten by the quoteless string.
When you write
{ b: c, d: e
}
you are saying, give me a string that contains "c, d: e".
You need to use either quotes
{ b: "c", d: "e" }
or
{
b: c
d: e
}

Related

Is the jq + operator eager?

I originally wrote my jq command as
.data.viewer.zones[] | .httpRequests1mGroups[0].sum|with_entries(select(.key|endswith("Map")|not)) + {"zoneTag": .zoneTag}
and got this result:
{
"bytes": 2875120330,
"cachedBytes": 1475518778,
"zoneTag": null
}
{
"bytes": 2875120330,
"cachedBytes": 1475518778,
"zoneTag": null
}
zoneTag is the last attribute in a zones object.
I rewrote the command as
.data.viewer.zones[] | {"zoneTag": .zoneTag} + .httpRequests1mGroups[0].sum|with_entries(select(.key|endswith("Map")|not))
and get what I expected:
{
"zoneTag": "zone 1",
"bytes": 2875120330,
"cachedBytes": 1475518778
}
{
"zoneTag": "zone 2",
"bytes": 2875120330,
"cachedBytes": 1475518778
}
My question is why? Is + eager? (I get the same results using *.)
Thanks.
So maybe you're looking for an explanation in terms of operator precedence.
Let:
A represent .data.viewer.zones[]
B represent .httpRequests1mGroups[0].sum
C represent with_entries(select(.key|endswith("Map")|not))
Then your first jq expression is equivalent to
A | B | C + {zoneTag}
whereas your second is equivalent to:
A | {zoneTag} + B | C
So in the first case, {zoneTag} gets its value from B but in the
second case, it comes from A.
In jq, for most purposes, including that of object addition, an explicit null value in the object on the right is not the same as the absence of a key.
Thus if A is {"a": 1} then A + {} is A but A + {"a": null} is {"a": null}.
Thus the "right-most value" rule must be understood to mean "right-most explicit value".
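This is easy to check from the shell (a quick sketch, assuming jq is on the PATH; -c prints compact output):

```shell
r1=$(jq -cn '{a:1} + {}')        # absent key on the right: the left value survives
r2=$(jq -cn '{a:1} + {a:null}')  # explicit null on the right wins
echo "$r1"
echo "$r2"
```

The first prints {"a":1}, the second {"a":null}.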
Whether any of this has to do with "eagerness" depends on your understanding of that term.
Non-lazy evaluation
In jq, object addition (and indeed addition in general) proceeds from right to left and is of course non-lazy, as can be seen in the following example, which also illustrates the RHS-dominance mentioned above.
jq -n '{a:(1|debug)} + {b: (2|debug)} + {a:(3|debug)}'
["DEBUG:",3]
["DEBUG:",2]
["DEBUG:",1]
{
"a": 3,
"b": 2
}
So far as I know, though, the right-associativity might not be guaranteed.

Unix : Split line with delimiter

I've a file like this
a b c,d
e f g
x y r,s,t
and I would like to split the third field into multiple lines using "," as the delimiter. The other columns should be copied onto each line.
Expected result :
a b c
a b d
e f g
x y r
x y s
x y t
Thank you
Using awk. Expects field separators to be space or tab:
$ awk '{
split($3,a,",") # split the third field on commas into array a
for(i in a) { # for all entries in a
sub(/[^ \t]+$/,a[i],$0) # replace last field with entries in a...
print # ... preserving separators, space or tab
}
}' file
a b c
a b d
e f g
x y r
x y s
x y t
Due to the use of sub() it will produce incorrect results if there is a & in $3 (in sub()'s replacement string, & stands for the matched text). Also, as mentioned in the comments, using for(i in a) may output the records in a seemingly random order. If that is a problem, use:
$ awk '{
n=split($3,a,",") # store element count to n
for(i=1;i<=n;i++) { # iterate in order
sub(/[^ \t]+$/,a[i],$0)
print
}
}' file
For tab separated files:
$ awk '
BEGIN { OFS="\t" } # output field separator
{
n=split($3,a,",")
for(i=1;i<=n;i++) {
$3=a[i] # & friendly version
print
}
}' file
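The & caveat is easy to reproduce: in sub()'s replacement string an unescaped & expands to the whole matched text, so a literal & in the data corrupts the output (a minimal sketch of the failure, not of the fix):

```shell
# $3 is "r,&", so a[2] is "&"; sub() then replaces the last field
# with the matched text itself instead of a literal "&"
got=$(printf 'x y r,&\n' | awk '{ split($3,a,","); sub(/[^ \t]+$/,a[2],$0); print }')
echo "$got"   # x y r,&  (the intended output was: x y &)
```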

firebase get key value and store it in array

I'm building an Ionic 3 app and I'm using Firebase.
I have a structure of details from which I want to get the keys and values and store them in an array.
Some questions:
1. Which kind of array can I use in TypeScript?
2. I want to get the keys and the values of this structure:
"Years": [
1:"mechina",
2:"Year A",
3:"Year B",
4:"Year C",
5:"Year D"
],
The numbers are the keys. What I tried gives me only the values, but I need the keys too:
this.collegeProvider.loadYears().on('value',years =>{
this.yearsArray = years.val();
console.log(this.yearsArray);
});
I am not sure about the JSON structure you have provided, but the Firebase JSON tree looks something like this:
{
"-KsdJ5ngvltq1eOQJ6JS" : {
"a" : "",
"b" : "",
"c" : "",
"d" : "",
"e" : "",
"f" : "",
"g" : "kaushalagarwal79@gmail.com",
"h" : "",
"i" : "",
"j" : "",
"k" : ""
},
So while pushing your data to Firebase you can code something like:
DatabaseReference ref = myRef.push();
ref.setValue(new book(n, a, cn, cc, r, c, e, p, d, ed, pub, iE));
key = ref.getKey();
(Call push() only once and reuse the reference; calling myRef.push() a second time would generate a different key.)
So in this case the key received is -KsdJ5ngvltq1eOQJ6JS.
You can create an array for numerous objects.
Hope that helps!

ArangoDB/AQL: projecting graph traversal results in tree format

How do you write an AQL to project graph traversal result in JSON tree format?
For example.
NamedGraph-"ExampleGraph"
A -> B -> C
A -> B -> D -> E
A -> F -> R
Expected Result:
{
A: "data",
children: [{
B: "data",
children: [
{ C: "data", children: [] }, { D: "data", children: [{E}] }
]
}, {F..}]
}

How to calculate the tree of results by combining individual leaf paths?

Let's say I have an input file where each line contains the path from the root (A) to a leaf
printf 'A\tB\tC\nA\tB\tD\nA\tE\n' > lines.txt
A B C
A B D
A E
How can I easily generate the resulting tree?: (A(B(C,D),E))
I'd like to use GNU tools (awk, sed, etc.) because they tend to work better with large files, but an R script would also work. The R input would be:
# lines <- lapply(readLines("lines.txt"), strsplit, " +")
lines <- list(list(c("A", "B", "C")), list(c("A", "B", "D")), list(c("A","E")))
In Perl:
#!/usr/bin/env perl
use strict;
my $t = {};
while (<>) {
my @a = split;
my $t1 = $t;
while (my $a = shift @a) {
$t1->{$a} = {} if not exists $t1->{$a};
$t1 = $t1->{$a};
}
}
print &p($t)."\n";
sub p {
my ($t) = @_;
return '' unless keys %$t;
return '('
. join(',', map { $_ . p($t->{$_}) } sort keys %$t)
. ')';
}
This script returns:
% cat <<EOF | perl l.pl
A B C
A B D
A E
EOF
(A(B(C,D),E))
Note that this script, due to the recursion in p, is not at all suited for large datasets. But that can easily be resolved by turning the recursion into a double loop, like the first while above.
Why do it the easy way, if you can use a Bourne shell script instead? Note, this is not even Bash, this is plain old Bourne shell, without arrays...
#!/bin/sh
#
# A B C
# A B D
# A E
#
# "" vs "A B C" -> 0->3, ident 0 -> -0+3 -> "(A(B(C"
# "A B C" vs "A B D" -> 3->3, ident 2 -> -1+1 -> ",D"
# "A B D" vs "A E" -> 3->2, ident 1 -> -2+1 -> "),E"
# "A E" vs. endc -> 2->0, ident 0 -> -2+0 -> "))"
#
# Result: (A(B(C,D),E))
#
# Input stream is a path per line, path segments separated with spaces.
process_line () {
local line2="$@"
n2=$#
set -- $line1
n1=$#
s=
if [ $n2 = 0 ]; then # last line (empty)
for s1 in $line1; do
s="$s)"
done
else
sep=
remainder=false
for s2 in $line2; do
if ! $remainder; then
if [ "$1" != $s2 ]; then
remainder=true
if [ $# = 0 ]; then # only children
sep='('
else # sibling to an existing element
sep=,
shift
for s1 in $@; do
s="$s)"
done
fi
fi
fi
if $remainder; then # Process remainder as mismatch
s="$s$sep$s2"
sep='('
fi
shift # remove the first element of line1
done
fi
result="$result$s"
}
result=
line1=
(
cat - \
| sed -e 's/[[:space:]]\+/ /g' \
| sed -e '/^$/d' \
| sort -u
echo '' # last line marker
) | while read line2; do
process_line $line2
line1="$line2"
test -n "$line2" \
|| echo $result
done
This produces the correct answer for two different files (l.sh is the shell version, l.pl the version in Perl):
% for i in l l1; do cat $i; ./l.sh < $i; ./l.pl < $i; echo; done
A
A B
A B C D
A B E F
A G H
A G H I
(A(B(C(D),E(F)),G(H(I))))
(A(B(C(D),E(F)),G(H(I))))
A B C
A B D
A E
(A(B(C,D),E))
(A(B(C,D),E))
Hoohah!
Okay, so I think I got it:
# input
lines <- c(list(c("A", "B", "C")), list(c("A", "B", "D")), list(c("A","E")))
# generate children
generate_children <- function(lines){
children <- list()
for (line in lines) {
for (index in seq_len(length(line) - 1)){
parent <- line[index]
next_child <- line[index + 1]
if (is.null(children[[parent]])){
children[[parent]] <- next_child
} else {
if (!(next_child %in% children[[parent]])){
children[[parent]] <- c(children[[parent]], next_child)
}
}
}
}
children
}
expand_children <- function(current_parent, children){
if (current_parent %in% names(children)){
expanded_children <- sapply(children[[current_parent]], function(current_child){
expand_children(current_child, children)
}, USE.NAMES = FALSE)
output <- setNames(list(expanded_children), current_parent)
} else {
output <- current_parent
}
output
}
children <- generate_children(lines)
root <- names(children)[1]
tree <- expand_children(root, children)
dput(tree)
# structure(list(A = structure(list(B = c("C", "D"), "E"), .Names = c("B",""))), .Names = "A")
Is there a simpler answer?
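Possibly. The close-the-finished-subtrees idea from the shell answer fits in a single awk pass; this is a sketch that assumes sorted, whitespace-separated input (as produced by sort -u in the shell version):

```shell
tree=$(printf 'A B C\nA B D\nA E\n' | sort -u | awk '
{
  c = 0                                  # length of common prefix with previous path
  while (c < n && c < NF && $(c+1) == prev[c+1]) c++
  if (c < n) {                           # close subtrees the previous path opened
    for (i = 0; i < n - c - 1; i++) out = out ")"
    out = out "," $(c+1)                 # first divergent segment is a sibling
    c++
  }
  for (i = c + 1; i <= NF; i++) out = out "(" $i   # remaining segments open subtrees
  n = NF
  for (i = 1; i <= NF; i++) prev[i] = $i
}
END { for (i = 0; i < n; i++) out = out ")"; print out }')
echo "$tree"   # (A(B(C,D),E))
```

Unlike the recursive Perl version, this keeps only the previous path in memory, so it should scale to large files.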
