I am using Kusto. I have a function and I want to use it for each row.
I can call the function with | invoke <FUNCTION_NAME>, but how can I apply it to all rows?
Use the "extend" operator
For example:
let f = view(a:int) {
    a * 3
};
let t = datatable(a:int) [1, 2, 3];
t
| extend b = f(a)
a    b
1    3
2    6
3    9
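The same works if the function is a stored function rather than a let-bound view; a minimal sketch, where MultiplyBy3 is a hypothetical function name (the .create command is run separately, before the query):
.create function MultiplyBy3(a:int) { a * 3 }

t                          // t as defined above
| extend b = MultiplyBy3(a)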
So as input, I get echo [number]. For example: echo 5.
I need to get as output the sequence of numbers from 1 to [number].
So if I get 5 as input, I need: 1 2 3 4 5 (each on a separate line).
I know I can use seq 5 to get 1 2 3 4 5, but the issue is that I need to use pipes.
So the final command should be something like echo 5 | seq [number], which should give 1 2 3 4 5 as output. My issue is that I don't know how to get the output from echo as the input for seq.
Assuming that echo 5 stands in for an unknown program that writes a single number to stdout, and that this output should be used as an argument for seq, you could use a script like this:
file seqstdin:
#!/bin/sh
read num
seq "$num"
You can use it like
echo 5 | ./seqstdin
to get the output
1
2
3
4
5
You can also write everything in a single line, e.g.
echo '5' | { read num; seq "$num"; }
Notes:
This code does not contain any error handling. It uses the first line of input as an argument for seq. If seq does not accept this value it will print an error message.
I did not use read -r or IFS= because I expect the input to be a number suitable for seq. With other input you might get unexpected results.
You can use the output of the echo command via command substitution:
seq "$(echo 5)"
In case you're dealing with a variable, you might do:
var=5
echo "$var"
seq "$var"
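If you want to keep an actual pipe, xargs can turn the output of echo into arguments for seq; a sketch, assuming the upstream program prints a single number:
echo 5 | xargs seq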
I have two files that each have 700 fields, where 699/700 of the fields have matching headers. I would like to reorder the fields so that they are in the same order in both files (though which order is irrelevant). For example, given:
File1:
FRUIT MSMC1 MSMC24 MSMC2 MSMC10
Apple 1 2 3 2
Pear 2 1 4 5
File2:
VEG MSMC24 MSMC1 MSMC2 MSMC10
Onion 2 1 3 2
Radish 0 3 9 3
I would like the first field of each file to be the field that is not common to both files, followed by the rest of the fields in the same order in both files. For example, one possible outcome would be:
File1:
FRUIT MSMC1 MSMC2 MSMC10 MSMC24
Apple 1 3 2 2
Pear 2 4 5 1
File2:
VEG MSMC1 MSMC2 MSMC10 MSMC24
Onion 1 3 2 2
Radish 3 9 3 0
Using data.table, this can help you.
First read the files:
library(data.table)
dt1 <- fread("file1.csv")
dt2 <- fread("file2.csv")
then get the names of the fields, and the common ones:
ndt1 <- names(dt1)[-1]
ndt2 <- names(dt2)[-1]
common <- intersect(ndt1, ndt2)
and now you can just apply the new column order (setcolorder reorders columns; setorder would sort the rows):
setcolorder(dt1, c(names(dt1)[1], setdiff(ndt1, common), common))
setcolorder(dt2, c(names(dt2)[1], setdiff(ndt2, common), common))
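If the reordered tables should then go back to disk, data.table's fwrite can write them out (the output file names here are just placeholders):
fwrite(dt1, "file1_reordered.csv")
fwrite(dt2, "file2_reordered.csv")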
A Perl solution that leaves the first file as is and writes the second file with the columns arranged in the same order as the first file. It reads the two files supplied on the command line (following the script name).
Update: added the map $_ // (), phrase to allow the second file to be a subset of the first, in answer to the comment: "How could these answers be modified if one file were to be a subset of the other (not all columns from file 1 are in file 2)?" – theo4786
#!/usr/bin/perl
use strict;
use warnings;

# commandline: perl script_name.pl fruits.csv veg.csv

my (undef, @fruit_hdrs) = split ' ', <> and close ARGV;

my @veg_hdrs;
while (<>) {
    my ($name, @cols) = split;

    # only executes for the first line (header line) of second file
    @veg_hdrs = @cols unless @veg_hdrs;

    my %line;
    @line{ @veg_hdrs } = @cols;
    print join(" ", $name, map $_ // (), @line{ @fruit_hdrs } ), "\n";
}
Output is:
VEG MSMC1 MSMC24 MSMC2 MSMC10
Onion 1 2 3 2
Radish 3 0 9 3
In Perl, the tool for this job is a hash slice.
You can access values of a hash as @hash{@keys}.
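For instance, a tiny standalone illustration of a hash slice (not part of the script below):
my %h = ( a => 1, b => 2, c => 3 );
my @vals = @h{ 'c', 'a' };    # (3, 1) - values follow the order of the given keys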
So something like this:
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper;

my @headers;
my $type;
my @rows;

# iterate data - would do this with a normal 'open'
while ( <DATA> ) {
    # set headers if the leading word is all upper case
    if ( m/^[A-Z]+\s/ ) {
        # separate out type (VEG/FRUIT) from the other headings.
        chomp( ( $type, @headers ) = split );
        # print for debugging
        print Dumper \@headers;
    }
    else {
        # create a hash to store this row.
        my %this_row;
        # split the row on whitespace, capturing name and ordered fields by header row.
        ( my $name, @this_row{@headers} ) = split;
        # insert name and type into the hash
        $this_row{name} = $name;
        $this_row{type} = $type;
        # print for debugging
        print Dumper \%this_row;
        # store it in @rows
        push @rows, \%this_row;
    }
}

# print output:
# header line
print join( "\t", "name", "type", @headers ), "\n";
# iterate rows, extract ordered by _last_ set of headers.
foreach my $row (@rows) {
    print join( "\t", $row->{name}, $row->{type}, @{$row}{@headers} ), "\n";
}

__DATA__
FRUIT MSMC1 MSMC24 MSMC2 MSMC10
Apple 1 2 3 2
Pear 2 1 4 5
VEG MSMC24 MSMC1 MSMC2 MSMC10
Onion 2 1 3 2
Radish 0 3 9 3
Note - I've used Data::Dumper for diagnostics - those lines can be removed, but I've left them in because they illustrate what's going on.
Likewise reading from <DATA> - normally you'd open a file handle, or just use while ( <> ) { to read STDIN or files specified on the command line.
The ordering of output is based on the last header line seen - you can, of course, sort or reorder it.
If you need to handle mismatched columns, this will error on the missing one. In that scenario, we can break out map to fill in any blanks, and use a hash for the headers to ensure we capture them all.
E.g.;
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper;

my @headers;
my %headers_combined;
my $type;
my @rows;

# iterate data - would do this with a normal 'open'
while ( <DATA> ) {
    # set headers if the leading word is all upper case
    if ( m/^[A-Z]+\s/ ) {
        # separate out type (VEG/FRUIT) from the other headings.
        chomp( ( $type, @headers ) = split );
        # add to hash of headers, to preserve uniques
        $headers_combined{$_}++ for @headers;
        # print for debugging
        print Dumper \@headers;
    }
    else {
        # create a hash to store this row.
        my %this_row;
        # split the row on whitespace, capturing name and ordered fields by header row.
        ( my $name, @this_row{@headers} ) = split;
        # insert name and type into the hash
        $this_row{name} = $name;
        $this_row{type} = $type;
        # print for debugging
        print Dumper \%this_row;
        # store it in @rows
        push @rows, \%this_row;
    }
}

# print output:
# header line
# note - extract keys from the hash, not the @headers array.
# sort is needed to order them, because hash keys are unordered by default.
print join( "\t", "name", "type", sort keys %headers_combined ), "\n";
# iterate rows, extract ordered by the sorted combined headers.
foreach my $row (@rows) {
    print join( "\t", $row->{name}, $row->{type}, map { $row->{$_} // '' } sort keys %headers_combined ), "\n";
}
__DATA__
FRUIT MSMC1 MSMC24 MSMC2 MSMC10 OTHER
Apple 1 2 3 2 x
Pear 2 1 4 5 y
VEG MSMC24 MSMC1 MSMC2 MSMC10 NOTHING
Onion 2 1 3 2 p
Radish 0 3 9 3 z
Here, map { $row->{$_} // '' } sort keys %headers_combined takes all the keys of the hash in sorted order, and then extracts each key from the row - or gives an empty string if it's undefined (that's what // does).
This will reorder the fields in file2 to match the order in file1:
$ cat tst.awk
FNR==1 {
    fileNr++
    for (i=2;i<=NF;i++) {
        name2nr[fileNr,$i] = i
        nr2name[fileNr,i] = $i
    }
}
fileNr==2 {
    printf "%s", $1
    for (i=2;i<=NF;i++) {
        printf "%s%s", OFS, $(name2nr[2,nr2name[1,i]])
    }
    print ""
}
$ awk -f tst.awk file1 file2
VEG MSMC1 MSMC24 MSMC2 MSMC10
Onion 1 2 3 2
Radish 3 0 9 3
With GNU awk you can delete the fileNr++ line and use ARGIND instead of fileNr everywhere else.
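For reference, that GNU awk variant would look like this (the same script, mechanically substituted as the sentence above describes, with ARGIND doing the file counting):
FNR==1 {
    for (i=2;i<=NF;i++) {
        name2nr[ARGIND,$i] = i
        nr2name[ARGIND,i] = $i
    }
}
ARGIND==2 {
    printf "%s", $1
    for (i=2;i<=NF;i++) {
        printf "%s%s", OFS, $(name2nr[2,nr2name[1,i]])
    }
    print ""
}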
I have a script where the SQL output of the function is multiple rows (one column), and I'm trying to loop through them with a for loop but can't seem to get it to work...
rslt=sqlquery {}
echo $rslt
1
2
3
4
for i in $rslt
do
echo "lvl$i"
done
but from the loop... I keep getting this back four times:
lvl1
2
3
4
whereas I want to get this back...
lvl1
lvl2
lvl3
lvl4
how do I get that?
In order to get the needed result in your script, you need to put $rslt in double quotes ("). This will ensure that you are not losing the newlines (\n) from your result, which you expect to have in the loop.
for i in "$rslt"
do
echo "lvl$i"
done
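Alternatively, if the rows may themselves contain spaces, reading the result line by line keeps each row intact; a sketch, assuming rslt already holds the multi-line query output:
echo "$rslt" | while IFS= read -r i
do
    echo "lvl$i"
done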
To loop over the values in a ksh array, you need to use the ${array[@]} syntax:
$ set -A rslt 1 2 3 4
$ for i in ${rslt[@]}
> do
> echo "lvl$i"
> done
lvl1
lvl2
lvl3
lvl4
I have files (~1k) that look (basically) like this:
NAME1.txt
NAME ATTR VALUE
NAME1 x 1
NAME1 y 2
...
NAME2.txt
NAME ATTR VALUE
NAME2 x 19
NAME2 y 23
...
Where the ATTR column is the same in every file and the NAME column is just some version of the filename. I would like to combine them together into one file that looks like:
All_data.txt
ATTR NAME1_VALUE NAME2_VALUE NAME3_VALUE ...
x 1 19 ...
y 2 23 ...
...
Is there a simple way to do this with just command-line utilities, or will I have to resort to writing a script?
Thanks
You need to write a script.
gawk is the obvious candidate
You could populate an associative array as the records are read, using FILENAME and ATTR as the key and VALUE as the value.
Then create your output in an END block.
gawk can process all the txt files together by using *.txt as the filename pattern.
It's a bit optimistic to expect there to be a ready-made command to do exactly what you want.
Very few commands join data horizontally.
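A minimal gawk sketch of that approach, assuming whitespace-separated files named NAME1.txt, NAME2.txt, ... as in the question (run as gawk -f combine.awk *.txt > All_data.txt; the NAME column is ignored since it just repeats the file name):
FNR == 1 {                      # header line of each file
    name = FILENAME
    sub(/\.txt$/, "", name)     # NAME1.txt -> NAME1
    files[++nf] = name
    next
}
{
    if (!($2 in seen)) {        # remember each ATTR once, in input order
        seen[$2] = 1
        attrs[++na] = $2
    }
    vals[$2, nf] = $3           # VALUE keyed by ATTR and file index
}
END {
    printf "ATTR"
    for (i = 1; i <= nf; i++) printf " %s_VALUE", files[i]
    print ""
    for (j = 1; j <= na; j++) {
        printf "%s", attrs[j]
        for (i = 1; i <= nf; i++) printf " %s", vals[attrs[j], i]
        print ""
    }
}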