Parsing a text file in Tcl and creating a dictionary of key-value pairs where the values are lists

How do I parse the following text file and keep only the required data for each record?
For example, the text file has this format:
Name Roll_number Subject Experiment_name Marks Result
Joy 23 Science Exp related to magnet 45 pass
Adi 12 Science Exp electronics 48 pass
kumar 18 Maths prime numbers 49 pass
Piya 19 Maths number roots 47 pass
Ron 28 Maths decimal numbers 12 fail
After parsing the above information, I want to store it in a dictionary where each key is a subject (unique) and the value for that subject is the list of names of students who passed.

set studentInfo [dict create]; # Creating empty dictionary
set fp [open input.txt r]
set line_no 0
while {[gets $fp line] != -1} {
    incr line_no
    # Skipping line number 1 alone, as it has the column headers
    # You can alter this logic, if you want to
    if {$line_no == 1} {
        continue
    }
    if {[regexp {(\S+)\s+\S+\s+(\S+).*\s(\S+)} $line match name subject result]} {
        if {$result eq "pass"} {
            # Appending the student's name with key value as 'subject'
            dict lappend studentInfo $subject $name
        }
    }
}
close $fp
puts [dict get $studentInfo]
Output:
Science {Joy Adi} Maths {kumar Piya}

Related

The encryption won't decrypt

I was given an encrypted copy of the study guide here, but how do you decrypt and read it???
In a file called pa11.py write a method called decode(inputfile,outputfile). Decode should take two parameters - both of which are strings. The first should be the name of an encoded file (either helloworld.txt or superdupertopsecretstudyguide.txt or yet another file that I might use to test your code). The second should be the name of a file that you will use as an output file.
Your method should read in the contents of the inputfile and, using the scheme described in the hints.txt file above, decode the hidden message, writing to the outputfile as it goes (or all at once when it is done depending on what you decide to use).
The penny math lecture is here.
"""
Program: pennyMath.py
Author: CS 1510
Description: Calculates the penny math value of a string.
"""
# Get the input string
original = input("Enter a string to get its cost in penny math: ")
cost = 0
Go through each character in the input string
for char in original:
value = ord(char) #ord() gives us the encoded number!
if char>="a" and char<="z":
cost = cost+(value-96) #offset the value of ord by 96
elif char>="A" and char<="Z":
cost = cost+(value-64) #offset the value of ord by 64
print("The cost of",original,"is",cost)
Another hint: Don't forget about while loops...
Another hint: After letters - skip ahead by their pennymath value positions + 2
After numbers - skip ahead by their number + 7 positions
After anything else - just skip ahead by 1 position
The issue I'm having is that I can't seem to get the code right to decode the file; it comes out looking the same. This is the current code I have been using, but once I try to decrypt the message it stays the same.
def pennycost(c):
    if c >= "a" and c <= "z":
        return ord(c) - 96
    elif c >= "A" and c <= "Z":
        return ord(c) - 64

def decryption(inputfile, outputfile):
    with open(inputfile) as f:
        fo = open(outputfile, "w")
        count = 0
        while True:
            c = f.read(1)
            if not c:
                break
            if count > 0:
                count = count - 1
                continue
            elif c.isalpha():
                count = pennycost(c)
                fo.write(c)
            elif c.isdigit():
                count = int(c)
                fo.write(c)
            else:
                count = 6
                fo.write(c)
        fo.close()

inputfile = input("Please enter the input file name: ")
outputfile = input("Please enter the output file name (EXISTING FILE WILL BE OVERWRITTEN!): ")
decryption(inputfile, outputfile)
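For reference, here is a minimal sketch of a decoder that applies the skip counts exactly as the hints state them (after letters: penny value + 2, after numbers: the digit + 7, after anything else: 1). Treating "skip ahead by N positions" as advancing the read index by N, and reading the whole file into memory, are assumptions; adjust them to match the actual encoding scheme.

def penny_value(c):
    # Penny math value of a letter (a/A = 1 ... z/Z = 26), 0 for anything else
    if "a" <= c <= "z":
        return ord(c) - 96
    if "A" <= c <= "Z":
        return ord(c) - 64
    return 0

def decode(inputfile, outputfile):
    with open(inputfile) as fin, open(outputfile, "w") as fout:
        text = fin.read()
        i = 0
        while i < len(text):
            c = text[i]
            fout.write(c)  # keep this character of the hidden message
            if c.isalpha():
                i += penny_value(c) + 2  # after letters: penny value + 2
            elif c.isdigit():
                i += int(c) + 7          # after numbers: the number + 7
            else:
                i += 1                   # after anything else: 1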

Checking on equal values of 2 different data frame row by row

I have 2 different data frames; one is 5.5 MB and the other is 25 GB. I want to check whether these two data frames have the same value in 2 of the columns for each row.
For e.g.
x 0 0 a
x 1 2 b
y 1 2 c
z 3 4 d
and
x 0 0 w
x 1 2 m
y 5 6 p
z 8 9 q
I want to check whether the 2nd and 3rd columns are equal for each row; if yes, I return the 4th column from both data frames. Then I should have:
a w
b m
c m
The 2 data frames are sorted with respect to the 2nd and 3rd column values. I tried it in R, but the 2nd file (25 GB) is too big. How can I obtain this new file in a faster way (even a few hours is fine)?
With GNU awk for arrays of arrays:
$ cat tst.awk
NR==FNR { a[$2,$3][$4]; next }
($2,$3) in a {
    for (val in a[$2,$3]) {
        print val, $4
    }
}
$ awk -f tst.awk small_file large_file
a w
b m
c m
and with any awk (a bit less efficiently):
$ cat tst.awk
NR==FNR { a[$2,$3] = a[$2,$3] FS $4; next }
($2,$3) in a {
    split(a[$2,$3],vals)
    for (i in vals) {
        print vals[i], $4
    }
}
$ awk -f tst.awk small_file large_file
a w
b m
c m
When reading small_file (NR==FNR is only true for the first file read - look up those variables in the awk man page or google), the above creates an associative array a[] that maps an index built from the concatenation of the 2nd and 3rd fields to the list of values of the 4th field for that 2nd/3rd field combination. Then, when reading large_file, it looks up that array for the current 2nd/3rd field combination and loops through all of the values stored for that combination in the first phase, printing each stored value (the $4 from small_file) plus the current $4.
You said your small file is 5.5 MB and the large file is 25 GB. Since 1 MB is about 1,047,600 characters (see https://www.computerhope.com/issues/chspace.htm) and each of your lines is about 8 characters long, that means your small file is about 130 thousand lines long and your large one about 134 million lines long, so on an average powered computer I expect the above should take no more than a minute or two to run; it certainly won't take anything like an hour!
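For comparison only, the same two-phase idea can be sketched in Python; this is not part of the awk answer above, and it assumes whitespace-separated columns, that the small file fits in memory, and the file names small_file and large_file:

from collections import defaultdict

# Phase 1: map each (2nd, 3rd) field pair from small_file to its list of 4th-field values
lookup = defaultdict(list)
with open("small_file") as f:
    for line in f:
        fields = line.split()
        lookup[(fields[1], fields[2])].append(fields[3])

# Phase 2: stream large_file and print one line per stored value when the key matches
with open("large_file") as f:
    for line in f:
        fields = line.split()
        for val in lookup.get((fields[1], fields[2]), []):
            print(val, fields[3])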
An alternative to the solution of Ed Morton, but with an identical idea:
$ cat tst.awk
NR==FNR { a[$2,$3] = a[$2,$3] $4 ORS; next }
($2,$3) in a {
    s = a[$2,$3]; gsub(ORS, OFS $4 ORS, s)
    printf "%s", s
}
$ awk -f tst.awk small_file large_file
a w
b m
c m

How to read the comma separated values

We have a fixed width file
Col1 length 10
Col2 length 10
Col3 length 30
Col4 length 40
Sample record
ABC 123 xyz. 5171-5261,51617
ABC. 1234. Xxy. 81651-61761
Col4 can have any number of comma-separated values, 1 or more, within its length of 40 characters. If it has just 1 value for that record, there is no change in the output file.
If more than one value is there, i.e. comma separated (5171-5261,51617),
the output file should have multiple records for that 1 input record:
ABC. 123. Xyz. 5171-5261
ABC 123. Xyz. 51617
What is the most efficient way to do this?
As of now I am trying while and for loops, but it is taking so long to execute since we are doing this splitting by reading each record.
The output file can be comma separated or fixed width.
awk is your friend here.
A single line of awk will achieve what you need:
awk -v FIELDWIDTHS="10 10 30 40" '{ if (match($4,",")) { split($4,array,","); for (i in array) { print $1,$2,$3,array[i]; }; } else { print $1,$2,$3,$4 }; }' samp.dat
For ease of reading the code is:
{
    if (match($4,",")) {
        split($4,array,",")
        for (i in array) {
            print $1,$2,$3,array[i]
        }
    } else {
        print $1,$2,$3,$4
    }
}
Testing with the sample data you supplied gives:
ABC 123 xyz. 5171-5261
ABC 123 xyz. 51617
ABC. 1234. Xxy. 81651-61761
How it works:
awk reads your file one line at a time.
The FIELDWIDTHS directive allows us to reference each column as $1,$2...
Now that we have our columns we can look for a comma in the fourth field with match($4,",").
If we find one we make an array of the values in the fourth field that are separated by commas with split($4,array,",").
Then we loop through this array and print multiple lines of output, one for each element of the array.
If the fourth field has no comma the else clause prints a single line.
This process repeats for each line in your fixed width file.
NOTE:
awk associative arrays are not guaranteed to preserve the order of your data.
This means that your output might come out as
ABC 123 xyz. 51617
ABC 123 xyz. 5171-5261
ABC. 1234. Xxy. 81651-61761
i.e. 5171-5261,51617 in the input data produced a line from the second value before the first.
If the ordering is important to you then you can use the code below that makes a csv from your input data first, then produces the output preserving the order.
awk -v FIELDWIDTHS="10 10 30 40" '{print $1,$2,$3,$4}' OFS=',' samp.data > samp.csv
awk -F',' '{ for (i=4; i<=NF; i++) { print $1,$2,$3,$i } }' samp.csv
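If you would rather not use awk at all, the same fan-out can be sketched in Python; this is illustrative only and assumes the 10/10/30/40 column widths and the samp.dat file name used above:

# Slice each fixed-width record into its four columns and emit one output
# line per comma-separated value found in Col4 (input order is preserved).
with open("samp.dat") as f:
    for line in f:
        col1 = line[0:10].strip()
        col2 = line[10:20].strip()
        col3 = line[20:50].strip()
        col4 = line[50:90].strip()
        for value in col4.split(","):
            print(col1, col2, col3, value)

Because it walks the comma-separated values left to right, this also preserves their original order without the extra csv pass.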

How to use the function "table:get" (table extension) when 2 keys are required?

I have a .txt file with 3 columns: ID-polygon-1, ID-polygon-2 and distance.
When I import my file into NetLogo, I obtain 3 lists [[list1][list2][list3]] which correspond to the 3 columns.
I used table:from-list list to create a table with the content of the 3 lists.
I obtain {{table: [[1 1] [67 518] [815 127]]}} (the table displays the first two lines of my dataset).
For example, I would like to get the value of distance (list3) between ID-polygon-1 = 1 (list1) and ID-polygon-2 = 67 (list2), that is, 815.
How can I use table:get table key when I need 2 keys (ID-polygon-1 and ID-polygon-2)?
Thanks very much for your help.
Using table:from-list will not help you there: it expects "a list of two element lists, or pairs" where "the first element in the pair is the key and the second element is the value." That's not what you have in your original list.
Furthermore, NetLogo tables (and associative arrays in general) cannot have two keys. They are always just key-value pairs. Nothing prevents the value from being another table, however, and in your case, that is what you need: a table of tables!
There is no primitive to build that directly, however. You will need to build it yourself:
extensions [ table ]
globals [ t ]
to setup
  let lists [
    [ 1 1 ]     ; ID-polygon-1 column
    [ 67 518 ]  ; ID-polygon-2 column
    [ 815 127 ] ; distance column
  ]
  set t table:make
  foreach n-values length first lists [ ? ] [
    let id1 item ? (item 0 lists)
    let id2 item ? (item 1 lists)
    let dist item ? (item 2 lists)
    if not table:has-key? t id1 [
      table:put t id1 table:make
    ]
    table:put (table:get t id1) id2 dist
  ]
end
Here is what you get when you print the resulting table:
{{table: [[1 {{table: [[67 815] [518 127]]}}]]}}
And here is a small reporter to make it convenient to get a distance from the table:
to-report get-dist [ id1 id2 ]
  report table:get (table:get t id1) id2
end
Using get-dist 1 67 will give the 815 result you were looking for.

Doing a complex join on files

I have files (~1k) that look (basically) like this:
NAME1.txt
NAME ATTR VALUE
NAME1 x 1
NAME1 y 2
...
NAME2.txt
NAME ATTR VALUE
NAME2 x 19
NAME2 y 23
...
Where the ATTR column is the same in every file and the NAME column is just some version of the filename. I would like to combine them into 1 file that looks like:
All_data.txt
ATTR NAME1_VALUE NAME2_VALUE NAME3_VALUE ...
X 1 19 ...
y 2 23 ...
...
Is there a simple way to do this with just command-line utilities, or will I have to resort to writing a script?
Thanks
You need to write a script.
gawk is the obvious candidate.
You could build an associative array as you read the records, using FILENAME as the key and
ATTR " " VALUE
as the value.
Then create your output in an END block.
gawk can process all the txt files together if you pass *.txt as the filename argument.
It's a bit optimistic to expect there to be a ready-made command to do exactly what you want.
Very few commands join data horizontally.
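Purely as an illustration of the same idea (not the gawk version described above), here is a Python sketch that keys rows by ATTR and adds one VALUE column per input file; the NAME*.txt glob pattern and the NA placeholder for missing attributes are assumptions:

import glob

columns = {}  # file path -> {attr: value}
attrs = []    # attribute order as first seen
for path in sorted(glob.glob("NAME*.txt")):
    with open(path) as f:
        next(f)  # skip the "NAME ATTR VALUE" header line
        col = {}
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue  # skip blank or malformed lines
            name, attr, value = parts
            col[attr] = value
            if attr not in attrs:
                attrs.append(attr)
    columns[path] = col

print("ATTR", *(path.replace(".txt", "") + "_VALUE" for path in columns))
for attr in attrs:
    print(attr, *(columns[path].get(attr, "NA") for path in columns))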
