I need to count the number of lines in each block and the number of blocks so I can read the file properly afterwards. Can anybody suggest a sample piece of code in Fortran?
My input file goes like this:
# Section 1 at 50% (Name of the block and its number and value)
1 2 3 (Three numbers per line, with a random number of lines)
...
1 2 3
# Section 2 at 100% (And then again Name of the block)
1 2 3...
and so on.
The code is below. It works fine with one set of data, but when it meets "#" again it just stops, providing data about only the one section. It cannot jump to the next section:
    integer IS, NOSEC, count
    double precision SPAN
    character(LEN=100) :: NAME, NAME2, AT
    real whatever
101 read (10, *, iostat=ios) NAME, NAME2, IS, AT, SPAN
    if (ios /= 0) go to 200
    write (6, *) IS, SPAN
    count = 0
102 read (10, *, iostat=ios) whatever
    if (ios /= 0) go to 101
    count = count + 1
    write (6, *) whatever
    go to 102
200 write (6, *) 'Section span =', SPAN
So the first loop (101) is supposed to read the parameters of the block, and the second (102) counts the number of lines in the block, with count as the only value that is actually needed. However, when after 102 it is supposed to jump back to 101 to start a new block, it just goes to 200 instead (printing the results of the operation), which means it couldn't read the data about the second block.
Let's say your file contains two valid types of lines:
Block headers, which begin with '#', and
Data lines which begin with a digit 0 through 9
Let's add further conditions:
Leading whitespace is ignored,
Lines which don't match the first two patterns are considered comments and are ignored
Comment lines do not terminate a block; blocks are only terminated when a new block is found or the end of the file is reached,
Data lines must follow a block header (the first non-comment line in a file must be a block header),
Blocks may be empty, and
Files may contain no blocks
You want to know the number of blocks and how many data lines are in each block but you don't know how many blocks there might be. A simple dynamic data structure will help with record-keeping. The number of blocks may be counted with just an integer, but a singly-linked list with nodes containing a block ID, a data line count, and a pointer to the next node will gracefully handle an arbitrarily large blob of data. Create a head node with ID = 0, a data line count of 0, and the pointer nullify()'d.
The Fortran Wiki has a pile of references on singly-linked lists: http://fortranwiki.org/fortran/show/Linked+list
Since the parsing is simple (e.g. no backtracking), you can process each line as it is read. Iterate over the lines in the file, use adjustl() to dispose of leading whitespace, then check the first two characters: if they are '# ', increment your block counter by one, add a new node to the list, set its ID to the value of the block counter, and process the next line.
Aside: I have a simple character function called munch() which is just trim(adjustl()). Great for stripping whitespace off both ends of a string. It doesn't quite act like Perl's chop() or chomp() and Fortran's trim() is more of an rtrim() so munch() was the next best name.
If the line doesn't match a block header, check if the first character is a digit; index('0123456789', line(1:1)) is greater than zero if the first character of line is a digit, otherwise it returns 0. Increment the data line count in the head node of the linked list and go on to process the next line.
Note that if the block count is zero, this is an error condition; write out a friendly "Data line seen before block header" error message with the last line read and (ideally) the line number in the file. It takes a little more effort but it's worth it from the user's standpoint, especially if you're the main user.
Otherwise if the line isn't a block header or a data line, process the next line.
Eventually you'll hit the end of the file and you'll be left with the block counter and a linked list that has at least one node. Depending on how you want to use this data later, you can dynamically allocate an array of integers the length of the block counter, then transfer the data line count from the linked list to the array. Then you can deallocate the linked list and get direct access to the data line count for any block because the block index matches the array index.
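Here's a minimal sketch of that record-keeping in Python rather than Fortran, purely to show the control flow; the function name and file handling are my own, and a plain Python list stands in for the linked list (in Fortran you'd allocate nodes as you go and copy to an array at the end, as described above):

def count_blocks(filename):
    """Count block headers and the number of data lines in each block."""
    block_count = 0
    line_counts = []                        # stands in for the linked list
    with open(filename) as f:
        for line_no, raw in enumerate(f, start=1):
            line = raw.lstrip()             # like adjustl(): drop leading whitespace
            if line.startswith('#'):        # block header: start a new block
                block_count += 1
                line_counts.append(0)
            elif line and line[0] in '0123456789':   # data line
                if block_count == 0:
                    raise ValueError(f"Data line seen before block header "
                                     f"at line {line_no}: {raw.rstrip()!r}")
                line_counts[-1] += 1        # count it against the newest block
            # anything else is a comment; ignore it and read on
    return block_count, line_counts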
I use a similar technique for reading arbitrarily long lists of data. The singly-linked list is extremely simple to code and it avoids the irritation of having to reallocate and expand a dynamic array. But once the amount of data is known, I carve out a dynamic array the exact size I need and copy the data from the linked list so I can have fast access to the data instead of needing to walk the list all the time.
Since Fortran doesn't have a standard library worth mentioning, I also use a variant of this technique with an insertion sort to simultaneously read and sort data.
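For instance, a tiny Python sketch of that read-and-sort variant, with bisect doing the insertion work that a hand-rolled linked-list insertion sort would do in Fortran (the file name is made up):

import bisect

values = []                          # kept sorted as we read
with open("data.txt") as f:          # hypothetical input file, one number per line
    for line in f:
        line = line.strip()
        if line:
            bisect.insort(values, float(line))   # insert each value in sorted position
print(values)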
So sorry, no Fortran code, but enough to get you started. Defining your file format is key; once you do that, the parser almost writes itself. It also makes you think about exceptional conditions: data before a block header, how you want to treat whitespace and unrecognized input, etc. Having this clearly written down is incredibly helpful if you're planning on sharing data; the Fortran world is littered with poorly-documented custom data file formats. Please don't add to the wreckage...
Finally, if you're really ambitious/insane, you could write this as a recursive routine and make your functional programming friends' heads explode. :)
I asked over at the English Stack Exchange, "What is the English word with the longest single definition?" The best answer they could give is that I would need a program that could figure out the longest entry in a (text) file listing dictionary definitions, by counting the number of characters or words in a given entry, and then provide a list of the longest entries. I also asked at Superuser but they couldn't come up with an answer either, so I decided to give it a shot here.
I managed to find a dictionary file which converted to text has the following format:
a /a/ indefinite article (an before a vowel) 1 any, some, one (have a cookie). 2 one single thing (there’s not a store for miles). 3 per, for each (take this twice a day).
aardvark /ard-vark/ n an African mammal with a long snout that feeds on ants.
abacus /a-ba-kus, a-ba-kus/ n a counting frame with beads.
As you can see, each definition comes after the pronunciation (enclosed by slashes), and then either:
1) ends with a period, or
2) ends before an example (enclosed by parentheses), or
3) follows a number and ends with a period or before an example, when a word has multiple definitions.
What I would need, then, is a function or program that can distinguish each definition (including treating multiple definitions of a single word as separate ones), then count the number of characters and/or words within (ignoring the examples in parentheses, since those are not part of the proper definition), and finally provide a list of the longest definitions (I don't think I would need more than, say, a top 20 or so to compare). If the file format is an issue, I can convert the file to PDF, EPUB, etc. with no problem. And ideally I would want to be able to choose between counting length by characters and by words, if possible.
How should I go about doing this? I have a little experience from programming classes I took a long time ago, but it's better to assume I know close to nothing about programming at all.
Thanks in advance.
I'm not going to write the whole program for you, but I'll help think the problem through. Pick the programming language you're most familiar with from long ago, and give it a whack. When you run into problems, come back and ask for help.
I'd chop this task up into a bunch of subproblems:
Read the dictionary file from the filesystem.
Chunk the file up into discrete entries. If it's a text file like you show, most programming languages have a facility to easily iterate linewise through a file (i.e. take a line ending character or character sequence as the separator).
Filter bad entries: in your example, your lines appear separated by an empty line. As you iterate, you'll just drop those.
Use your human observation and judgement to look for strong patterns in the data that you can communicate as firm rules -- this is one of the central activities of programming. You've already started identifying some patterns in your question, i.e.
All entries have a preamble with the pronunciation and part of speech.
A multiple definition entry will be interspersed with lone numerals.
Otherwise, a single definition just follows the preamble.
Write the rules you've invented into code. It'll go something like this: First find a way to lop off the word itself and the preamble. With the remainder, identify multiple-def entries by presence of lone numerals or whatever; if it's not, treat it as single-def.
For each entry, iterate over each of the one-or-more definitions you've identified.
Write a function that will count a definition either word-wise or character-wise. If word-wise, you'll probably tokenize based on whitespace. Counting the length of a string character-wise is trivial in most programming languages. Why not implement both!
Keep a data structure in memory as you iterate the file to track "longest". For each definition in each entry, after you apply the length calculation, you'll compare against the previous longest entry. If the new one is longer, you'll record this new leading word and its word count in your data structure. Comparing 'greater than' and storing a variable are fundamental in most programming languages, so while this is the real meat of your program, this shouldn't be hard.
Implement some way to display your results once iteration is done. This may be as simple as a print statement.
Finally, write the glue code that lets you execute the program easily. A program like this could easily be a command-line tool that takes one or two arguments (the path to the file to be analyzed, perhaps you pass your desired counting method 'character|word' as an argument too, since you implemented both). Different languages vary in how easy it is to create an executable to run from the command line, but most support it, so it's a good option for tasks like this.
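Putting those subproblems together, here's a rough Python sketch to check your own attempt against; the file name and argument handling are placeholders, and the preamble regex assumes a one-word part-of-speech tag after the pronunciation, so entries like "indefinite article" will need refining:

import re
import sys

def longest_definitions(path, mode="word", top=20):
    results = []                              # (length, headword, definition)
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue                      # filter bad entries: blank separator lines
            # lop off the headword, /pronunciation/, and part of speech
            m = re.match(r"^(\S+)\s+/[^/]*/\s*(?:\w+\s+)?(.*)$", line)
            if not m:
                continue                      # unrecognized line: skip it
            word, body = m.group(1), m.group(2)
            body = re.sub(r"\([^)]*\)", "", body)   # ignore parenthesized examples
            # lone numerals split a multi-definition entry into senses
            for sense in re.split(r"\s\d+\s", " " + body):
                sense = sense.strip(" .")
                if not sense:
                    continue
                length = len(sense.split()) if mode == "word" else len(sense)
                results.append((length, word, sense))
    results.sort(reverse=True)
    return results[:top]

if __name__ == "__main__":
    # e.g.: python longest_def.py dictionary.txt word
    mode = sys.argv[2] if len(sys.argv) > 2 else "word"
    for length, word, sense in longest_definitions(sys.argv[1], mode=mode):
        print(length, word, "-", sense)

Each numbered step in the list above maps onto a few lines here, which is a good sign the decomposition was right.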
Alright, I've been given an assignment that requires me to take a .txt file of varying symbols in rows and columns that would look like this:
..........00
...0....0000
...000000000
0000.....000
............
..#########.
..#...#####.
......#####.
...00000....
and, using command arguments to specify a row and column, requires me to select a symbol and replace that symbol with an asterisk. The problem I have with this is that it then requires me to recurse up, down, left, and right through any of the same symbol and change those into asterisks too.
As I understand it, if I were to enter "1 2" into my argument list, it would change the above text into:
**********00
***0....0000
***000000000
0000.....000
............
..#########.
..#...#####.
......#####.
...00000....
While selecting the specified character itself isn't a problem, how do I have any similar, adjacent symbols change, and then the ones next to those? I have looked around but can't find any information, and as my teacher has had different subs for the last 3 weeks, I haven't had a chance to clarify my questions with them. I've been told that recursion can be used, but my actual experience using recursion is limited. Any suggestions or links I can follow to get a better idea of what to do? Would it make sense to add a recursive method that takes the coordinates given, adds and subtracts from the row and column respectively to check if the symbol is the same, and repeats?
Load it in char by char, row by row, into a 2D array of characters. That'll make it a lot easier to move up, down, left, and right; all you need to do is change one of the array indexes.
You can also take advantage of recursion. Make a function that changes all adjacent matching characters, and then call that same function on all adjacent matching characters.
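For instance, a minimal sketch in Python (the language and the argument order are my assumptions; the recursion itself translates directly to whatever language your class uses):

import sys

def flood_fill(grid, row, col, target):
    if target == '*':
        return                                # already filled; avoids infinite recursion
    # stop at the edges or where the symbol doesn't match
    if row < 0 or row >= len(grid) or col < 0 or col >= len(grid[row]):
        return
    if grid[row][col] != target:
        return
    grid[row][col] = '*'                      # replace it, which also marks it visited
    flood_fill(grid, row - 1, col, target)    # up
    flood_fill(grid, row + 1, col, target)    # down
    flood_fill(grid, row, col - 1, target)    # left
    flood_fill(grid, row, col + 1, target)    # right

if __name__ == "__main__":
    row, col = int(sys.argv[1]), int(sys.argv[2])
    with open(sys.argv[3]) as f:              # hypothetical: grid file as third argument
        grid = [list(line.rstrip("\n")) for line in f]
    flood_fill(grid, row, col, grid[row][col])
    print("\n".join("".join(r) for r in grid))

The key is that replacing the symbol before recursing doubles as a "visited" mark, so the recursion always terminates; for very large grids you'd eventually hit the recursion limit and switch to an explicit stack, but for an exercise this size the recursive version is the point.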
I am using Robot Framework with Selenium and Python. I need help grabbing a certain part of a string without using an external library. Let's say the text is "Your range price for your product is from $0- 400". I want to be able to get the 400 and paste it somewhere else in the test. The number isn't always 400; sometimes it may be 55 or something different. So I think I would need a Get Text that starts from the dollar sign, counts two spaces, and takes whatever is left; or I could get the first number and add 10 (like in this example it's 0, so I want it to paste 10). Please let me know!
"Fetch From Right" should cover that. You just have to identify the stop point, which in your example looks like it would be the hyphen between the two number values.
For example, to extract the last five digits of the string ABC12345, you would first create a variable to assign the text to:
${number}= Get Text (defined location of text, minus parentheses)
Then use this keyword to retrieve the remainder of the string after your identified stop point (C).
${desiredNumber}= Fetch From Right ${number} C
This is essentially creating a new variable, which is defined as the extracted values from the original variable after that point.
Hopefully this helps.
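In plain Python terms, Fetch From Right is just everything after the last occurrence of the marker, so with the example values above:

number = "ABC12345"
desired_number = number[number.rfind("C") + 1:]   # everything after the last "C"
print(desired_number)                             # 12345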
You could use the built-in keyword Evaluate to drop down to the underlying Python:
${my_string}=    Get Text    <your-identifier-here>
${result}=    Evaluate    $my_string[$my_string.rfind('-') + 1:].strip()
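Note the $my_string form: inside Evaluate it hands the expression the actual Python string object, whereas ${my_string} would paste the text in unquoted and break the expression. The expression itself boils down to this Python, using the string from the question:

my_string = "Your range price for your product is from $0- 400"
result = my_string[my_string.rfind('-') + 1:].strip()   # slice after the last hyphen, drop the space
print(result)                                           # 400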
Also, please have a look at whether one of the standard libraries that ship with Robot Framework already covers this: http://robotframework.org/robotframework/
I have several data frames which start with a bit of text. Sometimes the information I need starts at row 11 and sometimes at row 16, for instance; it changes. All the data frames have in common that the useful information starts after a row with the title "location".
I'd like to make a loop to delete all the rows in the data frame above the useful information (including the row with "location").
I'm guessing that you want something like this:
readfun <- function(fn, n=-1, target="location", ...) {
  r <- readLines(fn, n=n)            # pass 1: read the raw lines (up to n of them)
  locline <- grep(target, r)[1]      # index of the first line mentioning the target
  read.table(fn, skip=locline, ...)  # pass 2: re-read, skipping through that line
}
# e.g. dat <- readfun("myfile.txt", n=50)   # hypothetical file; search only the first 50 lines
This is fairly inefficient because it reads the data file twice (once as raw character strings and once as a data frame), but it should work reasonably well if your files are not too big. (#MrFlick points out in the comments that if you have a reasonable upper bound on how far into the file your target will occur, you can set n so that you don't have to read the whole file just to search for the target.)
I don't know any other details of your files, but it might be safer to use "^location" to identify a line that begins with that string, or some other more specific target ...
I have some products which have 2d GS1 bar codes on them. Most have the format 01.17.10 which is GTIN.Expiry Date.Lot Number.
This makes sense as 01 and 17 are fixed length, so can be parsed easily, just by splitting the string in the appropriate place.
However, I also have some in the format 01.10.17.21 (GTIN.Lot.Expiry.Serial Number) which doesn't make sense because Lot and Serial number are variable length, meaning I cannot use position to decode the various elements. Also, I cannot search for the AIs as they could legitimately appear in the data.
It seems that I've no way of reliably decoding this format. Am I missing something?
Thanks!
According to the GS1 website, "More than one AI can be carried in one bar code. When this happens, AIs with a fixed length data content (e.g., SSCC has a fixed length of 18 digits) are placed at the beginning and AI with variable lengths are placed at the end. If more than one variable length AI is placed in one bar code, then a special "function" character is used to tell the scanner system when one ends and the other one starts."
So it looks like they intend for you to order your AIs with the fixed-width identifiers first, then separate the variable-width fields with a function character, which appears to be FNC1. How FNC1 is encoded will depend on the barcode symbology you are using; it may be different between DataMatrix, Code 128, and QR Code, for example.
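As a rough sketch of what that decoding looks like, assuming the scanner delivers FNC1 as the ASCII <GS> character (0x1D), which is the common convention, and handling only the two-digit AIs from the question (real AIs run two to four digits):

GS = "\x1d"                       # how scanners commonly report FNC1 inside the data

FIXED = {"01": 14, "17": 6}       # AI -> fixed data length (GTIN, expiry)
VARIABLE = {"10", "21"}           # lot and serial: variable length, FNC1-terminated

def parse_gs1(data):
    fields = {}
    i = 0
    while i < len(data):
        ai = data[i:i + 2]
        i += 2
        if ai in FIXED:
            n = FIXED[ai]
            fields[ai] = data[i:i + n]
            i += n
        elif ai in VARIABLE:
            end = data.find(GS, i)
            if end == -1:
                end = len(data)    # the last field may omit the terminator
            fields[ai] = data[i:end]
            i = end + 1
        else:
            raise ValueError(f"Unrecognized AI {ai!r} at position {i - 2}")
    return fields

# hypothetical data: GTIN, expiry, variable-length lot ended by FNC1, then serial
print(parse_gs1("0101234567890128" + "17210331" + "10ABC123" + GS + "21SER456"))

This prints {'01': '01234567890128', '17': '210331', '10': 'ABC123', '21': 'SER456'}: the fixed-length fields are consumed by length, and the variable-length ones by searching for the separator, which is exactly the scheme the specification describes.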