I am pondering the purpose of, and best practice for, using the Resource and Library folders in Robot Framework.
Below I have formulated some statements which serve to illustrate my questions. (Abbreviations used: KW = Keyword, RF = Robot Framework, TS = TestSuite.)
Statements/Questions:
Should every KW that is designed to be shared among TSs and written in RF syntax be put inside a .resource file in the Resource folder?
Should every KW written in Python be put (as a method inside a .py file) in the Library folder?
I.e. is the distinction line between the Resource and Library folders drawn based on the syntax used when writing the KW (RF KWs go into the Resource folder and Python KWs go into the Library folder)?
Or should the distinction line rather be drawn by closeness to the test rig and system under test, i.e. high- vs. low-level keywords, where low-level keywords are said to interact with the system under test? In that case, could you place Python KWs (methods) in the Resource folder?
My take: yes on everything, even on the last paragraph with the "Or". Everything up to it was a question about the content/syntax of a file. And if your Python (library) file has KWs that make contextual sense in a folder with other, similar RF (resource) files, place it there.
Remember two things. For Robot Framework, the distinction between Resource and Library is mainly which syntax it expects and how the target's resources are imported; it doesn't enforce any rigid expectations about their purpose.
E.g. nothing stops you from having a high-level keyword developed in Python like
def login_with_user_and_do_complex_compound_action(user, password, other_arg):
, nor from creating a relatively low-level KW written in Robot Framework syntax:
Keyword For Complex Math That Should Better Be In Python
    [Arguments]    ${complex_number}    ${transformer_function}    ${other_arg}
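On the import side the difference is just the setting used. A minimal sketch of a suite's settings table (the file names here are made up):
*** Settings ***
Resource    ../resources/shared_keywords.resource    # RF-syntax keywords
Library     ../libraries/CustomHelpers.py            # Python keywords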
The other thing is that Robot Framework is the tool(set) with which you construct your automated testing framework for the SUT. By your framework I mean the structure and organization of suites and tests, their interconnections and hierarchy, and the "helpers" for their operation: the aforementioned resource (RF) and library (py) files.
As long as this framework is logically sound, has established conventions, and is easy to grasp and follow, you can have any structure that suits you.
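For example, a layout along these lines would meet those criteria (purely illustrative; RF does not mandate it):
tests/         suites and test cases
resources/     shared RF-syntax keywords (.resource files)
libraries/     shared Python keywords (.py files)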
I am a beginner to programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who may not know OpenFOAM: it is a program built in C++, but it does not require C++ programming, just properly defining the variables in the required files.
In one of my first tries I made a very simple model, but since I wanted to check it thoroughly, I set it to 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took approximately a day (by which I mean I'd like to find a solution rather than repeat the simulation).
I executed the solver reactingFoam using 4 processors in parallel with
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of error.
The problem is that reconstructPar works perfectly, but when I try to view the results with paraFoam, this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that this can happen when files that are not supposed to be empty are empty, but I have not found that problem.
My 'reactions' file has not been modified from the tutorial and has always worked.
edit:
Sorry for the vague question. I have modified it a bit.
A typical OpenFOAM dictionary file always starts with a header entry named FoamFile. An example from a typical system/controlDict file can be seen below:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      controlDict;
}
When constructing a dictionary, if this header is absent, OpenFOAM stops its operation by raising the error message you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The benefit of the header is presumably to support OpenFOAM's abstraction mechanisms (each file declares its own class, location, and object name), which would be difficult to realize otherwise.
As mentioned in the comments, adding the missing header almost always solves this problem.
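For the constant/reactions file from the question, the header would presumably look like this (class dictionary is the safe default; exact entries may vary between OpenFOAM versions):
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "constant";
    object      reactions;
}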
I am using gulp for CSS and JS processing. Sometimes I miss the good old laziness of the Unix make command:
only generate transformed files (whatever the transformation, e.g. compilation) from original files that have actually changed (based on timestamps).
This is true from stage 1 to 2 (.cpp -> .o), from stage 2 to 3 (linking or other stuff), and onward, whatever your dependency graph gives...
Make is not limited to source code: you can do image manipulation in several steps (efficiently ‘lazy’ generation of downscaled thumbs, for example) or much else. All of it is based on the fairly simple rule: „is at least one of the source files newer with respect to the current output file(s)?“
Unlike with gulp, every step generates (more or less temporary) files, not a continuous pipe.
Is there a way to get the same kind of laziness in gulp, e.g. when generating CSS?
only transform those (less|sass|stylus) files ➝ css if something changed (in that respective file)
same for adding browser prefixes, concat, minify
Admittedly, beyond the first 1 or 2 steps the output is most likely already a single stream, so any change means ‘touched’. Still, when playing with minify options, for example, I'd rather be lazy about the early transpile, prefix, and concat stages (drawing prior results from a temp file). The same goes for the JavaScript side (TypeScript, ...).
lazypipe and gulp-cache sound tempting but are something else, if I understand correctly. .watch() is also only a partial answer, covering just the very first stage.
Is there a more generic approach?
If you're set on using gulp, then the gulp-cached and gulp-remember plugins would seem to be the way to do it.
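A minimal sketch of the usual pattern (the task, cache, and path names are made up): gulp-cached drops files whose contents have not changed since the last run, and gulp-remember adds them back downstream so that stream-wide steps such as concat still see the complete set.
var gulp = require('gulp');
var cached = require('gulp-cached');
var remember = require('gulp-remember');
var less = require('gulp-less');
var concat = require('gulp-concat');

gulp.task('css', function () {
  return gulp.src('src/styles/**/*.less')
    .pipe(cached('styles'))      // pass through only files whose contents changed
    .pipe(less())                // the expensive transform runs only on those
    .pipe(remember('styles'))    // re-add the unchanged, previously compiled files
    .pipe(concat('site.css'))
    .pipe(gulp.dest('dist'));
});
Note this is content-based rather than timestamp-based laziness, and it only pays off within a long-running watch process, since the cache lives in memory.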
If I begin with a wholly Japanese sentence and run it through MeCab, I get something like this:
$ echo "吾輩は猫である" | mecab
吾輩 名詞,代名詞,一般,*,*,*,吾輩,ワガハイ,ワガハイ
は 助詞,係助詞,*,*,*,*,は,ハ,ワ
猫 名詞,一般,*,*,*,*,猫,ネコ,ネコ
で 助動詞,*,*,*,特殊・ダ,連用形,だ,デ,デ
ある 助動詞,*,*,*,五段・ラ行アル,基本形,ある,アル,アル
EOS
If I smash together everything I get from the last column, I get "ワガハイワネコデアル", which I can then feed into a speech synthesis program and get output. Said program, however, doesn't handle English words.
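The smashing-together step itself is mechanical; roughly like this in Python, assuming the default IPA dictionary's feature layout, where the last of the nine fields is the pronunciation:
import subprocess

def pronunciation(text):
    out = subprocess.run(["mecab"], input=text, capture_output=True, text=True).stdout
    parts = []
    for line in out.splitlines():
        if line == "EOS":
            continue
        surface, _, features = line.partition("\t")
        fields = features.split(",")
        # Field 9 (index 8) is the pronunciation; "*" or a short field
        # list means the dictionary has no reading for this token.
        if len(fields) > 8 and fields[8] != "*":
            parts.append(fields[8])
        else:
            parts.append(surface)
    return "".join(parts)

print(pronunciation("吾輩は猫である"))  # ワガハイワネコデアル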
If I throw English into MeCab, it manages to tokenise it (probably naively, at the spaces), but gives no readings:
$ echo "I am a cat" | mecab
I 名詞,固有名詞,組織,*,*,*,*
am 名詞,一般,*,*,*,*,*
a 名詞,一般,*,*,*,*,*
cat 名詞,固有名詞,組織,*,*,*,*
EOS
I want to get readings for these as well, even if they're not perfect, so that I can get something along the lines of "アイアムアキャット".
I have already scoured the web for solutions, and while I found a bunch of websites offering transliteration that appears adequate, I can't find any way to do it in my own code. In a couple of cases I emailed the site authors and have had no response after waiting a few weeks. (Just how far behind on their inboxes are these people?)
There are a number of directions I could go, but I have hit dead ends on all of them so far, so this is my compound question:
MeCab takes custom dictionaries. Is there a custom dictionary which fills in the English knowledge somewhat?
Is there some other library or tool that can take English and spit out Katakana?
Is there some library or tool that can take IPA (International Phonetic Alphabet) and spit out Katakana? (I know how to get from English to IPA.)
As an aside, I find that the software "VOICEROID" can speak English text (poorly, but adequately for my purposes). This software uses MeCab too (or at least its DLL and dictionary files are included in the install). It also uses another library, CaboCha, which, as far as I can tell from running it, does the exact same thing as MeCab. It could be using custom dictionaries for either of these two libraries to do the job, or the code could be in the proprietary AITalk library they are using. More research is needed, and I haven't figured out how to run either tool against their dictionaries to test it directly.
For me, ack is essential kit (it's aliased to a and I use it a million times a day). Mostly it has everything I need, so I'm figuring that this behavior is covered and I just can't find it.
I'd love to be able to restrict it to specific kinds of files using a type. The problem is that these files have a full filename rather than an extension. For instance, I'd like to restrict it to build files for buildr so I can search them with --buildr (similar would apply for mvn POMs). I have the following defined in my .ackrc:
--type-set=buildr=buildfile,.rake
The problem is that 'buildfile' is the entire filename, not an extension, and I'd like ack to match completely on this name. However, if I look at the types bound to 'buildr', it shows that .buildfile is treated as an extension rather than as the whole filename:
--[no]buildr .buildfile .rake
The ability to restrict to a particular filename would be really useful for me, as there are numerous XML use cases (e.g. Ant's build.xml or Maven's pom.xml) it would be perfect for. I do see that binaries, Makefiles, and Rakefiles have special type configuration, and maybe that's the way to go. I'd really like to be able to do it within ack, if possible, before resorting to custom functions. Does anyone know if this is possible?
No, you cannot do it. ack 1.x only uses extensions for detecting file types. ack 2.0 will have much more flexible capabilities, where you'll be able to do stuff like:
# There are four different ways to match
# is: Match the filename exactly
# ext: Match the extension of the filename exactly
# match: Match the filename against a Perl regular expression
# firstlinematch: Match the first 80 characters of the first line
# of text against a Perl regular expression. This is only for
# the --type-add option.
--type-add=make:ext:mk
--type-add=make:ext:mak
--type-add=make:is:makefile
--type-add=make:is:gnumakefile
# Rakefiles http://rake.rubyforge.org/
--type-add=rake:is:Rakefile
# CMake http://www.cmake.org/
--type-add=cmake:is:CMakeLists.txt
--type-add=cmake:ext:cmake
# Perl http://perl.org/
--type-add=perl:ext:pod
--type-add=perl:ext:pl
--type-add=perl:ext:pm
--type-add=perl:firstlinematch:/perl($|\s)/
You can see how development on ack 2.0 is going at https://github.com/petdance/ack2. I'd love to have your help.
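With that syntax, the buildr type from the question would presumably become:
--type-add=buildr:is:buildfile
--type-add=buildr:ext:rake
and the XML cases likewise, e.g. --type-add=mvn:is:pom.xml.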