So I'm trying to work through the Plone 4 book and I copied a buildout file (from page 51) that contains the following lines:
[instance]
recipe = plone.recipe.zope2instance
http-address = 8080
user = admin:admin
verbose-security = on
eggs =
${eggs:main}
${eggs:devtools}
# Test runner. Run: ``bin/test`` to execute all tests
PlonePlonebuildout configuration files[test]
recipe = zc.recipe.testrunner
eggs =
${eggs:test}
defaults = ['--auto-color', '--auto-progress']
# Coverage report generator.
# Run: ``bin/test --coverage=coverage`` # and then: ``bin/coveragereport``
[coverage-report]
recipe = zc.recipe.egg
eggs = z3c.coverage
scripts = coveragereport
arguments = ('parts/test/coverage', 'coverage')
When I try to run bin/buildout I get the following error message:
ParsingError: File contains parsing errors: /Users/Jon/dev/pln42/buildout.cfg
[line 32]: 'PlonePlonebuildout configuration files[test]\n'
I have included some lines that come before and after the line in question so as to provide context. Since I have copied that line directly from the book, I don't know how to change it in order to make the parsing error go away.
(The source code is supposedly available online, but I have tried following the downloading instructions at Packt Publishing and they have not worked.)
Screenshot from my Kindle book:
The line PlonePlonebuildout configuration files[test] is clearly wrong; remove everything before the [test] part of that line:
# Test runner. Run: ``bin/test`` to execute all tests
[test]
recipe = zc.recipe.testrunner
Not sure if it is your formatting on Stack Overflow or in your file, but you are also missing some indentation elsewhere:
[instance]
recipe = plone.recipe.zope2instance
http-address = 8080
user = admin:admin
verbose-security = on
eggs =
    ${eggs:main}
    ${eggs:devtools}
The lines following the eggs = specification should be indented to mark them as the (continuing) value for the eggs parameter. The same goes for the eggs = line in the [test] part:
[test]
recipe = zc.recipe.testrunner
eggs =
    ${eggs:test}
defaults = ['--auto-color', '--auto-progress']
Looks like something's gone wrong in the editing/formatting here.
You can find all of my book's source code here: http://github.com/optilude/optilux
Use the branch selector to select a chapter. You can either read it through the web, or clone/download from github.
I'm new to Snakemake and try to use specific files in a rule, from the directory() output of another rule that clones a git repo.
Currently, this gives me an error Wildcards in input files cannot be determined from output files: 'json_file', and I don't understand why. I have previously worked through the tutorial at https://carpentries-incubator.github.io/workflows-snakemake/index.html.
The difference between my workflow and the tutorial workflow is that I want to create the data I use later in the first step, whereas in the tutorial, the data was already there.
Workflow description in plain text:
Clone a git repository to path {path}
Run a script {script} on every JSON file in the directory {path}/parsed/ in parallel to produce the aggregate result {result}
GIT_PATH = config['git_local_path'] # git/
PARSED_JSON_PATH = f'{GIT_PATH}parsed/'
GIT_URL = config['git_url']
# A single parsed JSON file
PARSED_JSON_FILE = f'{PARSED_JSON_PATH}{{json_file}}.json'
# Build a list of parsed JSON file names
PARSED_JSON_FILE_NAMES = glob_wildcards(PARSED_JSON_FILE).json_file
# All parsed JSON files
ALL_PARSED_JSONS = expand(PARSED_JSON_FILE, json_file=PARSED_JSON_FILE_NAMES)
rule all:
    input: 'result.json'

rule clone_git:
    output: directory(GIT_PATH)
    threads: 1
    conda: f'{ENVS_DIR}git.yml'
    shell: f'git clone --depth 1 {GIT_URL} {{output}}'

rule extract_json:
    input:
        cmd='scripts/extract_json.py',
        json_file=PARSED_JSON_FILE
    output: 'result.json'
    threads: 50
    shell: 'python {input.cmd} {input.json_file} {output}'
Running only clone_git works fine (if I set GIT_PATH as the input of rule all).
Why do I get the error message? Is this because the JSON files don't exist when the workflow is started?
Also - I don't know if this matters - this is a subworkflow used with module.
What you need seems to be a checkpoint rule: the checkpoint is executed first, and only then does Snakemake determine which .json files are present and run your extract/aggregate rules. Here's an adapted example:
I'm struggling to fully understand the file and folder structure you get after cloning your git repo, so I have fallen back to the Snakemake best practice of using resources for downloaded files and results for created files.
You'll need to re-adjust those paths to match your case again:
GIT_PATH = config["git_local_path"] # git/
GIT_URL = config["git_url"]
checkpoint clone_git:
    output:
        git=directory(GIT_PATH),
    threads: 1
    conda:
        f"{ENVS_DIR}git.yml"
    shell:
        f"git clone --depth 1 {GIT_URL} {{output.git}}"
rule extract_json:
    input:
        cmd="scripts/extract_json.py",
        json_file="resources/{file_name}.json",
    output:
        "results/parsed_files/{file_name}.json",
    shell:
        "python {input.cmd} {input.json_file} {output}"
def get_all_json_file_names(wildcards):
    git_dir = checkpoints.clone_git.get(**wildcards).output["git"]
    file_names = glob_wildcards(
        "resources/{file_name}.json"
    ).file_name
    return expand(
        "results/parsed_files/{file_name}.json",
        file_name=file_names,
    )
# This rule has a checkpoint dependency: only after the checkpoint has
# run is this rule executed, which then evaluates the input function to
# determine all JSON files downloaded from the git repo
rule aggregate:
    input:
        get_all_json_file_names
    output:
        "result.json",
    default_target: True
    shell:
        # TODO: Action which combines all JSON files
edit: Moved the expand(...) from rule aggregate into get_all_json_file_names.
I am trying to generate documentation for my code using QDoc. The qdoc command is already on my PATH. But when I try to run the command in the root directory of the project (the qdocconf file also resides in the project root),
qdoc projectname.qdocconf
I get the following error
qt.qdoc: "qdoc can't run; no project set in qdocconf file"
Here is my projectname.qdocconf file.
headers.fileextensions = "*.h *.hpp"
sources.fileextensions = "*.cpp *.qml *.qdoc"
outputdir = Documentation/Code
headerdirs += Code
sourcedirs += . \
Code
exampledirs = .
imagedirs += ./Images/icons \
./Images/logos
I have commented on my class functions as described in the documentation using the format
/*!
* \fn void inlineFunction()
*
* Some info here...
*/
Can you point out what I am doing wrong?
Also, can I create the documentation using QtCreator instead of running the command in terminal?
Well, somehow I figured out that you need to add the following line to your .qdocconf file
project = YourProjectName
which is not present in the minimum qdocconf file presented at https://doc.qt.io/qt-5/qdoc-minimum-qdocconf.html. Even after the above issue was resolved, a number of other issues were encountered, such as:
While compiling: qt.qdoc: No include paths passed to qdoc; guessing reasonable include paths. For this you have to add all the include paths manually; see the sketch after this list. Read: https://bugreports.qt.io/browse/QTBUG-67289
Some comment tags available in Qt Creator, e.g. \return and \param, are not recognized by QDoc
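For the include-path warning, paths can be supplied via the includepaths variable in the .qdocconf file. A sketch, where the actual paths are assumptions that depend on your compiler and Qt installation:

# Hypothetical include paths -- adjust to your environment
includepaths += -I. \
                -ICode \
                -I/usr/include/x86_64-linux-gnu/qt5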
Option 2
Alternatively, consider Doxygen, which is much easier to use and generates documentation with a simple GUI. It also recognizes all the comment tags in Qt Creator.
Update:
The Doxygen plugin for Qt Creator no longer works, as support has lapsed. Use the Doxygen GUI directly.
memtest.py works fine this way:
build/X86/gem5.opt configs/example/memtest.py
But when I pass arguments, it gives a "no such option" error:
build/X86/gem5.opt configs/example/memtest.py --cpu-type=TimingSimpleCPU
gem5 Simulator System. http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.
gem5 compiled Jul 27 2018 14:19:35
gem5 started Sep 17 2018 15:31:03
gem5 executing on 2030044470, pid 5045
command line: build/X86/gem5.opt configs/example/memtest.py --cpu-type=TimingSimpleCPU
Usage: memtest.py [options]
memtest.py: error: no such option: --cpu-type
On the other hand, se.py and fs.py work fine with additional arguments:
build/X86/gem5.opt configs/example/se.py -c /home/prakhar/gem5/tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU
Is there any way to run memtest.py with --cpu-type and --mem-type arguments?
As the error says, there is no such option.
The --cpu-type option is added for se.py and fs.py by the call to
Options.addCommonOptions(parser)
You can add both cpu-type and mem-type manually, by doing something along the lines of
from m5.util import addToPath, fatal
addToPath('../')
from common import CpuConfig
from common import MemConfig
And adding the options to the parser
import optparse

parser = optparse.OptionParser()
# Other parser options
parser.add_option("--cpu-type", type="choice", default="AtomicSimpleCPU",
                  choices=CpuConfig.cpu_names(),
                  help="type of cpu to run with")
parser.add_option("--mem-type", type="choice", default="DDR3_1600_8x8",
                  choices=MemConfig.mem_names(),
                  help="type of memory to use")
The options are then available as options.cpu_type and options.mem_type. You can check the other examples (in configs/example/) to see whether you need to modify anything else for your purposes.
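With both options registered, memtest.py should then accept a command along these lines (a sketch; the option values must match names returned by CpuConfig.cpu_names() and MemConfig.mem_names()):

build/X86/gem5.opt configs/example/memtest.py --cpu-type=TimingSimpleCPU --mem-type=DDR3_1600_8x8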
Well, I was struggling with this problem and then I found this line in gem5/configs/example/memtest.py:
system = System(physmem = SimpleMemory(),cache_line_size = block_size)
If you want to run with any other memory, e.g. DRAMSim2, you can change this line:
system = System(physmem = DRAMSim2(),cache_line_size = block_size)
This will enable running memtest.py with DRAMSim2 as the memory type. You can now run it as:
build/X86/gem5.opt configs/example/memtest.py
Also, to change the cpu-type, you can refer to these lines:
if options.atomic:
    root.system.mem_mode = 'atomic'
else:
    root.system.mem_mode = 'timing'
The default cpu-type is timing; you can change it to atomic by adding --atomic to the command.
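For example, with the stock memtest.py (which already defines the --atomic flag shown above):

build/X86/gem5.opt configs/example/memtest.py --atomic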
The only difference I have found so far: if a script run by app.doScript throws an error, the file and line number of the error are overridden by the file and line number of the app.doScript call.
Are there any other differences I should know about?
Here's sample code that demonstrates the above difference:
First run InDesign:
c:
cd "C:\Program Files\Adobe\Adobe InDesign CS6 Server x64"
InDesignServer.com -port 12345
pause
Next create a batch file to run a script:
c:
cd "C:\Program Files\Adobe\Adobe InDesign CS6 Server x64"
sampleclient -host localhost:12345 -server "C:/doscript_vs_evalfile/call_doScript.jsx"
pause
This is "call_doScript.jsx", which will call app.doScript.
try {
    app.doScript(new File("/c/doscript_vs_evalfile/called_by_doScript.jsx"));
    "Success";
}
catch (e) {
    var sError = "Encountered " + e.name + " #" + e.number + " at line " + e.line + " of file " + e.fileName + "\n" + e.message;
    app.consoleout(sError);
    sError;
}
This is "called_by_doScript.jsx", which is called by the previous script:
app.consoleout("Running called_by_doScript.jsx");
// Produce error
var a = b;
Run the batch file and this is the result:
02/25/13 13:30:03 INFO [javascript] Executing File: C:\doscript_vs_evalfile\call_doScript.jsx
02/25/13 13:30:03 INFO [javascript] Executing File: C:\doscript_vs_evalfile\called_by_doScript.jsx
02/25/13 13:30:03 INFO [script] Running called_by_doScript.jsx
02/25/13 13:30:03 INFO [script] Encountered ReferenceError #2 at line 2 of file /c/doscript_vs_evalfile/call_doScript.jsx
b is undefined
Notice that the error above is incorrect. The error was caused by line 3 of called_by_doScript, not line 2 of call_doScript.
Now modify the scripts to use $.evalFile, and we get this result:
02/25/13 13:32:39 INFO [javascript] Executing File: C:\doscript_vs_evalfile\call_evalFile.jsx
02/25/13 13:32:39 INFO [script] Running called_by_evalFile.jsx
02/25/13 13:32:39 INFO [script] Encountered ReferenceError #2 at line 3 of file /c/doscript_vs_evalfile/called_by_evalFile.jsx
b is undefined
Notice that the error is now reported at the correct location.
Edit:
I found sparse documentation. It doesn't really answer my question, but it does describe different optional parameters.
doScript: Adobe InDesign CS6 Scripting Guide: JavaScript
See page 16, "Using the doScript Method"
evalFile: Javascript Tools Guide: Adobe Creative Suite 5
See page 219
$.evalFile is an ExtendScript feature while app.doScript is implemented by InDesign.
$.evalFile does
maintain the $.stack
consider $.includePath
work in other target applications
app.doScript can
pass arguments
change the language, e.g. AppleScript
use #targetengine to address other sessions
modify the undo/transaction mode as far as supported
but ...
nested doScript calls overwrite arguments
in a complicated setup I had trouble debugging after passing more than 12 arguments
single-stepping across doScript is troublesome
Also, as you found, error handling differs. Keep an eye on exceptions ...
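For completeness, argument passing with app.doScript looks roughly like this; a sketch based on the scripting guide, where the file path and the values are hypothetical:

// Caller: hand two values to the called script
app.doScript(
    new File("/c/doscript_vs_evalfile/called_by_doScript.jsx"),
    ScriptLanguage.JAVASCRIPT,
    ["hello", 42]
);

// Inside called_by_doScript.jsx the values arrive in the predefined
// `arguments` array:
var first = arguments[0];  // "hello"
var second = arguments[1]; // 42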
If I provide a variable with an embedded space in the environment as follows:
environment =
CPPFLAGS="-D_GNU_SOURCE -I${openssl:location}/include"
I get this error:
ValueError: dictionary update sequence element #1 has length 1; 2 is required
Is this a bug? Is there a workaround?
It's a shortcoming in zc.recipe.cmmi; it cannot handle environment variable values that contain spaces. There is a patch available in the recipe's bugtracker.
I am not currently aware of a workaround other than applying the patch. You can apply the patch to existing eggs using the collective.recipe.patch recipe, which should work in this case too (untried):
[buildout]
parts =
patch-z.r.cmmi
yourcmmipart
[patch-z.r.cmmi]
recipe = collective.recipe.patch
egg = zc.recipe.cmmi <= 1.3.4
patch = patches/environ_section_trunk_r101308.patch
This assumes you have a patches subdirectory containing the patch downloaded from the bug report. The part needs to be listed before your cmmi part so that it is executed first (or you can fabricate a dependency, as sketched below).
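A fabricated dependency could look like this (untried sketch): buildout installs a part before any part that references its options via ${part:option}, so the cmmi part can pull in the patch part explicitly. The option name patch-dependency is hypothetical; the recipe should simply ignore it:

[yourcmmipart]
recipe = zc.recipe.cmmi
...
# Hypothetical extra option: referencing the patch part forces buildout
# to install it before this part
patch-dependency = ${patch-z.r.cmmi:recipe}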
An alternative solution is to just abuse the recipe's 'configure-command' like so:
[buildthis]
recipe = zc.recipe.cmmi
...
configure-command =
export CPPFLAGS="-D_GNU_SOURCE -I${openssl:location}/include";
./configure