I have used the following code to put a graph on the main page of my Doxygen documentation:
/*! \mainpage
\dot
digraph example{
node[shape=record, fontname=Helvetica, fontsize=10];
b [label="class cell_deinterleaver" URL= "\ref Cell_deinterleaver" ];
c [label="class freq_deinterleaver" URL= "\ref freq_deinterleaver" ];
b -> c [arrowhead= "open", style = "dashed"];
}
\endot
*/
But I get the following message "reached end of file while inside a dot block! The command that should end the block seems to be missing!"
Can anyone help?
Thanks.
The solution is exactly what the message says:
warning: reached end of file while inside a dot block! The command
that should end the block seems to be missing!
It is the kind of mistake that is easily overlooked: you have written \endot, but it should be \enddot (note the double d).
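With that one-character fix, the block from the question would read (nothing else changes):
/*! \mainpage
\dot
digraph example{
node[shape=record, fontname=Helvetica, fontsize=10];
b [label="class cell_deinterleaver" URL= "\ref Cell_deinterleaver" ];
c [label="class freq_deinterleaver" URL= "\ref freq_deinterleaver" ];
b -> c [arrowhead= "open", style = "dashed"];
}
\enddot
*/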
I'm new to nf-core/Nextflow, and needless to say the documentation does not always reflect what is actually implemented. I'm defining the basic pipeline below:
nextflow.enable.dsl=2

process RUNBLAST {

    input:
    val thr
    path query
    path db
    path output

    output:
    path output

    script:
    """
    blastn -query ${query} -db ${db} -out ${output} -num_threads ${thr}
    """
}

workflow {
    //println "I want to BLAST $params.query to $params.dbDir/$params.dbName using $params.threads CPUs and output it to $params.outdir"
    RUNBLAST(params.threads, params.query, params.dbDir, params.output)
}
Then I'm executing the pipeline with:
nextflow run main.nf --query test2.fa --dbDir blast/blastDB
Then I get the following error:
N E X T F L O W ~ version 22.10.6
Launching `main.nf` [dreamy_hugle] DSL2 - revision: c388cf8f31
Error executing process > 'RUNBLAST'
Caused by:
Not a valid path value: 'test2.fa'
Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run
I know test2.fa exists in the current directory:
(nfcore) MN:nf-core-basicblast jraygozagaray$ ls
CHANGELOG.md conf other.nf
CITATIONS.md docs pyproject.toml
CODE_OF_CONDUCT.md lib subworkflows
LICENSE main.nf test.fa
README.md modules test2.fa
assets modules.json work
bin nextflow.config workflows
blast nextflow_schema.json
I also tried with "file" instead of path, but that is deprecated and raises other kinds of errors.
It would be helpful to know how to fix this so I can get started with the pipeline-building process.
Shouldn't Nextflow copy the file to the execution path?
Thanks
You get the above error because params.query is not actually a path value. It's probably just a simple String or GString. The solution is to instead supply a file object, for example:
workflow {
    query = file(params.query)
    BLAST( query, ... )
}
Note that a value channel is implicitly created by a process when it is invoked with a simple value, like the above file object. If you need to be able to BLAST multiple query files, you'll instead need a queue channel, which can be created using the fromPath factory method, for example:
params.query = "${baseDir}/data/*.fa"
params.db = "${baseDir}/blastdb/nt"
params.outdir = './results'

db_name = file(params.db).name
db_path = file(params.db).parent

process BLAST {

    publishDir(
        path: "${params.outdir}/blast",
        mode: 'copy',
    )

    input:
    tuple val(query_id), path(query)
    path db

    output:
    tuple val(query_id), path("${query_id}.out")

    """
    blastn \\
        -num_threads ${task.cpus} \\
        -query "${query}" \\
        -db "${db}/${db_name}" \\
        -out "${query_id}.out"
    """
}

workflow {
    Channel
        .fromPath( params.query )
        .map { file -> tuple(file.baseName, file) }
        .set { query_ch }

    BLAST( query_ch, db_path )
}
Note that the usual way to specify the number of threads/CPUs is to use the cpus directive, which can be configured using a process selector in your nextflow.config. For example:
process {
    withName: BLAST {
        cpus = 4
    }
}
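Assuming the parameter names defined above (query, db, outdir), the pipeline could then be launched with something like the following; the paths here are only placeholders, and --db should point at your own BLAST database prefix:
nextflow run main.nf --query 'data/*.fa' --db 'blastdb/nt' --outdir results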
I'm sure that this is a misunderstanding of mine, since I'm not really an R programmer...
I have my code here: https://gist.github.com/bnsh/3839c4eb2c6b31e32c39ec312014b2b8
#! /usr/bin/env Rscript

library(R6)

Cloaked <- R6::R6Class("Cloaked",
  public = list(
    msg = function() {
      return(paste(
        "this code _works_, but lintr (https://github.com/jimhester/lintr)",
        "complains that cloak_class.R:19:8: warning: no visible binding for",
        "global variable ‘Cloaked’ when I try to use Cloaked within a",
        "function. It's fine tho, if I use it _outside_ a function."
      ))
    }
  )
)

main <- function() {
  c <- Cloaked$new()
  c$msg()
}

main()
It works... But, lintr complains: "cloak_class.R:19:8: warning: no visible binding for global variable ‘Cloaked’"
Actually, it's not about a class, really, because this also complains:
#! /usr/bin/env Rscript

cloaked <- function() {
  return(paste(
    "this code _works_, but lintr (https://github.com/jimhester/lintr)",
    "complains that cloak_function.R:13:3: warning: no visible global",
    "function definition for ‘cloaked’ when I try to use cloaked within",
    "a function. It's fine tho, if I use it _outside_ a function."
  ))
}

main <- function() {
  cloaked()
}

main()
This code also runs, but lintr says:
cloak_function.R:13:3: warning: no visible global function definition for ‘cloaked’
Why? Short of using a blunt instrument like # nolint start / # nolint end, what can I do to satisfy lintr?
Thanks!
I've just started with lintr and had the same problem. It appears to be a bug.
https://github.com/REditorSupport/atom-ide-r/issues/7
and the actual issue
https://github.com/jimhester/lintr/issues/27
The only workaround for now (short of fixing the bug within lintr) is to disable the object linter (which is not all that ideal, as it will not catch genuine errors of this form), e.g.:
with_defaults(object_usage_linter=NULL)
(I don't think the object usage linter is intended for scripts, but rather for packages. As far as I can tell, to work it will eval the entire script (!) to see what globals are defined. For a package, where the R files are all function definitions, that's fine, but for a script you don't really want to run the whole script every time you lint the file.)
Using the linters argument in the lintr functions:
lint("myfile.R", linters = linters_with_defaults(
  object_length_linter = NULL,
  object_name_linter = NULL,
  object_usage_linter = NULL
))
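If you would rather not pass the linters argument on every call, the same configuration can also live in a .lintr file at the project root (standard lintr behaviour, not something from the answers above), for example:
linters: linters_with_defaults(
    object_length_linter = NULL,
    object_name_linter = NULL,
    object_usage_linter = NULL
  )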
In Julia, how do I check whether the current user is allowed to write to a folder?
I could do it the Python way: just attempt the write, and then fail and recover.
(In my case I can definitely recover: I have a list of locations to attempt to write to, as fallbacks. I expect the first few not to work, since they are shared locations and only computer admins are likely to have permission to write there.)
Python also has the os.access function. Maybe Julia will have something similar in the future. For now we could borrow the idea. :)
It is implemented in posixmodule.c (with functionality for Windows too!), so if you are on POSIX you could simply mimic it:
julia> const R_OK = 4  # readability
julia> const W_OK = 2  # writability
julia> const X_OK = 1  # executability
julia> const F_OK = 0  # existence
julia> access(path, mode) = ccall(:access, Cint, (Cstring, Cint), path, mode) == 0;
Small test:
julia> access("/root", W_OK)
false
julia> access("/tmp", W_OK)
true
(For Windows it could be a little more complicated... but I could not test it now.)
EDIT:
Thanks to Matt B., we could use the libuv support in Julia, which should be portable (although slower on POSIX systems):
julia> function uv_access(path, mode)
           local ret
           req = Libc.malloc(Base._sizeof_uv_fs)
           try
               ret = ccall(:uv_fs_access, Int32, (Ptr{Void}, Ptr{Void}, Cstring, Int64, Ptr{Void}), Base.eventloop(), req, path, mode, C_NULL)
               ccall(:uv_fs_req_cleanup, Void, (Ptr{Void},), req)
           finally
               Libc.free(req)
           end
           return ret, ret == 0 ? "OK" : Base.struverror(ret)
       end
julia> uv_access("/tmp", W_OK)
(0, "OK")
julia> uv_access("/root", W_OK)
(-13, "permission denied")
julia> uv_access("/nonexist", W_OK)
(-2, "no such file or directory")
Is the following sufficient?
julia> testdir(dirpath) = try (p, i) = mktemp(dirpath); rm(p); true; catch; false; end
testdir (generic function with 1 method)
julia> testdir("/tmp")
true
julia> testdir("/root")
false
Returns true if dirpath is writable (by creating a temporary file inside a try-catch block). To find the first writable directory in a list, the following can be used:
julia> findfirst(testdir, ["/root","/tmp"])
2
Doing apropos("permissions"):
julia> apropos("permissions")
Base.Filesystem.gperm
Base.Filesystem.mkpath
Base.Filesystem.operm
Base.Filesystem.uperm
Base.Filesystem.mkdir
Base.Filesystem.chmod
shows a function called Base.Filesystem.uperm which seems to do exactly what you want it to:
help?> uperm
search: uperm supertype uppercase UpperTriangular isupper unescape_string unsafe_pointer_to_objref
uperm(file)
Gets the permissions of the owner of the file as a bitfield of
Value Description
––––– ––––––––––––––––––
01 Execute Permission
02 Write Permission
04 Read Permission
For allowed arguments, see stat.
Unfortunately it seems to be a bit buggy on my (old v7 nightly) build:
julia> uperm("/root")
0x07 # Uhhh I hope not?
I will update my build and raise a bug if one is not already present.
PS. In case it wasn't clear, I would expect to use this in combination with isdir to detect directory permissions specifically.
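For example, a small helper along those lines might look like this (my own sketch; the name is made up, and note that uperm only reports the owner's permission bits, so this only answers "can the owner write here"):
julia> owner_writable(dir) = isdir(dir) && (uperm(dir) & 0x02) != 0  # hypothetical helper; 0x02 is the write bit from the table above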
I don't think that Dan Getz's answer will work on Windows because the temporary file created cannot be deleted while there is an open handle to it, but this amended version with a call to close does work:
function isfolderwritable(folder)
    try
        (p, i) = mktemp(folder)
        close(i)
        rm(p)
        return true
    catch
        return false
    end
end
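As with testdir above, the first writable location in a list of fallbacks can then be picked the same way:
julia> findfirst(isfolderwritable, ["/root", "/tmp"])
2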
I managed to compile mruby after adding the mrbgem mruby-require from https://github.com/mattn/mruby-require
However, when I try to require a file I get an error. Below is my code:
inc.rb
def test(a, b)
  print "Inside the include->test(..)"
  return a + b
end

test1.rb
require 'inc.rb'

def helloworld(var1)
  print 'hello world ' + var1 + ". Test number = " + test(4, 5)
end

helloworld('test')
When I execute test1.rb I get this error from mruby:
NoMethodError: undefined method 'puts' for main
After some analysis I found out that 'puts' is not working with mruby. In fact, after adding the mruby-require gem, no Ruby code gets executed. Do I need to add any dependency along with mruby-require?
Can someone help me please?
Update: Pasting the content of build_config.rb as requested. I have removed the commented-out lines.
build_config.rb
MRuby::Build.new do |conf|
  if ENV['VisualStudioVersion'] || ENV['VSINSTALLDIR']
    toolchain :visualcpp
  else
    toolchain :gcc
  end

  enable_debug

  # adding the mruby-require library
  conf.gem 'mrbgems/mruby-require'
  conf.gembox 'default'
end

MRuby::Build.new('host-debug') do |conf|
  if ENV['VisualStudioVersion'] || ENV['VSINSTALLDIR']
    toolchain :visualcpp
  else
    toolchain :gcc
  end

  enable_debug

  conf.gembox 'default'
  conf.cc.defines = %w(ENABLE_DEBUG)
  conf.gem :core => "mruby-bin-debugger"
end
The following quote is from its README.md:
When mruby-require is being used, additional mrbgems that appear after mruby-require in build_config.rb must be required to be used.
This is from your build_config.rb:
conf.gem 'mrbgems/mruby-require'
conf.gembox 'default'
The default gembox contains mruby-print. So either require mruby-print or preferably swap the lines to make it a built-in gem (the default behavior without mruby-require).
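Concretely, that means swapping the two lines in the first build block so the gembox comes first (the rest of build_config.rb stays as it is):
  conf.gembox 'default'
  # mruby-require now comes after the gembox, so the gems in the default
  # gembox (including mruby-print) remain built in
  conf.gem 'mrbgems/mruby-require'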
I need to edit a .jam file used by Boost.Build for a specific kind of project. The official manual on the BJAM language says:
One of the toolsets that cares about DEF files is msvc. The following line should be added to it:
flags msvc.link DEF_FILE ;
Since the DEF_FILE variable is not used by the msvc.link action, we need to modify it to be:
actions link bind DEF_FILE
{
    $(.LD) .... /DEF:$(DEF_FILE) ....
}
Note the bind DEF_FILE part. It tells bjam to translate the internal target name in DEF_FILE to a corresponding filename in the link action.
So apparently just printing DEF_FILE with ECHO wouldn't work. How can it be expanded to a string variable or something that can actually be checked?
What I need to do is to print an error message and abort the build in case the flag is not set. I tried:
if ! $(DEF_FILE)
{
    errors.user-error "file not found" ;
    EXIT ;
}
But this "if" is always true.
I also tried putting "if ! $_DEF_FILE {...}" inside the "actions" block, but apparently it is ignored.
I am not sure I understand the overall task you have. However, if you want to add a check for a non-empty DEF_FILE, expanding on the documentation bit you quote, you need to add the check in the msvc.link function.
If you have a command-line pattern (specified with 'actions'), its content is what is passed to the OS for execution. But you can also have a function with the same name, which will be called before the actions are generated. For example, here's what the current codebase has:
rule link.dll ( targets + : sources * : properties * )
{
    DEPENDS $(<) : [ on $(<) return $(DEF_FILE) ] ;
    if <embed-manifest>on in $(properties)
    {
        msvc.manifest.dll $(targets) : $(sources) : $(properties) ;
    }
}
You can modify this code to additionally:
if ! [ on $(<) return $(DEF_FILE) ]
{
    ECHO "error" ;
}
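Putting that together with the error handling from your question, a sketch of the modified rule could look like the following (untested, and the exact rule you need to patch may differ between Boost.Build versions; errors.user-error already aborts the build, so the extra EXIT from your snippet is not strictly needed):
rule link.dll ( targets + : sources * : properties * )
{
    if ! [ on $(<) return $(DEF_FILE) ]
    {
        errors.user-error "DEF file not found" ;
    }
    DEPENDS $(<) : [ on $(<) return $(DEF_FILE) ] ;
    if <embed-manifest>on in $(properties)
    {
        msvc.manifest.dll $(targets) : $(sources) : $(properties) ;
    }
}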