To check the code quality of my package I am using the lintr package with the command
lintr::lint_package()
and get one result that I want to ignore:
functions should have cyclomatic complexity of less than 15
How can I ignore this single "false positive" result of a single linter (cyclocomp_linter)
for one file (a range of line numbers)?
Edit 1: Currently I am using this .lintr config file as a workaround (by disabling the linter completely):
linters: with_defaults(
cyclocomp_linter = NULL # instead of NULL I could use: cyclocomp_linter(16)
)
Although it doesn't solve your problem exactly, you can wrap that function between # nolint start and # nolint end comments to prevent any linter from flagging it, for example:
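A sketch of what that looks like (the function name is made up):
# nolint start
my_complex_function <- function(x) {
  # ... the many branches that trip cyclocomp_linter ...
}
# nolint end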
The syntax for preventing a specific linter from flagging a specific set of lines is currently under development - see https://github.com/jimhester/lintr/pull/660 . This will be present in the next major lintr release (3.0.0).
Related
I'm trying to run the code below from BioSeqClass package, however I get an error message:
Error in system(command, intern = TRUE) : '“C:\Program' not found
selectWeka(data, evaluator="CfsSubsetEval", search="BestFirst", n)
This is a problem with how BioSeqClass is calling java: it leaves the file names unprotected/unquoted, and R's system and system2 commands are horrible in that they do not force quoting. (If you ever think of using these commands directly yourself, I strongly recommend something like processx; a sketch follows.)
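For illustration only (a sketch, not code from the original answer; the paths and Weka class are made up), processx::run takes the program and its arguments as separate strings, so paths containing spaces need no manual quoting:
library(processx)
res <- processx::run(
  "java",
  c("-cp", "C:/Program Files/R/library/BioSeqClass/scripts/weka.jar",
    "weka.attributeSelection.CfsSubsetEval", "-i", "C:/Temp/train.arff"),
  error_on_status = FALSE
)
cat(res$stdout)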
One should create an issue or bug-report, but I don't know how to do that with Bioconductor, and their mirror on github (https://github.com/Bioconductor-mirror) is defunct, so I'm at a loss there. Hopefully somebody with more info can weigh in on this.
Workarounds
It is not obvious whether the problem is due to where weka.jar is located or to one of the other arguments. You can find out where the problem is by debugging the selectWeka function and inspecting the value of command before the system call. Look for the Program Files component of a path.
If the problem is with weka.jar, then this suggests that you are installing packages somewhere under C:\Program Files\, which is in my experience bad practice on two counts:
For many reasons I cannot recall (but this one re-ignites the discussion), I never install R in the default location under C:\Program Files\...; instead, I install it under a new directory such as C:\R\R-3.5.3 (version-based) and go from there. You may not have control over this if you are on a university/company system.
Since this is not in a base-R package, this suggests that either you are not using a personal library location (collection of packages), or have placed your personal library for some reason under C:\Program Files. If the former, I strongly suggest you never install new packages inside the base-R installation directory, instead using your own. See ?.libPaths and many other tutorials/discussions on the topic online. Using packrat or checkpoint might also mitigate this problem.
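A minimal sketch of the personal-library approach (the paths are illustrative, not from the question):
dir.create("C:/R/library", recursive = TRUE, showWarnings = FALSE)
.libPaths(c("C:/R/library", .libPaths()))
.libPaths()                          # the new location should now be listed first
# install.packages("somePackage")   # subsequent installs now go to C:/R/library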
If the problem is with trainFile (if I'm reading the source correctly), then your "permanent" fix is to change where Windows puts temporary files, as this trainFile is a temporary file created specifically for this function-run. If this is your problem, I'll leave it up to you to fix.
Regardless, you may not have the time or need for a more permanent solution; you may just want to run this once or twice and then move on. For that case:
Again, debug(selectWeka), and once command is defined (the next command to be executed is tmp <- system(command, intern = TRUE)), run this code to fix the value of command:
if (search == "Ranker") {
  command = paste("java -cp ", shQuote(file.path(.path.package("BioSeqClass"), "scripts", "weka.jar")),
                  " weka.attributeSelection.", evaluator, " -i ", shQuote(trainFile),
                  " -s \"weka.attributeSelection.", search, " -N ", n, "\"", sep="")
} else {
  command = paste("java -cp ", shQuote(file.path(.path.package("BioSeqClass"), "scripts", "weka.jar")),
                  " weka.attributeSelection.", evaluator, " -i ", shQuote(trainFile),
                  " -s weka.attributeSelection.", search, sep="")
}
(For the record, all I changed was adding shQuote twice to each paste.) Confirm that command now has quotes around things, something like
Browse[2]> command
[1] "java -cp \"C:\\Program Files\\...\\weka.jar" weka.attributeSel... -i \"c:\\path\\to\\some\\tempfile\" ...
Then you can continue-out of the debugger and let it run its course.
(I hope you don't have hundreds of calls to selectWeka.)
Caveat: I am not a user of BioSeqClass, so I'm saying all of this from speculation and inference. I might have mis-located the source of the error. And since I don't know what I'm doing with it, I have not tested the modified command assignment within selectWeka. I believe shQuote(...) is the right way to go, but you might need to use sQuote or dQuote instead; I'm not sure how your system is set up.
In TensorFlow, training from scratch produced the following 6 files:
events.out.tfevents.1503494436.06L7-BRM738
model.ckpt-22480.meta
checkpoint
model.ckpt-22480.data-00000-of-00001
model.ckpt-22480.index
graph.pbtxt
I would like to convert them (or only the needed ones) into one file graph.pb to be able to transfer it to my Android application.
I tried the freeze_graph.py script, but it requires an input .pb file, which I do not have (I only have the 6 files mentioned above). How do I proceed to get this single frozen_graph.pb file? I saw several threads, but none worked for me.
You can use this simple script to do that. But you must specify the names of the output nodes.
import tensorflow as tf

meta_path = 'model.ckpt-22480.meta'  # Your .meta file
output_node_names = ['output']       # Output node names (node names, not tensor names such as 'output:0')

with tf.Session() as sess:
    # Restore the graph
    saver = tf.train.import_meta_graph(meta_path)

    # Load weights (pass the directory that contains the checkpoint files)
    saver.restore(sess, tf.train.latest_checkpoint('path/of/your/.meta/file'))

    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
If you don't know the name of the output node or nodes, there are two ways:
You can explore the graph and find the name with Netron or with the summarize_graph console utility.
You can use all the nodes as output ones as shown below.
output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
(Note that you have to put this line just before the convert_variables_to_constants call.)
But I think that's an unusual situation, because if you don't know the output node, you cannot actually use the graph.
As it may be helpful for others, I am also answering here after the answer on GitHub ;-).
I think you can try something like this (with the freeze_graph script in tensorflow/python/tools) :
python freeze_graph.py --input_graph=/path/to/graph.pbtxt --input_checkpoint=/path/to/model.ckpt-22480 --input_binary=false --output_graph=/path/to/frozen_graph.pb --output_node_names="the nodes that you want to output e.g. InceptionV3/Predictions/Reshape_1 for Inception V3 "
The important flag here is --input_binary=false as the file graph.pbtxt is in text format. I think it corresponds to the required graph.pb which is the equivalent in binary format.
Concerning output_node_names, that's really confusing for me as I still have some problems with this part, but you can use the summarize_graph script in TensorFlow, which can take the .pb or the .pbtxt as input; a rough usage sketch follows.
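A sketch of building and running summarize_graph from the TensorFlow source tree (the paths are illustrative):
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/path/to/graph.pbtxt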
I tried the freeze_graph.py script, but the output_node_names parameter was totally confusing to me and the job failed.
So I tried the other one: export_inference_graph.py.
And it worked as expected!
python -u /tfPath/models/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path=/your/config/path/ssd_mobilenet_v1_pets.config \
--trained_checkpoint_prefix=/your/checkpoint/path/model.ckpt-50000 \
--output_directory=/output/path
The TensorFlow models repository I used is here:
https://github.com/tensorflow/models
First, use the following code to generate the graph.pbtxt (text-format graph) file.
import tensorflow as tf

# args.input / args.output come from the surrounding script (e.g. argparse)
with tf.Session() as sess:
    # Restore the graph
    _ = tf.train.import_meta_graph(args.input)

    # Save the graph definition as a text .pbtxt file
    g = sess.graph
    gdef = g.as_graph_def()
    tf.train.write_graph(gdef, ".", args.output, True)
Then, use summarize_graph to get the output node names.
Finally, use
python freeze_graph.py --input_graph=/path/to/graph.pbtxt --input_checkpoint=/path/to/model.ckpt-22480 --input_binary=false --output_graph=/path/to/frozen_graph.pb --output_node_names="the nodes that you want to output e.g. InceptionV3/Predictions/Reshape_1 for Inception V3 "
to generate the frozen graph.
How can I increase maxTokensPerLine in my own Atom.IO environment?
I've got some long lines causing syntax to not be recognized properly, for example not highlighted correctly and brackets not taken note of etc.
But this seems to be the current source file containing it; it is taken as a parameter, which suggests it could be configurable:
grammar-registry.coffee
I found
this.maxTokensPerLine = (_ref1 = options.maxTokensPerLine) != null ? _ref1 : Infinity;
on line 22 of /usr/share/atom/resources/app/apm/node_modules/first-mate/lib/grammar-registry.js
maxTokensPerLine also appears in
/usr/share/atom/resources/app/apm/node_modules/first-mate/lib/grammar.js
I tried adding maxTokensPerLine: 1000 in config.cson under *, core and editor, but it had no effect.
(old location: maxTokensPerLine in syntax.coffee)
You can use the package grammar-token-limit, which will handle changing it for you. All you need to do is specify which value you want in the package settings.
I guess if you want to do it yourself, this package would be the place to start looking.
I'm getting this on the console in a QML app:
QFont::setPointSizeF: Point size <= 0 (0.000000), must be greater than 0
The app is not crashing so I can't use the debugger to get a backtrace for the exception. How do I see where the error originates from?
If you know the function the warning occurs in (in this case, QFont::setPointSizeF()), you can put a breakpoint there. Following the stack trace will lead you to the code that calls that function.
If the warning doesn't include the name of the function and you have the source code available, use git grep with part of the warning to get an idea of where it comes from. This approach can be a bit of trial and error, as the code may span more than one line, etc, and so you might have to try different parts of the string.
If the warning doesn't include the name of the function, you don't have the source code available and/or you don't like the previous approach, use the QT_MESSAGE_PATTERN environment variable:
QT_MESSAGE_PATTERN="%{function}: %{message}"
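For example (a sketch; myapp stands for your executable):
QT_MESSAGE_PATTERN="%{function}: %{message}" ./myapp
Adding the backtrace placeholder gives even more context on platforms that support it:
QT_MESSAGE_PATTERN="%{message} -- %{backtrace depth=8}" ./myapp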
For the full list of variables at your disposal, see the qSetMessagePattern() docs:
%{appname} - QCoreApplication::applicationName()
%{category} - Logging category
%{file} - Path to source file
%{function} - Function
%{line} - Line in source file
%{message} - The actual message
%{pid} - QCoreApplication::applicationPid()
%{threadid} - The system-wide ID of current thread (if it can be obtained)
%{qthreadptr} - A pointer to the current QThread (result of QThread::currentThread())
%{type} - "debug", "warning", "critical" or "fatal"
%{time process} - time of the message, in seconds since the process started (the token "process" is literal)
%{time boot} - the time of the message, in seconds since the system boot if that can be determined (the token "boot" is literal). If the time since boot could not be obtained, the output is indeterminate (see QElapsedTimer::msecsSinceReference()).
%{time [format]} - system time when the message occurred, formatted by passing the format to QDateTime::toString(). If the format is not specified, the format of Qt::ISODate is used.
%{backtrace [depth=N] [separator="..."]} - A backtrace with the number of frames specified by the optional depth parameter (defaults to 5), and separated by the optional separator parameter (defaults to "|"). This expansion is available only on some platforms (currently only platforms using glibc). Names are only known for exported functions. If you want to see the name of every function in your application, use QMAKE_LFLAGS += -rdynamic. When reading backtraces, take into account that frames might be missing due to inlining or tail call optimization.
On an unrelated note, the %{time [format]} placeholder is quite useful to quickly "profile" code by qDebug()ing before and after it.
I think you can use qInstallMessageHandler (Qt5) or qInstallMsgHandler (Qt4) to specify a callback which will intercept all qDebug() / qInfo() / etc. messages (example code is in the link). Then you can just add a breakpoint in this callback function and get a nice callstack.
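A minimal Qt 5 sketch of that idea (the handler body is only illustrative; QCoreApplication stands in for whatever application class you use):
#include <QtGlobal>
#include <QString>
#include <QCoreApplication>
#include <cstdio>

// Forward all Qt messages through our own handler so a breakpoint inside it
// reveals the call stack of whoever emitted the warning.
void myMessageHandler(QtMsgType type, const QMessageLogContext &context, const QString &msg)
{
    // Set a breakpoint on the next line and inspect the debugger's call stack.
    std::fprintf(stderr, "[%d] %s (%s:%d)\n", type,
                 qPrintable(msg),
                 context.file ? context.file : "?",
                 context.line);
}

int main(int argc, char *argv[])
{
    qInstallMessageHandler(myMessageHandler);
    QCoreApplication app(argc, argv);
    // ... set up the rest of the application ...
    return app.exec();
}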
Aside from the obvious (searching your code for calls to setPointSize[F]), you can try the following, depending on your environment (which you didn't disclose):
If you have the debugging symbols of the Qt libs installed and are using a decent debugger, you can set a conditional breakpoint on the first line in QFont::setPointSizeF() with the condition set to pointSize <= 0 (see the sketch after the next paragraph). Even if conditional breakpoints don't work, you should still be able to set one and step through every call until you've found the culprit.
On Linux there's the tool ltrace which displays all calls of a binary into shared libs, and I suppose there's something similar in the M$ VS toolbox. You can grep the output for calls to setPointSize directly, but of course this won't work for calls within the lib itself (which I guess could be the case when it handles the QML internally).
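Hedged sketches of both approaches (the symbol and parameter names are taken from the Qt sources and may differ between versions; myapp stands for your binary):
# gdb, with Qt debug symbols installed:
(gdb) break QFont::setPointSizeF if pointSize <= 0
(gdb) run
(gdb) backtrace        # once the breakpoint fires, this shows who passed the bad size

# ltrace on Linux, demangling C++ symbols and filtering the output:
ltrace -C ./myapp 2>&1 | grep setPointSize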
I'm trying to convert pdf to images using this command:
gm convert ./file.pdf -scene 1 thumbs/thumb%02d.jpg
Although I specify the -scene argument, it does nothing: I get output files starting from thumb00.jpg, and I need them to start from thumb01.jpg.
I'm using GraphicsMagick 1.3.12.
What am I doing wrong here?
In order to ensure numbered output files, add the +adjoin option like:
gm convert ./file.pdf -scene 1 +adjoin thumbs/thumb%02d.jpg
This additional requirement was added in GraphicsMagick 1.3.15. It is OK to use the same option with all earlier releases.
There is still no way to specify the starting scene number; this is a known bug.
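One possible workaround (not from the original answer; a sketch that assumes fewer than 100 pages): let the numbering start at 00 and then shift every file name up by one:
gm convert ./file.pdf +adjoin thumbs/thumb%02d.jpg
# rename in reverse order so nothing gets overwritten
ls thumbs/thumb*.jpg | sort -r | while read f; do
  n=$(basename "$f" .jpg | sed 's/thumb//')
  mv "$f" "thumbs/thumb$(printf '%02d' $((10#$n + 1))).jpg"
done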