How to use chainer.using_config to stop F.dropout in the evaluate/predict phase in Chainer?

F.dropout should only be applied during training, and I am confused about how to use chainer.using_config for this.
How does it work, and how does Chainer know whether it is in training or prediction mode?

From Chainer v2 onward, function behavior is controlled by config variables. According to the official doc:
chainer.config.train
Training mode flag. If it is True, Chainer runs in the training mode. Otherwise, it runs in the testing (evaluation) mode. The default value is True.
You can control this config in the following two ways.
1. Simply assign the value.
chainer.config.train = False
# Here, the code runs in test mode, and dropout won't drop any unit.
model(x)
chainer.config.train = True
2. The with using_config(key, value) notation
With the approach above you may need to flip the flag between True and False repeatedly; Chainer provides the with using_config(key, value) notation to make this easier.
with chainer.using_config('train', False):
    # The train config is set to False, so this code runs in test mode.
    model(x)
# Here, the train config is restored to its original value.
...
Note 1: If you are using the trainer module, the Evaluator extension handles this configuration automatically during validation/evaluation (see the document, or the source code). That means the train config is set to False and dropout runs in evaluation mode while the validation loss is calculated.
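To see this in context, below is a minimal, runnable sketch of such a trainer setup; the two-layer MLP and the random toy data are invented purely for illustration. The Evaluator extension temporarily sets chainer.config.train to False while computing the validation loss, so F.dropout becomes a no-op there.
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions

class MLP(chainer.Chain):
    def __init__(self):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, 10)
            self.l2 = L.Linear(10, 2)

    def __call__(self, x):
        # F.dropout drops units only while chainer.config.train is True
        h = F.dropout(F.relu(self.l1(x)))
        return self.l2(h)

# Toy data, invented just to make the sketch runnable.
x = np.random.rand(100, 5).astype(np.float32)
y = np.random.randint(0, 2, size=100).astype(np.int32)
train_iter = chainer.iterators.SerialIterator(chainer.datasets.TupleDataset(x, y), 10)
test_iter = chainer.iterators.SerialIterator(chainer.datasets.TupleDataset(x, y), 10,
                                             repeat=False, shuffle=False)

model = L.Classifier(MLP())
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)

updater = training.StandardUpdater(train_iter, optimizer)
trainer = training.Trainer(updater, (2, 'epoch'))
# The Evaluator runs with chainer.config.train set to False,
# so dropout is disabled while the validation loss is computed.
trainer.extend(extensions.Evaluator(test_iter, model))
trainer.run()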
Note 2: the train config is only used to switch between "train" and "evaluate/validation" behavior.
If you need "predict" code, you need to implement it separately, as in the sketch below.

Related

Can a parameter be used to set the unit attribute for a component?

So far, using Wolfram System Modeler 4.3 and 5.1 the following minimal example would compile without errors:
model UnitErrorModel
  MyComponent c( hasUnit = "myUnit" );
  block MyComponent
    parameter String hasUnit = "1";
    output Real y( unit = hasUnit );
  equation
    y = 10;
  end MyComponent;
end UnitErrorModel;
But with the new release of WSM 12.0 (the jump in version is due to an alignment with the current release of Wolfram's flagship Mathematica) I am getting an error message:
Internal error: Codegen.getValueString: Non-constant expression:c.hasUnit
(Note: The error is given by WSMLink`WSMSimulate in Mathematica 12.0, which runs System Modeler 12.0 internally; here I am asking for the "InternalValues" property of the above model, since I do not have WSM 12.0 installed right now.)
Trying to simulate the above model in OpenModelica [OMEdit v. 1.13.2 (64-bit)] reveals:
SimCodeUtil.mo: 8492:9-8492:218]: Internal error Unexpected expression (should have been handled earlier, probably in the front-end. Unit/displayUnit expression is not a string literal: c.hasUnit
So it seems that to set the unit attribute I cannot make use of a variable that has parameter variability? Why is this? Shouldn't it suffice that the compiler can hard-wire the unit when compiling for runtime (after all, the given model runs without any error in WSM 4.3 and 5.1)?
EDIT: From the answer to an older question of mine I had believed that at least final parameters might be used to set the unit attribute. Making the modification final (e.g. c( final hasUnit = "myUnit" )) does not resolve the issue.
I have been given feedback on Wolfram Community by someone from Wolfram MathCore regarding this issue:
You are correct in that it's not in violation with the specification,
although making it a constant makes more sense since you would
invalidate all your static unit checking if you are allowed to change
the unit after building the simulation. We filed an issue on the
specification regarding this (Modelica Specification Issue # 2362).
So, MathCore is a bit ahead of the game in proposing a Modelica specification change that they have already implemented. ;-)
Note that in Wolfram System Modeler (12.0) using the annotation Evaluate = true will not cure the problem (cf. the comment above by @matth).
As a workaround, variables used to set the unit attribute should have constant variability, but they can nevertheless be included in user dialogs and changed interactively using annotation(Dialog(group = "GroupName")).

How can I change a CPLEX parameter in my Julia code?

I'm using the CPLEX solver to run my ILP model. The ILP model is implemented with Julia/MultiJuMP.
I would like to limit the optimization time for the problem. If I were working with OPL, I would just have to add Cplex.tilim=100.
In Julia, I put the following code:
mmodel = MultiModel(solver = CplexSolver("CPLEX.tilim"=100), linear = true)
It doesn't work.
From the last section of https://github.com/JuliaOpt/CPLEX.jl/blob/master/README.md, it appears that CPLEX.jl uses the legacy parameter names as they appear in the C API of CPLEX, for example CplexSolver(CPX_PARAM_EPINT=1e-8).
Here's the link to the CPLEX documentation for that parameter: https://www.ibm.com/support/knowledgecenter/SSSA5P_12.9.0/ilog.odms.cplex.help/CPLEX/Parameters/topics/EpInt.html. As you can see, the name appears in the first row of the 'Name prior to V12.6.0' column.
For the time limit, you should thus use CPX_PARAM_TILIM (e.g. CplexSolver(CPX_PARAM_TILIM=100)), as this is the name given in https://www.ibm.com/support/knowledgecenter/SSSA5P_12.9.0/ilog.odms.cplex.help/CPLEX/Parameters/topics/TiLim.html.

Scrutinizer - skip some PHPUnit tests

I want to skip some PHPUnit tests in Scrutinizer.
How can I achieve this?
Where do I need to make the configuration changes?
Many CI systems, including Scrutinizer CI, set environment variables in their build environment.
For example, the environment variable SCRUTINIZER is set to TRUE. That is only one of many; learn more about Pre-defined Environment Variables in the Scrutinizer CI documentation.
Inside the test method (or inside the setUp() method for the whole class) you can check the environment variable (e.g. via $_ENV) and mark the test as skipped.
if (isset($_ENV['SCRUTINIZER'])) {
    $this->markTestSkipped(
        'Scrutinizer CI build'
    );
}
See also the more general question How to skip tests in PHPUnit? and the PHPUnit documentation on Incomplete and Skipped Tests.
In my case, I added the ./vendor/bin/phpunit --exclude-group Group1,Group2 command to the .scrutinizer.yml file at the application level to skip the PHPUnit tests belonging to these groups, as follows:
build:
    nodes:
        acsi:
            tests:
                override:
                    - './vendor/bin/phpunit --exclude-group Group1,Group2'
                    - phpcs-run --standard=phpcs.xml
                    - php-scrutinizer-run

Skipping one test if another failed

I'm writing some tests which rely on an external binary being present. Having a test case that ensures the binary is present and functioning seems useful. I want multiple tests, expressing:
Is the binary found by subprocess.Popen?
Can it do basic X?
Can it do basic Y?
I want a pattern to skip the basic tests X and Y if the first one fails. It would also be good to test several binaries in the same TestCase, with different kinds of X and Y.
Some approaches come to mind:
Decorate test_X() and test_Y() with @unittest.skipIf(basic failed), either by checking TestResults or by setting a flag (a sketch of this variant follows the pseudocode below)
Have test_basic() call TestResults.addSkip(test_binary_X) upon failure (but this doesn't appear to be exposed)
Put test_basic() into one TestCase and the others into another
Make test_basic() part of setUp(), so it's run for each of X and Y
unittest doesn't run tests in the order they are written (test methods run in alphabetical order by default), so it may be hard to ensure the sequence.
# Something like:
class TestBasics(unittest.TestCase):
    def test_basic(self):
        try:
            subprocess.Popen(["binary"], stdout=None, stderr=None)
        except OSError as e:
            pass  # fail depending on the error

    @unittest.skipIf(...)  # skip if the binary check in test_basic failed
    def test_X(self):
        pass  # assert

    @unittest.skipIf(...)  # skip if the binary check in test_basic failed
    def test_Y(self):
        pass  # assert
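For reference, here is a minimal sketch of the flag-based variant of the first approach, under a few assumptions: the helper binary_available and the use of the Unix true command as a stand-in binary are invented for illustration, and unittest.skipUnless is used as the mirror image of skipIf. The availability check is evaluated once at import time, so the decorators see the flag regardless of test ordering.
import subprocess
import unittest

def binary_available(cmd):
    # Return True if `cmd` can be launched via subprocess.Popen.
    try:
        proc = subprocess.Popen([cmd], stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        proc.communicate()
        return True
    except OSError:
        return False

# Evaluated once at import time; "true" stands in for the real external binary.
BINARY_OK = binary_available("true")

class TestBasics(unittest.TestCase):
    def test_basic(self):
        self.assertTrue(BINARY_OK, "binary not found by subprocess.Popen")

    @unittest.skipUnless(BINARY_OK, "binary not available, skipping basic X")
    def test_X(self):
        self.assertTrue(True)  # replace with a real check of basic X

    @unittest.skipUnless(BINARY_OK, "binary not available, skipping basic Y")
    def test_Y(self):
        self.assertTrue(True)  # replace with a real check of basic Y

if __name__ == "__main__":
    unittest.main()
For several binaries in the same TestCase, the same pattern works with one flag per binary.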

What do the numbers before the test cases in the Test Lab indicate?

What do the numbers before the test cases in the Test Lab indicate? For example:
[1]Login with ur……
What does the [1] mean?
The [1] before the test case in the Test Lab indicates the number of the occurrence of that test case. If you add the same test case twice, it will look something like this:
[1] Test Case Name
[2] Test Case Name
It's the number of instances of that test case in the Execution Grid.
Each time you add this test case to the grid, a new instance is added with a new number beside it.
It took me a while to understand instances of test 'cases'. I was thinking of it all wrong (because the test lab is where you run the tests, right?).
When you want a specific set of parameters for a test, use test configurations in the Test Plan.
Once the values are set, you create your instance in the Test Lab from the configuration so that you can execute it.
If, on the other hand, you have an instance in Test Lab with some specific values that you like, you can use right-click -> generate configuration.
This creates a configuration for you and you can give it a name of your choosing (something that I really wanted to do in the test lab, until I discovered how Configurations work).
There is also a button in the Test Plan configurations tab to 'push' updated values to the Test Lab. So while there is a disconnect between configurations and the Test Lab (by design), it is not such a disconnect that you can't connect them when you want.
