mypy: pre-commit results differ from direct mypy invocation

I am trying to introduce pre-commit into quite a large existing project and replace the existing direct call to mypy with pre-commit; however, the results differ. It would be unreasonable to try to describe everything I've tried in detail or to expect a definitive answer as to what to do. I just hope for some direction or advice on how to diagnose the situation.
Here's the problem. When I run the 'legacy' mypy invocation, everything passes, but when I call pre-commit run mypy --all-files, I get a lot of errors along the lines of:
pack_a/pack_b/pack_c/json_formatter.py:171:13: error: Returning Any from function declared to return "str" [no-any-return]
    return orjson.dumps(result, self._default).decode('UTF-8')
The function in question is indeed declared to return str, and orjson.dumps is annotated too (it returns bytes, to be precise, but not Any), so it seems that this should pass.
The other type of error I get is Unused "type: ignore" comment
I tried to figure out the difference between the configurations, but couldn't.
Here's how mypy is called now:
$(VENV)/bin/mypy --install-types --non-interactive $(CODE)
Where $(CODE) is the list of all the directories that contain 'code' (as opposed to tests and supporting scripts)
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v0.950
  hooks:
    - id: mypy
      args: [
        "--config-file=setup.cfg",
        --no-strict-optional,
        --ignore-missing-imports,
      ]
      additional_dependencies:
        - pydantic
        - returns
        # ... (I checked all the types-* files the old mypy installs and added them here)
      exclude: (a regex to exclude the non-code dirs)
I am not sure how to check that both configurations check the same set of files, but the number of files they report as checked is the same.
There's also the problem that when I edit a file and then run pre-commit run mypy, I get even more errors than with pre-commit run mypy --all-files.
Is there anything I could try to diagnose this?

the mirrors-mypy README addresses this --
Because pre-commit runs mypy from an isolated virtualenv (without your dependencies) you may also find it useful to add the typed dependencies to additional_dependencies so mypy can better perform dynamic analysis
at least from your description you're missing orjson there -- perhaps more
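for example, the hook's additional_dependencies would need to grow to something like this (just a sketch -- add whatever typed third-party packages your code actually imports):

      additional_dependencies:
        - pydantic
        - returns
        - orjson
        # ...plus the types-* stub packages the old invocation installed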
disclaimer: I wrote pre-commit

Related

Is it possible to run mypy pre-commit without making it fail?

I would like to add the following to pre-commit for a team:
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: 'v0.720'
  hooks:
    - id: mypy
      args: [--ignore-missing-imports]
My team is worried that this might be too strict. To have a gradual introduction, I would like this hook not to make the commit fail, but only to show the issues. Is that possible?
you can, but I wouldn't suggest it -- warning noise is likely to have your whole team ignore the entire output and the entire tool
here's how you would do such a thing (note that it has reduced portability due to bash -- mostly because the framework intentionally does not suggest this)
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v0.720
  hooks:
    - id: mypy
      verbose: true
      entry: bash -c 'mypy "$@" || true' --
two pieces make this work:
verbose: true always produces the output -- this option is really only intended for debugging purposes, but you can turn it on always (it can be noisy / annoying though)
bash + || true -- ignore the exit code
disclaimer: I am the author of pre-commit
Also note that you can temporarily disable hooks by setting the environment variable SKIP. For example:
SKIP=flake8 git commit -m 'fix thing - work in progress'
This is especially useful when you just want to make local "checkpoint" commits that you'll fix later.
Side note on mypy specifically: there's a potentially big issue with using mypy in a non-blocking way like this. If you allow commits with type errors to be merged, everyone else will start to see those type errors in their pre-commit checks.
When developers are making further changes, it's confusing whether the mypy errors that appear were there from before, or due to their further changes. This can be a recipe for frustration/confusion, and also for allowing further type errors to accumulate.
I think the mypy guide on using mypy with an existing codebase is pretty good advice.
If you just need to temporarily skip mypy checks so you can checkpoint your work, push a PR for initial review, or whatever, you can just do SKIP=mypy as mentioned above.

How to fail the whole state.sls if one statement inside fails

I've just started with SaltStack, so could anyone help with this question: how do I fail the whole state.sls if one statement inside it fails? Are the require/watch requisites suitable for this?
You can use requisites: all depending states will fail along with the required one.
An alternative is to abort the whole execution on a single state failure:
abort_on_failure_state_example:
  test.fail_without_changes:
    - failhard: True
No further state will be executed, even those from other included sls files.
I use that to ensure that some required grains are set before applying states, without having to check/require them on every state.
This is documented in https://docs.saltstack.com/en/latest/ref/states/failhard.html
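Per those docs, failhard can also be turned on globally rather than per state, by setting it in the master or minion configuration. A minimal sketch:

# /etc/salt/minion (or master) configuration -- make any state failure abort the run
failhard: True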
Well. You can specify that a certain state will only be applied if another state is successfully applied as well. Like this:
vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - require:
      - pkg: vim
The vimrc file will only be managed if the vim package is successfully installed. It doesn't matter whether vim was already installed or is newly installed by this state; if pkg: vim succeeds, the second state will run.
This is useful if you have a particular state that might fail. You can add the require to all other states to make sure they will only be applied if that particular state is successful.
To answer your question:
It's not possible to make all states inside the state.sls fail if one of them fails. You can work around this issue by running your state.sls with salt '*' state.apply state test=True to see what would happen. If one of the states would fail you can decide to not actually apply the state.
I hope this answers your question. If it's still unclear you might want to come up with an example :-)

sbt key that corresponds to command that I type in

I want to make my tests run every time I type universal:package-zip-tarball. I know that to do this, I have to put something like
someKey <<= someKey dependsOn (test in Test)
in my project/Build.scala, where someKey is the key for the task that I want to make depend on the test run, in this case universal:package-zip-tarball.
But my generic question is: how do I find out what someKey should be?
Note that this is a Play framework project, and I don't even know if universal:package-zip-tarball is provided by Play, or by some other sbt plugin.
Is there any way sbt can just tell me, without me having to go searching for the source code repository containing the relevant code?
Use the inspect command:
$ inspect universal:package-zip-tarball
[...]
[info] Defined at:
[info] (com.typesafe.sbt.packager.universal.UniversalPlugin) UniversalPlugin.scala:73
This is actually the location where the task's code is defined, but that is close enough to help, because it lets us find the key (the key will be in the same sbt plugin).
From this we can find out that the key is:
com.typesafe.sbt.packager.universal.Keys.packageZipTarball
Unfortunately, just substituting this in doesn't work - it says:
[error] Reference to undefined setting:
[error]
[error] my-project/*:packageZipTarball from my-project/*:packageZipTarball
[error] Did you mean my-project/universal-docs:packageZipTarball ?
[error]
[error] Use 'last' for the full log.
So to fix this, the only thing remaining is to translate the universal: prefix. It is in fact this:
packageZipTarball in Universal <<= packageZipTarball in Universal dependsOn (test in Test)
but it just needs an extra import to make it compile:
import com.typesafe.sbt.SbtNativePackager._
(In this case, SbtNativePackager is the main plugin object, I think. Other plugins might require importing something else, to translate such a prefix.)
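For reference, on newer sbt versions where <<= is deprecated, the equivalent wiring uses := and .value; roughly the following (a sketch, untested against this particular build):

packageZipTarball in Universal := ((packageZipTarball in Universal) dependsOn (test in Test)).value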

"include_recipe" vs. Vagrantfile "chef.add_recipe". What's the difference?

I just ran the nginx::source recipe on my Vagrant box, and I see very unusual behaviour.
When I include the recipe from the Vagrantfile (as below), everything works like a charm:
chef.add_recipe("project::nginx")
chef.add_recipe("nginx::source")
(The project::nginx recipe is very simple; I use it to override default attributes of the nginx cookbook.)
But if I include the recipe at the very end of project::nginx (mentioned above), everything falls apart:
node.default['nginx']['server_names_hash_bucket_size'] = 128
include_recipe "nginx::source"
Until now I didn't know there was any difference in behaviour between those two invocations. Does anybody here know what the difference is?
Gotcha! It's a Chef 11 feature; the issue exists in chef-solo only :)
To summarize quickly, the difference is:
chef.add_recipe() - loads the entire cookbook context (all the files, e.g. recipes, definitions, attributes...)
include_recipe "" - files (attributes, definitions, etc.) that are not in the expanded run list are not loaded.
There are at least 4 ways to solve the issue (i.e. get the files into the run list):
include_attribute - include the desired attribute file explicitly.
metadata.rb -> dependency - if your cookbook uses a recipe from another cookbook, put that cookbook in metadata.rb's dependency section, and all of its files will be loaded (see the sketch after this list).
chef.add_recipe() - load the recipe via the Vagrantfile. (Mentioned here just for reference.)
Berkshelf - you may use this cookbook manager to solve the issue as well. Here's the Stack Overflow thread about this exact problem and some Docs.
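For the metadata.rb option, the change would look roughly like this (a sketch, assuming the wrapper cookbook is named project as in the question):

# metadata.rb of the project cookbook
name    'project'
depends 'nginx'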
For those who are interested in further reading: Chef 11 introduced dependency-based cookbook loading for non-recipe files. The new loading logic means that files belonging to cookbooks which exist in the cookbook_path, but are not in the expanded run_list or in the dependencies of the cookbooks in the expanded run_list, will no longer be loaded. REF: Opscode breaking changes documentation; and if you need a signature of the error I got, here's exactly the same one, even for the same cause.

PHPUnit - does nothing, no errors, no output

Sorry for another 'phpunit doesn't work' question. It had worked for years. Today I reinstalled PEAR and PHPUnit for reasons not connected to this problem. Now when I run phpunit as I usually do, nothing happens: the CLI just shows me a new line, no output whatsoever.
Has anyone encountered this problem or has an idea what could have caused it?
PHPUnit Version: 3.5.15
PEAR Version: 1.9.4
PHP Version: 5.3.8
Windows 7
I'm on OSX and MAMP. To get error messages I had to adjust the following entries in php.ini:
display_errors = On
display_startup_errors = On
Please note that this has to go into /Applications/MAMP/bin/php/php5.3.6/conf/php.ini.
For future reference, for those facing any problem where PHPUnit is failing silently, just add these three lines inside phpunit.xml:
<phpunit ....... >
    ...
    ...
    <php>
        <ini name="display_errors" value="true"/>
    </php>
</phpunit>
After that, run the tests again, and now you can see why PHPUnit is failing.
AND ... ENJOY UNIT TESTING :)
I know the original poster's question is already answered, but just for people searching in the future: one thing that can cause PHPUnit to fail silently (i.e. it just stops running tests without telling you why) is the error handler it sets up before each test run, which is intended to capture errors and display them at the end of the run. The problem is that certain errors halt execution of the whole test run.
What I generally do when this happens, as a first step, is reset the error handler to something that will immediately output the error message. In my base test class I have a method called setVerboseErrorHandler, which I'll call at the top of the test (or in setUp) when this happens. The below requires php 5.3 or higher (due to the closure), so if you're on 5.2 or lower you can make it a regular function.
protected function setVerboseErrorHandler()
{
    $handler = function($errorNumber, $errorString, $errorFile, $errorLine) {
        echo "
ERROR INFO
Message: $errorString
File: $errorFile
Line: $errorLine
";
    };
    set_error_handler($handler);
}
Create the simplest test class you can without a bootstrap.php or phpunit.xml to first verify that your new installation works. PHPUnit will stop without any message if it cannot instantiate all of the test cases--one for each test method and data provider--before running any tests.
You have already figured out how to get it to work, but my solution was a little different.
The first thing you can do is check the exit status. If it's not 0, then PHP exited, and because of the INI settings in effect, none of the PHP error messages were output. What I did was enable the display_errors INI setting and set error_reporting to E_ALL. I was then able to identify errors such as PHP not being able to parse a certain script. Once I fixed that, PHPUnit ran properly.
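For example (a sketch; the path to the phpunit script is a placeholder, adjust it for your installation):

$ phpunit; echo "exit status: $?"
$ php -d display_errors=1 -d error_reporting=-1 path/to/phpunit

(error_reporting=-1 reports everything, regardless of PHP version.)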
I managed to spectacularly paint myself into a corner with a custom "fatal error handler" that, in certain rare conditions, turned out to output nothing. Those conditions, in accordance with Murphy's Law, materialized once I had forgotten the handler was in place.
Since it wasn't really a "PHPUnit problem", none of the other answers helped [although #David's problem was at bottom the same thing], even though the symptoms were the same - phpunit terminating with no output, no errors, no log and no clue.
In the end I had to resort to a step-by-step tracing of the whole test suite by adding this in the bootstrap code:
register_shutdown_function(function() {
    foreach ($GLOBALS['lastStack'] as $i => $frame) {
        print "{$i}. {$frame['file']} +{$frame['line']} in function {$frame['function']}\n";
    }
});
register_tick_function(function() {
    $GLOBALS['lastStack'] = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 8);
});
declare(ticks=1);
If anyone ever manages to do worse than this, and somehow block stdout as well, this modification should work:
register_shutdown_function(function() {
    $fp = fopen("/tmp/path-to-debugfile.txt", "w");
    foreach ($GLOBALS['lastStack'] as $i => $frame) {
        fwrite($fp, "{$i}. {$frame['file']} +{$frame['line']} in function {$frame['function']}\n");
    }
    fclose($fp);
});
An old thread, this one, but one I stumbled across when having the same problem: nothing was being returned to the console, including print, print_r, echo, etc.
I solved it by using --stderr as a test-runner option.
Check that you haven't written any logic into your code that just dies, with no output. For example,
<?php
if (!array_key_exists('SERVER_NAME', $_SERVER)) {
    die();
}
This was exactly my case; I'd made some assumptions about the environment which were correct when running the code via Apache, but weren't fulfilled when running from CLI and the code did not echo any output.
PHPUnit tried to include the bootstrap file before giving the usual init output, but died during the bootstrapping process, hence exiting with status 0 and no output.
If, when you run a recent version of phpunit from the command line like this:
> php phpunit
or
> ./phpunit
or
> php ./phpunit.phar
or
> ./phpunit.phar
and you immediately return to the prompt with no messages, this is probably due to a Suhosin security setup.
phpunit is now a "phar" package including all the libraries. To be able to run such a file when PHP has the Suhosin security module enabled, you must first set this:
suhosin.executor.include.whitelist = phar
in your php.ini file (for example, on Debian/Ubuntu you may have to edit the file /etc/php5/conf.d/suhosin.ini).
I tried everything here, but nothing worked until I tried phpunit --no-configuration simpletest.php. That finally gave me some output, which implies that my phpunit.xml.dist file is broken. (I'll come back and update this once I debug it.)
The contents of simpletest.php are below, but any test file should work.
<?php
use PHPUnit\Framework\TestCase;

final class FooTest extends TestCase
{
    public function testFoo()
    {
        $this->assertEquals('x', 'y');
    }
}
Check if the phpunit you're running and the one you installed are the same:
$ pear list phpunit/phpunit
...
script /path/to/phpunit
...
Try to execute exactly that phpunit with the full path.
Then check your PATH variable and see if the correct directory is in it. If not, fix that.
If that does not help, write something into the phpunit executable, e.g. "echo 123;", and run phpunit. Check if you see that.
For me the conflict was with Xdebug's directive
xdebug.remote_enable=1
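A quick way to check whether Xdebug is the culprit is to override the directive for a single run (a sketch; the phpunit path is a placeholder):

$ php -d xdebug.remote_enable=0 path/to/phpunit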
If you are using Composer, check to make sure the PHP files you include are not ending code execution. The same goes for any PHP files you include explicitly.
In my case, I was working on a WordPress plugin, and one of the PHP files I included directly in composer.json (which I don't want to load through PSR-4 because WordPress's coding standards don't support it yet) had this code at the top:
if (!defined('ABSPATH')) {
    exit(); // exit if accessed directly
}
And since ABSPATH will not be defined when I run the tests directly, the code was exiting.
This is because I told Composer to always load these files, so that part of the code executes every time, while the other files included through PSR autoloading are loaded on demand.
So check to make sure none of the files you include stop code execution. If one does, then even when you run phpunit --info the code will still exit and you won't see any output.
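If the guard only checks that the constant exists, one way to cope with it in a test setup is to define the constant in your PHPUnit bootstrap before Composer's autoloader (and therefore the always-loaded files) runs. A sketch with hypothetical paths, not the author's actual setup:

<?php
// tests/bootstrap.php (hypothetical) -- define the guard constant so the
// Composer-included files don't exit() during the test run.
if (!defined('ABSPATH')) {
    define('ABSPATH', sys_get_temp_dir() . '/wordpress/');
}
require __DIR__ . '/../vendor/autoload.php';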
I was facing a similar problem: I could run phpunit from the root directory but not from anywhere else, so I added the --configuration option and pointed it at my XML configuration.
$ ./<path_to_phpunit>phpunit --configuration <path_to_phpunitxml>/phpunit.xml
The path to phpunit is optional; I used it because I installed it locally via Composer.
In /composer/vendor/phpunit/phpunit/src/TextUI/Command.php, in the main() function's catch (Throwable $t) {} block, var_dump (or echo / print_r) the exception. If an exception exists, you will probably be able to solve the current problem.
