Is it possible to run mypy pre-commit without making it fail? - mypy

I would like to add the following to pre-commit for a team:
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: 'v0.720'
  hooks:
    - id: mypy
      args: [--ignore-missing-imports]
My team is worried that this might be too strict. To have a gradual introduction, I would like this hook not to make the commit fail, but only to show the issues. Is that possible?

you can, but I wouldn't suggest it -- warning noise is likely to have your whole team ignore the entire output and the entire tool
here's how you would do such a thing (note that it has reduced portability due to bash -- mostly because the framework intentionally does not suggest this)
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v0.720
  hooks:
    - id: mypy
      verbose: true
      entry: bash -c 'mypy "$@" || true' --
two pieces make this work:
verbose: true always produces the output -- this option is really only intended for debugging purposes, but you can turn it on always (it can be noisy / annoying though)
bash + || true -- ignore the exit code
disclaimer: I am the author of pre-commit

Also note that you can temporarily disable hooks by setting the environment variable SKIP. For example:
SKIP=flake8 git commit -m 'fix thing - work in progress'
This is especially useful when you just want to make local "checkpoint" commits that you'll fix later.
Side note on mypy specifically: there's a potentially big issue with using mypy in a non-blocking way like this. If you allow commits with type errors to be merged, everyone else will start to see those type errors in their pre-commit checks.
When developers are making further changes, it's confusing whether the mypy errors that appear were there from before, or due to their further changes. This can be a recipe for frustration/confusion, and also for allowing further type errors to accumulate.
I think the mypy guide on using mypy with an existing codebase is pretty good advice.
If you just need to temporarily skip mypy checks so you can checkpoint your work, push a PR for initial review, or whatever, you can just do SKIP=mypy as mentioned above.

Related

mypy: pre-commit results differ from

I am trying to introduce pre-commit into quite a large existing project and replace the existing direct call to mypy with pre-commit, however, the results differ. It would be unreasonable to try and describe everything I've tried in detail or expect a definitive answer as to what to do. I just hope for some direction or advice as to how to diagnose the situation.
Here's the problem. When I run the 'legacy' mypy invocation, everything passes, but when I call the pre-commit run mypy --all-files, I get a lot of errors along the lines of:
pack_a/pack_b/pack_c/json_formatter.py:171:13: error: Returning Any from function declared to return "str" [no-any-return]
    return orjson.dumps(result, self._default).decode('UTF-8')
The function in question is indeed declared to return str, but so is orjson.dumps, so it seems that this should pass (it returns bytes to be precise, but not Any).
The other type of error I get is Unused "type: ignore" comment
I tried to figure out the difference between the configurations, but couldn't.
Here's how mypy is called now:
$(VENV)/bin/mypy --install-types --non-interactive $(CODE)
Where $(CODE) is the list of all the directories that contain 'code' (as opposed to tests and supporting scripts)
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v0.950
  hooks:
    - id: mypy
      args: [
        "--config-file=setup.cfg",
        --no-strict-optional,
        --ignore-missing-imports,
      ]
      additional_dependencies:
        - pydantic
        - returns
        ... (I checked all the types-* files the old mypy installs and added them here)
      exclude: (a regex to exclude the non-code dirs)
I am not sure how to check that both configurations check the same set of files, but the number of files they report checking is the same.
There's also the problem that when I edit a file and then run pre-commit run mypy, I get even more errors than with pre-commit run mypy --all-files
Is there anything I could try to diagnose this?
the mirrors-mypy README addresses this --
Because pre-commit runs mypy from an isolated virtualenv (without your dependencies) you may also find it useful to add the typed dependencies to additional_dependencies so mypy can better perform dynamic analysis
at least from your description you're missing orjson there -- perhaps more
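As a rough sketch, and assuming orjson is the only missing package (it may not be), the hook config might grow to something like:
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v0.950
  hooks:
    - id: mypy
      args: ["--config-file=setup.cfg", --no-strict-optional, --ignore-missing-imports]
      additional_dependencies:
        - pydantic
        - returns
        - orjson  # the runtime dependency mypy needs to resolve precise return types
        # ...plus the same types-* stub packages the 'legacy' environment installs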
disclaimer: I wrote pre-commit

How to fail the whole state.sls if one statement inside fails

I've just started with SaltStack, so could anyone help with this question: how do I fail the whole state.sls if one statement inside it fails? Are the require/watch requisites suitable for this?
You can use requisites: if the required state fails, all states that depend on it will fail as well.
An alternative is to abort the whole execution on a single state failure:
abort_on_failure_state_example:
  test.fail_without_changes:
    - failhard: True
No further state will be executed, even those from other included sls files.
I use that to ensure that some required grains are set before applying states, without having to check/require them on every state.
This is documented in https://docs.saltstack.com/en/latest/ref/states/failhard.html
Well. You can specify that a certain state will only be applied if another state is successfully applied as well. Like this:
vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - require:
      - pkg: vim
The vimrc file will only be managed if the vim package is successfully installed. It doesn't matter whether vim was already installed or is newly installed by this state: if pkg: vim succeeds, the second state will run.
This is useful if you have a particular state that might fail. You can add the require to all other states to make sure they are only applied if that particular state is successful.
To answer your question:
It's not possible to make all states inside the state.sls fail if one of them fails. You can work around this by running your state.sls with salt '*' state.apply state test=True to see what would happen; if one of the states would fail, you can decide not to actually apply it.
I hope this answers your question. If it's still unclear you might want to come up with an example :-)

Create "update all" state in Saltstack

Hello helpful friends,
We have quite a setup here of 100+ servers being managed by Salt states. With different roles in the organization held by different people, I'd really like to be able to "aggregate" some states. In this case: updating (yum) packages.
I would really like our sysadmins to be able to safely execute a command like this on the master:
salt '*' state.apply update.packages
while maybe our developers would be able to execute:
salt 'dev-*' state.apply update.application
Of course we have a large set of sls files and the key to this issue is that I don't want all those states executed, but just a selected bunch of them.
To achieve this, I've tried to create an update/packages.sls state, containing:
update-packages:
  test.nop
And then added to, for example the following existing state:
nagios-plugins-all:
  pkg.latest:
    - require:
      - pkg: corepackages
a watch_in as follows:
nagios-plugins-all:
  pkg.latest:
    - require:
      - pkg: corepackages
    - watch_in:
      - test: update-packages
Unfortunately, this is clearly not the way to go, as executing salt 'testserver001' state.apply update.packages now only returns:
testserver001:
----------
    test_|-update-packages_|-update-packages_|-nop:
        ----------
        __id__:
            update-packages
        __run_num__:
            0
        changes:
            ----------
        comment:
            Success!
        duration:
            0.946
        name:
            update-packages
        result:
            True
        start_time:
            12:10:46.035686
while I know for sure that updated packages are available. I can't include all the existing state files into the update/packages.sls file, as that would cause all states to be executed in those files and that's not what I want either. It would also become a very messy file.
I also don't want to just execute salt '*' pkg.upgrade as I have states depending on updates; i.e. if the package nagios is updated, the states concerning the up-to-date config files should be run and consequently a restart of the nagios service should be executed. All of that is configured in salt using watch and require arguments, so I'd like to use that also when updating my packages. Also, I want to be in control of which packages can be updated.
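To illustrate the kind of dependency I mean, the pattern looks roughly like this (the names here are illustrative, not our real states):
nagios:
  pkg.latest: []

/etc/nagios/nagios.cfg:
  file.managed:
    - source: salt://nagios/nagios.cfg
    - require:
      - pkg: nagios

nagios-service:
  service.running:
    - name: nagios
    - enable: True
    - watch:
      - pkg: nagios                   # restart when the package is upgraded
      - file: /etc/nagios/nagios.cfg  # ...or when the config file changes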
I don't know if I'm on the right path, or whether this is possible with Salt at all, but maybe someone here has a brilliant idea on how to achieve this behavior. I would be very thankful!
You might want to look at Salt's External Auth System.
This way you can limit users and groups to specific minions and commands, and even restrict the parameters.
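As a rough sketch of what that could look like in the master config (the group names and minion globs below are assumptions, not taken from the question):
external_auth:
  pam:
    sysadmins%:            # members of a hypothetical 'sysadmins' system group
      - '*':
        - state.apply      # may run state.apply on all minions
    developers%:           # members of a hypothetical 'developers' system group
      - 'dev-*':
        - state.apply      # may run state.apply, but only on dev-* minions
Users would then authenticate against the external auth system when running commands, e.g. salt -a pam '*' state.apply update.packages.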

Can Ansible unarchive be made to write static folder modification times?

I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, so I can bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script with a connection of local, so I assume SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency).
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash
# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd `dirname $0`/../..
# Clear collation file
echo > /tmp/wp-checksum
# List all files recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
    #echo $file
    cat $file >> /tmp/wp-checksum
done
# Get checksum of file contents
sha1sum /tmp/wp-checksum
# Get checksum of file sizes
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum
# Go back to original dir
cd $STARTDIR
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_before.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
    ]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_after.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
    ]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all the files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task. Thus, the next time the task runs it will be skipped if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
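A minimal sketch for the plugin in question, assuming the plugin's main file is a reliable marker (the exact creates path is an assumption, not taken from the question):
# 'creates' path below is assumed; point it at any file the archive reliably extracts
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
    creates=wp-content/plugins/wordpress-https/wordpress-https.php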
My solution is to modify the checksum script and to make that a permanent feature of the Ansible process. It feels a bit hacky to do my own checksumming, when Ansible should do it for me, but it works.
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However I do sometimes wonder about the effort/reward ratio when raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (since there is a surprising amount of things to think about in a real WordPress installation) but in general I found the process much nicer than using orchestration tools.

"include_recipe" vs. Vagrantfile "chef.add_recipe". What's the difference?

I just ran the nginx::source recipe on my Vagrant box, and I'm seeing very unusual behaviour.
When I include a recipe from the Vagrantfile (as below), everything works like a charm,
chef.add_recipe("project::nginx")
chef.add_recipe("nginx::source")
(The project::nginx recipe is very simple; I use it to override default attributes of the nginx cookbook.)
but if I include the recipe at the very end of project::nginx (mentioned above), everything falls apart:
node.default['nginx']['server_names_hash_bucket_size'] = 128
include_recipe "nginx::source"
Until now I didn't know there was any difference in behaviour between those two invocations. Does anybody here know what the difference is?
Gotcha! It's a Chef 11 feature, and the issue exists in chef-solo only :)
To give a quick summary, the difference is:
chef.add_recipe() - loads the entire cookbook context (all the files, e.g. recipes, definitions, attributes...)
include_recipe "" - files (attributes, definitions, etc.) that are not in the expanded run list are not loaded.
There are at least 4 ways to solve the issue (i.e. get the files into the run list):
include_attribute - include the desired attribute file explicitly.
metadata.rb dependency - if your cookbook uses a recipe from another cookbook, put that cookbook in metadata.rb's depends section, and all its files will be loaded.
chef.add_recipe() - load the recipe via the Vagrantfile. (Mentioned here just for reference.)
Berkshelf - you may use this cookbook manager to solve the issue as well. Here's the Stack Overflow thread about this exact problem and some docs.
For those who are interested in further reading, Chef 11 introduced dependency-based cookbook loading for non-recipe files. The new loading logic means that files belonging to cookbooks which exist in the cookbook_path but are not in the expanded run_list, or in the dependencies of the cookbooks in the expanded run_list, will no longer be loaded. REF: Opscode breaking changes documentation; and if you need a signature of the error I got, here's exactly the same one, even for the same cause.
