pip.installed keeps re-installing the same package from GitHub - SaltStack

I have a state defined to install a package with pip directly from a github repository:
cloudprint_package:
  pip.installed:
    - name: git+https://github.com/forked/cloudprint.git
The cloudprint package is installed, but Salt is re-installing it on each successive state run:
----------
ID: cloudprint_package
Function: pip.installed
Name: git+https://github.com/forked/cloudprint.git
Result: True
Comment: There was no error installing package
'git+https://github.com/forked/cloudprint.git'
although it does not show when calling 'pip.freeze'.
Started: 21:37:18.772181
Duration: 38152.208 ms
Changes:
----------
git+https://github.com/forked/cloudprint.git==???:
Installed
Question: How do I prevent it from reinstalling the package?

Output of pip freeze | grep -i cloudprint
cloudprint==0.11
Solution: specify an egg identifier in the pip URL:
cloudprint_package:
  pip.installed:
    - name: git+https://github.com/forked/cloudprint.git#egg=cloudprint==0.11
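As a further sketch, if the fork publishes tags you can pin the git ref as well, so the checked-out code and the egg identifier stay in agreement (the v0.11 tag below is an assumption about the fork, not something shown in the question):

```yaml
# Sketch: pin both the git ref and the egg name/version
# (the v0.11 tag is hypothetical; use whatever the fork actually tags)
cloudprint_package:
  pip.installed:
    - name: git+https://github.com/forked/cloudprint.git@v0.11#egg=cloudprint==0.11
```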

Related

Issue installing apache-airflow-backport-providers-google module on airflow instance of Google Composer

I need to execute Data Fusion pipelines from Composer, using the operators for this:
from airflow.providers.google.cloud.operators.datafusion import (
CloudDataFusionCreateInstanceOperator,
CloudDataFusionCreatePipelineOperator,
CloudDataFusionDeleteInstanceOperator,
CloudDataFusionDeletePipelineOperator,
CloudDataFusionGetInstanceOperator,
CloudDataFusionListPipelinesOperator,
CloudDataFusionRestartInstanceOperator,
CloudDataFusionStartPipelineOperator,
CloudDataFusionStopPipelineOperator,
CloudDataFusionUpdateInstanceOperator,
)
The issue I have is with the module "apache-airflow-backport-providers-google". With the help of this link I learned that I needed this module to use these operators:
Reference for installing the module in an Airflow instance (answered by Gonzalo Pérez Fernández): https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/datafusion.html
When I tried to install the Python dependency on Composer as a PyPI package, I got this error:
UPDATE operation on this environment failed 7 minutes ago with the following error message:
Failed to install PyPI packages.
apache-airflow-providers-google 5.0.0 has requirement google-ads>=12.0.0, but you have google-ads 7.0.0. Check the Cloud Build log at https://console.cloud.google.com/cloud-build/builds/a2ecf37a-4c47-4770-9489-6fb65e87d82f?project=341768372632 for details. For detailed instructions see https://cloud.google.com/composer/docs/troubleshooting-package-installation
The log detail is:
apache-airflow-providers-google 5.0.0 has requirement google-ads>=12.0.0, but you have google-ads 7.0.0.
apache-airflow-backport-providers-google 2021.3.3 has requirement apache-airflow~=1.10, but you have apache-airflow 2.1.2+composer.
The command '/bin/sh -c bash installer.sh $COMPOSER_PYTHON_VERSION fail' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
Is there any way to use the module "apache-airflow-backport-providers-google" without dependency issues on the Composer instance? Or what would be the best way to use the Data Fusion operators without needing to change or pin package versions in Python?
Composer Image version used:
composer-1.17.0-airflow-2.1.2
Thanks.
There is no need to install apache-airflow-backport-providers-google in Airflow 2.0+. This package actually backports Airflow 2 operators into Airflow 1.10.*. In addition, in Composer version composer-1.17.0-airflow-2.1.2 the apache-airflow-providers-google==5.0.0 package is already installed according to the documentation. You should be able to import the Data Fusion operators with the code snippet you posted as is.
However, if this is not the case, you should probably handle the conflict shown in the logs when trying to reinstall apache-airflow-providers-google==5.0.0:
apache-airflow-providers-google 5.0.0 has requirement google-ads>=12.0.0, but you have google-ads 7.0.0.
You can add the requirement google-ads>=12.0.0 to your PyPI dependencies and see if it works.
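For example, as a requirements-style entry for the Composer environment's PyPI packages, the extra pin would be a single line (the version bound is taken from the requirement shown in the logs above):

```
# Extra PyPI dependency for the Composer environment,
# satisfying apache-airflow-providers-google 5.0.0's constraint
google-ads>=12.0.0
```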

Replicating CRAN valgrind issues

I am trying to fix some issues with my package CamelUp on CRAN. This package uses Rcpp to implement a board game. My recent CRAN submissions have come back with comments and output such as:
==32365== 16,591,624 (2,608,512 direct, 13,983,112 indirect) bytes in
20,379 blocks are definitely lost in loss record 3,036 of 3,036
==32365== at 0x4838E86: operator new(unsigned long)
(/builddir/build/BUILD/valgrind-3.15.0/coregrind/m_replacemalloc/vg_replace_malloc.c:344)
==32365== by 0x184ED3E5: Board::Board(Board const&)
(/tmp/CamelUp.Rcheck/00_pkg_src/CamelUp/src/Board.cpp:67)
...
==32365== by 0x1853045D: Simulator::simulateDecision(bool, int)
(/tmp/CamelUp.Rcheck/00_pkg_src/CamelUp/src/Simulator.cpp:64)
==32365== by 0x18536509: Rcpp::CppMethod2<Simulator, Rcpp::Vector<19,
Rcpp::PreserveStorage>, bool, int>::operator()(Simulator*, SEXPREC**)
(R-devel/site-library/Rcpp/include/Rcpp/module/Module_generated_CppMethod.h:195)
==32365== by 0x18535B32:
Rcpp::class_<Simulator>::invoke_notvoid(SEXPREC*, SEXPREC*, SEXPREC**,
int) (R-devel/site-library/Rcpp/include/Rcpp/module/class.h:234)
==32365== by 0x17B9EBE1: CppMethod__invoke_notvoid(SEXPREC*)
(/tmp/RtmpKDbrDI/R.INSTALL1d1838b282b2/Rcpp/src/module.cpp:220)
I'm having trouble replicating these errors and I'm wondering if there is a straightforward way to use valgrind with my package to reproduce these errors. I've tried running locally with valgrind but couldn't get the track origins option to work and make it clear where these errors were in my code. I have also tried using Travis-CI with the following .travis.yml file:
language: r
cache: packages
r_check_args: '--use-valgrind'
addons:
  apt:
    packages:
      - valgrind
r:
  - oldrel
  - release
  - devel
env:
  - VALGRIND_OPTS='--tool=memcheck --memcheck:leak-check=full --track-origins=yes'
I'm hoping there is a way to replicate these errors so I can fix them.
I successfully used Docker to run valgrind for my tests, but I ended up deciding that the best way to integrate this testing was with Travis-CI. My .travis.yml file looks like this:
language: r
cache: packages
addons:
  apt:
    packages:
      - valgrind
r:
  - oldrel
  - release
  - devel
after_success:
  - R -e "install.packages('${PKG_TARBALL}', repos=NULL, type='source')"
  - cd tests
  - R -d "valgrind --tool=memcheck --leak-check=full --track-origins=yes" --vanilla < testthat.R
  - cd ..
This now runs the tests with valgrind, although I have to manually scroll through to see the results. At some point I will make the build fail if there are memory leaks, but for now this works for me. I was mostly unfamiliar with Docker and valgrind, and didn't realize I needed to be in the tests directory to run the tests in testthat.R.
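For local replication, the Docker route mentioned above can be sketched as a small image; this is only an illustration (the rocker/r-base image, the package layout, and the testthat/Rcpp dependencies are assumptions, not the author's exact setup):

```dockerfile
# Sketch: run the package's testthat suite under valgrind in a container
FROM rocker/r-base
RUN apt-get update && apt-get install -y --no-install-recommends valgrind \
    && rm -rf /var/lib/apt/lists/*
# testthat and Rcpp are assumed test/build dependencies
RUN R -e "install.packages(c('testthat', 'Rcpp'))"
COPY . /pkg
RUN R CMD INSTALL /pkg
# run from the tests directory, mirroring the Travis step above
WORKDIR /pkg/tests
CMD ["sh", "-c", "R -d 'valgrind --tool=memcheck --leak-check=full --track-origins=yes' --vanilla < testthat.R"]
```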

Error installing miniconda on GitHub Actions in an R project

I am trying to build and deploy this bookdown project with GitHub Actions. One of the chapters uses the keras R package, which means I need to install Conda (or set up a virtual environment). At the end of the Miniconda installation command, there is an error when trying to collect metadata.
2020-06-24T04:47:59.7495480Z * Miniconda has been successfully installed at '/Users/runner/Library/r-miniconda'.
2020-06-24T04:47:59.7496060Z [1] "/Users/runner/Library/r-miniconda"
2020-06-24T04:48:00.3909040Z * Project '~/runners/2.263.0/work/drake/drake' loaded. [renv 0.10.0]
2020-06-24T04:48:00.7964920Z * The project and lockfile are out of sync -- use `renv::status()` for more details.
2020-06-24T04:48:00.7968340Z Warning message:
2020-06-24T04:48:00.7969190Z Project requested R version '3.6.0' but '4.0.1' is currently being used
2020-06-24T04:48:05.2408080Z Collecting package metadata (current_repodata.json): ...working... failed
2020-06-24T04:48:05.2410390Z
2020-06-24T04:48:05.2410820Z NotWritableError: The current user does not have write permissions to a required path.
2020-06-24T04:48:05.2411080Z path: /usr/local/miniconda/pkgs/cache/b89cf7bf.json
2020-06-24T04:48:05.2411230Z uid: 501
2020-06-24T04:48:05.2411350Z gid: 20
2020-06-24T04:48:05.2411430Z
2020-06-24T04:48:05.2411690Z If you feel that permissions on this path are set incorrectly, you can manually
2020-06-24T04:48:05.2411940Z change them by executing
2020-06-24T04:48:05.2412010Z
2020-06-24T04:48:05.2412260Z $ sudo chown 501:20 /usr/local/miniconda/pkgs/cache/b89cf7bf.json
2020-06-24T04:48:05.2412330Z
2020-06-24T04:48:05.2413470Z In general, it's not advisable to use 'sudo conda'.
2020-06-24T04:48:05.2413570Z
2020-06-24T04:48:05.2414250Z
2020-06-24T04:48:05.2886400Z ##[error]Error: Error 1 occurred creating conda environment r-reticulate
2020-06-24T04:48:05.2890770Z Execution halted
2020-06-24T04:48:05.3050700Z ##[error]Process completed with exit code 1.
The full job log is here.
Depending on how R is set up, this post might be helpful for you: "Unable to change python path in reticulate (R)". You might need to configure the .Renviron file.
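A sketch of such an .Renviron entry, assuming the goal is to steer reticulate away from the root-owned /usr/local/miniconda and toward the user-writable install shown in the log (RETICULATE_MINICONDA_PATH is reticulate's documented override for the Miniconda location):

```sh
# .Renviron — point reticulate at the user-writable Miniconda
# (path taken from the log above; adjust for your runner)
RETICULATE_MINICONDA_PATH=/Users/runner/Library/r-miniconda
```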

First R package - having trouble connecting to GitHub with usethis::use_github

Working on my first R package ever: a data package with medical datasets for teaching.
Using the usethis workflow as described by Emil Hvitfeldt,
on a MacBook Air with macOS 10.15.3 Catalina.
Project seems ok.
R> proj_sitrep()
working_directory: '/Users/peterhiggins/Documents/Rcode/medicaldata'
active_usethis_proj: '/Users/peterhiggins/Documents/Rcode/medicaldata'
active_rstudio_proj: '/Users/peterhiggins/Documents/Rcode/medicaldata'
I appear to be stuck on the GitHub connection
I have an account with repos, I have a PAT.
but
R> git_sitrep()
Git user
Name: 'Peter Higgins'
Email: 'higgi13425#yahoo.com'
Vaccinated: TRUE
usethis + git2r
Default usethis protocol: 'https'
git2r supports SSH: FALSE
Credentials: ''
GitHub
Personal access token: ''
User: 'higgi13425'
Name: 'Peter Higgins'
Repo
Path: '/Users/peterhiggins/Documents/Rcode/.git'
Local branch -> remote tracking branch: '' -> ''
GitHub pull request readiness
ℹ This repo has neither 'origin' nor 'upstream' remote on GitHub.com.
and
R> use_github()
✔ Checking that current branch is 'master'
Error: Detached head; can't continue
Not sure how to get past this problem. Ideas welcome, and any explanation of how I borked this is also welcome.
Thanks!
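No answer is recorded here, but the "Detached head" message usually means HEAD points at a commit rather than a branch, and checking out a branch reattaches it. A minimal sketch in a throwaway repo (all paths and names below are hypothetical; in the real project the fix would amount to `git checkout <branch>` before rerunning use_github()):

```shell
# Sketch: reproduce a detached HEAD and reattach it in a throwaway repo
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "first"
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "second"
branch=$(git symbolic-ref --short HEAD)   # remember the branch name
git checkout -q HEAD~1                    # HEAD is now detached
git symbolic-ref -q HEAD >/dev/null || echo "HEAD is detached"
git checkout -q "$branch"                 # reattach HEAD to the branch
reattached=$(git symbolic-ref --short HEAD)
echo "back on branch: $reattached"
```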

How to run covr::codecov() for a R package on Travis CI

I am trying to add Codecov support via library(covr) to my personal R package sesh.
When I check locally the coverage tests run and report without incident:
covr::package_coverage()
sesh Coverage: 68.75%
R/executeDevtoolDocument.R: 0.00%
R/sesh.R: 69.23%
But when it runs on Travis it encounters an error for missing token:
$ Rscript -e 'covr::codecov()'
Error in if (nzchar(token)) { : argument is of length zero
Calls: <Anonymous>
Execution halted
The R CMD check runs successfully on Travis.
The contents of my .travis.yml:
language: R
matrix:
  include:
    - r: release
      after_success: Rscript -e 'covr::codecov()'
r_github_packages:
  - r-lib/covr
And a link to the most recent Travis report.
I have tried to faithfully follow the covr README for getting set up, and the README says Travis is supported without needing CODECOV_TOKEN, so I have not tried to pass one yet.
What am I missing here?
Following is my .travis.yml:
language: r
cache: packages
script:
  - R CMD build .
  - R CMD check *tar.gz
r_github_packages:
  - r-lib/covr
after_success:
  - Rscript -e 'covr::codecov()'
Adding the repository upload token to codecov.yml avoids the error and successfully runs the coverage report.
codecov:
  token: a1c53d1f-266f-47bc-bb23-3b3d67c57b2d
The token is found in the 'Settings(tab) >>> General(sidebar)' menu on the Codecov page for the repo (which is only visible once you are logged in).
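If you would rather not commit the token to the repository, covr::codecov() also reads the CODECOV_TOKEN environment variable, so an encrypted Travis variable works too. A sketch (the encrypted blob below is a placeholder for the output of `travis encrypt`):

```yaml
# .travis.yml — supply the token via a secure environment variable
# instead of committing it in codecov.yml
env:
  global:
    # placeholder: output of `travis encrypt CODECOV_TOKEN=<token>`
    - secure: "ENCRYPTED_BLOB"
```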
