If I have tests/testthat/testthat.R, devtools::test() finds it and runs it, but covr::package_coverage() and R CMD check do not find it.
If I have it as tests/testthat.R, devtools::test() doesn't find it, but covr::package_coverage() and R CMD check do.
What's the best way to do this?
R 4.0.0; testthat 2.3.2; covr 3.5.0
The directory structure of your tests/ folder should look like this:
.
├── testthat
│   ├── test-1.R
│   ├── test-2.R
│   ├── test-3.R
│   ├── test-4.R
│   └── test-5.R
└── testthat.R
And testthat.R contains
library(testthat)
library(mypackage)
test_check("mypackage")
This layout works with R CMD check, covr::package_coverage(), devtools::test(), and devtools::check().
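Each test-*.R file under tests/testthat/ is an ordinary testthat script; a minimal, purely illustrative placeholder would be:
test_that("example", {
  expect_equal(1 + 1, 2)
})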
I am currently building a backend using FastAPI, and I am facing some issues running the backend with Poetry scripts. This is my project structure:
backend
├── src
│   └── asgi.py
├── Dockerfile
├── poetry.lock
└── pyproject.toml
pyproject.toml
[tool.poetry]
name = "backend"
version = "0.1.0"
description = ""
authors = ["Pierre-Alexandre35 <46579114+pamousset75#users.noreply.github.com>"]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.9"
uvicorn = "^0.17.6"
fastapi = "^0.78.0"
psycopg2 = "^2.9.3"
jwt = "^1.3.1"
python-multipart = "^0.0.5"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
[tool.poetry.scripts]
foo='asgi:__main__'
If I run poetry run python asgi.py, it works perfectly, but if I use the poetry foo script, I get No file/folder found for package backend. These are all the combinations I tried, and I get the same error for every poetry run foo:
foo='asgi:main'
foo='backend.asgi:__main__'
foo='backend.asgi:main'
foo='backend.asgi:.'
Your project structure does not seem to be correct. Assuming backend is the package you are trying to create, use this structure:
.
├── pyproject.toml
├── poetry.lock
├── README.md
└── backend
    └── src
        ├── asgi.py
        ├── Dockerfile
        └── __init__.py
Also, in the scripts section, use the following (assuming you are trying to run main with foo):
[tool.poetry.scripts]
foo='backend.asgi:__main__'
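Poetry script entries must point at an importable callable (module:function). As an illustrative sketch only — assuming asgi.py sits directly in a backend package (backend/__init__.py present) so it imports as backend.asgi, and a script entry such as foo = 'backend.asgi:main' — it could look like:
# backend/asgi.py -- minimal sketch; module path and names are assumptions
from fastapi import FastAPI
import uvicorn

app = FastAPI()

@app.get("/")
def health() -> dict:
    return {"status": "ok"}

def main() -> None:
    # Callable referenced by the [tool.poetry.scripts] entry,
    # e.g. foo = 'backend.asgi:main'
    uvicorn.run("backend.asgi:app", host="127.0.0.1", port=8000)

if __name__ == "__main__":
    main()
After poetry install, poetry run foo should then start the uvicorn server.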
I have a project with proto files laid out as follows:
$ tree proto/
proto/
├── common
│   └── request.proto
├── file
│   ├── file.proto
│   └── file_service.proto
├── job
│   ├── job.proto
│   └── job_service.proto
├── pool
│   ├── pool.proto
│   └── pool_service.proto
└── worker
    ├── worker.proto
    └── worker_service.proto

5 directories, 9 files
I want to generate one single file from worker_service.proto, but this file has imports from common.
Is there an option in grpc_tools.protoc to generate one single Python file?
Or is there a tool to merge the protos into one proto file?
Based on the information given, I guess that by "generate one Python file" you mean: instead of generating one Python file for messages (*_pb2.py) and one Python file for services (*_pb2_grpc.py), you hope to concatenate both of them into one Python file. To take a look at the generated file content, see the Helloworld example.
Combining the two output files is currently not supported by the gRPC Python ProtoBuf plugin (unlike Java/Go). You can post a feature request and add more detail about your use case: https://github.com/grpc/grpc/issues
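For reference, a typical invocation producing the usual pair of generated files for this layout might look like the following; the gen/ output directory and the assumption that worker_service.proto imports worker.proto and common/request.proto are illustrative, not taken from the question:
$ mkdir -p gen
$ python -m grpc_tools.protoc -Iproto \
      --python_out=gen --grpc_python_out=gen \
      proto/worker/worker_service.proto proto/worker/worker.proto proto/common/request.proto
The -Iproto root is what lets imports such as common/request.proto resolve; each input still comes out as its own *_pb2.py / *_pb2_grpc.py pair, since there is no flag to merge them.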
To make Artifactory as self-service as possible for our users, I want to give users permission to deploy to parts of repositories using their personal or team accounts, and I'm trying to figure out how to configure this.
For repositories based on a readable directory structure, like anything in the Java world, Permission Targets work perfectly (https://www.jfrog.com/confluence/display/RTF/Managing+Permissions). But I can't find any docs on how to use this for non-human-predictable/readable directory structures, like PIP, or the flat directory structure, like NPM.
In the Java world, repositories have a nicely structured tree like:
~/.m2/repository$ tree org/ | head -20
org/
├── antlr
│   ├── antlr4-master
│   │   └── 4.7.1
│   │       ├── antlr4-master-4.7.1.pom
│   │       ├── antlr4-master-4.7.1.pom.sha1
│   │       └── _remote.repositories
│   └── antlr4-runtime
│       └── 4.7.1
│           ├── antlr4-runtime-4.7.1.jar
│           ├── antlr4-runtime-4.7.1.jar.sha1
│           ├── antlr4-runtime-4.7.1.pom
│           ├── antlr4-runtime-4.7.1.pom.sha1
│           └── _remote.repositories
├── apache
│   ├── ant
│   │   ├── ant
│   │   │   ├── 1.10.1
│   │   │   │   ├── ant-1.10.1.jar
│   │   │   │   ├── ant-1.10.1.jar.sha1
For example, to give teamantl permission to only read, annotate, and write to org/antlr/antlr4-master/**, the following JSON can be PUT to the Artifactory REST API (PUT /api/security/permissions/{permissionTargetName}):
{
"includesPattern": "org/antlr/antlr4-master/**",
"repositories": [
"libs-release-local",
"libs-snapshot-local"
],
"principals": {
"groups" : {
"teamantl": ["r","n","w"]
}
}
}
But a pip repo, for example, is completely hashed:
[screenshot: hash-structured directory tree of a PyPI repository]
This is completely useless in the permission target "includesPattern".
How should this (Permission Targets) work for repos like PIP and NPM?
Your screenshot shows a virtual PyPI repo, which is generated and thus hash-structured.
Normally, these are backed by physical repos, filled using twine upload and thus having a ‹pkg›/‹version›/‹file› structure – i.e. perfectly usable as permission targets with package granularity.
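Building on that, a permission target scoped to one package in such a physical PyPI repo can mirror the Maven example above. This is a sketch only — the repository name pypi-local and package name mypackage are placeholders, not from the original question:
{
  "includesPattern": "mypackage/**",
  "repositories": [
    "pypi-local"
  ],
  "principals": {
    "groups" : {
      "teamantl": ["r","n","w"]
    }
  }
}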
I am trying to use grunt on my new Windows 8 machine, but it is not working. Here's the problem.
c:\Users\User\Documents\Source\Project>npm install -g grunt
grunt@0.4.5 C:\Users\User\AppData\Roaming\npm\node_modules\grunt
├── dateformat@1.0.2-1.2.3
├── eventemitter2@0.4.14
├── which@1.0.5
├── getobject@0.1.0
├── colors@0.6.2
├── rimraf@2.2.8
├── async@0.1.22
├── hooker@0.2.3
├── grunt-legacy-util@0.2.0
├── exit@0.1.2
├── lodash@0.9.2
├── coffee-script@1.3.3
├── underscore.string@2.2.1
├── iconv-lite@0.2.11
├── grunt-legacy-log@0.1.1 (underscore.string@2.3.3, lodash@2.4.1)
├── nopt@1.0.10 (abbrev@1.0.5)
├── glob@3.1.21 (inherits@1.0.0, graceful-fs@1.2.3)
├── minimatch@0.2.14 (sigmund@1.0.0, lru-cache@2.5.0)
├── findup-sync@0.1.3 (lodash@2.4.1, glob@3.2.11)
└── js-yaml@2.0.5 (esprima@1.0.4, argparse@0.1.15)
c:\Users\User\Documents\Source\Project>grunt
'grunt' is not recognized as an internal or external command,
operable program or batch file.
I'm looking for grunt.cmd, which I cannot find anywhere on my system. Is somebody else facing this problem too?
Try installing grunt-cli and see if that fixes your problem (since grunt 0.4, the grunt command is provided by the separate grunt-cli package rather than by grunt itself):
npm install -g grunt-cli
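Once grunt-cli is installed globally, the grunt shims should appear in npm's global bin directory, which is normally already on PATH. A quick check might look like this (paths assume the default npm prefix on Windows):
c:\Users\User\Documents\Source\Project>where grunt
C:\Users\User\AppData\Roaming\npm\grunt
C:\Users\User\AppData\Roaming\npm\grunt.cmd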
I am using testthat to test a package with a file tree similar to the following:
.
├── data
│   └── testhaplom.out
├── inst
│   └── test
│       ├── test1.r
│       ├── tmp_S7byVksGRI6Q
│       │   └── testm.desc
│       └── tmp_vBcIkMN1arbn
│           ├── testm.bin
│           └── testm.desc
├── R
│   ├── haplom.r
│   └── winIdx.r
└── tmp_eUG3Qb0PKuiN
    └── testhaplom.hap2.desc
In the test1.r file, I need to use the data/testhaplom.out file as input data for a certain function, but if I run test_file("test1.r"), it changes into the inst/test directory and cannot see the data file, giving the error below:
...Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
cannot open file 'data/testhaplom.out': No such file or directory
There are two solutions for your problem:
You could use the relative path (../data/testhaplom.out):
expect_true(file.exists(file.path("..", "data", "testhaplom.out")))
Or you could use system.file to get the location of the data directory:
expect_true(file.exists(file.path(system.file("data", package="YOUR_R_PACKAGE"), "testhaplom.out")))
I prefer the second solution.
BTW: file.path uses the correct path separator on each platform.
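Putting the second option into a test could look roughly like this (YOUR_R_PACKAGE is a placeholder as above, and haplom() stands in for whichever function actually consumes the file):
library(testthat)

test_that("haplom input file is available", {
  # Locate the installed copy of data/testhaplom.out inside the package
  infile <- system.file("data", "testhaplom.out", package = "YOUR_R_PACKAGE")
  expect_true(file.exists(infile))
  # result <- haplom(infile)  # hypothetical call to the function under test
})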