I am using testthat to test a package with a file tree similar to the following:
.
├── data
│ └── testhaplom.out
├── inst
│ └── test
│ ├── test1.r
│ ├── tmp_S7byVksGRI6Q
│ │ └── testm.desc
│ └── tmp_vBcIkMN1arbn
│ ├── testm.bin
│ └── testm.desc
├── R
│ ├── haplom.r
│ └── winIdx.r
└── tmp_eUG3Qb0PKuiN
└── testhaplom.hap2.desc
In the test1.r file, I need to use the data/testhaplom.out file as input data for a certain function, but if I do test_file("test1.r"), it changes into the inst/test directory and cannot see the data file, giving the error below:
...Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
cannot open file 'data/testhaplom.out': No such file or directory
There are two solutions for your problem:
You could use the relative path (../data/testhaplom.out):
expect_true(file.exists(file.path("..", "data", "testhaplom.out")))
Or you could use system.file to get the location of the data directory:
expect_true(file.exists(file.path(system.file("data", package="YOUR_R_PACKAGE"), "testhaplom.out")))
I prefer the second solution.
BTW: file.path uses the correct path separator on each platform.
Related
I have a project with a proto files in a:
$ tree proto/
proto/
├── common
│ └── request.proto
├── file
│ ├── file.proto
│ └── file_service.proto
├── job
│ ├── job.proto
│ └── job_service.proto
├── pool
│ ├── pool.proto
│ └── pool_service.proto
└── worker
├── worker.proto
└── worker_service.proto
5 directories, 9 files
I want to generate one single file from worker_service.proto, but this file has imports from common.
Is there an option in grpc_tools.protoc to generate one single Python file?
Or is there a tool to generate one combined proto file?
Based on the information, I guess that by "generate one Python file" you mean: instead of generating one Python file for messages (*_pb2.py) and one Python file for services (*_pb2_grpc.py), you hope to concatenate both of them into one Python file. To take a look at the generated file content, here is the Helloworld example.
Combining the two output files is currently not supported by the gRPC Python ProtoBuf plugin (unlike Java/Go). You can post a feature request and add more detail about your use case: https://github.com/grpc/grpc/issues
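As for the imports from common: those are handled by the include path, not by merging files. A minimal sketch of the usual invocation (the output directory name here is an assumption, not from the question) shows how -Iproto as the include root lets worker_service.proto resolve `import "common/request.proto";`:

```python
# Sketch only: builds the argv you would pass to `python -m grpc_tools.protoc`.
# The -Iproto include root is what makes the imports from common/ resolve;
# "generated" is a hypothetical output directory.
args = [
    "-Iproto",                            # include root for proto imports
    "--python_out=generated",             # *_pb2.py files go here
    "--grpc_python_out=generated",        # *_pb2_grpc.py files go here
    "proto/worker/worker_service.proto",
    "proto/common/request.proto",
]
command = "python -m grpc_tools.protoc " + " ".join(args)
print(command)
```

Even with this, you still get the two-file _pb2/_pb2_grpc split per proto; only the import resolution is solved.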
I have to work on a shared Airflow 1.10 project, so I have cloned the repository, and the structure is as follows:
airflow
├── airflow.cfg
├── airflow.db
├── dags
│   └── dags_here.py
├── dags_conf
│   ├── some settings for dags
│   │   ├── __init__.py
│   │   ├── settings.py
│   │   └── todo.txt
│   └── __init__.py
├── utilities
│   ├── __init__.py
│   └── some_name
│       ├── airflow
│       │   ├── hooks.py
│       │   ├── sensors.py
│       │   └── plugins.py
│       ├── common
│       │   └── some_other_code_files.py
│       └── __init__.py
├── README.md
├── unittests.cfg
└── venv
    ├── lib64 -> lib
    └── pyvenv.cfg
But when I try to list dags I face this error:
plugins_manager.py:225} ERROR - No module named 'common'
Traceback (most recent call last):
  File "/home/xxx/venv/lib/python3.8/site-packages/airflow/plugins_manager.py", line 218, in
    m = imp.load_source(namespace, filepath)
  File "/usr/lib/python3.8/imp.py", line 171, in load_source
    module = _load(spec)
  File "", line 702, in _load
  File "", line 671, in _load_unlocked
  File "", line 783, in exec_module
  File "", line 219, in _call_with_frames_removed
  File "/home/xxx/airflow/utilities/xxx/common/insert_obj.py", line 3, in
    from common.xxxxx import Class_name
ModuleNotFoundError: No module named 'common'
[2021-05-30 23:13:05,933] {plugins_manager.py:226} ERROR - Failed to import plugin /home/xxxx/airflow/utilities/xxxxx/common/dag_file.py
I have tried the configs below inside my airflow.cfg, but I always get the same error:
plugins_folder=/home/xxx/airflow/utilities
I also tried adding a directory called plugins inside the Airflow folder, with an __init__.py file, and copying utilities inside it:
plugins_folder=/home/xxx/airflow/plugins
Confirm how the environment variables that Airflow uses, or the airflow.cfg settings, are configured.
environment variables
AIRFLOW_HOME=<YOUR_PROJ>
AIRFLOW__CORE__DAGS_FOLDER=$AIRFLOW_HOME/dags
AIRFLOW__CORE__PLUGINS_FOLDER=$AIRFLOW_HOME/utilities
airflow.cfg
dags_folder = {AIRFLOW_HOME}/dags
plugins_folder = {AIRFLOW_HOME}/utilities
Also, see what is configured in PYTHONPATH:
import sys
from pprint import pprint
pprint(sys.path)
# [
# '/ssddisk/project_name/dags',
# '/ssddisk/project_name/config',
# '/ssddisk/project_name/utilities',
# ...
# ]
See documentation for more details: https://airflow.apache.org/docs/apache-airflow/stable/howto/custom-operator.html#creating-a-custom-operator
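The underlying rule can be shown in a minimal, self-contained sketch (module names here are hypothetical stand-ins): `from common.xxxxx import Class_name` only resolves if the directory *containing* the `common` package is on sys.path, which is exactly what plugins_folder / PYTHONPATH must point at.

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway `common` package, standing in for
# /home/xxx/airflow/utilities/some_name/common.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "common")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "helpers.py"), "w") as f:
    f.write("VALUE = 42\n")

# `import common.helpers` only works once the *parent* of `common`
# is on sys.path -- the same requirement plugins_folder imposes.
sys.path.insert(0, root)
helpers = importlib.import_module("common.helpers")
print(helpers.VALUE)  # 42
```

If the import fails in your setup, the directory printed by the sys.path check above does not include the parent of `common`.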
To make Artifactory as self-service as possible for our users, I'm trying to figure out how to configure permissions so that users can deploy to parts of repositories using their personal or team accounts.
For repositories with a readable directory structure, like anything in the Java world, the Permission Targets work perfectly (https://www.jfrog.com/confluence/display/RTF/Managing+Permissions). But I can't find any docs on how to use this for non-human-predictable/readable directory structures, like PIP, or the flat directory structure, like NPM.
In the java world, repositories have a nicely structured tree like:
~/.m2/repository$ tree org/ | head -20
org/
├── antlr
│ ├── antlr4-master
│ │ └── 4.7.1
│ │ ├── antlr4-master-4.7.1.pom
│ │ ├── antlr4-master-4.7.1.pom.sha1
│ │ └── _remote.repositories
│ └── antlr4-runtime
│ └── 4.7.1
│ ├── antlr4-runtime-4.7.1.jar
│ ├── antlr4-runtime-4.7.1.jar.sha1
│ ├── antlr4-runtime-4.7.1.pom
│ ├── antlr4-runtime-4.7.1.pom.sha1
│ └── _remote.repositories
├── apache
│ ├── ant
│ │ ├── ant
│ │ │ ├── 1.10.1
│ │ │ │ ├── ant-1.10.1.jar
│ │ │ │ ├── ant-1.10.1.jar.sha1
For example, to give teamantl permission to only read, annotate, and write to org/antlr/antlr4-master/**, the following JSON can be PUT to the Artifactory REST API (PUT /api/security/permissions/{permissionTargetName}):
{
"includesPattern": "org/antlr/antlr4-master/**",
"repositories": [
"libs-release-local",
"libs-snapshot-local"
],
"principals": {
"groups" : {
"teamantl": ["r","n","w"]
}
}
}
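For completeness, a short sketch of how that JSON could be sent to the endpoint; the hostname and permission-target name are hypothetical, and only the payload shape is taken from the JSON above (the request is constructed but not executed here):

```python
import json
import urllib.request

# Permission-target payload, copied from the JSON above.
payload = {
    "includesPattern": "org/antlr/antlr4-master/**",
    "repositories": ["libs-release-local", "libs-snapshot-local"],
    "principals": {"groups": {"teamantl": ["r", "n", "w"]}},
}

# Hypothetical Artifactory host and permission-target name.
url = ("https://artifactory.example.com/artifactory"
       "/api/security/permissions/teamantl-target")
request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
print(request.get_method(), request.full_url)
```

Authentication (e.g. an API-key header) would still need to be added before actually sending it with urllib.request.urlopen.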
But a pip repo, for example, is completely hashed, which makes the hashed paths useless in the permission target's "includesPattern".
How should Permission Targets work for repos like PIP and NPM?
Your screenshot shows a virtual PyPI repo, which is generated and thus hash-structured.
Normally, these are backed by physical repos, filled using twine upload and thus having a ‹pkg›/‹version›/‹file› structure – i.e. perfectly usable as permission targets with package granularity.
I'm porting an application from PHP to Node (sailsjs) and at the same time trying to replace Ant with Grunt. I like the current project build structure, and I would like to preserve some of it.
It looks like below...
project root
├── build (git ignored)
│ ├── coverage
│ ├── dist(to be deployed to target env)
│ └── local(to be deployed to local env)
├── lib
│ └── some library files like selenium..etc.
├── src
│ ├── conf
│ │ └── target/local properties
│ ├── scripts(may not be needed with grunt??)
│ │ ├── db
│ │ │ └── create_scripts...
│ │ ├── se
│ │ │ └── run_selenium_scripts...
│ │ └── tests
│ │ └── run_unit_test_scripts...
│ ├── tests
│ │ └── test_code....
│ └── webapp(this is where I'd like to place node[sailsjs] code)
│ └── code....
└── wiki .etc...
It doesn't have to be exactly the same as above, but I'd prefer to build something similar. Now, pretty much all the sailsjs examples I have seen look like the one below.
project root
├── .tmp
│ └── stuff...
├── package.json
├── tasks
│ ├── config
│ │ └── grunt task configs...
│ └── register
│ └── grunt task registrations...
├── tests
│ ├── unit
│ └── selenium
└── Gruntfile.js
Where should I place Gruntfile.js, app.js, and package.json to achieve what I want? What other details do I need to make Grunt function and create artifacts as I want them?
Note: obviously I'm not expecting all the details of the Grunt configuration, but I guess it helps to see where the most important things go and how basic tasks could be configured.
It's hard to give a precise answer without the details of your build steps, but I would suggest:
Gruntfile.js and package.json go in your root folder
you set up your individual build tasks (whatever they are) to output to build: see the docs of each task for how to do that; it's usually the dest option
Hope this helps a bit.
First of all, I am familiar with what the Meteor docs say about this, which I have summarized here:
Files in subdirectories are loaded before files in parent
directories, so that files in the deepest subdirectory are loaded
first, and files in the root directory are loaded last.
Within a directory, files are loaded in alphabetical order by
filename.
After sorting as described above, all files under directories named
lib are moved before everything else (preserving their order).
Finally, all files that match main.* are moved after everything else
(preserving their order).
(Not sure why they say "moved" instead of "loaded", but I think they just mean "loaded".)
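As a toy illustration (not Meteor's actual implementation), the four rules above can be modeled as a depth-then-alphabetical sort followed by two stable partitions:

```python
import os

def load_order(paths):
    """Toy model of Meteor's documented load order: deeper paths first,
    alphabetical within a depth, files under lib/ moved to the front,
    main.* files moved to the end. Simplified: depth is compared globally
    rather than per subtree."""
    ordered = sorted(paths, key=lambda p: (-p.count("/"), p))
    libs = [p for p in ordered if "/lib/" in p or p.startswith("lib/")]
    mains = [p for p in ordered if os.path.basename(p).startswith("main.")]
    rest = [p for p in ordered if p not in libs and p not in mains]
    return libs + rest + mains

print(load_order([
    "client/js/main.js",
    "client/js/nav.js",
    "client/js/lib/util.js",
]))
# -> ['client/js/lib/util.js', 'client/js/nav.js', 'client/js/main.js']
```

Running it on the client/js subtree below shows lib/util.js should indeed load before nav.js, which is what makes the error that follows surprising.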
My app has the following structure:
├── client/
│ ├── html/
│ │ ├── main.html
│ │ ├── nav.html
│ │ └── login.html
│ ├── js/
│ │ ├── lib/
│ │ │ └── util.js
│ │ ├── main.js
│ │ └── nav.js
│ └── my_app.less
├── packages/
│ └── some_stuff_here
├── server/
│ └── main.js
├── shared.js
├── smart.json
└── smart.lock
In client/js/nav.js file I have the following JavaScript code:
Template.nav.nav_close = function() {
return ! Session.get(slugify(this.name)+'-nav-close')
}
In client/js/lib/util.js file I have the following JavaScript code:
var slugify = function(value) {
if (value) {
return value.replace(/\s+/g, '-').replace(/\./g, '-').toLowerCase();
}
}
My understanding is that the client/js/lib/util.js file should be loaded first, making my slugify function available; then client/js/nav.js should be loaded, and the slugify function should be available to it.
In fact what happens is that I see the following error in my Chrome console:
Exception from Deps recompute function: ReferenceError: slugify is not defined
at Object.Template.nav.nav_close (http://localhost:3000/client/js/nav.js?4d7c7953063828c0e4ec237c1a5c67b849076cb5:2:26)
Why am I getting this error?
slugify has file scope because it is declared with var. Remove var to give it package (application) scope.
Meteor Namespacing
slugify = function(value) {
if (value) {
return value.replace(/\s+/g, '-').replace(/\./g, '-').toLowerCase();
}
}