How to fix DVC error 'FileNotFoundError: [Errno 2] No such file or directory' in GitHub Actions - dvc

Trying to pull a folder with test data into a GitHub Actions container, I get:
FileNotFoundError: [Errno 2] No such file or directory
I tried running dvc checkout --relink locally, but that did not help. I am using Google Drive for the data registry with a service account. The login seems to work, but strangely the file is not present. Maybe I can somehow push the files again and recreate the data in the repository?
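For reference, re-pushing from a machine that still has the data in its local cache is just a plain dvc push; a minimal sketch, assuming the Google Drive remote is the default:
# run on a machine where tests/data is still intact
dvc status -c   # compare the local cache against the remote
dvc push        # re-upload any objects missing from the remote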
dvc doctor output:
DVC version: 2.10.2 (pip)
---------------------------------
Platform: Python 3.7.13 on Linux-5.15.0-1014-azure-x86_64-with-debian-bullseye-sid
Supports:
gdrive (pydrive2 = 1.14.0),
webhdfs (fsspec = 2022.5.0),
http (aiohttp = 3.8.1, aiohttp-retry = 2.5.2),
https (aiohttp = 3.8.1, aiohttp-retry = 2.5.2)
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: gdrive
Workspace directory: ext4 on /dev/sda1
Repo: dvc, git
I was able to clone the Git repo and pull the data from the DVC data registry locally; however, it did not work from GitHub Actions.
Full traceback
Successfully installed PyYAML-6.0 aiohttp-3.8.1 aiohttp-retry-2.5.1 aiosignal-1.2.0 appdirs-1.4.4 async-timeout-4.0.2 asynctest-0.13.0 atpublic-2.3 attrs-21.4.0 cached-property-1.5.2 cachetools-4.2.4 certifi-2022.6.15 cffi-1.15.1 charset-normalizer-2.0.12 colorama-0.4.5 commonmark-0.9.1 configobj-5.0.6 contextvars-2.4 cryptography-37.0.4 dataclasses-0.8 decorator-4.4.2 dictdiffer-0.9.0 diskcache-5.4.0 distro-1.7.0 dpath-2.0.6 dulwich-0.20.45 dvc-2.8.1 flatten-dict-0.4.2 flufl.lock-3.2 frozenlist-1.2.0 fsspec-2022.1.0 ftfy-6.0.3 funcy-1.17 future-0.18.2 gitdb-4.0.9 gitpython-3.1.18 google-api-core-2.8.2 google-api-python-client-2.52.0 google-auth-2.9.1 google-auth-httplib2-0.1.0 googleapis-common-protos-1.56.3 grandalf-0.6 httplib2-0.20.4 idna-3.3 idna-ssl-1.1.0 immutables-0.18 importlib-metadata-4.8.3 importlib-resources-5.4.0 mailchecker-4.1.18 multidict-5.2.0 nanotime-0.5.2 networkx-2.5.1 oauth2client-4.1.3 packaging-21.3 pathspec-0.8.1 phonenumbers-8.12.52 ply-3.11 protobuf-3.19.4 psutil-5.9.1 pyOpenSSL-22.0.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycparser-2.21 pydot-1.4.2 pydrive2-1.10.1 pygit2-1.6.1 pygments-2.12.0 pygtrie-2.5.0 pyparsing-2.4.7 python-benedict-0.25.2 python-dateutil-2.8.2 python-fsutil-0.6.1 python-slugify-6.1.2 requests-2.27.1 rich-12.5.1 rsa-4.9 ruamel.yaml-0.17.21 ruamel.yaml.clib-0.2.6 shortuuid-1.0.9 shtab-1.5.5 six-1.16.0 smmap-5.0.0 tabulate-0.8.10 text-unidecode-1.3 toml-0.10.2 tqdm-4.64.0 typing-extensions-4.1.1 uritemplate-4.1.1 urllib3-1.26.10 voluptuous-0.13.1 wcwidth-0.2.5 xmltodict-0.13.0 yarl-1.7.2 zc.lockfile-2.0 zipp-3.6.0
DVC version: 2.8.1 (pip)
---------------------------------
Platform: Python 3.6.15 on Linux-5.15.0-1014-azure-x86_64-with-debian-bullseye-sid
Supports:
gdrive (pydrive2 = 1.10.1),
webhdfs (fsspec = 2022.1.0),
http (aiohttp = 3.8.1, aiohttp-retry = 2.5.1),
https (aiohttp = 3.8.1, aiohttp-retry = 2.5.1)
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: gdrive
Workspace directory: ext4 on /dev/sdb1
Repo: dvc, git
2022-07-22 19:01:08,361 DEBUG: failed to pull cache for 'tests/data'
2022-07-22 19:01:08,364 WARNING: No file hash info found for 'tests/data'. It won't be created.
1 file failed
2022-07-22 19:01:08,365 ERROR: failed to pull data from the cloud - Checkout failed for following targets:
tests/data
Is your cache up to date?
<https://error.dvc.org/missing-files>
------------------------------------------------------------
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/dvc/command/data_sync.py", line 41, in run
glob=self.args.glob,
File "/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/dvc/repo/__init__.py", line 50, in wrapper
return f(repo, *args, **kwargs)
File "/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/dvc/repo/pull.py", line 44, in pull
recursive=recursive,
File "/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/dvc/repo/__init__.py", line 50, in wrapper
return f(repo, *args, **kwargs)
File "/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/dvc/repo/checkout.py", line 110, in checkout
raise CheckoutError(stats["failed"], stats)
dvc.exceptions.CheckoutError: Checkout failed for following targets:
tests/data
Is your cache up to date?
<https://error.dvc.org/missing-files>
------------------------------------------------------------
2022-07-22 19:01:08,369 DEBUG: Analytics is enabled.
2022-07-22 19:01:08,412 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', '/tmp/tmp6i9uchzh']'
2022-07-22 19:01:08,413 DEBUG: Spawned '['daemon', '-q', 'analytics', '/tmp/tmp6i9uchzh']'
Error: The operation was canceled.
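For context, the CI setup described above usually boils down to a workflow step like this minimal sketch; the secret name is an assumption, but GDRIVE_CREDENTIALS_DATA is the environment variable DVC reads for Google Drive service-account credentials:
# .github/workflows/test.yml (hypothetical excerpt)
- uses: actions/checkout@v3
- run: pip install 'dvc[gdrive]'
- run: dvc pull tests/data -v
  env:
    GDRIVE_CREDENTIALS_DATA: ${{ secrets.GDRIVE_CREDENTIALS_DATA }}
Note also that the pip log above installs dvc 2.8.1 while the local dvc doctor reports 2.10.2; pinning CI to the same DVC version removes one variable while debugging.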

Related

RuntimeException: Runtime Error Could not run dbt

I used dbt init to create a profiles.yml in my .dbt folder. It looks like this:
spring_project:
  outputs:
    dev:
      account: xxx.snowflakecomputing.com
      database: PROD_DWH
      password: password
      role: SYSADMIN
      schema: STG
      threads: 1
      type: snowflake
      user: MYUSERNAME
      warehouse: DEV_XS_WH
  target: dev
Now, I created a new folder on my desktop which only contains a dbt_project.yml file that has this:
profile: 'spring_project'
When I run this from my project folder:
dbt debug --config-dir
I get this:
21:48:59 Running with dbt=1.2.1
21:48:59 To view your profiles.yml file, run:
open /Users/myusername/.dbt
However, when I run dbt:
dbt run --profiles-dir /Users/myusername/.dbt
I get this:
21:43:39 Encountered an error while reading the project:
21:43:39 ERROR: Runtime Error
Invalid config version: 1, expected 2
Error encountered in /Users/myusername/Desktop/spring_project/dbt_project.yml
21:43:39 Encountered an error:
Runtime Error
Could not run dbt
21:43:39 Traceback (most recent call last):
File "/opt/homebrew/lib/python3.10/site-packages/dbt/task/base.py", line 108, in from_args
config = cls.ConfigType.from_args(args)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/config/runtime.py", line 226, in from_args
project, profile = cls.collect_parts(args)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/config/runtime.py", line 194, in collect_parts
partial = Project.partial_load(project_root, verify_version=version_check)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/config/project.py", line 639, in partial_load
return PartialProject.from_project_root(
File "/opt/homebrew/lib/python3.10/site-packages/dbt/config/project.py", line 485, in from_project_root
raise DbtProjectError(
dbt.exceptions.DbtProjectError: Runtime Error
Invalid config version: 1, expected 2
Error encountered in /Users/myusername/Desktop/spring_project/dbt_project.yml
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.10/site-packages/dbt/main.py", line 129, in main
results, succeeded = handle_and_check(args)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/main.py", line 191, in handle_and_check
task, res = run_from_args(parsed)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/main.py", line 218, in run_from_args
task = parsed.cls.from_args(args=parsed)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/task/base.py", line 185, in from_args
return super().from_args(args)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/task/base.py", line 114, in from_args
raise dbt.exceptions.RuntimeException("Could not run dbt") from exc
dbt.exceptions.RuntimeException: Runtime Error
Could not run dbt
What am I doing wrong?
Most likely the reason is a missing config-version key, as the error says:
dbt.exceptions.DbtProjectError: Runtime Error
Invalid config version: 1, expected 2
Add config-version to your dbt_project.yml to declare that it uses the v2 structure:
config-version: 2
Without this setting, dbt assumes your dbt_project.yml uses the version 1 syntax, which was deprecated in dbt v0.19.0.
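A minimal dbt_project.yml that satisfies this check looks like the following sketch (name and version are placeholders; profile matches the profiles.yml above):
name: 'spring_project'
version: '1.0.0'
config-version: 2
profile: 'spring_project'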

DVC - Forbidden: An error occurred (403) when calling the HeadObject operation

I just started with DVC. The following are the steps I am taking to push my models to S3.
Initialize:
dvc init
Add the bucket URL:
dvc remote add -d storage s3://mybucket/dvcstore
Add some files:
dvc add somefiles
Add the AWS keys:
dvc remote modify storage access_key_id AWS_ACCESS_KEY_ID
dvc remote modify storage secret_access_key AWS_SECRET_ACCESS_KEY
Now when I push:
dvc push
it shows:
ERROR: unexpected error - Forbidden: An error occurred (403) when calling the HeadObject operation: Forbidden
Am I missing something?
Update 1
Result of dvc doctor:
C:\my-server>dvc doctor
DVC version: 2.7.4 (pip)
---------------------------------
Platform: Python 3.8.0 on Windows-10-10.0.19041-SP0
Supports:
http (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
https (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
s3 (s3fs = 2021.8.1, boto3 = 1.17.106)
Cache types: hardlink
Cache directory: NTFS on C:\
Caches: local
Remotes: s3
Workspace directory: NTFS on C:\
Repo: dvc, git
And the dvc push -vv:
C:\my-server>dvc push -vv
2021-09-21 13:21:38,382 TRACE: Namespace(all_branches=False, all_commits=False, all_tags=False, cd='.', cmd='push', cprofile=False, cprofile_dump=None, func=<class 'dvc.command.data_sync.CmdDataPush'>, glob=False, instrument=False, instrument_open=False, jobs=None, pdb=False, quiet=0, recursive=False, remote=None, run_cache=False, targets=[], verbose=2, version=None, with_deps=False)
2021-09-21 13:21:39,293 TRACE: Assuming 'C:\my-server\.dvc\cache\02\5b196462b86d2f10a9f659e2224da8.dir' is unchanged since
it is read-only
2021-09-21 13:21:39,296 TRACE: Assuming 'C:\my-server\.dvc\cache\02\5b196462b86d2f10a9f659e2224da8.dir' is unchanged since
it is read-only
2021-09-21 13:21:40,114 DEBUG: Preparing to transfer data from '.dvc\cache' to 's3://my-bucket/models'
2021-09-21 13:21:40,117 DEBUG: Preparing to collect status from 's3://my-bucket/models'
2021-09-21 13:21:40,119 DEBUG: Collecting status from 's3://my-bucket/models'
2021-09-21 13:21:40,121 DEBUG: Querying 1 hashes via object_exists
2021-09-21 13:21:44,840 ERROR: unexpected error - Forbidden: An error occurred (403) when calling the HeadObject operation: Forbidden
------------------------------------------------------------
Traceback (most recent call last):
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 248, in _call_s3
out = await method(**additional_kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\aiobotocore\client.py", line 155, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The AWS Access Key Id you provided does not exist in our records.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 1080, in _info
out = await self._simple_info(path)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 993, in _simple_info
out = await self._call_s3(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 268, in _call_s3
raise err
PermissionError: The AWS Access Key Id you provided does not exist in our records.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 248, in _call_s3
out = await method(**additional_kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\aiobotocore\client.py", line 155, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\main.py", line 55, in main
ret = cmd.do_run()
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\command\base.py", line 45, in do_run
return self.run()
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\command\data_sync.py", line 57, in run
processed_files_count = self.repo.push(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\repo\__init__.py", line 50, in wrapper
return f(repo, *args, **kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\repo\push.py", line 48, in push
pushed += self.cloud.push(obj_ids, jobs, remote=remote, odb=odb)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\data_cloud.py", line 85, in push
return transfer(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\transfer.py", line 153, in transfer
status = compare_status(src, dest, obj_ids, check_deleted=False, **kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\status.py", line 160, in compare_status
dest_exists, dest_missing = status(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\status.py", line 122, in status
exists = hashes.intersection(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\status.py", line 48, in _indexed_dir_hashes
dir_exists.update(odb.list_hashes_exists(dir_hashes - dir_exists))
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\db\base.py", line 415, in list_hashes_exists
ret = list(itertools.compress(hashes, in_remote))
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 611, in result_iterator
yield fs.pop().result()
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 439, in result
return self.__get_result()
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 388, in __get_result
raise self._exception
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\db\base.py", line 406, in exists_with_progress
ret = self.fs.exists(path_info)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\fs\fsspec_wrapper.py", line 97, in exists
return self.fs.exists(self._with_bucket(path_info))
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\fsspec\asyn.py", line 88, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\fsspec\asyn.py", line 69, in sync
raise result[0]
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\fsspec\asyn.py", line 25, in _runner
result[0] = await coro
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 820, in _exists
await self._info(path, bucket, key, version_id=version_id)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 1084, in _info
out = await self._version_aware_info(path, version_id)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 1027, in _version_aware_info
out = await self._call_s3(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 268, in _call_s3
raise err
PermissionError: Forbidden
------------------------------------------------------------
2021-09-21 13:21:45,178 DEBUG: Version info for developers:
DVC version: 2.7.4 (pip)
---------------------------------
Platform: Python 3.8.0 on Windows-10-10.0.19041-SP0
Supports:
http (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
https (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
s3 (s3fs = 2021.8.1, boto3 = 1.17.106)
Cache types: hardlink
Cache directory: NTFS on C:\
Caches: local
Remotes: s3
Workspace directory: NTFS on C:\
Repo: dvc, git
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
2021-09-21 13:21:45,185 DEBUG: Analytics is enabled.
2021-09-21 13:21:45,446 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', 'C:\\Users\\sgarg\\AppData\\Local\\Temp\\tmpm_p9f3eq']'
2021-09-21 13:21:45,456 DEBUG: Spawned '['daemon', '-q', 'analytics', 'C:\\Users\\sgarg\\AppData\\Local\\Temp\\tmpm_p9f3eq']'
Could you please run dvc doctor, then rerun dvc push with the -vv flag, and share both results?
PermissionError: The AWS Access Key Id you provided does not exist in our records.
Does the AWS CLI work correctly for you? First set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables, then:
aws s3 ls s3://mybucket/dvcstore
I faced the same issue, even after configuring both the CLI and dvc.config properly. It turned out I had run pip install dvc_s3 instead of pip install 'dvc[s3]'. The latter resolved my issue.
Solution
Check which S3 URL DVC is pointing to in ./.dvc/config, under the ['remote "storage"'] section.
Check that your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set via aws configure.
Ask your AWS admin whether that AWS_ACCESS_KEY_ID is covered by the S3 policy, i.e. on the bucket's permitted list.
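As a sketch of that checklist, with the remote name storage taken from the question (the --local flag keeps the keys in .dvc/config.local instead of the committed .dvc/config):
aws s3 ls s3://mybucket/dvcstore    # should list objects rather than return a 403
dvc remote modify --local storage access_key_id 'AKIA...'
dvc remote modify --local storage secret_access_key '...'
dvc push -v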

How to deal with airflow error taskinstance.py line 983, in _run_raw_task / psycopg2?

I have a couple of DAGs running which execute python scripts that basically copy data from A to B. One of the DAGs throws an error - but the data still gets copied, so the failure apparently has no influence on the execution of the python script itself.
The only "special" thing about this DAG is that its python script opens a connection to a postgres database using psycopg2~=2.8.5, but I am not sure whether that is the root cause.
I also checked the permissions for the user, which seem fine, at least in the dags folder.
Is there any specific timeout value I have to adjust in the config?
[2021-05-19 12:53:42,036] {taskinstance.py:1145} ERROR - Bash command failed 255
Traceback (most recent call last):
File "/hereisthepath/venv/lib64/python3.6/site-packages/airflow/models/taskinstance.py", line 983, in _run_raw_task
result = task_copy.execute(context=context)
File "/hereisthepath/venv/lib64/python3.6/site-packages/airflow/operators/bash_operator.py", line 134, in execute
raise AirflowException("Bash command failed")
airflow.exceptions.AirflowException: Bash command failed
Update: This is the part of the operator that fails. I copied the entire function; the exception is raised at line 134 (raise AirflowException("Bash command failed")):
def execute(self, context):
    """
    Execute the bash command in a temporary directory
    which will be cleaned afterwards
    """
    self.log.info("Tmp dir root location: \n %s", gettempdir())

    # Prepare env for child process.
    env = self.env
    if env is None:
        env = os.environ.copy()
    airflow_context_vars = context_to_airflow_vars(context, in_env_var_format=True)
    self.log.debug('Exporting the following env vars:\n%s',
                   '\n'.join(["{}={}".format(k, v)
                              for k, v in airflow_context_vars.items()]))
    env.update(airflow_context_vars)

    self.lineage_data = self.bash_command

    with TemporaryDirectory(prefix='airflowtmp') as tmp_dir:
        with NamedTemporaryFile(dir=tmp_dir, prefix=self.task_id) as f:
            f.write(bytes(self.bash_command, 'utf_8'))
            f.flush()
            fname = f.name
            script_location = os.path.abspath(fname)
            self.log.info(
                "Temporary script location: %s",
                script_location
            )

            def pre_exec():
                # Restore default signal disposition and invoke setsid
                for sig in ('SIGPIPE', 'SIGXFZ', 'SIGXFSZ'):
                    if hasattr(signal, sig):
                        signal.signal(getattr(signal, sig), signal.SIG_DFL)
                os.setsid()

            self.log.info("Running command: %s", self.bash_command)
            self.sub_process = Popen(
                ['bash', fname],
                stdout=PIPE, stderr=STDOUT,
                cwd=tmp_dir, env=env,
                preexec_fn=pre_exec)

            self.log.info("Output:")
            line = ''
            for line in iter(self.sub_process.stdout.readline, b''):
                line = line.decode(self.output_encoding).rstrip()
                self.log.info(line)
            self.sub_process.wait()
            self.log.info(
                "Command exited with return code %s",
                self.sub_process.returncode
            )

            if self.sub_process.returncode:
                raise AirflowException("Bash command failed")

    if self.xcom_push_flag:
        return line
Update 2: This behavior really does seem to be related to psycopg2: I have now tested all other possible error sources, and the error occurs only when I test against the postgres data source using the psycopg2 package. Meanwhile I have also upgraded to the most recent version of psycopg2 (2.8.6), but without success.
Maybe this helps with further investigation.
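Since the BashOperator only sees the wrapped command's exit status, one way to narrow this down is to run the same command outside Airflow and inspect the status yourself; a minimal sketch with a hypothetical script path:
python /hereisthepath/copy_script.py
echo "exit code: $?"   # 255 is what sys.exit(-1) maps to, since exit codes are taken modulo 256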

ResolvePackageNotFound: m2w64_c_win-64 while building R package r-ffbase

I'm trying to install an R package into my Anaconda environment DS-ML, building it with conda skeleton. The environment has both R and Python installed.
The package is ffbase from CRAN (packaged for conda as r-ffbase).
I'm on C:\users\public. After typing:
conda skeleton cran ffbase
a skeleton folder named "r-ffbase" has been created successfully.
I get a conda exception while building the package -
conda.exceptions.ResolvePackageNotFound:
- m2w64_c_win-64
I couldn't find this package in any channel. Where can I get it from? And why do I need it at all?
This is on a Windows 10 server (win64), with Anaconda installed (conda 4.7.12), numpy 1.16.5 and R version 3.6.1 in the DS-ML environment.
I've tried typing in the conda prompt:
conda build r-ffbase --R=3.6.1 --numpy=1.16.5
I get the following full error message:
(DS-ML) C:\Users\Public>conda build r-ffbase --R=3.6.1 --numpy=1.16.5
Adding in variants from internal_defaults
INFO:conda_build.variants:Adding in variants from internal_defaults
Adding in variants from config.variant
INFO:conda_build.variants:Adding in variants from config.variant
Attempting to finalize metadata for r-ffbase
INFO:conda_build.metadata:Attempting to finalize metadata for r-ffbase
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... failed
Leaving build/test directories:
Work:
C:\ProgramData\Anaconda3\conda-bld\work
Test:
C:\ProgramData\Anaconda3\conda-bld\test_tmp
Leaving build/test environments:
Test:
source activate C:\ProgramData\Anaconda3\conda-bld\_test_env
Build:
source activate C:\ProgramData\Anaconda3\conda-bld\_build_env
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\environ.py", line 756, in get_install_actions
actions = install_actions(prefix, index, specs, force=True)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\common\io.py", line 88, in decorated
return f(*args, **kwds)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\plan.py", line 474, in install_actions
txn = solver.solve_for_transaction(prune=prune, ignore_pinned=not pinned)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\solve.py", line 117, in solve_for_transaction
should_retry_solve)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\solve.py", line 158, in solve_for_diff
force_remove, should_retry_solve)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\solve.py", line 275, in solve_final_state
ssc = self._add_specs(ssc)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\core\solve.py", line 555, in _add_specs
explicit_pool = ssc.r._get_package_pool(self.specs_to_add)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\resolve.py", line 523, in _get_package_pool
pool = self.get_reduced_index(specs)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\common\io.py", line 88, in decorated
return f(*args, **kwds)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\resolve.py", line 544, in get_reduced_index
explicit_specs, features = self.verify_specs(explicit_specs)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda\resolve.py", line 280, in verify_specs
raise ResolvePackageNotFound(bad_deps)
conda.exceptions.ResolvePackageNotFound:
- m2w64_c_win-64
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\Scripts\conda-build-script.py", line 10, in <module>
sys.exit(main())
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\cli\main_build.py", line 445, in main
execute(sys.argv[1:])
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\cli\main_build.py", line 436, in execute
verify=args.verify, variants=args.variants)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\api.py", line 209, in build
notest=notest, need_source_download=need_source_download, variants=variants)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\build.py", line 2343, in build_tree
notest=notest,
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\build.py", line 1334, in build
output_metas = expand_outputs([(m, need_source_download, need_reparse_in_env)])
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\render.py", line 746, in expand_outputs
for (output_dict, m) in _m.copy().get_output_metadata_set(permit_unsatisfiable_variants=False):
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\metadata.py", line 2047, in get_output_metadata_set
bypass_env_check=bypass_env_check)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\metadata.py", line 721, in finalize_outputs_pass
permit_unsatisfiable_variants=permit_unsatisfiable_variants)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\render.py", line 527, in finalize_metadata
exclude_pattern)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\render.py", line 390, in add_upstream_pins
permit_unsatisfiable_variants, exclude_pattern)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\render.py", line 378, in _read_upstream_pin_files
permit_unsatisfiable_variants=permit_unsatisfiable_variants)
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\render.py", line 154, in get_env_dependencies
channel_urls=tuple(m.config.channel_urls))
File "C:\ProgramData\Anaconda3\lib\site-packages\conda_build\environ.py", line 758, in get_install_actions
raise DependencyNeedsBuildingError(exc, subdir=subdir)
conda_build.exceptions.DependencyNeedsBuildingError: Unsatisfiable dependencies for platform win-64: {'m2w64_c_win-64'}
In my recipe/meta.yaml, change:
requirements:
  build:
    - {{ compiler("c") }}              # [unix]
    - {{ compiler('m2w64_c') }}        # [win]
    - {{ compiler("fortran") }}        # [unix]
    - {{ compiler('m2w64_fortran') }}  # [win]
to:
requirements:
  build:
    - {{ compiler("c") }}        # [unix]
    - m2w64-gcc                  # [win]
    - {{ compiler("fortran") }}  # [unix]
    - m2w64-gcc-fortran          # [win]
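After patching the recipe, rebuilding with the original command should let conda resolve m2w64-gcc and m2w64-gcc-fortran directly; if they still cannot be found, adding Anaconda's msys2 channel (where the m2w64 toolchain packages live) is a reasonable next step:
conda build r-ffbase --R=3.6.1 --numpy=1.16.5 -c msys2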

build tensorflow/serving for python3 failed

System information
OS Platform and Distribution: Linux CentOS7.2
TensorFlow Serving installed from (source or binary): Source
TensorFlow Serving version: r1.9
Python Version: 3.6.2
Describe the problem
CentOS 7.2 ships Python 2.7 by default, and I want to build tensorflow-serving for Python 3. So I used pyenv to install Python 3.6.2 and, inside that pyenv virtualenv, cloned tensorflow/serving r1.9 to build it from source with bazel. In the end the build failed because of an "import mock" error. I saw that the serving/tools/bazel.rc file uses /usr/bin/python as the python runner, so I changed the file to point to my Python 3.6.2, but it failed again, with an error like:
" _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: /home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/execroot/tf_serving/bazel-out/host/bin/external/org_tensorflow/tensorflow/tools/api/generator/create_python_api.runfiles/org_tensorflow/tensorflow/python/_pywrap_tensorflow_internal.so: undefined symbol: _Py_FalseStruct"
Exact Steps to Reproduce
I changed the bazel.rc file as follows
build:cuda --crosstool_top=@local_config_cuda//crosstool:toolchain
build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true
build --action_env PYTHON_BIN_PATH="/home/pyenv/.pyenv/versions/tensorflow-serving/bin/python"
build --define PYTHON_BIN_PATH=/home/pyenv/.pyenv/versions/tensorflow-serving/bin/python # tensorflow-serving is a virtualenv from python3.6.2 installed by pyenv
build --spawn_strategy=standalone --genrule_strategy=standalone
test --spawn_strategy=standalone --genrule_strategy=standalone
run --spawn_strategy=standalone --genrule_strategy=standalone
build --define=grpc_no_ares=true
build --define with_hdfs_support=true # support HDFS
# TODO(b/69809703): Remove once no longer required for TensorFlow to build.
build --copt=-DGEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK
Source code / logs
The build also failed; the error log is as follows:
ERROR:
/home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/external/org_tensorflow/tensorflow/BUILD:541:1: Executing genrule @org_tensorflow//tensorflow:python_api_gen failed (Exit 1): bash failed: error executing command
(cd /home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/execroot/tf_serving && \
exec env - \
PATH=/home/pyenv/.pyenv/plugins/pyenv-virtualenv/shims:/home/pyenv/.pyenv/shims:/home/pyenv/.pyenv/bin:/usr/local/ffmpeg/bin/:/opt/jdk1.8.0_112/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/pyenv/.local/bin:/home/pyenv/bin \
PYTHON_BIN_PATH=/home/pyenv/.pyenv/versions/tensorflow-serving/bin/python \
/bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; bazel-out/host/bin/external/org_tensorflow/tensorflow/tools/api/generator/create_python_api --root_init_template=external/org_tensorflow/tensorflow/api_template.__init__.py --apidir=bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/app/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/bitwise/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/compat/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/data/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/distributions/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/distributions/bijectors/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/errors/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/estimator/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/estimator/export/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/estimator/inputs/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/feature_column/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/gfile/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/graph_util/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/image/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/initializers/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/activations/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/densenet/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/inception_resnet_v2/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/inception_v3/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/mobilenet/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/nasnet/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/resnet50/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/vgg16/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/vgg19/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/applications/xception/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/backend/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/callbacks/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/constraints/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/datasets/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/datasets/boston_housing/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/datasets/cifar10/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/datasets/cifar100/__init__.py 
bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/datasets/fashion_mnist/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/datasets/imdb/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/datasets/mnist/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/datasets/reuters/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/estimator/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/initializers/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/layers/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/losses/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/metrics/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/models/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/optimizers/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/preprocessing/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/preprocessing/image/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/preprocessing/sequence/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/preprocessing/text/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/regularizers/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/utils/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/wrappers/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/keras/wrappers/scikit_learn/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/layers/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/linalg/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/logging/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/losses/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/manip/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/math/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/metrics/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/nn/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/nn/rnn_cell/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/profiler/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/python_io/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/resource_loader/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/strings/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/saved_model/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/saved_model/builder/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/saved_model/constants/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/saved_model/loader/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/saved_model/main_op/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/saved_model/signature_constants/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/saved_model/signature_def_utils/__init__.py 
bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/saved_model/tag_constants/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/saved_model/utils/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/sets/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/sparse/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/spectral/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/summary/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/sysconfig/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/test/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/train/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/train/queue_runner/__init__.py bazel-out/k8-opt/genfiles/external/org_tensorflow/tensorflow/user_ops/__init__.py')
Traceback (most recent call last):
File "/home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/execroot/tf_serving/bazel-out/host/bin/external/org_tensorflow/tensorflow/tools/api/generator/create_python_api.runfiles/tf_serving/../org_tensorflow/tensorflow/tools/api/generator/create_python_api.py", line 27, in <module>
from tensorflow.python.util import tf_decorator
File "/home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/execroot/tf_serving/bazel-out/host/bin/external/org_tensorflow/tensorflow/tools/api/generator/create_python_api.runfiles/org_tensorflow/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/execroot/tf_serving/bazel-out/host/bin/external/org_tensorflow/tensorflow/tools/api/generator/create_python_api.runfiles/org_tensorflow/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/execroot/tf_serving/bazel-out/host/bin/external/org_tensorflow/tensorflow/tools/api/generator/create_python_api.runfiles/org_tensorflow/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/execroot/tf_serving/bazel-out/host/bin/external/org_tensorflow/tensorflow/tools/api/generator/create_python_api.runfiles/org_tensorflow/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/execroot/tf_serving/bazel-out/host/bin/external/org_tensorflow/tensorflow/tools/api/generator/create_python_api.runfiles/org_tensorflow/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: /home/pyenv/.cache/bazel/_bazel_pyenv/3e832003f40725f2f414b6fe95aca127/execroot/tf_serving/bazel-out/host/bin/external/org_tensorflow/tensorflow/tools/api/generator/create_python_api.runfiles/org_tensorflow/tensorflow/python/_pywrap_tensorflow_internal.so: undefined symbol: _Py_FalseStruct
Then how can I build tensorflow-serving r1.9 with python3.6.2 support on CentOS7?
Changing tools/bazel.rc to configure the python path can resolve this problem.
However, this does not seem to hold for tensorflow-serving r1.10 or r1.11.
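For a one-off build you can also pass the interpreter path on the command line instead of editing tools/bazel.rc; a sketch reusing the flags from the question (the model-server target is the usual entry point):
bazel build \
  --action_env PYTHON_BIN_PATH=$(pyenv which python) \
  --define PYTHON_BIN_PATH=$(pyenv which python) \
  //tensorflow_serving/model_servers:tensorflow_model_server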
