I am trying to deploy a simple Flask app with a SQLite database. It works fine when running locally. I deploy it on Vercel via GitHub, and it works until it needs to write to the database, at which point I get:
OperationalError: attempt to write a readonly database
Both the DB and the 'instance' folder that contains it are full-access on my local system; I can confirm this with
attrib ~/instance
on the command line (I am working with Windows 10). Is there a way to check and change the access mode of a file or a folder right on the GitHub repo?
Here are the logs regarding this error:
[ERROR] 2023-01-12T17:19:33.931Z 63bacb5b-ec13-4247-b784-9aaba99376a0
Exception on /register [POST]
Traceback (most recent call last):
  File "/var/task/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(
  File "/var/task/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
sqlite3.OperationalError: attempt to write a readonly database
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/var/task/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "/var/task/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/var/task/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "/var/task/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "./app.py", line 136, in register
    db.session.commit()
  File "<string>", line 2, in commit
  File "/var/task/sqlalchemy/orm/session.py", line 1451, in commit
    self._transaction.commit(_to_root=self.future)
  File "/var/task/sqlalchemy/orm/session.py", line 829, in commit
    self._prepare_impl()
  File "/var/task/sqlalchemy/orm/session.py", line 808, in _prepare_impl
    self.session.flush()
  File "/var/task/sqlalchemy/orm/session.py", line 3444, in flush
    self._flush(objects)
  File "/var/task/sqlalchemy/orm/session.py", line 3584, in _flush
    transaction.rollback(_capture_exception=True)
  File "/var/task/sqlalchemy/util/langhelpers.py", line 70, in __exit__
    compat.raise_(
  File "/var/task/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
  File "/var/task/sqlalchemy/orm/session.py", line 3544, in _flush
    flush_context.execute()
  File "/var/task/sqlalchemy/orm/unitofwork.py", line 456, in execute
    rec.execute(self)
  File "/var/task/sqlalchemy/orm/unitofwork.py", line 630, in execute
    util.preloaded.orm_persistence.save_obj(
  File "/var/task/sqlalchemy/orm/persistence.py", line 245, in save_obj
    _emit_insert_statements(
  File "/var/task/sqlalchemy/orm/persistence.py", line 1238, in _emit_insert_statements
    result = connection._execute_20(
  File "/var/task/sqlalchemy/engine/base.py", line 1705, in _execute_20
    return meth(self, args_10style, kwargs_10style, execution_options)
  File "/var/task/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/var/task/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
    ret = self._execute_context(
  File "/var/task/sqlalchemy/engine/base.py", line 1943, in _execute_context
    self._handle_dbapi_exception(
  File "/var/task/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
    util.raise_(
  File "/var/task/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
  File "/var/task/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(
  File "/var/task/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) attempt to write a readonly database
I double-checked the access mode of the DB on my local system, both as User and as Admin.
It turns out that Vercel does not support SQLite. It is clearly stated in their guides:
SQLite needs a local file system on the server to store the data
permanently when write requests are made. In a serverless environment,
this central single permanent storage is not available because storage
is ephemeral with serverless functions.
So it's not a read-only permission problem, but rather a limitation of file-based databases like SQLite on serverless platforms such as Vercel.
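A common way around this (a minimal sketch, not part of the original answer; DATABASE_URL is an assumed environment variable pointing at an externally hosted database, set e.g. in the Vercel dashboard) is to keep SQLite only for local development:

import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Use an externally hosted database in production (e.g. a managed Postgres);
# fall back to the local SQLite file only when DATABASE_URL is not set.
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ.get(
    "DATABASE_URL", "sqlite:///app.db"
)
db = SQLAlchemy(app)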
We deployed Apache Airflow 2.3.3 to Azure.
Webserver - Web App
Scheduler - ACI
Celery Worker - ACI
We were seeing errors on the Celery ACI console related to Postgres and Redis connection timeouts, as shown below:
[2022-09-22 18:55:50,650: WARNING/ForkPoolWorker-15] Failed operation _store_result. Retrying 2 more times.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context
cursor, statement, parameters, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
cursor.execute(statement, parameters)
psycopg2.DatabaseError: could not receive data from server: Connection timed out
SSL SYSCALL error: Connection timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/__init__.py", line 47, in _inner
return fun(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/__init__.py", line 117, in _store_result
task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 2887, in __iter__
return self._iter().__iter__()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 2897, in _iter
execution_options={"_sa_orm_load_options": self.load_options},
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1689, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 326, in _execute_on_connection
self, multiparams, params, execution_options
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1491, in _execute_clauseelement
cache_hit=cache_hit,
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
e, statement, parameters, cursor, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2027, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context
cursor, statement, parameters, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.DatabaseError: (psycopg2.DatabaseError) could not receive data from server: Connection timed out
SSL SYSCALL error: Connection timed out
[SQL: SELECT celery_taskmeta.id AS celery_taskmeta_id, celery_taskmeta.task_id AS celery_taskmeta_task_id, celery_taskmeta.status AS celery_taskmeta_status, celery_taskmeta.result AS celery_taskmeta_result, celery_taskmeta.date_done AS celery_taskmeta_date_done, celery_taskmeta.traceback AS celery_taskmeta_traceback
FROM celery_taskmeta
WHERE celery_taskmeta.task_id = %(task_id_1)s]
[parameters: {'task_id_1': 'c5f9f53c-8afe-4d67-8d3b-d7ad84875de1'}]
(Background on this error at: https://sqlalche.me/e/14/4xp6)
[2022-09-22 18:55:50,929: INFO/ForkPoolWorker-15] [c5f9f53c-8afe-4d67-8d3b-d7ad84875de1] Executing command in Celery: ['airflow', 'tasks', 'run', 'CS_ALERTING', 'CheckRunningTasks', 'scheduled__2022-09-22T18:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/CS_ALERTING.py']
[2022-09-22 18:55:51,241: INFO/ForkPoolWorker-15] Filling up the DagBag from /opt/airflow/platform_pam/dags/CS_ALERTING.py
[2022-09-22 18:55:53,467: INFO/ForkPoolWorker-15] Running <TaskInstance: CS_ALERTING.CheckRunningTasks scheduled__2022-09-22T18:00:00+00:00 [queued]> on host localhost
[2022-09-22 18:55:58,304: INFO/ForkPoolWorker-15] Task airflow.executors.celery_executor.execute_command[c5f9f53c-8afe-4d67-8d3b-d7ad84875de1] succeeded in 960.1964174450004s: None
[2022-09-22 19:29:25,931: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/redis/connection.py", line 706, in send_packed_command
sendall(self._sock, item)
File "/home/airflow/.local/lib/python3.7/site-packages/redis/_compat.py", line 9, in sendall
return sock.sendall(*args, **kwargs)
File "/usr/local/lib/python3.7/ssl.py", line 1034, in sendall
v = self.send(byte_view[count:])
File "/usr/local/lib/python3.7/ssl.py", line 1003, in send
return self._sslobj.write(data)
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 332, in start
blueprint.start(self)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 628, in start
c.loop(*c.loop_args())
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/loops.py", line 97, in asynloop
next(loop)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 301, in create_loop
poll_timeout = fire_timers(propagate=propagate) if scheduled else 1
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 143, in fire_timers
entry()
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/timer.py", line 64, in __call__
return self.fun(*self.args, **self.kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/timer.py", line 126, in _reschedules
return fun(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/transport/redis.py", line 557, in maybe_check_subclient_health
client.check_health()
File "/home/airflow/.local/lib/python3.7/site-packages/redis/client.py", line 3522, in check_health
check_health=False)
File "/home/airflow/.local/lib/python3.7/site-packages/redis/connection.py", line 726, in send_command
check_health=kwargs.get('check_health', True))
File "/home/airflow/.local/lib/python3.7/site-packages/redis/connection.py", line 718, in send_packed_command
(errno, errmsg))
redis.exceptions.ConnectionError: Error 110 while writing to socket. Connection timed out.
[2022-09-22 19:29:26,023: WARNING/MainProcess] /home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py:367: CPendingDeprecationWarning:
I referred to Airflow's documentation and found the section on setting up the database.
We are modifying Airflow's Docker image, adding a Python module airflow.www.db_utils.db_config (this file is installed into site-packages) that defines the dictionary:
keepalive_kwargs = {
"keepalives": 1,
"keepalives_idle": 30,
"keepalives_interval": 5,
"keepalives_count": 5,
}
Finally, we are setting
ENV AIRFLOW__DATABASE__SQL_ALCHEMY_CONNECT_ARGS="airflow.www.db_utils.db_config.keepalive_kwargs"
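For reference, this setting should be equivalent to passing the dictionary as connect_args when SQLAlchemy creates the engine. Here is a minimal standalone sketch (the DSN is a placeholder, not our real connection string) that can be used to verify the keepalive arguments outside Airflow:

from sqlalchemy import create_engine

# TCP keepalive settings forwarded to psycopg2/libpq, same values as above.
keepalive_kwargs = {
    "keepalives": 1,
    "keepalives_idle": 30,
    "keepalives_interval": 5,
    "keepalives_count": 5,
}

# Placeholder DSN; substitute the same connection string the Airflow deployment uses.
engine = create_engine(
    "postgresql+psycopg2://user:password@host:5432/airflow",
    connect_args=keepalive_kwargs,
)
with engine.connect() as conn:
    print(conn.exec_driver_sql("SELECT 1").scalar())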
Unfortunately, the error still persists. It would be great if someone could help me resolve this issue.
I have installed spaCy 3.1.2 and am trying to install en_core_web_sm in a Jupyter notebook in JupyterLab using
!python3 -m spacy download en_core_web_sm
but it shows the following error:
2021-09-01 07:00:12.028366: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
2021-09-01 07:00:12.028469: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
2021-09-01 07:00:12.028486: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 488, in wrap_socket
cnx.do_handshake()
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1934, in do_handshake
self._raise_ssl_error(self._ssl, result)
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1671, in _raise_ssl_error
_raise_current_error()
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 667, in urlopen
self._prepare_proxy(conn)
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 932, in _prepare_proxy
conn.connect()
File "/opt/conda/lib/python3.7/site-packages/urllib3/connection.py", line 371, in connect
ssl_context=context,
File "/opt/conda/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 384, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/opt/conda/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 494, in wrap_socket
raise ssl.SSLError("bad handshake: %r" % e)
ssl.SSLError: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])",)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/opt/conda/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/__main__.py", line 4, in <module>
setup_cli()
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/cli/_util.py", line 69, in setup_cli
command(prog_name=COMMAND)
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/jovyan/.local/lib/python3.7/site-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/cli/download.py", line 35, in download_cli
download(model, direct, sdist, *ctx.args)
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/cli/download.py", line 67, in download
compatibility = get_compatibility()
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/cli/download.py", line 78, in get_compatibility
r = requests.get(about.__compatibility__)
File "/opt/conda/lib/python3.7/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
Due to this, I can't use
import spacy
nlp = spacy.load('en_core_web_sm')
because it shows this error:
OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a Python package or a valid path to a data directory.
I have searched various posts and articles related to this, but nothing works for me.
Can anyone tell me how to resolve this error? It would be very helpful.
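In case it helps with diagnosis: the failing request can be reproduced outside spaCy to confirm that the problem is TLS on the network (e.g. a proxy intercepting the connection) rather than spaCy itself. A minimal sketch, using the compatibility.json URL taken from the traceback above:

import requests

# Same URL spaCy's download command fetches, as shown in the traceback.
url = "https://raw.githubusercontent.com/explosion/spacy-models/master/compatibility.json"
r = requests.get(url)  # raises requests.exceptions.SSLError if certificate verification fails
print(r.status_code)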
My ~/.aws/credentials looks like:
[default]
aws_access_key_id = XYZ
aws_secret_access_key = ABC
[testing]
source_profile = default
role_arn = arn:aws:iam::54:role/ad
I added my remote like this:
dvc remote add --local -v myremote s3://bib-ds-models-testing/data/dvc-test
I have made my .dvc/config.local look like this:
['remote "myremote"']
url = s3://bib-ds-models-testing/data/dvc-test
access_key_id = XYZ
secret_access_key = ABC/h2hOsRcCIFqwYWV7eZaUq3gNmS
profile = 'testing'
credentialpath = /Users/nyt21/.aws/credentials
But still, after running dvc push -r myremote, I get:
ERROR: unexpected error - Forbidden: An error occurred (403) when calling the HeadObject operation: Forbidden
UPDATE
Here is the output of dvc push -v:
2021-07-25 22:40:38,887 DEBUG: Check for update is enabled.
2021-07-25 22:40:39,022 DEBUG: Preparing to upload data to 's3://bib-ds-models-testing/data/dvc-test'
2021-07-25 22:40:39,022 DEBUG: Preparing to collect status from s3://bib-ds-models-testing/data/dvc-test
2021-07-25 22:40:39,022 DEBUG: Collecting information from local cache...
2021-07-25 22:40:39,022 DEBUG: Collecting information from remote cache...
2021-07-25 22:40:39,022 DEBUG: Matched '0' indexed hashes
2021-07-25 22:40:39,022 DEBUG: Querying 1 hashes via object_exists
2021-07-25 22:40:39,644 ERROR: unexpected error - Forbidden: An error occurred (403) when calling the HeadObject operation: Forbidden
------------------------------------------------------------
Traceback (most recent call last):
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/s3fs/core.py", line 246, in _call_s3
out = await method(**additional_kwargs)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/s3fs/core.py", line 1057, in _info
out = await self._simple_info(path)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/s3fs/core.py", line 970, in _simple_info
out = await self._call_s3(
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/s3fs/core.py", line 265, in _call_s3
raise translate_boto_error(err)
PermissionError: Access Denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/s3fs/core.py", line 246, in _call_s3
out = await method(**additional_kwargs)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/main.py", line 55, in main
ret = cmd.do_run()
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/command/base.py", line 50, in do_run
return self.run()
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/command/data_sync.py", line 57, in run
processed_files_count = self.repo.push(
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/repo/__init__.py", line 51, in wrapper
return f(repo, *args, **kwargs)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/repo/push.py", line 44, in push
pushed += self.cloud.push(objs, jobs, remote=remote)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/data_cloud.py", line 79, in push
return remote_obj.push(
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/remote/base.py", line 57, in wrapper
return f(obj, *args, **kwargs)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/remote/base.py", line 494, in push
ret = self._process(
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/remote/base.py", line 351, in _process
dir_status, file_status, dir_contents = self._status(
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/remote/base.py", line 195, in _status
self.hashes_exist(
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/remote/base.py", line 145, in hashes_exist
return indexed_hashes + self.odb.hashes_exist(list(hashes), **kwargs)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/objects/db/base.py", line 438, in hashes_exist
remote_hashes = self.list_hashes_exists(hashes, jobs, name)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/objects/db/base.py", line 389, in list_hashes_exists
ret = list(itertools.compress(hashes, in_remote))
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/concurrent/futures/_base.py", line 619, in result_iterator
yield fs.pop().result()
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/concurrent/futures/_base.py", line 444, in result
return self.__get_result()
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/objects/db/base.py", line 380, in exists_with_progress
ret = self.fs.exists(path_info)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/dvc/fs/fsspec_wrapper.py", line 92, in exists
return self.fs.exists(self._with_bucket(path_info))
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/fsspec/asyn.py", line 87, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/fsspec/asyn.py", line 68, in sync
raise result[0]
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/fsspec/asyn.py", line 24, in _runner
result[0] = await coro
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/s3fs/core.py", line 802, in _exists
await self._info(path, bucket, key, version_id=version_id)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/s3fs/core.py", line 1061, in _info
out = await self._version_aware_info(path, version_id)
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/s3fs/core.py", line 1004, in _version_aware_info
out = await self._call_s3(
File "/Users/nyt21/opt/miniconda3/envs/dvc/lib/python3.8/site-packages/s3fs/core.py", line 265, in _call_s3
raise translate_boto_error(err)
PermissionError: Forbidden
------------------------------------------------------------
2021-07-25 22:40:39,712 DEBUG: Version info for developers:
DVC version: 2.5.4 (pip)
---------------------------------
Platform: Python 3.8.10 on macOS-10.16-x86_64-i386-64bit
Supports:
http (requests = 2.26.0),
https (requests = 2.26.0),
s3 (s3fs = 2021.6.1, boto3 = 1.18.6)
Cache types: reflink, hardlink, symlink
Cache directory: apfs on /dev/disk3s1s1
Caches: local
Remotes: s3
Workspace directory: apfs on /dev/disk3s1s1
Repo: dvc, git
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
2021-07-25 22:40:39,713 DEBUG: Analytics is enabled.
2021-07-25 22:40:39,765 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', '/var/folders/4x/xhm22wt16gl6m9nvkl9gllkc0000gn/T/tmpo86jdns5']'
2021-07-25 22:40:39,769 DEBUG: Spawned '['daemon', '-q', 'analytics', '/var/folders/4x/xhm22wt16gl6m9nvkl9gllkc0000gn/T/tmpo86jdns5']'
I can upload through Python:
import boto3
import os

bucket_name = 'bib-ds-models-testing'

# Make boto3 pick up the 'testing' profile (role assumption via source_profile).
os.environ["AWS_PROFILE"] = "testing"

session = boto3.Session()
s3_client = boto3.client('s3')

# Upload a local file to the same prefix the DVC remote points at.
s3_client.upload_file('/Users/nyt21/Devel/DVC/test/data/iris.csv',
                      'bib-ds-models-testing',
                      'data/dvc-test/my_iris.csv')
I don't normally use the AWS CLI, but the following also gives an access denied error:
aws s3 ls s3://bib-ds-models-testing/data/dvc-test
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
But it works if I add --profile=testing:
aws s3 ls s3://bib-ds-models-testing/data/dvc-test --profile=testing
PRE dvc-test/
Just so you know, the environment variable AWS_PROFILE is already set to 'testing'.
UPDATE
I have tried both AWS_PROFILE='testing' and AWS_PROFILE=testing, neither of them worked.
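To narrow it down, here is a minimal check (a sketch; it assumes the same bucket and prefix as the DVC remote) that selects the profile explicitly and issues the same kind of call that failed in the DVC log above:

import boto3

# Select the named profile explicitly, so the role_arn/source_profile chain is used.
session = boto3.Session(profile_name="testing")
s3 = session.client("s3")

# DVC's verbose log shows ListObjectsV2/HeadObject failing, so test that directly.
resp = s3.list_objects_v2(Bucket="bib-ds-models-testing", Prefix="data/dvc-test/")
print([obj["Key"] for obj in resp.get("Contents", [])])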
Good morning,
I am using TensorFlow Lite and I also wanted to use telepot.
I also installed the Coral USB Accelerator, but I don't think the problem depends on it, because the error occurs whether or not I add --edgetpu to the end of the command that starts the program.
Sending messages or images works only if I place the call before this import:
from tensorflow.lite.python.interpreter import Interpreter
It's as if telepot were incompatible with tflite.
Obviously everything works without the telepot calls.
What can I do?
I'm using a Raspberry Pi 4 with Debian Buster, Python 3.7 and OpenCV 4.1.
This is the error it gives me:
Traceback (most recent call last):
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 485, in wrap_socket
cnx.do_handshake()
File "/usr/lib/python3/dist-packages/OpenSSL/SSL.py", line 1915, in do_handshake
self._raise_ssl_error(self._ssl, result)
File "/usr/lib/python3/dist-packages/OpenSSL/SSL.py", line 1647, in _raise_ssl_error
_raise_current_error()
File "/usr/lib/python3/dist-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn
conn.connect()
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connection.py", line 394, in connect
ssl_context=context,
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 491, in wrap_socket
raise ssl.SSLError("bad handshake: %r" % e)
ssl.SSLError: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])",)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "TFLite_detection_webcam_Prova1.py", line 137, in <module>
spedisci()
File "TFLite_detection_webcam_Prova1.py", line 33, in spedisci
bot.sendPhoto(256868258, foto)
File "/usr/local/lib/python3.7/dist-packages/telepot/__init__.py", line 539, in sendPhoto
return self._api_request_with_file('sendPhoto', _rectify(p), 'photo', photo)
File "/usr/local/lib/python3.7/dist-packages/telepot/__init__.py", line 499, in _api_request_with_file
return self._api_request(method, _rectify(params), files, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/telepot/__init__.py", line 491, in _api_request
return api.request((self._token, method, params, files), **kwargs)
File "/usr/local/lib/python3.7/dist-packages/telepot/api.py", line 154, in request
r = fn(*args, **kwargs) # `fn` must be thread-safe
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/request.py", line 171, in request_encode_body
return self.urlopen(method, url, **extra_kw)
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/poolmanager.py", line 330, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 760, in urlopen
**response_kw
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 760, in urlopen
**response_kw
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 760, in urlopen
**response_kw
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/util/retry.py", line 436, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.telegram.org', port=443): Max retries exceeded with url: /bot926377239:AAEl0gqMWzG0dMidkGNqcGr2wkeTLbgZn3g/sendPhoto (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
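For clarity, here is a minimal reproduction of the import ordering described above (a sketch; BOT_TOKEN is a placeholder, and the chat id is the one from the traceback):

import telepot

BOT_TOKEN = "123456:ABC..."  # placeholder, not a real token
CHAT_ID = 256868258          # chat id taken from the traceback above

bot = telepot.Bot(BOT_TOKEN)
bot.sendMessage(CHAT_ID, "test before tflite import")  # reportedly works here

# After this import, the same send fails with the SSL handshake error shown above.
from tensorflow.lite.python.interpreter import Interpreter

bot.sendMessage(CHAT_ID, "test after tflite import")   # reportedly fails here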