DVC - Forbidden: An error occurred (403) when calling the HeadObject operation - dvc

I just started with DVC. The following are the steps I am taking to push my models to S3.
Initialize:
dvc init
Add the bucket URL:
dvc remote add -d storage s3://mybucket/dvcstore
Add some files:
dvc add somefiles
Add the AWS keys:
dvc remote modify storage access_key_id AWS_ACCESS_KEY_ID
dvc remote modify storage secret_access_key AWS_SECRET_ACCESS_KEY
Now when I push:
dvc push
it shows:
ERROR: unexpected error - Forbidden: An error occurred (403) when calling the HeadObject operation: Forbidden
Am I missing something?
Update 1
Result of dvc doctor:
C:\my-server>dvc doctor
DVC version: 2.7.4 (pip)
---------------------------------
Platform: Python 3.8.0 on Windows-10-10.0.19041-SP0
Supports:
http (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
https (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
s3 (s3fs = 2021.8.1, boto3 = 1.17.106)
Cache types: hardlink
Cache directory: NTFS on C:\
Caches: local
Remotes: s3
Workspace directory: NTFS on C:\
Repo: dvc, git
And the dvc push -vv output:
C:\my-server>dvc push -vv
2021-09-21 13:21:38,382 TRACE: Namespace(all_branches=False, all_commits=False, all_tags=False, cd='.', cmd='push', cprofile=False, cprofile_dump=None, func=<class 'dvc.command.data_sync.CmdDataPush'>, glob=False, instrument=False, instrument_open=False, jobs=None, pdb=False, quiet=0, recursive=False, remote=None, run_cache=False, targets=[], verbose=2, version=None, with_deps=False)
2021-09-21 13:21:39,293 TRACE: Assuming 'C:\my-server\.dvc\cache\02\5b196462b86d2f10a9f659e2224da8.dir' is unchanged since it is read-only
2021-09-21 13:21:39,296 TRACE: Assuming 'C:\my-server\.dvc\cache\02\5b196462b86d2f10a9f659e2224da8.dir' is unchanged since it is read-only
2021-09-21 13:21:40,114 DEBUG: Preparing to transfer data from '.dvc\cache' to 's3://my-bucket/models'
2021-09-21 13:21:40,117 DEBUG: Preparing to collect status from 's3://my-bucket/models'
2021-09-21 13:21:40,119 DEBUG: Collecting status from 's3://my-bucket/models'
2021-09-21 13:21:40,121 DEBUG: Querying 1 hashes via object_exists
2021-09-21 13:21:44,840 ERROR: unexpected error - Forbidden: An error occurred (403) when calling the HeadObject operation: Forbidden
------------------------------------------------------------
Traceback (most recent call last):
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 248, in _call_s3
out = await method(**additional_kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\aiobotocore\client.py", line 155, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The AWS Access Key Id you provided does not exist in our records.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 1080, in _info
out = await self._simple_info(path)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 993, in _simple_info
out = await self._call_s3(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 268, in _call_s3
raise err
PermissionError: The AWS Access Key Id you provided does not exist in our records.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 248, in _call_s3
out = await method(**additional_kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\aiobotocore\client.py", line 155, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\main.py", line 55, in main
ret = cmd.do_run()
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\command\base.py", line 45, in do_run
return self.run()
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\command\data_sync.py", line 57, in run
processed_files_count = self.repo.push(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\repo\__init__.py", line 50, in wrapper
return f(repo, *args, **kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\repo\push.py", line 48, in push
pushed += self.cloud.push(obj_ids, jobs, remote=remote, odb=odb)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\data_cloud.py", line 85, in push
return transfer(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\transfer.py", line 153, in transfer
status = compare_status(src, dest, obj_ids, check_deleted=False, **kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\status.py", line 160, in compare_status
dest_exists, dest_missing = status(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\status.py", line 122, in status
exists = hashes.intersection(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\status.py", line 48, in _indexed_dir_hashes
dir_exists.update(odb.list_hashes_exists(dir_hashes - dir_exists))
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\db\base.py", line 415, in list_hashes_exists
ret = list(itertools.compress(hashes, in_remote))
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 611, in result_iterator
yield fs.pop().result()
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 439, in result
return self.__get_result()
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 388, in __get_result
raise self._exception
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\objects\db\base.py", line 406, in exists_with_progress
ret = self.fs.exists(path_info)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\dvc\fs\fsspec_wrapper.py", line 97, in exists
return self.fs.exists(self._with_bucket(path_info))
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\fsspec\asyn.py", line 88, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\fsspec\asyn.py", line 69, in sync
raise result[0]
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\fsspec\asyn.py", line 25, in _runner
result[0] = await coro
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 820, in _exists
await self._info(path, bucket, key, version_id=version_id)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 1084, in _info
out = await self._version_aware_info(path, version_id)
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 1027, in _version_aware_info
out = await self._call_s3(
File "c:\users\sgarg\appdata\local\programs\python\python38\lib\site-packages\s3fs\core.py", line 268, in _call_s3
raise err
PermissionError: Forbidden
------------------------------------------------------------
2021-09-21 13:21:45,178 DEBUG: Version info for developers:
DVC version: 2.7.4 (pip)
---------------------------------
Platform: Python 3.8.0 on Windows-10-10.0.19041-SP0
Supports:
http (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
https (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
s3 (s3fs = 2021.8.1, boto3 = 1.17.106)
Cache types: hardlink
Cache directory: NTFS on C:\
Caches: local
Remotes: s3
Workspace directory: NTFS on C:\
Repo: dvc, git
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
2021-09-21 13:21:45,185 DEBUG: Analytics is enabled.
2021-09-21 13:21:45,446 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', 'C:\\Users\\sgarg\\AppData\\Local\\Temp\\tmpm_p9f3eq']'
2021-09-21 13:21:45,456 DEBUG: Spawned '['daemon', '-q', 'analytics', 'C:\\Users\\sgarg\\AppData\\Local\\Temp\\tmpm_p9f3eq']'

Could you please run dvc doctor and rerun dvc push with the -vv flag, and share both results?
PermissionError: The AWS Access Key Id you provided does not exist in our records.
Does the AWS CLI work correctly for you? First set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your environment, then:
aws s3 ls s3://mybucket/dvcstore
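If the CLI check passes but dvc push still fails, you can also sanity-check the same keys straight from Python with boto3 (which dvc doctor shows is already installed). A minimal sketch; mybucket/dvcstore are the placeholders from the question, and the two key strings stand in for your real values:

import boto3

# Use exactly the keys you configured in DVC, not whatever profile
# the AWS CLI happens to pick up from your environment.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AWS_ACCESS_KEY_ID",
    aws_secret_access_key="AWS_SECRET_ACCESS_KEY",
)

# ListObjectsV2 and HeadObject are the two calls failing in the trace above.
resp = s3.list_objects_v2(Bucket="mybucket", Prefix="dvcstore/", MaxKeys=1)
print("reachable, key count:", resp.get("KeyCount", 0))

If this raises InvalidAccessKeyId or a 403, the problem is the credentials or the bucket policy, not DVC itself.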

I faced the same issue, even after configuring both the CLI and .dvc/config properly. It turned out I had run pip install dvc_s3 instead of pip install 'dvc[s3]'; the latter resolved my issue. (You can confirm S3 support is installed by checking that s3 appears under Supports: in the dvc doctor output.)

Solution
Check which S3 URL DVC is pointing to in ./.dvc/config, under the ['remote "storage"'] section (see the sample below).
Check that your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set via aws configure.
Ask your AWS admin whether that AWS_ACCESS_KEY_ID is granted by the S3 policy, i.e. is on the bucket's permitted list.
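For reference, ['remote "storage"'] is an INI section header inside .dvc/config; for the setup in the question it would look roughly like this (a sketch using the question's placeholder values):

[core]
    remote = storage
['remote "storage"']
    url = s3://mybucket/dvcstore
    access_key_id = AWS_ACCESS_KEY_ID
    secret_access_key = AWS_SECRET_ACCESS_KEY

Note that the url here must match the bucket the keys were granted access to: the -vv log above shows s3://my-bucket/models, while the question's setup commands use s3://mybucket/dvcstore, and a mismatch like that would produce exactly this 403 on HeadObject.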

Related

AirflowException("Task received SIGTERM signal")

I'm running Airflow with Docker Swarm on 5 servers. After two months of use, some errors like the ones below started appearing in DAGs. The errors occur in DAGs that use a custom Hive operator (similar to the inner function), and no errors occurred during the first two months (nothing changed in the DAGs...).
Also, when I retry a DAG, it sometimes succeeds and sometimes fails.
The really weird thing about this issue is that the Hive job itself did not fail. After the task was marked as failed in the Airflow webserver (SIGTERM), the query completed 1-10 minutes later.
As a result, the flow is like this:
Task start -> 5-10 mins -> error (SIGTERM, Airflow) -> 1-10 mins -> Hive job success (Hadoop log)
[2023-01-09 08:06:07,583] {local_task_job.py:208} WARNING - State of this instance has been externally set to up_for_retry. Terminating instance.
[2023-01-09 08:06:07,588] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 135213
[2023-01-09 08:06:07,588] {taskinstance.py:1236} ERROR - Received SIGTERM. Terminating subprocesses.
[2023-01-09 08:13:42,510] {taskinstance.py:1463} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/airflow/dags/common/operator/hive_q_operator.py", line 81, in execute
cur.execute(statement) # hive query custom operator
File "/home/airflow/.local/lib/python3.8/site-packages/pyhive/hive.py", line 454, in execute
response = self._connection.client.ExecuteStatement(req)
File "/home/airflow/.local/lib/python3.8/site-packages/TCLIService/TCLIService.py", line 280, in ExecuteStatement
return self.recv_ExecuteStatement()
File "/home/airflow/.local/lib/python3.8/site-packages/TCLIService/TCLIService.py", line 292, in recv_ExecuteStatement
(fname, mtype, rseqid) = iprot.readMessageBegin()
File "/home/airflow/.local/lib/python3.8/site-packages/thrift/protocol/TBinaryProtocol.py", line 134, in readMessageBegin
sz = self.readI32()
File "/home/airflow/.local/lib/python3.8/site-packages/thrift/protocol/TBinaryProtocol.py", line 217, in readI32
buff = self.trans.readAll(4)
File "/home/airflow/.local/lib/python3.8/site-packages/thrift/transport/TTransport.py", line 62, in readAll
chunk = self.read(sz - have)
File "/home/airflow/.local/lib/python3.8/site-packages/thrift_sasl/__init__.py", line 173, in read
self._read_frame()
File "/home/airflow/.local/lib/python3.8/site-packages/thrift_sasl/__init__.py", line 177, in _read_frame
header = self._trans_read_all(4)
File "/home/airflow/.local/lib/python3.8/site-packages/thrift_sasl/__init__.py", line 210, in _trans_read_all
return read_all(sz)
File "/home/airflow/.local/lib/python3.8/site-packages/thrift/transport/TTransport.py", line 62, in readAll
chunk = self.read(sz - have)
File "/home/airflow/.local/lib/python3.8/site-packages/thrift/transport/TSocket.py", line 150, in read
buff = self.handle.recv(sz)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1238, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
I already restarted the Airflow server and nothing changed.
Is there any helpful guidance for me? Thanks :)
Here is the failed task's log (Flower log):
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/celery/app/trace.py", line 412, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/celery/app/trace.py", line 704, in __protected_call__
return self.run(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 88, in execute_command
_execute_in_fork(command_to_exec)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 99, in _execute_in_fork
raise AirflowException('Celery command failed on host: ' + get_hostname())
airflow.exceptions.AirflowException: Celery command failed on host: 8be4caa25d17

RuntimeException: Runtime Error Could not run dbt

I used dbt init to create a profiles.yml in my .dbt folder. It looks like this:
spring_project:
  outputs:
    dev:
      account: xxx.snowflakecomputing.com
      database: PROD_DWH
      password: password
      role: SYSADMIN
      schema: STG
      threads: 1
      type: snowflake
      user: MYUSERNAME
      warehouse: DEV_XS_WH
  target: dev
Now, I created a new folder on my desktop which only contains a dbt_project.yml file that has this:
profile: 'spring_project'
When I run this from my project folder:
dbt debug --config-dir
I get this:
21:48:59 Running with dbt=1.2.1
21:48:59 To view your profiles.yml file, run:
open /Users/myusername/.dbt
However, when I run dbt:
dbt run --profiles-dir /Users/myusername/.dbt
I get this:
21:43:39 Encountered an error while reading the project:
21:43:39 ERROR: Runtime Error
Invalid config version: 1, expected 2
Error encountered in /Users/myusername/Desktop/spring_project/dbt_project.yml
21:43:39 Encountered an error:
Runtime Error
Could not run dbt
21:43:39 Traceback (most recent call last):
File "/opt/homebrew/lib/python3.10/site-packages/dbt/task/base.py", line 108, in from_args
config = cls.ConfigType.from_args(args)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/config/runtime.py", line 226, in from_args
project, profile = cls.collect_parts(args)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/config/runtime.py", line 194, in collect_parts
partial = Project.partial_load(project_root, verify_version=version_check)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/config/project.py", line 639, in partial_load
return PartialProject.from_project_root(
File "/opt/homebrew/lib/python3.10/site-packages/dbt/config/project.py", line 485, in from_project_root
raise DbtProjectError(
dbt.exceptions.DbtProjectError: Runtime Error
Invalid config version: 1, expected 2
Error encountered in /Users/myusername/Desktop/spring_project/dbt_project.yml
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.10/site-packages/dbt/main.py", line 129, in main
results, succeeded = handle_and_check(args)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/main.py", line 191, in handle_and_check
task, res = run_from_args(parsed)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/main.py", line 218, in run_from_args
task = parsed.cls.from_args(args=parsed)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/task/base.py", line 185, in from_args
return super().from_args(args)
File "/opt/homebrew/lib/python3.10/site-packages/dbt/task/base.py", line 114, in from_args
raise dbt.exceptions.RuntimeException("Could not run dbt") from exc
dbt.exceptions.RuntimeException: Runtime Error
Could not run dbt
What am I doing wrong?
Most likely the reason is a missing config-version. The traceback says:
dbt.exceptions.DbtProjectError: Runtime Error
Invalid config version: 1, expected 2
Add this line to your dbt_project.yml:
config-version: 2
This declares that your dbt_project.yml uses the v2 structure. Without it, dbt assumes the file uses the version 1 syntax, which was deprecated in dbt v0.19.0.
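For reference, a minimal dbt_project.yml sketch that gets past this error, reusing the profile name from the question (the other values are placeholders):

name: 'spring_project'
version: '1.0.0'
config-version: 2
profile: 'spring_project'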

Saltstack -> 'Pillar failed to render with the following messages'

On my FreeBSD system I have a file packages.sls in the following path: /usr/local/etc/salt/states
I'm getting the following error message when I run salt '*' state.apply packages:
freebsd:
Data failed to compile:
----------
Pillar failed to render with the following messages:
----------
Rendering SLS 'config' failed. Please see master log for details.
In the master log I have the following details:
2022-06-02 10:05:12,222 [salt.roster :104 ][ERROR ][3425] Can't access roster for backend flat: Roster file "/usr/local/etc/salt/roster" not found
2022-06-02 10:05:12,434 [salt.pillar :900 ][CRITICAL][3427] Rendering SLS 'config' failed, render error:
found unexpected end of stream
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/salt/renderers/yaml.py", line 62, in render
data = yamlloader.load(yaml_data, Loader=get_yaml_loader(argline))
File "/usr/local/lib/python3.8/site-packages/salt/utils/yamlloader.py", line 169, in load
return yaml.load(stream, Loader=Loader)
File "/usr/local/lib/python3.8/site-packages/yaml/__init__.py", line 114, in load
return loader.get_single_data()
File "/usr/local/lib/python3.8/site-packages/yaml/constructor.py", line 49, in get_single_data
node = self.get_single_node()
File "yaml/_yaml.pyx", line 707, in yaml._yaml.CParser.get_single_node
File "yaml/_yaml.pyx", line 725, in yaml._yaml.CParser._compose_document
File "yaml/_yaml.pyx", line 776, in yaml._yaml.CParser._compose_node
File "yaml/_yaml.pyx", line 890, in yaml._yaml.CParser._compose_mapping_node
File "yaml/_yaml.pyx", line 776, in yaml._yaml.CParser._compose_node
File "yaml/_yaml.pyx", line 892, in yaml._yaml.CParser._compose_mapping_node
File "yaml/_yaml.pyx", line 905, in yaml._yaml.CParser._parse_next_event
yaml.scanner.ScannerError: while scanning a quoted scalar
in "<unicode string>", line 3, column 27
found unexpected end of stream
in "<unicode string>", line 4, column 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/salt/pillar/__init__.py", line 887, in render_pstate
state = compile_template(
File "/usr/local/lib/python3.8/site-packages/salt/template.py", line 99, in compile_template
ret = render(input_data, saltenv, sls, **render_kwargs)
File "/usr/local/lib/python3.8/site-packages/salt/loader/lazy.py", line 149, in __call__
return self.loader.run(run_func, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/salt/loader/lazy.py", line 1201, in run
return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/salt/loader/lazy.py", line 1216, in _run_as
return _func_or_method(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/salt/renderers/yaml.py", line 66, in render
raise SaltRenderError(err_type, line_num, exc.problem_mark.buffer)
salt.exceptions.SaltRenderError: found unexpected end of stream
2022-06-02 10:05:12,435 [salt.pillar :1224][CRITICAL][3427] Pillar render error: Rendering SLS 'config' failed. Please see master log for details.
My SLS file, packages.sls:
install_bash:
  pkg.installed:
    - pkgs:
      - bash
      - vim
      - curl
Any idea on how to solve this situation?
Thank you
It was a DNS/cache problem. The issue was solved after changing the hostname in the minion ID, clearing the cache, accepting the new key, and restarting.
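Note that the traceback blames a pillar SLS named 'config', not packages.sls, and the YAML scanner reports a quoted scalar that is never closed (opened at line 3, column 27, end of stream at line 4). A hypothetical pillar snippet that reproduces that scanner error (the keys are made up for illustration):

network:
  dns: 8.8.8.8
  server_description: 'freebsd box

The missing closing quote makes the scanner read to the end of the stream, which is exactly the "found unexpected end of stream" in the log.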

Problem with changing directory in Colab for Google Drive

I want to download a video from YouTube with youtube_dl in Colab and save it to Google Drive. I make a directory named after the video title and save the video in that folder. Then I use this code:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
URL = "https://www.youtube.com/watch?v=QTPP-iaF7BY&t=1955s"
!pip install youtube_dl
import youtube_dl
with youtube_dl.YoutubeDL({"ignoreerrors": True, "quiet": True}) as ydl:
    playlist_dict = ydl.extract_info(URL, download=False)
    print('\n', playlist_dict['title'], '\n')
import os
new_folder = playlist_dict['title']
path = f"//content//drive//MyDrive//{new_folder}//".replace("'"," ").replace(".","-").replace(":","-")
os.makedirs(path, exist_ok=True)
print('\n', path, '\n')
%cd {path}
But for the URL specified in the code above, I get this error:
shell-init: error retrieving current directory: getcwd: cannot access parent directories: Transport endpoint is not connected
shell-init: error retrieving current directory: getcwd: cannot access parent directories: Transport endpoint is not connected
The folder you are executing pip from can no longer be found.
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
Pillai "Hoeffding's Inequality"
//content//drive//MyDrive//Pillai "Hoeffding's Inequality"//
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-dd9eae6c92da>", line 20, in <module>
get_ipython().magic('cd {path}')
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2160, in magic
return self.run_line_magic(magic_name, magic_arg_s)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2081, in run_line_magic
result = fn(*args,**kwargs)
File "<decorator-gen-84>", line 2, in cd
File "/usr/local/lib/python3.7/dist-packages/IPython/core/magic.py", line 188, in <lambda>
call = lambda f, *a, **k: f(*a, **k)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/magics/osm.py", line 288, in cd
oldcwd = py3compat.getcwd()
OSError: [Errno 107] Transport endpoint is not connected
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 1823, in showtraceback
stb = value._render_traceback_()
AttributeError: 'OSError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/ultratb.py", line 1132, in get_records
return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/ultratb.py", line 313, in wrapped
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/ultratb.py", line 358, in _fixed_getinnerframes
records = fix_frame_records_filenames(inspect.getinnerframes(etb, context))
File "/usr/lib/python3.7/inspect.py", line 1502, in getinnerframes
frameinfo = (tb.tb_frame,) + getframeinfo(tb, context)
File "/usr/lib/python3.7/inspect.py", line 1460, in getframeinfo
filename = getsourcefile(frame) or getfile(frame)
File "/usr/lib/python3.7/inspect.py", line 696, in getsourcefile
if getattr(getmodule(object, filename), '__loader__', None) is not None:
File "/usr/lib/python3.7/inspect.py", line 725, in getmodule
file = getabsfile(object, _filename)
File "/usr/lib/python3.7/inspect.py", line 709, in getabsfile
return os.path.normcase(os.path.abspath(_filename))
File "/usr/lib/python3.7/posixpath.py", line 383, in abspath
cwd = os.getcwd()
OSError: [Errno 107] Transport endpoint is not connected
With other YouTube URLs I don't have this problem; the video downloads and saves to Google Drive correctly.
EDIT
After changing %cd {path} to os.chdir(path), the problem was solved. But I don't understand why %cd {path} works for some titles and doesn't work for others.
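One plausible explanation (an assumption, not verified): this particular title, Pillai "Hoeffding's Inequality", contains a double quote that the replace() calls above never strip. The %cd {path} magic re-parses the interpolated line with shell-like quoting rules, so the stray quote can confuse it, while os.chdir(path) receives the exact Python string. A sketch that also strips quote characters (the regex is mine, not from the original post):

import os
import re

# Replace every character the magic/shell layer might re-interpret,
# including the double quote that survives the original replace() calls.
safe_title = re.sub(r'[\'":.]', '-', playlist_dict['title'])  # playlist_dict from the code above
path = os.path.join('/content/drive/MyDrive', safe_title)
os.makedirs(path, exist_ok=True)
os.chdir(path)  # a plain Python call, so no magic-line re-parsing happens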

AuthorizationFailed - "The client 'xxx' with object id 'xxx' does not have authorization to perform action"

I've tried to get a Workspace from config, one which I do have access to, but it fails with the following error:
import azureml.core
print("SDK version:", azureml.core.VERSION)
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
SDK version: 0.1.80
Found the config file in: C:\Users\gubert\Repos\Gimmonix\HotelMappingAI\aml_config\config.json
get_workspace error using subscription_id=xxxxxxxxxxxxxxxxxxxxxxx, resource_group_name=xxxxxxxxxxxx, workspace_name=gmx-ml-mapping
Traceback (most recent call last):
File "C:\Users\gubert\.azureml\envs\myenv\lib\site-packages\azureml\_project\_commands.py", line 320, in get_workspace
workspace_name)
File "C:\Users\gubert\.azureml\envs\myenv\lib\site-packages\azureml\_base_sdk_common\workspace\operations\workspaces_operations.py", line 78, in get
raise models.ErrorResponseWrapperException(self._deserialize, response)
azureml._base_sdk_common.workspace.models.error_response_wrapper.ErrorResponseWrapperException: Operation returned an invalid status code 'Forbidden'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\gubert\.vscode\extensions\ms-python.python-2018.10.1\pythonFiles\experimental\ptvsd_launcher.py", line 38, in <module>
main(sys.argv)
File "c:\Users\gubert\.vscode\extensions\ms-python.python-2018.10.1\pythonFiles\experimental\ptvsd\ptvsd\__main__.py", line 265, in main
wait=args.wait)
File "c:\Users\gubert\.vscode\extensions\ms-python.python-2018.10.1\pythonFiles\experimental\ptvsd\ptvsd\__main__.py", line 256, in handle_args
run_main(addr, name, kind, *extra, **kwargs)
File "c:\Users\gubert\.vscode\extensions\ms-python.python-2018.10.1\pythonFiles\experimental\ptvsd\ptvsd\_local.py", line 52, in run_main
runner(addr, name, kind == 'module', *extra, **kwargs)
File "c:\Users\gubert\.vscode\extensions\ms-python.python-2018.10.1\pythonFiles\experimental\ptvsd\ptvsd\runner.py", line 32, in run
set_trace=False)
File "c:\Users\gubert\.vscode\extensions\ms-python.python-2018.10.1\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\pydevd.py", line 1283, in run
return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
File "c:\Users\gubert\.vscode\extensions\ms-python.python-2018.10.1\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\pydevd.py", line 1290, in _exec
pydev_imports.execfile(file, globals, locals)  # execute the script
File "c:\Users\gubert\.vscode\extensions\ms-python.python-2018.10.1\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\_pydev_imps\_pydev_execfile.py", line 25, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "c:\Users\gubert\Repos\Gimmonix\HotelMappingAI\test.py", line 8, in <module>
ws = Workspace.from_config()
File "C:\Users\gubert\.azureml\envs\myenv\lib\site-packages\azureml\core\workspace.py", line 153, in from_config
auth=auth)
File "C:\Users\gubert\.azureml\envs\myenv\lib\site-packages\azureml\core\workspace.py", line 86, in __init__
auto_rest_workspace = _commands.get_workspace(auth, subscription_id, resource_group, workspace_name)
File "C:\Users\gubert\.azureml\envs\myenv\lib\site-packages\azureml\_project\_commands.py", line 326, in get_workspace
resource_error_handling(response_exception, WORKSPACE)
File "C:\Users\gubert\.azureml\envs\myenv\lib\site-packages\azureml\_base_sdk_common\common.py", line 270, in resource_error_handling
raise ProjectSystemException(response_message)
azureml.exceptions._azureml_exception.ProjectSystemException: {
"error_details": { "error": { "code": "AuthorizationFailed",
"message": "The client 'xxxxxxxxxx@microsoft.com' with object id 'xxxxxxxxxxxxx' does not have authorization to perform action 'Microsoft.MachineLearningServices/workspaces/read' over scope '/subscriptions/xxxxxxxxxxxxxx/resourceGroups/CarsolizeCloud - Test Global/providers/Microsoft.MachineLearningServices/workspaces/gmx-ml-mapping'."
} }, "status_code": 403, "url": "https://management.azure.com/subscriptions/xxxxxxxxxxxxx/resourceGroups/CarsolizeCloud%20-%20Test%20Global/providers/Microsoft.MachineLearningServices/workspaces/gmx-ml-mapping?api-version=2018-03-01-preview"
}
Try using the newest SDK version, 1.0.10; the one you're using is a fairly old preview version. If you still have a problem, let me know, as I work on this SDK.
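If the SDK was installed with pip, upgrading in place should just be (azureml-sdk being the meta-package name):

pip install --upgrade azureml-sdk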
