pip search not working on PyPI repository - Artifactory

I have setup a PyPi repo in Artifactory, but I cannot search uploaded packages with pip.
I created a PyPI repo in Artifactory and pushed two versions of an example package, which worked perfectly. The package and its two versions are present in Artifactory under the correct repo. Running pip search and trying to find this package results in a timeout.
Uploading the packages didn't present any issues at all.
I have tried without /simple as well.
pip search example -i http://artifactory_server/api/pypi/pypi-repo/simple
produces the following:
Exception:
Traceback (most recent call last):
File "c:\python27\lib\site-packages\pip\_internal\cli\base_command.py", line 179, in main
status = self.run(options, args)
File "c:\python27\lib\site-packages\pip\_internal\commands\search.py", line 48, in run
pypi_hits = self.search(query, options)
File "c:\python27\lib\site-packages\pip\_internal\commands\search.py", line 65, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "c:\python27\lib\xmlrpclib.py", line 1243, in __call__
return self.__send(self.__name, args)
File "c:\python27\lib\xmlrpclib.py", line 1602, in __request
verbose=self.__verbose
File "c:\python27\lib\site-packages\pip\_internal\download.py", line 823, in request
headers=headers, stream=True)
File "c:\python27\lib\site-packages\pip\_vendor\requests\sessions.py", line 581, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "c:\python27\lib\site-packages\pip\_internal\download.py", line 403, in request
return super(PipSession, self).request(method, url, *args, **kwargs)
File "c:\python27\lib\site-packages\pip\_vendor\requests\sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "c:\python27\lib\site-packages\pip\_vendor\requests\sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "c:\python27\lib\site-packages\pip\_vendor\requests\adapters.py", line 529, in send
raise ReadTimeout(e, request=request)
ReadTimeout: HTTPConnectionPool(host='artifactory_server', port=80): Read timed out. (read timeout=15)
Any ideas are most welcome.
Thanks.

Since you are not using HTTPS, you need to add the following option at the end of your command:
--trusted-host artifactory_server
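For example, a sketch of the combined command (hostname kept as the placeholder from the question):
pip search example -i http://artifactory_server/api/pypi/pypi-repo/simple --trusted-host artifactory_server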

Related

Custom timetable not registered by airflow webserver in Cloud Composer 1

I've recently created a custom timetable. It worked perfectly locally (python==3.9.12, airflow==2.3.0), so I decided to upload it to the plugins folder in my Cloud Composer environment (version==1.18.11, airflow==2.2.5). While the scheduler picks up the timetable and the DAG runs based on it, trying to open the DAG in the UI throws me this error window:
Something bad has happened.
Airflow is used by many users, and it is very likely that others had similar problems and you can easily find
a solution to your problem.
Consider following these steps:
* gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment)
* find similar issues using:
* GitHub Discussions
* GitHub Issues
* Stack Overflow
* the usual search engine you use on a daily basis
* if you run Airflow on a Managed Service, consider opening an issue using the service support channels
* if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a bug report.
Make sure however, to include all relevant details and results of your investigation so far.
Python version: 3.8.12
Airflow version: 2.2.5+composer
Node: 67b211ed8faa
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/opt/python3.8/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/python3.8/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/opt/python3.8/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/python3.8/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/python3.8/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/www/auth.py", line 51, in decorated
return func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/www/decorators.py", line 108, in view_func
return f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/www/decorators.py", line 71, in wrapper
return f(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/www/views.py", line 2328, in tree
dag = current_app.dag_bag.get_dag(dag_id)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/dagbag.py", line 186, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/dagbag.py", line 261, in _add_dag_from_db
dag = row.dag
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/serialized_dag.py", line 180, in dag
dag = SerializedDAG.from_dict(self.data) # type: Any
File "/opt/python3.8/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 951, in from_dict
return cls.deserialize_dag(serialized_obj['dag'])
File "/opt/python3.8/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 877, in deserialize_dag
v = _decode_timetable(v)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/serialization/serialized_objects.py", line 167, in _decode_timetable
raise _TimetableNotRegistered(importable_string)
airflow.serialization.serialized_objects._TimetableNotRegistered: Timetable class '<enter_your_timetable_plugin_name>.<enter_your_timetable_class_name>' is not registered
Going to the Plugins page shows that no plugins are added (also on Cloud Composer==2.0.15, airflow==2.2.5), while my local setup loads the plugin properly.
What's really interesting is that, despite having the same Airflow version, the two versions of Cloud Composer behave differently.
I don't override any of the default Airflow variables, and that shouldn't impact anything described here.
Many thanks for any suggestions.
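For reference, the webserver only recognizes a custom timetable once it is exposed through an AirflowPlugin in the plugins folder. A minimal sketch of such a registration, with placeholder module, class, and plugin names (none of them taken from the question):

# plugins/my_timetable_plugin.py  (hypothetical file name)
from airflow.plugins_manager import AirflowPlugin
from airflow.timetables.base import DataInterval, TimeRestriction, Timetable

class MyCustomTimetable(Timetable):
    """Placeholder timetable; the real scheduling logic goes here."""

    def infer_manual_data_interval(self, *, run_after):
        return DataInterval.exact(run_after)

    def next_dagrun_info(self, *, last_automated_data_interval, restriction: TimeRestriction):
        return None  # a real implementation would compute the next DagRunInfo

class MyTimetablePlugin(AirflowPlugin):
    # The webserver deserializes the DAG by importing "<module>.<class>"
    # registered here, which is exactly what the error above complains about.
    name = "my_timetable_plugin"
    timetables = [MyCustomTimetable]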

Azure Cognitive Services -> KeyError: 'Endpoint'

I am using the Python SDK for Computer Vision published in Microsoft Docs (https://learn.microsoft.com/es-es/azure/cognitive-services/computer-vision/quickstarts-sdk/python-sdk).
When I run the code, this error occurs:
Traceback (most recent call last):
File "c:/analyze_image_local.py", line 68, in <module>
description_result = computervision_client.describe_image_in_stream(local_image)
File "C:\Anaconda3\lib\site-packages\azure\cognitiveservices\vision\computervision\operations\_computer_vision_client_operations.py", line 1202, in describe_image_in_stream
request = self._client.post(url, query_parameters, header_parameters, body_content)
File "C:\Anaconda3\lib\site-packages\msrest\service_client.py", line 193, in post
request = self._request('POST', url, params, headers, content, form_content)
File "C:\Anaconda3\lib\site-packages\msrest\service_client.py", line 108, in _request
request = ClientRequest(method, self.format_url(url))
File "C:\Anaconda3\lib\site-packages\msrest\service_client.py", line 155, in format_url
base = self.config.base_url.format(**kwargs).rstrip('/')
KeyError: 'Endpoint'
I used the REST API method instead (https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/python-disk).
However, it can be useful to build the full endpoint URL with this line:
analyze_url = endpoint + "vision/v2.1/analyze"
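A minimal sketch of that REST call, with the key, endpoint, and image path as placeholders (header and parameter names follow the v2.1 quickstart linked above):

import requests

subscription_key = "<your_subscription_key>"  # placeholder
endpoint = "https://<your_resource>.cognitiveservices.azure.com/"  # placeholder
analyze_url = endpoint + "vision/v2.1/analyze"

# Read the local image as raw bytes and send it for analysis.
with open("local_image.jpg", "rb") as image_file:
    image_data = image_file.read()

headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/octet-stream",
}
params = {"visualFeatures": "Description"}
response = requests.post(analyze_url, headers=headers, params=params, data=image_data)
response.raise_for_status()
print(response.json()["description"])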
Simply running the following commands to reinstall the customvision SDK worked for me:
pip uninstall azure-cognitiveservices-vision-customvision
pip install azure-cognitiveservices-vision-customvision

iprof and iprof_totals profiling error

I get this error after running:
openmdao iprof x.py
or
openmdao iprof_totals x.py
in my terminal. Any idea why this could be? Is there a simple sample code where iprof works smoothly?
Traceback (most recent call last):
File "/home/user/miniconda3/bin/openmdao", line 11, in
sys.exit(openmdao_cmd())
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/utils/om.py", line 403, in openmdao_cmd
options.executor(options)
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/devtools/iprofile.py", line 373, in _iprof_totals_exec
_iprof_py_file(options)
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/devtools/iprofile.py", line 429, in _iprof_py_file
_finalize_profile()
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/devtools/iprofile.py", line 183, in _finalize_profile
qfile, qclass, qname = find_qualified_name(filename, int(line), cache, full=False)
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/devtools/iprof_utils.py", line 73, in find_qualified_name
with open(filename, 'Ur') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'packages/openmdao/jacobians/jacobian.py'
I looked into this and there is currently a bug when using 'openmdao iprof' and 'openmdao iprof_totals' on a version of OpenMDAO that was not installed from an OpenMDAO repository using 'pip install -e'. I put a story in our bug tracker to fix it.
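In other words, a workaround until the fix lands is to install OpenMDAO in editable mode from a clone of the repository, e.g. (repository URL assumed to be the main OpenMDAO GitHub repo):
git clone https://github.com/OpenMDAO/OpenMDAO.git
pip install -e OpenMDAO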

Apache Airflow - Was Working Fine Now Says Log File Isn't Local Error & Exceptions are Popping Up

So it looks like my install of Apache Airflow on a Google Compute Engine instance broke down. Everything was working great, and then two days ago all the DAG runs showed up stuck in a running state. I am using the LocalExecutor.
When I try to look at the log I get this error:
*** Log file isn't local.
*** Fetching here: http://:8793/log/collector/aa_main_combined_collector/2017-12-15T09:00:00
*** Failed to fetch log file from worker.
I didn't touch a setting anywhere. I looked through all the config files, scanned the logs, and I see this error:
[2017-12-16 20:08:42,558] {jobs.py:355} DagFileProcessor0 ERROR - Got an exception! Propagating...
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 347, in helper
pickle_dags)
File "/usr/local/lib/python3.4/dist-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 1584, in process_file
self._process_dags(dagbag, dags, ti_keys_to_schedule)
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 1173, in _process_dags
dag_run = self.create_dag_run(dag)
File "/usr/local/lib/python3.4/dist-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 763, in create_dag_run
last_scheduled_run = qry.scalar()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2843, in scalar
ret = self.one()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2814, in one
ret = self.one_or_none()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2784, in one_or_none
ret = list(self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2855, in iter
return self._execute_and_instances(context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2878, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1405, in _handle_dbapi_exception
util.reraise(*exc_info)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.4/dist-packages/airflow/bin/cli.py", line 69, in sigint_handler
sys.exit(0)
SystemExit: 0
Any thoughts out there?
I solved this problem, though in doing so I discovered another problem.
Long and short of it: as soon as I manually started the scheduler, everything worked again. It appears the problem was that the scheduler did not get restarted correctly after a system reboot.
I have the scheduler running through systemd. The webserver .service works fine. However, I do notice that the scheduler .service continually restarts. It appears there is an issue there I need to resolve. This part of it is solved for now.
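For reference, assuming the scheduler unit is called airflow-scheduler.service (the unit name is an assumption, not taken from this setup), its state and restart loop can be inspected with:
sudo systemctl status airflow-scheduler.service
sudo journalctl -u airflow-scheduler.service -n 100
sudo systemctl restart airflow-scheduler.service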
Look at the log URL and verify whether it ends with a date containing the special character +:
&execution_date=2018-02-23T08:00:00+00:00
This was fixed here.
You can replace the + with -, or percent-encode all special characters; in my case:
&execution_date=2018-02-23T08%3A00%3A00%2B00%3A00
This happens here.
The FileTaskHandler cannot load the log from local disk, so it tries to load it from the worker.
Another thing that could be causing this error is the airflow/logs folder, or the subfolders inside it, being missing or excluded.

OpenStack Nova and Oslo error

I have installed OpenStack with the devstack scripts in a two-node configuration, i.e. a controller/network node and a separate compute node. Everything seems to have started properly; I see the services running on the nodes (if I connect to the screen sessions). However, when trying to start an instance via the dashboard, it fails with the following in the logs:
ERROR oslo.messaging._drivers.common [req-892e7b17-49dc-4ce5-a193-ceaf547c97eb admin demo]
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 137, in _dispatch_and_reply incoming.message))
File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 180, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)
File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 126, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)
File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py", line 139, in inner
return func(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/manager.py", line 175, in select_destinations
filter_properties)
File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 147, in select_destinations
filter_properties)
File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 276, in _schedule
filter_properties, index=num)
File "/opt/stack/nova/nova/scheduler/host_manager.py", line 359, in get_filtered_hosts
filter_classes = self._choose_host_filters(filter_class_names)
File "/opt/stack/nova/nova/scheduler/host_manager.py", line 309, in _choose_host_filters
raise exception.SchedulerHostFilterNotFound(filter_name=msg)
SchedulerHostFilterNotFound: Scheduler Host Filter could not be found.
If somebody has faced this issue before, could you explain how to fix it?
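The filter list the error refers to is configured on the scheduler node in nova.conf. Purely as an illustration (option names from Nova releases of that era; the values are an assumption, not taken from this question), the relevant section looks something like:
[DEFAULT]
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter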
