Plone vagrant mailer issues

I'm running Plone 5.0 under Vagrant. For some reason the mailer is being blocked and returns Errno 111. I've tested on two different machines and can reproduce it on both. The Vagrant installer is supposed to provide the headless Ubuntu box with an SMTP server and set up the proper ports.
Error:
File "/home/vagrant/Plone/buildout-cache/eggs/Products.CMFPlone-5.0-py2.7.egg/Products/CMFPlone/controlpanel/browser/mail.py", line 83, in handle_test_action
immediate=True)
File "/home/vagrant/Plone/buildout-cache/eggs/Products.MailHost-2.13.2-py2.7.egg/Products/MailHost/MailHost.py", line 237, in send
self._send(mfrom, mto, messageText, immediate)
File "/home/vagrant/Plone/buildout-cache/eggs/Products.MailHost-2.13.2-py2.7.egg/Products/MailHost/MailHost.py", line 337, in _send
self._makeMailer().send(mfrom, mto, messageText)
File "/home/vagrant/Plone/buildout-cache/eggs/Products.CMFPlone-5.0-py2.7.egg/Products/CMFPlone/patches/sendmail.py", line 17, in _catch
return func(*args, **kwargs)
File "/home/vagrant/Plone/buildout-cache/eggs/zope.sendmail-3.7.5-py2.7.egg/zope/sendmail/mailer.py", line 46, in send
connection = self.smtp(self.hostname, str(self.port))
File "/usr/lib/python2.7/smtplib.py", line 256, in __init__
(code, msg) = self.connect(host, port)
File "/usr/lib/python2.7/smtplib.py", line 316, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/lib/python2.7/smtplib.py", line 291, in _get_socket
return socket.create_connection((host, port), timeout)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 111] Connection refused
That being said, how do I set this up?
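Since the traceback shows the connection being refused outright, one quick way to confirm the rest of the stack works (a minimal sketch, not the definitive setup — it assumes you can change the SMTP host/port in Plone's mail control panel) is to run Python's built-in debugging SMTP server inside the vagrant box and point Plone at localhost:1025:

import asyncore
import smtpd

# Debugging SMTP server: accepts mail on localhost:1025 and prints it to
# stdout instead of delivering it. Point Plone's mail settings at this
# host/port to rule out a missing MTA inside the vagrant box.
server = smtpd.DebuggingServer(('127.0.0.1', 1025), None)
asyncore.loop()

For real delivery you would instead install an MTA (e.g. postfix) inside the box, or point the mail control panel at an external SMTP relay.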

Related

Apache Airflow on Azure sqlalchemy Postgres SSL SYSCALL error: EOF detected

We deployed Apache Airflow 2.3.3 to Azure.
Webserver - Web App
Scheduler - ACI
Celery Worker - ACI
We were seeing errors on the Celery ACI console related to Postgres and Redis connection timeouts, as shown below:
[2022-09-22 18:55:50,650: WARNING/ForkPoolWorker-15] Failed operation _store_result. Retrying 2 more times.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context
cursor, statement, parameters, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
cursor.execute(statement, parameters)
psycopg2.DatabaseError: could not receive data from server: Connection timed out
SSL SYSCALL error: Connection timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/__init__.py", line 47, in _inner
return fun(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/__init__.py", line 117, in _store_result
task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 2887, in __iter__
return self._iter().__iter__()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 2897, in _iter
execution_options={"_sa_orm_load_options": self.load_options},
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1689, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 326, in _execute_on_connection
self, multiparams, params, execution_options
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1491, in _execute_clauseelement
cache_hit=cache_hit,
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
e, statement, parameters, cursor, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2027, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context
cursor, statement, parameters, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.DatabaseError: (psycopg2.DatabaseError) could not receive data from server: Connection timed out
SSL SYSCALL error: Connection timed out
[SQL: SELECT celery_taskmeta.id AS celery_taskmeta_id, celery_taskmeta.task_id AS celery_taskmeta_task_id, celery_taskmeta.status AS celery_taskmeta_status, celery_taskmeta.result AS celery_taskmeta_result, celery_taskmeta.date_done AS celery_taskmeta_date_done, celery_taskmeta.traceback AS celery_taskmeta_traceback
FROM celery_taskmeta
WHERE celery_taskmeta.task_id = %(task_id_1)s]
[parameters: {'task_id_1': 'c5f9f53c-8afe-4d67-8d3b-d7ad84875de1'}]
(Background on this error at: https://sqlalche.me/e/14/4xp6)
[2022-09-22 18:55:50,929: INFO/ForkPoolWorker-15] [c5f9f53c-8afe-4d67-8d3b-d7ad84875de1] Executing command in Celery: ['airflow', 'tasks', 'run', 'CS_ALERTING', 'CheckRunningTasks', 'scheduled__2022-09-22T18:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/CS_ALERTING.py']
[2022-09-22 18:55:51,241: INFO/ForkPoolWorker-15] Filling up the DagBag from /opt/airflow/platform_pam/dags/CS_ALERTING.py
[2022-09-22 18:55:53,467: INFO/ForkPoolWorker-15] Running <TaskInstance: CS_ALERTING.CheckRunningTasks scheduled__2022-09-22T18:00:00+00:00 [queued]> on host localhost
[2022-09-22 18:55:58,304: INFO/ForkPoolWorker-15] Task airflow.executors.celery_executor.execute_command[c5f9f53c-8afe-4d67-8d3b-d7ad84875de1] succeeded in 960.1964174450004s: None
[2022-09-22 19:29:25,931: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/redis/connection.py", line 706, in send_packed_command
sendall(self._sock, item)
File "/home/airflow/.local/lib/python3.7/site-packages/redis/_compat.py", line 9, in sendall
return sock.sendall(*args, **kwargs)
File "/usr/local/lib/python3.7/ssl.py", line 1034, in sendall
v = self.send(byte_view[count:])
File "/usr/local/lib/python3.7/ssl.py", line 1003, in send
return self._sslobj.write(data)
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 332, in start
blueprint.start(self)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 628, in start
c.loop(*c.loop_args())
File "/home/airflow/.local/lib/python3.7/site-packages/celery/worker/loops.py", line 97, in asynloop
next(loop)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 301, in create_loop
poll_timeout = fire_timers(propagate=propagate) if scheduled else 1
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 143, in fire_timers
entry()
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/timer.py", line 64, in __call__
return self.fun(*self.args, **self.kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/asynchronous/timer.py", line 126, in _reschedules
return fun(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/kombu/transport/redis.py", line 557, in maybe_check_subclient_health
client.check_health()
File "/home/airflow/.local/lib/python3.7/site-packages/redis/client.py", line 3522, in check_health
check_health=False)
File "/home/airflow/.local/lib/python3.7/site-packages/redis/connection.py", line 726, in send_command
check_health=kwargs.get('check_health', True))
File "/home/airflow/.local/lib/python3.7/site-packages/redis/connection.py", line 718, in send_packed_command
(errno, errmsg))
redis.exceptions.ConnectionError: Error 110 while writing to socket. Connection timed out.
[2022-09-22 19:29:26,023: WARNING/MainProcess] /home/airflow/.local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py:367: CPendingDeprecationWarning:
I referred to Airflow's documentation on setting up the database connection. We are modifying Airflow's Docker image, adding a Python file airflow.www.db_utils.db_config (installed into site-packages), and defining the dictionary
keepalive_kwargs = {
    "keepalives": 1,
    "keepalives_idle": 30,
    "keepalives_interval": 5,
    "keepalives_count": 5,
}
Finally, we are setting
ENV AIRFLOW__DATABASE__SQL_ALCHEMY_CONNECT_ARGS="airflow.www.db_utils.db_config.keepalive_kwargs"
Unfortunately, the error still persists. It would be great if someone could help me resolve this issue.
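As a quick sanity check (a sketch, not a fix), the value of AIRFLOW__DATABASE__SQL_ALCHEMY_CONNECT_ARGS must be a dotted import path that resolves to the dict from inside the running container. Something like the following, run in the container's Python, can confirm the module is actually importable where Airflow runs:

from importlib import import_module

# Resolve the dotted path exactly as configured in the env var; if this
# raises ImportError or AttributeError, Airflow never sees the keepalives.
path = "airflow.www.db_utils.db_config.keepalive_kwargs"
module_path, attr = path.rsplit(".", 1)
connect_args = getattr(import_module(module_path), attr)
print(connect_args)  # expect the keepalive dict shown above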

Airflow webserver throwing error -Socket error processing request

Our webserver is running and working fine, although in the logs we are seeing these errors periodically. Can someone provide pointers on these?
[2022-04-05 02:53:58 -0400] [129502] [ERROR] Socket error processing request.
Traceback (most recent call last):
File "/airflow/airflowvirt/lib64/python3.6/site-packages/gunicorn/workers/sync.py", line 135, in handle
req = next(parser)
File "/airflow/airflowvirt/lib64/python3.6/site-packages/gunicorn/http/parser.py", line 42, in __next__
self.mesg = self.mesg_class(self.cfg, self.unreader, self.source_addr, self.req_count)
File "/airflow/airflowvirt/lib64/python3.6/site-packages/gunicorn/http/message.py", line 180, in __init__
super().__init__(cfg, unreader, peer_addr)
File "/airflow/airflowvirt/lib64/python3.6/site-packages/gunicorn/http/message.py", line 54, in __init__
unused = self.parse(self.unreader)
File "/airflow/airflowvirt/lib64/python3.6/site-packages/gunicorn/http/message.py", line 192, in parse
self.get_data(unreader, buf, stop=True)
File "/airflow/airflowvirt/lib64/python3.6/site-packages/gunicorn/http/message.py", line 183, in get_data
data = unreader.read()
File "/airflow/airflowvirt/lib64/python3.6/site-packages/gunicorn/http/unreader.py", line 37, in read
d = self.chunk()
File "/airflow/airflowvirt/lib64/python3.6/site-packages/gunicorn/http/unreader.py", line 64, in chunk
return self.sock.recv(self.mxchunk)
File "/usr/lib64/python3.6/ssl.py", line 956, in recv
return self.read(buflen)
File "/usr/lib64/python3.6/ssl.py", line 833, in read
return self._sslobj.read(len, buffer)
File "/usr/lib64/python3.6/ssl.py", line 592, in read
v = self._sslobj.read(len)
OSError: [Errno 0] Error
[2022-04-05 02:53:58 -0400] [129500] [ERROR] Socket error processing request.

Unable to install 'en_core_web_sm'

I have installed spacy 3.1.2 and am trying to install en_core_web_sm in a Jupyter notebook in JupyterLab using
!python3 -m spacy download en_core_web_sm
but it shows the following error:
2021-09-01 07:00:12.028366: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
2021-09-01 07:00:12.028469: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
2021-09-01 07:00:12.028486: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 488, in wrap_socket
cnx.do_handshake()
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1934, in do_handshake
self._raise_ssl_error(self._ssl, result)
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1671, in _raise_ssl_error
_raise_current_error()
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 667, in urlopen
self._prepare_proxy(conn)
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 932, in _prepare_proxy
conn.connect()
File "/opt/conda/lib/python3.7/site-packages/urllib3/connection.py", line 371, in connect
ssl_context=context,
File "/opt/conda/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 384, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/opt/conda/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 494, in wrap_socket
raise ssl.SSLError("bad handshake: %r" % e)
ssl.SSLError: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])",)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/opt/conda/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/__main__.py", line 4, in <module>
setup_cli()
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/cli/_util.py", line 69, in setup_cli
command(prog_name=COMMAND)
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/conda/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/jovyan/.local/lib/python3.7/site-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/cli/download.py", line 35, in download_cli
download(model, direct, sdist, *ctx.args)
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/cli/download.py", line 67, in download
compatibility = get_compatibility()
File "/home/jovyan/.local/lib/python3.7/site-packages/spacy/cli/download.py", line 78, in get_compatibility
r = requests.get(about.__compatibility__)
File "/opt/conda/lib/python3.7/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
Due to this, I can't use
import spacy
nlp = spacy.load('en_core_web_sm')
because it fails with:
OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a Python package or a valid path to a data directory.
I have searched various posts and articles related to this, but nothing is working for me.
Can anyone tell me how to resolve this error? It would be very helpful.
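The traceback shows the download failing while fetching compatibility.json over HTTPS (certificate verification, commonly a proxy intercepting TLS). One workaround worth trying, assuming the model version and wheel URL below are right for spaCy 3.1.x (they follow the spacy-models release naming scheme and should be double-checked), is to install the model wheel directly and skip the compatibility lookup:

# In a notebook cell: install the model wheel directly from the
# spacy-models GitHub release, bypassing the compatibility.json fetch.
# The version (3.1.0) is an assumption matching spaCy 3.1.x.
!pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.1.0/en_core_web_sm-3.1.0-py3-none-any.whl

If the proxy also blocks that URL, the wheel can be downloaded on another machine and installed from a local path.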

from tensorflow.lite.python.interpreter import Interpreter prevents telepot

Good morning,
I am using TensorFlow Lite and I also wanted to use telepot.
I also installed the Coral USB accelerator, but I don't think the problem depends on it, since it happens regardless of whether I add --edgetpu to the end of the command that starts the program.
Sending messages or images only works if I place that code before this instruction:
from tensorflow.lite.python.interpreter import Interpreter
It's as if telepot were incompatible with tflite.
Obviously everything works without the telepot call.
What can I do?
I'm using a Raspberry Pi 4 with Debian Buster, Python 3.7 and OpenCV 4.1.
This is the error it gives me:
Traceback (most recent call last):
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 485, in wrap_socket
cnx.do_handshake()
File "/usr/lib/python3/dist-packages/OpenSSL/SSL.py", line 1915, in do_handshake
self._raise_ssl_error(self._ssl, result)
File "/usr/lib/python3/dist-packages/OpenSSL/SSL.py", line 1647, in _raise_ssl_error
_raise_current_error()
File "/usr/lib/python3/dist-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn
conn.connect()
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connection.py", line 394, in connect
ssl_context=context,
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 491, in wrap_socket
raise ssl.SSLError("bad handshake: %r" % e)
ssl.SSLError: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])",)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "TFLite_detection_webcam_Prova1.py", line 137, in <module>
spedisci()
File "TFLite_detection_webcam_Prova1.py", line 33, in spedisci
bot.sendPhoto(256868258, foto)
File "/usr/local/lib/python3.7/dist-packages/telepot/__init__.py", line 539, in sendPhoto
return self._api_request_with_file('sendPhoto', _rectify(p), 'photo', photo)
File "/usr/local/lib/python3.7/dist-packages/telepot/__init__.py", line 499, in _api_request_with_file
return self._api_request(method, _rectify(params), files, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/telepot/__init__.py", line 491, in _api_request
return api.request((self._token, method, params, files), **kwargs)
File "/usr/local/lib/python3.7/dist-packages/telepot/api.py", line 154, in request
r = fn(*args, **kwargs) # `fn` must be thread-safe
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/request.py", line 171, in request_encode_body
return self.urlopen(method, url, **extra_kw)
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/poolmanager.py", line 330, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 760, in urlopen
**response_kw
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 760, in urlopen
**response_kw
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 760, in urlopen
**response_kw
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/urllib3/util/retry.py", line 436, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool
(host='api.telegram.org', port=443): Max retries exceeded with url: /bot926377239:AAEl0gqMWzG0dMidkGNqcGr2wkeTLbgZn3g/sendPhoto (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
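A minimal sketch of the ordering workaround described above, with a placeholder token and a hypothetical image path (the chat id is the one from the traceback): do the Telegram call before the TFLite import, since the handshake only starts failing once tensorflow.lite has been imported.

import telepot

# Placeholder bot token and hypothetical image path.
bot = telepot.Bot('YOUR_BOT_TOKEN')
with open('foto.jpg', 'rb') as foto:
    bot.sendPhoto(256868258, foto)

# Import the TFLite Interpreter only after the Telegram request has finished.
from tensorflow.lite.python.interpreter import Interpreter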

Python responses module is erratic with multi-threading in Linux

I am trying to use the responses module to mock an HTTP server, and I have the feeling it is broken when the calls made with requests happen on another thread.
Consider the following:
import json
import responses
import requests
import threading

try:
    import urlparse
except ModuleNotFoundError:
    import urllib.parse as urlparse

server_url = 'http://server'
headers_json = {'content-type': 'application/json'}


def init():
    def endpoint(request):
        id = urlparse.parse_qs(urlparse.urlparse(request.path_url).query)["id"][0]
        data = {"foo": int(id)}
        return 200, headers_json, json.dumps(data)

    responses.add_callback(
        responses.GET, server_url + '/',
        callback=endpoint,
    )


@responses.activate
def test():
    init()

    def responses_routine():
        resp = requests.get(
            server_url + '/?id=456',
            headers=headers_json,
        )
        # {"foo": "456"}
        print(resp.json()["foo"])

    def print_routine():
        print("456")

    # This will print "456"
    # responses_routine()

    # This will print "456" after 2 seconds
    # threading.Timer(2, print_routine).start()

    # This will fail
    threading.Timer(2, responses_routine).start()


test()
It fails with a stack trace that clearly indicates no call ever reached responses: the requests module tried to make a real HTTP call to a remote server.
Exception in thread Thread-1:
Traceback (most recent call last):
File "/opt/python/python3.6.6/lib/python3.6/site-packages/urllib3/connection.py", line 159, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/opt/python/python3.6.6/lib/python3.6/site-packages/urllib3/util/connection.py", line 57, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/opt/python/python3.6.6/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python/python3.6.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/opt/python/python3.6.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/opt/python/python3.6.6/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/opt/python/python3.6.6/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/opt/python/python3.6.6/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/opt/python/python3.6.6/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/opt/python/python3.6.6/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/opt/python/python3.6.6/lib/python3.6/site-packages/urllib3/connection.py", line 181, in connect
conn = self._new_conn()
File "/opt/python/python3.6.6/lib/python3.6/site-packages/urllib3/connection.py", line 168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x2aaab0930438>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python/python3.6.6/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/opt/python/python3.6.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/opt/python/python3.6.6/lib/python3.6/site-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='server', port=80): Max retries exceeded with url: /?id=456 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x2aaab0930438>: Failed to establish a new connection: [Errno -2] Name or service not known',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python/python3.6.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/opt/python/python3.6.6/lib/python3.6/threading.py", line 1182, in run
self.function(*self.args, **self.kwargs)
File "tmp.py", line 35, in responses_routine
headers=headers_json,
File "/opt/python/python3.6.6/lib/python3.6/site-packages/requests/api.py", line 116, in post
return request('post', url, data=data, json=json, **kwargs)
File "/opt/python/python3.6.6/lib/python3.6/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/python/python3.6.6/lib/python3.6/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/opt/python/python3.6.6/lib/python3.6/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/opt/python/python3.6.6/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='server', port=80): Max retries exceeded with url: /?id=456 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x2aaab0930438>: Failed to establish a new connection: [Errno -2] Name or service not known',))
It fails on RedHat 6.6. I am using Python 3.6.6 (it also fails on 2.7 with a similar stack) with requests 2.22 and responses 0.10.6.
It's as if, once the main thread exited, all "handles" to responses were lost...
Any clues?
Actually, what's created within the responses.activate context should not escape it.
In the code above, the Timer's underlying thread is never joined inside that scope, so it fires after test() has returned, i.e. outside the context manager's scope, which is what raises the exception.
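A minimal sketch of that fix, keeping the same names as the example above (responses_routine is still defined inside test() as before): start the timer and join it before test() returns, so the request fires while the responses patch is still active.

@responses.activate
def test():
    init()
    # Keep the timer thread inside the activation scope: join() blocks until
    # responses_routine has run, so requests is still patched when it fires.
    t = threading.Timer(2, responses_routine)
    t.start()
    t.join()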
