Errors while using asyncio in Python 3.6 to execute terminal commands

I am using asyncio for the first time and I am getting these two errors:
1) RuntimeError: set_wakeup_fd only works in main thread
2) RuntimeError: Cannot add child handler, the child watcher does not have a loop attached
Usage scenario:
I have to execute two terminal commands, one after the other. To ensure the second command runs only after the first has finished, I am using asyncio to set up event loops and run them.
Ultimately, I am trying to call a function in this Python file from a Flask API.
Has anyone come across these errors and can guide me on resolving them?
Any help would be great!
Code:
async def segmentVide(command):
    process = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE)
    stdout, stderr = await process.communicate()
    asyncio.get_child_watcher()
    return stdout.decode().strip()

def segmentOutput(folderCreated):
    # command = MP4Box terminal command
    loop = asyncio.new_event_loop()
    asyncio.get_child_watcher().attach_loop(loop)
    asyncio.set_event_loop(loop)
    coro = loop.run_in_executor(None, segmentVide, command)
    loop.run_until_complete(coro)
    loop.close()
    print('completed')
There is another function that executes the other terminal command, but its asyncio usage is the same.
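For the sequential-execution requirement itself, here is a minimal sketch (the helper names run_command and run_both, and the echo commands, are placeholders for the real MP4Box invocations) that awaits both commands inside a single coroutine, so ordering is guaranteed without any child-watcher juggling, assuming the loop is driven from the main thread:
import asyncio

async def run_command(command):
    # Run one shell command and capture its output.
    process = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE)
    stdout, stderr = await process.communicate()
    return stdout.decode().strip()

async def run_both(first_command, second_command):
    # Awaiting the first subprocess before starting the second
    # already guarantees sequential execution.
    first_output = await run_command(first_command)
    second_output = await run_command(second_command)
    return first_output, second_output

# Driving the loop from the main thread avoids both the
# set_wakeup_fd and the child watcher errors.
loop = asyncio.get_event_loop()
loop.run_until_complete(run_both('echo first', 'echo second'))
The key point is that the first await already serializes the two subprocesses; no extra loop or executor is needed.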

Related

Show log from sub python function in AirFlow

I'm facing a small issue: I want to show all the logs from all my Python function operators.
My workflow:
my main Python function is used in a PythonOperator (everything printed in this function shows up in the Airflow logs)
my main Python function calls other Python functions, but the print and logging output from those sub-functions does not appear in the Airflow logs.
I even tried this:
import logging

logging.basicConfig(level=logging.DEBUG)

def subfunction():
    logging.info('This is an info message')  # not shown in the Airflow log

def main_function():
    print("call api")  # shown in the Airflow log
    subfunction()
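One approach that often helps, sketched under the assumption of Airflow's default logging configuration (where the task log handlers are attached to the "airflow.task" logger), is to log through that logger instead of calling basicConfig(), so sub-function messages are routed to the task log:
import logging

# Assumption: Airflow's default logging config, where the task log
# handlers hang off the "airflow.task" logger; messages sent through
# it are written to the task log.
log = logging.getLogger("airflow.task")

def subfunction():
    log.info('This is an info message')  # routed to the Airflow task log

def main_function():
    print("call api")
    subfunction()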

Python3.8 asyncio behavior difference between Windows and Unix

I am working on a script that will process a huge amount of data in Python.
I wrote the script using asyncio on Python 3.8 on a Windows box, where it works perfectly fine, but when I execute the same script on Unix with Python 3.8, it completes execution yet does not terminate at the end. It seems like it is not releasing resources/locks.
When I debugged further, I found that on Windows asyncio uses ProactorEventLoop whereas on Unix it uses _UnixSelectorEventLoop, but I am not sure whether this makes any difference.
I can't share the full script, but it follows the structure below:
import asyncio

async def myCoroutine():
    print("My Coroutine")

try:
    loop = asyncio.get_event_loop()
    loop.run_until_complete(myCoroutine())
    print("Execution Completed")
finally:
    print("Closing the loop")
    loop.close()
    print("loop Closed")
Output:
Execution Completed
Closing the loop
loop closed
But the program does not terminate.
Has anyone faced a similar issue before? Any inputs?
Thanks in Advance!!
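For comparison, here is a minimal sketch of the same structure using asyncio.run() (available since Python 3.7), which handles loop setup and teardown identically on every platform; the coroutine body is a stand-in:
import asyncio

async def myCoroutine():
    print("My Coroutine")

# asyncio.run() creates a fresh event loop, runs the coroutine,
# shuts down async generators, and closes the loop, so the manual
# try/finally bookkeeping above is no longer needed.
asyncio.run(myCoroutine())
print("Execution Completed")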

What does `python3 ()` do?

While trying to execute a timeit command on the command line using the Python command-line interface, I accidentally put .function() outside the command, like so:
$ python3 -m timeit '<code>'.function()
Rather than the timeit command being executed, I was prompted as such:
function>
Thinking I had entered the Python REPL, I tried to quit with q. (Yes, I'm aware quit() is the correct way to do this.) Having returned to the command line, I noticed the error and corrected it like so:
$ python3 -m timeit `<code>.function()`
I expected this code to execute correctly, but instead I received the following error:
python3:7: command not found: q
After discussing it with some colleagues, it was suggested that I check which python was being used:
$ which python3
python3 () {
    q
}
This was not what I was expecting! Normally the result would be /usr/local/bin/python3. Through some trial and error I was able to determine that the minimal case to reproduce this is:
$ python3 ()
function> q
$
Now that the context is out of the way, I have two questions about the behaviour I witnessed:
1. What exactly does python3 () do?
2. How do I return execution to its original state in the same terminal window? I'm aware I can open a new terminal window and the original state exists in that window.
The syntax foo () is used in POSIX-compliant shells (such as bash, dash, and zsh) to define a function. Your entire snippet defines a function called python3 that executes the command q when it is run. You can bypass shell functions and aliases using the command builtin: command -p python3 myfile.py
To remove the function from the current shell process, use unset -f python3. If it keeps coming back in new shells, it is likely defined in one of your shell initialization files.

Asyncio RuntimeError: Event loop is closed flask app

I have a Flask app hosted on the Gunicorn web server and am trying to parallelize a long-running I/O-bound task, somemethod, as shown below. However, it often (though not always) throws the error "Event loop is closed". What could cause this error to happen intermittently?
Removing loop.close() does fix the errors, but I am not sure whether there would be a memory leak in the Python worker process.
async def somemethod():
    """ Do some work """

@app.route('/hello', methods=['POST'])
def sayhello():
    loop = asyncio.new_event_loop()
    try:
        asyncio.set_event_loop(loop)
        future = asyncio.ensure_future(somemethod())
        loop.run_until_complete(future)
    finally:
        loop.close()
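One way to sidestep manual loop lifecycle management entirely, sketched under the assumption that somemethod takes no arguments, is to let asyncio.run() (Python 3.7+) create and dispose of a fresh loop per request:
import asyncio
from flask import Flask

app = Flask(__name__)

async def somemethod():
    """ Do some work """

@app.route('/hello', methods=['POST'])
def sayhello():
    # asyncio.run() builds a new loop, runs the coroutine to
    # completion, and closes the loop afterwards, so no loop object
    # is shared or left half-closed between requests.
    asyncio.run(somemethod())
    return 'done'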

What to do when a py.test hangs silently?

While using py.test, I have some tests that run fine with SQLite but hang silently when I switch to PostgreSQL. How would I go about debugging something like that? Is there a "verbose" mode I can run my tests in, or a way to set a breakpoint? More generally, what is the standard plan of attack when pytest stalls silently? I have tried pytest-timeout and ran the tests with $ py.test --timeout=300, but they still hang with no activity on the screen whatsoever.
I ran into the same SQLite/Postgres problem with Flask and SQLAlchemy, similar to Gordon Fierce. However, my solution was different. Postgres is strict about table locks and connections, so explicitly closing the session connection on teardown solved the problem for me.
My working code:
@pytest.yield_fixture(scope='function')
def db(app):
    # app is an instance of a Flask app, _db a SQLAlchemy DB
    _db.app = app
    with app.app_context():
        _db.create_all()

    yield _db

    # Explicitly close DB connection
    _db.session.close()
    _db.drop_all()
Reference: SQLAlchemy
To answer the question "How would I go about debugging something like that?"
Run with py.test -m trace --trace to get a trace of the Python calls.
One option (useful for any stuck Unix binary) is to attach to the process using strace -p <PID>. See what system call it might be stuck on, or whether it loops over system calls (e.g. stuck calling gettimeofday).
For more verbose py.test output, install pytest-sugar (pip install pytest-sugar) and run the tests with py.test --verbose . . .
https://pypi.python.org/pypi/pytest-sugar
I had a similar problem with pytest and Postgresql while testing a Flask app that used SQLAlchemy. It seems pytest has a hard time running a teardown using its request.addfinalizer method with Postgresql.
Previously I had:
@pytest.fixture
def db(app, request):
    def teardown():
        _db.drop_all()

    _db.app = app
    _db.create_all()
    request.addfinalizer(teardown)
    return _db
( _db is an instance of SQLAlchemy I import from extensions.py )
But if I drop the database every time the database fixture is called:
@pytest.fixture
def db(app, request):
    _db.app = app
    _db.drop_all()
    _db.create_all()
    return _db
Then pytest won't hang after your first test.
Not knowing what is breaking in the code, the best way is to isolate the failing test and set a breakpoint in it to have a look. Note: I use pudb instead of pdb, because it's really the best way to debug Python if you are not using an IDE.
For example, you can add the following to your test file:
import pudb
...

def test_create_product(session):
    pudb.set_trace()
    # Create the Product instance
    # Create a Price instance
    # Add the Product instance to the session.
    ...
Then run it with
py.test -s --capture=no test_my_stuff.py
Now you'll be able to see exactly where the script locks up, and examine the stack and the database at this particular moment of execution. Otherwise it's like looking for a needle in a haystack.
I was stuck on this problem for quite some time (though I wasn't using SQLite). The test suite ran fine locally, but failed in CircleCI (Docker).
My problem was ultimately that:
An object's underlying implementation used threading
The object's __del__ normally would end the threads
My test suite wasn't calling __del__ as it should have
I figured I'd add how I tracked this down. Other answers suggest these approaches:
pytest-timeout didn't help; the test hung after completion (invoked via pytest --timeout 5, with pytest==6.2.2 and pytest-timeout==1.4.2)
Running pytest -m trace --trace or pytest --verbose yielded no useful information either
I ended up having to comment literally everything out, including all conftest.py code and test code, then slowly uncommented/re-commented regions until I identified the root cause.
Ultimate solution: use a factory fixture to add a finalizer that calls __del__, as in the sketch below.
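A minimal sketch of that factory-fixture pattern, with ThreadedThing standing in as a hypothetical placeholder for the real object whose __del__ stops its threads:
import pytest

class ThreadedThing:
    # Stand-in for the real object that starts background threads
    # and stops them in __del__.
    def __del__(self):
        pass  # the real implementation would join its threads here

@pytest.fixture
def make_thing(request):
    created = []

    def factory():
        thing = ThreadedThing()
        created.append(thing)
        return thing

    def finalize():
        # Call __del__ explicitly so the worker threads are stopped
        # even if the garbage collector never runs it.
        for thing in created:
            thing.__del__()

    request.addfinalizer(finalize)
    return factory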
In my case the Flask application did not check if __name__ == '__main__': so it executed app.start() when that was not my intention.
You can read many more details here.
In my case, computing the diff for a failed assert over 4 MB of data was very slow, which looked like a hang:
with open(path, 'rb') as f:
    assert f.read() == data
Fixed by comparing outside the assert, so the assertion only sees a plain boolean and there is nothing large to diff:
with open(path, 'rb') as f:
    eq = f.read() == data
    assert eq
