Python: Prioritizing tasks and running asynchronous tasks without a lock

Right now I'm using Gevent, and I wanted to ask two questions:
Is there a way to execute specific tasks so that they never run concurrently with each other (instead of acquiring a Lock in each of these tasks)?
Is there a way to prioritize spawned tasks in Gevent? For example, a group of tasks spawned with low priority that will only be executed when all of the other tasks are done; say, two tasks that each listen on a different socket and handle that socket's requests with a different priority.
If it's not possible in Gevent, is there another library where it can be done?
Edit
Maybe Celery can help me here?

If you want to manage computing resources, Python async libraries can't help here because, AFAIK, none of them has a priority scheduler. All green threads are equal.
Task queues generally have a notion of priority, so Celery or Beanstalk is one way to do it.
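For instance, a minimal Celery sketch under the assumption of a RabbitMQ broker (the broker URL, queue, and task names are placeholders); per-message priority takes effect once the queue is declared with x-max-priority:

# Sketch: Celery per-message priority on RabbitMQ. Broker URL, queue and
# task names are placeholders; the queue must declare x-max-priority.
from celery import Celery
from kombu import Queue

app = Celery('tasks', broker='amqp://localhost')
app.conf.task_queues = [
    Queue('work', queue_arguments={'x-max-priority': 10}),
]
app.conf.task_default_queue = 'work'

@app.task
def handle(payload):
    ...

# High-priority interactive request vs. low-priority bulk task:
handle.apply_async(args=['ui-request'], priority=9)
handle.apply_async(args=['bulk-job'], priority=1)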
If your problem does not require task (re)execution guarantees, persistence, or multi-machine work distribution, I would just start a few worker processes, assign them CPU, I/O, and disk priorities using the OS, and send work/results over a UNIX datagram socket. It's a kind of ad-hoc, simpler version of a task queue. If you go this way, please share your work as an open source project; I believe there's demand for this kind of solution.
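A minimal sketch of that ad-hoc approach, assuming a Unix host (the socket path, nice values, and handler are illustrative):

# Sketch: niced worker processes fed over a UNIX datagram socket.
import os
import socket

SOCK_PATH = '/tmp/work.sock'

def handle(job: bytes):
    print(os.getpid(), 'processing', job)   # placeholder for real work

def serve():
    # One datagram socket shared by all forked workers: each datagram is
    # delivered to exactly one recvfrom() caller, so it behaves like a queue.
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.bind(SOCK_PATH)
    for niceness in (0, 0, 10):        # two normal workers, one low-priority
        if os.fork() == 0:
            os.nice(niceness)          # higher niceness = lower CPU priority
            while True:
                job, _ = sock.recvfrom(65536)
                handle(job)

def submit(payload: bytes):
    client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    client.sendto(payload, SOCK_PATH)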

Related

Airflow - How to configure all of a DAG's tasks to run on 1 worker

I have a DAG with 2 tasks:
download_file_from_ftp >> transform_file
My concern is that tasks can be performed on different workers. The file would be downloaded on the first worker and transformed on another worker, and an error would occur because the file is missing on the second worker. Is it possible to configure the DAG so that all tasks are performed on one worker?
It's a bad practice. Even if you find a workaround, it will be very unreliable.
In general, if your executor allows this, you can configure tasks to execute on a specific worker type. For example, with the CeleryExecutor you can assign tasks to a specific queue. Assuming there is only 1 worker consuming from that queue, your tasks will be executed on the same worker, BUT the fact that it's 1 worker doesn't mean it will be the same machine. It highly depends on the infrastructure that you use. For example: when you restart your machines, do you get the exact same machine or is a new one spawned?
I highly advise you - don't go down this road.
To solve your issue, either download the file to shared storage like S3, Google Cloud Storage, etc., so that all workers can read the file from the cloud, or combine the download and transform into a single operator so that both actions are executed together.
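For the combined-operator option, a minimal Airflow 2-style sketch; the helper functions are placeholders, and note that operators also accept a queue argument if you go the dedicated-queue route:

# Sketch: combine download and transform into a single task so both
# always run on the same worker. Helper functions are placeholders.
import pendulum
from airflow.decorators import dag, task

@dag(schedule=None, start_date=pendulum.datetime(2023, 1, 1), catchup=False)
def ftp_pipeline():
    @task
    def download_and_transform():
        path = download_file_from_ftp()   # hypothetical helper
        transform_file(path)              # hypothetical helper

    download_and_transform()

ftp_pipeline()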

How to evenly balance processing many simultaneous tasks?

PROBLEM
Our PROCESSING SERVICE is serving UI, API, and internal clients and listening for commands from Kafka.
A few API clients might create a lot of generation tasks (one task is N messages) in a short time. With Kafka, we can't control command distribution, because each command goes to a partition which is consumed by one processing instance (aka worker). Thus, UI requests could wait too long while API requests are being processed.
In an ideal implementation, we should handle all tasks evenly, regardless of their size. The capacity of the processing service is distributed among all active tasks. Even if the cluster is heavily loaded, we can be confident that a newly arrived task will be able to start processing almost immediately, at least before the processing of all other tasks ends.
SOLUTION
Instead, we want an architecture with a separate queue per combination of customer and endpoint. This gives us much better isolation, as well as the ability to dynamically adjust throughput on a per-customer basis.
On the producer side:
the task comes from the client
immediately create a queue for this task
send all messages to this queue
On the consumer side:
in one process, constantly update the list of queues
in other processes, follow this list and consume, for example, one message from each queue (see the sketch after this list)
scale consumers
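A minimal sketch of that consumer loop using RabbitMQ and pika (the queue-discovery function and handler are placeholders):

# Sketch: round-robin one message per queue with pika. The queue-discovery
# function is a stub; in practice another process keeps the list current
# (e.g. via the RabbitMQ management API).
import pika

def current_queues():
    # placeholder list of per-task queues
    return ['task-customerA-endpoint1', 'task-customerB-endpoint2']

def process(body: bytes):
    print('processing', body)              # placeholder for real work

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

while True:
    for name in current_queues():
        channel.queue_declare(queue=name)  # idempotent; ensures queue exists
        method, properties, body = channel.basic_get(queue=name, auto_ack=True)
        if body is not None:
            process(body)                  # at most one message per queue per pass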
QUESTION
Is there any common solution to such a problem, using RabbitMQ or any other tooling? Historically, we use Kafka on this project, so if there is an approach using it, that would be amazing, but we can use any technology for the solution.
Why not use Spark to execute the messages within the task? What I'm thinking is that each worker creates a Spark context that then parallelizes the messages. The function that is mapped can be based on which Kafka topic the user is consuming. I suspect, however, that your queues might have tasks that contain a mixture of messages: UI, API calls, etc. This will result in a more complex mapping function. If you're not using a standalone cluster and are using YARN or something similar, you can change the queueing method that the Spark master is using.
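A rough sketch of what that could look like with PySpark; the topic-to-handler map is hypothetical:

# Sketch: parallelizing one task's messages with Spark, dispatching by topic.
from pyspark import SparkContext

sc = SparkContext(appName='task-processor')

def handle_ui(msg):
    return ('ui', msg)      # placeholder handlers
def handle_api(msg):
    return ('api', msg)

HANDLERS = {'ui-topic': handle_ui, 'api-topic': handle_api}

def process_task(topic, messages):
    # Spread the task's messages across the cluster and apply the
    # handler chosen by the topic the user is consuming.
    return sc.parallelize(messages).map(HANDLERS[topic]).collect()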
As I understand the problem, you want to create request isolation per customer using dynamically allocated queues, which will allow each customer's tasks to be executed independently. The problem looks similar to the head-of-line blocking issue in networking.
Dynamically allocating queues is difficult. It can also lead to an explosion in the number of queues, which can be a burden on the infrastructure. Also, some queues could be empty or carry very little load. RabbitMQ won't help here; it is a queue with a different protocol than Kafka.
One alternative is to use a custom partitioner in Kafka that can look at partition load and balance the tasks based on it. This works if the tasks are independent in nature and there is no state store maintained in the worker.
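A sketch of what such a partitioner could look like with the kafka-python client; the load estimate is a stand-in for real metrics:

# Sketch: a load-aware custom partitioner with kafka-python.
import random
from kafka import KafkaProducer

def estimated_load(partition):
    return random.random()     # placeholder for real per-partition metrics

def least_loaded_partitioner(key_bytes, all_partitions, available_partitions):
    candidates = available_partitions or all_partitions
    return min(candidates, key=estimated_load)

producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         partitioner=least_loaded_partitioner)
producer.send('tasks', b'payload')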
The other alternative would be to load balance at the customer level. In this case you select a dedicated set of predefined queues for a set of customers: customers with certain IDs will be served by a given set of queues. The downside is that some queues can have less load than others. This solution is similar to Virtual Output Queuing in networking.
My understanding is that partitioning the messages does not ensure an even load balance. I think you should avoid overengineering and custom layers on top of the Kafka partitioner, and instead think about a good partitioning key that will allow you to use Kafka efficiently.

How does async task interrupt main thread (from itself - the main one)?

I can't seem to find this specific implementation detail, or even a pointer to where in an OS book to find this.
Basically, main thread calls an async task (to be run later) on itself. So... when does it run?
Does it wait for the run loop to finish? Or does it just randomly interrupt the run-loop in the middle of any function?
I understand the registers will be the same (unless it's a separate thread), but not really what happens to the instruction pointer, or to the stack, if anything happens at all.
Thank you
In C# the task is scheduled to run on the current SynchronizationContext. The context basically has a queue of tasks which it schedules to run on the thread(s) it is associated with; in a GUI app there is only one such thread, so the task is scheduled to run there.
The GUI thread is not interrupted; it executes the task when it finishes all other tasks preceding it in the queue.
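The same queue-and-run-when-idle behavior can be seen in Python's asyncio, as an analogy rather than the C# machinery: a callback scheduled with call_soon never interrupts running code; it runs only when control returns to the event loop.

# Sketch: a scheduled callback never interrupts running code; it fires
# only when the loop regains control (analogous to a GUI message queue).
import asyncio

def callback():
    print('callback runs after the current task yields')

async def main():
    loop = asyncio.get_running_loop()
    loop.call_soon(callback)       # queued, not executed immediately
    print('still running: the callback has not fired yet')
    await asyncio.sleep(0)         # yield to the loop; now the callback runs
    print('back after the callback')

asyncio.run(main())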
The threads of a process all share the same address space, not the same CPU registers. How thread scheduling is done depends on the programming language and the OS. Usually there are explicit scheduling points, such as returning from a system call, blocking while awaiting I/O completion, or between p-code instructions for interpreted languages. Some OS implementations reschedule based on how long a thread has run, for time-based scheduling. Often languages include a function that explicitly offers the CPU to any other thread or process by transferring control to the process or thread scheduler component of the OS.
The act of switching from one thread or process to another is known as a context switch; it is carefully tuned code because it is often done thousands of times per second. This can make the code difficult to follow.
The best explanation of this I've ever seen is the classic http://www.amazon.com/The-Design-UNIX-Operating-System/dp/0132017997

Symfony2 Job Queue or Parallel Processing?

Does anyone know how to run a number of processes in the background, either through a job queue or parallel processing?
I have a number of maintenance updates that take time to run and want to do this in the background.
I would recommend a Gearman server; it proved quite stable. It's totally outside of Symfony2, and you have to have the server up and running (I don't know what your hosting options are), but it distributes jobs perfectly. In the skinniest version, it just keeps all jobs in memory, but you can configure it to use an sqlite database as a backup, so if for any reason the server reboots or the gearman daemon breaks, you can just start it again and your jobs will be preserved. I know it has been tested with very large loads (up to 1k jobs added per second), and it stood its ground. It's probably more stable nowadays; I'm speaking from experience from 2 years ago, when we offloaded some long-running tasks in a ZF application to background processing via Gearman.
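Gearman itself is language-agnostic; as a rough illustration, here is the submit/worker pattern with the python-gearman client (a Symfony2 app would use PHP's GearmanClient instead; the task name and payload are placeholders):

# Sketch: Gearman worker and client with python-gearman.
import gearman

def run_maintenance(worker, job):
    # job.data carries the payload submitted by the client
    print('running maintenance:', job.data)   # placeholder for the real update
    return 'done'

def start_worker():
    worker = gearman.GearmanWorker(['localhost:4730'])
    worker.register_task('maintenance', run_maintenance)
    worker.work()   # blocks, processing jobs as they arrive

def enqueue_job():
    # In the web process: fire and forget a background job.
    client = gearman.GearmanClient(['localhost:4730'])
    client.submit_job('maintenance', 'rebuild-index', background=True)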
Check out RabbitMQ. It's the most popular option according to knpbundles.com
Take a look at http://github.com/mmoreram/rsqueue-bundle
It uses Redis as the queue core and will be maintained.
Take a look at the enqueue library. There are a lot of transports (AMQP, STOMP, Amazon SQS, Redis, Filesystem, Doctrine DBAL, and more) to choose from. It's easy to use and feature rich. That would be enough for a simple job queue, though if you need something more sophisticated, look at enqueue/job-queue. It can run an exclusive job (only one job running at a given time), a job with sub-jobs, or a job with something to do after it has been done.
Of course, there is a bundle for it

How can a LuaSocket server handle several requests simultaneously?

The problem is the inability of my Lua server to accept multiple requests simultaneously.
I attempted to make each client message be processed in its own coroutine, but this seems to have failed.
while true do
  local client = server:accept()
  coroutine.resume(coroutine.create(function()
    GiveMessage(client)
  end))
end
This code seems to not actually accept more than one client message at the same time. What is wrong with this method? Thank you for helping.
You will not be able to create truly simultaneous handling with coroutines only: coroutines are for cooperative multitasking, and only one coroutine executes at a time.
The code that you've written is no different from calling GiveMessage() in a loop directly. You need to write a coroutine dispatcher and find a sensible reason to yield from GiveMessage() for that approach to work.
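For contrast, here is what a dispatcher plus yield points buys you, sketched with Python's asyncio rather than Lua (Copas plays the analogous dispatcher role for LuaSocket): each await is a point where another client's coroutine can run.

# Sketch (Python analogy, not Lua): the event loop is the dispatcher,
# and every await is a yield point where another client's coroutine runs.
import asyncio

async def give_message(reader, writer):
    data = await reader.readline()     # yields to the dispatcher while waiting
    writer.write(b'echo: ' + data)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(give_message, '127.0.0.1', 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())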
There are at least three solutions, depending on the specifics of your task:
Spawn several forks of your server and handle operations in coroutines in each fork. Control coroutines either with Copas, with lua-ev, or with a home-grown dispatcher; nothing wrong with that. I recommend this way.
Use Lua states instead of coroutines: keep a pool of states, a pool of worker OS threads, and a queue of tasks. Execute each task in a free Lua state on a free worker thread. This requires some low-level coding and is messier.
Look for existing, more specialized solutions: there are several, but to advise on that I would need to know better what kind of server you're writing.
Whatever you choose, avoid using a single Lua state from several threads at the same time. (It is possible, with the right amount of coding, but a bad idea.)
AFAIK coroutines don't play nicely with LuaSocket out of the box. But there is Copas you can use.
