What are the async benefits on the backend?

I understand the benefits of async on the frontend (some kind of UI). With async the UI does not block and the user can "click" on other things while waiting for another operation to execute.
But what is the benefit of async programming on the backend?

The main benefit is that a backend typically performs various slow operations, and if they run synchronously they tie up threads that other requests need. These operations could be: 1. database operations, 2. file operations, 3. remote calls to other services (servers), etc. You don't want to block a thread while these operations are in progress.
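As a minimal sketch of what that looks like in practice (assuming ASP.NET Core with hypothetical _db and _http dependencies), an async request handler awaits each slow operation instead of blocking on it:

    [HttpGet("orders/{id}")]
    public async Task<IActionResult> GetOrder(int id)
    {
        // While each await is outstanding, the request thread is returned to the
        // pool and can serve other requests instead of sitting blocked on I/O.
        var order = await _db.Orders.FindAsync(id);                                // database operation
        var status = await _http.GetStringAsync($"https://shipping.example/{id}"); // remote service call
        await System.IO.File.AppendAllTextAsync("audit.log", $"read {id}\n");      // file operation
        return Ok(new { order, status });
    }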

First of all, there is a benefit to handling more than one request at a time. Frameworks like ASP.NET or Django create new (or reuse existing) threads for each request.
If you mean async operations within the thread of a particular request, that's more complicated. In many cases it does not help at all, because of the overhead of spawning a new thread. But we have things like schedulers (for example, the TaskScheduler in C#), which help a lot. When used correctly, they free up a lot of CPU time that would otherwise be wasted on waiting.
For example, you send a picture to a server. Your request is handled on a new thread. This thread can do everything on its own: unpack the picture and save it to disk, then update the database.
Or, you can write to disk AND update the database at the same time. The thread that finishes first is our focus here. Without a scheduler, it starts spinning in a loop, checking whether the other thread is done, which takes CPU time. With a scheduler, that thread is freed, and when the other task is done, probably yet another pre-created thread finishes handling your request.
That scenario makes it seem like it's not worth the fuss, but it is easy to imagine more complicated tasks that can be done at the same time instead of sequentially. On top of that, schedulers are rather smart and will keep the total time needed low and CPU usage moderate.
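Here is a rough sketch of the two-at-once variant of the picture example (Unpack, SaveToDiskAsync, and UpdateDatabaseAsync are hypothetical helpers); with await Task.WhenAll, no thread spins in a loop waiting for the other operation:

    public async Task HandleUploadAsync(byte[] picture)
    {
        var image = Unpack(picture);

        // Start both operations; they run at the same time.
        Task diskTask = SaveToDiskAsync(image);
        Task dbTask = UpdateDatabaseAsync(image);

        // Frees the current thread until both finish; the total time is
        // roughly that of the slower operation, not the sum of the two.
        await Task.WhenAll(diskTask, dbTask);
    }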

Related

Why is async programming faster

I keep hearing that using async programming patterns will make my code run faster. Why is that true? Doesn't the same exact code have to run either way, whether it runs now or it runs later?
It's not faster, it just doesn't waste time.
Synchronous code stops processing while waiting for I/O, which means that while you're reading a file you can't run any other code. Now, if you have nothing else to do while that file is being read, then asynchronous code wouldn't buy you much.
The extra CPU time you free up is usually most useful on servers. So the question becomes: why do asynchronous programming instead of starting up a new thread for each client?
It turns out that starting and tearing down threads is expensive. Some time back in the early 2000s, a web server benchmark found that tclhttpd compared favorably to Apache for serving static image files. This is despite the fact that tclhttpd was written in Tcl and Apache was written in C, and Tcl was known to be 50 times slower than C. Tcl managed to hold its own against Apache because Tcl had an easy-to-use asynchronous I/O API, so tclhttpd used it.
It's not that C doesn't have asynchronous I/O APIs; it's just that they're rarely used, so Apache didn't use them. These days, Apache2 uses asynchronous I/O internally along with thread pools. The C code ends up looking more complicated, but it's faster - lesson learned.
Which leads us to the recent obsession with asynchronous programming. Why are people obsessed with it? (Most answers on Stack Overflow about JavaScript programming, for example, insist that you should never use the synchronous versions of asynchronous functions.)
This goes back to how you rarely see asynchronous programs in C even though it's the superior way of doing things (GUI code is an exception, because UI libraries learned early on to rely on asynchronous programming and events). There are simply too many functions in C that are synchronous, so even if you wanted to do asynchronous programming you'd end up calling a synchronous function sooner or later. The alternative is to abandon the stdlib and write your own asynchronous libraries for everything - from file I/O to networking to SQL.
So, in languages like JavaScript, where asynchronous programming ended up as the default style, there is pressure from other programmers not to mess things up by accidentally introducing synchronous functions that would be hard to integrate with asynchronous code without losing a lot of performance. In the end, like taxes, asynchronous code has become a social contract.
It's not always faster. In fact, just setting up and tearing down the async machinery adds time to your code: you have to spin off a new process/thread, set up an event queue/message pump, and clean everything up nicely at the end. (Even if your framework hides all these details from you, they're happening in the background.)
The advantage lies in blocking. Lots of our code depends on external resources: we need to query a database for the records to process, or download the latest version of something from a website. From the moment you ask that resource for information until you get an answer, your code has nothing to do. It's blocking, waiting for an answer, and all the time your program spends blocking is totally wasted.
That's what async is designed for. By spinning the "wait for this blocking operation" code off into an async request, you let the rest of your non-blocking code keep running.
As a metaphor, imagine a manager telling his employee what to do that day. One of the tasks is a phone call to a company with long wait times. If he told her to make the call synchronously, she would call and wait on hold without doing anything else. Make it async and she can work on a lot of other tasks while the phone sits on hold in the background.
It runs the same code, but it does not wait for the time-consuming task to finish. It continues to execute other code until the async operation is done.
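As a small illustration of that last point (a console sketch; DoOtherWork is a hypothetical stand-in for the rest of your code), the download is started first, unrelated work proceeds while it is in flight, and the code only waits at the await:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class Program
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            // Start the request immediately, but do not wait for it yet.
            Task<string> download = client.GetStringAsync("https://example.com/data");

            DoOtherWork(); // runs while the response is still on its way

            string data = await download; // wait only now, and only if it isn't done yet
            Console.WriteLine(data.Length);
        }

        static void DoOtherWork() { /* hypothetical other work */ }
    }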

Asynchronous task queue or not?

I'm looking at using Celery to execute some tasks for my website asynchronously (yes, I'm super new to this idea and will probably say some stupid things in this question, sorry in advance). I'm wondering: what criteria do people use to determine whether a particular task should be executed asynchronously with a task queue like Celery versus with an HTTP request or an AJAX request? After reading a few blogs, etc., people have been suggesting using task queues for:
Tasks that the user doesn't need immediately
Tasks that are periodic
Preventing tons of database requests (or other expensive tasks) from being executed all at once
Aggregating tasks
So I guess my question is: what types of tasks should I not use a task queue for? If a task is not holding up any other part of a request (not keeping a user waiting) and isn't periodic, is there a situation where it would still make sense to use a task queue? Does it make sense to aggregate database modifications, and if so, how exactly does that save resources? Thanks for the help!
I've been looking at this some more, and my conclusion is that a queue should be used for tasks only if:
there is an increase in efficiency
the task is independent of other processes
the task is simple
the task is repeated a lot
This is a pretty weak answer, but if it starts a discussion by people more knowledgeable than myself it will have done its job :)
Adding:
If you want to guarantee execution of a task (task queues typically support retrying)
If you want to stay within a 3rd party rate limit (say, send up to 10 emails per second)
If a task is CPU intensive and would bog down other client requests to your main API server
An incredibly good resource for this is here, both part 1 and 2

ASP.NET and multithreading best practices

I am working on ASP.NET project and yesterday I saw a piece of code that uses System.Threading.Thread to offload some tasks to a new thread. The thread runs a few SQL statements and logs the result.
Isn't it better to use another approach? For example to have a Windows Service that performs the SQL batch. Then the web page will just enqueue the batch (via WCF).
In general, what are the best practices for multithreading in ASP.NET? Are there justified usages of threads/TPL tasks/etc. in a web page?
My thoughts on using multi-threading in ASP.NET:
ASP.NET recycles the AppDomain for various reasons, such as a change to web.config, or periodically to avoid memory leaks. The problem is that you don't know exactly when a recycle will happen. A long-running thread is therefore not suitable, because when ASP.NET recycles, it takes your thread down with it. The right approach in this case is to run long-running tasks in a background process via a queue, as you mention.
For short-running, fire-and-forget tasks, the TPL or async/await is the most appropriate choice, because it does not block thread-pool threads that could otherwise serve HTTP requests.
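One option in that vein (a sketch assuming .NET Framework 4.5.2 or later, with a hypothetical LogResultsAsync method) is HostingEnvironment.QueueBackgroundWorkItem, which registers fire-and-forget work with the runtime so that an AppDomain shutdown is delayed for a limited grace period while registered work finishes:

    using System.Web.Hosting;

    // Queues the SQL-and-log work without tying up the request.
    HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
    {
        // The token is signalled when ASP.NET begins shutting down.
        await LogResultsAsync(cancellationToken);
    });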
In my opinion this should be solved by raising some kind of flag in the database and having a Windows service that periodically checks the flag and starts the job. If the job runs too frequently, a dedicated queue solution should be used (MSMQ, RabbitMQ, etc.) to avoid overloading the database or having the table grow too fast. I don't think communicating directly with the Windows service via WCF or anything else is a good idea, because that may result in dropped messages.
That being said, sometimes a project needs to run on shared hosting and cannot set up a dedicated Windows service. In this case a thread is acceptable as a workaround that should be removed as soon as the project grows enough to have its own server.
I believe all other threading in ASP.NET is a sign of a problem, except for using Tasks to represent async operations, or in the extremely rare case when you want to perform a computation in parallel in a web project but your project has very few concurrent users (fewer concurrent users than the number of cores).
Why are Tasks useful in ASP.NET?
First reason to use Tasks for async operations is that as of .NET 4.5 async APIs return Tasks :)
Async operations (not to be confused with parallel computations) may be web service calls, database calls, etc. They may be useful for two things:
Fire several of them at once and your job will take time equal to the longest operation. If you fire them in sequential (non-async) fashion, they will take time equal to the sum of the times of each operation, which is obviously more (see the sketch after the next point).
They can improve scalability by releasing the thread executing the page - Node.js style. ASP.NET has supported this forever, but in version 4.5 it is really easy to use. I'll go as far as claiming that it is easier than Node.js because of async/await. Releasing the thread is important because you can deplete the threads in the pool by having them all wait. The result is that your website becomes slow beyond a certain number of users, even though CPU usage sits at around 30%, simply because new requests are waiting in the queue. If you increase the number of threads in the thread pool, you pay the price of constant context switching by the OS. At a certain point you will reach 100% CPU usage, but 40% of it will be spent on context switching; you will increase the throughput, but with diminishing returns. A lot of threads also increase the memory footprint.
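A sketch of the first point, with two hypothetical service calls fired together:

    using var client = new HttpClient();

    // Both requests are in flight at the same time.
    Task<string> userTask = client.GetStringAsync("https://users.example/api/1");
    Task<string> ordersTask = client.GetStringAsync("https://orders.example/api?user=1");

    await Task.WhenAll(userTask, ordersTask);

    // Elapsed time is roughly that of the slower call, not the sum of both,
    // and the thread was free to serve other requests while awaiting.
    string user = await userTask;
    string orders = await ordersTask;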

Designing an asynchronous task library for ASP.NET

The ASP.NET runtime is meant for short workloads that can be run in parallel. I need to be able to schedule periodic events and background tasks that may or may not run for much longer periods.
Given the above I have the following problems to deal with:
The AppDomain can shutdown due to changes (Web.config, bin, App_Code, etc.)
IIS recycles the AppPool on a regular basis (daily)
IIS itself might restart, or for that matter the server might crash
I'm not convinced that running this code inside ASP.NET is the wrong thing to do, because it would allow for a simpler programming model. But doing so would require that an external service periodically make requests to the app so that the application is kept running, and all background tasks would have to be programmed with the utmost care. They would have to be able to pause and resume their work in the event of an unexpected error.
My current line of thinking goes something like this:
If all jobs are registered in the database, it should be possible to use the database as a bookkeeping mechanism. In the case of an error, the database would contain all state necessary to resume the operation at the next opportunity given.
I'd really appreciate some feedback/advice on this matter. I've been considering running a Windows service and using some RPC solution as well, but it doesn't have the same appeal to me, and instead I'd have a lot of deployment issues and tasks and code to synchronize across several applications. Due to my business needs this is less than optimal.
This is a shot in the dark since I don't know what database you use, but I'd recommend you consider dialog timers and activation (a SQL Server Service Broker feature). Assuming that most of the jobs have to do some data manipulation, and it is likely that all of them do only data manipulation, leveraging activation and timers gives an extremely reliable job-scheduling solution, entirely embedded in the database (no need for an external process/service, and no dependencies outside the database's bounds, like msdb). It is a solution that ensures scheduled jobs survive restarts, failover events, and even disaster recovery restores. Simply put, once a job is scheduled it will run even if the database is restored one week later on a different machine.
Have a look at Asynchronous procedure execution for a related example.
And if this is too radical, at least have a look at Using Tables as Queues since storing the scheduled items in the database often falls under the 'pending queue' case.
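If you go the tables-as-queues route, the dequeue step usually looks something like this (a sketch assuming SQL Server and a hypothetical dbo.JobQueue table, called from C# via System.Data.SqlClient):

    using System.Data.SqlClient;

    const string dequeueSql = @"
        WITH next AS (
            SELECT TOP (1) *
            FROM dbo.JobQueue WITH (ROWLOCK, READPAST)
            ORDER BY EnqueuedAt)
        DELETE FROM next
        OUTPUT deleted.JobId, deleted.Payload;";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(dequeueSql, connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            if (reader.Read())
            {
                int jobId = reader.GetInt32(0);
                string payload = reader.GetString(1);
                // Process the job here; the READPAST hint lets concurrent
                // workers skip rows that another worker has already locked.
            }
        }
    }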
I recommend that you have a look at Quartz.Net. It is open source and it will give you some ideas.
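To give an idea of the programming model (a sketch against the Quartz.NET 3.x API; CleanupJob is a hypothetical job):

    using Quartz;
    using Quartz.Impl;

    public class CleanupJob : IJob
    {
        public Task Execute(IJobExecutionContext context)
        {
            // periodic work goes here
            return Task.CompletedTask;
        }
    }

    // At application startup:
    var scheduler = await new StdSchedulerFactory().GetScheduler();
    await scheduler.Start();

    var job = JobBuilder.Create<CleanupJob>().WithIdentity("cleanup").Build();
    var trigger = TriggerBuilder.Create()
        .StartNow()
        .WithSimpleSchedule(s => s.WithIntervalInMinutes(30).RepeatForever())
        .Build();

    await scheduler.ScheduleJob(job, trigger);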
Using the database as a state-keeping mechanism is a completely valid idea. How complex it gets depends on how far you want to take it. In many cases you will end up pairing your database logic with a Windows service to achieve the desired result.
FWIW, it is typically not good practice to manually use the thread pool inside an ASP.NET application, though (contrary to what you may read) it actually works quite nicely, apart from the huge caveat that you can't guarantee the work will run to completion.
So if you need a background thread that examines the state of some object every 30 seconds, and you don't care whether it fires every 30 seconds or 29 seconds or 2 minutes (such as during a long app pool recycle), an ASP.NET-spawned thread is a quick and very dirty solution.
Asynchronously fired callbacks (such as on the ASP.Net Cache object) can also perform a sort of "behind the scenes" role.
I have faced similar challenges and ultimately opted for a Windows service that uses a combination of building blocks for maximum flexibility. Namely, I use:
1) WCF with implementation-specific types OR
2) Types that are meant to transport and manage objects that wrap a job OR
3) Completely generic, serializable objects contained in a custom wrapper. Since they are just a binary payload, this allows any object to be passed to the service. Once in the service, the wrapper defines what should happen to the object (e.g. invoke a method, gather a result, and optionally make that result available for return).
Ultimately, the web site is responsible for querying the service about its state. This querying can be as simple as polling or can use asynchronous callbacks with WCF (though I believe this also uses some sort of polling behind the scenes).
I'll tell you what I have done.
I created a class called Atzenta that has a timer (with a 1-2 second trigger).
I also created a table in my temporary database that keeps the jobs. The table records the job ID, other parameters, the priority, the job status, and messages.
I can add or delete a job through this class. When there is no action to be done, the timer is stopped; when I add a job, the timer starts again. (The timer is a thread by itself and can do work in parallel.) I use System.Timers, and no other timer classes, for this.
Jobs can have different priorities.
Now let's say I place a job in this table using the Atzenta class. The next time the timer fires, it queries the table, finds the first available job, and runs it. No other job runs until that one ends.
All synchronization and flags go through the table. In the table I have flags for every job that show whether it is |waiting to run|requested to run|running|paused|finished|killed|.
All jobs are already-known functions or classes (e.g., the creation of statistics).
For stop and start, I use global.asax and the Application_Start and Application_End events to start and pause the object that keeps the tasks. For example, when a job is running and I get Application_End, I either wait for it to finish and then stop the app, or I stop the action, note that in the table, and start it again on Application_Start.
So I call Atzenta.RunTheJob(Jobs.StatisticUpdate, ProductID); this adds the job to the table and starts the timer, and on the next trigger the job runs and updates the statistics for the given product ID.
I use a table in a database to synchronize the many pools that run the same web app, and in fact it works that way. With a common table, synchronizing the jobs is easy, and you avoid having two pools run the same job at the same time.
In my back office I have a simple table view to see the status of all jobs.
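In outline, the class looks something like this (a simplified sketch after the description above; Job, InsertJobRow, TakeFirstPendingJob, and Execute are hypothetical helpers over the jobs table):

    using System.Timers;

    public class Atzenta
    {
        private readonly Timer _timer = new Timer(2000); // 1-2 second trigger

        public Atzenta()
        {
            _timer.Elapsed += (sender, args) => RunNextJob();
        }

        public void Add(Job job)
        {
            InsertJobRow(job); // INSERT the job with status 'waiting to run'
            _timer.Start();    // resume polling now that there is work
        }

        private void RunNextJob()
        {
            // Highest-priority row flagged 'waiting to run', or null if none.
            var job = TakeFirstPendingJob();
            if (job == null) { _timer.Stop(); return; }
            Execute(job); // run to completion, updating the status flags
        }
    }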

Multithreading in ASP.NET

What kind of multi-threading issues do you have to be careful for in asp.net?
It's risky to spawn threads from the code-behind of an ASP.NET page, because the worker process will get recycled occasionally and your thread will die.
If you need to kick off long-running processes as a result of user actions on web pages, your best bet is to drop a message off in MSMQ and have a separate background service monitoring the queue. The service can take as long as it wants to accomplish the task, and the web page is finished with its work almost immediately. You can accomplish the same thing with an async call to a web method, but don't rely on getting the response when the web method is finished working. From code-behind, it needs to be a quick fire-and-forget.
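Dropping the message off can be as simple as this (a sketch assuming System.Messaging and a hypothetical private queue name):

    using System.Messaging;

    const string path = @".\Private$\longRunningTasks";
    if (!MessageQueue.Exists(path))
        MessageQueue.Create(path);

    using (var queue = new MessageQueue(path))
    {
        // The page returns immediately; the background service
        // watching this queue picks the message up later.
        queue.Send("resize-image:42", "resize");
    }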
One thing to watch out for is things that expire (I think HttpContext does). If you are using it for operations that are "fire and forget", remember that if the ASP.NET cleanup code runs before your operation is done, you will suddenly be unable to access certain information.
If this is for a web service, you should definitely consider thread pooling. Too many threads will bring your application to a grinding halt because they will eventually start competing for CPU time.
Is this for file or network I/O? If so, you should also consider using asynchronous I/O. It can be a bit more of a pain to program, but you don't have to worry about spawning too many threads at once.
Programmatic caching is one area that immediately comes to my mind. It is a great feature that needs to be used carefully. Since the cache is shared across requests, you have to put locks around it before updating it.
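A common pattern is double-checked locking around the cache entry (a sketch; ReportData and BuildReport are hypothetical):

    using System;
    using System.Web;
    using System.Web.Caching;

    private static readonly object CacheLock = new object();

    public static ReportData GetReport()
    {
        var cached = (ReportData)HttpRuntime.Cache["report"];
        if (cached != null) return cached;

        lock (CacheLock)
        {
            // Re-check inside the lock: another request may have won the race.
            cached = (ReportData)HttpRuntime.Cache["report"];
            if (cached == null)
            {
                cached = BuildReport(); // the expensive part, done once
                HttpRuntime.Cache.Insert("report", cached, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            }
            return cached;
        }
    }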
Another place I would check is any code accessing the filesystem, like writing to log files. If one request holds a read-write lock on a file, other concurrent requests will error out if this is not handled properly.
Isn't there a limit of 25 total threads in the IIS configuration? At least in IIS 6, I believe. If you exceed that limit, interesting things (read: loooooooong response times) may happen.
Depending on what you need as far as multithreading is concerned, have you thought of spawning requests from the client? It's safe to spawn requests using AJAX and then act on the results in a callback. Or use a service as a backgrounding mechanism, which runs every X minutes and processes things in the background that way.
