Xamarin Android Worker - Xamarin.Forms

I have an old Xamarin Android JobIntentService that I'm replacing to support Android 12.0.
I now inherit from Worker instead of JobIntentService, and use WorkManager to enqueue its job.
A couple of questions:
Is there a way to use await inside the DoWork override method?
Is it better to inherit from ListenableWorker instead in order to use await? Do I lose anything if I switch to it?
If I call Task.Factory.StartNew(LongRunning) from the Worker's DoWork and immediately follow that with returning a success result, will my long-running task run to completion, or will all associated work be terminated?

I think you should learn more about WorkManager first.
From the document Schedule tasks with WorkManager we know that:
WorkManager is the recommended solution for persistent work. Work is persistent when it remains scheduled through app restarts and system reboots. Because most background processing is best accomplished through persistent work, WorkManager is the primary recommended API for background processing.
WorkManager handles three types of persistent work:
Immediate: Tasks that must begin immediately and complete soon. May be expedited.
Long Running: Tasks which might run for longer, potentially longer than 10 minutes.
Deferrable: Scheduled tasks that start at a later time and can run periodically.
And from the Work chaining entry under Features, we know:
For complex related work, chain individual work tasks together using an intuitive interface that allows you to control which pieces run sequentially and which run in parallel.
WorkManager.getInstance(...)
.beginWith(Arrays.asList(workA, workB))
.then(workC)
.enqueue();
For each work task, you can define input and output data for that work. When chaining work together, WorkManager automatically passes output data from one work task to the next.
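To make that concrete on the Xamarin side, here is a minimal C# sketch, assuming the Xamarin.AndroidX.Work.Runtime binding. MyWorker, DoLongRunningThingAsync and WorkScheduler are placeholder names, and the exact binding member names (for example Result.InvokeSuccess for Java's Result.success()) may differ slightly between binding versions:

using Android.Content;
using AndroidX.Work;

public class MyWorker : Worker
{
    public MyWorker(Context context, WorkerParameters workerParams)
        : base(context, workerParams) { }

    public override Result DoWork()
    {
        // DoWork already runs on a WorkManager background thread, so blocking on async
        // code here is acceptable; ListenableWorker is the alternative for truly async work.
        // DoLongRunningThingAsync().GetAwaiter().GetResult();   // hypothetical async helper

        // Note: if you fire-and-forget a Task here and return success immediately,
        // WorkManager considers the work finished and gives no guarantee that the
        // process stays alive long enough for the orphaned task to complete.
        return Result.InvokeSuccess();   // Java's Result.success() bound in C#
    }
}

public static class WorkScheduler
{
    public static void Schedule(Context context)
    {
        var workA = new OneTimeWorkRequest.Builder(
            Java.Lang.Class.FromType(typeof(MyWorker))).Build();
        WorkManager.GetInstance(context).Enqueue(workA);
    }
}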
Note: if you still have any problems, get back to me.

Related

Serve web request in python that spawns a new long running subprocess

I currently have a python command line application that uses the python invoke package to organise, list and execute tasks. There are many task files (controlled & created by users, not me). Execution time for some task files can be more than an hour. Each task is actually a test script/program. invoke is useful for listing/executing all the tasks in a task file (we call it a testsuite), or only a bunch of them (a tasks collection), or a single task. (Having a ton of loose scripts and organising, listing & running them the way users want would be quite a task, hence invoke).
However, invoke cannot be used as a library. It does not offer an API that can be leveraged to list and run test tasks. So I am forced to run invoke as a shell command in a subprocess from the command line program. I replace (via execl()) the current process with invoke because once control passes to invoke, there is no need to come back to the parent process. So far, so good.
Now, there is a requirement that this command line program be callable from a web application. So I need to wrap this cmdline program in a restful http API. I've decided to use bottle.py to keep things simple.
I understand that the long running testsuite (tasks) will have to be done off the http request/response cycle. But I'm unable to finalise exactly how to go about it (probably I'm overthinking it). But here is what I want ...
Tasks are written by users. They are always synchronous, they may sleep or execute shell commands via subprocess.run().
The application is internal; it will not be bombarded with a huge number of requests. Number of users: max. 10.
But each request (of the type that runs a task) will take minutes, and in some cases more than an hour, to complete. New requests during this time should not block.
Calling application (running on a different host) will need to report progress of the running task to the browser UI. ('progress bar')
Ability to communicate with running task and 'cancel' it from browser UI.
With the above situation, am I correct in saying ...
Because a new 'process' must be spawned for each request (due to the use of subprocess and execl in the current code), it rules out using 'threads' of any type (OS threads, greenlets, gevent)?
Using any async libraries (web framework, web/http server or in-app code) won't be of much help, because every run request will have to be a new process anyway?
How will the process be spawned when a request comes in? Should the web/http server (gunicorn?) do it, or does my application have to take care of forking itself?
Is 'gunicorn' a good choice for this situation?
I have a feeling that users may also ask for the ability to schedule tasks/tests. I might end up using some sort of task queue. I have read about 'huey' and feel that it is light & simple for my needs. (No Redis/Celery.) But any task queue also means a separate consumer process to administer? More moving parts in the mix.
'Progress bar' functionality means the subprocess has to keep updating its progress somewhere and the calling application has to read it from there. Does this necessitate a 'task queue' anyway?
There is a lot of material on all of this and I have read quite some of it. But it has still left me unclear as to how exactly to go about implementing my requirements. Any direction/pointers would be appreciated. I'd also appreciate any advice on what 'not to use'.
If you need something really simple then you could write a wrapper around task spooler (a Linux tool to run tasks) https://vicerveza.homeunix.net/~viric/soft/ts/ (especially https://vicerveza.homeunix.net/~viric/soft/ts/article_linux_com.html for more details).
Otherwise it's probably better to switch to the uWSGI spooler, rq with Redis, or Celery with RabbitMQ (because with Redis it only works to a certain extent).

activiti taskService complete fails when executed concurrently

Hi, I am facing a strange situation where I am trying to mark a set of tasks as complete, all concurrently.
The first one goes through, and the second one goes through sometimes (rarely), but mostly it doesn't go through.
When I do these individually they work.
Something to do with database locking, I feel. Is there some workaround or code for executing task and variable updates concurrently?
Do they belong to the same process instance?
And yes, there will be a DB locking mechanism in place, because when you complete each task the process instance needs to move forward.
Can you please clarify what you are trying to solve? What is your business scenario?
Cheers
Activiti uses pre-emptive locking and this can cause problems for parallel tasks.
Typically if you use the "exclusive" flag the problems go away (https://www.activiti.org/userguide/#exclusiveJobs).
Keep in mind that jobs never actually run in parallel; the job engine selects jobs to run, and if there are multiple they will be run sequentially (which appears parallel to the user).

.Net MVC Threading Long running process

I have a long running process in my MVC application (C#).
It's building data for many reports.
Some of the clients could take several minutes or longer to calculate. Is running the process in a separate thread the best option?
Is there another way to allow the process to run, while allowing the user to still use the rest of the site?
If threading is the best solution, any good sites or stackoverflow threads to look at on how to do this?
When I've had cases like those, I usually would build a service to asynchronously process requests, and return a handle that I could use to check on its status in a database. IMHO, splitting it off as a thread in the web application seems like you'd be trying to shove a square peg into a round hole.
I've used two methods to solve this. If the work is guaranteed to not be TOO long running, I've kicked off a thread to do the work and return immediately to the user. When we couldn't make that guarantee, we used a queue (we happened to use MSMQ) for executing long running tasks. This processing was done on a different server apart from IIS. A benefit of this is that we built in a wait and retry on failure mechanism. So besides it handling long running tasks, we also used it for anything that might fail in a way that was inconvenient to handle in our MVC app. The main example of this is sending an email. Rather than do that in the MVC app we would just toss an email task on the queue. We used the Command Pattern for the task objects placed on the queue. Once we had that mechanism in place, we stopped using the technique of spawning a thread from our MVC code.
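As a rough illustration of that queue-plus-command approach, here is a minimal sketch using the classic .NET Framework MSMQ API (System.Messaging); the queue path and the SendEmailCommand type are hypothetical:

using System;
using System.Messaging;   // classic .NET Framework MSMQ assembly

// Hypothetical command object; the consumer service running outside IIS
// dequeues it and executes it (with wait-and-retry on failure).
[Serializable]
public class SendEmailCommand
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public static class TaskQueue
{
    private const string QueuePath = @".\Private$\long-running-tasks"; // hypothetical queue name

    public static void Enqueue(object command)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new BinaryMessageFormatter(); // works with [Serializable] commands
            queue.Send(command);
        }
    }
}

The MVC action then just calls TaskQueue.Enqueue(new SendEmailCommand { ... }) and returns immediately, leaving the long-running work to the separate consumer process.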

Designing an asynchronous task library for ASP.NET

The ASP.NET runtime is meant for short workloads that can be run in parallel. I need to be able to schedule periodic events and background tasks that may or may not run for much longer periods.
Given the above I have the following problems to deal with:
The AppDomain can shutdown due to changes (Web.config, bin, App_Code, etc.)
IIS recycles the AppPool on a regular basis (daily)
IIS itself might restart, or for that matter the server might crash
I'm not convinced that running this code inside ASP.NET is the wrong thing to do, because it would allow for a simpler programming model. But doing so would require that an external service periodically makes requests to the app so that the application is kept running, and that all background tasks are programmed with the utmost care. They would have to be able to pause and resume their work in the event of an unexpected error.
My current line of thinking goes something like this:
If all jobs are registered in the database, it should be possible to use the database as a bookkeeping mechanism. In the case of an error, the database would contain all the state necessary to resume the operation at the next given opportunity.
I'd really appreciate some feedback/advice on this matter. I've been considering running a Windows service and using some RPC solution as well, but it doesn't have the same appeal to me. Instead, I'd have a lot of deployment issues and would be synchronizing tasks and code across several applications. Due to my business needs this is less than optimal.
This is a shot in the dark since I don't know what database you use, but I'd recommend you consider dialog timers and activation. Assuming that most of the jobs have to do some data manipulation, and it is likely that all of them do only data manipulation, leveraging activation and timers gives an extremely reliable job scheduling solution, entirely embedded in the database (no need for an external process/service, no dependencies outside the database boundary like msdb), and it is a solution that ensures scheduled jobs can survive restarts, failover events and even disaster recovery restores. Simply put, once a job is scheduled it will run even if the database is restored one week later on a different machine.
Have a look at Asynchronous procedure execution for a related example.
And if this is too radical, at least have a look at Using Tables as Queues since storing the scheduled items in the database often falls under the 'pending queue' case.
I recommend that you have a look at Quartz.Net. It is open source and it will give you some ideas.
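For a flavour of what that looks like, here is a minimal sketch, assuming Quartz.NET 3.x and its async API; the job and trigger names are placeholders:

using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// Hypothetical job that rebuilds report data.
public class ReportJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // build the report data here
        return Task.CompletedTask;
    }
}

public static class SchedulerSetup
{
    public static async Task StartAsync()
    {
        IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<ReportJob>()
            .WithIdentity("report-job")
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("report-trigger")
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInMinutes(30).RepeatForever())
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}

Quartz.NET also supports persistent job stores, which addresses the restart/recycle concerns raised above.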
Using the database as a state-keeping mechanism is a completely valid idea. How complex it will be depends on how far you want to take it. In many cases you will end up pairing your database logic with a Windows service to achieve the desired result.
FWIW, it is typically not a good practice to manually use the thread pool inside an ASP.Net application, though (contrary to what you may read) it actually works quite nicely other than the huge caveat that you can't guarantee it will work.
So if you needed a background thread that examined the state of some object every 30 seconds and you didn't care if it fired every 30 seconds or 29 seconds or 2 minutes (such as in a long app pool recycle), an ASP.Net-spawned thread is a quick and very dirty solution.
Asynchronously fired callbacks (such as on the ASP.Net Cache object) can also perform a sort of "behind the scenes" role.
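A quick-and-dirty version of that ASP.Net-spawned thread might look like the sketch below; CheckState is a hypothetical method, and, as noted above, there is no guarantee the thread survives an app pool recycle:

using System;
using System.Threading;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start()
    {
        var worker = new Thread(() =>
        {
            while (true)
            {
                CheckState();                           // hypothetical: examine some object's state
                Thread.Sleep(TimeSpan.FromSeconds(30)); // roughly every 30 seconds, no guarantees
            }
        })
        { IsBackground = true };
        worker.Start();
    }

    private static void CheckState()
    {
        // placeholder for the actual periodic work
    }
}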
I have faced similar challenges and ultimately opted for a Windows service that uses a combination of building blocks for maximum flexibility. Namely, I use:
1) WCF with implementation-specific types OR
2) Types that are meant to transport and manage objects that wrap a job OR
3) Completely generic, serializable objects contained in a custom wrapper. Since they are just a binary payload, this allows any object to be passed to the service. Once in the service, the wrapper defines what should happen to the object (e.g. invoke a method, gather a result, and optionally make that result available for return).
Ultimately, the web site is responsible for querying the service about its state. This querying can be as simple as polling or can use asynchronous callbacks with WCF (though I believe this also uses some sort of polling behind the scenes).
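A minimal sketch of such a status-querying contract, assuming WCF (all type and member names here are hypothetical):

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class JobTicket
{
    [DataMember] public Guid JobId { get; set; }
    [DataMember] public string Status { get; set; }   // e.g. Queued, Running, Done, Failed
}

[ServiceContract]
public interface IJobService
{
    [OperationContract]
    JobTicket Submit(byte[] payload);   // serialized job wrapper (option 3 above)

    [OperationContract]
    JobTicket GetStatus(Guid jobId);    // the web site polls this
}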
I'll tell you what I have done.
I have created a class called Atzenta that has a timer (1-2 second trigger).
I have also created a table in my temporary database that keeps the jobs. The table knows the job ID, other parameters, priority, job status, and messages.
I can add or delete a job via this class. When there is no action to be done, the timer is stopped. When I add a job, the timer starts again. (The timer runs on its own thread, so it can do work in parallel.) I use System.Timers and not other timers for this.
The jobs can have different priorities.
Now let's say that I place a job in this table using the Atzenta class. The next time the timer is triggered, it runs the query on this table, finds the first available job and just runs it. No other jobs run until this one has ended.
All synchronization and flags are handled through the table. In the table I have flags for every job that show whether it is |wait to run|request to run|run|pause|finish|killed|
All jobs are already-known functions or classes (e.g. the creation of statistics).
For stop and start, I use Global.asax and the Application_Start and Application_End events to start and pause the object that keeps the tasks. For example, when I am running a job and I get Application_End, either I wait for it to finish and then stop the app, or I stop the action, notify the table, and start again on Application_Start.
So I call Atzenta.RunTheJob(Jobs.StatisticUpdate, ProductID); this adds the job to the table and starts the timer, and on the next trigger the job runs and updates the statistics for the given product ID.
I use a table in a database to synchronize the many pools that run the same web app, and in fact it works that way. With a common table, synchronizing the jobs is easy and you avoid having two pools run the same job at the same time.
In my back office I have a simple table view to see the status of all the jobs.
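Here is a stripped-down, in-memory sketch of that timer-driven runner. The real design keeps the jobs in a database table so several app pools can coordinate; the in-memory queue below just stands in for that table to keep the sketch self-contained:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Timers;

public class Atzenta
{
    private readonly System.Timers.Timer _timer = new System.Timers.Timer(2000); // 1-2 second trigger
    private readonly ConcurrentQueue<Action> _jobs = new ConcurrentQueue<Action>();
    private int _busy;   // 0 = idle, 1 = a job is running

    public Atzenta()
    {
        _timer.Elapsed += (s, e) => RunNextJob();
        _timer.AutoReset = true;
    }

    public void RunTheJob(Action job)
    {
        _jobs.Enqueue(job);   // real version: insert a row with status "wait to run"
        _timer.Start();       // restart the timer when work arrives
    }

    private void RunNextJob()
    {
        if (Interlocked.CompareExchange(ref _busy, 1, 0) != 0)
            return;            // a job is already running; only one at a time
        try
        {
            if (_jobs.TryDequeue(out var job))
                job();         // real version: flag the row "run", then "finish"
            else
                _timer.Stop(); // nothing left to do, stop the timer
        }
        finally
        {
            Interlocked.Exchange(ref _busy, 0);
        }
    }
}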

what are threads in actionscript functions?

I've seen a lot of other developers refer to threads in ActionScript functions. As a newbie I have no idea what they are referring to so:
What is a thread in this sense?
How would I run more than one thread at a time?
How do I ensure that I am only running one thread at a time?
Thanks
~mike
Threads represent a way to have a program appear to perform several jobs concurrently, although whether the jobs can actually occur simultaneously depends on several factors (most importantly, whether the CPU the program is running on has multiple cores available to do the work). Threads are useful because they allow work to be done in one context without interfering with another context.
An example will help to illustrate why this is important. Suppose that you have a program which fetches the list of everyone in the phone book whose name matches some string. When people click the "search" button, it will trigger a costly and time-consuming search, which might not complete for a few seconds.
If you have only a single-threaded execution model, the UI will hang and be unresponsive until the search completes. Your program has no choice but to wait for the results to finish.
But if you have several threads, you can offload the search operation to a different thread, and then have a callback -- a trigger which is invoked when the work is completed -- to let you know that things are ready. This frees up the UI and allows it to continue to respond to events.
Unfortunately, because ActionScript's execution model doesn't support threads natively, it's not possible to get true threading. There is a rough approximation called "green threads", which are threads that are controlled by an execution context or virtual machine rather than by the underlying operating system, which is how it's usually done. Several people have taken a stab at it, although I can't say how widespread their usage is. You can read more at Alex Harui's blog here and see an example of green threads for ActionScript here.
It really depends on what you mean. The execution model for ActionScript is single-threaded, meaning it can not run a process in the background.
If you are not familiar with threading, it is essentially the ability to have something executed in the background of a main process.
So, if you needed to do a huge mathematical computation in your flex/flash project, with a multi-threaded program you could do that in the background while you simultaneously updated your UI. Because ActionScript is not multi-threaded you can not do such things. However, you can create a pseudo-threading class as demonstrated here:
http://blogs.adobe.com/aharui/pseudothread/PseudoThread.as
The others have described what threading is, and you'd need threading if you were getting hardcore into C++ and 3D game engines, among many other computationally-expensive operations, and languages that support multi-threading.
Actionscript doesn't have multi-threading. It executes all code in one frame. So if you create a for loop that processes 100,000,000 items, it will cause the app to freeze. That's because the Flash Player can only execute one thread of code at a time, per frame.
You can achieve pseudo-threading by using:
Timers
Event.ENTER_FRAME
Those allow you to jump around and execute code.
Tween engines like TweenMax can operate on 1000's of objects at once over a few seconds by using Timers. You can also do this with Event.ENTER_FRAME. There is something called "chunking" (check out Grant Skinner's AS3 Optimizations Presentation), which says "execute computationally expensive tasks over a few frames", like drawing complex bitmaps, which is a pseudo-multi-threading thing you can do with actionscript.
A lot of other things are asynchronous, like service calls. If you make an HTTPService request in Flex, it will send a request to the server and then continue executing code in that frame. Once it's done, the server can still be processing that request (say it's saving a 30mb video to a database on the server), and that might take a minute. Then it will send something back to Flex and you can continue code execution with a ResultEvent.RESULT event handler.
So Actionscript basically uses:
Asynchronous events, and
Timers...
... to achieve pseudo-multi-threading.
A thread allows you to execute two or more blocks of ActionScript simultaneously. By default you will always be executing on the same default thread unless you explicitly start a new thread.