I have been playing with QProcess as a way to start computationally intensive tasks that continue after the GUI that created them has been closed. I am now wondering whether I could improve on this so that too many jobs cannot be started at once.
Say I have 20 available cores. User 1 starts a computation that is broken into 30 processes and exits the GUI. At the moment I'm using bash to control all of this, so the GUI really just executes a bash script which counts the number of processes running. The whole workflow gets rather messy if another user logs in and starts another large job in the meantime, so currently the script refuses to submit if it is already running. Also, there is no way to use the GUI to monitor the processes, as they are now being run by bash.
Ideally I would like to improve the flow so that user 1 submits their processes and a separate background process manages the starting of the individual compute tasks, all of which are now QProcesses. Where I am getting stuck is what happens if another user logs in. Instead of replying 'please try again later', I would like to pick up the existing managing process and append any new jobs to its queue. Is this something I can do with QProcess and D-Bus? If so, what would be a good design for such a process?
Thanks
What you're asking for requires two programs: a GUI client and a server application.
The logged-on users interact with a GUI client interface to launch and organise processes. The GUI client creates messages and sends them to the server application, which responds by creating and managing processes with QProcess. So the GUI client is simply an interface to the server application.
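To make the server side concrete, here is a minimal sketch of how such a server could cap concurrency with QProcess. It is only an illustration under my own assumptions: the JobScheduler class and all its member names are invented for this sketch, not an established API. With maxRunning set to the core count (20 in your example), submitting 30 jobs simply queues the overflow:

    #include <QObject>
    #include <QProcess>
    #include <QQueue>
    #include <QString>
    #include <QStringList>

    // Illustrative scheduler: runs at most maxRunning QProcesses at once;
    // each finished job starts the next one waiting in the queue.
    class JobScheduler : public QObject
    {
        Q_OBJECT
    public:
        explicit JobScheduler(int maxRunning, QObject *parent = nullptr)
            : QObject(parent), m_maxRunning(maxRunning) {}

        void enqueue(const QString &program, const QStringList &arguments)
        {
            m_pending.enqueue({program, arguments});
            startNext();
        }

    private:
        struct Job { QString program; QStringList arguments; };

        void startNext()
        {
            while (m_running < m_maxRunning && !m_pending.isEmpty()) {
                const Job job = m_pending.dequeue();
                auto *process = new QProcess(this);
                connect(process,
                        QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished),
                        this, [this, process](int, QProcess::ExitStatus) {
                    process->deleteLater();
                    --m_running;
                    startNext();   // a slot has freed up: start the next job
                });
                process->start(job.program, job.arguments);
                ++m_running;
            }
        }

        int m_maxRunning;
        int m_running = 0;
        QQueue<Job> m_pending;
    };

Because the queue lives in the long-running server rather than in the GUI, a second user's jobs can simply be appended to the pending queue instead of being rejected.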
Of course, you need the GUI applications and the server application to communicate with each other. While there are multiple methods of interprocess communication available, Qt has QLocalServer and QLocalSocket, which can be used by the server and client applications respectively.
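As a rough sketch of the wiring, hedged the same way (the server name "compute-manager" and the message format are placeholders I made up):

    #include <QLocalServer>
    #include <QLocalSocket>

    // Server side: accept connections from GUI clients and read requests.
    void startListening(QLocalServer &server)
    {
        QLocalServer::removeServer("compute-manager");  // clear a stale socket
        server.listen("compute-manager");
        QObject::connect(&server, &QLocalServer::newConnection, [&server]() {
            QLocalSocket *client = server.nextPendingConnection();
            QObject::connect(client, &QLocalSocket::readyRead, [client]() {
                const QByteArray request = client->readAll();
                // ... parse the request and enqueue the job here ...
                client->write("queued\n");
            });
            QObject::connect(client, &QLocalSocket::disconnected,
                             client, &QLocalSocket::deleteLater);
        });
    }

    // Client side (in the GUI): connect and submit a job description.
    void submitJob(const QByteArray &jobDescription)
    {
        QLocalSocket socket;
        socket.connectToServer("compute-manager");
        if (socket.waitForConnected(1000)) {
            socket.write(jobDescription);
            socket.waitForBytesWritten(1000);
        }
    }

A new GUI instance can connect to the same named server at any time, which is exactly what lets a second user append jobs instead of being turned away.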
Related
I am implementing an ASP.NET application that needs to service conventional HTTP requests, but the responses require data that I need to acquire from providers: executables that supply their data over sockets. My plan for implementing this was:
1) In Application_Start, start a new thread that starts a socket server
2) In Session_Start, launch the session-specific process that will ultimately connect to the socket server, and from there do a Monitor.Wait on a session-specific lock object which I've stored in Application.Contents by Session key
3) When the socket server sees a new connection, make the data available to the appropriate session Contents and do a Monitor.Pulse on the session-specific lock object
Is this technically feasible in IIS? Can this concept function as a stable system?
Before answering, please bear in mind that I am not asking "is this the recommended approach"; I am aware it is not, and if I had the option to write this system from scratch I would do it differently. I'm also not able to change the fact that the programs communicate using sockets.
Given the constraints, this approach makes sense.
Shutdown and recycling of IIS worker processes are always thorny issues when it comes to keeping state in a web app. Note that your worker process can recycle at pretty much any time, for many reasons. Some of those reasons are unavoidable: a server reboot, an app deployment, a bug leading to a process crash. So you need to think through what happens in those cases: all sessions will be lost while the child processes still run. Suggested solution: add the children to a Windows Job Object and configure the Job to be killed when the parent exits.
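For reference, the Job Object setup looks roughly like this at the Win32 level (a C++ sketch; from .NET you would reach the same APIs via P/Invoke):

    #include <windows.h>

    // Create a Job Object configured so that every process assigned to it
    // is killed when the last handle to the job closes, i.e. when the
    // parent worker process dies or recycles.
    HANDLE createKillOnCloseJob()
    {
        HANDLE job = CreateJobObject(nullptr, nullptr);
        JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = {};
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                &info, sizeof(info));
        return job;
    }

    // After launching each child, tie it to the job:
    //   AssignProcessToJobObject(job, childProcessHandle);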
With overlapped recycling, IIS can have two functioning worker processes running at the same time. You must deal with that possibility.
Also consider the possibility that the child process crashes immediately and never makes a connection. Make sure your app doesn't hang waiting for that connection forever.
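To make that last point concrete, here is the Wait/Pulse handshake from the question, sketched with std::condition_variable (the C++ analogue of Monitor.Wait/Pulse; the SessionSlot type and the 30-second timeout are illustrative choices of mine). The timeout on the wait is what keeps a crashed child from hanging the request:

    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <string>

    // One slot per session, standing in for the session-specific lock object.
    struct SessionSlot
    {
        std::mutex mutex;
        std::condition_variable cv;
        bool dataReady = false;
        std::string data;
    };

    // Request thread (step 2): wait for the provider, but never forever.
    bool awaitProvider(SessionSlot &slot)
    {
        std::unique_lock<std::mutex> lock(slot.mutex);
        return slot.cv.wait_for(lock, std::chrono::seconds(30),
                                [&] { return slot.dataReady; });
        // false means the child never connected: fail the request cleanly.
    }

    // Socket-server thread (step 3): publish the data and wake the waiter.
    void publish(SessionSlot &slot, std::string payload)
    {
        {
            std::lock_guard<std::mutex> lock(slot.mutex);
            slot.data = std::move(payload);
            slot.dataReady = true;
        }
        slot.cv.notify_one();   // the equivalent of Monitor.Pulse
    }

Waiting on a predicate rather than on the bare condition variable also guards against spurious wakeups, which matters just as much with Monitor.Wait.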
I'd like to develop a simple solution using .NET for the following problem:
We have several computers in a local network:
10 client computers that may need to execute a program that is only installed on two workstations
The two workstations that are only used to execute the defined program
A server that can be used to install a service available from all previously described computers
When a client computer needs to execute the program, it would send a request to the server; the server would distribute the job to a workstation when one is available for execution, and inform the client computer when the execution has been performed.
I'm not very experienced with web and service development, so I'm not sure whether this is the best way to go, but below is a possible solution I thought about:
A web service on the server stores the list of tasks, with their statuses, in queues or in a database
The client computer calls the web service to execute a program and gets a task id, then calls it every second with that task id to find out whether the execution has been performed
The workstations that are available call the web service every second to ask whether there is something to execute. If there is, the server assigns the task, and the workstation calls the web service again when the execution is completed (the bookkeeping behind this is sketched below)
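In code terms, the service's bookkeeping could be as simple as the following outline (an illustrative C++ sketch of the queue and statuses; every name here is hypothetical, and a real ASP.NET service would express the same idea in .NET):

    #include <map>
    #include <mutex>
    #include <optional>
    #include <string>

    enum class TaskStatus { Pending, Running, Done };

    struct Task
    {
        int id;
        std::string command;
        TaskStatus status;
    };

    // Thread-safe task board the web service would keep behind its endpoints.
    class TaskBoard
    {
    public:
        // Client call: submit a task and get back an id for later polling.
        int submit(std::string command)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            const int id = m_nextId++;
            m_tasks[id] = Task{id, std::move(command), TaskStatus::Pending};
            return id;
        }

        // Client call: poll the status of a previously submitted task.
        TaskStatus poll(int id)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            return m_tasks.at(id).status;
        }

        // Workstation call: claim the oldest pending task, if there is one.
        std::optional<Task> claimNext()
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            for (auto &entry : m_tasks) {
                if (entry.second.status == TaskStatus::Pending) {
                    entry.second.status = TaskStatus::Running;
                    return entry.second;
                }
            }
            return std::nullopt;
        }

        // Workstation call: report that execution has completed.
        void complete(int id)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_tasks.at(id).status = TaskStatus::Done;
        }

    private:
        std::mutex m_mutex;
        std::map<int, Task> m_tasks;
        int m_nextId = 1;
    };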
I summarized this in the figure below:
Can you think of a simpler solution?
Have a look at SignalR! You could use it as a messaging framework, and you would not need to poll the service from two different directions. With SignalR you would be able to push execution orders to the service, and the service would notify the client once the execution has been processed. The workstations would be connected with SignalR too. They would not need to ask for execution orders, as the web service would be able to push execution orders to either all workstations or a specific one.
The thread will be started on each Application_Start event.
It will be a monitoring thread which is supposed to run constantly.
So even if the app shuts down, once it is restarted the thread will start too, ensuring it runs all the time.
However, I need to be sure that this thread will not be stopped / shut down while the application is running.
So in a few words: does anybody know whether ASP.NET could shut down such a thread without actually stopping / recycling the application?
As a matter of design, you shouldn't depend on ASP.NET to run threads like this. Little things like app recycling can cause you a lot of trouble.
Instead, create a Windows service to host the thread. That way you don't have to worry about it.
Update
I just wanted to add a little more information.
IIS has the ability to execute your app across multiple threads and processes. A standard site installation usually has only a single worker process assigned, which spins up around 20 threads to handle request processing. (Assigning multiple worker processes to one site is what's known as a web garden.)
However, any IIS administrator can easily add more processes to the mix. They usually do this when a site can hose a single process, either because request processing takes too long or the number of handler threads isn't enough, or as a temporary measure if the app has enough problems that a single thread hoses the entire process fairly often.
If you have a thread being spun up on app start, then one will be created for each worker process the site has. This may be unexpected behavior for you or your successors.
Also, monitoring apps are almost always kept completely separate from the application they are monitoring. One of the primary reasons is that if the monitored process dies, hangs, or otherwise becomes unresponsive, the monitoring app itself still needs to carry on and log this information. Otherwise the monitored process could very well hose the monitoring app itself.
So, do yourself a favor and move this into its own process. The best way to do this on an IIS server is to create a Windows service and give it the appropriate execution rights to do what you need.
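If you haven't written one before, the bare bones of a Windows service look like this at the Win32 level (a minimal C++ sketch; the service name "MonitorService" and the five-second poll interval are placeholders, and in .NET you would normally derive from ServiceBase instead):

    #include <windows.h>

    static SERVICE_STATUS        g_status = {};
    static SERVICE_STATUS_HANDLE g_statusHandle = nullptr;
    static HANDLE                g_stopEvent = nullptr;

    static void WINAPI ControlHandler(DWORD control)
    {
        if (control == SERVICE_CONTROL_STOP) {
            g_status.dwCurrentState = SERVICE_STOP_PENDING;
            SetServiceStatus(g_statusHandle, &g_status);
            SetEvent(g_stopEvent);   // tell the monitoring loop to exit
        }
    }

    static void WINAPI ServiceMain(DWORD, LPTSTR *)
    {
        g_stopEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);
        g_statusHandle = RegisterServiceCtrlHandler(TEXT("MonitorService"),
                                                    ControlHandler);
        g_status.dwServiceType = SERVICE_WIN32_OWN_PROCESS;
        g_status.dwControlsAccepted = SERVICE_ACCEPT_STOP;
        g_status.dwCurrentState = SERVICE_RUNNING;
        SetServiceStatus(g_statusHandle, &g_status);

        // The monitoring loop: wakes every 5 seconds until asked to stop.
        while (WaitForSingleObject(g_stopEvent, 5000) == WAIT_TIMEOUT) {
            // ... do the monitoring work here ...
        }

        g_status.dwCurrentState = SERVICE_STOPPED;
        SetServiceStatus(g_statusHandle, &g_status);
    }

    int main()
    {
        SERVICE_TABLE_ENTRY table[] = {
            { const_cast<LPTSTR>(TEXT("MonitorService")), ServiceMain },
            { nullptr, nullptr }
        };
        StartServiceCtrlDispatcher(table);   // blocks until the service stops
        return 0;
    }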
I want to develop a web application using ASP.NET running on IIS.
If a user submits a Maxima input command, the code-behind will ask a custom Windows service to create a new, distinct temporary process executing an external assembly.
More precisely, there is only one Windows service serving all users, but each user will be associated with a distinct temporary process running an external assembly.
The Windows service contains a single socket listening on a certain port and a list of asynchronous sockets for communication. Each socket in the list will communicate with a distinct temporary process running an external assembly, which acts as a socket client.
Note that I use a process rather than an application domain because the external program is a batch file (not a managed assembly).
My questions are:
How do I call the Windows service from the code-behind?
How do I associate each user with a distinct temporary process?
How can I improve scalability when more and more users work simultaneously?
If the Maxima input command entered by a user causes a long-running process, what is a sensible way to notify the user about its progress?
The following link provides more detail about my project: https://sourceforge.net/projects/aspmaxima/forums/forum/1190702/topic/3786806
Thank you in advance.
You should not be using code-behind in an MVC app.
Scalability while interoperating with unmanaged code is hard. The only sane way to do this is to decompose the problem.
When you launch an unmanaged app, it already has its own process.
Multiple task flows in a service called from a web app, with monitoring? You're describing Windows Server AppFabric. Host your service with AppFabric, and you won't have to write all of this yourself.
Regarding scalability, when you're dealing with unmanaged processes, you're going to have to limit the number which can start concurrently. Trial and error will be necessary to determine the optimum on specific hardware.
You can only monitor an unmanaged task's progress if that app specifically provides for it.
Launching arbitrary unmanaged code from a service is dangerous, because the launched app, by default, inherits the service's (typically elevated) permissions. Consider using specific, limited credentials for the launched app instead of the default.
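A sketch of what that looks like with the Win32 API (C++; the account, the "." domain meaning local machine, and the batch-file path are all placeholders, and from .NET the same effect comes from setting the UserName/Password properties on ProcessStartInfo). One caveat worth checking in the documentation: CreateProcessWithLogonW cannot be called from a process running as LocalSystem, so the service itself needs to run under its own account:

    #include <windows.h>

    // Launch the external tool under a dedicated low-privilege account
    // instead of inheriting the service's elevated token.
    bool launchRestricted(const wchar_t *user, const wchar_t *password)
    {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};
        wchar_t commandLine[] = L"C:\\tools\\run-maxima.bat";  // hypothetical

        const BOOL ok = CreateProcessWithLogonW(
            user, L".", password,
            LOGON_WITH_PROFILE,
            nullptr, commandLine,
            CREATE_NO_WINDOW,
            nullptr, nullptr,
            &si, &pi);

        if (ok) {
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        return ok != FALSE;
    }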
We're preparing an application using Qt that has a main process that controls the GUI and spawns processes that do the actual data processing. Messages are exchanged between the main process and the data-processing processes using the Qt mechanisms and the stdin/stdout pipes.
Now, in the event that the GUI crashes, the other processes keep running. What we'd like is for a new GUI, when it starts, to reconnect to these processes as before. Does anyone know if this is possible and, if so, how to achieve it?
This is possible if you use a named pipe for communicating with the process. stdin/stdout are closed when the process they belong to terminates.
You might want to investigate shared memory for the communication between the processes. I seem to recall it recovering in a very similar situation at a previous job.
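In Qt that would be QSharedMemory; a minimal sketch (the key string and segment size are arbitrary choices of mine):

    #include <QSharedMemory>

    // Attach to the state segment if a previous GUI created it,
    // otherwise create it fresh.
    bool openSharedState(QSharedMemory &shared)
    {
        if (shared.create(4096))
            return true;                       // we are the first user
        if (shared.error() == QSharedMemory::AlreadyExists)
            return shared.attach();            // reconnect to existing state
        return false;
    }

    // Usage:
    //   QSharedMemory shared("job-manager-state");
    //   if (openSharedState(shared)) {
    //       shared.lock();
    //       char *data = static_cast<char *>(shared.data());
    //       // ... read or write the shared job state here ...
    //       shared.unlock();
    //   }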
Another possibility, if your platform supports it, is to use D-Bus for the communication between processes. In that case neither process would have to be present at the same time; each will get the appropriate messages whenever it is running.
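Qt ships bindings for this in the QtDBus module (QT += dbus). A minimal sketch, where the service name org.example.JobManager, the object path /jobs, and the enqueue slot are all names made up for illustration:

    #include <QCoreApplication>
    #include <QDBusConnection>
    #include <QDBusInterface>
    #include <QDBusReply>
    #include <QObject>
    #include <QString>

    // Manager side: expose a slot on the session bus.
    class JobManager : public QObject
    {
        Q_OBJECT
    public slots:
        int enqueue(const QString &command)
        {
            // ... append the job to the internal queue, return its id ...
            return 1;
        }
    };

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        JobManager manager;
        QDBusConnection bus = QDBusConnection::sessionBus();
        bus.registerService("org.example.JobManager");
        bus.registerObject("/jobs", &manager,
                           QDBusConnection::ExportAllSlots);
        return app.exec();
    }

    // GUI side: any later GUI instance can reconnect and submit jobs.
    //   QDBusInterface jobs("org.example.JobManager", "/jobs", "",
    //                       QDBusConnection::sessionBus());
    //   QDBusReply<int> id = jobs.call("enqueue", "simulate --part 7");

One design note: the session bus is per login session, so for the multi-user scenario in the original question you would register the manager on the system bus instead.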