SSH Equivalent on Windows - asp.net

I am making an ASP.NET application which does the following on the client computer:
Establish a connection.
Check the client's CPU usage to see whether it is idle.
If the client is idle, start executing a C application.
While the application is running, if the client starts doing something (again detected by monitoring its CPU usage), send a stop signal.
Send a start signal again once the client returns to its idle state.
If the client is Ubuntu, I use SSH and execute whatever I need. What is the way of doing this on Windows without root access?
Thanks in advance for replying.

This sounds a bit dodgy to me. However, what you are looking for is called PsExec (http://technet.microsoft.com/en-us/sysinternals/bb897553)
UPDATE
The only other way I can think of doing this is to use the built-in Task Scheduler in Windows.
With the task scheduler you can set a task to start when a computer has been idle for a particular amount of time and pause or stop it when it ceases to be idle.
Once the task is installed, just forget about it.
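If the scheduler's idle trigger turns out to be too coarse for your start/stop logic, the same check can also be done in your own code. Below is a minimal sketch, assuming the standard System.Diagnostics PerformanceCounter class is available; the executable path and the 10% threshold are made-up values:
using System;
using System.Diagnostics;
using System.Threading;

// Sketch only: poll total CPU usage and start or stop a worker process accordingly.
class IdleRunner
{
    static void Main()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        cpu.NextValue();                   // the first reading is always 0, so prime the counter
        Process worker = null;

        while (true)
        {
            Thread.Sleep(5000);
            float load = cpu.NextValue();

            if (load < 10f && (worker == null || worker.HasExited))
            {
                worker = Process.Start(@"C:\tools\worker.exe");   // placeholder path
            }
            else if (load >= 10f && worker != null && !worker.HasExited)
            {
                worker.Kill();             // or send your own "stop" signal instead of killing
                worker = null;
            }
        }
    }
}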

Try SSH FTP (SFTP); it is analogous to SSH on Windows.

Related

How to poll a Unix server reboot in an automated way?

I am performing an install on a server, which requires the server to reboot to complete the installation. My query is how to find out when the reboot completes so that I can run a basic smoke test on the server and confirm the deployment status.
This is a non-trivial task in general, and the solution will depend on the operating system of the server being restarted.
In general, it is best to let the server itself decide when its startup process has completed, and have it send a notification to the interested parties.
If you can't do that, you can check for services that are typically available after a reboot (HTTP port reachable, exported file shares mountable, and so on); a minimal polling sketch is shown below. Adding some extra wait time on top of that may be useful.
At the other, less reliable end of the spectrum, you can simply wait for the amount of time a reboot typically takes.
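A minimal sketch of the port-polling variant, assuming the server exposes something like HTTP after startup; the host name, port and timings below are placeholders:
using System;
using System.Net.Sockets;
using System.Threading;

// Keep trying to open a TCP connection to the rebooted server
// until it answers or an overall deadline passes.
class RebootWatcher
{
    static bool WaitForServer(string host, int port, TimeSpan deadline)
    {
        DateTime stopAt = DateTime.UtcNow + deadline;
        while (DateTime.UtcNow < stopAt)
        {
            try
            {
                using (TcpClient client = new TcpClient())
                {
                    client.Connect(host, port); // throws until something is listening
                    return true;                // port reachable: safe to start the smoke test
                }
            }
            catch (SocketException)
            {
                Thread.Sleep(5000);             // not up yet, wait and retry
            }
        }
        return false;                           // deadline expired
    }

    static void Main()
    {
        bool up = WaitForServer("myserver", 80, TimeSpan.FromMinutes(10));
        Console.WriteLine(up ? "server is back" : "gave up waiting");
    }
}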

How to deploy a socket server in iis application scope

I am implementing an ASP.NET application that needs to service conventional HTTP requests, but the responses require data that I have to acquire from providers, which are executables that serve their data over sockets. My plan was:
1) In Application_Start, start a new thread that runs a socket server
2) In Session_Start, launch the session-specific process that will ultimately connect to the socket server, and from there do a Monitor.Wait on a session-specific lock object which I've stored in Application.Contents by session key
3) When the socket server sees a new connection, make the data available to the appropriate session's contents and do a Monitor.Pulse on the session-specific lock object (a rough sketch of this handshake is below)
Is this technically feasible in IIS? Can this concept function as a stable system?
Before answering, please bear in mind I am not asking "is this the recommended approach"; I am aware it is not, and if I had the option to write this system from scratch I would do it differently. I'm also not able to change the fact that the programs communicate using sockets.
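For concreteness, this is roughly the handshake I have in mind for steps 2 and 3. It is a sketch only; the key names, provider path and timeout are illustrative, not the real code:
using System;
using System.Web;

public class Global : HttpApplication
{
    // Step 2: Session_Start parks the request thread until the provider connects.
    protected void Session_Start(object sender, EventArgs e)
    {
        object gate = new object();
        Application.Lock();
        Application["gate-" + Session.SessionID] = gate;
        Application.UnLock();

        // Launch the session-specific provider (path is a placeholder).
        System.Diagnostics.Process.Start(@"C:\providers\provider.exe", Session.SessionID);

        lock (gate)
        {
            // Wait with a timeout rather than forever, in case the provider never connects.
            if (!System.Threading.Monitor.Wait(gate, TimeSpan.FromSeconds(30)))
                throw new TimeoutException("provider never connected");
        }
    }

    // Step 3: called from the socket-server thread once a provider has
    // identified which session it belongs to.
    public void OnProviderConnected(string sessionId, object payload)
    {
        object gate = Application["gate-" + sessionId];
        lock (gate)
        {
            Application["data-" + sessionId] = payload;
            System.Threading.Monitor.Pulse(gate);   // wakes the request parked in Session_Start
        }
    }
}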
Given the constraints, this approach makes sense.
Shutdown and recycling of IIS worker processes are always thorny issues when it comes to keeping state in a web app. Note that your worker process can recycle at pretty much any time, for many reasons, some of them unavoidable: a server reboot, an app deployment, a bug leading to a process crash. So you need to think through what happens in those cases: all sessions will be lost while the child processes keep running. Suggested solution: add the children to a Windows Job Object and configure the job to be killed when the parent exits (a rough sketch is below).
With overlapped IIS worker process recycling you can have two functioning workers running at the same time. You must deal with that possibility.
Consider the possibility that the child process crashes immediately and never makes a connection. Make sure your app doesn't hang waiting for that connection forever.
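One way to wire up the Job Object suggestion is sketched below. This is P/Invoke against the standard kernel32 Job Object APIs, with error handling omitted and a placeholder child path; treat it as a starting point, not a drop-in implementation:
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

// Sketch: create a Job Object with KILL_ON_JOB_CLOSE so every child assigned to it
// is terminated when the IIS worker process (the last handle owner) goes away.
static class ChildJob
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll")]
    static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int cbInfo);

    [DllImport("kernel32.dll")]
    static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit, PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize, MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass, SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
        public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit, JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed, PeakJobMemoryUsed;
    }

    const int JobObjectExtendedLimitInformation = 9;
    const uint JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE = 0x2000;

    static readonly IntPtr Job = CreateJobObject(IntPtr.Zero, null);

    static ChildJob()
    {
        var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
        SetInformationJobObject(Job, JobObjectExtendedLimitInformation, ref info, Marshal.SizeOf(info));
    }

    public static Process StartChild(string exePath, string args)
    {
        Process p = Process.Start(exePath, args);   // e.g. the per-session provider (placeholder)
        AssignProcessToJobObject(Job, p.Handle);    // may fail pre-Windows 8 if the child is already in a job
        return p;
    }
}
In Session_Start you would then launch providers through something like ChildJob.StartChild instead of Process.Start directly, so their lifetime is tied to the worker process.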

How to fork properly within an nginx/php-fpm environment?

I have a very simple PHP socket server that runs on my machine. I created a convenience class with simple methods like "restart", "stop", etc. to control the server once it is already running.
The restart function sends the server the command to stop, then forks the process and starts a new instance of the socket server in the child process, while the parent process returns to the caller.
This works great on the command line; however, I am trying to make an admin webpage which restarts the socket server, and the forking is causing problems in php-fpm. What appears to be happening is that the "page load" does not end when the parent process ends, and nginx/php-fpm do not reassign the worker to new connections until the forked process also ends.
In my case I never want the forked process to end, so I end up with a completely debilitated server. (In my test environment, for simplicity, I have the worker pool set to only 1; in a production environment this will be higher, but the issue would still leave one worker slot permanently occupied.)
I have tried a few things, including calling fastcgi_finish_request() just prior to the forking, but this had no effect.
How can I restart my service from a php-fpm worker process without locking up an assignment in the nginx connection pool?
The solution was simple and elementary.
My child processes were not redirecting STDOUT and STDERR to /dev/null. So even though the parent process finished, the child process still held open file descriptors, which caused php-fpm to consider that connection in its pool still active; it was never reassigned to new connections because the child process runs continually.
Redirecting STDOUT and STDERR to /dev/null made php-fpm reassign connections correctly, while still allowing the child process (the socket server) to run forever. Case closed.
./some/command >/dev/null 2>&1
Should have seen this one a mile off...
(I solved this problem months ago, but haven't signed into this account in a long time ... it didn't take me 7 months to figure out I need to direct output to /dev/null!)
It sounds like you have your protocol design wrong. The server should be capable of restarting itself; there are quite a few examples of how to do that on the internet.
The client (your webpage) should not need to do any forking.
The server should also not run inside php-fpm, but should be a console application that uses a daemon(3)-style interface to detach itself from the console.

how do I kill a zeopack without restarting zeo?

It seems that the zeopack script returns to the shell right after signalling ZEO to pack a database. How can I kill a zeopack operation in ZEO without bringing down the database?
The only way to kill the pack is to restart the server:
bin/zeoctl restart
Clients should reconnect automatically.
The server handles the packing in a separate thread, but offers no 'stop' hook.

Communication between two programs: signals or shared memory?

I need to implement (in Qt) some solution for communicating between two programs running on a Linux machine. One program is the Worker and the other is the Watchdog. Basically I need the Watchdog to periodically check on the Worker and, if something is wrong (no process, hang, no answer from the Worker), kill the Worker (if present) and start it again.
The Worker runs as a daemon, so I think starting it from /etc/init.d/worker would be appropriate.
I can see two solutions:
Unix signals - both programs can send and receive SIGUSR1
Shared memory
Which one should I choose?
With signals, both programs would have to know each other's PID, probably by reading it from /var/run, which looks like a drawback.
With shared memory, all I need is a key that both programs have hardcoded, so there is no need to read PIDs from the filesystem. Since the Watchdog starts first, it can create the shared memory segment, and the Worker will only attach to it and perhaps update a timestamp value. However, to stop a hung Worker, the Watchdog will still need the Worker's PID in order to send it SIGKILL; maybe it can read that from shared memory too? Both concepts are new to me.
So what is the proper way to build a reliable Watchdog, or am I missing something?
best regards
Marek
I think this is the best solution available through Qt:
http://qt-project.org/doc/qt-4.8/qlocalsocket.html
http://qt-project.org/doc/qt-4.8/qlocalserver.html
The QLocalSocket class provides a local socket. On Windows this is a named pipe and on Unix this is a local domain socket.
http://qt-project.org/doc/qt-4.8/ipc-localfortuneserver.html
http://qt-project.org/doc/qt-4.8/ipc-localfortuneclient.html
Hope that helps.
