Sometimes, you need to run more commands if one command causes a change on the remote system. Good examples would be:
You update a systemd service file. If the file was actually changed, then you need to restart the service.
You update the configuration for a service (like say /etc/dhcp/dhcpd.conf). If that file was changed, you need to restart the service.
Is there a way to do this with files.put? Ideally you could write code like:
changed = files.put(src='files/dhcpd.conf', dest='/etc/dhcp/dhcpd.conf')
if changed:
    systemd.service(service='isc-dhcp-server', running=True, restarted=True)
In pyinfra, every operation returns an object that has a changed property that does what you want:
dhcpconfig = files.put(…)
if dhcpconfig.changed:
    systemd.service(…)
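Put together as a complete deploy file, a minimal sketch might look like this (assuming pyinfra's standard operations modules and the service name from the question):

# deploy.py -- minimal sketch of the conditional-restart pattern
from pyinfra.operations import files, systemd

# Upload the config file; the returned object records whether anything changed
dhcpconfig = files.put(src='files/dhcpd.conf', dest='/etc/dhcp/dhcpd.conf')

if dhcpconfig.changed:
    # Restart the service only when the file actually changed
    systemd.service(service='isc-dhcp-server', running=True, restarted=True)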
Would you know how to run a background task in Symfony 4, triggered when a form is submitted? This would avoid the user having to stay on the form page until the task is finished.
The idea is that when the form is validated, it starts an independent background task. The user can then continue browsing and come back once the task is finished to get the results.
Thanks for your help,
You need to use the Message Bus pattern. Symfony has had its own implementation of this pattern since version 4.1, which introduced the Messenger component.
You can see the documentation here: https://symfony.com/doc/current/components/messenger.html
To get it working you need an external message broker that implements the AMQP protocol. The most popular one in the PHP world is, IMHO, RabbitMQ.
A very simple solution for this could be the following procedure:
Form is valid.
A temporary file is created.
A cron job is executed every five minutes and starts a Symfony command.
The command checks if the file exists and if it's empty.
If so, the command works off the background task. But before doing so, it writes its process ID into the file to prevent it from being executed a second time.
Remove the file when the command has finished.
As long as the file exists, you can show the user a hint that the task is still running.
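The sentinel-file logic is language-agnostic; here is a minimal sketch of the command's side of it in Python for illustration (the sentinel path and do_background_task are made-up names; a Symfony console command would do the same thing in PHP):

import os

SENTINEL = '/tmp/background-task.sentinel'   # hypothetical path; created when the form is valid

def do_background_task():
    pass  # placeholder for the actual long-running work

def run_from_cron():
    if not os.path.exists(SENTINEL):
        return                               # no task has been requested
    if os.path.getsize(SENTINEL) > 0:
        return                               # a previous run already claimed the task
    with open(SENTINEL, 'w') as f:
        f.write(str(os.getpid()))            # claim the task by writing our process ID
    try:
        do_background_task()
    finally:
        os.remove(SENTINEL)                  # done: allow the next request to start fresh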
I'm trying to get log output (Console.WriteLine(..)) into my Docker logs, but with no luck.
I've tried:
Console.WriteLine(..)
Trace.WriteLine(..)
Flushing the console, flushing the trace.
I can see these outputs in a VS output window when I'm debugging, so they do go somewhere.
I'm on a Windows container, using microsoft/aspnet:4.7.1-windowsservercore-1709 and .NET 4.7.
These are the logs I get on container start:
docker logs -f exportapi
ERROR ( message:Cannot find requested collection element. )
Applied configuration changes to section "system.applicationHost/applicationPools" for "MACHINE/WEBROOT/APPHOST" at configuration commit path "MACHINE/WEBROOT/APPHOST"
You have many good lateral options, such as self-contained/server-contained executables (e.g. .NET Core on microsoft/dotnet:runtime proxies Console.WriteLine by default with the dotnet new web scaffold). Zero-configuration STDOUT logging has never been a common approach on IIS, but these modern options adopt it as best practice (logging should be a transparent backing service).
If you want or need a chain of three programs/assemblies to get your web service up (ServiceMonitor, W3SVC, and finally your assembly), then you need something like this: https://blog.sixeyed.com/relay-iis-log-entries-to-read-them-in-docker/
Overriding the entrypoint to tail more logs than the image does by default is unfortunately a common hack (not just in Microsoft land). So, in your case, I believe you need at least a trace listener configuration to emit Trace.WriteLine, and then the above approach to relay it into the Docker logs: https://learn.microsoft.com/en-us/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners
I created a client-server application and now I would like to deploy it.
During development I started the server in a terminal, and when I wanted to stop it I just typed Ctrl-C.
Now I want to be able to start it in the background and stop it whenever I want by just typing:
/etc/init.d/my_service {start|stop}
I know how to write an init script, but the problem is how to actually stop the process.
I first thought to retrieve the PID with something like:
ps aux | grep "my_service"
Then I found a better idea, still based on the PID: storing it in a file so that it can be retrieved when trying to stop the service.
That still felt too dirty and unsafe, so I eventually thought about using sockets to let the "stop" process tell the actual process to shut down.
I would like to know how this is usually done, or rather, what the best way to do it is.
I checked some of the files in init.d, and some of them use PID files, but with a particular command, start-stop-daemon. I am a bit suspicious of this method, which seems unsafe to me.
If you have a utility like start-stop-daemon available, use it.
start-stop-daemon is flexible and can use 4 different methods to find the process ID of the running service. It uses this information (1) to avoid starting a second copy of the same service when starting, and (2) to determine which process ID to kill when stopping the service.
--pidfile: Check whether a process has created the file pid-file.
--exec: Check for processes that are instances of this executable.
--name: Check for processes with the name process-name.
--user: Check for processes owned by the user specified by username or uid.
The best one to use in general is probably --pidfile. The others are mainly intended to be used in case the service does not create a PID file. --exec has the disadvantage that you cannot distinguish between two different services implemented by the same program (i.e. two copies of the same service). This disadvantage would typically apply to --name also, and, additionally, --name has a chance of matching an unrelated process that happens to share the same name. --user might be useful if your service runs under a dedicated user ID which is used by nothing else. So use --pidfile if you can.
For extra safety, the options can be combined. For example, you can use --pidfile and --exec together. This way, you can identify the process using the PID file, but don't trust it if the PID found in the PID file belongs to a process that is using the wrong executable (it's a stale/invalid PID file).
I have used the option names provided by start-stop-daemon to discuss the different possibilities, but you need not use start-stop-daemon: the discussion applies just as well if you use another utility or do the matching manually.
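For illustration, here is a minimal Python sketch of doing that matching manually on Linux: read the PID file, distrust it if the PID no longer points at the expected executable (roughly what combining --pidfile with --exec gives you), and only then send the signal. The paths are hypothetical:

import os
import signal

PID_FILE = '/var/run/my_service.pid'         # hypothetical PID file
EXPECTED_EXE = '/usr/local/bin/my_service'   # hypothetical executable path

def stop_service():
    with open(PID_FILE) as f:
        pid = int(f.read().strip())
    try:
        actual_exe = os.readlink('/proc/%d/exe' % pid)   # Linux-specific
    except OSError:
        return   # no such process: the PID file is stale
    if actual_exe != EXPECTED_EXE:
        return   # the PID was reused by an unrelated program
    os.kill(pid, signal.SIGTERM)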
I have an executable (no source) that I need to wrap, to make sure that it is not called more than once at a time. I immediately think of some sort of queue wrapper, but how do I actually make it so that my wrapper is called instead of the executable itself? Is there a better way to do this? The solution needs to be invisible because the users are other applications. Any information/recommendations are appreciated.
Method 1: Put the executable in some location not in the standard path. Create a shell script that checks for a sentinel file and, if the sentinel file is absent, creates it, executes the program, waits for the program to complete, then deletes the sentinel file. If the sentinel file is present, the script enters a loop with a short delay (1 second? How long does a typical run of this program take? Take that and halve it), checks the sentinel file again, and so on.
Method 2: Create a separate program that does the same thing as the script, but using a system-level semaphore or lock instead. You could even simply use a read/write lock on a file. The program would fork() and exec() the real program, waiting for the child to exit before releasing the lock.
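Here is a minimal Python sketch of the second method, using an exclusive lock on a lock file so that concurrent callers simply wait their turn (the lock file and the path of the real, renamed executable are hypothetical):

#!/usr/bin/env python3
import fcntl
import subprocess
import sys

LOCK_FILE = '/var/lock/my_program.lock'           # hypothetical lock file
REAL_PROGRAM = '/opt/my_program/my_program.real'  # hypothetical renamed executable

def main():
    with open(LOCK_FILE, 'w') as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)          # block until no other caller holds the lock
        result = subprocess.run([REAL_PROGRAM] + sys.argv[1:])
        return result.returncode                  # the lock is released when the file is closed

if __name__ == '__main__':
    sys.exit(main())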
If the users are other applications, you can just rename the executable (e.g. name -> name.real) and call the wrapper with the original name. To make sure that it's only called once at a time, you can use the pidof command (e.g. pidof name.real) to check if the program is running already (pidof actually gives you the PID of the running process, so that you can use stuff such as kill or whatever to send signals to it).
Can anyone recommend a fairly clean method for determining, at run time, which process scheduler an app engine is running on (NT or UNIX)? I need to set a file path that is obviously dependent upon the server the process is being executed on. I understand the GetEnv command can be used, but I don't want to set an environment variable for this particular instance (it does not reside under the PS_FILES path). I've searched PeopleBooks for any kind of built-in function or system variable, but was not successful (obviously).
Any suggestions would be appreciated.
Thanks
Okay, I may have asked this question a little too early. I apologize.
It looks like I'll just be able to query the process request table to pull back the server name:
SQLExec("SELECT SERVERNAMERUN FROM PSPRCSRQST WHERE PRCSINSTANCE = :1", &thisProcess, &server);
Evaluate &server
When.......
End-Evaluate;
Exactly :-)
There are a host of Process Request records that can give you the information you need. Glad that you found it.
John