How to disable the Linux console on a serial port [duplicate]

On a Raspberry Pi with Arch Linux there is an active service called serial-getty@ttyAMA0.
The unit file is /usr/lib/systemd/system/serial-getty@.service.
As root I can invoke
systemctl stop serial-getty@ttyAMA0
systemctl disable serial-getty@ttyAMA0
but after a reboot the service is enabled and running again.
Why is the service enabled again after disabling it? How can I disable it permanently?
UPDATE
systemd uses generators: at /usr/lib/systemd/system-generators/ there is a binary called systemd-getty-generator. This binary runs at system start and adds the symlink serial-getty@ttyAMA0.service to /run/systemd/generator/getty.target.wants.
I eventually found a dirty solution: I commented out all actions in /usr/lib/systemd/system/serial-getty@.service. The service still appeared to start, but without blocking ttyAMA0.

The correct way to stop a service from ever being enabled again is:
systemctl mask serial-getty@ttyAMA0.service
(using ttyAMA0 as the example in this case). This creates a symlink to /dev/null in place of the unit file for that service.
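A minimal sketch of the whole fix, assuming the unit name from the question (run as root):

```shell
#!/bin/sh
# Stop the running getty and mask the unit so nothing can re-enable it.
# Masking creates a symlink to /dev/null under /etc/systemd/system, and
# that link takes precedence over the unit the generator recreates in
# /run/systemd/generator at every boot.
UNIT="serial-getty@ttyAMA0.service"
if command -v systemctl >/dev/null 2>&1; then
    systemctl stop "$UNIT" 2>/dev/null
    systemctl mask "$UNIT" 2>/dev/null
fi
```

`systemctl unmask serial-getty@ttyAMA0.service` reverses this if the serial console is ever needed again.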

Try this code:
system("systemctl stop serial-getty@ttyAMA0.service");
system("systemctl disable serial-getty@ttyAMA0.service");
I use it, and it works well.

Related

Stripe CLI on DEBIAN server, how do I make it "listen" to new requests in the background and continue using the console?

I want to use the Stripe CLI and webhook events on my Debian (10.1) server. I've managed to get everything working, but my problem is that when I run:
stripe listen --forward-to localhost:8000/foo/webhooks/stripe/
I can't use the console anymore because it is listening for incoming events, and I still need the console. The only option shown is ^C to quit, but I need the CLI listener to keep running at all times while I do other things.
On my local machine I can open multiple terminal sessions, run the listen command in one and keep interacting with the system in another, but I don't know how to do that on the Debian server yet. It would be great if the listen command could just run in the background so I could continue with what I need to do without stopping the listener. My next idea was to tunnel to the server via SSH, but I'm unsure whether that would solve my problem. Wouldn't that mean that my computer at home running that session would need to be running at all times? I'm lost here...
By the way, the server is a droplet on DigitalOcean, if that matters... which I don't think it does.
Please let me know if anything is unclear.
UPDATE/SOLVED:
I misunderstood my problem: the Stripe CLI is just for local testing. Once a Stripe integration is in production, Stripe's servers send requests to my server/endpoints.
If you are wondering about this or want to read more about how it works in production, start here: https://stripe.com/docs/webhooks/go-live
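For anyone landing here for the original question — how to keep a CLI listener running in the background — a minimal sketch with nohup (the log and pid file names are made up; tmux or screen would work just as well):

```shell
#!/bin/sh
# Start the listener detached from the terminal so it survives logout;
# output goes to a log file instead of blocking the console.
nohup stripe listen --forward-to localhost:8000/foo/webhooks/stripe/ \
    > stripe-listen.log 2>&1 &
# Remember the PID so the listener can be stopped later.
echo $! > stripe-listen.pid
```

Stop it later with `kill "$(cat stripe-listen.pid)"`.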

puppet client reporting to wrong host in Foreman

This is my first post!
I have hundreds of nodes managed by Puppet/Foreman. Everything is fine.
I did something I had already done without problems in the past:
change the hostname of a server.
This time I changed two hostnames:
Initially I had 'gate02' and 'gate03'.
I moved gate02 to 'gate02old' (with a dummy IP, and switched the server off),
then I moved gate03 to gate02...
Now (the new) gate02's reports are updating the host called gate02old in Foreman.
I cleaned the certs on the puppetserver. I removed the ssl dir on the (new) gate02 and ran puppet agent. I did not find any reference to 'gate' in /var/lib/puppet. I changed the certname in puppet.conf, the hostname, and sysconfig/network-scripts/ifcfg-xxxx.
The puppet agent runs smoothly and sends its report to the puppetserver. But it updates the wrong host!
Anyone would have a clue on how to fix this ?
Thanks!
Foreman 2.0.3
Puppet 6
I do not accept that the sequence of events described led to the behavior described. If reports for the former gate03, now named gate02, are being logged on the server for name gate02old, then that is because that machine is presenting a cert to the server that identifies it as gate02old (and the server is accepting that cert). The sequence of events presented does not explain how that might be, but my first guess would be that it is actually (new) gate02old that is running and requesting catalogs from the server, not (new) gate02.
Fix it by
Ensuring that the machine you want running is in fact the one that is running, and that its hostname is in fact what you intend for it to be.
Shutting down the agent on (new) gate02. If it is running in daemon mode then shut down the daemon and disable it. If it is being scheduled by an external scheduler then stop and disable it there. Consider also using puppet agent --disable.
Deactivating the node on the server and cleaning out its data, including certs:
puppet node deactivate gate02
puppet node deactivate gate02old
puppet node deactivate gate03
You may want to wait a bit at this point, then ...
puppet node clean gate02
puppet node clean gate02old
puppet node clean gate03
Cleaning out the nodes' certs. For safety, I would do this on both nodes. Removing /opt/puppetlabs/puppet/ssl (on the nodes, not the server!) should work for that, or you could remove the puppet-agent package altogether, remove any files left behind, and then reinstall.
Updating the puppet configuration on the new gate02 as appropriate.
Re-enabling the agent on gate02, and starting it or running it in --test mode.
Signing the new CSR (unless you have autosigning enabled), which should have been issued for gate02 or whatever certname is explicitly specified in that node's puppet configuration.
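Taken together, the server-side part of the steps above can be sketched like this (hostnames are the ones from the question; run on the puppetserver):

```shell
#!/bin/sh
# Deactivate all three node names, give PuppetDB time to process the
# deactivations, then clean their certs and data.
NODES="gate02 gate02old gate03"
if command -v puppet >/dev/null 2>&1; then
    for n in $NODES; do puppet node deactivate "$n"; done
    sleep 60    # arbitrary pause; let PuppetDB catch up
    for n in $NODES; do puppet node clean "$n"; done
fi
echo "server-side cleanup sketched for: $NODES"
```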
Thanks for the answer, though it was not the right one.
I eventually got to the right point by changing the hostname of the old gate02old again, to another existing-for-testing one, starting the server, and getting it back into Foreman. Once that was done, removing (again!) the certs of the new gate02 put it right, and its reports now update the right entry in Foreman.
I still believe there is something (a database?) that was not updated correctly, so Foreman was sure that the host called 'gate02' was 'gate02old' in the GUI.
I am very sorry if you don't believe me.
Not to say rather disappointed.
Cheers.

Disable internet access when calling java -jar

I'm testing six distinct .jar files that all need to handle the possibility of no online access.
Unfortunately, my home directory is on a network disk, so disabling the network connection or pulling the Ethernet cable does not work unless I move all the files to /tmp or /scratch and change my $HOME environment variable, all of which I'd rather not do as it ends up being a lot of work.
Is there a way to invoke java -jar and prevent the process from accessing the internet? I have not found any such flag in the man pages. Is there perhaps a UNIX way of doing this, as in:
disallowinternetaccess java -jar Foo.jar
Tell your Java program to access the network through a proxy. For all internet access this would be a SOCKS5 proxy.
java -DsocksProxyHost=socks.example.com MyMain
I believe that if no proxy is running you should get an appropriate exception in your program. If you need full control of what is happening, you can look into (and possibly modify) http://jsocks.sourceforge.net/
See http://docs.oracle.com/javase/7/docs/technotes/guides/net/proxies.html for details.
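A concrete sketch of that approach for the jar files from the question (Foo.jar is a placeholder, and port 1 is just a local port where nothing should be listening):

```shell
#!/bin/sh
# Route all of the JVM's socket traffic through a SOCKS proxy that does
# not exist, so every outbound connection fails fast with an exception
# the program has to handle.
PROXY_OPTS="-DsocksProxyHost=127.0.0.1 -DsocksProxyPort=1"
if command -v java >/dev/null 2>&1 && [ -f Foo.jar ]; then
    java $PROXY_OPTS -jar Foo.jar
fi
```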
Note: You can do this without any native Unix stuff, so this question fits perfectly fine on SO.
You just need to turn on the SecurityManager: -Djava.security.manager=default
See details: https://stackoverflow.com/a/4645781/814304
With this solution you can even control which resources you want to expose and which to hide.

Watch an application and restart it

I'm using a SaltStack minion on Windows.
I would like to continuously check that an application is running, restart it if it crashes, and also have the possibility to stop it.
The application is not a Windows service, but I would like to simulate a service with SaltStack.
I've checked cmd.run, but I'm not sure how to use it. It seems this command waits for my application to exit, but I don't want it to exit.
Thank you.
Salt only runs commands when you tell it to. You may want to use the service beacon if you want to constantly check the status of a service.
The service beacon will check once a second (or at whatever interval you prefer) and send an event on Salt's event bus when the status changes.
Then you can set up a Salt reactor that will start the service that has gone down.
Beacon description here: http://docs.saltstack.com/en/latest/topics/beacons/
Service beacon description here: http://docs.saltstack.com/en/latest/ref/beacons/all/salt.beacons.service.html#module-salt.beacons.service
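A minion-side sketch of that setup, assuming a hypothetical service name myapp (the exact beacon syntax has varied between Salt releases, so treat this as a starting point, not a drop-in config):

```yaml
# /etc/salt/minion.d/beacons.conf  (path is an assumption)
beacons:
  service:
    - services:
        myapp:              # hypothetical service name to watch
          onchangeonly: True
    - interval: 10          # check every 10 seconds
```

The matching reactor would listen for the beacon's event tag and run a state (or service.start) to bring myapp back up.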

how to post scripts to networkmanager's dispatcher.d directory

Ubuntu 10.10 64-bit Athlon, GNOME
My basic scenario: I'm connecting to a VPN service (via the NetworkManager PPTP protocol) and transferring private data (hence the VPN). The service goes down intermittently, and that's all right, probably due to my ISP/OS/VPN. What is not good is that my applications will then continue to transmit data via the eth0 default route, and that's not cool. After some looking around, I suspect the best way to deal with this is to put scripts into /etc/NetworkManager/dispatcher.d. In short, the NetworkManager service executes the scripts in this directory (and passes arguments to them) when anything about the network changes.
My problem is that I can't get any of my scripts to execute. They all have 0755 permissions per the man page and are owned by root, but when I change the network state by unplugging the Ethernet cable, my scripts don't execute. I can run them from the command line, but not automatically via the dispatcher...
an example script:
#!/bin/sh -e
exec /usr/bin/wmctrl -c qBittorrent
exit 0
This script is intentionally simple for testing purposes.
I can post whatever else would be helpful.
I'm using the syntax killall -9 any_application_name_here and that's working just fine. I imagine the script didn't have access to the wmctrl binary; dispatcher scripts run with a minimal environment, so they will only find binaries on the default path (and wmctrl also needs access to the X display, which a dispatcher script running as root doesn't have).
So, in a nutshell, if you want to control your VPN traffic based on network events, one way is to put scripts in /etc/NetworkManager/dispatcher.d and use binaries that are on the shell's default path.
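Putting both points together, a dispatcher script along these lines should work (the file name 99-vpn-guard is made up; NetworkManager passes the interface as $1 and the event as $2):

```shell
#!/bin/sh -e
# Hypothetical /etc/NetworkManager/dispatcher.d/99-vpn-guard
# $1 = interface, $2 = event (up, down, vpn-up, vpn-down, ...)
IFACE="${1:-unknown}"
EVENT="${2:-none}"
case "$EVENT" in
    vpn-down|down)
        # Absolute path: dispatcher scripts do not get a login shell's PATH.
        /usr/bin/killall -9 any_application_name_here 2>/dev/null || true
        ;;
esac
```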
