virsh console hangs at the escape character "^]" - console

I am trying to kickstart a newly built VM. I am stuck with the following.
I want to start it with a console so that I can enter the username and other info for this VM:
[vmhost02 ~]$ sudo virsh start --console testengine
Domain testengine started
Connected to domain testengine
Escape character is ^]
It hangs there and doesn't respond to any keys except "^]".
Let me know if you need more information.
Thanks very much.

1)
You can try to edit /etc/default/grub in the guest, and make sure you have:
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
Then execute:
# update-grub
# reboot
2)
If that does not work, try to replace quiet with console=ttyS0 in GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="... console=ttyS0"
Then again:
# update-grub
# reboot
3)
You may still need to try:
# systemctl enable serial-getty@ttyS0.service
# systemctl start serial-getty@ttyS0.service
# reboot
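One caveat: update-grub is a Debian/Ubuntu helper. If the guest is a RHEL/CentOS-style system (the question mentions kickstart), the equivalent regeneration step would be roughly:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot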

You need to define a tty to be used as a virtual console. If you have access to your VM via VNC or SSH, create the following file:
vi /etc/init/ttyS0.conf
The content should be something like
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]
respawn
exec /sbin/getty -L 38400 ttyS0 vt102 # This is your term type vt102
Save these settings, then from your host machine run:
virsh destroy [vm-name]; service libvirtd stop; service libvirtd start; virsh start [vm-name]
I do a stop/start of libvirtd here because a plain restart sometimes tends not to send a SIGTERM to libvirt.
Finally try
virsh console [vm-name]

This may be simpler than val0x00ff's solution: add console=ttyS0 at the end of the kernel lines in the VM's /boot/grub2/grub.cfg (this does not seem to be done by default):
(vm)$> grubby --update-kernel=ALL --args="console=ttyS0"
(vm)$> reboot
Then virsh console should work as expected.

Related

VPN killswitch using UFW, but now openvpn3 no longer can start automatically

I successfully implemented this, which blocks all internet connections on my Linux machine UNLESS it connects via a specific VPN :
https://www.comparitech.com/blog/vpn-privacy/how-to-make-a-vpn-kill-switch-in-linux-with-ufw/
If I manually execute openvpn3 session-start --config ~/Desktop/config.ovpn, it successfully connects via the VPN.
I used to have this command in a script (with #!/bin/bash as its header) that ran at device bootup without any issues, UNTIL I configured ufw for the killswitch above (ufw now runs on device bootup).
I use openvpn3, so the openvpn commands in the above tutorial didn't work at all.
I even tried using a sleep in my bash script to get it to wait a while until after bootup. Doesn't work. But if I issue the connection command manually in the command prompt, it works.
Please help! I need it to connect automatically. Much appreciated!
After spending a whole day on this, I figured out a solution. I found an article that guided me: https://www.howtogeek.com/687970/how-to-run-a-linux-program-at-startup-with-systemd/
I set up a systemd (systemctl) service just for the connect command. Here is what my entry looks like:
#/etc/systemd/system/connectvpn.service
[Unit]
Description=Connect VPN
After=ufw.service network.target
Requires=ufw.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/connect
#/usr/local/bin/connect
#!/bin/bash
openvpn3 session-start --config /home/xyz/Desktop/config.ovpn
Working nicely now, connects to the VPN on bootup.
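One thing to watch: the unit as shown has no [Install] section, so systemctl enable has nothing to attach it to. A minimal sketch of what would normally be added to have it pulled in at boot (assuming the file above):
[Install]
WantedBy=multi-user.target
followed by:
sudo systemctl daemon-reload
sudo systemctl enable connectvpn.service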

Using systemd to start vpnc results in 'unknown host'?

I'm trying to connect to vpnc using a systemd service file. The service file runs a script, myscript.sh, which, among other things, runs:
sudo vpnc myhost
Upon booting the device, the other commands are correctly executed, but the vpn is not connected, and gives me the error message:
vpnc: unknown host `myhost.com'
However, if I run the service file manually using
systemctl start myservice.service
then the vpn is successfully started.
My service file looks like this:
[Unit]
Description=VPN Start
Wants=network-online.target
After=network.target network-online.target
[Service]
Environment=DISPLAY=:0.0
Environment=XAUTHORITY=/home/pi/.Xauthority
Type=forking
ExecStart=/bin/bash /home/pi/myscript.sh
Restart=on-abort
User=pi
Group=pi
[Install]
WantedBy=multi-user.target
systemctl status myservice.service
includes this message:
pi: TTY=unknown ; PWD=/home/pi ; USER=root ; COMMAND=/usr/sbin/vpnc myhost
I have already done:
systemctl enable systemd-networkd-wait-online
and that hasn't appeared to help.
It might be too late, but maybe someone else will stumble upon this.
I had the same issue, although I configured the VPN through the GUI.
I eventually found out that my /etc/resolv.conf was a symlink to a configuration file installed by some third-party VPN software I sometimes use. Although the entries in that file looked fine, it only worked once I deleted the /etc/resolv.conf symlink and created a new regular file.
TL;DR:
Backup /etc/resolv.conf
Remove symlink
Create new /etc/resolv.conf and populate it with your preferred configuration
After that my VPN worked like a charm.
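A rough shell sketch of that TL;DR (the nameserver value is only an example; use whatever fits your setup):
sudo cp /etc/resolv.conf /etc/resolv.conf.bak    # backup (cp follows the symlink, so this saves the contents)
sudo rm /etc/resolv.conf                         # remove the symlink itself
echo 'nameserver 1.1.1.1' | sudo tee /etc/resolv.conf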

opensuse network management undefined

I did an update on my opensuse box and networking stopped working. The system is trying to use networkmanager, even though it isn't installed. I am using yast to try and get it to use ifup, but it complains about no network connection. I tried running:
ifup eth0
and I get back:
Network is managed by '' -> skipping
Does anyone out there know why it is coming back empty and if there is a config file that I can manually tweak to fix this?
I'm assuming you are running 12.3 or 13.1 with systemd.
Disable network manager if it exists:
systemctl disable NetworkManager.service
Enable network.service:
systemctl enable network.service
Make sure ifcfg-eth0 exists with a configuration in /etc/sysconfig/network/
Run ifup eth0
Hope this will help someone.
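For reference, a minimal DHCP /etc/sysconfig/network/ifcfg-eth0 on openSUSE can look something like this (values are illustrative):
BOOTPROTO='dhcp'
STARTMODE='auto'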
1. Disable and stop NetworkManager, then enable and restart it, respectively.
2. All of this happens in the console. Check the status of NetworkManager; the status messages should show that the (wireless) interface is disconnected. Confirm this by typing "sudo nmcli c".
3. Type "sudo iwlist (wireless-interface) scan" to list the available wireless networks.
4. If you see the network you want to connect to listed, type "nmcli a" and enter the corresponding passphrase/password to connect.
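If "nmcli a" does not do it on your NetworkManager version, the connect step on newer releases is typically along these lines (SSID and passphrase are placeholders):
sudo nmcli device wifi connect "MySSID" password "MyPassphrase"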

Openstack-Keystone failing to start

I've tried almost everything in the past couple of days to get keystone running to no avail.
The setup is all on the same host: the virtualization, OpenStack, and keystone all run there. I've tried setting up keystone with 127.0.0.1, localhost, and the host's IP, with no luck.
[DEFAULT] log_file = /var/log/keystone/keystone.log
admin_token = ***
bind_host = 192.168.33.11
public_port = 5000
admin_port = 35357
compute_port = 8774
# === Logging Options ===
# Print debugging output verbose = True
# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
# debug = False
# Name of log file to output to. If not set, logging will go to stdout. log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files. log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd
[sql] connection = mysql://keystone:***@localhost/keystone
# idle_timeout = 200
[identity] driver = keystone.identity.backends.sql.Identity
[catalog] template_file = /etc/keystone/default_catalog.templates driver = keystone.catalog.backends.sql.Catalog
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates
[token] driver = keystone.token.backends.sql.Token
# driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
# expiration = 86400
I've enabled logging in the logging.conf file and set the level to DEBUG and INFO, however nothing shows up in the log files.
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# ps aux | grep keystone
root 25580 0.0 0.0 103236 880 pts/1 S+ 09:41 0:00 grep keystone
[root@* keystone]#
Any ideas will be greatly appreciated. Thank you.
As I mentioned in the comment, I've never seen a config file with the section heading on the same line as a config option:
[DEFAULT] log_file = /var/log/keystone/keystone.log
I've always seen it like this instead:
[DEFAULT]
log_file = /var/log/keystone/keystone.log
However, I have no idea if this is related to your issue.
To enable debug-level logging, make sure you set the following in /etc/keystone/logging.conf:
[logger_root]
level=DEBUG
Then try running keystone manually instead of as a service:
$ sudo -u keystone bash
$ HOME=/var/lib/keystone keystone-all --debug
Hopefully you'll see a relevant error message on standard out.
(I believe it will still send the logging to /var/log/keystone/keystone.log, not sure how to actually get it to log to standard out when running manually like this).
Add a valid token for admin_token. It should not be "*".
Check the below line:
[sql] connection = mysql://keystone:*@localhost/keystone
It should be something like:
connection = mysql://keystone:keystone#localhost/keystone
Refer to this url for an example keystone.conf file
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/keystone-conf-file.html
I ran into this issue as well. I am running on Ubuntu 12.04 LTS. What I found was that the service start command in /etc/init/keystone.conf uses start-stop-daemon to run the service. It was written for a newer version than the one on my box; the --chdir variable is not accepted as an input. Once I removed that line, keystone started right up.
Try running:
start-stop-daemon --start --chuid keystone --name keystone --exec /usr/bin/keystone-all
/etc/init/keystone.conf after the change:
description "Keystone API server"
author "Soren Hansen <soren#linux2go.dk>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec start-stop-daemon --start --chuid keystone \
--name keystone \
--exec /usr/bin/keystone-all
Check if your IP address is equal to HOST_IP=... in localrc.
This might be due to keystone not getting started properly and therefore port 35357 is not in listening mode.
This seems to be anomalous behavior of service keystone.
I am listing steps which have worked on my system for a havana installation on Ubuntu 12.04, kernel version 3.2.0-67-generic, after a day of headache around this issue. Try these steps, preferably in the same order.
1) Remove the keystone package:
apt-get remove keystone
2) Reboot your system
reboot
3) After the reboot, install keystone again.
apt-get install keystone
4) Check status of keystone service
service keystone status
It should show start/running.
5) Now make whatever changes you need in /etc/keystone/keystone.conf.
After making changes in the conf file, DO NOT RESTART THE KEYSTONE SERVICE.
Use the stop and start commands to get the effect of a restart, but don't restart:
service keystone stop
service keystone start
For further help, here is a dump of my CLI:
http://pastebin.com/sduuFCL7
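To confirm the symptom mentioned above (keystone not listening on its ports), one quick check is:
# netstat -tlnp | grep -E '5000|35357'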
There are multiple problems with the icehouse documentation and install. packstack is broken, so the only way to get started is to manually follow the upstream docs for your distro. It is important to set up keystone correctly first before moving on, because other services rely on it.
The paste file /usr/share/keystone/keystone-dist-paste.ini should be copied into /etc/keystone/ to be accessible to the config scripts, like this:
cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
chown keystone:keystone /etc/keystone/*
Make sure to update keystone.conf with the new config_file value.
The documentation is wrong about the mysql connection; it should go under [sql] and not [database], so:
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@controller/keystone
The name controller should resolve to whatever mysql is bound to. I would add it to /etc/hosts like this, if [mysqld]/bind-address in /etc/my.cnf is 10.1.1.100:
10.1.1.100 controller
Make sure to uncomment log_file in keystone.conf so you can see what is happening.
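For example, with the same openstack-config tool used above (the path is the one from this setup):
openstack-config --set /etc/keystone/keystone.conf DEFAULT log_file /var/log/keystone/keystone.log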
I was facing a similar issue. I followed the below-mentioned steps and the openstack-keystone service got started.
Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [DEFAULT] section
[DEFAULT]
admin_token = ADMIN_TOKEN
In the [database] section
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
In the [token] section, configure the UUID token provider and SQL driver
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
In the [revoke] section
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
After making the above changes, populate the Identity service database using:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Start the openstack-keystone service using the command below:
systemctl start openstack-keystone
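If you also want it to come back after a reboot, presumably:
systemctl enable openstack-keystone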

Writing a unix daemon

I'm trying to code a daemon in Unix. I understand the part about getting a daemon up and running. Now I want the daemon to respond when I type commands in the shell, if they are targeted at the daemon.
For example:
Let us assume the daemon name is "mydaemon"
In terminal 1 I type mydaemon xxx.
In terminal 2 I type mydaemon yyy.
"mydaemon" should be able to receive the argument "xxx" and "yyy".
If I interpret your question correctly, then you have to do this as an application-level construct. That is, this is something specific to your program you're going to have to code up yourself.
The approach I would take is to write "mydaemon" with the idea of it being a wrapper: it checks the process table or a pid file to see if a "mydaemon" is already running. If not, then fork/exec your new daemon. If so, then send the arguments to it.
For "send the arguments to it", I would use named pipes, like are explained here: What are named pipes? Essentially, you can think of named pipes as being like "stdin", except they appear as a file to the rest of the system, so you can open them in your running "mydaemon" and check them for inputs.
Finally, it should be noted that all of this check-if-running-send-to-pipe stuff can either be done in your daemon program, using the API of the *nix OS, or it can be done in a script by using e.g. 'ps', 'echo', etc...
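A toy shell sketch of the named-pipe part (all names here are made up; the "daemon" just logs whatever it receives):
# one-time setup: create the pipe and start a reader loop in the background
mkfifo /tmp/mydaemon.fifo
while true; do cat /tmp/mydaemon.fifo; done >> /tmp/mydaemon.log &
# later, from any terminal, send arguments to the running "daemon"
echo "xxx" > /tmp/mydaemon.fifo
echo "yyy" > /tmp/mydaemon.fifo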
The easiest, most common, and most robust way to do this in Linux is using a systemd socket service.
Example contents of /usr/lib/systemd/system/yoursoftware.socket:
[Unit]
Description=This is a description of your software
Before=yoursoftware.service
[Socket]
ListenStream=/run/yoursoftware.sock
Service=yoursoftware.service
# E.g.: use SocketMode=0666 to give rw access to everyone
# E.g.: use SocketMode=0640 to give rw access to root and read-only to SocketGroup
SocketMode=0660
SocketUser=root
# Use socket group to grant access only to specific processes
SocketGroup=root
[Install]
WantedBy=sockets.target
NOTE: If you are creating a local-user daemon instead of a root daemon, then your systemd files go in /usr/lib/systemd/user/ (see pulseaudio.socket for example) or ~/.config/systemd/user/ and your socket is at /run/user/$(id -u)/yoursoftware.sock (note that you can't actually use command substitution in pathnames in systemd).
Example contents of /lib/systemd/system/yoursoftware.service
[Unit]
Description=This is a description of your software
Requires=yoursoftware.socket
[Service]
ExecStart=/usr/local/bin/yoursoftware --daemon --yourarg yourvalue
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
Also=yoursoftware.socket
Run systemctl daemon-reload && systemctl enable yoursoftware.socket yoursoftware.service as root
Use systemctl --user daemon-reload && systemctl --user enable yoursoftware.socket yoursoftware.service if you're creating the service to run as a local-user
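Once enabled, you can sanity-check that systemd is listening on the socket (unit and path names from the example above):
systemctl status yoursoftware.socket
ss -xl | grep yoursoftware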
A functional example of the software in C would be way too long, so here's an example in NodeJS. Here is /usr/local/bin/yoursoftware:
#!/usr/bin/env node
var SOCKET_PATH = "/run/yoursoftware.sock";
function errorHandle(e) {
  if (e) console.error(e), process.exit(1);
}
if (process.argv.includes("--daemon")) {
  // Daemon side: append everything received on the socket to a log file.
  var logFile = require("fs").createWriteStream(
    "/var/log/yoursoftware.log", {flags: "a"});
  require('net').createServer(conn => conn.pipe(logFile))
    .on("error", errorHandle)
    .listen(SOCKET_PATH);
} else {
  // Client side: pipe stdin through to the daemon's socket.
  process.stdin.pipe(
    require('net').createConnection(SOCKET_PATH).on("error", errorHandle)
  );
}
In the example above, you can run many yoursoftware instances at the same time, and the stdin of each of the instances will be piped through to the daemon, which appends all the stuff it receives to a log file.
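With the sketch above, a quick manual test (assuming the socket and log paths from the unit files, and that /usr/local/bin is on PATH) could look like:
$ echo "hello from one shell" | yoursoftware
$ echo "hello from another shell" | yoursoftware
$ sudo tail /var/log/yoursoftware.log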
For non-Linux OSes and distros without systemd, you would use the (typically shell-scripted) startup system to begin your process at boot and the user would receive an error like could not connect to socket /run/yoursoftware.sock when something goes wrong with your daemon.
