How can I configure my volume in ~/.pam_mount.conf.xml to set a different port for SSH?
Currently, with the default port 22, this works:
<volume options="nosuid,nodev"
user="yourUsername"
mountpoint="~/mountpoint"
path="sshfs#%(USER)#server:" fstype="fuse" />
Thanks for any hints.
Google doesn't help, and the manpage doesn't show me the right way to find this out.
Kind regards
flobee
I got it.
A working local SSH config must exist. Then it works :-)
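For anyone landing here later, a minimal sketch of what that means, assuming the server listens on port 2222 (host name, port, and user are placeholders). The volume entry from the question stays unchanged, because sshfs goes through ssh, which resolves the host alias via the SSH config:

```
# ~/.ssh/config (on the client)
Host server
    HostName server.example.com
    Port 2222
    User yourUsername
```

With this in place, sshfs#%(USER)#server: connects to port 2222 without any change to ~/.pam_mount.conf.xml.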
Related
I tried installing OpenStack in Ubuntu/VirtualBox, but when I run the command ./stack.sh, I get this error: devstack/stackrc:833 Could not determine host ip address. See local.conf for suggestions on setting HOST_IP. I already changed HOST_IP to the IPv4 address of the VirtualBox Host-Only network adapter by editing local.conf (gedit local.conf). Can somebody help me find the problem, please? Thanks in advance!
Did you solve this? I encountered it today, and I found this: https://bugs.launchpad.net/neutron/+bug/1693689
I'm not sure whether this method is correct, but after the change below I was able to install devstack Ussuri.
The default IP of the VM on VirtualBox is 10.0.2.15, and IPV4_ADDRS_SAFE_TO_USE in /opt/stack/devstack/stackrc is 10.0.0.0/22, so the VM's own address falls inside the "safe" range.
Try modifying /opt/stack/devstack/stackrc: change
IPV4_ADDRS_SAFE_TO_USE=${IPV4_ADDRS_SAFE_TO_USE:-10.0.0.0/22}
to
IPV4_ADDRS_SAFE_TO_USE=${IPV4_ADDRS_SAFE_TO_USE:-10.1.0.0/22}
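The change can also be scripted. The sketch below runs against a scratch copy so nothing real is touched; apply the same sed to /opt/stack/devstack/stackrc:

```shell
# Demo on a scratch file; run the same sed against /opt/stack/devstack/stackrc.
printf 'IPV4_ADDRS_SAFE_TO_USE=${IPV4_ADDRS_SAFE_TO_USE:-10.0.0.0/22}\n' > /tmp/stackrc.demo
sed -i 's|:-10\.0\.0\.0/22|:-10.1.0.0/22|' /tmp/stackrc.demo
cat /tmp/stackrc.demo
# → IPV4_ADDRS_SAFE_TO_USE=${IPV4_ADDRS_SAFE_TO_USE:-10.1.0.0/22}
```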
System:
Maxscale 2.5.9
Ubuntu 20.04
In order to access the Web Admin GUI, my maxscale.cnf file looks like this:
[maxscale]
threads=auto
admin_host=0.0.0.0
admin_secure_gui=1
admin_auth=1
admin_enabled=1
admin_gui=1
admin_ssl_key=/etc/ssl/certs/maxscale-key.pem
admin_ssl_cert=/etc/ssl/certs/maxscale-cert.pem
admin_ssl_ca_cert=/etc/ssl/certs/ca-certificates.crt
[...all other configuration..]
With this configuration I can access the Web Admin GUI on port 8989 from the internal IP address (not 127.0.0.1) in a browser.
The SSL key/certs are self-signed.
BUT
When using the command line like:
maxctrl list servers
I get the following error:
Error: Error: socket hang up
When I remove or comment out the lines with the admin_ssl_XXX parameters and restart maxscale, command line works again, but of course the Web-AdminGUI does not.
I tried various ways of creating the SSL certificates (including the one listed on the mariadb.com website:
https://mariadb.com/docs/security/encryption/in-transit/create-self-signed-certificates-keys-openssl/#create-self-signed-certificates-keys-openssl),
but the issue remains.
No errors in the maxscale.log whatsoever.
What is the best way to debug this issue?
Or do you have by any chance the right answer at hand?
Your help is greatly appreciated!
BR. Martin
You should use maxctrl --secure to encrypt the connections used by it.
Since you are using self-signed certificates, you have to also specify the CA certificate with --tls-ca-cert=/etc/ssl/certs/ca-certificates.crt if it's not installed in the system certificate store.
In addition, you probably need to use --tls-verify-server-cert=false to disable any warnings about self-signed certificates.
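Putting the three flags together (the CA path is taken from the maxscale.cnf above; adjust it to where your CA certificate actually lives):

```shell
maxctrl --secure \
        --tls-ca-cert=/etc/ssl/certs/ca-certificates.crt \
        --tls-verify-server-cert=false \
        list servers
```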
NFS Mount is not working in my RHEL 7 AWS instance.
When I do a
mount -o nfsvers=3 10.10.11.10:/ndvp2 /root/mountme2/
I get the error:
mount.nfs: requested NFS version or transport protocol is not supported
Can anyone point out where I am going wrong?
Thanks.
Check that the NFS service is started, or restart it:
sudo systemctl status nfs-kernel-server
In my case the service was not running, and the issue was in the /etc/exports file, where I had the same IP address for two machines.
So I commented out one of the IP addresses, restarted the service with
sudo systemctl restart nfs-kernel-server, and rebooted the machine.
It worked.
A point which might be useful for the clueless (like me): systemctl status nfs-server.service and systemctl start nfs-server.service must be executed on the server!
Some additional data
If, like me, you've deleted a VM without shutting it down properly, you might also need to manually edit /etc/exports, because NFS tries to connect to the deleted machine, fails, and doesn't continue with the next entry; it just dies.
After that you can manually restart as mentioned in other answers.
In my case, a simple reload didn't suffice. I had to perform a full restart:
sudo systemctl restart nfs-kernel-server
In my case, it didn't work correctly with NFS version 4.1.
So in the Vagrantfile, in each place where there is type: 'nfs', I added a comma and nfs_version: 4, nfs_udp: false.
Here is a more detailed explanation: NFS
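As a sketch, a synced-folder entry in the Vagrantfile then looks something like this (the folder paths are placeholders):

```ruby
config.vm.synced_folder ".", "/vagrant",
  type: "nfs", nfs_version: 4, nfs_udp: false
```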
If you're giving a specific protocol to connect with, also check to make sure your NFS server has that protocol enabled.
I got this error when trying to start up a Vagrant box, and my nfs server was running. It turns out that the command Vagrant uses is:
mount -o vers=3,udp,rw,actimeo=1 192.168.56.1:/dir/on/host /vagrant
Which specifically asks for UDP. My server was running but it was not configured to enable connecting over UDP. After consulting /etc/nfs.conf, I created /etc/nfs.conf.d/10-enable-udp.conf with the following contents to enable udp:
[nfsd]
udp=y
The name of the file doesn't matter, as long as it's in the conf.d directory and ends in .conf. Depending on your distribution it may be configured differently. You can directly edit nfs.conf, but using a conf.d file is more likely to preserve the changes after upgrading your system.
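To check whether the setting took effect, something like the following should work (a sketch; it assumes a running nfs-server and the rpcinfo tool):

```shell
sudo systemctl restart nfs-server    # pick up the new conf.d file
rpcinfo -p localhost | grep -w nfs   # a 'udp ... nfs' row should now be listed
```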
Try to ping the IP address of the server from the client. If you get a reply, install the NFS server on the host. Then edit the /etc/exports file; don't forget to add the port along with the IP address.
I got the solution: make an entry in /etc/nfsmount.conf on the NFS server with Defaultvers=3.
There will be a commented line # Defaultvers=3; just uncomment it and then mount on the NFS client.
The issue will be resolved!
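For reference, the relevant line in /etc/nfsmount.conf after the change:

```
# /etc/nfsmount.conf — remove the leading '#' so mounts default to NFSv3
Defaultvers=3
```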
I'm at my wits' end with this. I've combed every single Google result and nothing helps.
I'm completely unable to get docker containers to access the internet. IP forwarding is enabled (net.ipv4.ip_forward = 1), ufw is turned off, I've tried adding the -dns 8.8.8.8 -dns 8.8.4.4 flags. Every possible solution I've ever found on google fails.
Anyone have any idea how to help?
Attempting to reset everything, as recommended here, causes the entire thing to break by telling me that docker -d isn't running, even though it is.
I was facing the same problem. To solve it, I started the container with the --net=host argument, and it worked perfectly for me.
Here is the full statement:
sudo docker run --net=host -it --name ex_ngninx ubuntu
Resolved. I followed these instructions and commented out the dns=dnsmasq line in NetworkManager.conf.
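A sketch of that fix, assuming the stock Ubuntu path /etc/NetworkManager/NetworkManager.conf:

```shell
sudo sed -i 's/^dns=dnsmasq/#dns=dnsmasq/' /etc/NetworkManager/NetworkManager.conf
sudo systemctl restart NetworkManager
sudo systemctl restart docker   # so containers pick up the new resolver
```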
I use the following command to receive a connection:
socat TCP-LISTEN:4000,fork EXEC:"./myscrpit"
I need to have the sender's IP address in my script, but SOCAT_PEERADDR is not set. What is the problem?
Use the pktinfo option for TCP-LISTEN, i.e. use the following command:
socat TCP-LISTEN:4000,pktinfo,fork EXEC:"./myscrpit"
Just for information, but not an answer. This command works for me:
socat tcp-listen:12345 exec:./script
But this command does not:
socat exec:./script tcp-listen:12345
Hope this information helps. In my experience, if an address pair does not work, exchanging the order of the pair might work.
This seems to be the problem: SOCAT_PEERADDR is an environment variable, and to access it you need to spawn a shell. As the socat manpage (obtained with man socat) implies, the address type SYSTEM: should be used for this instead of EXEC:.
Demo: The following performed as desired for socat v1.7.3.3.
socat TCP-LISTEN:4000,fork SYSTEM:'echo "${SOCAT_PEERADDR}"'
To check, from another terminal, run
nc localhost 4000
This should show you your IP.