"UseDNS no" is conflict with "Match User xxx"? - sshd

Last night I added UseDNS no to my sshd_config. After I restarted ssh via sudo service ssh restart, I found that sshd had not started and no process was listening on port 22, but after I deleted UseDNS no it worked as before.
I compared it with another server's sshd_config; only three lines are different (this server is used for sftp, so these lines were added):
Match User editor
ChrootDirectory /opt/ljmall-staging/var/editor-rootfs
ForceCommand internal-sftp
I have tried commenting out ChrootDirectory and ForceCommand, but that doesn't help. Does this mean Match User and UseDNS no conflict?
Server OS: Ubuntu 14.04.3 LTS
Openssh-server: 1:6.6p1-2ubuntu2.7

They are not in conflict, but Match starts a new conditional block, and UseDNS is not allowed inside conditional blocks.
Moving UseDNS above the Match block will solve your issue.
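For example, a minimal sketch of the corrected ordering, reusing the paths from the question:

# Global options must come before any Match block
UseDNS no

Match User editor
ChrootDirectory /opt/ljmall-staging/var/editor-rootfs
ForceCommand internal-sftp

You can also validate the file with sudo sshd -t before restarting the service.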


Ubuntu (Oracle VM) - Mounted Samba shares hang indefinitely

I have a VM instance on Oracle Cloud (Ubuntu 22.04) set up with ZeroTier to act as a web server for some services that should work with my local Synology NAS.
For some of those services I also need to mount three SMB shares from my NAS with the ZeroTier tunnel, but I can't make it work.
I have used mount and mount.cifs plenty of times, with automounting too, but this time it acts very strangely:
running the mount command seems to succeed from the console, but /var/log/syslog reads
CIFS: VFS: \\XXX.XXX.XXX.XXX has not responded in 180 seconds.
Reconnecting...
when trying to access one of the shares (with ls, lsof, cd or any other command), it succeeds for only one of the shares (always the same one), and only for the first command given:
$ ls /temp
folder1 folder2 folder3
any following command just "hangs" as if the system were working on something, and it stays like that indefinitely most of the time:
$ ls /temp
█
Just a few times it spits out this error
lsof: WARNING: can't stat() cifs file system /temp
Output information may be incomplete.
ls 1475 ubuntu 3r DIR 0,44 0 123207681 /temp
findmnt reads:
└─/temp //XXX.XXX.XXX.XXX/Downloads cifs rw,relatime,vers=2.0,cache=strict, username=[redacted],uid=1005,noforceuid,gid=0,noforcegid,addr=XXX.XXX.XXX.XXX,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1
the remaining two "mounted" shares do not respond to any command, not even the very first one; they just hang like the one share that at least lets me browse once;
umount and umount -l take at least 2-3 minutes to successfully unmount the shares.
Same behavior when using smbclient and also with NFS shares from the same NAS.
What I have already tried:
update kernel and all packages;
remove, purge and reinstall cifs-utils, smbclient and so on...
tried mounting the same shares in another client / node within the ZeroTier network and it works just fine; also browsing from Windows and Android file manager apps with and without ZeroTier works flawlessly;
tried all SMB versions including SMBv3 and SMBv1 (CIFS);
tried different browsing or mounting methods / commands including mount, mount.cifs, autofs, smbclient;
tried to debug what happens behind the console, but didn't find anything that seems related to this in the logs, htop or anything else. During the "hanging" sessions there is no spike in CPU, RAM or network usage on either the Oracle VM or the Synology NAS;
checked, reset and reconfigured all permissions on my NAS for shares, folders and files recursively and reconfigured users groups permissions.
What I haven't tried yet (I'll try as soon as possible):
reproduce this on another Oracle VM configured the same as the faulty one and another with a different base image (maybe Oracle Linux?);
It seems to me that the mount.cifs process doesn't really succeed in mounting the share correctly, as it doesn't show as such anywhere. It also seems like an issue not related to folder/file permissions, but rather something related to networking.
A note on something that may or may not be related to this: ZeroTier on my Synology NAS does not seem to work with IPv4 only - it remains OFFLINE. The node goes ONLINE only when IPv6 is enabled, but I must say that this is the only node in my ZT network that shows an IPv6 address as its public IP in the ZT web GUI - the other nodes show public IPv4 addresses.
If anyone has any clue on this, I'll be happy to support and reproduce any advice. Thank you!
I'm using Tailscale, but I presume it will work the same.
You need to add port 445 to /etc/iptables/rules.v4, just under the SSH rule, like below (the second line is the new one):
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT
Then you need to edit the interfaces in /etc/samba/smb.conf to:
interfaces = lo tailscale0 100.0.0.0/24
Obviously, my interface is tailscale0, but yours will be different; use ip link show to find yours. You may also need to change the IP range to suit ZeroTier's, instead of 100.0.0.0/24, which is what Tailscale uses.
Then reboot!
I couldn't get it working without doing this.
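If you prefer not to reboot, reloading the firewall rules and restarting Samba should also apply the changes on a typical Ubuntu setup (command and service names assumed, not from the original answer):

sudo iptables-restore < /etc/iptables/rules.v4
sudo systemctl restart smbd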

Mosquitto: Starting in local only mode

I have a virtual machine that is supposed to be the host, which can receive and send data. The first picture is the error I'm getting on my main machine (from which I'm trying to send data). The second picture is the mosquitto log on my virtual machine. I'm also using the default config, which as far as I know can't cause these problems, at least from what I have seen in other examples. I have very little understanding of how all of this works, so any help is appreciated.
What I have tried on the host machine:
Disabling Windows defender
Adding firewall rules for "mosquitto.exe"
Installing mosquitto on a linux machine
Starting with the release of Mosquitto version 2.0.0 (you are running v2.0.2), the default config will only bind to localhost as a move to a more secure default posture.
If you want to be able to access the broker from other machines, you will need to explicitly edit the config files to either add a new listener that binds to the external IP address (or 0.0.0.0) or add a bind entry for the default listener.
By default it will also only allow anonymous connections (without username/password) from localhost; to allow anonymous connections from remote hosts, add:
allow_anonymous true
More details can be found in the 2.0 release notes here
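For example, a minimal mosquitto.conf that accepts unauthenticated connections on all interfaces could look like this (a sketch based on the options above, not an official sample):

# listen on all interfaces on the standard MQTT port
listener 1883
# let clients connect without a username/password
allow_anonymous true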
You have to run with
mosquitto -c mosquitto.conf
mosquitto.conf, which lives in the same folder as the executable (C:\Program Files\mosquitto etc.), has to include the following line:
listener 1883 ip_address_of_the_machine(192.168.1.1 etc.)
By default, the Mosquitto broker will only accept connections from clients on the local machine (the server hosting the broker).
Therefore, a custom configuration needs to be used with your instance of Mosquitto in order to accept connections from remote clients.
On your Windows machine, run a text editor as administrator and paste the following text:
listener 1883
allow_anonymous true
This creates a listener on port 1883 and allows anonymous connections. By default the number of connections is infinite. Save the file to "C:\Program Files\Mosquitto" using a file name with the ".conf" extension such as "your_conf_file.conf".
Open a terminal window and navigate to the mosquitto directory. Run the following command:
mosquitto -v -c your_conf_file.conf
where
-c : specify the broker config file.
-v : verbose mode - enable all logging types. This overrides
any logging options given in the config file.
I found I had to add not only bind_address ip_address but also allow_anonymous true before devices could connect successfully to MQTT. Of course I understand that a better option would be to set a user and password on each device, but that's a next step after everything actually works in the minimum configuration.
For those who use mosquitto with homebrew on Mac.
Adding these two lines to /opt/homebrew/Cellar/mosquitto/2.0.15/etc/mosquitto/mosquitto.conf fixed my issue.
allow_anonymous true
listener 1883
You can run it with the included 'no-auth' config file like so:
mosquitto -c /mosquitto-no-auth.conf
I had the same problem while running it inside a Docker container (generated with docker-compose).
In the docker-compose.yml file this is done with:
command: mosquitto -c /mosquitto-no-auth.conf
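A minimal docker-compose.yml illustrating this might look like the following (the eclipse-mosquitto image name and the port mapping are assumptions, not from the original answer):

services:
  mosquitto:
    # 2.x images ship /mosquitto-no-auth.conf alongside the default config
    image: eclipse-mosquitto:2
    command: mosquitto -c /mosquitto-no-auth.conf
    ports:
      - "1883:1883"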

Error: Port 5000 is not open, could not start functions emulator

✔ Deploy complete!
Project Console: https://console.firebase.google.com/project/socialape-6b2f7/overview
Ayhan-MacBookPro:socialape-functions macbook$ firebase serve
=== Serving from '/Users/macbook/Desktop/socialape-functions'...
Error: Port 5000 is not open, could not start functions emulator.
Run lsof -t -i tcp:5000 | xargs kill from your Terminal.
A common cause for this error is the Firebase emulator not being cleanly shut down (e.g., closing your IDE that's running the emulator in an embedded Terminal session). This leaves the process running in the background, occupying the emulator's default port.
To resolve the conflict, find the process ID running on the port (here 5000) from your Terminal command line and then kill it.
The above one-liner finds the process ID and pipes it directly to kill (h/t #manav).
For additional info, check out: Find (and kill) process locking port 3000 on Mac
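If you prefer to inspect the process before killing it, the two-step equivalent is (the PID shown is just an example):

$ lsof -i tcp:5000    # shows the PID of the process holding port 5000
$ kill 12345          # replace 12345 with the PID from the previous output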
The bug seems to be not on your end.
It is caused by a bug in a dependency (node portfinder).
A quick fix might be to use the old version of node portfinder (v1.0.21). Alternatively, you can edit node_modules/firebase-tools/lib/emulator/controller.js and change yield pf.getPortPromise({ port, stopPort: port }) to yield pf.getPortPromise({ port, stopPort: port + 1 }).
You can see the complete answer to your question in this SO link.
If you are facing this issue on macOS, then this solution is for you.
On macOS, port 5000 may be claimed by the new "AirPlay Receiver" service.
This can be disabled in Settings -> Sharing:
I'm also adding a screenshot of the settings panel for disabling AirPlay Receiver.
Disabling the AirPlay Receiver (if you do not need it) frees up port 5000.

mount.nfs: requested NFS version or transport protocol is not supported

NFS Mount is not working in my RHEL 7 AWS instance.
When I do a
mount -o nfsvers=3 10.10.11.10:/ndvp2 /root/mountme2/
I get the error:
mount.nfs: requested NFS version or transport protocol is not supported
Can anyone point out where I am wrong?
Thanks.
Check that the NFS service is started, or restart it:
sudo systemctl status nfs-kernel-server
In my case this service was not running, and the issue was in the /etc/exports file, where I had the same IP address for two machines.
So I commented out one machine's IP address and restarted nfs-kernel-server using
sudo systemctl restart nfs-kernel-server and reloaded the machine.
It worked.
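For reference, an entry in /etc/exports looks roughly like this (the path, client address and options here are hypothetical):

/srv/share   10.10.11.10(rw,sync,no_subtree_check)

After editing the file you can re-export with sudo exportfs -ra instead of restarting the whole service.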
A clarification which might be useful for the clueless (like me): systemctl status nfs-server.service and systemctl start nfs-server.service must be executed on the server!
Some additional data
If, like me, you've deleted a VM without shutting it down properly, you might also need to manually edit the /etc/exports file, because NFS is trying to connect to it and fails, but doesn't continue with the next entry; it just dies.
After that you can manually restart as mentioned in other answers.
In my case, a simple reload didn't suffice. I had to perform a full restart:
sudo systemctl restart nfs-kernel-server
In my case, it didn't work correctly with NFS version 4.1.
So in the Vagrantfile, in each place where type: 'nfs' appears, I added a comma and nfs_version: 4, nfs_udp: false.
Here is a more detailed explanation: NFS
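A synced-folder entry with those options might look like this (the folder paths are hypothetical; only the type, nfs_version and nfs_udp options come from the answer above):

Vagrant.configure("2") do |config|
  # force NFSv4 over TCP for this share
  config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_version: 4, nfs_udp: false
end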
If you're giving a specific protocol to connect with, also check to make sure your NFS server has that protocol enabled.
I got this error when trying to start up a Vagrant box, and my nfs server was running. It turns out that the command Vagrant uses is:
mount -o vers=3,udp,rw,actimeo=1 192.168.56.1:/dir/on/host /vagrant
This specifically asks for UDP. My server was running, but it was not configured to allow connecting over UDP. After consulting /etc/nfs.conf, I created /etc/nfs.conf.d/10-enable-udp.conf with the following contents to enable UDP:
[nfsd]
udp=y
The name of the file doesn't matter, as long as it's in the conf.d directory and ends in .conf. Depending on your distribution it may be configured differently. You can directly edit nfs.conf, but using a conf.d file is more likely to preserve the changes after upgrading your system.
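For the new setting to take effect, restart the NFS server afterwards (the service name may differ between distributions):

sudo systemctl restart nfs-server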
Try to ping the IP address of the server from the client ("ping <server IP>"); if you get a reply, then install the NFS server on the host. Then edit the /etc/exports file; don't forget to add the port along with the IP address.
I got the solution: make an entry in the NFS server's /etc/nfsmount.conf with Defaultvers=3.
There will be a line # Defaultvers=3; just uncomment it and then mount on the NFS client.
The issue will be resolved!
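After uncommenting, the relevant part of /etc/nfsmount.conf looks roughly like this (section name as found on RHEL 7; check your own file):

[ NFSMount_Global_Options ]
Defaultvers=3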

Unable to execute MPICH2 on multiple machines on ubuntu 12.04 (HYDU_sock_connect issue)

I am facing difficulty executing an MPI program on two machines. The OS is Ubuntu 12.04, and the MPI implementation is MPICH2.
ssh is working fine:
root@ubuntu:/home# ssh 192.168.1.9
root@gpuguy's password:
Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.8.0-29-generic i686)
* Documentation: https://help.ubuntu.com/
131 packages can be updated.
67 updates are security updates.
Last login: Thu Oct 24 17:36:25 2013 from ubuntu.local
root@gpuguy:~#
But when I run my MPI programs it fails:
root@ubuntu:/home# mpiexec -f hosts.cfg -n 4 hello
root@192.168.1.9's password:
[proxy:0:0@gpuguy] HYDU_sock_connect (./utils/sock/sock.c:171): unable to get host address for ubuntu (1)
[proxy:0:0@gpuguy] main (./pm/pmiserv/pmip.c:209): unable to connect to server ubuntu at port 42104 (check for firewalls!)
I have already disabled the firewall on both machines; that is why I can ssh successfully. But how do I solve this issue?
My MPI code runs successfully on single machine.
For MPICH (or any MPI implementation) to work, you need to have passwordless SSH set up. I should also mention that you really shouldn't have to be logged in as root to make this work. It's generally a very bad idea to be logged in as root all of the time.
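Setting up passwordless SSH typically looks like this, run as your normal (non-root) user on the machine that launches mpiexec; "mpiuser" is a placeholder and the address is the one from the question:

ssh-keygen -t rsa                # accept the defaults; leave the passphrase empty
ssh-copy-id mpiuser@192.168.1.9  # copy your public key to the other machine
ssh mpiuser@192.168.1.9          # should now log in without a password prompt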
In the /etc/hosts file, add the IP address of each server and its hostname.
You should do this for all the servers.
for example:
10.10.0.5 server1
10.10.0.6 server2
10.10.0.7 server3
Also check in the /etc/hosts file that you use a space, not a tab (\t), to separate the IP address and the hostname.
This is wrong:
10.10.0.5 \t server1
This is correct:
10.10.0.5 server1
Be careful not to delete or modify existing lines in the /etc/hosts file; only add new lines at the end of the file.
Also, you do not need to disable firewall to fix this issue.
