How to mount VMware shared folders in WSL2

I have a Mac and installed a Windows 10 VM with VMware Fusion. The shared folder between the Mac and the Win10 VM uses drive letter Z: (\\vmware-host).
I also mapped a network address \\192.168.111.49\Builds to drive letter Y:.
Then I installed WSL2 in the Win10 VM, with the Ubuntu 20.04 LTS distribution.
I want to mount all the drive letters in Ubuntu. C: is automatically mounted at /mnt/c.
I used these commands to mount Y:
sudo mkdir /mnt/y
sudo mount -t cifs -o username=xxx,password=xxxx,domain=xxx //192.168.111.49/Builds /mnt/y
That works. However, when I try to mount Z:, it doesn't work:
# from vmware official site
$ sudo vmhgfs-fuse -d .host:/ /mnt/z -o subtype=vmhgfs-fuse,allow_other
Segmentation fault (core dumped)
# try to use drvfs
$ sudo mount -t drvfs Z: /mnt/z
mount: /mnt/z: special device Z: does not exist.
<3>init: (457) ERROR: UtilCreateProcessAndWait:489: /bin/mount failed with status 0x2000
<3>init: (457) ERROR: MountPlan9:478: mount cache=mmap,rw,trans=fd,rfdno=3,wfdno=3,msize=65536,aname=drvfs;path=Z:;symlinkroot=/mnt/ failed 2
No such file or directory
# treat it like a network address
$ sudo mount -t cifs -o username=xxx,password=xxx //vmware-host/ /mnt/z
mount: /mnt/z: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
I checked my open-vm-tools version:
$ sudo apt-get upgrade open-vm-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
open-vm-tools is already the newest version (2:11.1.5-1~ubuntu20.04.2).
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
It's already the newest version.
Any help? Thanks.

I found the cause: I was running cmd or PowerShell as administrator.
If I launch WSL from a non-elevated shell, the following mount works:
sudo mount -t drvfs Z: /mnt/z
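To make the mount persist across WSL restarts, a minimal sketch of an /etc/fstab entry inside the distribution (the /mnt/z mount point is assumed to already exist; WSL must still be launched from a non-elevated shell):
# /etc/fstab inside WSL2 -- a sketch; assumes /mnt/z was created beforehand
Z: /mnt/z drvfs defaults 0 0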

Related

How to install OpenStack on Ubuntu in 2022

On YouTube there are many guides that show how to install OpenStack on Ubuntu. I have tried them and they don't seem to work.
For example, with DevStack the installation fails every time at ./stack.sh, and with MicroStack the initialization fails.
I can't install OpenStack in any way!
Could somebody help me?
I have installed OpenStack in various ways, but for me installing through DevStack is the easiest and most convenient way to do it.
Let me share the installation steps that I use.
First, a few prerequisites:
A fresh Ubuntu 20.04 installation (Ubuntu 18.04 Works)
8 GB RAM (4 GB RAM works)
4 vCPUs (2 vCPUs works)
Hard disk capacity of 20 GB (min 10 GB)
Step 1: apt update -y && apt upgrade -y
Step 2: Create Stack user:
sudo useradd -s /bin/bash -d /opt/stack -m stack
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
su - stack
Step 3:
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
Step 4: Create devstack configuration file
vim local.conf
Paste this:
[[local|localrc]]
# Password for KeyStone, Database, RabbitMQ and Service
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Host IP - get your Server/VM IP address from ip addr command
HOST_IP=0.0.0.0
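For the HOST_IP line, a small sketch of filling it in from the shell before running stack.sh (the interface name eth0 is an assumption; check yours with ip addr):
# grab the primary IPv4 address of the assumed interface eth0
HOST_IP=$(ip -4 addr show eth0 | grep -oP '(?<=inet )[0-9.]+' | head -n1)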
Step 5: ./stack.sh
The setup will take about 10-15 minutes depending on your system. Once installation is complete, you can access the dashboard at https://your-ip/dashboard.
Note: in case stack.sh fails, make sure to run ./unstack.sh and ./clean.sh before you run stack.sh again.
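A short sketch of that retry sequence, run from the devstack directory:
# tear down the failed run, clean leftover state, then retry
./unstack.sh
./clean.sh
./stack.sh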

How to Choose R Server's R as Default in Operationalization, Remote R Workspace and RStudio Server?

So I've set up an Azure Data Science Virtual Machine on Linux (Ubuntu) and I've executed the following on the terminal to enable Remote R workspace, RStudio Server, R Server Operationalization and hadoop:
sudo apt update
sudo apt -y upgrade
# Hadoop is installed but doesn't seem to appear on the PATH or have its environment variable set by default
sudo echo "" >> ~/.bashrc
sudo echo "export PATH="'$'"PATH:/opt/hadoop/hadoop-2.7.4/bin" >> ~/.bashrc
sudo echo "export HADOOP_HOME=/opt/hadoop/hadoop-2.7.4" >> ~/.bashrc
#
source ~/.bashrc
#Setting up a password as none exists to begin with because of private key selection in the installation
#RStudio Server requires a password though
"MyPassword\nMyPassword\n" | sudo passwd sshuser
#Unfortunately hadoop fails on Data Science Virtual Machine
#error: mkdir: Call From IM-DSonUbuntu/192.168.5.4 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
# hadoop fs -mkdir /user/RevoShare/rserve2
# hadoop fs -chmod uog+rwx /user/RevoShare/rserve2
sudo mkdir -p /var/RevoShare/rserve2
sudo chmod uog+rwx /var/RevoShare/rserve2
# hadoop fs -mkdir /user/RevoShare/sshuser
# hadoop fs -chmod uog+rwx /user/RevoShare/sshuser
sudo mkdir -p /var/RevoShare/sshuser
sudo chmod uog+rwx /var/RevoShare/sshuser
#Setting up R Server Operationalisation
cd /opt/microsoft/mlserver/9.2.1/o16n
sudo dotnet Microsoft.MLServer.Utils.AdminUtil/Microsoft.MLServer.Utils.AdminUtil.dll -silentoneboxinstall MyPassword
#They say this Data Science Virtual Machine already has RStudio Server, but even though port 8787 is open, it's nowhere to be found! So installing it now; after the installation it's accessible by refreshing the page that failed before.
#Perhaps it's not installed then? Or a service is not running like it should?
#https://www.rstudio.com/products/rstudio/download-server/
wget https://download2.rstudio.org/rstudio-server-1.1.414-amd64.deb
yes | sudo gdebi rstudio-server-1.1.414-amd64.deb
#They are small, leave them for debug reasons - lets have evidence the script run thus far.
#sudo rm rstudio-server-1.1.414-amd64.deb
# Remote R workspace Service needs dotnet sdk
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt update
sudo apt -y install dotnet-sdk-2.0.0
sudo apt install libxml2-dev
#Downloading and installing the Remote R service
wget -O rtvs-daemon.tar.gz https://aka.ms/r-remote-services-linux-binary-current
tar -xvzf rtvs-daemon.tar.gz
sudo ./rtvs-install -s
sudo systemctl enable rtvsd
sudo systemctl start rtvsd
#sudo rm rtvs-daemon.tar.gz
#sudo rm rtvs-install
#Fixing Remote R: For some reason, even though 'sudo systemctl enable rtvsd' runs, after every reboot the service won't become automatically active. So let's fix that.
wget https://sa0im0general.blob.core.windows.net/general-blob-container/StartRemoteRAfterReboot.sh
sudo mv StartRemoteRAfterReboot.sh /var/RevoShare/StartRemoteRAfterReboot.sh
sudo /sbin/shutdown -r 5
sudo chown root /etc/rc.local
sudo chmod 755 /etc/rc.local
sudo systemctl enable rc-local.service
sudo -s
sudo find /etc/ -name "rc.local" -exec sed -i 's/exit 0//g' {} \;
sudo echo "" >> /etc/rc.local
sudo echo "sh /var/RevoShare/StartRemoteRAfterReboot.sh" >> /etc/rc.local
sudo echo "exit 0" >> /etc/rc.local
exit
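For reference, a sketch of what /etc/rc.local should contain after those edits (the shebang line is an assumption based on Ubuntu's stock rc.local):
#!/bin/sh -e
# start the Remote R daemon helper on every boot
sh /var/RevoShare/StartRemoteRAfterReboot.sh
exit 0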
I've also tried, one by one, these, to see if it makes any difference to the RStudio Server (it didn't, but even if it did, I want a global solution to work on Remote R Workspace Service and R Server Operationalisation as well, not only RStudio Server):
#Configuring RStudio Server to see the R Server R
sudo echo "rsession-which-r=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> /etc/rstudio/rserver.conf
export RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R
sudo echo "RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> ~/.profile
source ~/.profile
sudo echo "RSTUDIO_WHICH_R=/opt/microsoft/mlserver/9.2.1/bin/R/R" >> ~/.bashrc
source ~/.bashrc
sudo echo "PATH=$PATH:/opt/microsoft/mlserver/9.2.1/bin/R" >> ~/.bashrc
export PATH=$PATH:/opt/microsoft/mlserver/9.2.1/bin/R
source ~/.bashrc
The problem is that even though "which R" points to R Server's R, i.e. typing "sudo R" shows the message "Loading Microsoft R Server packages, version 9.2.1." and loads packages like RevoScaleR, everything else fails to do so:
Accessing RStudio Server at http://THE-IP-GOES-HERE.westeurope.cloudapp.azure.com:8787 and logging in with the initial user ("sshuser"), or any other user for that matter, will NOT load R Server, and the RevoScaleR rx functions are unavailable.
Using my local Visual Studio 2017 to access the remote workspace via "Add connection" on the "Workspaces" tab loads MRO and says:
Installed R versions:
[0] Microsoft R Open '3.4.1.1347' (Default)
And finally, when I use R Server's Operationalisation and log in with the "mrsdeploy" package's "remoteLogin()", R Server packages like RevoScaleR are again not loaded, so things like "rxSummary(~., data=iris)" fail with the error 'could not find function "rxSummary"'.
The exact same thing happened when I deployed a "Machine Learning Server 9.2.1 on Linux (Ubuntu)" from Azure.
I don't want to just use the regular open-source R; I want to be able to use R Server - that's why I deployed this VM. How can I make everything load R Server's R rather than Microsoft R Open, like I'm able to do from the terminal using "R"?
Having tried all of this, and given that R Server does load in the console, my mind now goes to permissions. Could it be that by default the Data Science VM doesn't have the correct permissions to allow this?
I'm at a loss.
RStudio Server is installed on the Ubuntu DSVM, but the service is disabled by default as it does not support SSL. You can enable it with systemctl enable rstudio-server, then start it with systemctl start rstudio-server.
RStudio Server uses the same R as Microsoft R Server, but the .libPaths are different, which is why you cannot load the MRS packages. You will need to manually set the .libPaths so they match.
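A hedged sketch of aligning them from the shell, assuming the ML Server library tree sits under the 9.2.1 install path quoted in the question (both paths are assumptions; run .libPaths() inside a terminal "R" session to find the real library directory on your VM):
# append the assumed ML Server library path to the site-wide R startup profile
echo '.libPaths(c("/opt/microsoft/mlserver/9.2.1/libraries/RServer", .libPaths()))' | sudo tee -a /opt/microsoft/mlserver/9.2.1/runtime/R/etc/Rprofile.site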

Machine with Nvidia card: Issues with mount command on Ubuntu using hostname

OS: Ubuntu 14.04 64-bit
I have a strange problem occurring on machines with Nvidia cards running Ubuntu 14.04 64-bit.
The mount command works when using the IP address but fails when using the hostname.
Non-working command:
sudo -S mount -t cifs //share.test.com/LAB/Testing/Path1/Path2/Requisite/ -o username=blabla,password=blabla /mnt/src_shar_lnx
The error being:
mount: wrong fs type, bad option, bad superblock on //share.test.com/LAB/Testing/Path1/Path2/Requisite/ ,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so
The above command works seamlessly on other machines without Nvidia cards.
Working command:
sudo -S mount -t cifs //192.168.200.1/LAB/Testing/Path1/Path2/Requisite/ -o username=blabla,password=blabla /mnt/src_shar_lnx
Resolved: installing cifs-utils solved the problem.
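For reference, a minimal sketch of the fix (the package name is from the answer; the mount command is the one quoted in the question):
# install the CIFS userspace helpers (provides the mount.cifs helper program)
sudo apt-get install -y cifs-utils
# then retry the hostname-based mount
sudo mount -t cifs //share.test.com/LAB/Testing/Path1/Path2/Requisite/ -o username=blabla,password=blabla /mnt/src_shar_lnx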

Error installing Meteor on linux x86_64 chrome os

I am trying to install Meteor on the HP14 Chromebook. It is a Linux x86_64 Chrome OS system.
Each time I try to install it I run into errors.
The first time I tried to install it the installer just downloaded the Meteor preengine but never downloaded the tarball or installed the actual meteor application structure.
So, I decided to try as sudo.
sudo curl https://install.meteor.com | /bin/sh
This definitely installed it, because you can see it with ls:
chronos@localhost ~/projects $ ls /home/chronos/user/.meteor/
Now when I try to run meteor --version or meteor create myapp without sudo I get the following error.
chronos@localhost ~/projects $ meteor create myapp
'/home/chronos/user/.meteor' exists, but '/home/chronos/user/.meteor/meteor' is not executable.
Remove it and try again.
When I try to run sudo meteor --version or sudo meteor create myapp I get this error.
chronos@localhost ~/projects $ sudo meteor create myapp
mkdir: cannot create directory ‘/root/.meteor-install-tmp’: Read-only file system
Any ideas? I'm thinking I have to make that partition writable. I made partition 4 writable.
Put your Chromebook into dev mode:
http://www.chromium.org/chromium-os/developer-information-for-chrome-os-devices
Boot into dev mode.
Ctrl+Alt+T to open crosh, then:
shell
sudo su -
cd /usr/share/vboot/bin/
./make_dev_ssd.sh --remove_rootfs_verification --partitions 4
reboot
After rebooting
sudo su -
mount -o remount,rw /
mount -o remount,exec /mnt/stateful_partition
Write yourself a read/write script
sudo vim /sbin/rw
#!/bin/bash
echo "Making FS Read/Write"
sudo mount -o remount,rw /
sudo mount -o remount,exec /mnt/stateful_partition
sudo mount -i -o remount,exec /home/chronos/user
echo "You should now have full Read/Write access"
exit
Change permissions on script
sudo chmod a+x /sbin/rw
Run to set read/write root
sudo rw
Install Meteor as indicated on www.meteor.com via curl and meteor create works!
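A sketch of that final step, assuming the read/write script above is in place:
# set read/write root, then install Meteor and scaffold an app
sudo rw
curl https://install.meteor.com | /bin/sh
meteor create myapp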
Alternatively you can edit the chomeos_startup though that might not be the best idea. It is probably best to have read/write on demand as illustrated above.
cd /sbin
sudo vim chromeos_startup
Go to lines 51 and 58 and remove the noexec options from the mount command.
Down at the bottom of the script, above the note about ureadahead and below the if statement, add in:
mount -o remount,exec /mnt/stateful_partition
#uncomment this to mount root r/w on boot
mount -o remount,rw /
Again, editing chromeos_startup probably isn't the best idea unless you are so lazy you can't type sudo rw.
Enjoy.
This is super easy to fix!!
Just run this (or put it in .bashrc or .zshrc to make it permanent):
sudo mount -i -o remount,exec /home/chronos/user
Based on your question (you are using sudo) I assume you already have Dev Mode enabled, which is required for the above sudo command to work.
ChromeOS mounts the home folder using the noexec option by default, and this command remounts it with exec instead. And boom, Meteor will work just fine after that (and so will a bunch of other programs running out of your home folder).
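A minimal sketch of making it permanent by appending to your shell profile (Dev Mode assumed, as noted above):
# remount the home directory exec on every new shell (ChromeOS Dev Mode assumed)
echo 'sudo mount -i -o remount,exec /home/chronos/user' >> ~/.bashrc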
Original tip: https://github.com/dnschneid/crouton/issues/928

What does cifs_mount failed w/return code = -22 indicate

I am trying:
sudo mount -t cifs //<server>/<share> -o username=user@domain,password=**** /mnt/<mountpoint>
error message:
mount: wrong fs type, bad option, bad superblock on //server/share,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
The syslog has
CIFS VFS: cifs_mount failed w/return code = -22
I am able to mount the same share on another centos system. I can ping the server, mount point directory has been created.
I ran into this problem when using a hostname and solved it by using an IP address instead. E.g. use
mount -t cifs //192.168.1.15/share
rather than
mount -t cifs //servername/share
Another possible solution is to install cifs-utils.
Ah, the dreaded -22. Basically this seems to be used as a catchall for "something didn't work", although technically it's referred to as an invalid argument.
The client does IMHO a very poor job of telling you the actual problem. (This may not be its fault - it doesn't always have access to that information).
However -- have you checked the logs on the server/machine you are connecting to?
I was connecting to an OS X samba server, and learned from what I found in the logs there that it was necessary to specify additional options under -o as follows:
nounix,sec=ntlmssp
Among the things these settings enable are "allow long names", and "ignore UNIX filename endings"...sec is to specify security flags.
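A sketch of the full command with those options (the server, share, and mount point are placeholders, not from my setup):
# hedged example: mounting an OS X Samba share with the extra options
sudo mount -t cifs //macserver/share /mnt/share -o username=user,nounix,sec=ntlmssp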
Another possibility is that you're trying to access a filesystem of a type that mount.cifs can't actually handle.
For RHEL/CentOS, install the "cifs-utils" package.
Maybe move the target?
sudo mount -t cifs -o username=user@domain,password=**** //<server>/<share> /mnt/<mountpoint>
Or maybe this solution? (Ubuntu, Debian methods)
sudo apt-get install smbfs
Or for CentOS, RedHat, Fedora try:
sudo yum install samba-client
I had a similar issue on Ubuntu 12.04 with the "mount" package (version 2.20.1-1ubuntu3).
It happened when I was trying to mount the server share using its hostname rather than its IP.
Another way to solve the issue on Ubuntu was to install the cifs-utils package. That way I could also mount the samba share using the exact same command line (or fstab), but with the hostname.
sudo mount -t cifs //hostname/share -o username=user,password=pwd /mnt/share
Just did a clean install of Ubuntu 12.04 LTS and got this trying to hook up my Linux HTPC.
Solved it by running: sudo apt-get install cifs-utils then remounting it.
CIFS returns code "-22" in many cases (not only invalid arguments).
For me installing keyutils did the trick:
apt-get install keyutils
My distribution is "Ubuntu 14.04.2 LTS".
I figured this out by increasing the logging verbosity of CIFS:
echo 7 > /proc/fs/cifs/cifsFYI
# disable again via:
#echo 0 > /proc/fs/cifs/cifsFYI
Documentation on the bitmask ("7") for cifsFYI can be found here: https://www.kernel.org/doc/readme/Documentation-filesystems-cifs-README
After trying to mount once more dmesg included more helpful information:
Dec 7 12:34:20 pc1471 kernel: [ 5442.667417] CIFS VFS: dns_resolve_server_name_to_ip: unable to resolve:
Another possibly helpful link:
http://vlkan.com/blog/post/2015/01/08/smb-mount-troubleshoot/
I have Ubuntu Server 12.10 x64 installed as a VMware VM, running on OS X 10.8 (Mountain Lion).
On the Mac, in SYSTEM PREFERENCES > SHARING > FILE SHARING (on), I added a folder to share. For my tests, I created a new folder within my Public folder called "ubuntu".
In Ubuntu, I issued the following commands:
sudo mkdir /media/target
sudo mount.cifs //10.0.20.3/ubuntu /media/target -o username=davidallie,nounix,sec=ntlmssp,rw
Ubuntu prompted me for the password and, once entered, mounted the folder. I then ran:
df -H
which allowed me to verify the mounts and mount-points.
This has recently manifested thanks to a kernel bug in v5.18.8+, I was able to reproduce on v5.18.9 and v5.18.11.
Here is the relevant ticket on kernel.org, quote:
it appears that kernel 5.18.8 breaks cifs mounts on my machine. With
5.18.7, everything works fine. With 5.18.8, I am getting:
$ sudo mount /mnt/openmediavault/
mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel
log messages (dmesg)
The relevant /etc/fstab line is:
//odroidxu4.local/julian /mnt/openmediavault cifs credentials=/home/julas/.credentials,uid=julas,gid=julas,vers=3.1.1,nobrl,_netdev,auto 0 0
Here is the offending commit, and here is the fix, which applies cleanly to v5.18.11. The cause is, from what I understand, a bug in old versions of the samba server in the negotiation protocol.
If this is your issue, you can:
patch your kernel yourself;
downgrade to v5.18.7;
switch to an LTS kernel;
use the userspace (and also really slow and awful) gvfs-smb;
upgrade the samba version on your server; or
add vers=2.0 to the mount.cifs options in /etc/fstab.
Note that while I haven't tried the last one personally, the venerable @SEBiGEM has confirmed in the comments that it works for v5.18.10.
Note also that I didn't try upgrading samba on the server at all because I hate touching the box it's running on - every time I upgrade anything everything breaks. Doing so might also not be an option for those with NAS appliances.
As a personal sidenote, it's a little sad that so many different things can cause -22. My answer is correct, but very very niche and specific to this point in time. I imagine in a month it will simply be useless noise.
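For the vers=2.0 option, a sketch of the fstab line quoted above with the protocol version pinned (only vers= changes):
//odroidxu4.local/julian /mnt/openmediavault cifs credentials=/home/julas/.credentials,uid=julas,gid=julas,vers=2.0,nobrl,_netdev,auto 0 0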
Just experienced the problem on RHEL 5. You don't need to install the whole Samba suite, just samba-client and any dependencies.
Maybe it's too late, but the simplest solution is described in kernel bug 50631:
in the latest code, the unc mount parameter is mandatory. The modified command works for me:
sudo mount -t cifs //<server>/<share> -o username=user@domain,password=****,unc=\\\\<server>\\<share> /mnt/<mountpoint>
Try running the command:
$ modinfo cifs
filename: /lib/modules/3.2.0-60-virtual/kernel/fs/cifs/cifs.ko
version: 1.76
description: VFS to access servers complying with the SNIA CIFS Specification e.g. Samba and Windows
license: GPL
author: Steve French <sfrench@us.ibm.com>
srcversion: 9435BBC2F61D29F06643803
depends:
intree: Y
vermagic: 3.2.0-60-virtual SMP mod_unload modversions 686
parm: CIFSMaxBufSize:Network buffer size (not including header). Default: 16384 Range: 8192 to 130048 (int)
parm: cifs_min_rcv:Network buffers in pool. Default: 4 Range: 1 to 64 (int)
parm: cifs_min_small:Small network buffers in pool. Default: 30 Range: 2 to 256 (int)
parm: cifs_max_pending:Simultaneous requests to server. Default: 32767 Range: 2 to 32767. (int)
parm: echo_retries:Number of echo attempts before giving up and reconnecting server. Default: 5. 0 means never reconnect. (ushort)
parm: enable_oplocks:Enable or disable oplocks (bool). Default:y/Y/1 (bool)
If you get an error here, CIFS is not installed; check with your admin. I hope this helps.
Adding the option vers=3.0 to the mount command worked for me: sudo mount -t cifs -v <src> <dst> -o ...,vers=3.0,...
You need to install cifs-utils first , just as follows:
sudo yum install cifs-utils
I know this is old, but on older cifs-utils versions, you may have to add the following two lines to /etc/request-key.conf
create cifs.spnego * * /usr/sbin/cifs.upcall -c %k
create dns_resolver * * /usr/sbin/cifs.upcall %k
Workaround without installing additional packages (cifs-utils adds another 81 MB in Debian Stretch):
$ FILESERVER_IP=$(getent hosts myfileserver.com | awk '{ print $1 ; exit }')
$ sudo mount -t cifs //${FILESERVER_IP}/<share> -o username=user@domain,password=**** /mnt/<mountpoint>
Many answers here, but none worked for me.
Solution: my NAS didn't support SMB 3.0, which my mount was switching to automatically.
So I downgraded the SMB version:
mount -t cifs //192.168.0.2/Share -o rw,vers=1.0,username=*****,password=******* /media/1
It works.
