riak-admin fails on OS X 10.8.5

I'm trying to install Riak on OS X 10.8.5, but the command riak-admin test always fails, and I can't find a solution for it.
Running sudo riak-admin test doesn't help either.
I have installed Riak (1.4.2) through Homebrew.
>riak start
!!!!
!!!! WARNING: ulimit -n is 256; 4096 is the recommended minimum.
!!!!
>riak ping
pong
>riak-admin test
Failed to write test value: {error,timeout}%
I have also installed the Riak (1.4.2) precompiled tarball using curl:
>curl -O http://s3.amazonaws.com/downloads.basho.com/riak/1.4/1.4.2/osx/10.8/riak-1.4.2-OSX-x86_64.tar.gz
>tar xzvf riak-1.4.2-osx-x86_64.tar.gz
>cd riak-1.4.2
>bin/riak start
!!!!
!!!! WARNING: ulimit -n is 256; 4096 is the recommended minimum.
!!!!
>bin/riak ping
pong
>bin/riak-admin test
Failed to write test value: {error,timeout}%
I have also installed the Riak (1.4.1) precompiled tarball using curl:
>curl -O http://s3.amazonaws.com/downloads.basho.com/riak/1.4/1.4.1/osx/10.8/riak-1.4.1-OSX-x86_64.tar.gz
>tar xzvf riak-1.4.1-osx-x86_64.tar.gz
>cd riak-1.4.1
>bin/riak start
!!!!
!!!! WARNING: ulimit -n is 256; 4096 is the recommended minimum.
!!!!
>bin/riak ping
pong
>bin/riak-admin test
Failed to read test value: {error,{insufficient_vnodes,0,need,1}}%

Solution
Following this procedure http://docs.basho.com/riak/... solved my issue.
It has to do with the open files limit on Mac OS X.
Before
To check the current limits on your Mac OS X system, run:
>launchctl limit maxfiles
maxfiles 256 unlimited
Edit (or create) /etc/launchd.conf
Edit (or create) /etc/launchd.conf and increase the limits. Add lines
that look like the following (using values appropriate to your
environment):
limit maxfiles 16384 32768
Restart the system
Save the file, and restart the system for the new limits to take
effect. After restarting, verify the new limits with the launchctl
limit command:
>launchctl limit maxfiles
maxfiles 16384 32768
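If you only want the test to pass without rebooting, a session-only workaround is to raise the limit in the current shell before starting Riak (a sketch, assuming a bash-like shell and that the hard limit permits it):
>riak stop
>ulimit -n 4096
>riak start
>riak-admin test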

Related

badblocks: Resource busy while trying to determine device size

I am trying to run badblocks on macOS High Sierra 10.13.6. I installed badblocks using MacPorts. I keep encountering errors when attempting to run it, and I am not sure how to even get badblocks running.
sudo badblocks -c 4096 -s -w -o /Users/mcbeav/Desktop/blocks.txt /dev/disk0s2
This keeps returning the error
badblocks: Resource busy while trying to determine device size
If I try
sudo badblocks -c 4096 -s -w -o /Users/mcbeav/Desktop/blocks.txt /dev/disk0
I get the error
badblocks: Value too large to be stored in data type invalid end block (7813820416): must be 32-bit value
Can anyone please help me out?
My recommendation is that you:
a) Run badblocks via the Mac OS X console in Recovery Mode
High Sierra (10.13+), together with APFS (the Apple file system), prevents certain operations on disk. You'll have to be in Recovery Mode or turn off disk protection to do what you propose.
Turn off your Mac (Apple > Shut Down).
Hold down Command-R and press the Power button. ...
Wait for OS X to boot into the OS X Utilities window.
Choose Utilities > Terminal.
Enter csrutil disable.
Enter reboot.
Mac OS X Workaround:
My sense from past experience is that you are hitting the macOS security features (disk protection and app certification).
Booting to Ubuntu (from a USB stick) and running the badblocks test that way is going to be easier, in my opinion.
I hope this points you in the right direction.
I had the same issue. But then I opened Disk Utility and pressed Eject on the physical device (make sure it's the hard drive and not the volume). This unmounts the volumes but keeps the device itself available, which you can check by running:
diskutil list
Now run the badblocks command again and it should work fine.
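The same unmount can be done from the terminal (a sketch; /dev/disk2 is a placeholder, check diskutil list for the right identifier, and remember that -w is destructive):
diskutil list
diskutil unmountDisk /dev/disk2
sudo badblocks -c 4096 -s -w -o ~/Desktop/blocks.txt /dev/disk2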
I was able to get badblocks working for OSX 10.15 by
1) disabling csrutil, as explained here
2) unmounting the badblock-desired drive via Disk Utility
3) running badblocks: sudo badblocks -b 4096 -w -s -v "$MOUNT_POINT" > "badblocks.info", where MOUNT_POINT=/dev/disk2
I installed badblocks via brew install e2fsprogs, as described here
Tangentially, I also did this in order to query the USB-connected drive via smartctl.
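Putting those steps together, the 10.15 sequence looked roughly like this (a sketch; the keg-only e2fsprogs path resolved via brew --prefix and the /dev/disk2 device are assumptions, adjust to your setup):
brew install e2fsprogs
diskutil unmountDisk /dev/disk2
sudo "$(brew --prefix e2fsprogs)/sbin/badblocks" -b 4096 -w -s -v /dev/disk2 > badblocks.info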

Mounting VMDK disk image

I have a single vmware disk image file with vmdk extension
I am trying to mount this and explore all of the partitions (including hidden ones).
I've tried to follow several guides, such as : http://forums.opensuse.org/showthread.php/469942-mounting-virtual-box-machine-images-host
I'm able to mount the image using vdfuse
vdfuse -w -f windows.vmdk /mnt/
After this I can see one partition and an entire disk exposed
# ll /mnt/
total 41942016
-r-------- 1 te users 21474836480 Feb 28 14:16 EntireDisk
-r-------- 1 te users 1569718272 Feb 28 14:16 Partition1
Continuing with the guide I try to mount either EntireDisk or Partition1 using
mount -o loop,ro /mnt/Partition1 mnt2/
But that gives me the error 'mount: you must specify a filesystem type'
In trying to find the correct type I tried
dd if=/mnt/EntireDisk | file -
which outputs a ton of information but of note is:
/dev/stdin: x86 boot sector; partition 1: ....... FATs ....
So I tried to mount it as vfat, but that gave me
mount: wrong fs type, bad option, bad superblock ...etc
What am I doing wrong?
For newer Linux systems, you can use guestmount to mount the third partition within a VMDK image:
guestmount -a xyz.vmdk -m /dev/sda3 --ro /mnt/vmdk
Alternatively, to autodetect and mount an image (less reliable), you can try:
guestmount -a xyz.vmdk -i --ro /mnt/vmdk
Do note that the flag --ro simply mounts the image as read-only; to mount the image as read-write, just replace it with the flag --rw.
Installation
guestmount is contained in the following packages, per distro (install commands are sketched after the list):
Ubuntu: libguestfs-tools
OpenSuse: guestfs-tools
CentOS / Fedora: libguestfs-tools-c
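For example (a sketch; commands assume each distro's standard package manager):
sudo apt-get install libguestfs-tools      # Ubuntu
sudo zypper install guestfs-tools          # openSUSE
sudo dnf install libguestfs-tools-c        # CentOS / Fedora (use yum on older releases)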
Troubleshooting
error: could not create appliance through libvirt
$ guestmount -a file.vmdk -i --ro /mnt/guest
libguestfs: error: could not create appliance through libvirt.
Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct
Original error from libvirt: Cannot access backing file '/path/to/file.vmdk' of storage file '/tmp/libguestfssF6WKX/overlay1.qcow2' (as uid:107, gid:107): Permission denied [code=38 int1=13]
Solution: use LIBGUESTFS_BACKEND=direct, as suggested:
LIBGUESTFS_BACKEND=direct guestmount -a file.vmdk -i --ro /mnt/guest
fusermount: user has no write access to mountpoint
LIBGUESTFS_BACKEND=direct guestmount -a file.vmdk -i --ro /mnt/guest/
fusermount: user has no write access to mountpoint /mnt/guest
libguestfs: error: fuse_mount failed: /mnt/guest/, see error messages above
Solution: use sudo, or change file permissions on the mountpoint
You can also use qemu:
For .vdi disks
sudo modprobe nbd
sudo qemu-nbd -c /dev/nbd1 ./linux_box/VM/image.vdi
If these tools are not installed, you can install them on Ubuntu with:
sudo apt install qemu-utils
and then mount it with:
mount /dev/nbd1p1 /mnt
For .vmdk disks
sudo modprobe nbd
sudo qemu-nbd -r -c /dev/nbd1 ./linux_box/VM/image.vmdk
Notice that I use the -r option; VMDK version 3 images must be mounted read-only for qemu to handle them.
I then mount it with:
mount /dev/nbd1p1 /mnt
I use nbd1, because nbd0 sometimes gives: 'mount: special device /dev/nbd0p1 does not exist'
For .ova disks
tar -tf image.ova
tar -xvf image.ova
The above extracts the .vmdk disk, which you can then mount using one of the methods above.
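In other words, the .ova is just a tar archive containing the .vmdk; a sketch of the full chain (file names are placeholders):
tar -xvf image.ova                           # extracts e.g. image-disk1.vmdk
sudo modprobe nbd
sudo qemu-nbd -r -c /dev/nbd1 image-disk1.vmdk
sudo mount /dev/nbd1p1 /mnt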
Install affuse, then mount using it.
affuse /path/file.vmdk /mnt/vmdk
The raw disk image is now found under /mnt/vmdk.
Check its sector size:
fdisk -l /mnt/vmdk/file.vmdk.raw
# example
Disk file.vmdk.raw: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000da525
Device Boot Start End Sectors Size Id Type
/mnt/vmdk/file.vmdk.raw1 * 2048 41943039 41940992 20G 83 Linux
Multiply sector size and start sector. In the example it would be 2048*512:
echo '2048*512' | bc
1048576
Mount the raw file using that offset:
mount -o ro,loop,offset=1048576 /mnt/vmdk/file.vmdk.raw /mnt/vmdisk
The disk should now be mounted and readable on /mnt/vmdisk.
Here is an answer from commandlinefu.com that worked for me:
kpartx -av <image-flat.vmdk>; mount /dev/mapper/loop0p1 /mnt/vmdk
You can also activate LVM volumes in the image by running
vgchange -a y
and then you can mount the LV inside the image.
To unmount the image, umount the partition/LV, deactivate the VG for the image
vgchange -a n <volume_group>
then run
kpartx -dv <image-flat.vmdk>
to remove the partition mappings.
You can take a look at this article for a download link for the VMware Virtual Disk Development Kit (VDDK). Once downloaded and installed:
vmware-mount -p path_to_vmdk will show the partitions inside the VMDK file. For example:
Nr Start Size Type Id Sytem
-- ---------- ---------- ---- -- ------------------------
1 2048 461371392 BIOS 83 Linux
Then just do:
sudo vmware-mount path_to_vmdk 1 /mnt/mount_point
I tried guestmount, but it is very, very slow. Underneath it creates a virtual machine, uses KVM and so on. Crazy stuff, slow as hell.
Have you got the software package for ntfs?
Try
apt-get install ntfs-3g
on Debian-based systems.

How to use ulimit in a shell environment to limit memory?

I'm trying to set the maximum memory usage of a command with ulimit.
My command is currently:
ulimit -m 1024 -v 0; python file.py
The Python process uses over 5 MB of memory. What have I done wrong with the ulimit command?
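For reference, a minimal sketch of how such a cap is usually expressed (assuming a bash-like shell, where the -v limit is given in kilobytes and -m is not enforced by most modern kernels):
( ulimit -v 1048576; python file.py )    # cap virtual memory at ~1 GiB, only for this subshell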

Why OpenMPI uses a different server given a different -n setting?

I am testing out OpenMPI, provided and compiled by another user (I am using soft links to his directories for bin, include, etc. - all the mandatory directories), but I ran into this weird thing:
First of all, if I run mpirun with -n <= 10, it runs fine, as shown below. testrunmpi.py simply prints out "run." from each core.
# I am in serverA.
bash-3.2$ /home/karl/bin/mpirun -n 10 ./testrunmpi.py
run.
run.
run.
run.
run.
run.
run.
run.
run.
run.
However, when I try running with -n greater than 10, I run into this:
bash-3.2$ /home/karl/bin/mpirun -n 24 ./testrunmpi.py
karl@serverB's password: Could not chdir to home directory /home/karl: No such file or directory
bash: /home/karl/bin/orted: No such file or directory
--------------------------------------------------------------------------
A daemon (pid 19203) died unexpectedly with status 127 while attempting
to launch so we are aborting.
There may be more information reported by the environment (see above).
This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
--------------------------------------------------------------------------
bash-3.2$
bash-3.2$
Permission denied, please try again.
karl@serverB's password:
Permission denied, please try again.
karl@serverB's password:
I see that the work is dispatched to serverB while I am on serverA. I don't have an account on serverB. But if I invoke mpirun with -n <= 10, the work stays on serverA.
This is strange, so I checked /home/karl/etc/openmpi-default-hostfile and tried setting the following:
serverA slots=24 max_slots=24
serverB slots=0 max_slots=32
But the problem persists and it still gives the same error message as above. What must I do to have my program run on serverA only?
The default hostfile in Open MPI is system-wide, i.e. its location is determined while the library is being built and installed and there is no user-specific version of it. The actual location can be obtained by running the ompi_info command like this:
$ ompi_info --param orte orte | grep orte_default_hostfile
MCA orte: parameter "orte_default_hostfile" (current value: <LOOK HERE>, data source: default value)
You can override the list of hosts in several different ways. First, you can provide your own hostfile via the -hostfile option to mpirun. If so, you don't have to put hosts with zero slots inside it - simply omit machines that you have no access to. For example:
localhost slots=10 max_slots=10
serverA slots=24 max_slots=24
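You would then point mpirun at that file explicitly (a sketch; the hostfile path is just an example):
mpirun -hostfile ~/my_hostfile -n 24 ./testrunmpi.py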
You can also change the path to the default hostfile by setting the orte_default_hostfile MCA parameter:
$ mpirun --mca orte_default_hostfile /path/to/your/hostfile -n 10 executable
Instead of passing the --mca option each time, you can set the value in an exported environment variable called OMPI_MCA_orte_default_hostfile. This could be set in your shell's dot-rc file, e.g. in .bashrc if using Bash, as sketched below.
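For example (a sketch; the hostfile path is a placeholder):
# e.g. in ~/.bashrc
export OMPI_MCA_orte_default_hostfile=$HOME/.openmpi/hostfile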
You can also specify the list of nodes directly via the -H (or -host) option.
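For instance, to keep the job on serverA only (a sketch; depending on your Open MPI version you may need to list the host once per desired slot or explicitly allow oversubscription):
mpirun -H serverA -n 10 ./testrunmpi.py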

What does cifs_mount failed w/return code = -22 indicate

I am trying
sudo mount -t cifs //<server>/<share> -o username=user@domain,password=**** /mnt/<mountpoint>
error message:
mount: wrong fs type, bad option, bad superblock on //server/share,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
The syslog has
CIFS VFS: cifs_mount failed w/return code = -22
I am able to mount the same share on another CentOS system. I can ping the server, and the mount point directory has been created.
I ran into this problem when using a host name and solved it by using an IP address. E.g.:
use
mount -t cifs //192.168.1.15/share
rather than
mount -t cifs //servername/share
Another possible solution is to install cifs-utils.
Ah, the dreaded -22. Basically this seems to be used as a catchall for "something didn't work", although technically it's referred to as an invalid argument.
The client does IMHO a very poor job of telling you the actual problem. (This may not be its fault - it doesn't always have access to that information).
However -- have you checked the logs on the server/machine you are connecting to?
I was connecting to an OS X samba server, and learned from what I found in the logs there that it was necessary to specify additional options under -o as follows:
nounix,sec=ntlmssp
Among the things these settings enable are "allow long names" and "ignore UNIX filename endings"; sec specifies the security flags.
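Put together, a mount command using those options would look something like this (a sketch; server, share, and credentials are placeholders):
sudo mount -t cifs //server/share /mnt/mountpoint -o username=user,password=****,nounix,sec=ntlmssp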
Another possibility is that you're trying to access a filesystem of a type that mount.cifs can't actually handle.
For RHEL/CentOS, install the package cifs-utils.
Maybe move the target?
sudo mount -t cifs -o username=user@domain,password=**** //<server>/<share> /mnt/<mountpoint>
Or maybe this solution? (Ubuntu, Debian methods)
sudo apt-get install smbfs
Or for CentOS, RedHat, Fedora try:
sudo yum install samba-client
I had a similar issue on Ubuntu 12.04 with the "mount" package (version 2.20.1-1ubuntu3).
It happened when I was trying to mount the server share using its hostname rather than its IP.
Another way to solve the issue on Ubuntu was to install the cifs-utils package. That way I could also mount the samba share using the exact same command line (or fstab) but with hostname.
sudo mount -t cifs //hostname/share -o username=user,password=pwd /mnt/share
Just did a clean install of Ubuntu 12.04 LTS and got this trying to hook up my Linux HTPC.
Solved it by running: sudo apt-get install cifs-utils then remounting it.
CIFS returns code "-22" in many cases (not only invalid arguments).
For me installing keyutils did the trick:
apt-get install keyutils
My distribution is "Ubuntu 14.04.2 LTS".
I figured this out by increasing the logging verbosity of CIFS:
echo 7 > /proc/fs/cifs/cifsFYI
# disable again via:
#echo 0 > /proc/fs/cifs/cifsFYI
Documentation on the bitmask ("7") for cifsFYI can be found here: https://www.kernel.org/doc/readme/Documentation-filesystems-cifs-README
After trying to mount once more dmesg included more helpful information:
Dec 7 12:34:20 pc1471 kernel: [ 5442.667417] CIFS VFS: dns_resolve_server_name_to_ip: unable to resolve:
Another maybe helpful link:
http://vlkan.com/blog/post/2015/01/08/smb-mount-troubleshoot/
I have Ubuntu Server 12.10 x64 installed as a VMware VM, running on OS X 10.8 (Mountain Lion).
On the Mac, in System Preferences > Sharing > File Sharing (turned on), I added a folder to share. For my tests, I created a new folder within my Public folder called "ubuntu".
In Ubuntu, I issued the following commands:
sudo mkdir /media/target
sudo mount.cifs //10.0.20.3/ubuntu /media/target -o username=davidallie,nounix,sec=ntlmssp,rw
Ubuntu prompted me for the password and, once entered, mounted the folder. I then ran:
df -H
which allowed me to verify the mounts and mount-points.
This has recently started manifesting thanks to a kernel bug in v5.18.8+; I was able to reproduce it on v5.18.9 and v5.18.11.
Here is the relevant ticket on kernel.org, quote:
it appears that kernel 5.18.8 breaks cifs mounts on my machine. With
5.18.7, everything works fine. With 5.18.8, I am getting:
$ sudo mount /mnt/openmediavault/
mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel
log messages (dmesg)
The relevant /etc/fstab line is:
//odroidxu4.local/julian /mnt/openmediavault cifs
credentials=/home/julas/.credentials,uid=julas,gid=julas,vers=3.1.1,nobrl,_netdev,auto
0 0
Here is the offending commit, and here is the fix, which applies cleanly to v5.18.11. The cause is, from what I understand, a bug in how old versions of the Samba server handle the negotiation protocol.
If this is your issue, you can:
patch your kernel yourself;
downgrade to v5.18.7;
switch to an LTS kernel;
use the userspace (and also really slow and awful) gvfs-smb;
upgrade the samba version on your server; or
add vers=2.0 to the mount.cifs options in /etc/fstab.
Note that while I haven't tried the last one personally, the venerable @SEBiGEM has confirmed in the comments that it works for v5.18.10.
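For reference, that last option would mean setting vers=2.0 in place of vers=3.1.1 in the fstab line quoted above (a sketch based on that report, untested by me):
//odroidxu4.local/julian /mnt/openmediavault cifs credentials=/home/julas/.credentials,uid=julas,gid=julas,vers=2.0,nobrl,_netdev,auto 0 0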
Note also that I didn't try upgrading samba on the server at all because I hate touching the box it's running on - every time I upgrade anything everything breaks. Doing so might also not be an option for those with NAS appliances.
As a personal sidenote, it's a little sad that so many different things can cause -22. My answer is correct, but very very niche and specific to this point in time. I imagine in a month it will simply be useless noise.
Just experienced the problem on RHEL 5. You don't need to install the whole Samba suite, just samba-client and any dependencies.
Maybe it's too late, but the simplest solution is described in kernel bug 50631:
in the latest code, the unc mount parameter is mandatory. The modified command works for me:
sudo mount -t cifs //<server>/<share> -o username=user@domain,password=****,unc=\\\\<server>\\<share> /mnt/<mountpoint>
Try running the command:
$ modinfo cifs
filename: /lib/modules/3.2.0-60-virtual/kernel/fs/cifs/cifs.ko
version: 1.76
description: VFS to access servers complying with the SNIA CIFS Specification e.g. Samba and Windows
license: GPL
author: Steve French <sfrench@us.ibm.com>
srcversion: 9435BBC2F61D29F06643803
depends:
intree: Y
vermagic: 3.2.0-60-virtual SMP mod_unload modversions 686
parm: CIFSMaxBufSize:Network buffer size (not including header). Default: 16384 Range: 8192 to 130048 (int)
parm: cifs_min_rcv:Network buffers in pool. Default: 4 Range: 1 to 64 (int)
parm: cifs_min_small:Small network buffers in pool. Default: 30 Range: 2 to 256 (int)
parm: cifs_max_pending:Simultaneous requests to server. Default: 32767 Range: 2 to 32767. (int)
parm: echo_retries:Number of echo attempts before giving up and reconnecting server. Default: 5. 0 means never reconnect. (ushort)
parm: enable_oplocks:Enable or disable oplocks (bool). Default:y/Y/1 (bool)
If you get an error, the cifs module is not installed; check with your admin. I hope this helps.
Adding the option vers=3.0 to the mount command worked for me: sudo mount -t cifs -v <src> <dst> -o ...,vers=3.0,...
You need to install cifs-utils first, as follows:
sudo yum install cifs-utils
I know this is old, but on older cifs-utils versions, you may have to add the following two lines to /etc/request-key.conf
create cifs.spnego * * /usr/sbin/cifs.upcall -c %k
create dns_resolver * * /usr/sbin/cifs.upcall %k
Workaround without installing additional packages (cifs-utils adds another 81 MB in Debian Stretch):
$ FILESERVER_IP=$(getent hosts myfileserver.com | awk '{ print $1 ; exit }')
$ sudo mount -t cifs //${FILESERVER_IP}/<share> -o username=user@domain,password=**** /mnt/<mountpoint>
Many answers here, but none of them worked for me.
Solution:
My NAS didn't support SMB 3.0, which my mount was switching to automatically.
So I downgraded the SMB version:
mount -t cifs //192.168.0.2/Share -o rw,vers=1.0,username=*****,password=******* /media/1
It works.
