Has anyone had any luck installing DragonFly BSD 6.2 on Vultr (either as a Cloud Compute or High Frequency VM)?
For me, it can't find any disk space - see picture.
Thanks.
Most likely their VMs do not use a recognized disk controller. You can try logging in as root instead of installer and running:
pciconf -lv | grep SCSI
pciconf -lv | grep SATA
to see which SCSI or SATA controller is visible to DragonFly. Then you will need to load the driver for it using kldload and restart the installer.
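For example, if pciconf reports a virtio disk controller, the kldload step might look like the following. This is a sketch only; the module names are assumptions and have to match whatever controller pciconf actually shows.
# Hypothetical: load virtio disk drivers before restarting the installer
kldload virtio_pci
kldload virtio_blk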
I am trying to run badblocks on macOS High Sierra 10.13.6. I installed badblocks using MacPorts. I keep encountering errors when attempting to run it, and I am not sure how to even get badblocks running.
sudo badblocks -c 4096 -s -w -o /Users/mcbeav/Desktop/blocks.txt /dev/disk0s2
This keeps returning the error
badblocks: Resource busy while trying to determine device size
If I try
sudo badblocks -c 4096 -s -w -o /Users/mcbeav/Desktop/blocks.txt /dev/disk0
I get the error
badblocks: Value too large to be stored in data type invalid end block (7813820416): must be 32-bit value
Can anyone please help me out?
My recommendation is that you:
a) Run badblocks via the Mac OS X console in Recovery Mode
High Sierra (10.13+), together with APFS (the Apple File System), prevents certain operations on the disk. You'll have to be in Recovery Mode or turn off disk protection to do what you propose.
Turn off your Mac (Apple > Shut Down).
Hold down Command-R and press the Power button. ...
Wait for OS X to boot into the OS X Utilities window.
Choose Utilities > Terminal.
Enter csrutil disable.
Enter reboot.
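Once you're done testing, it's worth re-enabling System Integrity Protection the same way (boot back into Recovery Mode, then):
csrutil enable
reboot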
Mac OS X workaround:
My sense from past experience is that you are hitting the macOS security features (disk protection and app certification).
Booting Ubuntu from a USB stick and running the badblocks test that way is going to be easier, in my opinion.
I hope this points you in the right direction.
I had the same issue, but then I opened Disk Utility and pressed Eject on the physical device (make sure it's the hard drive and not the volume). This unmounts the volumes but keeps the device itself available, which you can check by running:
diskutil list
Now run the badblocks command again and it should work fine.
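If you prefer the command line, you can unmount every volume on the physical disk (without ejecting the device) with diskutil; the disk2 identifier below is an assumption, so check the output of diskutil list first:
sudo diskutil unmountDisk /dev/disk2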
I was able to get badblocks working on OS X 10.15 by:
1) disabling csrutil, as explained here
2) unmounting the drive I wanted to test via Disk Utility
3) running badblocks: sudo badblocks -b 4096 -w -s -v "$MOUNT_POINT" > "badblocks.info", where MOUNT_POINT=/dev/disk2. (The -b 4096 block size also keeps the block count within the 32-bit limit that triggers the "Value too large" error above.)
I installed badblocks via brew install e2fsprogs, as described here
Tangentially, I also did this in order to query the USB-connected drive via smartctl.
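In case it's useful: querying a USB-connected drive with smartctl usually requires the SAT device type. A sketch, where the disk2 identifier is an assumption:
# Read SMART data through a USB-to-SATA bridge
sudo smartctl -a -d sat /dev/disk2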
I just migrated from RHEL 6 to RHEL 7. I used to call the following in order to list active NFS mounts:
/etc/init.d/netfs status
That would provide this kind of output:
Configured NFS mountpoints:
/data
Active NFS mountpoints:
/data
Since RHEL 7 doesn't use this script anymore, could you please let me know what the equivalent would be? (if there is one)
Thanks!
Run the following command:
mount -l | grep nfs
Another way:
cat /proc/mounts | grep nfs
This command also gives more information about the mount points:
nfsstat
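On RHEL 7 you can also use findmnt from util-linux to filter by filesystem type. A minimal sketch:
# List only NFS mounts (covers both v3 and v4 entries)
findmnt -t nfs,nfs4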
I have a small Meteor.js app that suddenly started using 100% CPU. I found some blog posts saying that maybe the oplog is causing the high CPU usage, so I disabled it using:
meteor add disable-oplog
but it did not change anything. I'm facing this issue in the development environment (running the app through the meteor command) and in the deployment environment (running the app remotely using mup).
Development environment: Ubuntu 14.04, 2 GB, 64-bit, Meteor 1.3, Node.js 0.10.45.
Deployment environment (droplet): Ubuntu 14.04, 512 MB, 64-bit, Meteor 1.3, Node.js 0.10.45.
Installed packages: (see screenshot)
Monitoring process: (see screenshot)
I've run into this problem before, but only when running too many Meteor development environments on one production server for too long.
The fix was the swap configuration I put in place. Meteor apps can use a lot of memory, and 512 MB can be too little. The server was swapping all the time, which oddly showed up as a CPU spike. Once I put a better swap configuration in place, all was fine.
This was on an Ubuntu server (I can't recall whether it was 14 or 16) on DigitalOcean hosting; they have swap disabled by default, and the first swap setup I put in place was apparently bad.
It may not be likely that this is the answer for you, but I'm writing it up as it's certainly possible and can be very hard to figure out.
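For reference, a minimal swap-file setup on Ubuntu might look like the following; the 1 GB size is an assumption, so tune it to the droplet:
# Create and enable a 1 GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist it across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab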
Maybe you can try using a CPU limiter. Here's a bash script I created:
https://gist.github.com/cortezcristian/5ab4fdddcc573972d44873f1e97a2b88
You'll need to install cpulimit first:
sudo apt-get install cpulimit
# Write the PID of the Meteor node process to a file
ps ax | grep node | grep meteor | grep -v grep | awk '{print $1}' > /tmp/my-app.pid
# Limit that process to 77% CPU
cpulimit --pid=$(cat /tmp/my-app.pid) --limit=77
After that you can choose the limit you want (e.g. 50 or 100) with the --limit flag.
I have SSH access to my application. Is there any UNIX command I can use to get the names of the applications installed?
Thanks!
Actually, I don't understand what exactly you are referring to when you talk about "my application" or "applications installed".
1) If you want to know what applications are deployed, for example in the application server of your instance (for example Tomcat 7), you can take a look here: List Currently Deployed Applications
2) Or maybe you are looking for applications installed at the OS level.
Depending on which OS is running, the command may differ. For example, for Red Hat Enterprise Linux / Fedora / SUSE Linux / CentOS:
Under Red Hat/Fedora Linux:
$ rpm -qa | grep {package-name}
For example, to find out whether the mutt package is installed or not:
$ rpm -qa | grep mutt
Output:
mutt-1.4.1-10
If you don't get any output (package name along with version), it means the package is not installed at all. You can list all installed packages with the following command:
$ rpm -qa
$ rpm -qa | less
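For completeness, on Debian/Ubuntu systems the dpkg equivalents look like this:
$ dpkg -l | less
$ dpkg -l | grep mutt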
3) Another useful command is ps. You can check what is running with the ps command.
Type the following ps command to display all running processes:
ps aux | less
Where,
A: select all processes
a: select all processes on a terminal, including those of other users
x: select processes without controlling ttys
Task: see every process on the system
ps -A
ps -e
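For example, a quick sketch of checking whether a particular application is running (tomcat here is just a placeholder name):
ps aux | grep -i tomcat | grep -v grep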
What is the command to find out the L2 cache size of the CPU on the Solaris operating system, running on SPARC and x86 processors?
I don't have access to a Solaris box to test this out, but you might be able to achieve this using prtpicl.
prtpicl -v -c cpu | grep l2-cache-size
For a more portable option, check out the lstopo command from the hwloc project.
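A minimal sketch of using it non-graphically, assuming hwloc is installed:
# Print the machine topology as text, including cache sizes per level
lstopo --of console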
On SPARC, just run fpversion (/product/SUNWspro/bin/fpversion) and it will print the -xcache code-generation options that show the L1 and L2 cache sizes. Then read http://docs.oracle.com/cd/E19205-01/819-5267/bkazt/index.html to understand the output.