I have a small Meteor.js app and suddenly it started using 100% CPU. I found some blog posts saying the oplog might be causing the high CPU usage, so I disabled it using:
meteor add disable-oplog
but it did not change anything. I'm facing this issue in the development environment (running the app through the meteor command) and in the deployment environment (running the app remotely using mup).
Development environment: Ubuntu 14.04 64-bit, 2 GB RAM, Meteor 1.3, Node.js 0.10.45.
Deployment environment (droplet): Ubuntu 14.04 64-bit, 512 MB RAM, Meteor 1.3, Node.js 0.10.45.
(Screenshots of the installed packages and the monitoring process were attached.)
I've run into this problem before, but only when running too many Meteor development environments on one production server for too long.
In my case it came down to the swap setup. Meteor apps can use a lot of memory, and 512 MB can be too little. The server was swapping all the time, which oddly showed up as a CPU spike. Once I put a better swap configuration in place, all was fine.
This was on an Ubuntu server (I can't recall whether it was 14 or 16) on DigitalOcean hosting; they have swap disabled by default, and the first swap setup I put in place was apparently bad.
This may not be the answer for you, but I'm writing it up because it's certainly possible, and it can be very hard to figure out.
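For reference, here is a minimal sketch of how swap can be added on such a droplet (the 1 GB size, /swapfile path, and swappiness value are examples, not necessarily what I used back then):
# create and enable a 1 GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# only swap under real memory pressure
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf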
Maybe you can try using a CPU limiter. Here's a bash script I created:
https://gist.github.com/cortezcristian/5ab4fdddcc573972d44873f1e97a2b88
You'll need to install cpulimit first:
sudo apt-get install cpulimit
ps ax | grep node | grep meteor | grep -v grep | awk '{print $1}' > /tmp/my-app.pid
cpulimit --pid=$(cat /tmp/my-app.pid) --limit=77
After that you can choose whatever limit you want, e.g. 50 or 100, with the --limit flag.
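Note that cpulimit exits when the target process does, so if the node process gets restarted you have to re-attach. A rough sketch of automating that (the pgrep pattern is an assumption about how the process shows up in ps; adjust it to your setup):
#!/bin/bash
# re-attach cpulimit whenever a matching node/meteor process appears
while :
do
    pid=$(pgrep -f 'node.*meteor' | head -n 1)
    # cpulimit blocks until the target process exits
    [ -n "$pid" ] && cpulimit --pid="$pid" --limit=77
    sleep 5
done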
Has anyone had any luck installing DragonFly BSD 6.2 on Vultr (either as a Cloud Compute or High Frequency VM)?
For me, the installer can't find any disks - see picture.
Thanks.
Most likely their VMs do not use a recognized controller. You can try logging in as root instead of installer and executing:
pciconf -lv | grep SCSI
pciconf -lv | grep SATA
to show which SCSI or SATA controller is visible to DragonFly. Then you will need to load the driver for it using kldload and restart the installer.
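For example, if pciconf shows a VirtIO block device (common on KVM-based hosts like Vultr), something along these lines might work; the module names here are assumptions, so check the man pages on your install media:
# load a disk controller driver by hand, then restart the installer
kldload virtio_blk
# or, for an emulated AHCI SATA controller:
kldload ahci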
I found out that I have quite a lot of Symfony local web server workers registered (around ~35), and the number keeps growing. I usually just start the server with symfony serve and then kill it (Ctrl + \) when it's no longer needed. Apparently killing it leaves a worker behind, as seen in symfony server:status. Running symfony serve again just creates a new worker.
symfony server:status output:
Local Web Server
Not Running
Workers
PID 6327: /usr/bin/php7.2 -S 127.0.0.1:43653 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 24596: /usr/bin/php7.2 -S 127.0.0.1:37789 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 6575: /usr/bin/php7.4 -S 127.0.0.1:42505 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 41550: /usr/bin/php7.4 -S 127.0.0.1:36313 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
...
Environment Variables
None
So my questions regarding this:
#1: Is it possible to quickly kill the server? I assume symfony server:stop is the more correct way, but that requires an additional console window and entering the command.
#2: How do I kill those workers registered from previous sessions? Trying e.g. kill 6327 says that there's no such process, and they're not gone after a system restart either.
Those extra workers are bothering me because the server log output in the console is duplicated for each one of them. So right now each request to the server produces around 3k lines of log output in the console, which makes it pretty useless.
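A quick way to stop the server and sweep up any workers that are real processes (a sketch; the pkill pattern just matches the router script path from the status output above, and it won't help if the entries are only stale bookkeeping):
# stop the tracked server from the project directory
symfony server:stop
# kill any leftover PHP workers still running the symfony router script
pkill -f 'symfony/php/.*-router\.php'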
I have the same problem after upgrading to Symfony CLI version v4.19.0...
My (very) bad workaround:
rm /home/myusername/.symfony/var/83247c3521c3ac3990bf3f823ef473db0a9445e1/*
Edit: this answer is not accurate, as hinted at by @CrSrr's answer above.
The symfony command adds data to both the ./log and ./var directories. Deleting entries in only one of those does not remove the appearance of non-existent workers in the project directory. I was fooled by checking the status in a directory where server:start had never been run.
A bug report is on file with symfony here.
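Based on that, a more complete version of the workaround would clear both directories (a sketch, assuming the default ~/.symfony layout from the answers above):
# stop the server first if it's still tracked
symfony server:stop
# remove stale worker bookkeeping from both var/ and log/
rm -rf ~/.symfony/var/* ~/.symfony/log/*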
Just faced a similar issue. The PIDs were nowhere to be found.
PS G:\workspace\joined> symfony server:status
Local Web Server
Not Running
Workers
PID 7732: C:\php\php-cgi.exe -b 63801 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 19324: C:\php\php-cgi.exe -b 62927 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 17968: C:\php\php-cgi.exe -b 50197 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 14040: C:\php\php-cgi.exe -b 55075 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
Environment Variables
None
On Windows, the log files are kept in %USERPROFILE%\.symfony. There's most likely a similar location in your home directory. Deleting all the contents of that directory allowed a new Windows Terminal session to show:
PS G:\workspace> symfony server:status
Local Web Server
Not Running
Workers
No Workers
Environment Variables
None
Do symfony server:stop to stop the server, then run symfony serve again to start it. Good luck.
I am trying to run badblocks on macOS High Sierra 10.13.6. I installed badblocks using MacPorts, but I keep encountering errors when attempting to run it, and I am not sure how to even get it running.
sudo badblocks -c 4096 -s -w -o /Users/mcbeav/Desktop/blocks.txt /dev/disk0s2
This keeps returning the error
badblocks: Resource busy while trying to determine device size
If I try
sudo badblocks -c 4096 -s -w -o /Users/mcbeav/Desktop/blocks.txt /dev/disk0
I get the error
badblocks: Value too large to be stored in data type invalid end block (7813820416): must be 32-bit value
Can anyone please help me out?
My recommendation is that you:
a) Run badblocks via the Mac OS X console in Recovery Mode
High Sierra (10.13+), together with APFS (the Apple file system), prevents certain operations on disk. You'll have to be in Recovery Mode or turn off disk protection to do what you propose.
Turn off your Mac (Apple > Shut Down).
Hold down Command-R and press the Power button. ...
Wait for OS X to boot into the OS X Utilities window.
Choose Utilities > Terminal.
Enter csrutil disable.
Enter reboot.
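When you're done testing, it's worth re-enabling System Integrity Protection the same way from Recovery Mode:
csrutil enable
reboot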
Mac OS X Workaround:
My sense from past experience is that you are hitting the Mac OS X security features (disk protection and app certification).
Booting to Ubuntu from a USB stick and running the badblocks test that way is, in my opinion, going to be easier.
I hope this points you in the right direction.
I had the same issue, but then I opened Disk Utility and pressed Eject on the physical device (make sure it's the hard drive and not the volume). This unmounts the volumes but keeps the device itself available, which you can check by running:
diskutil list
Now run the badblocks command again and it should work fine.
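In other words, something like this, assuming the physical device is /dev/disk2 (check the diskutil output first; yours may differ):
# unmount every volume on the disk while keeping the device node available
sudo diskutil unmountDisk /dev/disk2
# non-destructive read-only scan; the raw device (rdisk) is usually faster
sudo badblocks -b 4096 -c 4096 -s -v /dev/rdisk2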
I was able to get badblocks working on OSX 10.15 by
1) disabling csrutil, as explained here
2) unmounting the badblock-desired drive via Disk Utility
3) running badblocks: sudo badblocks -b 4096 -w -s -v "$MOUNT_POINT" > "badblocks.info", where MOUNT_POINT=/dev/disk2 (note that -w is a destructive write-mode test that erases everything on the drive)
I installed badblocks via brew install e2fsprogs, as described here
Tangentially, I also did this in order to query the USB-connected drive via smartctl.
I purchased an Asus C300M with the sole aim of developing my Linux skills.
I followed the instructions to boot into developer mode and executed the following command to start downloading crouton/Ubuntu on it:
sudo sh -e ~/Downloads/crouton -t xfce
It was going well until my Wi-Fi disconnected temporarily and I got the following error:
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Failed to complete chroot setup
Unmounting /mnt/stateful_partition/crouton/chroots/precise..
Then I tried to run the sudo command again but I got the following:
/usr/local/chroots/precise already has stuff in it!
Either delete it, specify a different name (-n) or specify -u to update it
However, I'm not sure how to modify the command so I can resume the installation or restart it.
You could try sudo sh ~/Downloads/crouton -u -n xfce, but it's unlikely that will work. That's from the crouton docs.
The best approach, since you never finished installing and therefore don't need to recover any data, is to just delete the install directory and start again. There is no good way for crouton to pick up where it left off.
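Concretely, something like this should get you back to a clean start (the chroot name precise and its location come from your error messages; delete-chroot ships with crouton, but the rm fallback does the same thing):
# remove the half-finished chroot
sudo delete-chroot precise
# or, if delete-chroot isn't available, remove the directory directly
sudo rm -rf /usr/local/chroots/precise
# then kick off the install again
sudo sh -e ~/Downloads/crouton -t xfce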
Also, during the install you don't want the Chromebook going to sleep. There is no way in the built-in ChromeOS settings to prevent that, but according to this article you can go to the Chrome Web Store and install Keep Awake from Google.
This gives a cool icon in the upper right of the Chrome browser showing a sun, a sunset or a moon depending on what settings you want. Before you start your next install, click it to the sun so the machine won't sleep.
I had the same problem. I ran this and it's downloading:
sudo sh ~/Downloads/crouton -e -t xfce -u
I'm not sure if it picked up where I left off, but it is definitely downloading and reinstalling after the interrupted connection.
Just type in sudo start[desktop environment] (e.g. sudo startxfce4 for Xfce)
and press y. It should keep going.
In UNIX, I have a utility, say 'Test_Ex', a binary file. How can I write a job or a shell script (as a cron job) that always runs in the background and checks every 5 seconds whether 'Test_Ex' is still running (and probably hide this job)? If it is running, do nothing. If not, delete a directory at the specified path.
Try this script:
pgrep Test_Ex > /dev/null || rm -r dir
If you don't have pgrep, use
ps -e -ocomm | grep -q Test_Ex || ...
instead.
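To run that check every 5 seconds as the question asks (cron can't schedule anything more often than once a minute), wrap it in a small loop. A sketch, with the directory path as a placeholder:
#!/bin/sh
# watchdog.sh -- check every 5 seconds; once Test_Ex is gone, delete the directory and exit
while :
do
    if ! pgrep Test_Ex > /dev/null
    then
        rm -r /path/to/dir   # placeholder path
        break
    fi
    sleep 5
done
Start it in the background with nohup ./watchdog.sh & so it survives the terminal closing.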
Utilities like upstart, originally part of the Ubuntu Linux distribution I believe, are good for monitoring running tasks.
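For example, a minimal upstart job that keeps Test_Ex alive might look like this (the job name and binary path are assumptions):
# write the job definition, then start it
sudo tee /etc/init/test-ex.conf > /dev/null <<'EOF'
description "keep Test_Ex running"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/local/bin/Test_Ex
EOF
sudo start test-ex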
The best way to do this is to not do it. If you want to know if Test_Ex is still running, then start it from a script that looks something like:
#!/bin/sh
# run Test_Ex in the foreground; the lines below only run after it exits
Test_Ex
logger "Test_Ex died"
rm -r /p/a/t/h
or
#!/bin/sh
while ! Test_Ex
do
    logger "Test_Ex terminated unsuccessfully, restarting in 5 seconds"
    sleep 5
done
Querying ps regularly is a bad idea, and trying to monitor it from cron is a horrible, horrible idea. There seems to be some comfort in the idea that crond will always be running, but you can no more rely on that than you can rely on the wrapper script staying alive; either one can be killed at any time. Waking up every few seconds to query ps is just a waste of resources.