I'm going crazy over atom-helper eating up CPU. Atom is a nice app, but this CPU overuse drives me crazy. I have already turned off the Git packages. What else can I try?
I am using XGBoost via the R package and did not specify an nthread parameter (it should default to the maximum number of available cores, which it does on Ubuntu).
On a Windows PC with an i7-4770 CPU (4 cores / 8 threads), however, CPU usage peaks at about 50%, even when I manually set nthread = 8. (The exact same code reaches 100% CPU under Ubuntu, so I don't think this is an implementation issue.) I also tried nthread = 4, which leads to around 30% CPU usage.
How do I get XGBoost to use all available threads under Windows?
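For reference, a minimal sketch of the kind of call I mean (toy data; all names and values here are illustrative, not my actual code):

library(xgboost)

# Toy data, purely illustrative
x <- matrix(rnorm(1e5 * 20), ncol = 20)
y <- rbinom(1e5, 1, 0.5)
dtrain <- xgb.DMatrix(data = x, label = y)

# nthread sets the number of threads used for training
params <- list(objective = "binary:logistic", nthread = 8)
model <- xgb.train(params = params, data = dtrain, nrounds = 50)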
I've found that when the Windows XGBoost R package is installed from CRAN via install.packages("xgboost"), it is built without OpenMP support. Without OpenMP you will not get the full benefit of parallel processing and your CPUs will be under-utilised. You can confirm this in your scenario by using software like Dependency Walker on the xgboost.dll file: you will note that it doesn't link against the OpenMP runtime (usually vcomp140.dll on Windows).
The solution in my case was to uninstall the CRAN-supplied R package and build XGBoost and its R package from source, which was an adventure in itself, but it did give me an OpenMP-enabled installation that pushed all 16 cores in my system to 100% utilisation.
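For anyone attempting the same, this is roughly the shape of the build I ended up with; exact steps vary between XGBoost versions, so treat it as a sketch rather than a recipe:

git clone --recursive https://github.com/dmlc/xgboost
cd xgboost
mkdir build
cd build
# R_LIB=ON also builds and installs the R package
cmake .. -DR_LIB=ON
cmake --build . --target install --config Release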
When I run memory.size(max=NA) I get: [1] 16264
But memory.size(max=T) gives me [1] 336.88
When I look at Task Manager, the 4 threads are using a total of ~1,000 MB (1/16 of my 16GB of available RAM) but they are using 100% of my CPU. While running, all processes combined are only using 50% of my 16GB of available RAM.
Whenever I try to increase memory allocation with memory.size(max=1000), I get the warning message:
Warning message:
In memory.size(max = 1000) : cannot decrease memory limit: ignored
What is going on here?
1) Is my CPU just slow given the amount of RAM I have? (Intel i7-6500U 2.5 GHz)
2) Does memory allocation require additional steps when using parallel threading? (e.g. doParallel)
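For reference, a sketch of the Windows-only memory.limit() calls that pair with memory.size() (the 32000 is illustrative):

memory.limit()              # reports the current limit in MB; here that is the 16264 seen above
memory.limit(size = 32000)  # raising the limit is allowed (if the OS can back it with RAM + pagefile)
memory.limit(size = 1000)   # lowering it is refused, with the same warning as memory.size(max = 1000)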
Take Cairo as an example: when I run Pkg.add("Cairo"), nothing is displayed in the console.
Is there a way to let Pkg.add() display more information when it is working?
What steps does Pkg.add() carry out? Download, compile?
Is it possible to speed it up? I waited for 15 minutes and nothing came out! Maybe it's Julia's problem, or maybe it's my system's problem; how can one tell?
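The only introspection I know of in the 0.3-era Pkg API is timing the call and checking the state afterwards (a sketch):

julia> @time Pkg.add("Cairo")   # prints elapsed time and allocations when it finishes

julia> Pkg.status()             # confirm Cairo and its dependencies were installed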
Edit
Julia version: 0.3.9 (Installed using binary from julia-lang.org)
OS: Windows 7 64-bit.
CPU: Core Duo 2.4GHz
RAM: 4G
Hard Disk: SSD
ping github.com passed, 0% loss.
Internet download speedtest: ~30 Mbps.
I don't know whether this is normal: it took me 11 seconds to get the version.
PS C:\Users\Nick> Measure-Command {julia --version}
Days : 0
Hours : 0
Minutes : 0
Seconds : 11
Milliseconds : 257
Ticks : 112574737
TotalDays : 0.000130294834490741
TotalHours : 0.00312707602777778
TotalMinutes : 0.187624561666667
TotalSeconds : 11.2574737
TotalMilliseconds : 11257.4737
And it took nearly 2 minutes to load the Gadfly package:
julia> @time require("Gadfly")
elapsed time: 112.131236102 seconds (442839856 bytes allocated, 0.39% gc time)
Does it run faster on Linux/Mac than on Windows? It is usually not easy to build software on Windows; however, would building from source improve performance?
Julia is awesome, I really hope it works!
As mentioned by Colin T. Bowers, your particular case is abnormally slow and indicates something wrong with your installation. But Pkg.add (along with other Pkg operations) is known to be slow on Julia 0.4. Luckily, this issue has been fixed.
Pkg operations have seen a substantial increase in performance in Julia v0.5, which is being released today (September 19, 2016). You can go to the downloads page to get v0.5. (It might be a few hours before they're up as of the time of this writing.)
I'm trying to clone a reasonably big SVN repository with git-svn, and at a certain point I get an error message:
Failure loading plugin: APR: Can't create a character converter from 'UTF-8' to native encoding: Cannot allocate memory at /usr/libexec/git-core/git-svn line 5061
And sometimes a
Cannot allocate memory: zlib (compress2): out of memory: Compression of svndiff data failed at /usr/libexec/git-core/git-svn line 5061
error message. I still have ~3 GB of RAM free. What should I do so git-svn can use it?
(I'm doing this on RedHat Enterprise Linux 6.5 if that makes any difference)
From:
This error message is about the memory git is trying to allocate -- it's more than what is free. This is most likely caused by a large file having been checked into SVN. Unfortunately, there's no easy way to fix it (apart from buying more memory) -- you would have to remove the large file and the commit adding it from SVN.
However, try the following (sketched below):
Increase swap space
Raise the ulimit
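A sketch of both steps on a RHEL-style box (the file name and size are illustrative; the swap part needs root):

# Add a temporary 4 GB swap file
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo mkswap /swapfile
sudo swapon /swapfile

# Raise per-process limits in the shell that will run git svn
ulimit -v unlimited   # max virtual memory
ulimit -d unlimited   # max data segment size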
I have a strange problem with one of my Plone sites: when I clear and rebuild the ZCatalog, the Zope client quits silently after some time. No errors.
I ran the process through the ZMI (ZEO + ZEO client, and standalone) and via 'zinstance debug'. Same result: the client quits silently.
I'm using a standard Plone 4.3 with some add-on Products on an Ubuntu Server 12.04 box.
Things I have tried in order to track down the problem, without success:
I've checked the permissions on the filesystem.
I've reinstalled Plone 4.3.
Packing the database works OK, but the problem persists.
Checked the free inodes on the filesystem.
Executing the process on another computer works successfully.
Executing the client with the fg parameter: no messages when quitting.
Backing up the DB and restoring it. Same result after restoring, but restoring on another computer rebuilds the catalog fine (with the same Plone version and add-ons).
Reindexing a single index of the catalog fails the same way: it quits with no messages.
ZODB/scripts/fstest.py shows no errors.
ZODB/scripts/fsrefs.py shows no errors.
Any clues?
David, you are right; I found the problem just yesterday, but it was too late (I was tired) to report it here.
This Plone instance is installed on a VPS (OpenVZ) with 512 MB of RAM, and the kernel was silently killing the Python process when no memory was left.
One of my last tests was to rebuild the catalog with "log progress" enabled; there I saw that the process quit at different points, but always around 30%. Then by chance I executed dmesg and, voilà, the mystery was solved. Look:
[2233907.698115] Out of memory in UB: OOM killed process 17819 (python) score 0 vm:799612kB, rss:497324kB, swap:45480kB
[2235168.564053] Out of memory in UB: OOM killed process 445 (python) score 0 vm:790380kB, rss:498036kB, swap:46924kB
[2236752.744927] Out of memory in UB: OOM killed process 17964 (python) score 0 vm:790392kB, rss:494232kB, swap:45584kB
[2237461.280724] Out of memory in UB: OOM killed process 26584 (python) score 0 vm:790328kB, rss:497932kB, swap:45940kB
[2238443.104334] Out of memory in UB: OOM killed process 1216 (python) score 0 vm:799512kB, rss:494132kB, swap:44632kB
[2239457.938721] Out of memory in UB: OOM killed process 12821 (python) score 0 vm:794896kB, rss:502000kB, swap:42656kB
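For anyone else chasing a silent quit like this, a sketch of the checks that finally revealed it (the beancounters file is OpenVZ-specific):

# Look for OOM-killer activity in the kernel log
dmesg | grep -i "out of memory"

# On OpenVZ, per-container limits and failure counters live here
cat /proc/user_beancounters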