I have one of those first aluminium iMacs with 2+2 GB of RAM. I use Vagrant to run separate development environments for different jobs.
When I have just one Vagrant process running in the background, the computer becomes slow as hell because it is constantly out of memory.
The question is: can I force Vagrant (or any app) to run only in swap, so that it leaves all the physical memory to the OS and the other apps?
If there is any solution, how can I do that?
The short answer is: no, a process cannot run entirely in swap.
Processes must have their data in RAM for the CPU to be able to operate on it; infrequently used data is moved out to swap space only when there is no longer room in memory for everything that is loaded.
You could create a larger swap space and use ulimit to limit the amount of memory available to processes (i.e. force them into swap earlier), but this doesn't really address the root of your problem: you're pretty much at the limit of your 4GB of memory.
Keep in mind that relying on swap space will always cause performance problems, because (even with SSDs) reading from disk is far slower than reading from memory.
Short of upgrading to more memory, you could:
Reduce the amount of memory allocated to your Vagrant box (see the Vagrantfile sketch below);
Use OS X's Activity Monitor to identify and close any programs/processes that are not in use but are still using memory.
but, again, these are just stop-gap solutions.
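For the first point, here is a minimal Vagrantfile sketch, assuming the VirtualBox provider and a reasonably recent Vagrant; the 1024 MB figure is just an example, pick whatever your guest can get by with:

    # Vagrantfile (Ruby DSL), VirtualBox provider assumed
    Vagrant.configure("2") do |config|
      config.vm.provider "virtualbox" do |vb|
        # Cap the guest's RAM; older Vagrant versions use
        # vb.customize ["modifyvm", :id, "--memory", "1024"] instead
        vb.memory = 1024
      end
    end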
The simple answer is no.
Swappiness has to be controlled from within the VM. On Linux, for example, echo 100 > /proc/sys/vm/swappiness sets the most aggressive swapping strategy. Remember that you have no control over where a process's pages end up (physical memory vs. swap).
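A minimal sketch of doing this inside a typical Linux guest (run as root; the sysctl.conf location can vary by distribution):

    # Takes effect immediately, but is lost on reboot:
    echo 100 > /proc/sys/vm/swappiness

    # Persist it across reboots:
    echo 'vm.swappiness = 100' >> /etc/sysctl.conf
    sysctl -p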
However, by doing this, your host/guest will still be slow as hell, simply because you don't have enough physical memory.
The ultimate solution is to add more RAM to your iMac ;-D
I am not able to find MariaDB's recommended RAM, disk, and CPU core capacity. We are setting up an initial deployment with a very minimal data volume, so I just need MariaDB's recommended minimum capacity.
Appreciate your help!!!
Seeing that microservice architectures have become much more common over the last few years, and each microservice usually needs its own database, I think this type of question is actually becoming more relevant.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and I was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL database would be...
Anyway, I got this helpful answer from here that I thought I could share in case someone else is looking into it...
When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400MB of RAM, which isn't noticeable on a database server with 64GB of RAM, but it is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128MB, you're well over your 512MB RAM allotment, and that doesn't include anything from the operating system.
1 CPU core is more than enough for most MySQL/MariaDB installations.
512MB of RAM is tight, but probably adequate if only MariaDB is running. But you would need to aggressively shrink various settings in my.cnf. Even 1GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
There are minor differences in requirements between operating systems and between versions of MariaDB.
Turn off most of the performance_schema. If all its flags are turned on, a lot of RAM is consumed.
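To give an idea of the kind of shrinking meant above, here is an illustrative my.cnf sketch for a roughly 512MB box; the values are assumptions to experiment with, not tuned recommendations:

    [mysqld]
    # Trim the biggest memory consumers (illustrative values, adjust to taste)
    innodb_buffer_pool_size = 32M
    key_buffer_size         = 8M
    tmp_table_size          = 8M
    max_heap_table_size     = 8M
    max_connections         = 20
    # performance_schema instrumentation costs RAM when enabled
    performance_schema      = OFF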
20 years ago I had MySQL running on my personal 256MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk. If you have only a few MB of data, then disk is not an issue.
Look at it this way: what is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably cannot work, even before you add apps.
MariaDB and MySQL both actually use very little memory. About 50 MB to 150 MB is the range I found on some of my servers. These servers run a few databases, each with a handful of tables and a limited user load. The MySQL documentation claims it needs 2 GB, which is very confusing to me. I understand why MariaDB does not specify any minimum requirements: if they say 50 MB, a lot of folks will want to disagree; if they say 1 GB, they are unnecessarily inflating the minimum requirements. Come to think of it, more memory means better caching and performance. However, a well-designed database can read from disk every time without any performance issues. My Apache installs (on the same servers) consistently use more memory (about double) than the database.
I am solving a 2d Laplace equation using OpenCL.
The global memory access version runs faster than the one using shared memory.
The algorithm used for the shared-memory version is the same as the one in the OpenCL Game of Life code:
https://www.olcf.ornl.gov/tutorials/opencl-game-of-life/
If anyone has faced the same problem please help. If anyone wants to see the kernel I can post it.
If your global-memory version really runs faster than your local-memory version (assuming both are equally well optimized for the memory space they use), maybe this paper could answer your question.
Here's a summary of what it says:
Using local memory in a kernel adds another constraint on the number of concurrent workgroups that can run on the same compute unit.
Thus, in certain cases, it may be more efficient to remove this constraint and live with the higher latency of global memory accesses. More wavefronts (warps in NVIDIA parlance; each workgroup is divided into wavefronts/warps) running on the same compute unit allow your GPU to hide latency better: if one is waiting for a memory access to complete, another can compute during that time.
In the end, each workgroup will take more wall-clock time to finish, but your GPU will be kept completely busy because it is running more of them concurrently.
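One quick way to check whether local memory is what limits your occupancy is to compare the local memory your kernel requests per workgroup against the device limit. A sketch using the clinfo utility (assuming it is installed; the output labels vary slightly by vendor):

    # Device limits relevant to occupancy: local memory size, max work-group
    # size and number of compute units
    clinfo | grep -iE "local memory|work group|compute units"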
No, it doesn't. It only says that ALL OTHER THINGS BEING EQUAL, an access from local memory is faster than an access from global memory. It seems to me that the global accesses in your kernel are being coalesced, which yields better performance.
Using shared memory (memory shared with the CPU) isn't always going to be faster. With a modern graphics card it would only be faster when the GPU and CPU are both performing operations on the same data and need to share information with each other, because memory then doesn't have to be copied from the card to the system and vice versa.
However, if your program runs entirely on the GPU, it could very well execute faster by working exclusively in the card's own memory (GDDR5), since not only is the GPU's memory likely much faster than your system's, there is also no latency from reading memory over the PCI-E lane.
Think of the graphics card's memory as a kind of "L3 cache" and your system's memory as a resource shared by the entire system; you only use it when multiple devices need to share information (or if your cache is full). I'm not a CUDA or OpenCL programmer; I've never even written Hello World with them. I've only read a few white papers, it's just common sense (or maybe my Computer Science degree is useful after all).
I'm a developer at a large company that has some legacy code requiring a very large amount of memory in its export functions. To address this, ini_set('memory_limit', '4G'); is used.
The problem is that the script crashes with memory exhaustion. If I set the limit to 2G, the script runs to the end. It doesn't even reach 1GB of peak memory usage.
Since the code is versioned and shared with the rest of the company, I can't change the limit, and changing it on my local install is cumbersome.
My question is: what can make a script crash with a 4GB limit but not with a 2GB one?
PS: my setup is a VirtualBox machine running Debian with nginx and php-fpm. The VM has 4GB of RAM (although changing this doesn't seem to make any difference).
[update]
Created a new virtual machine with a 64-bit operating system, and if I set the VM memory to 2GB it works (if I use 4GB it doesn't).
Since I'm OK with 2GB, I'll close this issue.
It is a natural limitation: of the 2 or even 4 GB of address space a 32-bit process gets, part is also used for file mappings, which takes away some memory pages.
The ultimate solution would be to use a 64-bit PHP interpreter (i.e., switch to a 64-bit system, if possible).
Maybe you are on a 32-bit system?
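A quick way to check, as a sketch (PHP_INT_SIZE is 4 on 32-bit builds and 8 on 64-bit builds; note that the CLI binary may differ from the php-fpm build, so confirm with phpinfo() there as well):

    # Prints the word size of the PHP CLI build, e.g. "64-bit"
    php -r 'printf("%d-bit\n", PHP_INT_SIZE * 8);'

    # And the OS itself:
    uname -m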
Well if your VM only has 4GB, then you probably should give it more memory.
On a 32-bit system, 4GB is the limit of the address space. I guess there can be memory violations when PHP tries to get 4GB of memory.
I was just wondering why there is a need to go through all the trouble of creating distributed systems for massively parallel processing when we could just build individual machines with hundreds or thousands of cores/CPUs (or even GPGPUs) per machine.
So basically, why should you do parallel processing over a network of machines when it could instead be done at much lower cost and much more reliably on one machine that supports numerous cores?
I think it is simply cheaper. Those machines are available today; there is no need to invent something new.
The next problem would be the complexity of the motherboard: imagine 10 CPUs on one board, that's a lot of links! And if one of those CPUs dies, it could take the whole machine down.
You can of course write a program for a GPGPU, but it is not as easy as writing it for a CPU. There are many limitations, e.g. the cache per core is really small (if there is any), you cannot communicate between cores (or you can, but it is very costly), etc.
Linking many computers is more stable, more scalable and cheaper thanks to a long history of use.
What Petr said. As you add cores to an individual machine, communication overhead increases. If memory is shared between cores, then the locking architecture for that shared memory, and caching, generate increasingly large overheads.
If you don't have shared memory, then effectively you're working with different machines, even if they're all in the same box.
Hence it's usually better to develop very large-scale apps without shared memory, and it is usually possible as well, although the communication overhead is often still large.
Given that this is the case, there's little use in building highly multicore individual machines, though some do exist, e.g. NVIDIA Tesla...
I've been trying to trace down why I have 100% iowait on my box. If I do something like a MySQL SELECT query, the system goes to 100% iowait (on more than one CPU on my server), which kills my watchdogs and sometimes kills httpd itself.
In vmstat I see that every 8 seconds or so there's a 5MB disk write, and that causes at least one CPU (out of 4) to block for one or two seconds.
I have to say that there are a few million files on my ext3 filesystem (I also tried ext2, and I have it mounted with noatime and journaling disabled). There is a hardware RAID mirroring two 300GB IDE drives.
I'm missing dtrace. Is there any way to find out what causes these writes? and how do I speed my filesystem up?
Ideas are welcome!
Thank you!
Use iotop.
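For example (a sketch; iotop needs root, -o shows only tasks that are actually doing I/O, and -a shows accumulated totals instead of instantaneous bandwidth):

    # Watch which processes are actually hitting the disk
    sudo iotop -o -a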
OK, possible diagnosis steps (for posterity):
Have you confirmed that you're not actually running out of physical memory and therefore swapping processes out to disk?
If it's not the kernel swapping, you may be able to use strace (since you don't have dtrace) to prove whether it's MySQL doing the writes (see the sketch below).
Can you please provide more details of the hardware and O/S configuration?
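As a sketch of the first two steps (assuming mysqld is the suspect; pidof, vmstat and strace are standard on most Linux systems, and tracing does slow the target down, so only run it briefly):

    # 1. Rule out swapping: non-zero si/so columns mean the box is paging
    vmstat 1 5

    # 2. If it isn't swap, attach strace to mysqld and log its file I/O
    strace -f -e trace=desc -p "$(pidof mysqld)" -o /tmp/mysqld-io.log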