Debugging a 100% iowait problem in Linux/Unix

I've been trying to trace down why I have 100% iowait on my box. If I do something like a MySQL SELECT query, the system goes to 100% iowait (on more than one CPU on my server), which kills my watchdogs and sometimes kills httpd itself.
In vmstat I see that every 8 seconds or so there's a 5 MB disk write, and that causes at least one CPU (out of 4) to block for one or two seconds.
I should mention that there are a few million files on my ext3 filesystem (I also tried ext2, and I have noatime set and no journaling enabled). There is a hardware RAID mirroring two 300 GB IDE drives.
I'm missing dtrace. Is there any way to find out what causes these writes? And how do I speed my filesystem up?
Ideas are welcome!
Thank you!

Use iotop.

OK, possible diagnosis steps (for posterity):
Have you confirmed that you're not actually running out of virtual memory and therefore swapping processes out to disk?
If it's not the kernel swapping, you may be able to use strace (as you don't have dtrace) to prove whether it's MySQL doing the writes.
Can you provide more details of the hardware and OS configuration?
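In the meantime, a minimal sketch of the first two steps on a typical Linux box, assuming iotop and strace are installed and that mysqld is the suspect process:

    # Check whether the box is swapping (watch the si/so columns)
    vmstat 1 10
    free -m
    # Show only processes that are currently doing I/O (needs root)
    iotop -o
    # If mysqld shows up, briefly trace its write-related syscalls
    # (strace adds noticeable overhead, so don't leave it attached)
    strace -f -e trace=write,fsync,fdatasync -p "$(pidof -s mysqld)"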

Related

NiFi memory management

I just want to understand how we should plan for the capacity of a NiFi instance.
We have a NiFi instance that has around 500 flows, so the total number of processors enabled on the NiFi canvas is around 4000. We run 2-5 flows simultaneously, none of which takes more than half an hour, i.e. we process data in the MB range.
It was working fine until now, but we are seeing OutOfMemory errors very often. So we increased the Xms and Xmx parameters from 4g to 8g, which has resolved the problem for now. But going forward we will have more flows, and we may face OutOfMemory issues again.
So, can anyone help with a capacity-planning matrix or any suggestions for avoiding such issues before they happen? E.g.: if we have 3000 processors enabled, with or without any processing, then X GB of memory is required.
Any input on NiFi capacity planning would be appreciated.
Thanks in advance.
OOM errors can occur due to specific memory-consuming processors. For example, SplitXml loads your whole record into memory, so it could pull a 1 GiB file into the heap at once.
Each processor documents the resource considerations that should be taken into account. All of the Apache processors (as far as I can tell) are documented in that manner, so you can rely on them.
In our example, by the way, SplitXml can be replaced with SplitRecord, which doesn't load the whole record into memory.
So even if you use 1000 processors simultaneously, they might not consume as much memory as one processor that loads your whole FlowFile's content into memory.
Check which processors you are using and make sure you don't use one like that (there are more like this one that load the whole document into memory).
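If you do end up needing a bigger heap again, it is configured in NiFi's conf/bootstrap.conf; a quick sketch for checking it (the java.arg.2/java.arg.3 property names and the /opt/nifi path are assumptions based on a stock Apache NiFi install):

    # Show the current JVM heap settings (path is illustrative)
    grep -E '^java\.arg\.[23]=' /opt/nifi/conf/bootstrap.conf
    # Typical output after raising the heap to 8 GB:
    # java.arg.2=-Xms8g
    # java.arg.3=-Xmx8g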

Where's my memory? All RAM gone under Windows Server 2016

I'm checking a server that has 32 GB of RAM, and I see 99% memory usage.
The machine runs IIS, MongoDB and Elasticsearch.
None of the processes seemed to be that big. The largest was MongoDB at about 1 GB.
So I shut down everything... and now the memory usage is 88%.
After a reboot, with all services running, the memory usage is 23%
Those are the largest processes on the system, with everything shut down. As you can see, everything is very small, but most of the RAM remains gone.
How can I track down what is eating up all the RAM? I tried Process Explorer, but it doesn't give me any more useful info.
Try RAMMap from Sysinternals; it will give you more details about memory usage, like the Metafile for example.
Elasticsearch generally uses a lot of the available RAM to cache search results and aggregations. This is to avoid memory swapping. It's very evident and observable on Linux servers, so with heavy usage it's recommended to run ES on a separate server in production.
So please check the cache memory as well.
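One hedged way to check that at runtime is through Elasticsearch's own stats APIs, assuming ES is listening on the default localhost:9200:

    # Heap and OS memory usage per node, as Elasticsearch sees it
    curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'
    # Detailed JVM stats (heap used/committed, GC counts)
    curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'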
Have a look at the heap size allotted to Elasticsearch. You could check the values of -Xms and -Xmx in the jvm.options file. Usually, 50% of physical RAM is allotted to ES, and with bootstrap.memory_lock set to true it locks that RAM. Ideally, as another answer mentions, Elasticsearch should be run on its own machine.
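A quick sketch for checking those settings on disk; the /etc/elasticsearch paths assume a package install, so adjust for your layout:

    # JVM heap bounds
    grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options
    # Whether the heap is locked into RAM
    grep 'bootstrap.memory_lock' /etc/elasticsearch/elasticsearch.yml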

MariaDB recommended RAM, disk, and core capacity?

I am not able to find MariaDB's recommended RAM, disk, and number-of-cores capacity. We are setting up at an initial stage with a very minimal data volume, so I just need MariaDB's recommended capacity.
Appreciate your help!!!
Seeing that over the last few years microservice architecture has been rapidly gaining ground, and each microservice usually needs its own database, I think this type of question is actually becoming more relevant.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and I was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL database would be...
Anyway, I got this helpful answer from here that I thought I could also share in case someone else is looking into it...
"When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400MB of RAM, which isn't noticeable with a database server with 64GB of RAM, but it is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128MB, you're well over your 512MB RAM allotment, and that doesn't include anything from the operating system."
1 CPU core is more than enough for most MySQL/MariaDB installations.
512MB of RAM is tight, but probably adequate if only MariaDB is running. But you would need to aggressively shrink various settings in my.cnf. Even 1GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
There are minor differences in requirements between operating systems, and between versions of MariaDB.
Turn off most of the performance_schema. If all the flags are turned on, a lot of RAM is consumed.
20 years ago I had MySQL running on my personal 256MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk. If you have only a few MB of data, then disk is not an issue.
Look at it this way: what is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably cannot work, even before you add apps.
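To make the "aggressively shrink various settings in my.cnf" advice concrete, here is a rough, illustrative sketch for a small VM; the values are assumptions to experiment with, not recommendations:

    # Illustrative low-memory settings; append to my.cnf (or a conf.d snippet)
    cat >> /etc/my.cnf <<'EOF'
    [mysqld]
    performance_schema      = OFF
    innodb_buffer_pool_size = 32M
    key_buffer_size         = 8M
    tmp_table_size          = 8M
    max_heap_table_size     = 8M
    max_connections         = 20
    EOF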
MariaDB and MySQL both actually use very little memory. About 50 MB to 150 MB is the range I found on some of my servers. These servers are running a few databases, each with a handful of tables and a limited user load. The MySQL documentation claims it needs 2 GB, which is very confusing to me. I understand why MariaDB does not specify any minimum requirements: if they say 50 MB, a lot of folks will want to disagree; if they say 1 GB, they are unnecessarily inflating the minimum requirements. Come to think of it, more memory means a better cache and better performance. However, a well-designed database can do disk reads every time without any performance issues. My Apache installs (on the same servers) consistently use up more memory (about double) than the database.
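If you want to verify numbers like that on your own servers, a minimal sketch (assuming the daemon is named mysqld; newer MariaDB releases may name it mariadbd):

    # Resident (RSS) and virtual (VSZ) memory of the database daemon, in KB
    ps -C mysqld -o pid,rss,vsz,cmd
    ps -C mariadbd -o pid,rss,vsz,cmd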

Force Vagrant to use Swap memory

I have one of those first aluminum iMacs with 2+2 GB of RAM. I use Vagrant to emulate advanced development environments, separated for different jobs.
When I have just one Vagrant process running in the background, the computer gets slow as hell, because it is always out of memory.
The question is: can I make Vagrant (or any app) run only in swap memory, so it leaves all the RAM for the OS and other apps?
If there is any solution, how can I do that?
The short answer is: no, a process cannot run entirely in swap.
Processes must have their data in RAM for the CPU to be able to operate on it; infrequently used data is moved out to swap space when there's no longer room in memory for everything that's loaded.
You could create a larger swap space and use ulimit to limit the amount of memory used by processes (i.e. force them into swap earlier), but this doesn't really address the root of your problem: that you're pretty much at the limit of your 4 GB of memory.
Keep in mind that using swap space will always produce performance problems as (even with SSDs) reading from disk is far slower than reading from memory.
Short of upgrading to more memory, you could:
Reduce the amount of memory allocated to your vagrant box (see the sketch after this list);
Use OS X's Activity Monitor to identify and close any programs/processes that are not in use but are still using memory.
but, again, these are just stop-gap solutions.
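For the first point, a minimal sketch of capping the box's RAM, assuming the VirtualBox provider (the 1024 MB figure is only an example):

    # In the Vagrantfile (Ruby), shown here as comments:
    #   config.vm.provider "virtualbox" do |vb|
    #     vb.memory = "1024"
    #   end
    # Then restart the box so the new limit takes effect:
    vagrant reload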
Simple answer is no.
Controlling swappiness has to be done within the VM; on Linux, for example, echo 100 > /proc/sys/vm/swappiness sets the swap strategy to its most aggressive mode. Remember, you have no control over where processes end up (physical memory vs. swap).
However, by doing this, your host/guest will still be slow as hell, simply because you don't have enough physical memory.
The ultimate solution is to add more RAM to your iMac ;-D

ASP.NET - Single large web request triggers System.OutOfMemoryException - Still have plenty of available memory

Environment:
Windows Server 2003 (32-bit); IIS 6, ASP.NET 2.0 (3.5); 4 GB RAM; 1 worker process
We have a situation where a very large System.Xml.XmlDocument is being loaded into memory and then fed into a compiled XSL transform.
What is happening is that when a web request comes in, the server is sitting idle with 2500 MB of available system memory.
As the XML DOM is populated, the available memory drops by approx 500 MB, at which point we get a System.OutOfMemoryException. At that point the system should theoretically still have 2000 MB of available memory to service the request (according to Perfmon).
The related questions I have are:
1) At what level in the stack is this out-of-memory limitation being hit? The OS? IIS? ASP.NET? The worker process? Is this a per-request limit?
2) Is this limit configurable somewhere?
3) Why can’t this web request access the full available system memory?
1) I would guess at the worker process, but this should be configurable within IIS via the limit on how much memory a worker process can use. Another factor is the bitness of your software; e.g. a 32-bit process has a hard ceiling of 4 GB, since that is its total address space.
2) Probably, but don't forget that memory fragmentation may play a role in hitting out-of-memory sooner than you think; e.g. if there is a request for a contiguous 1000 MB block of memory, it may not be available even when the total free memory is larger.
3) Have you examined dump data to see what is in memory when the exception gets thrown? If not, there are ways to get a snapshot of the memory to see what it looks like, as this may give you more clues about what is going on.
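For point 3, one hedged way to capture such a dump on Windows Server 2003 is ADPlus from the Debugging Tools for Windows; the output path here is illustrative:

    REM Capture a crash dump of the ASP.NET worker process when the exception is thrown
    cscript adplus.vbs -crash -pn w3wp.exe -o C:\dumps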
You are running in a process. A 32-bit process can only access 2 GB of user-mode memory by default. This task is sharing that memory with everything else running in the process, so this bit of code does not get the full 2 GB, even if it is available.
There is also a 3 GB switch at the OS level; on Windows Server 2003 it is the /3GB switch in boot.ini. Search MSDN for the details and caveats before using it.
But realistically, you need to do this another way, possibly by switching to a SAX-style (streaming) XML parser such as XmlReader.
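For reference, an illustrative boot.ini entry with the switch added (the ARC path will differ on your machine, and the switch has side effects worth researching before enabling it):

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB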
I'm sure there are some bright heads here that can answer your specific questions, but have you asked yourself if there is another way to do what you want? I specifically mean that you probably do not want to process a very large XML document, but you probably more specifically want to return something back to the client. Could you rewrite the code to avoid this XML document altogether, or perhaps not load it all into memory at the same time, and still produce the same end-result?
1) Dunno. Check your logs.
2) IIS limits the memory divvied out to websites/application pools. Check your settings.
3) Servers are all about uptime; if a single app hogs all the resources, everybody else suffers. That's why enterprise apps like IIS limit memory, to prevent a runaway from taking down the entire server.
