My app is Python with MariaDB on an 8-core vCPU and 16 GB RAM.
I've set innodb_buffer_pool_size to 11G, but RAM usage stays low and never crosses 5 GB.
My application runs quite slowly. What causes this? Any clues or guidance?
The DB size is less than 5 GB.
I attached an htop screenshot; there seem to be plenty of resources available. What clues can we get from it? See htop.
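One hedged observation: the buffer pool is a cache, so with a dataset under 5 GB it will never come close to filling an 11 GB pool, and low RAM usage is expected; the slowness likely lies elsewhere (queries, indexes). A minimal sketch, with hypothetical page counts, of how to gauge actual pool usage from `SHOW GLOBAL STATUS`:

```python
# Sketch: gauge how much of the buffer pool InnoDB actually uses.
# The page counts below are hypothetical; on a real server, fetch them with:
#   SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
status = {
    "Innodb_buffer_pool_pages_total": 720896,  # an 11G pool in 16 KB pages
    "Innodb_buffer_pool_pages_free": 458752,   # hypothetical free pages
}

PAGE_SIZE = 16 * 1024  # default innodb_page_size

used_pages = (status["Innodb_buffer_pool_pages_total"]
              - status["Innodb_buffer_pool_pages_free"])
used_gib = used_pages * PAGE_SIZE / 2**30
fill_pct = 100 * used_pages / status["Innodb_buffer_pool_pages_total"]
print(f"buffer pool in use: {used_gib:.1f} GiB ({fill_pct:.0f}%)")
```

If the fill percentage stays well below 100%, the pool is simply bigger than the working set, which is harmless.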
It would seem to me that we have a bottleneck we just can't seem to get over.
We have a setup which contains 4 NVMe drives in RAID 10.
We are using MariaDB 10.4.
We have indexes.
The workload we have will be I/O bound 99% of the time; there is no way around that fact.
What I have seen while watching the performance dashboard in MySQL Workbench is that both the SATA SSD and the NVMe SSD read at about 100 MB/s for the same data set.
Now, if I am searching through 200M rows (or pulling 200M), I would think that the InnoDB disk reads would go faster than 100 MB/s.
I mean, these drives should be capable of reading 3 GB/s, so I would at least expect to see something like 500 MB/s.
The reality here is that I am seeing the exact same speed on the NVMe that I see on the SATA SSD.
So the question I have is: how do I get these disks to be fully utilized?
Here are the only config settings outside of replication:
sql_mode = 'NO_ENGINE_SUBSTITUTION'
max_allowed_packet = 256M
innodb_file_per_table
innodb_buffer_pool_size = 100G
innodb_log_file_size = 128M
innodb_write_io_threads = 16  # Not sure these 2 lines actually do anything
innodb_read_io_threads = 16
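One possible explanation (an assumption, not a diagnosis): InnoDB reads 16 KB pages, so a random-read workload is bound by IOPS at the effective queue depth, not by sequential bandwidth. A rough back-of-the-envelope sketch with a hypothetical IOPS figure:

```python
# Back-of-the-envelope: random 16 KB page reads vs. sequential bandwidth.
PAGE_SIZE = 16 * 1024      # default innodb_page_size, in bytes
random_read_iops = 6400    # hypothetical effective IOPS at low queue depth

throughput_mb_s = random_read_iops * PAGE_SIZE / 1e6
print(f"~{throughput_mb_s:.0f} MB/s")  # ~105 MB/s, near the observed 100 MB/s
```

At that rate both drives look identical because neither is bandwidth-bound; more concurrency (parallel queries, deeper queues) is what would let the NVMe pull ahead.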
"IO bound there is no way around that fact"
Unless you are very confident in the suitability of your indexes, this seems a little presumptuous.
Assuming you're right, this would imply a 100% write workload, or a data size orders of magnitude larger than the available RAM with a uniform distribution of small accesses.
innodb_io_capacity is providing a default limitation, and your hardware is capable of more.
Also, if you are reading so frequently, your innodb_buffer_pool_size isn't sufficient.
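If the storage really is NVMe, the default innodb_io_capacity (200, with innodb_io_capacity_max at 2000) is far below what the drives can do. Illustrative values only, to be tuned against the actual array:

```ini
# Illustrative my.cnf excerpt for fast NVMe storage (values are assumptions)
innodb_io_capacity     = 4000
innodb_io_capacity_max = 8000
# 128M of redo log is small for a heavy workload; a larger log smooths flushing
innodb_log_file_size   = 4G
```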
I'm checking a server that has 32 GB of RAM, and I see 99% memory usage.
The machine runs IIS, MongoDB, and Elasticsearch.
None of the processes seemed to be that big; the largest was MongoDB at about 1 GB.
So I shut down everything, and now the memory usage is 88%.
After a reboot, with all services running, the memory usage is 23%.
Those are the largest processes on the system, with everything shut down. As you can see, everything is very small, but most of the RAM remains gone.
How can I track what is eating up all the RAM? I tried Process Explorer, but it doesn't give me any more useful info.
Try RAMMap from Sysinternals; it will give you more details about memory usage, such as the Metafile category.
Elasticsearch generally uses a lot of the available RAM to cache search results and aggregations; this is to avoid memory swapping. It's very evident and observable on Linux servers. For this reason, it's recommended to run ES on a separate server in production under heavy usage.
So please check the cache memory usage first.
Have a look at the heap size allotted to Elasticsearch. You can check the values of -Xms and -Xmx in the jvm.options file. Usually, 50% of physical RAM is allotted to ES, and with bootstrap.memory_lock set to true, it locks the RAM. Ideally, as another answer mentions, Elasticsearch should run on its own machine.
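For reference, those settings live in jvm.options; an illustrative excerpt for a 32 GB machine (the 16g figure is the usual 50%-of-RAM rule of thumb, not a measured value):

```ini
# jvm.options (excerpt): pin the ES heap to half of physical RAM
-Xms16g
-Xmx16g
```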
Environment:
machines: 2.1 GHz Xeon, 128 GB RAM, 32 CPUs
OS: CentOS 7.2.1511
Cassandra version: 2.1.15
OpsCenter version: 5.2.5
3 keyspaces: Opscenter (3 tables), OpsCenter (10 tables), and the application's keyspace (485 tables)
2 datacenters: 1 for Cassandra (5 machines) and another, DCOPS, to store OpsCenter data (1 machine).
Right now the agents on the nodes consume on average ~1300 CPU (out of 3200 available), the only transacted data being ~1500 w/s on the application keyspace.
Is there any relation between the number of tables and OpsCenter? Is it behaving like this, eating a lot of CPU, because the agents are trying to write data from too many metrics, or is it some kind of bug?
Note: the same behaviour occurred on the previous version, OpsCenter 5.2.4. For this reason I first tried upgrading OpsCenter to the newest version available.
From the OpsCenter 5.2.5 release notes:
"Fixed an issue with high CPU usage by agents on some cluster topologies. (OPSC-6045)"
Any help/opinion much appreciated.
Thank you.
Observing a specific agent's PID with the awesome tool you provided, Chris, I noticed that heap utilisation was constantly above 90%, which triggered a lot of GC activity with huge GC pauses of almost 1 second. During these periods I suspect the polling threads had to wait, blocking my CPU a lot. Anyway, I am not a specialist in this area.
I decided to enlarge the agent's heap from the default 128 MB to a nicer value of 512 MB, and I saw that all the GC pressure went away and thread allocation now behaves nicely.
Overall, CPU utilization for the OpsCenter agent dropped from 40-50% down to 1-2%. And I can live with 1-2%, since I know for sure that the CPU is consumed by the JMX metrics.
So my advice is to edit the file:
datastax-agent-env.sh
and change the default 128 value of Xmx to:
-Xmx512M
Save the file, restart the agent, and monitor for a while.
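For example, the relevant line in datastax-agent-env.sh looks roughly like this (the exact wording may differ between agent versions):

```shell
# datastax-agent-env.sh (excerpt): raise the agent heap from the default 128M
JVM_OPTS="$JVM_OPTS -Xmx512M"
```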
http://s000.tinyupload.com/?file_id=71546243887469218237
Thank you again Chris.
Hope this will help other folks.
I have OpenStack deployed across multiple servers. Each server has 2 CPUs, with 8 cores and 16 threads each. If I turn hyper-threading on, how many vCPUs can I use at most in my OpenStack deployment so that I don't overcommit vCPUs for any VM?
Hyperthreading
I recommend against turning on hyperthreading when working with KVM in general, though I am biased. When hyperthreading and KVM were both young, many issues cropped up around vCPUs and hyperthreading.
For clarity, hyperthreading simply creates a soft logical processor in the Linux kernel in an effort to reach higher efficiency in the CPU processing queue.
Overcommitting, vCPUs and logical CPUs
A vCPU is a virtual cpu allocated to a virtual machine.
A logical CPU is a CPU logically allocated to your host system's Linux kernel.
As seen with hyperthreading, sometimes the logical CPUs outnumber the physical CPUs or cores on the host.
You are technically overcommitting the moment you have more vCPU cores than physical cores. Note that I said PHYSICAL cores, not logical CPUs. What Linux shows you in /proc/cpuinfo may not be an accurate reflection of the available physical cores, in part thanks to hyperthreading.
As KVM allocates vCPUs, they are not set with any sort of CPU affinity by default. This means the vCPUs go to whichever logical processor in Linux seems most available at the time. If someone kicks off a make 'MAKE=make -j64' world sort of job, you might see some pretty significant utilization spin up and begin to fire-hose around whatever logical CPUs are most available at any given moment.
Now, if you have an 8-physical-core box hosting 4 virtual machines with 2 vCPUs apiece, this is fine. But think about what happens with hyperthreading enabled: now you have 16 logical CPUs but only 8 cores. What happens when you bring up 4 more virtual machines? You run the risk of virtual machines directly impacting resource availability for their neighbors. This is technically overcommitting.
Don't overcommit if you don't have to.
Also consider the needs of the host. You might want to set CPU affinity on the host system when you perform CPU-intensive actions, treat that physical core as dedicated to the HOST, and subtract it from the maximum vCPU count available to VMs.
Learn how to set affinities with taskset:
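Putting the arithmetic for the questioner's hosts into a small sketch (the one-core host reservation is my assumption, not an OpenStack rule):

```python
# Max vCPUs without overcommitting: count physical cores, not HT threads.
def max_vcpus(sockets, cores_per_socket, reserved_host_cores=1):
    """Physical cores minus whatever is reserved for the host itself."""
    return sockets * cores_per_socket - reserved_host_cores

# The hosts in question: 2 CPUs x 8 cores (16 threads with hyperthreading).
print(max_vcpus(2, 8))     # 15 vCPUs if one core is kept for the host
print(max_vcpus(2, 8, 0))  # 16 if nothing is reserved; HT threads don't count
```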
ref: http://manpages.ubuntu.com/manpages/hardy/man1/taskset.1.html
Max vCPU per VM
As for CPU quotas, this is basically a function of your hypervisor, not of OpenStack. You'll want to handle that with a CFM and some careful planning.
For instance, Red Hat tunes their own KVM packages:
The maximum amount of virtual CPUs that is supported per guest varies
depending on which minor version of Red Hat Enterprise Linux 6 you are
using as a host machine. The release of 6.0 introduced a maximum of
64, while 6.3 introduced a maximum of 160. Currently with the release
of 6.7, a maximum of 240 virtual CPUs per guest is supported.
ref: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Virtualization_Restrictions.html
Here is some info on tuning per-VM CPU / resource allocation:
ref: http://libvirt.org/formatdomain.html#elementsCPUAllocation
I'm a developer in a large company that has some legacy code requiring a very large amount of memory in its export functions. To address this, ini_set('memory_limit', '4G'); is used.
The problem is that the script crashes with memory exhaustion. If I set the limit to 2G, the script runs to the end. It doesn't even reach 1 GB peak memory usage.
Since the code is versioned and shared with the rest of the company, I can't change the limit, and changing it on my local install is cumbersome.
My question is: what can make a script crash with a 4GB limit but not a 2GB one?
PS: my setup is a VirtualBox machine running Debian with nginx and php-fpm. The VM has 4 GB of RAM (although changing this doesn't seem to make any difference).
[update]
I created a new virtual machine with a 64-bit operating system, and if I set the VM memory to 2 GB it works (with 4 GB it doesn't).
Since I'm OK with 2 GB, I'll close this issue.
It is a natural limitation: the 2 or even 4 GB of address space is also used for file mappings, which take up some memory pages.
The ultimate solution would be to use the 64-bit PHP interpreter (i.e., switch to a 64-bit system, if possible).
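The arithmetic behind this, as a sketch: a 32-bit process has at most 2**32 bytes of address space, so a memory_limit of 4G equals the entire space, with no room left for the interpreter, stack, or file mappings.

```python
# Why memory_limit = 4G can never be satisfied by 32-bit PHP.
GIB = 2**30
addr_space_32 = 2**32            # total address space of a 32-bit process

print(addr_space_32 // GIB)      # 4 -- a 4G limit equals the whole space
print((addr_space_32 - 2 * GIB) // GIB)  # 2 -- GiB of headroom under a 2G limit
```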
Maybe you are on a 32-bit system?
Well, if your VM only has 4 GB, then you should probably give it more memory.
On a 32-bit system, 4 GB is the limit of the memory space. I guess there can be memory violations when PHP tries to get 4 GB of memory.