MariaDB - information schema queries really slow

Ever since upgrading to Fedora 23, information schema queries have become very slow. This is an installation that started as mysql in Fedora 17. The change definitely happened with the upgrade to 23.
```
mysql
use information_schema;
select * from tables;
....
5237 rows in set, 11 warnings (1 min 7.32 sec)
MariaDB [information_schema]>
```
There are 28 databases, none particularly large.
Is there any sort of clean up or optimization that can be done to make this reasonable again?
Thanks

Probably not a regression...
That query has to "open" every table in every database. That can mean a lot of OS I/O to read the .frm files, though the OS caches those reads. I tested your query against my 1177 tables:
1st run: 32.54 seconds.
2nd run: 0.7 seconds.
3rd run: 0.7 seconds.
Try a second run on your "slower" machine.
Also, check this on both machines:
```sql
SHOW VARIABLES LIKE 'table_open_cache';
```
It might be more than 5237 on the fast machine and less than 5237 on the slower machine. (Actually, I don't think this is an issue. I shrank my setting, but the SELECT continued to be about 0.7 sec.)
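If you want to compare the two machines anyway, here is a minimal sketch (it assumes a local client session with privileges to change globals; 8000 is just an illustrative value above the 5237 tables here):

```bash
# Compare the cache size to the number of tables the query has to open.
mysql -e "SHOW VARIABLES LIKE 'table_open_cache'"
mysql -e "SELECT COUNT(*) FROM information_schema.tables"   # 5237 in the question
# If the cache is smaller than the table count, try raising it (needs SUPER);
# persist it in my.cnf under [mysqld] as table_open_cache = 8000.
mysql -e "SET GLOBAL table_open_cache = 8000"
```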

Related

CPU usage going to 100% after upgrading MariaDB from 10.2 to 10.6

I'm using a Symfony-based CRM application, with AWS RDS MariaDB as the database for the last 2 years. Everything was working smoothly, but two days ago I upgraded the MariaDB version (as it's mandatory) from 10.2 to 10.6.
After the upgrade, CPU usage went to 99% continuously and sessions increased to 800. I created a new parameter group for the 10.6 version. I didn't change anything in the code or queries.
Please give your help/suggestions for the same.
Thanks
Ankit
I did some deep research and finally found the solution:
https://conetix.com.au/support/slow-mariadb-sql-queries-in-statistics-state/
MariaDB and MySQL have an option that makes the server analyze each query so it can optimize subsequent ones, but complex queries can spend far too long in this analysis ("Statistics") state instead of actually executing, leaving queries hanging and putting everything else on hold.
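For anyone hitting this, here is a sketch of how to spot it and the kind of adjustment the linked article discusses (the variable name is from the MariaDB docs; the value is illustrative and should be tested against your workload):

```bash
# Sessions stuck in the "Statistics" state point at optimizer analysis time.
mysql -e "SHOW FULL PROCESSLIST" | grep -i statistics
mysql -e "SHOW VARIABLES LIKE 'optimizer_search_depth'"   # default is 62
# Capping (or automating) the join-order search cuts time spent analyzing:
mysql -e "SET GLOBAL optimizer_search_depth = 0"          # 0 = server picks a depth
# Persist it in my.cnf, or on RDS in the parameter group: optimizer_search_depth = 0
```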

Why is a web server 2-3x slower in WSL than in VMware? (same Docker stack)

I have a docker compose setup that consists of:
wordpress:php8.1
mariadb:latest
traefik:v2.7
To rule out Windows networking differences, this is the benchmark, run from inside each environment:
```bash
curl --resolve example.com:443:172.18.0.3 --write-out '%{time_total}\n' --output /dev/null https://example.com
```
Where example.com is my custom domain, the IP is traefik container's current IP. I'm running the command SSH'd into VMware, and then from PowerShell after typing wsl. I copied the project from one place to the other as-is (kudos to Docker for portability).
It consistently returns ~0.2s for the VMware-backed instance, while it's 0.4-0.6s for WSL. It represents the load of the index.php itself which includes HTML source of a performant WP site, using a handcoded theme, no plugins. Locally, the staticized version doesn't seem to have a measurable difference, or very slight, all under 10ms on both systems.
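To smooth out one-off spikes, the same probe can be looped and averaged (a sketch reusing the placeholders from the command above):

```bash
# Run the probe 20 times and report the mean total time.
for i in $(seq 1 20); do
  curl --resolve example.com:443:172.18.0.3 -s -o /dev/null \
       -w '%{time_total}\n' https://example.com
done | awk '{ sum += $1 } END { printf "avg %.3fs over %d runs\n", sum / NR, NR }'
```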
Rest of the config:
Windows 11 22H2 22610.1, WSL 2, VMware 16.1.0 build-17198959, Docker installed the same way and NOT Docker Desktop.
UFW off.
The vmdk and vhdx files are on the same SSD.
Not using /mnt files across filesystems: everything is under the home folder of each system, respectively.
OS is Ubuntu 20.04.4 LTS in WSL, and Ubuntu 21.10 in VMware. I tried Ubuntu 22.04 LTS in WSL and VMware too, same numbers.
I even tried with Nginx + PHP-FPM variant of WP in WSL, and OpenLiteSpeed in VMware, nothing changes the numbers.
The only difference I know of is that I had to run sudo update-alternatives --config iptables and choose /usr/sbin/iptables-legacy to make Docker work in WSL, which I didn't need to do on VMware.
Several caveats with my answer:
I wish I had a solution for you, but your primary question is why, which this should answer. And perhaps the data here will help lead us to a solution.
I hope I'm wrong about this, and I may well be. I've always heard anecdotally that WSL2 performance was "near native." Your experience, however, coupled with my benchmarking below, leads me to believe that may not be the case.
That said, I'm going to report the data that I came up with in investigating this.
Short summary -- My benchmarking seems to show a substantial disk IO and memory performance delta between Hyper-V and VMWare that likely explains your WordPress results.
Supporting data and research
I started out with a similar test scenario as yours, but attempted to reduce it to as much of an MRE as I could:
Configuration
Hardware:
i9500
16GB of RAM
SSD
Fairly fresh/recent install of Windows 11 Pro, with WSL2 enabled
Fresh install of VMWare Workstation 16 Player
Virtualization:
Default VMWare settings (2 CPUs, 4GB RAM)
Default WSL2 settings (6 CPUs, 8GB RAM)
In both WSL2 and VMWare:
Ubuntu Server 20.04 guest/distribution
Docker installed in both (from official repo, not Docker Desktop)
ubuntu:latest (22.04) Docker image
MySQL server (MariaDB) and Sysbench installed in the Ubuntu 22.04 Docker container
Note that for the benchmarking below:
I closed VMWare when testing WSL2, and vice-versa.
I did not reboot the Windows host between tests. However, note that some of these tests were run multiple times, from both VMWare and WSL2/Hyper-V with no substantial differences in the results, so I do not believe a reboot would have changed the results measurably.
Benchmarking
I started off with some basic Sysbench testing of CPU and Memory. This was done from within the Docker container.
A simple sysbench cpu --time=300 run:
| | VMWare | WSL2 |
| --- | --- | --- |
| events/sec | 1,250.97 | 1,252.89 |
| # events | 375,294.00 | 375,869.00 |
| Latency | | |
| ↪ Min | 0.77 | 0.77 |
| ↪ Avg | 0.80 | 0.80 |
| ↪ Max | 31.40 | 4.07 |
| ↪ 95th percentile | 0.87 | 0.86 |
Pretty much an even matchup there.
sysbench memory run:
| | VMWare | WSL2 |
| --- | --- | --- |
| Total operations | 64,449,416.00 | 6,456,274.00 |
| MiB transferred | 62,938.88 | 6,304.96 |
| Latency | | |
| ↪ Min | 0.00 | 0.00 |
| ↪ Avg | 0.00 | 0.00 |
| ↪ Max | 23.63 | 0.12 |
| ↪ 95th percentile | 0.00 | 0.00 |
Ouch - WSL2's Docker image is running at about 10% of the memory bandwidth of VMWare. I'll be honest; it was tough to spot this until I inserted the comma separators here in the table ;-). At first glance, I thought the two were on par.
I decided to skip straight to MySQL testing, also using Sysbench, since this would probably provide the closest match to your WordPress usage. This was done (after a corresponding prepare) with:
```bash
sysbench oltp_read_write.lua --mysql-user=root --time=300 --tables=10 --table-size=1000000 --range_selects=off --report-interval=1 --histogram run
```
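(For reference, the matching prepare step would presumably mirror the run flags:)

```bash
sysbench oltp_read_write.lua --mysql-user=root --tables=10 --table-size=1000000 prepare
```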
I'll skip the histogram and second-by-second results (but I have them saved if they are useful to anyone), but here's the summary data:
| | VMWare | WSL2 |
| --- | --- | --- |
| Queries performed | | |
| ↪ Read | 583,220 | 66,910 |
| ↪ Write | 233,288 | 26,764 |
| ↪ Other | 116,644 | 13,382 |
| ↪ Total | 933,152 | 107,056 |
| Transactions | 58,322 | 6,691 |
| Ignored errors | 0 | 0 |
| Reconnects | 0 | 0 |
| Latency | | |
| ↪ Min | 2.08 | 14.54 |
| ↪ Avg | 5.14 | 44.83 |
| ↪ Max | 71.67 | 193.75 |
| ↪ 95th Percentile | 11.65 | 81.48 |
Again, ouch -- WSL2's MySQL performance (in Docker, at least) is benchmarking at around a tenth of that of VMWare. It's likely that most of your observed performance difference is represented in these results.
At this point, I began to suspect that the problem could be reproduced in a more generic (IO) way at the hypervisor level, ignoring WSL2 and Docker entirely. WSL2, of course, runs inside a (hidden to the user) Hyper-V backed VM, even though it doesn't require the full Hyper-V manager.
I proceeded to enable Hyper-V and install another Ubuntu 20.04 guest in it. I then installed Sysbench in both the VMWare and Hyper-V guest Ubuntu OS.
I then ran a disk IO comparison with:
```bash
sysbench fileio --file-total-size=15G prepare
sysbench fileio --file-total-size=15G --file-test-mode=rndrw --time=300 --max-requests=0 --histogram run
```
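(For anyone reproducing this: sysbench fileio also has a cleanup step that removes the ~15G of test files afterwards.)

```bash
sysbench fileio --file-total-size=15G cleanup
```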
The results bore out the suspicion:
| | VMWare Ubuntu Guest | Hyper-V Ubuntu Guest |
| --- | --- | --- |
| File operations | | |
| ↪ Reads/sec | 2,847.07 | 258.37 |
| ↪ Writes/sec | 1,898.05 | 172.25 |
| ↪ fsyncs/sec | 6,074.06 | 551.20 |
| Throughput | | |
| ↪ MiB/sec Read | 44.49 | 4.04 |
| ↪ MiB/sec Written | 29.66 | 2.69 |
| Latency | | |
| ↪ Min | 0.00 | 0.00 |
| ↪ Avg | 0.09 | 1.02 |
| ↪ Max | 329.88 | 82.77 |
| ↪ 95th Percentile | 0.32 | 4.10 |
One interesting thing to note during this stage was that the prepare operation for Sysbench was faster on Hyper-V by about 30% (IIRC). I did not capture the results since the prepare step isn't supposed to be part of the benchmarking.
However, after reading your comment and benchmarking results on unzip being faster on WSL2, I'm thinking there may be a connection. Both VMWare and Hyper-V/WSL2 use dynamically resized virtual disks (sometimes called "sparse"). The size of the virtual disk on the host OS essentially starts as a near-0-byte file and grows as needed up to its maximum size.
It may be that either:
Hyper-V has a performance advantage when growing the virtual disk.
Or in our testing, VMWare needed to grow the disk for these operations but the Hyper-V/WSL2 disk already had excess free space (from previously deleted files) available.
I cannot say for sure exactly which order I did things in, and the only way to know for sure would be to "shrink/compress" the virtual disks and try again.
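One way to probe that theory without shrinking anything (a sketch; the paths are illustrative and <you> stands for the Windows user name) is to watch the on-host size of each virtual disk file across a benchmark run:

```bash
# If the file grows during the run, the hypervisor paid the expansion cost
# mid-benchmark; if it is already large, the space was grown earlier.
ls -lh /mnt/c/Users/<you>/Documents/"Virtual Machines"/*/*.vmdk                    # VMWare disk(s)
ls -lh /mnt/c/Users/<you>/AppData/Local/Packages/Canonical*/LocalState/ext4.vhdx  # WSL2 disk
```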
Summary
It appears, to my naïve eye and at least on the "Pro" level of Windows, that Hyper-V has some serious performance limitations when compared to VMWare.
Tuning attempts and other comparisons
I did attempt some tuning of the Hyper-V system, but I'm no expert in that area. Regardless, there's not much that we as users could do to extend any Hyper-V tuning to WSL2 -- Microsoft would have to make most of those changes.
I did try converting the dynamic VHDX to a fixed one, in hopes that it would increase IO, but it did not make a substantial change either.
I've also now tried:
Disabling swap in WSL2
Running the Sysbench tests with more threads
Setting a fixed 12GB RAM size for WSL2 (this and the swap setting live in .wslconfig; see the sketch after this list)
Running sysbench under WSL2 on my primary desktop, which has faster memory and an NVMe drive compared to my test system's SSD.
Memory speed was significantly improved - Apples-to-oranges, but my desktop's memory numbers were comparable to VMWare running on the lower-end test system.
However, this made no difference on the disk IO numbers. They were still in the same range as the test system.
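For reference, the swap and memory settings above live in a .wslconfig file on the Windows side (a sketch; <you> is a placeholder and the values mirror the tests above):

```bash
# Write %UserProfile%\.wslconfig from inside WSL, then restart WSL to apply.
cat > /mnt/c/Users/<you>/.wslconfig <<'EOF'
[wsl2]
memory=12GB
swap=0
EOF
# From Windows: wsl --shutdown   (WSL picks up the new limits on next start)
```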
Next steps
It would be great if you could, perhaps, run some similar benchmarking apart from your WordPress instance, since you already have the two environments set up. If we can corroborate the data, we should probably report it to, at least, the WSL team. They can hopefully either provide some guidance on how to tune WSL2 to near parity with VMWare, or work with the Hyper-V team on this.
Again, it does seem surprising to me that the disparity between Hyper-V and VMWare is so great. I still struggle to believe that I myself am not doing something wrong in the benchmarking.
After taking a close look at WSL and classic VMs and refreshing my memory a bit, I have reached a theory, but I cannot prove it.
I hope this answer helps in some way, or attracts someone with direct knowledge of this question.
I asked here in the comments and to myself:
Is it possible that Hyper-V is just configured to use a much smaller share of 'raw power' than VMWare? (i.e., Microsoft giving WSL little priority compared to Windows, with Windows getting almost all the available resources.) Or is it a more intrinsic problem with Hyper-V performance (coding, etc.)?
This comes from my understanding that WSL seems intended more for accessing Linux commodities from Windows than for resource-intensive activity (including network throughput) like hosting a web page. Given how WSL is integrated, it seems intuitive to think it will run faster than a typical VM, but a VM is fully configurable; you can give it almost full access to your resources.
If you look at these answers, you can see that it doesn't really seem to be intended to replace a VM per se.
So I think WSL is probably not configured for this kind of task, nor is it meant to be configurable enough to change that.
I think the main usage Microsoft was aiming at with WSL was to give Windows users a dynamic workflow in which you can switch between Windows and Linux commodities (since, IMHO, Linux is much better at console work than Windows), but NOT to make WSL a full-fledged VM with all of its characteristics. That also makes sense: why would you want to build, say, a web page and host it in a Linux environment that has the burden of sharing resources with a Windows environment you are not using AND that is the 'main' one?
There are known networking bottlenecks in WSL2.
See this GitHub issue for potential workarounds: https://github.com/microsoft/WSL/issues/4901
Some solutions that have worked for others:
https://github.com/microsoft/WSL/issues/4901#issuecomment-664735137
https://github.com/microsoft/WSL/issues/4901#issuecomment-909723742
https://github.com/microsoft/WSL/issues/4901#issuecomment-957851617

ODBC Suddenly 100x Slower

Running a simple SELECT * FROM TABLE on a small table (10k rows) takes 1 minute. It used to take 1 second, and still takes 1 second on my coworkers' computers. Running queries on larger tables in reasonable time is now impossible.
I've tried querying from Teradata SQL Assistant and a Jupyter Notebook; both are extremely slow. I've tried querying various ODBC Data Sources, all are suddenly slow.
I've tried deleting and then adding back the Data Sources in my ODBC Administrator. I've compared my ODBC settings to those of coworkers and they're exactly the same.

Symfony2 slow page loads despite quick initialization/query/render time?

I'm working on a Symfony2 project that is experiencing a slow load time for one particular page. The page in question does run a pretty large query that includes 16 joins, and I was expecting that to be the culprit. And maybe it is, and I am just struggling to interpret the figures in the debug toolbar properly. Here are the basic stats:
Peak Memory: 15.2 MB
Database Queries: 2
Initialization Time: 6ms
Render Time: 19ms
Query Time: 490.57 ms
TOTAL TIME: 21530 ms
I get the same basic results, more or less, in three different environments:
php 5.4.43 + Symfony 2.3
php 5.4.43 + Symfony 2.8
php 5.6.11 + Symfony 2.8
Given that initialization + query + render time is nowhere near the TOTAL TIME figure, I'm wondering what else comes into play, and what other methods I could use to identify the bottleneck.
Currently, the query is set up to pull ->getQuery()->getResult(). From what I've read, this can present huge overhead, as returning full result objects means that each of the X objects needs to be hydrated. (For context, we are talking about fewer than 50 top-level/parent objects in this case.) Consequently, many folks suggest using ->getQuery()->getArrayResult() instead, returning simple arrays as opposed to hydrated objects to drastically reduce the overhead. This sounded reasonable enough to me, so, despite it requiring some template changes for the page to render the alternate type of result, I gave it a shot. It did reduce the TOTAL TIME, but by a generally unnoticeable amount (from 21530 ms to 20670 ms).
I have been playing with Docker as well, and decided to spin up a minimal docker environment that uses the original getResult() query in Symfony 2.8 code running on php 7. This environment is using the internal php webserver, as opposed to Apache, and I am not sure if that should/could have any effect. While the page load is still slow, it seems to be markedly improved on php 7. The other interesting part is that, while the TOTAL TIME was reduced a good deal, most of the other developer toolbar figures went up:
Peak Memory: 235.9 MB
Queries: 2
Initialization Time: 6 ms
Render Time: 53 ms
Query Time: 2015 ms
TOTAL TIME: 7584 ms
So, the page loads on php 7 in 35% of the time that it takes on php 5.4/5.6. This is good to know, and provides a compelling argument for upgrading. That being said, I am still interested in figuring out what common factors explain large discrepancies between TOTAL TIME and the sum of [initialization time + query time + render time]. I'm guessing that I shouldn't expect these numbers to line up exactly, but I notice that, while still off, they are significantly closer in the php 7 Docker environment than in the php 5.4/5.6 environments.
For the sake of clarity, the docker container naturally spins up with a php.ini memory_limit setting of -1. The other environments were using 256M, and I even dialed that up to 1024M, but saw no noticeable change in performance, and the "Peak Memory" figure stayed low. I tried re-creating the Docker environment with 1024M and also did not notice a difference there.
Thanks in advance for any advice.
EDIT
I've tested loading the page via the php 5.6 / Symfony 2.8 environment using php's internal webserver, and it loads in about half the time. Still not as good as php 7 + internal server, but at least it gives me a solid lead that something about my Apache setup is at least significantly related (though not necessarily the sole culprit). Any/all advice/suggestions welcome!

PgAdmin III not responding when trying to restore a database

I'm trying to restore a database from a backup file, using PgAdmin III (PostgreSQL 9.1).
After choosing the backup file, a window indicates that pg_restore.exe is running, then PgAdmin stops responding; it has been a few hours (it is not a RAM shortage issue).
It might be due to the backup file size (500 MB), but I already restored a database from a 300 MB backup file a few days ago, and that went smoothly.
By the way, the format of the backup file (created via pg_dump) is the "tar" format.
Please let me know if anything comes to mind or if you need any more information. I appreciate any help or pointers anyone has. Thanks in advance.
I had the same problem and solved it by following a tutorial on a web site.
The file generated in my backup had a size of 78 MB; I generated it again using:
Format: Custom
Ratio:
Encoding: UTF8
Rolename: postgres
I tried the restore again, and then it worked fine.
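For reference, those dialog settings map roughly onto plain pg_dump/pg_restore calls like these (a sketch; mydb and the file name are placeholders):

```bash
pg_dump -U postgres -Fc -E UTF8 -f mydb.backup mydb    # -Fc = the "Custom" format
pg_restore -U postgres -d mydb --verbose mydb.backup   # --verbose reports progress
```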
On OS X I had similar issues. After selecting the .backup file and clicking restore in pgAdmin I got the spinning beachball. Everything however was working in the background, the UI was just hanging.
One way you can check that everything is working is by looking at Activity Monitor on OS X or Resource Monitor on Windows. If you look at the 'Disk' tab you should see activity from postgres and you should see the value in the 'Bytes Read' column slowly going up. I was restoring a ~15G backup and the counter in the 'Bytes Read' column slowly counted up and up. When it got to 15G it finished without errors and the pgAdmin UI became active again.
(BTW, my 15G restore took about 1.5 hours on a Mac with an i7 and 8G of RAM)
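Another way to confirm the restore is alive is to ask the server what it is executing (a sketch for PostgreSQL 9.1, the version in the question; 9.2+ renamed the columns to pid and query):

```bash
psql -U postgres -c "SELECT procpid, current_query FROM pg_stat_activity;"
```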
