I have installed a PostgreSQL 9.1 server on my production machine.
I changed the configuration (postgresql.conf) to suit the system.
Everything worked fine for a week.
Then the PostgreSQL server suddenly became very slow.
Even a count(*) query on a table takes far too long.
Since then I have tried a number of things:
Monitored the system load: normal, in the range of 0.5 to 1.5.
Monitored the number of users logged into the application: 200 to 400, which is normal.
Recreated the indexes.
Killed the idle transactions (a query for this is sketched after this list).
Checked locks (no deadlock found).
Restarted the application server.
Restarted the database server.
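For reference, on 9.1 idle-in-transaction sessions can be found and terminated with a query along these lines (the pg_stat_activity column names shown are the pre-9.2 ones, and the 10-minute threshold is just an example):

    -- Terminate sessions that have sat idle inside a transaction for a while.
    SELECT pg_terminate_backend(procpid)
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
      AND xact_start < now() - interval '10 minutes';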
After all of these activities, the performance of the database server did not improve.
Even normal queries take a very long time.
Then I dropped and recreated the database, and performance improved.
Everything works after recreating the database.
But after a few days the performance suddenly drops again.
This sounds like the active portion of the database is growing large enough that it doesn't fit in cache, causing actual disk access (which is orders of magnitude slower) for many of your reads. This is often caused by not vacuuming aggressively enough.
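As a rough first check on 9.1, a query along these lines shows which tables are accumulating dead rows and when autovacuum last touched them (a sketch, not a definitive diagnosis):

    -- Tables with many dead rows that autovacuum has not visited recently
    -- are good candidates for more aggressive vacuum settings.
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;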
Other factors could be related to your configuration of PostgreSQL and the operating system. It's hard to give much advice without knowing:
exactly what version of PostgreSQL you're using (9.1 tells us the major release, but the minor release sometimes matters),
how you have PostgreSQL configured,
what OS you're running on,
what hardware you are using (cores, RAM, drive arrays, controllers), etc.
Part of that can be supplied by posting the results of running the query on this page:
http://wiki.postgresql.org/wiki/Server_Configuration
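At the time of writing, the query on that wiki page was along these lines (check the page itself for the current version):

    -- Lists every setting that has been changed from its default value.
    SELECT name, current_setting(name), source
    FROM pg_settings
    WHERE source NOT IN ('default', 'override');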
It might also help to select relname, relpages, and reltuples from pg_class for the tables involved and compare numbers when things are running well to when they are slow.
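Something like this, run for the affected tables both when things are fast and when they are slow (the table names are placeholders):

    -- relpages growing while reltuples stays roughly flat suggests bloat.
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('your_table_1', 'your_table_2');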
With the additional information, people should be able to make some pretty specific recommendations.
I am not able to find MariaDB's recommended RAM, disk, and CPU core capacity. We are setting up an initial deployment with a very small data volume, so I just need MariaDB's recommended minimum capacity.
I appreciate your help!
Seeing that microservice architecture has been growing rapidly over the last few years, and each microservice usually needs its own database, I think this type of question is actually becoming more relevant.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and I was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL database would be...
Anyway, I got this helpful answer from here that I thought I could also share in case someone else is looking into it...
When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400MB of RAM, which isn't noticeable on a database server with 64GB of RAM, but it is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128MB, you're well over your 512MB RAM allotment, and that doesn't include anything from the operating system.
1 CPU core is more than enough for most MySQL/MariaDB installations.
512MB of RAM is tight, but probably adequate if only MariaDB is running. You would, however, need to aggressively shrink various settings in my.cnf (see the sketch after this list). Even 1GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
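For reference, these are the sort of settings that dominate the memory footprint and are worth inspecting and shrinking in my.cnf on a small VM (a sketch; the list is not exhaustive):

    -- Shows the current values; the values themselves are changed in my.cnf.
    SHOW VARIABLES WHERE Variable_name IN
      ('innodb_buffer_pool_size',
       'key_buffer_size',
       'tmp_table_size',
       'max_heap_table_size',
       'max_connections',
       'performance_schema');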
There are minor differences in requirements between operating systems and between versions of MariaDB.
Turn off most of the performance_schema. If all the flags are turned on, a lot of RAM is consumed.
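If you are unsure how much it is costing you, this statement reports the memory the performance_schema engine has allocated (the total appears in the final performance_schema.memory row of the output):

    -- Internal memory allocation of the performance_schema engine.
    SHOW ENGINE PERFORMANCE_SCHEMA STATUS;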
20 years ago I had MySQL running on my personal 256MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk. If you have only a few MB of data, then disk is not an issue.
Look at it this way -- What is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably cannot work, even before you add apps.
MariaDB and MySQL both actually use very little memory. About 50 MB to 150 MB is the range I found on some of my servers. These servers are running a few databases, each with a handful of tables and limited user load. The MySQL documentation claims it needs 2 GB, which is very confusing to me. I understand why MariaDB does not specify any minimum requirements: if they say 50 MB, a lot of folks will want to disagree; if they say 1 GB, they are unnecessarily inflating the minimum requirements. Come to think of it, more memory means better caching and performance. However, a well-designed database can do disk reads every time without any performance issues. My Apache installs (on the same server) consistently use more memory (about double) than the database.
Why is it that every time the server goes down and ASP.NET restarts, the response time is SUPER FAST for a few minutes when it comes back up? I assume it's because everyone is off the server and I am one of the few (or only) people back on it so quickly.
I have discussed this with our developers and they say the response time is due to everyone being on the server normally (200+ desktops), and when you are the only person on there, it flies. Really? Does that mean we need newer, faster web servers?
I am not a programmer, but I think there may be two answers: one, what the devs say above is true; two, the system is accumulating temp files of some sort that get cleared out when the server crashes and then restarts.
How do we prove who might be right? Where does one start to look for asp.net bottlenecks?
Windows Server 2003
ASP.NET 3.0
IIS 6
12 GB RAM
SQL Server 2005 (the DB admin says there is no load issue on SQL)
Some very basic steps you can follow to check whether your server is working at its limits:
First, download Process Explorer from Sysinternals and run it to check two things.
Is your server at its memory limit?
If yes, which program is eating the memory? Usually SQL Server 2005 uses a lot of memory for its database cache, and this builds up after it has been running for a long time (see the sketch after these steps).
Is the server using all of its computing power? If yes, check which program needs all that CPU.
Next, download TCPView from Sysinternals, run it, and see how many connections are open, how fast they are, and so on. There you can spot anomalies, or see whether the machine is at its limit there as well.
The final step is to defragment your disks.
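On the SQL Server memory point above: SQL Server holding a lot of RAM is usually by design (it caches data pages until told otherwise). A hedged sketch of how to see where the memory went and how to cap it on SQL Server 2005 follows; the column names are the 2005/2008 ones and the 8192 MB cap is only an example value:

    -- Which memory clerks hold the most memory (the buffer pool is usually on top).
    SELECT TOP 10 type,
           SUM(single_pages_kb + multi_pages_kb) AS kb_used
    FROM sys.dm_os_memory_clerks
    GROUP BY type
    ORDER BY kb_used DESC;

    -- Cap SQL Server's memory so the web application keeps some RAM.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 8192;
    RECONFIGURE;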
Also keep in mind that the ASP.NET session lock serializes requests that share a session. So if you have one web application with many users, and some of them make calls that take a long time to process, this can cause delays for the other users.
I have developed a program which uses an SQLite 3.7 ... database; in it there is a rather extensive write/read module that imports, checks, and updates data. This process takes 14 seconds on my PC and I'm pleased as punch with the performance.
I use transactions for everything, with parameters. My PC is an Intel i7 with 18 GB of RAM. I have not set anything in the database. I used SQLite Expert to create the database and the data structures, including tables and columns, and checked that all indexes are created. In other words, it's all OK.
I have since deployed the program/database to 2 other machines. That 14-second process takes over 5 minutes on the other machines. Same program, identical data, identical database. The machines are up to date; one is a 3rd-gen Intel i7 bought last week, and the other is quite fast as well, so hardware should not be an issue.
I'm just not understanding what the problem could be. Is it the database itself? I have not set anything other than encryption on it. Remember that I run the same thing and it takes 14 seconds. Could it be that the database is 'optimised' for my PC, so when I give it to others it's not optimised?
I know I could turn off journaling to get better performance, but that would only speed up the process and would still leave the problem.
Any ideas would be welcome.
EDIT:
I have tested the program on my 7-year-old dual Athlon with 3 GB of RAM running XP on an HDD, and the procedure took 35 seconds. Well within tolerable limits, considering. I just don't get what could be making 2 modern machines take 5 minutes.
I have an idea that it's a write issue, as reads on those machines are slower but quite acceptable.
SQLite speed is affected most by how well the disk does random reads and writes; any SSD is much better at this than any rotating disk.
Whenever changes overflow the internal cache, they must be written to disk. You should use PRAGMA cache_size to increase the cache to more than the default 2 MB (see the combined sketch after these points).
Changed data must be written to disk at the end of every transaction. Make sure that there are as many changes as possible in one transaction.
If much of your processing involves temporary tables or indexes, the speed is affected by the speed of the main disk. If your machines have enough RAM, you can force temporary data to RAM with PRAGMA temp_store.
You should enable Write-Ahead Logging.
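A minimal sketch of the settings above; the 64 MB cache size is an assumption, so tune it to the machine's RAM:

    -- cache_size and temp_store apply per connection;
    -- journal_mode = WAL is persistent for the database file.
    PRAGMA cache_size = -64000;   -- ~64 MB page cache instead of the ~2 MB default
    PRAGMA temp_store = MEMORY;   -- keep temporary tables and indexes in RAM
    PRAGMA journal_mode = WAL;    -- write-ahead logging

    -- Batch as many changes as possible into a single transaction:
    BEGIN;
    -- ... many INSERT/UPDATE statements ...
    COMMIT;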
Note: the default SQLite distribution does not have encryption.
I have a large ASP.NET website on a hosted platform. It shares the machine with a lot of other applications. We do not have access to the machine itself (only an FTP account).
Our client is complaining that it is starting to perform rather badly, particularly around peak hours. I've run some remote measurements (using a JMeter-like tool) that tells me that, yes, it does indeed perform rather badly during peak hours. It doesn't tell me why though. The client is resisting a move to a dedicated server without some hard facts.
As I see it, what I need are hard data about the machine itself. Setting up a local performance test environment would be extremely time-consuming, and I have no way to estimate the server performance.
My question: is there a good way to collect (a lot) of performance measurements when I have limited access to the machine, and certainly no access to the performance monitor? Any code would have to run in the asp.net application itself, without screwing it up too much.
We had a similar problem with our asp.net application hosted on a shared server, which also started to perform badly during peak hours.
Although I don't know of an elegant solution to your question, this is what we did:
Talk to your host providers to see what additional information they can give you - it's in their best interest to keep their clients happy. Our host providers were able to give us some time with one of their network engineers who provided us with some decent CPU and memory utilization stats.
Take your own performance measurements by dumping information to a log file (using log4net) and/or the database: for example, user sessions, search times, page hits, and timing measurements around key functionality (a sketch of such a timing table follows this list). From this information we were able to ascertain what our system's normal behavior was for a set number of automated tests.
Set up a local server (not necessarily with the same specs as the hosted/production server) with your application loaded and give it full load/performance/capacity testing (we used Red Gate's ANTS Profiler). The stats that you gather will give you and your client a good indication of how the system should behave under certain loads in a known environment. Yes, this can be time-consuming, but it will give you a great performance-measuring tool so that you can catch and fix bottlenecks locally rather than in production.
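As an illustration of the kind of timing table mentioned above, here is a minimal T-SQL-flavoured sketch; the table and column names are hypothetical, so adjust them for your own schema and database engine:

    -- Hypothetical timing log written by the application around key calls.
    CREATE TABLE PerfLog (
        Id         INT IDENTITY(1,1) PRIMARY KEY,
        LoggedAt   DATETIME      NOT NULL DEFAULT GETDATE(),
        Page       NVARCHAR(200) NOT NULL,  -- page or operation name
        SessionId  NVARCHAR(50)  NULL,
        DurationMs INT           NOT NULL   -- elapsed time measured in the app
    );

    -- Which pages are slow, hour by hour?
    SELECT Page,
           DATEPART(hour, LoggedAt) AS Hr,
           COUNT(*)        AS Hits,
           AVG(DurationMs) AS AvgMs,
           MAX(DurationMs) AS MaxMs
    FROM PerfLog
    GROUP BY Page, DATEPART(hour, LoggedAt)
    ORDER BY AvgMs DESC;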
Good luck.
How much traffic can one web server handle? What's the best way to see if we're beyond that?
I have an ASP.NET application that has a couple hundred users. Aspects of it are fairly processor-intensive, but thus far we have done fine with only one server running both SQL Server and the site. It's running Windows Server 2003, 3.4 GHz with 3.5 GB of RAM.
But lately I've started to notice slowdowns at various times, and I was wondering what the best way is to determine whether the server is overloaded by the application's usage or whether I need to do something to fix the application (I don't really want to spend a lot of time hunting down little optimizations if I'm just expecting too much from the box).
What you need is some info on capacity planning.
Capacity planning is the process of planning for growth and forecasting peak usage periods in order to meet system and application capacity requirements. It involves extensive performance testing to establish the application's resource utilization and transaction throughput under load. First, you measure the number of visitors the site currently receives and how much demand each user places on the server, and then you calculate the computing resources (CPU, RAM, disk space, and network bandwidth) that are necessary to support current and future usage levels.
If you have access to some profiling tools (such as those in the Team Suite edition of Visual Studio), you can try setting up a testing server and running some synthetic requests against it to see whether any specific part of the code takes unreasonably long to run.
You should probably check some graphs of CPU and memory usage over time before doing this, to see if that can even be the cause. (A number akin to the UNIX "load average" could be a useful metric; I don't know if Windows has anything like it. Basically, it's the average number of threads that want CPU time in each time slice.)
Also check the obvious, that you aren't running out of bandwidth.
Measure, measure, measure. Rico Mariani always says this, and he's right.
Measure req/sec, RAM, CPU, Sessions, etc.
You may come up with a caching strategy (Output caching, data caching, caching dependencies, and so on.)
Also check how your SQL Server is doing... indexes are a good place to start, but they are not the only thing to look at.
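As a hedged starting point on SQL Server 2005 and later, the index usage DMV shows which indexes are being scanned heavily but rarely used for seeks, which often points at missing or unhelpful indexes:

    -- Index usage since the last service restart, for the current database.
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name                   AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID()
    ORDER BY s.user_scans DESC;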
On that hardware, a .NET application should be able to serve about 200-400 requests per second. If you have only a few hundred users, I doubt you are seeing even 2 requests per second, so I think you have a lot of capacity on that box, even with SQL server running.
Without knowing all of the details, I would say no, you will not see any performance improvement by adding servers.
By the way, if you're not using the Output Cache, I would start there.