I was getting 52 out of 100 for speed from Google PageSpeed Insights for the website I'm hosting, and I'm trying to improve the server response time. From searching on Google, it looks like I need to tweak some settings in my httpd.conf, such as KeepAlive and MaxRequestWorkers, since I run httpd 2.4.12. I'm a bit paranoid about making changes to httpd.conf. Do I need the worker MPM to be able to use KeepAlive and MaxRequestWorkers, or can I just add them to the conf file?
I ran a quick command on my system (Ubuntu Server 12.04.5 LTS, 32-bit):
$ free -lm
                   total       used       free     shared    buffers     cached
Mem:                 999        926         72          0         11         73
Low:                 869        798         70
High:                130        128          1
-/+ buffers/cache:              841        157
Swap:               5720        954       4766
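Reading that output: the "used" figure on the Mem: line includes buffers and page cache, which the kernel releases on demand, so the -/+ buffers/cache line is the one that reflects real application usage. A quick sketch with the numbers above (values in MB, rounded the same way free reports them):

```shell
# "used" minus buffers and cache = memory actually held by processes
used=926; buffers=11; cached=73; total=999
real_used=$((used - buffers - cached))
real_free=$((total - real_used))
echo "$real_used $real_free"   # prints: 842 157
```

Within rounding, that matches the 841/157 shown on the -/+ buffers/cache line, so the box is tighter on memory than the raw "free" column suggests, but not as tight as "72 MB free" implies.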
I realize this is only 1G of RAM.
Any help would be appreciated. Thank you very much
One thing I would suggest for decreasing the server response time is using a WordPress caching plugin such as WP Super Cache (https://wordpress.org/plugins/wp-super-cache/).
This is the quickest way to drastically bring down server response time. You may have to load dynamic components via Ajax to keep those sections from returning cached results.
These plugins are simple to use and improve speed without much code tweaking.
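As for the httpd.conf part of the question: you don't need the worker MPM specifically. KeepAlive is a core directive and works with any MPM, while MaxRequestWorkers is accepted by the prefork, worker, and event MPMs alike; on 2.4 you simply set it alongside whichever MPM you have loaded. A minimal sketch, assuming the event MPM; the numbers are illustrative only and should be tuned for a 1 GB box and your actual workload:

```apacheconf
# httpd.conf -- illustrative values, not tuned for any real workload
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100

<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>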
I have been stuck on this for the last week and couldn't find a solution, so I'm asking for help here.
I am hosting my business platform on IIS running on Windows Server 2021.
The server has a 10 Gbps port, 125 GB of RAM, and 25 cores.
When a user downloads a file from my website, the transfer runs at just 100 kbps, 500 kbps at most.
My own internet connection is 200 Mbps, and I get that full speed on my PC in online speed tests.
Please help me get the highest possible download speed from my server. I must be missing something, but I have tried everything I can think of to speed it up.
For comparison, I get 4-5 Mbps when I download anything from Google Drive.
I get this error at least 20 times per minute in my production server logs.
My website goes down when the number of visitors reaches ~50.
Any suggestions?
[Fri Dec 14 23:52:32.339692 2018] [:error] [pid 12588] [client 81.39.153.171:55104] PHP Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 32 bytes) in /vendor/symfony/symfony/src/Symfony/Component/Debug/Exception/FlattenException.php on line 269
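For scale, the 536870912-byte limit in that log line is exactly 512 MB:

```shell
# bytes / 1024 / 1024 = megabytes
echo $((536870912 / 1024 / 1024))   # prints: 512
```

So PHP's memory_limit is already at 512M and individual requests are still exhausting it.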
In production you don't need the Debug component; to reduce memory use, install dependencies with composer install --no-dev --no-interaction --optimize-autoloader.
If you can access your server via SSH, check the memory consumption (for example with free -m or top).
My suggestion: if you have 50 concurrent visitors, this is a good time to upgrade the server.
You can also try reducing max_execution_time so that runaway scripts are killed sooner and release their memory.
The question is very vague, so this won't be precise...
Your limit is 512 MB and it still isn't enough, so that only leaves a few possibilities.
First check the logs to see if these errors are tied to any specific URL.
(If you don't have adequate logging, I recommend using Rollbar, it has a monolog handler, and takes only a few minutes to wire up. It is also free.)
You mentioned the visitor count... I'm not sure whether it's related. What kind of web server are you using?
Check for the usual suspects:
Infinite loops, recursions without an exit condition.
Large files (upload and download mostly)
Statistical modules with complex queries and a high limit are also a good place to check.
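To tie the errors to specific URLs (the first step above), correlating the error log with the access log is often enough. A rough sketch, shown here on two sample combined-format lines (the /stats URL is invented for illustration; in practice, pipe in your real access log, e.g. /var/log/apache2/access.log):

```shell
# Count which request URLs (field 7 of a combined-format log line)
# appear most often around the error bursts.
printf '%s\n' \
  '81.39.153.171 - - [14/Dec/2018:23:52:32 +0000] "GET /stats HTTP/1.1" 500 0' \
  '81.39.153.171 - - [14/Dec/2018:23:52:40 +0000] "GET /stats HTTP/1.1" 500 0' |
  awk '{print $7}' | sort | uniq -c | sort -rn | head
```

If one URL dominates the list, you have found the page to profile first.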
I'm currently using CloudFlare's services for my domain.
The interesting thing is that when I change my A record, the new website shows up after a few minutes.
I remember that before I used them, I had to wait 24 hours, and even 48 hours on some computers.
Is this because of them? If so, I guess it's because I only change the A record, while the domain's nameservers remain the same (theirs)?
Every DNS record has a "Time To Live" (aka TTL), which specifies how long DNS resolvers should remember an answer before they fetch a fresh copy.
For example:
dig +noall +answer stackoverflow.com
stackoverflow.com. 144 IN A 104.16.37.249
stackoverflow.com. 144 IN A 104.16.35.249
stackoverflow.com. 144 IN A 104.16.33.249
stackoverflow.com. 144 IN A 104.16.36.249
stackoverflow.com. 144 IN A 104.16.34.249
In this case, my resolver will remember this answer for "stackoverflow.com" for another 144 seconds. CloudFlare is probably using a smaller TTL than wherever your DNS records used to come from, which is why your changes now propagate in minutes rather than days.
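You can read the remaining TTL straight out of a dig answer line; it's the second column. A small sketch parsing a captured answer line (live output will of course show a different, counting-down value):

```shell
# The TTL is the second whitespace-separated field of a dig answer line.
answer="stackoverflow.com. 144 IN A 104.16.37.249"
ttl=$(echo "$answer" | awk '{print $2}')
echo "$ttl seconds until resolvers re-fetch this record"
```

Run dig repeatedly and you will see this number tick down to zero, at which point the resolver fetches a fresh answer and the countdown restarts at the zone's configured TTL.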
On my Ubuntu 12.04 VPS I am running a full Bitcoin node. When I first start it up it uses around 700 MB of memory. If I come back 24 hours later, free -m looks something like this:
                   total       used       free     shared    buffers     cached
Mem:                4002       3881        120          0         32       2635
-/+ buffers/cache:             1214       2787
Swap:                255          0        255
But then if I clear "cached" using
echo 3 > /proc/sys/vm/drop_caches
and then do free -m again:
                   total       used       free     shared    buffers     cached
Mem:                4002       1260       2742          0          1         88
-/+ buffers/cache:             1170       2831
Swap:                255          0        255
You can see the cached column has been cleared and I have far more free memory than it appeared before.
I have some questions:
what is this cached number?
my guess is that it's file data being cached for quicker disk access?
is it okay to let it grow and use all my free memory?
will other processes that need memory be able to evict the cached memory?
if not, should I routinely clear it using the echo 3 command I showed earlier?
Linux tries to utilize system resources efficiently: it caches data to reduce the number of I/O operations, thereby speeding up the system. Metadata is stored in buffers, and the actual file data is stored in the page cache. The cache is reclaimable, so other processes that need memory can evict it on demand; letting it grow is normal, and routinely clearing it is unnecessary. If you do clear it, run
sync
first so that any dirty data is flushed to disk; drop_caches only discards clean (already-written) pages, so no data is lost.
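The numbers in the question illustrate this: comparing the two free -m outputs, the drop in "used" after clearing is almost entirely page cache being released, not memory returned by the node itself:

```shell
# "used" before vs after echo 3 > drop_caches (values from the question, in MB)
used_before=3881; used_after=1260
echo $((used_before - used_after))   # prints: 2621
```

Roughly 2.6 GB of the apparent usage was reclaimable cache, which matches the cached column falling from 2635 to 88. The node's own footprint (the -/+ buffers/cache "used" figure) barely moved: 1214 before, 1170 after.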
How can I investigate the 2.95 s latency on the very first response from my VPS?
My VPS (2 cores, 4 GB RAM, 100 GB HDD) is hosted with a reputable provider.
The server runs CentOS 6.5, nginx 1.4.4, PHP 5.4.23, MySQL 5.5.35, and WordPress 3.7 with W3 Total Cache. Caching seems to work, and gzip is enabled in the nginx conf for all media.
When I look at the Network panel in Chrome dev tools, the very first GET request gets its response in around 2.9 seconds. In other words, HTML generation plus network travel takes 2.9 seconds.
Then, starting from that first response, the rest of the site loads in the next 2.2 seconds, taking the total to over 5 seconds.
A test PHP page that queries the DB and renders output has under 70 ms latency in that first step.
What's the scope for improvement other than adding CPU cores? Can the server be tuned with settings, or is this the limit for the given page complexity (theme, etc.), with nothing left but adding hardware?
Disk I/O perf: dd results: 1.1 GB copied in 3.5-6 s (180-300 MB/s)
PS: I am aware of other SO questions; most recommend a cache plugin, an Apache mod setting, etc. I am posting this after spending enough time digging through them.
Xdebug will show you, per script, how much time your server spends executing it: http://xdebug.org/docs/profiler
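If you go that route, profiling is off by default. A minimal php.ini sketch for Xdebug 2.x, the series that matches this PHP 5.4 stack (the setting names changed in Xdebug 3):

```ini
; php.ini -- enable the Xdebug 2.x profiler
xdebug.profiler_enable = 1
; cachegrind output files land here; inspect them with KCachegrind/QCacheGrind
xdebug.profiler_output_dir = /tmp/xdebug
```

Profile the front page once, open the resulting cachegrind file, and the functions eating most of your 2.9 seconds will be at the top of the call tree.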