What should the HTTP timing look like? - http

This is the timing graph for one of my sites, https://www.alebalweb-blog.com, taken from the first line of the Firefox developer tools -> Network panel, and I'm not sure the Blocked and Waiting entries are "normal".
Waiting, I suspect, is the server's fault. It's a small VPS on Vultr running Ubuntu 18.04; the other day I updated to php7.4-fpm, and I haven't activated OPcache, memcached, APCu or anything else yet, because (unfortunately) my sites are small, with fewer than a thousand visits a day, and I don't know whether it makes sense to activate caching systems. Could they also affect indexing and ranking on search engines?
Even so, Yandex and Bing generate a lot of work for my little server... and maybe they are the only ones who would benefit from a cache?
Blocked is more confusing. I'm not even sure it's me; doesn't all of that happen before the request reaches my server? Maybe it's Vultr's fault? Maybe NameSilo's (where the domains are registered)? Maybe mine, some Apache configuration or something else? Or maybe these are normal values? I have no idea.
Can anyone help me understand whether these are normal values? And if they are not, how I can improve them?
-------------------------update------------------------
I have read the pages you suggested; even there, people do not seem to have understood much or found a solution...
I did a few things on my little server: blocked Yandex, enabled OPcache, and installed memcached.
The intent is to stabilize things, so I can begin to understand what is going on.
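To check that both are actually in effect, a minimal PHP sketch like the following can help (the memcached host and port are assumptions, the usual defaults):

```php
<?php
// Sanity-check that OPcache and memcached are really active.
// Run this through the web server, not the CLI: OPcache is usually
// disabled for the CLI SAPI, so a command-line run can be misleading.

$status = function_exists('opcache_get_status') ? opcache_get_status(false) : false;
if ($status !== false && $status['opcache_enabled']) {
    printf("OPcache enabled, hit rate: %.1f%%\n",
        $status['opcache_statistics']['opcache_hit_rate']);
} else {
    echo "OPcache is NOT active for this SAPI\n";
}

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211); // assumed default host/port
$mc->set('memc_ping', 1, 10);       // set/get round trip is the reliable test
echo $mc->get('memc_ping') === 1
    ? "memcached is responding\n"
    : "memcached is NOT responding\n";
```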
I have done many other tests these days, and I have seen results like these:
This is another site, but it is on the same server; the highlighted request is Matomo (statistics). The JavaScript tracking script is on a subdomain, but still on the same server.
The difference is enormous, and the tests were done within seconds of each other.
So at this point maybe the question is: do you have any suggestions on what else I can do to start understanding something?
At least to understand whether these timings are caused by me, my server, my sites' scripts, the browsers, the connection, or something else.
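One way to separate those suspects is to time the same request outside the browser. Here is a minimal PHP sketch using curl's timing counters (each value is cumulative from the start of the request, mirroring the phases Firefox shows):

```php
<?php
// Break one request into the same phases the browser waterfall shows.
// Run it several times, and from different machines/networks if possible,
// to see which phase is the unstable one.

$url = 'https://www.alebalweb-blog.com/';

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
]);
curl_exec($ch);

// All values are seconds since the request started (cumulative).
printf("DNS lookup:   %6.0f ms\n", curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME)    * 1000);
printf("TCP connect:  %6.0f ms\n", curl_getinfo($ch, CURLINFO_CONNECT_TIME)       * 1000);
printf("TLS done:     %6.0f ms\n", curl_getinfo($ch, CURLINFO_APPCONNECT_TIME)    * 1000);
printf("First byte:   %6.0f ms\n", curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME) * 1000);
printf("Total:        %6.0f ms\n", curl_getinfo($ch, CURLINFO_TOTAL_TIME)         * 1000);
curl_close($ch);
```

If "First byte" dominates, the time is going into PHP and the server; if "TCP connect" or "TLS done" dominate, the problem sits in front of the application.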

None of what you've posted looks very bad, but your service is sometimes taking more than 6 s to respond to the initial connection request. There are probably a lot of small things wrong that you can fix; I would start by looking at this question, which addresses the same problem I'm seeing with your site.

The timing looks a bit large to me.
It seems the server does not respond for about 150 ms (Blocked), especially on the main page.
Then it takes up to 150 ms for TLS setup, about 200 ms to load content, and so on.
But this is not stable.
Sometimes it took about 800 ms to receive the homepage; sometimes the whole thing took less than 200 ms.
Most likely these are server issues (your virtual server shares a physical machine with other servers).
And just for reference:
What does "Blocked" really mean in the Firefox developer tools Network monitoring?
Also, some general things to consider when troubleshooting:
I suggest creating a local (localhost) version of the site, then:
Check the time actually required to render the homepage (in the server log; see the sketch after this list)
Temporarily remove gzip compression
Temporarily remove HTTPS
Temporarily remove output buffering in PHP (hopefully your code does not need it)
Check whether any "post-processing" content hooks are active in PHP
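For the first point, here is a minimal sketch of one way to log server-side render time (the file name, log destination, and the auto_prepend_file approach are assumptions, not the only option):

```php
<?php
// rendertime.php - log how long PHP spends building each page, independent
// of DNS, TLS and browser effects. Load it on every request by setting
// auto_prepend_file = /path/to/rendertime.php in php.ini (or the FPM pool).

$GLOBALS['__render_start'] = microtime(true);

register_shutdown_function(function () {
    $ms = (microtime(true) - $GLOBALS['__render_start']) * 1000;
    error_log(sprintf('%s %s rendered in %.1f ms',
        $_SERVER['REQUEST_METHOD'] ?? '-',
        $_SERVER['REQUEST_URI'] ?? '-',
        $ms));
});
```

Comparing these numbers with the browser's Waiting column shows how much of the delay is PHP itself versus everything in front of it.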

Related

Amazon AWS EC2 - Bitnami - WordPress configuration - extremely slow loading assets

I am trying to test the feasibility of moving my website from GoDaddy to AWS.
I used a WordPress migration plugin, which seems to have moved the complete site; at least superficially, everything appears to have been moved properly.
However, when I try to access the site, it is extremely slow. Using developer tools, I can tell that some of the CSS and JPG images are effectively acting as blocking requests.
However, I cannot tell why this is the case. The site loads in less than 3 seconds on GoDaddy, yet it takes over a minute to load fully on AWS, and at least a few requests time out. The waterfall view in Chrome developer tools shows a lot of waiting on multiple requests, and I cannot figure out why these requests wait forever and time out.
Any guidance is appreciated.
I have pointed the current instance to www. blind beliefs .com
I cannot figure out whether it is an issue with the Bitnami WordPress AMI or whether I am doing something wrong. Maybe I should go the traditional route of spinning up an EC2 instance, running a web server on it, connecting it to a database, and then installing WordPress myself. I just felt the available AMI took care of all that tailoring without me having to do it manually.
However, it is difficult to debug why certain assets get blocked, load extremely slowly, or time out without loading.
Thank you.
Some more details:
The domain is still at GoDaddy and I have not moved it to AWS yet; I am not sure whether that has an impact.
I still feel it has to do with the AMI, though I cannot prove it.
Your issue sounds like a free-memory problem. You did not go into detail on the instance size, whether MySQL is installed on the instance, etc.
This article will show you how to determine memory usage on your instance. When free memory is low or you start using swap space, your machine will become very slow. Your goal should be 0 bytes of swap in use and at least 25% free memory during normal operation.
Other factors to check are CPU utilization and free disk space on your file systems.
Linux Memory Check Commands
If you have a free-memory problem, increase the instance size. If you have a CPU usage problem, either change the instance size or switch to another instance type. If you have a free disk space problem, create a new instance with a larger EBS volume, or move your website to a new, correctly sized EBS volume.
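As a rough sketch of the two checks described above, something like this can be run on the instance itself (Linux only; it assumes a kernel recent enough to report MemAvailable in /proc/meminfo):

```php
<?php
// Quick check of free memory percentage and swap in use, the two numbers
// the answer above sets targets for (>= 25% free, 0 bytes of swap).

$info = [];
foreach (file('/proc/meminfo') as $line) {
    if (preg_match('/^(\w+):\s+(\d+)\s*kB/', $line, $m)) {
        $info[$m[1]] = (int)$m[2]; // values are in kB
    }
}

$freePct  = 100 * $info['MemAvailable'] / $info['MemTotal'];
$swapUsed = $info['SwapTotal'] - $info['SwapFree'];

printf("Available memory: %.1f%% (target: at least 25%%)\n", $freePct);
printf("Swap in use:      %d kB (target: 0)\n", $swapUsed);
```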

Broken caching of images loaded from homemade webserver

A while ago I wrote a webserver which I'm using on a site of mine.
When I navigate to another page in Chrome while the images from this homemade webserver are still loading, they stay cached as only half-loaded.
Is this a known bug in Chrome, or an issue with my implementation of the HTTP protocol?
My webserver uses ETags for caching.
First Rule of Programming: It's your fault. Start with your code, and investigate further and further outward until you have definitive evidence of where the problem lies.
You need to apply this rule here. What are the chances that Chrome, when communicating with Apache, would exhibit this kind of bug deep into its sixth (at least) major iteration?
I would put a traffic analyser on your server and view the exchanges carefully. Next, I would compare them with those from a well-established web server like Apache and note any differences.
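For comparison purposes, this is roughly the conditional-GET logic a caching server has to get right, sketched in PHP purely as illustration (the homemade server is not PHP, and the file name is hypothetical). One thing worth checking first: a "half-loaded" image staying in the cache often points to a response body shorter than the declared Content-Length, so both the ETag and the length should be derived from the exact bytes sent.

```php
<?php
// Minimal correct ETag / conditional GET handling for a static file.

$path = 'image.jpg';                 // hypothetical file
$body = file_get_contents($path);
$etag = '"' . md5($body) . '"';      // derived from the exact bytes served

header('ETag: ' . $etag);
header('Cache-Control: max-age=0, must-revalidate');

if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? '') === $etag) {
    http_response_code(304);         // a 304 must carry no body
    exit;
}

header('Content-Type: image/jpeg');
header('Content-Length: ' . strlen($body)); // must match the body exactly
echo $body;
```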

My under-development local Drupal site has become very slow; how do I solve this?

I am developing a site locally with Drupal, and it suddenly became very slow. The last thing I did was install the Internationalization module.
Now, when I try to reach the administration panel, I receive:
Fatal error: Maximum execution time of 60 seconds exceeded...
What should I do now? Should I increase the maximum execution time allowed? Or could it be that I have too many modules installed?
EDIT: I forgot to mention that I am working on a PC with 2 GB RAM and a 2.9 GHz CPU, running Windows XP + XAMPP.
Exceeding 60 seconds of execution time is quite something; it indicates that something is going badly wrong.
I'd start troubleshooting by disabling modules (physically moving them out of your modules directory) one at a time until the problem goes away. Then, add them back one at a time, until the problem returns (you'll need to re-enable them through the Modules page as you go). You should be able to quickly isolate exactly which module is causing the problem.
Since the last thing you did was to install internationalization, I'd start by disabling that module.
Once you've isolated the module, you can try to work out what's going wrong.
Some things to look into ...
Is your database running out of space?
Are you missing any indexes?
Do you need to "update statistics" (rebuild metrics on table contents and column distributions)?
The Devel module can be useful for logging performance statistics, to help you track down the bottleneck.
A PHP accelerator may help you get the time down a bit. There are also a number of caching options your site can use (look in admin under Performance); these may make development more difficult but can make pages load faster.
I wouldn't increase your maximum execution time; at some stage you will want to take your site live, and if people don't get a page within a second or so they will think the site is down.
For "too many modules" to be the problem you would need a great many of them; it is more likely that one particular module is causing a performance bottleneck, or that something on your site, like a view, is slowing things down. mattv's answer helps with that.
Also try activating the cache system under Site settings / Performance. It could help.
Apparently there is a known and documented problem with massive queries being dynamically built by the Views module when rebuilding the dynamic menu.
Unfortunately, no simple and definitive answer has been found yet.
You can find more information here (please be aware that some answers relate to version 5).
I would really like to know how to fix this in a definitive and efficient manner.
Use Zend Server. For detailed information, check this out: http://drupal.org/node/348202#comment-3349704

Any monitoring software that traces IIS and is freely available?

In continuation of my last question: my site becomes slow and stops serving certain services externally. If we check Process Monitor, we see that this is normally due to the 'w3wp.exe' process (the background process that runs the website), which regularly reaches 99-100% CPU. Killing the process or restarting the Web Publishing Service resolves this. My web host says this can only be due to bad coding... can someone comment on this?
I wanted to know about any monitoring software that traces IIS and is freely available...
If you are using ASP.NET, you can use the built-in ASP.NET tracing to find out things like the size of your viewstate and where the time is spent while rendering a page. There are various ways of enabling this depending on your needs: see http://authors.aspalliance.com/aspxtreme/webapps/tracefunctionality.aspx
99% CPU is not going to occur if you have an inefficient page or two. 99% CPU utilization happens when you have a bug.
If it does not happen on your local server, but only in the hosted environment, then you will have to resort to old-school detective approaches: tracing, removing portions of code, and so on, until you find the source of the problem.

What are you using for Distributed Caching in web farms running ASP.NET?

I am curious as to what others are using in this situation. I know a couple of the options that are out there like a memcached port or ScaleOutSoftware. The memcached ports don't seem to be actively worked on (correct me if I'm wrong). ScaleOutSoftware is too expensive for me (I don't doubt it is worth it). This is not to say that I don't want to hear about people using memcached or ScaleOutSoftware. I'm just stating what I "know" at this point.
So my question is basically this: for those of you ACTIVELY using distributed caching, what are you using, are you happy with it, and what should I look out for?
I am moving to two servers very soon...both will be at the same location. I use caching fairly heavily (but carefully) to reduce the load on my database server.
Edit: I downloaded Scaleout Software's solution. I've coded for it and it seems to work real well. I just have to decide if my wallet will part with the cash for it. :) Anyone have experiences good or bad with ScaleoutSoftware?
Edit Again: It's been a little while since I asked this. Any more thoughts on it? We ended up buying the solution from ScaleOutSoftware and have been happy with it, but I'm curious what others are doing.
Microsoft has a product pending code-named Velocity. It's still in CTP, and is moving slowly, but looks like it will be pretty good. We'll be beating it up in the near future to see how it handles what we want it to do (> 2 million read/writes per hour). Will post back with results.
There is a 100% native .NET, well-documented, open source (LGPL) project called Shared Cache. It looks like it has not yet been mentioned on SO, but it's promising and should be able to do what most people expect from a distributed cache. It even supports different strategies, like distributed or replicated caching.
I will update this post with more details as soon as I have had a chance to try it on a real project.
We're currently using an incredibly simple cache that I wrote in a couple of hours, based on re-hosting the ASP.NET cache in a Windows Service (more info and source code here). I won't pretend it's anywhere near as optimised as something like Memcached but we were just looking for something simple and free until Velocity came along, and it's held up extremely well even under fairly heavy load.
It comes down to our personal preference for core components - i.e. ones that affect whether the site is available or not - that they are either (a) supported by a vendor with a history of rapid and high quality support, or (b) written by us so that if something goes wrong we can fix it quickly. Open source is all well and good, and indeed we do use some OSS, but if your site is offline then unfortunately newsgroups et al don't have a 1 hour SLA, and just because it's OSS doesn't mean you have the necessary understanding or ability to fix it yourself.
We are using the memcached port for Windows and we are very pleased with it. The enyim.com memcached client API is great and easy to work with. It's also open source, which is a big advantage, if you ask me.
We are now using this setup in a production web-app and it has helped a lot in improving its performance.
There's a great .NET wrapper/port found here on Codeplex. Awesomesauce!
We use memcached with the enyim library in a production environment (www.funda.nl). It works fine and we are very pleased with it, but we did notice a substantial rise in CPU use on the clients, presumably due to the serialization and deserialization going on. We do around 1000 reads per second.
One product tried and tested by hundreds of customers worldwide is NCache. It's a feature-rich product that lets you store session state in a redundant and highly available manner, lets you share data within the enterprise as well as bridge WAN communication (essentially acting as a data fabric), and lastly lets you build an elastic caching tier, so that when your application scales you can add servers to the cache and actually boost performance further.
