I've just moved my WordPress website to the OpenShift PaaS, on a scalable PHP cartridge. But I immediately noticed the website is really slow to respond: around 3000-4000 milliseconds. Once it starts to respond, though, the page loading/rendering is absolutely fast.
Here's the URL: http://gabrielebaldassarre.com
Just for comparison, this static website is hosted in the same AWS region: http://extras.gabrielebaldassarre.com/tos5-4
For that reason, I first blamed the nameservers I use (Cloudflare's, because I need a naked-domain CNAME), but according to an online tester they seem fine.
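For what it's worth, here's a rough sketch of how one could time DNS resolution and time-to-first-byte separately (Python purely for illustration; the URL is my own):

    # Minimal sketch: time DNS and TTFB separately, to tell a nameserver
    # problem apart from a slow backend. Requires the requests package.
    import socket
    import time

    import requests

    host = "gabrielebaldassarre.com"

    t0 = time.time()
    socket.gethostbyname(host)          # DNS lookup only
    dns_ms = (time.time() - t0) * 1000

    t0 = time.time()
    r = requests.get(f"http://{host}/", stream=True)
    next(r.iter_content(1))             # wait for the first byte of the body
    # note: this includes a second DNS lookup; subtract dns_ms for a rough
    # server-only figure
    ttfb_ms = (time.time() - t0) * 1000

    print(f"DNS: {dns_ms:.0f} ms, TTFB: {ttfb_ms:.0f} ms")

If dns_ms stays small while ttfb_ms sits in the 3000-4000 ms range, the nameservers are off the hook.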
I wouldn't say my WordPress install is a vanilla config, but it's no mammoth either, and the loading time once the response starts is fine.
I'm wondering if there is something wrong with HAProxy or with my OpenShift configuration, but I don't know how to check or what to do about it.
Any ideas?
OpenShift suspends and serializes apps that see little activity for a given period, and the first time they 'wake' they have to be deserialized, which takes time.
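If you stay on the free tier, the usual workaround is a scheduled request so the gear never looks idle. A minimal sketch (run it from cron every few minutes; the URL is your site):

    # Keep-alive sketch: fetch one byte from the site so OpenShift never
    # sees it as inactive. Intended to be run periodically, e.g. from cron.
    from urllib.request import urlopen

    URL = "http://gabrielebaldassarre.com/"

    try:
        urlopen(URL, timeout=30).read(1)  # one byte is enough to count as a hit
    except OSError as e:
        print(f"keep-alive failed: {e}")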
Since you're a free user, I'm assuming your application is deployed on small gears. Depending on the size of your application, that may not be enough. Try signing up for the bronze plan and see if your application's performance improves on a medium or large gear.
This may very well be a question that is too broad to answer, but any ideas would be incredibly beneficial. I have a web site where load times are incredibly slow in one environment but not the other. In general, the time to first byte is around 15 seconds on most pages. It takes this long on every page within the entire application, not only on first load. I have been troubleshooting the issue for several days now and feel completely lost as to the actual cause of the latency.
Now for a long explanation about the issue.
The environment is a Frankenstein's monster that, from what I can gather, too many people have had their hands in. I have carefully compared the two environments and haven't identified a key difference. There are numerous things at play here, but I can summarize the main components.
It is a .NET web application built using Orchard CMS, running within IIS, with a SQL Server backend. A dedicated server hosts the database and another dedicated server hosts the web application itself, which is pretty standard. The main difference between the environments is that the production site runs at Liquid Web and the new development site runs in AWS. Basically, the site will ultimately be migrated to AWS once the latency issues are resolved.
AWS has more than enough resources. In fact, production (Liquid Web) has been running into issues lately because CPU usage is nearly maxed out. There are far more resources in AWS, and neither of the servers appears to be using more than 1% or 2% of what's available. I verified this.
If the issue is within the database, I'm not really sure where else to look. I used SQL Server Profiler on the database server to analyze traffic, and no transactions were taking more than half a second, aside from the Audit Login/Logout events (which, from my research, is normal behavior). The main database queries execute almost immediately after trying to navigate to a page within the site, not 15 seconds later when the page loads.
I had a thought that network traffic between the AWS application server and the database server could be bottlenecked somewhere. I also thought it could be an issue with the domain's routing, such as the way DNS is set up, but that does not seem to be the case either... or perhaps it is, and I just haven't figured out the best way to troubleshoot it. Either way, resolving the application on localhost does not improve performance; the page still hangs for 15-20 seconds.
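One way to take DNS and routing out of the picture entirely from a remote machine is to hit the web server's IP directly while sending the site's hostname in the Host header; if the 15 seconds is still there, it's the application, not name resolution. A rough sketch (Python just for illustration; the IP and hostname are placeholders):

    # Rough sketch: bypass DNS by connecting straight to the server's IP
    # while still sending the Host header IIS uses for site binding.
    import time

    import requests

    SERVER_IP = "203.0.113.10"        # the AWS web server (placeholder)
    SITE_HOST = "www.example.com"     # the site's host header (placeholder)

    t0 = time.time()
    r = requests.get(f"http://{SERVER_IP}/",
                     headers={"Host": SITE_HOST}, timeout=60)
    print(f"status={r.status_code}, elapsed={time.time() - t0:.1f}s")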
The virtual memory usage for the site's application pool and the default app pool certainly does seem on the high end, if that makes a difference.
I have browsed the IIS logs and cannot find anything obvious. Granted, I don't have much experience with IIS and could be missing something. The Windows Event logs show nothing out of the ordinary either. There are some errors in both Liquid Web and AWS regarding printer drivers not being installed, but those have nothing to do with the application itself.
I am unsure how to check whether it has something to do with Orchard CMS. Granted, that is just a package/framework that was migrated to the dev server directly along with the application itself. I see nothing that would have changed within the environment.
The fact is that the two environments seem identical, yet one is running very slowly based on some factor that I just can't seem to identify.
Thank you!
My company uses SilverStripe v3.1.21, along with the Subsite module, to display and administer a number of clients' websites that sell products. This results in close to 200 subsites and a page count in the tens of thousands. The websites are very slow to load, and tools such as Google's PageSpeed tell us page speeds are poor. We've already taken steps like combining and minifying the JS and compressing resources such as images, which gave some improvement, but the pages remain slow. The system was handed to us in this state; further hardware upgrades are not on the table as an option, nor is getting additional resources for redevelopment.
We've taken a look at the static publisher module (https://github.com/silverstripe/silverstripe-staticpublisher) and found that when we generate static pages, the pages become fast and get a good score on the various tools; however, regenerating all of these pages takes over 14 hours, which is unacceptable given that these products are updated from an external source daily. We also found that the regeneration process is a memory hog: the module builds all of the pages in memory before dumping them to file, causing the process to crash. We've had to alter the process to go subsite-by-subsite just to make it run.
We then took a look at the static publish queue module (https://github.com/silverstripe/silverstripe-staticpublishqueue), which seemed to address our issues by queueing pages for regeneration as needed, making it much more responsive to changes. However, the module seems to be very buggy and often crashes when generating pages.
Has anyone had experience using these modules (or similar ones) with larger sites, and can you provide any pointers or ideas on how to implement static publishing successfully?
We are currently using staticpublishqueue on several sites. The only problem we've had with it is crashing due to long builds and poor locking. To be precise, it doesn't actually crash, but it keeps spawning more and more instances until the server becomes unresponsive.
I think we have a fix for this in our fork. At least we haven't had any problems since switching to the modified locking. You could try installing the fork instead of the official version. If it fixes things for you, maybe we should make a pull request :)
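For the curious, the shape of the fix is an exclusive, non-blocking lock, so a new run exits immediately if one is already going instead of piling up. The module is PHP, but the pattern in a few lines of Python looks roughly like this (POSIX only; lock path arbitrary):

    # Sketch of the locking pattern: take an exclusive, non-blocking lock
    # and bail out if another instance already holds it.
    import fcntl
    import sys

    lockfile = open("/tmp/staticpublishqueue.lock", "w")
    try:
        fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit(0)  # another instance is running; don't spawn a duplicate

    # ... do the publishing run; the lock is released when the process exits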
First off: we only use staticpublishqueue; I don't have any experience with the Subsite module, so I can't speak to your exact combination.
We are using staticpublishqueue on a huge site. Setup: multiple servers running the SilverStripe website; they share a MySQL database and use Redis as a session store.
One great thing about staticpublishqueue: you can run it in parallel. The servers all run an instance of staticpublishqueue and publish into a shared folder, which is then synced to an nginx load balancer in front of the actual web servers. It works quite nicely, but it does not scale indefinitely: at some point the staticpublishqueue instances start to pick the same record to render and waste resources. I think about 6 is the max for us.
A couple of things we learned regarding staticpublishqueue:
do not run too many instances at the same time (see above)
make sure it has enough ram
make sure it runs as the same user as the website
the record lock it uses is not compatible with a MariaDB Galera Cluster
If possible, switch to SilverStripe 3.6.x and PHP 7. The performance gain is huge.
We are migrating away from staticpublishqueue to Cloudflare (or maybe another CDN). Why? Because if a requested page has not been rendered yet, the server will render it for each request individually and then throw the result away, until the queue does a separate render for the cache. That's a total waste of resources, especially if you purge your cache after a sitewide layout change or something.
We need to use WordPress for a site that is going to have high traffic. We expect an initial load of 500K page views a month, increasing to about 8M page views a month. Usage will be mainly during working hours: around 8 hours a day, 20 days a month.
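To put those numbers in requests per second (the peak factor below is just a guess on our part):

    # 8M page views/month concentrated into 20 working days of 8 hours each.
    views = 8_000_000
    seconds = 20 * 8 * 3600          # 576,000 s of active time per month
    average = views / seconds        # ~13.9 page views/second
    peak = average * 3               # assume peaks at ~3x average (a guess)
    print(f"average {average:.1f}/s, assumed peak ~{peak:.0f}/s")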
We are thinking of using Google App Engine with Google Cloud SQL, and we were wondering how well it scales for that kind of load. In theory Google App Engine scales automatically, but we are not sure how well Google Cloud SQL scales. This will be a mostly-read database, with a few writes.
So the questions are:
Does anyone have experience deploying WordPress on Google App Engine + Google Cloud SQL under high load?
Do you know if there are problems installing plugins for WordPress on Google App Engine? Do they need any special modifications?
To save you some time, look to other solutions.
I'm working on this exact task now, but I'm about to give up due to Cloud SQL's very poor performance. It might work fine for websites like Orane's, but for larger, more complex websites the high latency and slow response times from Cloud SQL mean 3-second load times for us instead of the 0.7 s we get on our VPS. I have tested connecting by both IP and socket, with SSL and without, and it's just not usable as-is. If you test against Amazon RDS, the difference in speed is shocking.
The only other solution we've been able to come up with is to set up an API server that continuously caches data to memcache, and to serve only static pages on App Engine, with most dynamic content loaded through AJAX. Scary!
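In outline that's just read-through caching. A minimal sketch of the pattern (Python with pymemcache purely for illustration, since our actual stack is PHP; the query function is a stand-in):

    # Read-through cache sketch: serve from memcache when possible, otherwise
    # fetch from the database once and cache the result. Names illustrative.
    import json

    from pymemcache.client.base import Client

    cache = Client(("127.0.0.1", 11211))

    def query_cloud_sql(section):
        # stand-in for the real (slow) Cloud SQL query
        return [{"title": "example post", "section": section}]

    def get_posts(section):
        key = f"posts:{section}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)
        posts = query_cloud_sql(section)
        cache.set(key, json.dumps(posts), expire=300)  # cache for 5 minutes
        return posts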
Keep trying, but you'd be better off looking into RackSpace Cloud DB or Amazon RDS.
There are no problems at all, and it doesn't need any modifications. Everything works perfectly, and from previous projects I've done on App Engine I know it scales extremely well. I've just set up my new WordPress blog on App Engine here and everything works the same, but loads a lot faster. It's a little tricky to get set up, however; I'm working on a tutorial for that.
We have an application deployed on Windows Azure as a Web Role and we are using Pingdom for testing page load times: http://tools.pingdom.com/fpt/
The URL for the application on Windows Azure is http://www.doctorspring.com.
The load time of the app is usually around 7s.
The database is an SQL Azure database and the role and the database are in the same zone.
Sample pingdom result: http://tools.pingdom.com/fpt/#!/CllGggrMz/http://www.doctorspring.com/
Sample pingdom result (with gzip): http://tools.pingdom.com/fpt/#!/f2TUbR6OX/www.doctorspring.com
Suspecting that Azure could be the problem, we tried free hosting from Somee:
http://www.doctorspring.somee.com
The load time of the app on Somee is around 3.5s.
Sample pingdom result: http://tools.pingdom.com/fpt/#!/o3gZOjTwH/http://www.doctorspring.somee.com/
That is a huge performance issue for us.
Can you please help us understand the problem with Azure, or suggest how we might overcome it?
Thanks,
Manish
In both cases, loading the homepage is unacceptably slow - 3.5 seconds to generate a page is around 10 times slower than you need to be when there's no load on the site. I'd expect the site to crumble under even moderate load with this kind of performance.
Without knowing how the site is constructed, it's hard to explain the reason one environment is faster than the other - but my guess is that whatever is generating the page (some kind of CMS?) is the cause. Azure is known to be a touch slow when doing database queries - though normally this only manifests itself under extreme conditions.
I'd recommend tuning the CMS - especially with caching. We found that Azure is normally pretty fast, but when doing database lookups (e.g. retrieving content for the CMS), it can be variable; if your CMS is doing a LOT of database queries to get the homepage content, it's going to be slow.
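The shape of that fix is just memoising the expensive lookups for a short window. A throwaway sketch of the idea (Python; names invented):

    # Sketch of short-TTL memoisation for expensive CMS lookups.
    import time
    from functools import wraps

    def ttl_cache(seconds):
        def decorator(fn):
            store = {}
            @wraps(fn)
            def wrapper(*args):
                hit = store.get(args)
                if hit is not None and time.time() - hit[1] < seconds:
                    return hit[0]
                value = fn(*args)
                store[args] = (value, time.time())
                return value
            return wrapper
        return decorator

    @ttl_cache(seconds=60)
    def homepage_content(section):
        # stand-in for the expensive database/CMS query
        return f"<div>content for {section}</div>"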
It's also worth running YSlow - there's some low-hanging fruit for getting performance up.
What services are you running in Azure: web role, VM, Website? Are you connecting to an Azure database instance from the homepage (if so, how many distinct calls are you making)? I'm getting around a 7.5-second load time from London, but to be honest even 3 seconds is too slow for a homepage. It's hard to know what's causing the prolonged page load, but if you are connecting to a DB instance there's a great deal you can do, e.g.:
Render the page and make some asynchronous calls to spool in additional data.
Make sure your Azure services are running close together
Consider caching database content to a blob. E.g., for the data in "Medical Questions Answered in Last 24 Hours": if you are pulling this from the DB on every load, you could considerably speed up access by routinely caching it to an HTML file stored in a blob container and injecting that into the page (see the sketch after this list).
If you must make DB calls from the homepage try to make as few round trips as possible by batching up your queries into a stored procedure.
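To make the blob-caching point concrete, here's a rough sketch of the scheduled job (shown with the azure-storage-blob Python package purely for illustration; the connection string and names are placeholders):

    # Rough sketch: a scheduled job renders the expensive query to HTML once
    # and stores it in blob storage; the homepage then injects the fragment
    # instead of hitting the database on every load.
    from azure.storage.blob import BlobClient

    def publish_fragment(html):
        blob = BlobClient.from_connection_string(
            conn_str="<your-storage-connection-string>",  # placeholder
            container_name="fragments",
            blob_name="questions-last-24h.html",
        )
        blob.upload_blob(html, overwrite=True)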
I've made a lot of assumptions here, but there are certainly things you could do to drastically improve performance on this page.
We currently have a live ASP.NET application (basically a CMS) running on our IIS 7 web server.
Every once in a while (we're talking every few months) its app pool will go to 100% CPU usage and stay there until the page times out. We've tried increasing the page timeout to 30 minutes in the web.config, but it still just stays at full CPU, so I'm presuming it's some form of infinite loop.
It is a massive application, one of the biggest we have, and far too large to blindly search for an issue. The prevailing opinion is that since it's so rare we can just restart the app-pool whenever it happens, but I'd much prefer to fix it.
I have access to the code and full administrator access to the hosting server, and the monitoring software we're running gives me plenty of time to be on the server while the issue is taking place, but I can't find any way to get useful data about what's going on without adding a massive constant overhead to the site (which, given that it takes months to happen, isn't really viable).
I'm wondering if anyone has some advice as to how I could narrow down our search? A stack trace of the currently running threads would be spectacular, but even just a list of the pages that are actively being served would make a huge difference. I can add code to the project to make it more traceable, but logging everything in the hopes of catching it would be unrealistic (It gets a lot of traffic and we don't want to add significant overhead to page loads).
Tess's blog is an excellent resource on debugging production ASP.NET applications.
I think this blog post from her blog will be really helpful in getting started in debugging this problem: Hang debugging walkthrough.
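The gist of that walkthrough is capturing memory dumps while the problem is happening and pulling thread stacks out of them in WinDbg: exactly the "stack trace of the currently running threads" you're after, with no standing overhead. One low-effort way to arm that is Sysinternals ProcDump, triggered when CPU stays high (a sketch; it assumes procdump.exe is on the PATH, and is wrapped in Python only for illustration):

    # Sketch: arm ProcDump so it writes two full dumps of the worker process
    # the next time its CPU stays above 90% for 10 consecutive seconds.
    import subprocess

    subprocess.run(
        ["procdump", "-accepteula",
         "-ma",                    # full memory dump
         "-c", "90", "-s", "10",   # trigger: >90% CPU for 10 s
         "-n", "2",                # capture up to two dumps
         "w3wp.exe"],
        check=True,
    )

ProcDump just sits and waits until the condition fires, so there is no overhead in the months between incidents.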
Hope this helps
I recommend using the ASP.NET performance counters (like the requests queued and the number of requests).
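For example, something like this (a sketch; it assumes Windows' built-in typeperf, and the exact counter names can vary by .NET version) leaves a cheap CSV trail you can look back over after the next spike:

    # Sketch: sample the ASP.NET queue/request counters (plus w3wp CPU) every
    # 15 seconds for an hour, appending them to a CSV for later inspection.
    import subprocess

    counters = [
        r"\ASP.NET\Requests Queued",
        r"\ASP.NET Applications(__Total__)\Requests/Sec",
        r"\Process(w3wp)\% Processor Time",
    ]

    subprocess.run(
        ["typeperf", *counters,
         "-si", "15",              # sample interval in seconds
         "-sc", "240",             # number of samples (240 x 15 s = 1 hour)
         "-f", "CSV", "-o", "counters.csv"],
        check=True,
    )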