Vagrant performance over HTTP - PHPUnit

I am seeing very slow Vagrant performance over HTTP, regardless of the provider or the number of cores I allocate. It seems to be an issue with either OS X, my Vagrant settings, or the VMware Fusion / VirtualBox provider.
I run a PHPUnit test suite: inside the VM it takes seconds; from outside (over HTTP) it takes minutes.
How can I optimize HTTP performance? I'm already using NFS for the synced folder.
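A common starting point is tuning the NFS mount itself. A minimal Vagrantfile sketch (box name and mount options are illustrative; `actimeo` relaxes attribute caching, which often dominates per-request filesystem latency):

```ruby
# Hypothetical Vagrantfile fragment - values are illustrative, not a known fix
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Use NFS for the synced folder and relax attribute caching
  # to reduce per-request stat() round-trips.
  config.vm.synced_folder ".", "/vagrant",
    type: "nfs",
    mount_options: ["actimeo=2", "nolock"]
end
```

Whether this helps depends on whether the bottleneck is actually the shared filesystem rather than the network bridge; timing a single request with and without the synced folder in the code path would confirm that.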

Related

WordPress site extremely slow after migrating to Azure App Service and Azure Database for MySQL Flexible Server

I was previously running both my WordPress application and the MySQL database server inside the same Linux virtual machine on Azure. I recently migrated them to Azure App Service and Azure Database for MySQL Flexible Server respectively, both in the same region (East US). Unfortunately, this has really slowed down the application: average page load times have increased from 1 s to 11 s. I serve all static files from a CDN, but to no avail. Checking the network waterfall, the scripts blocking the page are calls to admin-ajax.php. Increasing the compute of both services to a ridiculous size (there is no traffic right now) only improves load times to 6 s. Since both services are in the same region, I do not believe there can be significant network latency between the server and the DB. What additional steps can I take to troubleshoot the issue?
If you isolate the slow endpoints and the delay is due to SQL, I suggest configuring VNet integration for the App Service and enabling the Microsoft.Sql service endpoint on the App Service's integrated subnet. That rules out limitations around socket counts and network latency, and you should observe a performance gain. In parallel, check SQL execution time, either by profiling queries or by using Performance recommendations.
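Query profiling can start with the slow query log. A sketch (thresholds are illustrative; on Flexible Server these are typically set as server parameters through the portal or CLI rather than at runtime, and `mysql.slow_log` only exists when `log_output` includes `TABLE`):

```sql
-- Illustrative values, assuming you have privileges to set these globals
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;   -- log anything slower than 1 second

-- If log_output='TABLE', inspect recent slow queries directly:
SELECT start_time, query_time, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;
```

If the admin-ajax.php calls show up here with long query times, the problem is the queries themselves; if they do not, the time is being spent in connection setup or application code instead.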

Impact of restarting nova compute service

I am looking for some guidance related to the following question:
What will be the impact on the running VMs if the nova-compute service is restarted?
OpenStack version: Newton
I understand that new requests will probably be affected, as the nova-compute service will be unavailable for a few seconds. But is there any risk to the running VMs?
I found a few articles like this one, but the answers are pretty vague.
As you suggest, restarting the compute service on a hypervisor has no impact on running VMs. They continue running and keep their connections alive. It can be considered a "safe" operation.
However, actions such as rebuild, delete, reboot, etc. will fail as long as the service is down. In addition, note that restarting the hypervisor itself (qemu or kvm, for instance) would impact the running VMs.
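The distinction above can be sketched with the usual systemd unit names (these vary by distribution; `openstack-nova-compute` is the RDO/CentOS name, while Ubuntu packages use `nova-compute`):

```shell
# Safe: only the compute agent restarts; guest VMs keep running.
systemctl restart openstack-nova-compute

# NOT the same thing: restarting the hypervisor layer can affect guests.
# (libvirtd restarts are usually tolerated; killing qemu processes is not.)
# systemctl restart libvirtd
```

While the agent is down, the hypervisor keeps the guests alive; only the control-plane operations (rebuild, delete, reboot, etc.) queue up or fail until it returns.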

Zenoss auto start off upon system reboot

I have installed Zenoss 5.2.4 on a machine running CentOS 7. I wanted to reboot the machine and therefore stopped serviced for a graceful shutdown of all of Zenoss's internal services.
Upon rebooting, I see that serviced is already running, which shows that Zenoss.core is started at system boot. I want to start serviced and Zenoss.core manually after the system reboots. How can I disable the autostart?
I checked the /etc/default/serviced configuration file but couldn't find any such parameter.
Thanks.
I did a systemctl disable serviced before the restart, and it no longer started up automatically. After the reboot I enabled and started serviced manually.
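The sequence can be sketched as follows (the `serviced service` subcommand is the Control Center CLI; run it once serviced itself is up):

```shell
# Stop the unit from starting at boot, without stopping the running service:
systemctl disable serviced

# After the reboot, bring Control Center and then Zenoss.core up by hand:
systemctl start serviced
serviced service start Zenoss.core

# To restore automatic startup later:
systemctl enable serviced
```

Note that `systemctl disable` only removes the boot-time symlink; it does not prevent you from starting the unit manually whenever you choose.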

Efficiently using multiple docker containers in a single host

I have a physical server running Nginx and MySQL and serving my PHP website. The server has a multi-core processor with 16 GB of RAM and can handle a certain amount of web traffic.
Now, instead of this single server, if I run multiple Docker containers with individual instances of Nginx (app server) and MySQL (DB server) and load-balance between the application and database containers, will the setup handle the same amount of traffic as the single server did, or less (performance-wise)?
And how will performance compare if I use a virtual server, like an EC2 instance or a DigitalOcean Droplet with the same hardware configuration, instead of a physical server?
Since all processes run on the native host (you can run ps aux on the host, outside the containers, and see them), there should be very little overhead. The network bridging and iptables entries needed to forward packets to each container add some CPU overhead, but I can't imagine that being too onerous.
If the question is several nginx instances plus one MySQL versus several containers each running nginx + MySQL, performance would probably be better without containers, mainly because of MySQL memory: a single instance can use all available memory, whereas multiple separate instances must split it. You can still run the nginx instances in separate containers while using one central MySQL for all sites, containerized or not.
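The "several nginx containers, one central MySQL" layout described above might look like this as a docker-compose sketch (image tags, paths, and credentials are all illustrative):

```yaml
version: "2"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example    # illustrative only
    volumes:
      - dbdata:/var/lib/mysql
  site1:
    image: nginx:stable
    volumes:
      - ./site1:/usr/share/nginx/html:ro
    ports: ["8081:80"]
    depends_on: [db]
  site2:
    image: nginx:stable
    volumes:
      - ./site2:/usr/share/nginx/html:ro
    ports: ["8082:80"]
    depends_on: [db]
volumes:
  dbdata:
```

A real PHP site would also need a php-fpm container per site; this only illustrates the point about sharing a single database instance so MySQL keeps all of the available memory.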

Use separate pools for sites with HHVM

I am using HHVM 3.0.1 (rel) with nginx over a Unix socket. I would like to set up pooling as in php-fpm, use different pools for different sites, and allocate resources precisely. Is that possible?
Currently, no. It's in the backlog of things to add, or you could work on adding it yourself.
The current workaround is to have multiple instances of HHVM running on different ports and manually set up pools that way.
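That workaround might look like the following with HHVM's FastCGI mode (ports and pid-file paths are illustrative):

```shell
# One HHVM daemon per site, each listening on its own FastCGI port:
hhvm --mode daemon -vServer.Type=fastcgi -vServer.Port=9001 \
     -vPidFile=/var/run/hhvm/site-a.pid
hhvm --mode daemon -vServer.Type=fastcgi -vServer.Port=9002 \
     -vPidFile=/var/run/hhvm/site-b.pid

# In each nginx server block, point fastcgi_pass at the matching port:
#   fastcgi_pass 127.0.0.1:9001;   # site-a
#   fastcgi_pass 127.0.0.1:9002;   # site-b
```

Each daemon is an independent process, so memory and CPU are isolated per site much as they would be with separate php-fpm pools, at the cost of running one JIT cache per instance.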
