We're looking at improving performance by adding Memcached to our ASP.NET application.
We have several web servers, a 12-server app cluster behind a load balancer, and a DB tier. To implement Memcached, do we need another tier of servers (a caching tier), or can we install Memcached on the existing app servers? What are the implications? If we install it on the existing app servers, ideally how many should act as Memcached servers and how many as clients? Please shed some light.
Your help is much appreciated. Thanks in advance.
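For context on the client/server split: memcached's topology is client-driven. Every app server acts as a client, and the client library hashes keys across whatever node list you configure, whether those daemons run on the app servers themselves or on a dedicated tier. A minimal sketch, assuming the EnyimMemcached package (hostnames are placeholders):

```csharp
using System;
using Enyim.Caching;               // assumes the EnyimMemcached NuGet package
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

// Every app server runs this client code; the configured node list is the
// "server" side. Keys are hashed across all nodes, so the nodes form one
// logical pool rather than a primary/replica split.
class CacheDemo
{
    static void Main()
    {
        var config = new MemcachedClientConfiguration();
        // Hypothetical node list: memcached daemons running on the app servers.
        config.AddServer("app-server-01:11211");
        config.AddServer("app-server-02:11211");
        config.AddServer("app-server-03:11211");

        using var client = new MemcachedClient(config);
        client.Store(StoreMode.Set, "product:42", "cached payload");
        Console.WriteLine(client.Get<string>("product:42"));
    }
}
```

In other words, there is no fixed split between "Memcached servers" and "clients" among the 12 boxes: each node that runs the daemon contributes RAM to one logical pool, and a dedicated caching tier simply moves those daemons onto separate machines so cache memory does not compete with the application.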
I was previously running both my WordPress application and the MySQL database server inside the same Linux virtual machine on Azure. I recently migrated them to Azure App Service and Azure Database for MySQL Flexible Server respectively, in the same region (East US). Unfortunately, this has really slowed down the application, and page load times have increased to an average of 11 s from 1 s. I serve all static files from a CDN, but to no avail. Checking the network waterfall, the scripts blocking the page are calls to admin-ajax.php. Increasing the compute of both services to a ridiculous size (there is no traffic right now) only improves the speed to 6 s. Since both services are in the same region, I do not believe there can be such significant network latency between the server and the DB. What additional steps can I take to troubleshoot the issue?
If you isolate the slow endpoints and the slowness is due to the database, I suggest configuring VNet integration for the App Service and enabling a service endpoint (Microsoft.Sql) on the integrated subnet; this rules out some of the limitations around socket counts and network latency, and you should observe a performance gain. In parallel, check SQL execution times, either by profiling queries or by using the Performance recommendations feature.
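One quick check that separates network latency from query cost: time a trivial round trip from the App Service to the Flexible Server. A minimal sketch, assuming the MySqlConnector package (server name and credentials are placeholders):

```csharp
using System;
using System.Diagnostics;
using MySqlConnector; // assumes the MySqlConnector NuGet package

// Measure the raw app-to-DB round trip, separate from WordPress itself.
// If SELECT 1 comes back in a few milliseconds but real queries take
// seconds, the problem is query execution, not network latency.
class LatencyProbe
{
    static void Main()
    {
        // Placeholder connection details for the Flexible Server instance.
        const string connectionString =
            "Server=your-server.mysql.database.azure.com;User ID=admin;Password=...;Database=wordpress;SslMode=Required";

        using var conn = new MySqlConnection(connectionString);
        conn.Open();

        using var cmd = new MySqlCommand("SELECT 1", conn);
        for (int i = 0; i < 5; i++)
        {
            var sw = Stopwatch.StartNew();
            cmd.ExecuteScalar();
            sw.Stop();
            Console.WriteLine($"round trip {i + 1}: {sw.Elapsed.TotalMilliseconds:F1} ms");
        }
    }
}
```

If the trivial round trip is fast, the admin-ajax.php calls are likely spending their time in query execution or plugin work, which the profiling step above should surface.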
I'm deploying an app to Amazon ECS and need some advice on application-level monitoring (a periodic HTTP 200 check and/or body match). Usually I place it behind an ELB, and I am sure that my ELB will take action if it sees too many HTTP errors.
However, this time it's a very low-budget project and the cost of an ELB should be avoided (also consider that this is going to run with only one instance, as the user base is very limited).
What strategies could I adopt to guarantee that the application is alive (kill the instance and restart it in case of too many app errors)? Regarding the instance, I know about AWS auto-healing, but that operates at the infrastructure level.
Obviously one of the problems is that, not having an ELB, I must bind the DNS to an EIP... so reassigning it is crucial.
And obviously the solution should not involve any other EC2 instance; external services are acceptable, but keeping it all inside AWS would be great.
Thanks a lot
Monitoring ECS is important for improving the availability of your site. If you still think there could be issues related to the deployment on AWS, I suggest trying AWS's auto-scaling feature.
You can scale ECS up when needed and release the capacity when it is not required.
Nagios is another open-source monitoring tool that you can leverage; it is easy to install and configure.
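One way to stay inside AWS without adding an EC2 instance is a scheduled Lambda acting as the watchdog: it performs the HTTP 200/body-match check the question describes and reboots the instance after repeated failures. A minimal sketch, assuming the AWSSDK.EC2 package; the URL, expected body, and instance ID are placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Amazon.EC2;
using Amazon.EC2.Model;

// Scheduled watchdog: check the app over HTTP and reboot the instance
// after three consecutive failures. All constants are placeholders.
class Watchdog
{
    const string HealthUrl = "http://app.example.com/health";
    const string ExpectedBody = "OK";
    const string InstanceId = "i-0123456789abcdef0";

    static async Task Main()
    {
        using var http = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };
        int failures = 0;

        for (int attempt = 0; attempt < 3; attempt++)
        {
            if (await IsHealthyAsync(http))
                return; // app answered correctly: nothing to do

            failures++;
            await Task.Delay(TimeSpan.FromSeconds(10));
        }

        if (failures == 3)
        {
            // Reboot rather than terminate: the EIP stays associated.
            using var ec2 = new AmazonEC2Client();
            await ec2.RebootInstancesAsync(new RebootInstancesRequest
            {
                InstanceIds = new List<string> { InstanceId }
            });
        }
    }

    static async Task<bool> IsHealthyAsync(HttpClient http)
    {
        try
        {
            using var response = await http.GetAsync(HealthUrl);
            string body = await response.Content.ReadAsStringAsync();
            return response.IsSuccessStatusCode && body.Contains(ExpectedBody);
        }
        catch (Exception e) when (e is HttpRequestException || e is TaskCanceledException)
        {
            return false; // connection error or timeout counts as a failed check
        }
    }
}
```

A reboot keeps the EIP associated, so no DNS change is needed; if you ever replace the instance instead, you would additionally have to re-associate the address.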
We have developed an ASP.NET application that uses a backend SQL Server database (dedicated server).
The application will be used by 30-40 users (but not more).
To prevent performance issues, we are planning to load-balance the application across 2 web servers (Windows Server 2012 / IIS 8.0).
QUESTION: Will load balancing significantly improve performance, taking into consideration the relatively low number of user requests (30-40 users in total)?
Generally, one server can handle somewhere over 2,000 requests at a time, depending on its CPU cores. Load balancing definitely improves application performance, as it divides the traffic between the two servers based on the routing algorithm at the LB.
Let me know if you require any more information.
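Before adding the second server, it is worth measuring whether a single box is anywhere near saturation at this user count. A minimal sketch of a concurrency probe, with the URL as a placeholder:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Fire a burst of concurrent requests at one server and report latencies,
// to see whether a single box is anywhere near saturation at this load.
class LoadProbe
{
    static async Task Main()
    {
        const string url = "http://your-app-server/"; // placeholder
        const int concurrentUsers = 40;               // matches the expected user count

        using var http = new HttpClient();

        var tasks = Enumerable.Range(0, concurrentUsers).Select(async _ =>
        {
            var sw = Stopwatch.StartNew();
            using var response = await http.GetAsync(url);
            sw.Stop();
            return sw.ElapsedMilliseconds;
        });

        var timings = await Task.WhenAll(tasks);
        Console.WriteLine($"min {timings.Min()} ms, avg {timings.Average():F0} ms, max {timings.Max()} ms");
    }
}
```

If latencies stay flat at 40 concurrent requests, the second server buys redundancy rather than speed.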
There is an intranet-based ASP.NET application that is deployed to a server (IIS) and to a group of clients (about ten). The end user can then decide either to connect to the local application (deployed to their own machine) or to the server version. I do not understand the reasoning for doing this. My question is: is this common practice?
Yes, it is a common practice, often used to verify the behaviour of the application. Each client will have its own settings, and the application should not break in any kind of environment; having both a server version and a local version is beneficial for that.
If the clients are laptops, and the application supports disconnected data sets and synchronization, it would make sense. Typically you'd see something like this when the client machines are taken off-network to be used at a remote work site.
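For reference, the disconnected scenario the answer above describes maps onto the classic ADO.NET pattern: fill a DataSet while connected, edit it offline, and push the changes back later. A minimal sketch, with table and connection details as placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;

// Classic ADO.NET disconnected pattern: fill a DataSet while connected,
// work against it off-network, and push changes back when reconnected.
class OfflineSync
{
    // Placeholder connection string and schema.
    const string ConnectionString =
        "Server=central-db;Database=FieldWork;Integrated Security=true";

    static void Main()
    {
        var adapter = new SqlDataAdapter(
            "SELECT Id, Status, Notes FROM WorkOrders", ConnectionString);
        var builder = new SqlCommandBuilder(adapter); // generates INSERT/UPDATE/DELETE

        var local = new DataSet();
        adapter.Fill(local, "WorkOrders");   // connected: snapshot the data locally

        // ... off-network: the app edits rows in the local DataSet ...
        local.Tables["WorkOrders"].Rows[0]["Status"] = "Completed";

        adapter.Update(local, "WorkOrders"); // reconnected: sync changes back
    }
}
```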
Has anyone experienced running multiple collaborating applications on Heroku? For example, an admin application to manage another application; or a stats server observing another application?
On Amazon's EC2 platform you can use security groups to restrict access to servers, creating a virtual network between your application or server instances. Is there any such way to do this on Heroku? If so, can you open UDP as well as TCP connections?
Thanks
Robbie
The comment from @elithrar is correct. To talk between applications you either need to define an API or use shared resources. For example, you can have two applications connect to the same database by manually copying and pasting the DATABASE_URL from one app to the other. This has the downside that, should we need to roll credentials (very rare), your manually copied configuration will break.
The same pattern can be used with any add-ons, such as https://addons.heroku.com/redistogo or https://addons.heroku.com/iron_mq to share a message bus or queue between two applications.
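For completeness, here is a minimal sketch of how the second application might consume the copied DATABASE_URL; it assumes the Npgsql package and Heroku's postgres://user:password@host:port/dbname URL format:

```csharp
using System;
using Npgsql; // assumes the Npgsql NuGet package

// Sketch: a second app consuming a DATABASE_URL copied from the first app's
// config. Heroku Postgres URLs look like postgres://user:password@host:port/db,
// so we translate the URL into a key/value connection string for Npgsql.
class SharedDatabase
{
    static void Main()
    {
        var url = new Uri(Environment.GetEnvironmentVariable("DATABASE_URL"));
        var userInfo = url.UserInfo.Split(':');

        var connectionString =
            $"Host={url.Host};Port={url.Port};Database={url.AbsolutePath.TrimStart('/')};" +
            $"Username={userInfo[0]};Password={userInfo[1]};" +
            "SSL Mode=Require;Trust Server Certificate=true";

        using var conn = new NpgsqlConnection(connectionString);
        conn.Open();
        Console.WriteLine("Connected to the shared database.");
    }
}
```

The same parsing applies to add-on URLs such as REDISTOGO_URL or IRON_MQ credentials copied between apps, with the caveat noted above: rolled credentials break the copy.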