Local Host call vs Azure App Service call (Slow in Azure) - asp.net

I have a service running in Azure (in the same App Service plan as other services).
When this service runs locally (with the exact same settings as the one in Azure), my service endpoint returns within 2 seconds. However, when it runs in Azure it takes up to a minute.
The service endpoint itself calls a bunch of external APIs.
Looking at App Insights, it seems like the external APIs are taking forever to return (~10s apiece). Hitting the same external endpoints manually confirms that they return immediately. App Insights also shows that the service only spends 6.8ms doing work, but then spends the rest of the time waiting.
My intuition says it's some form of connection starvation, where the Azure app is waiting for a thread or connection to become available, but checking the Azure metrics shows nothing out of the ordinary.
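If it does turn out to be outbound connection starvation, the usual suspects in ASP.NET are creating a new HttpClient per request (which exhausts sockets) and the low default per-host connection limit on .NET Framework. A minimal sketch of the conventional mitigations, assuming .NET Framework and a shared client (the class name and values here are illustrative, not taken from the original app):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class ExternalApiClient
{
    // Reuse one HttpClient for the app's lifetime; new-ing one per request leaks sockets
    // into TIME_WAIT and can starve outbound connections under load.
    private static readonly HttpClient Client = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(15)
    };

    static ExternalApiClient()
    {
        // On .NET Framework the per-host outbound connection limit defaults low
        // (2 outside ASP.NET, 12 per core with ASP.NET autoConfig); raising it keeps
        // parallel calls to the same external API from queuing behind each other.
        ServicePointManager.DefaultConnectionLimit = 50;
    }

    public static Task<string> GetAsync(string url) => Client.GetStringAsync(url);
}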

Related

Configuring Azure's Event Hub to receive events from ASP.NET MVC web application

Can someone point me in the right direction on how to configure the network settings within Event Hubs so I can successfully send data from the ASP.NET MVC application both while running locally (localhost) and when the application is deployed to Azure's dev/qa/production web environments?
I have built a proof-of-concept console application in .NET locally, added my IP address within the Networking/Firewall settings on the Event Hubs side, and have no issue sending and receiving data from my local machine.
But when I try the same code in the ASP.NET MVC web application, the page just hangs on the CreateBatchAsync() method and does not return any exception.
using System;
using System.Text;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;
var producerClient = new EventHubProducerClient(connectionString, eventHubName);
EventDataBatch eventBatch = await producerClient.CreateBatchAsync();
eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Event 1z at " + DateTime.Now.ToString())));
await producerClient.SendAsync(eventBatch);
Any help would be appreciated.
Thanks
The call to CreateBatchAsync is the first point in your code to request a network operation and, consequently, will trigger creation of the connection and link to the Event Hubs service. The connection attempt has a timeout associated with it which is 60 seconds in the default configuration that you're using. Depending on the error that it is encountering, you may see retries take place, each of which would have a 60 second timeout. With the default configuration, this would look like a 3 minute hang. (60 seconds * 3 attempts)
The most common connection issue in an enterprise environment is that the ports needed for AMQP over TCP (5671/5672) are not open. Changing the transport to AMQP over WebSockets often helps, as it will use port 443 and may be routed through a proxy, if needed.
For more information, you may want to look at the sample for configuring Event Hubs clients and the Event Hubs network troubleshooting guide.
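By way of illustration, here is a minimal sketch of switching the client to AMQP over WebSockets and shortening the per-attempt timeout via the client options (the specific values are placeholders, not recommendations):

using System;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

var options = new EventHubProducerClientOptions
{
    ConnectionOptions = new EventHubConnectionOptions
    {
        // Route traffic over port 443 instead of AMQP's 5671/5672.
        TransportType = EventHubsTransportType.AmqpWebSockets
        // A proxy can also be set here via the Proxy property, if your environment requires one.
    },
    RetryOptions = new EventHubsRetryOptions
    {
        // Fail faster than the 60-second default so problems surface quickly while testing.
        TryTimeout = TimeSpan.FromSeconds(15)
    }
};

var producerClient = new EventHubProducerClient(connectionString, eventHubName, options);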

Load Stressing Web applications deployed in openstack instances under an autoscaling group

I am testing the auto-scaling feature of OpenStack. In my test setup, Java servlet applications are deployed on Tomcat web servers behind an HAProxy load balancer. I aim to stress test the application to see how it scales and what the response times are, using JMeter as the stress tester. However, I observe that HAProxy (or something else) terminates the connection as soon as the onComplete signal is sent by one of the member instances. Consequently, the subsequent responses from the remaining servers are reported as failures (timeouts). I have configured the HAProxy server to use a round-robin algorithm with sticky sessions. See the attached JMeter results tree; I am not sure of the next step to undertake. The web applications are asynchronous, hence my expectation was that the client (HAProxy in this case) should wait until the last thread is submitted before sending the response.
Are there some issues with my approach or some flaws in my setup?

NServiceBus doesn't start picking up messages until an endpoint is "touched"

So when I run my two services locally, I can hit service A, which sends a command to service B, which picks it up and processes it. Pretty straightforward. However, when I publish these to my web server and send a request to service A, which sends it to service B (I can see the message in service B's queue), it won't get picked up and processed. I created an endpoint on service B that simply returns an OK response: if I call this endpoint, effectively "touching" the service, everything kicks in and the messages get processed from that point on.
I figured maybe this had something to do with late compiling, so I changed the publish to precompile on publish, but I get the same result.
Is there a way to have the service start processing as soon as it is published? Also worth noting that both services are WebAPI 2.
Another option (probably more “standard”) would be to move the “handlers” into a Windows Service instead of a web application.
For this Windows Service you can leverage the NServiceBus Host which will turn a standard class library into a Windows Service for you. They have a good amount of documentation about this here: https://docs.particular.net/nservicebus/hosting/nservicebus-host/?version=Host_6
I would argue that this is more stable, as you separate sending commands (the web application / WebAPI) from processing commands and publishing events (the NSB Host). The host can sit on the web server itself or you can put these on a different server.
Our default architecture is to have a separate server run our NSB Hosts as you scale web applications and NSB Hosts differently. If you run the NSB Host on the web server, you can get issues where a web app gets too much traffic and takes the NSB Host processing down. You can always start simple with using 1 server for both, monitor the server and then move things around as traffic increases.
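For reference, here is a minimal sketch of the endpoint configuration class the NServiceBus Host looks for when it turns a class library into a Windows Service. The transport and persistence choices are illustrative placeholders, and the exact signatures vary between host versions, so treat this as a shape rather than copy-paste:

using NServiceBus;

// The host scans the class library for a class implementing IConfigureThisEndpoint
// and uses it to bootstrap the Windows Service (Host 6 / NServiceBus 5 shape shown).
public class EndpointConfig : IConfigureThisEndpoint, AsA_Server
{
    public void Customize(BusConfiguration configuration)
    {
        // Illustrative choices only; pick the transport and persistence you actually use.
        configuration.UseTransport<MsmqTransport>();
        configuration.UsePersistence<InMemoryPersistence>();
    }
}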
Not sure if this is the "right way" to do things, but what I ended up doing is setting up each site to be always running and auto-initializing:
App pool set to "Always Running"
Website preload enabled = true
Web.config has a webServer entry for application initialization with doAppInitAfterRestart="true"
The "Application Initialization" role feature added to the web server (IIS)
With those things being set, the deployment process is basically to publish the site and iisreset. If there are better options, I'm still looking :)

Geo-Replication in Azure App Service

I have an App Service hosted in Microsoft Azure in one region. When there are issues with the Azure servers in that region, the App Service goes down and users are unable to see the website.
I would like to know if there is a way to geo-replicate the App Service so that if the servers are down in one region, traffic is automatically redirected to a server in a different region.
You can geo-replicate your app service by using Azure Traffic Manager service, which allows you to control the distribution of user traffic to your service endpoints running in different datacenters around the world.
As of today, Azure Traffic Manager provides 3 ways for routing the traffic: Priority, Weighted and Performance. For what you're looking to accomplish, I believe you would want to choose Priority routing method.
To learn more about how you can make use of this service to make your app service highly available, please see this link: https://azure.microsoft.com/en-us/documentation/articles/app-service-app-service-environment-geo-distributed-scale/.
This is an old entry but I thought I'd chime in after working with Azure for a few years.
If your statement "When there are some issues with Azure servers in the hosted region" is referring to transient outages, what you might be experiencing is your App Service Plan instance transitioning. Microsoft regularly moves ASP instances to new machines for reasons that make sense to them. Likely this is to load balance hardware or apply patches to the underlying VMs that host app services.
It has been my experience that when the ASP instances are moved, the new ASP instance needs time to warm up the app services hosted on it. If your ASP is configured with only 1 instance, your app service will be unreachable during this time.
If on the other hand, you configure your ASP with a minimum of 2 instances, Microsoft will synchronize the moving of the instances so that at least 1 remains up and available while the other is being moved.
Of course, running a multi-instance ASP requires your application to either be stateless or use a session provider other than the default .NET in-memory session provider (Cosmos DB, for instance).

Monitoring cluster of micro services (web,queue,db,ha proxy)

I am designing an architecture where all micro services are clustered.
For instance: 5 web servers, 1 clustered DB, 1 clustered queue system, and 8 clustered workers (that send email, send SMS, ...) which consume from the queue (tasks are pushed by the web servers).
I am wondering about the best practice for detecting that each 'cluster of micro services' is healthy, and how to 'fail fast' the whole service in case one of the micro services is unavailable.
The whole service sits behind nginx acting as an HA proxy - should it be nginx that monitors everything and fails the whole service? How can I check the health of all the micro services?
You should use an external monitoring service like Pingometer.
This lets you set up simple health checks (HTTP, HTTPS, Ping, etc.) at regular intervals and receive alerts if a node fails, is unavailable, or does not respond with the correct content.
For your alert contact, you can set up a webhook which is fired when a service goes down. You can use the webhook to trigger a failover, change DNS records, etc.
We setup something similar and it's working quite well.
You can also use something internally to monitor nginx itself (e.g. detecting dead workers and respawning them), but this doesn't tell you that a service is functioning externally (like a monitoring service would).
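As a rough illustration of the health-check side, here is a minimal sketch of an aggregate /health endpoint that an external monitor (or nginx) could poll, written with ASP.NET Core minimal APIs for brevity; the downstream URLs and cluster names are hypothetical placeholders:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hypothetical internal health URLs for each cluster of micro services.
var dependencies = new Dictionary<string, string>
{
    ["web"] = "http://web-lb.internal/health",
    ["queue"] = "http://queue.internal/health",
    ["workers"] = "http://workers.internal/health"
};

var http = new HttpClient { Timeout = TimeSpan.FromSeconds(3) };

app.MapGet("/health", async () =>
{
    var results = new Dictionary<string, bool>();
    foreach (var (name, url) in dependencies)
    {
        try
        {
            var response = await http.GetAsync(url);
            results[name] = response.IsSuccessStatusCode;
        }
        catch
        {
            results[name] = false; // unreachable counts as unhealthy
        }
    }

    // Fail fast: return 503 if any cluster is unhealthy so the monitor can react.
    var healthy = results.Values.All(ok => ok);
    return Results.Json(results, statusCode: healthy ? (int)HttpStatusCode.OK : (int)HttpStatusCode.ServiceUnavailable);
});

app.Run();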
