SignalR performance counters in Azure Web App - asp.net

I've been trying to load test SignalR on my Azure Web App service (e.g. how many connections it can handle before subscribe calls to the hub start failing). I found that the SignalR performance counters (https://www.asp.net/signalr/overview/performance/signalr-performance) can provide me such info. However, I cannot install those performance counters on the Web App service by running
SignalR.exe ipc
Is there a way to install those performance counters on a Web App, or to retrieve them somehow from code?

Performance counters can't be installed on an Azure Web App, as it is provided as a managed container and not a full-fledged IIS on which you can do everything.
To be able to use these performance counters you can redeploy your solution to an Azure VM or to a Cloud Service, keeping in mind you will lose the flexibility that Azure Web App offers.

You can expose the SignalR performance counters in an Azure Web App by using a WebRole, as stated in this article.
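If you do move to a VM or a Cloud Service where the counters can be installed (via SignalR.exe ipc), you can also read them from code with System.Diagnostics. A minimal sketch, assuming the counters are registered under a "SignalR" category and that the instance name matches your deployment (both worth verifying in perfmon first):

using System;
using System.Diagnostics;
using System.Threading;

class SignalRCounterReader
{
    static void Main()
    {
        // Assumes SignalR.exe ipc has already registered the category on this
        // machine; the constructor throws if the category does not exist.
        var connected = new PerformanceCounter(
            "SignalR",                // category installed by SignalR.exe ipc
            "Connections Connected",  // one of the SignalR counters
            "w3wp",                   // instance name - check yours in perfmon
            readOnly: true);

        while (true)
        {
            Console.WriteLine($"Connections Connected: {connected.NextValue()}");
            Thread.Sleep(1000);
        }
    }
}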

Related

How costly is server-side Blazor?

Server-side Blazor apparently maintains a connection to the server using SignalR.
SignalR is the service you need to pay for, and the number of simultaneous SignalR connections in use equals the number of users your app has online at any moment in time.
Do I understand correctly that I will need to pay for the next SignalR tier once I reach the free tier limit - and only because I use Blazor, not because I use SignalR for other purposes?
And two cost-reduction alternatives are:
use client-side Blazor WebAssembly
don't use Blazor at all
We run 3 instances of a P3V3 App Service plan to serve ~450 users (it is a LOB app, so it is kept open by most users for most of the day).
Our SignalR Service cost is double the App Service plan's.
You can use SignalR over WebSockets instead of the Azure SignalR Service, but for some unknown reason it is unstable and you'll see lots of SignalR connection drop-outs.
There's no way to save costs on SignalR - no reservations, no bulk discounts.
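For context, the difference between the two options is essentially one call in startup. A minimal Blazor Server sketch, assuming .NET 6+ and the Microsoft.Azure.SignalR package; drop the AddAzureSignalR() line to serve the circuits over plain WebSockets from your own instances instead:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();

// Paid path: route Blazor circuit traffic through Azure SignalR Service.
// Reads the connection string from Azure:SignalR:ConnectionString by default.
builder.Services.AddSignalR().AddAzureSignalR();

var app = builder.Build();

app.UseStaticFiles();
app.UseRouting();
app.MapBlazorHub();      // the SignalR hub that carries the Blazor circuits
app.MapFallbackToPage("/_Host");

app.Run();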

How to connect a database server running on a local machine as a service to a web application hosted on Pivotal Cloud Foundry?

I am trying to test run a basic .NET web application on Pivotal Cloud Foundry. This web application uses, as its database, a MongoDB server hosted on my local machine. At the moment I am limited to using the cloud infrastructure through the Apps Manager alone.
I have read the Pivotal Cloud Foundry docs about user-provided services, but cannot figure out how the connection is actually made. I have also come across other approaches, such as MongoDB as a service (beta version), but at the moment I am not allowed access to the Operations Manager. I'm looking for an explanation of user-provided services, or of how to implement the service broker API, specifically.
I am new to Mongo as well, so any suggestion on making the connection work by tweaking Mongo would help too. Thanks.
The use case you describe (a web app in PCF connecting to a resource on your local machine) is not recommended.
You can create a MongoDB instance for development purposes in PCF.
$ cf marketplace
...
mlab sandbox Fully managed MongoDB-as-a-Service
...
You can create an mlab service instance and bind it to your application. You will then have a MongoDB instance in PCF that you can use for development purposes.
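For example, using the cf CLI (the instance and app names below are placeholders):

cf create-service mlab sandbox my-mongodb
cf bind-service my-web-app my-mongodb
cf restage my-web-app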
Edit:
In that case a user-provided service might help you: you pass in your remote MongoDB instance configuration, which you can then read in your application, e.g.:
cf.exe cups my-mongodb -p '{"key1":"value1","key2":"value2"}'
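Once the service is bound, the JSON you passed shows up in the app's VCAP_SERVICES environment variable. A minimal sketch of reading it in .NET - the service name my-mongodb and the credential keys are the placeholders from the cups command above:

using System;
using System.Text.Json;

class VcapServicesReader
{
    static void Main()
    {
        // Cloud Foundry injects bound service credentials as JSON here.
        var vcap = Environment.GetEnvironmentVariable("VCAP_SERVICES");
        if (vcap == null) return; // not running on Cloud Foundry

        using var doc = JsonDocument.Parse(vcap);

        // User-provided services are grouped under the "user-provided" key.
        foreach (var service in doc.RootElement.GetProperty("user-provided").EnumerateArray())
        {
            if (service.GetProperty("name").GetString() != "my-mongodb") continue;

            var credentials = service.GetProperty("credentials");
            // The keys are whatever you passed to `cf cups ... -p`.
            Console.WriteLine(credentials.GetProperty("key1").GetString());
        }
    }
}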
You can add your local MongoDB as a CUPS service to your PCF Dev.
Check out the following post.
How to create a CUPS service for mongoDB?

Real-time .NET app monitoring without client polling

We're building a real-time Web-based monitoring system for .NET applications (ASP.NET and Windows executables). Those applications can start long-running operations, and statistics are displayed in real time on a Web page.
For the ASP.NET ones we found SignalR a perfect solution: a long-running operation (even one caused by a simple WebForms form postback) periodically calls client-side JS functions via SignalR RPC to update the monitoring page. But we hit 2 caveats:
In ASP.NET we need to monitor several different apps located in several different virtual directories. How do we push data from those different apps onto a single HTML monitoring page?
Another app is a .NET Windows console executable that runs periodically on a schedule. How do we push its run-time statistics to the same monitoring HTML page? One thing comes to mind - have the EXE store temporary statistics in a DB and have the client pull the same data from the DB, but we'd like to avoid polling. Another - at given intervals the EXE would call the WebApp, passing the data, and the WebApp would pass it on to the client via the same SignalR call. But are there better ways?
One architecture that I've used is a small monitoring collection service, with embedded monitoring clients in every monitored application - ASP.NET, Windows desktop app, console app, Windows service, or otherwise.
The collection service is always running. A webapp then connects directly to the service and requests the state of all monitored apps.
Monitored apps run a small embedded client that feeds application-specific metrics back to the monitoring service. The client can either provide data on events or timers, or the monitoring service can ask for it on a timer itself.
With this, we have a unified monitoring architecture - everything that runs just talks to the monitoring service to send updates, and the health-viewer clients just ask the service for data using a unified protocol.
It's basically the Application Server pattern applied to monitoring, and takes a couple of cues from the design of SNMP.
Very new to SignalR - I didn't realize it has multiple clients for different platforms. We will go with the SignalR .NET client for all the apps: they will all talk to the main SignalR hub, directly invoking server-side methods, which in turn update the monitoring page.
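A minimal sketch of that setup with the classic ASP.NET SignalR .NET client (Microsoft.AspNet.SignalR.Client package); the URL, the MonitorHub name, and its ReportProgress method are hypothetical placeholders:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

class MonitoringReporter
{
    static async Task Main()
    {
        // One hub connection per monitored app (console EXE, service, etc.).
        var connection = new HubConnection("http://monitoring.example.com/");
        var hub = connection.CreateHubProxy("MonitorHub"); // hypothetical hub name

        await connection.Start();

        // Invoke a server-side hub method; the hub then broadcasts the update
        // to the browser clients watching the monitoring page.
        await hub.Invoke("ReportProgress", "MyConsoleApp", 42 /* percent done */);

        connection.Stop();
    }
}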

Determine If Signalr Scale Out Is Necessary

I am having trouble wrapping my head around whether or not my scenario will require scale-out. I have a process in a Windows service that pushes messages to a hub hosted in a web app via the SignalR .NET client. These are user-specific messages and are distributed using the Client(connectionId) approach. If this is deployed in a web farm scenario, will I need to use a scale-out approach? When a user joins, I store that connection info in the database - the URL of the web server and the connection id - so I can target that when I publish messages from the Windows service.
I would use this if it is an option.
http://www.asp.net/signalr/overview/performance/scaleout-with-windows-azure-service-bus
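For reference, wiring up the Service Bus backplane described in that article is essentially one line in OWIN startup (Microsoft.AspNet.SignalR.ServiceBus package); the connection string and topic prefix below are placeholders:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Every farm node publishes and receives messages through the same
        // Service Bus topics, so Client(connectionId) reaches the user no
        // matter which server holds the physical connection.
        string connectionString = "Endpoint=sb://<namespace>...;SharedAccessKeyName=...;SharedAccessKey=...";
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "signalr-backplane");

        app.MapSignalR();
    }
}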

How to start ASP.Net State Service in Azure

While deploying my application in Azure I got this error when a session variable is used.
I know this error is due to the ASP.NET State Server session mode. I started the service on my local PC, but how do I start this service in the Azure environment?
You can use a startup task to spin up the State Service (or really any service, for that matter). However, I would highly recommend that you do not use the Session State Service. I'd recommend looking at the In-Role Windows Azure Cache or the Windows Azure Cache Service (Preview) for session state.
By using the cache service you separate your session concerns from your web servers. It is still in preview, so if that concerns you, look at the In-Role cache, which won't cost any more to run and can be distributed across multiple machines. Also, if you think the latency to pull from the cache service will be too high, then the In-Role cache may turn out to be better for you (you'd have to test to be sure).
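If you do go the startup-task route on a Cloud Service web role, a sketch of the standard pieces - the file names are placeholders, while the Task element and the aspnet_state service name are the usual ones:

In ServiceDefinition.csdef, inside the WebRole element:

<Startup>
  <Task commandLine="Startup\StartStateService.cmd" executionContext="elevated" taskType="simple" />
</Startup>

And Startup\StartStateService.cmd (set the file to copy to the output directory):

rem Enable and start the ASP.NET State Service on this role instance.
sc config aspnet_state start= demand
net start aspnet_state
exit /b 0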
