I'm running a Cloud Pub/Sub PublisherClient instance as a singleton in an ASP.NET web application (.NET Standard 2.0). Does this retain a persistent HTTPS connection to the specified Cloud Pub/Sub topic, and should I call the ShutdownAsync method explicitly, or just let the connection be severed when the app pool recycles?
I'm running this in conjunction with Quartz.NET, publishing messages to Pub/Sub in relatively small batches every 30 seconds. This seems to introduce server affinity in a 3-node Azure Load Balancer cluster: after running for an hour or more, the majority of traffic ends up routed to a single node. I'm not 100% sure about best practices here.
I'm using the Pub/Sub C# NuGet package (V1 1.0) and the Quartz NuGet package (3.0.7).
I assume you’re using this PublisherClient. Per the sample documentation, the PublisherClient instance should be shut down after use. This ensures that locally queued messages get sent. See also the ShutdownAsync documentation.
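For illustration, here is a minimal sketch of the publish-then-shutdown flow (assuming the Google.Cloud.PubSub.V1 package; the project and topic IDs are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;

class PubSubExample
{
    static async Task Main()
    {
        // Placeholders: substitute your real project and topic IDs.
        var topicName = new TopicName("<project-id>", "<topic-id>");
        PublisherClient publisher = await PublisherClient.CreateAsync(topicName);

        // PublishAsync batches messages locally and sends them in the background.
        string messageId = await publisher.PublishAsync("payload");

        // Flush locally queued messages before the process goes away;
        // the timeout bounds how long to wait for pending publishes.
        await publisher.ShutdownAsync(TimeSpan.FromSeconds(15));
    }
}
```

In an ASP.NET app you could trigger ShutdownAsync from Application_End or from an IRegisteredObject's Stop method, so queued messages are flushed deliberately rather than relying on the app pool recycle to sever the connection.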
I have to create 28 stateless services under one Service Fabric application. The role of each stateless service is just to listen to a Service Bus queue, retrieve the messages, and post them to a REST endpoint. Is there a hard limit on how many stateless services can be created in a single Azure Service Fabric application? Do we run into any memory issues by having numerous stateless services?
EDIT: We looked on the server itself and saw that each service takes up about 250 MB of memory.
Nothing comes for free. Each stateless service will use some memory, largely depending on what it is doing. There is no hard limit, but the size of the underlying machines and the scale of the cluster define the practical limits. What is the instance count of each service, anyway?
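For context, a hypothetical sketch of what one of those 28 services might look like (assuming the Microsoft.Azure.ServiceBus package and the Service Fabric StatelessService base class; the connection string, queue name, and endpoint are placeholders):

```csharp
using System.Fabric;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.ServiceFabric.Services.Runtime;

// Hypothetical: pumps messages from one Service Bus queue to a REST endpoint.
internal sealed class QueueForwarderService : StatelessService
{
    private static readonly HttpClient Http = new HttpClient();

    public QueueForwarderService(StatelessServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        var queueClient = new QueueClient("<connection-string>", "<queue-name>");

        queueClient.RegisterMessageHandler(
            async (message, token) =>
            {
                // Forward the message body to the REST endpoint, then complete it.
                var content = new StringContent(Encoding.UTF8.GetString(message.Body));
                await Http.PostAsync("https://example.com/api/messages", content);
                await queueClient.CompleteAsync(message.SystemProperties.LockToken);
            },
            new MessageHandlerOptions(args => Task.CompletedTask)
            {
                MaxConcurrentCalls = 1,
                AutoComplete = false
            });

        // Keep the service alive until Service Fabric asks it to stop.
        await Task.Delay(Timeout.Infinite, cancellationToken);
    }
}
```

The per-service cost is mostly this plumbing plus runtime overhead, and it is multiplied by the instance count of each service across the cluster's nodes.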
I have four different Redis Cache subscriptions set up in Azure. I also have four App Services that each use one of those Redis Cache subscriptions. The four App Services/Redis Cache subscriptions are for the same code base, but different environments. I use a test, staging, live-east coast, and live-west coast environment.
The code running in each app service is exactly the same.
I have an ASP.NET Core Web API project that uses StackExchange.Redis. In my Web API project, I connect to the Redis subscription set up in Azure that corresponds to the environment for the App Service. As part of the startup process for my Web API project, I open up four PubSub channels.
For the test, staging, and live-west coast environments, the four PubSub channels get created and work just fine. I can connect to the Redis console through Azure and run the PUBSUB CHANNELS command and see the four channels I create through code.
For some reason, on the live-east coast Redis subscription, only one of the PubSub channels shows up. I can also verify that only one channel is actually open. My front-end that calls the Web API has logic that publishes messages to the Redis PubSub. These do not work on the live-east coast App Service. If I restart the App Service or reboot Redis, then I can sometimes get all four PubSub channels to show up and work properly. Any time I deploy new code to my live-east coast App Service, after the service boots back up, only one of the channels gets created.
For some reason Redis is closing three of my PubSub channels. Again, this only happens in one of my four Redis subscriptions/App Services. The other three work flawlessly.
I've made sure that all the settings for my four Redis subscriptions and four App Services are identical. I've tried rebooting and redeploying several times and I just can't get that live-east coast Redis subscription to keep all four PubSub channels open.
Has anyone experienced anything like this? Has anyone seen Azure Redis Cache randomly closing their PubSub channels?
It is possible that the clients subscribed to that channel have either died or never successfully connected. Once the subscriber count reaches zero, PUBSUB CHANNELS won't show that channel anymore. Try running PUBSUB NUMSUB <channel_name> to verify that there are subscribers. Also run CLIENT LIST to see how many clients have subscriptions (would be something like sub=1).
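If you'd rather check from code than from the console, here is a hypothetical diagnostic sketch using StackExchange.Redis (the connection string and channel names are placeholders):

```csharp
using System;
using StackExchange.Redis;

class PubSubDiagnostics
{
    static void Main()
    {
        var muxer = ConnectionMultiplexer.Connect("<cache-host>:6380,password=<key>,ssl=True");
        var server = muxer.GetServer(muxer.GetEndPoints()[0]);

        foreach (var channel in new[] { "channel-1", "channel-2", "channel-3", "channel-4" })
        {
            // PUBSUB NUMSUB returns (channel, subscriber-count) pairs.
            var reply = (RedisResult[])server.Execute("PUBSUB", "NUMSUB", channel);
            Console.WriteLine($"{reply[0]}: {reply[1]} subscriber(s)");
        }
    }
}
```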
If I have IRegisteredObjects running in this application, how can I ensure that all IRegisteredObjects receive proper Stop(false) and then Stop(true) calls, and how can I wait until all Stop(true) calls return?
Should I call HostingEnvironment.InitiateShutdown (MSDN)?
In IIS I just rely on the environment, but I am not sure if that functionality is possible at all with self-hosted ServiceStack.
(The question assumes that the AWS platform sends SIGTERM before terminating spot instances, at least in the majority of cases. If AWS just kills instances, IRegisteredObjects are meaningless.)
Here is an example of how to listen to Unix signals and stop a self-hosted ServiceStack host.
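A minimal sketch using Mono.Unix from the Mono.Posix package (assuming appHost is your already-started AppHostHttpListenerBase):

```csharp
using Mono.Unix;
using Mono.Unix.Native;
using ServiceStack; // v4; in v3 the host types live in ServiceStack.WebHost.Endpoints

static class ShutdownListener
{
    // Blocks until SIGTERM/SIGINT arrives, then stops the self-hosted host.
    public static void WaitAndStop(AppHostHttpListenerBase appHost)
    {
        var signals = new[]
        {
            new UnixSignal(Signum.SIGTERM), // what AWS should send before reclaiming a spot instance
            new UnixSignal(Signum.SIGINT)
        };

        // Blocks the calling thread until one of the signals is delivered.
        UnixSignal.WaitAny(signals, -1);

        // Stop listening and dispose the host so services can clean up
        // before the process exits.
        appHost.Stop();
        appHost.Dispose();
    }
}
```

As far as I know, a self-hosted HttpListener app never enters the ASP.NET HostingEnvironment pipeline, so HostingEnvironment.InitiateShutdown does not apply there; you would have to call your IRegisteredObjects' Stop(false) and Stop(true) yourself before disposing the host.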
I'd like to develop a simple solution using .NET for the following problem:
We have several computers in a local network:
10 client computers that may need to execute a program that is only installed on two workstations
The two workstations that are only used to execute the defined program
A server that can be used to install a service available from all previously described computers
When a client computer needs to execute the program, it would send a request to the server; the server would dispatch the job to a workstation when one becomes available, and inform the client computer when the execution has completed.
I'm not very used to web and services development so I'm not sure if it's the best way to go, but below is a possible solution I thought about:
A web service on the server stores the list of tasks with their statuses in queues or in a database
The client computer calls the web service to execute a program and gets a task ID. It then calls the service every second with the task ID to check whether the execution has completed.
The workstations that are available call the web service every second to check whether there is something to execute. If so, the server assigns the task, and the workstation calls the web service back when the execution is complete.
I summarized this in the figure below:
Can you think of a simpler solution?
Have a look at SignalR! You could use it as a messaging framework, and you would not need to poll the service from two different directions. With SignalR you would be able to push execution orders to the service, and the service would notify the client once the execution has been processed. The workstations would be connected with SignalR too: they would not need to ask for execution orders, as the web service would be able to push execution orders to either all workstations or a specific one.
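A minimal sketch of what the hub could look like (this uses ASP.NET Core SignalR; classic ASP.NET SignalR is similar, and all the names here are illustrative):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical hub brokering execution requests between clients and workstations.
public class ExecutionHub : Hub
{
    // Connection ids of workstations that reported themselves as idle.
    private static readonly ConcurrentQueue<string> IdleWorkstations =
        new ConcurrentQueue<string>();

    // Workstations call this on connect (and whenever they become idle again).
    public void RegisterWorkstation() => IdleWorkstations.Enqueue(Context.ConnectionId);

    // A client requests a run; push the order to the next idle workstation.
    public async Task RequestExecution(string taskId)
    {
        if (IdleWorkstations.TryDequeue(out var workstation))
            await Clients.Client(workstation)
                         .SendAsync("ExecuteTask", taskId, Context.ConnectionId);
    }

    // A workstation reports completion; notify the client that asked.
    public Task ExecutionCompleted(string taskId, string clientConnectionId) =>
        Clients.Client(clientConnectionId).SendAsync("TaskCompleted", taskId);
}
```

Clients and workstations both connect to the same hub; the server pushes ExecuteTask to a workstation and TaskCompleted back to the requesting client, so neither side has to poll.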
I am interested in the Pub/Sub paradigm in order to provide a notification system (i.e. like Facebook's), especially in a web application which has publishers (in several web applications on the same IIS web server) and one or more subscribers in charge of displaying the notifications on the web for the front-end user.
I found Redis; it seems to be a great server which provides interesting features: caching (like Memcached), Pub/Sub, and queues.
Unfortunately, I didn't find any examples in a web context (ASP.NET, with Ajax/jQuery), except with WebSockets and Node.js, but I don't want to use those yet (too early). I guess I need a process (subscriber) which receives messages from the publishers, but I don't see how to do that in a web application (pub/sub works fine in unit tests).
EDIT: we currently use .NET (ASP.NET Web Forms) and are trying out the ServiceStack.Redis library (http://www.servicestack.net/).
Actually, Redis Pub/Sub handles this scenario quite well; as Redis is an async, non-blocking server, it can hold many connections cheaply and scales well.
Salvatore (aka Mr Redis :) describes the O(1) time complexity of Publish and Subscribe operations:
You can consider the work of subscribing/unsubscribing as a constant time operation, O(1) for both subscribing and unsubscribing (actually PSUBSCRIBE does more work than this if you are subscribed already to many patterns with the same client).
...
About memory, it is similar or smaller than the one used by a key, so you should not have problems to subscribe to millions of channels even in a small server.
So Redis is more than capable and designed for this scenario, but the problem, as Tom pointed out, is that in order to maintain a persistent connection, users will need long-running connections (aka HTTP push / long poll), and each active user will take up its own thread. Holding a thread isn't great for scalability; technologically you would be better off using a non-blocking HTTP server like Manos de Mono or Node.js, which are both async and non-blocking and can handle this scenario. Note: WebSockets is more efficient for real-time notifications over HTTP, so ideally you would use that if the user's browser supports it and fall back to regular HTTP if it doesn't (or fall back to using Flash for WebSockets on the client).
So it's not Redis or its Pub/Sub that doesn't scale here; the limit is the number of concurrent connections a threaded HTTP server like IIS or Apache can hold. That said, you can still support a fair number of concurrent users with IIS (this post suggests 3000), and since IIS is the bottleneck and not Redis, you can easily add an extra IIS server into the mix and distribute the load.
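Since the question mentions trying ServiceStack.Redis, here is a minimal subscriber sketch with it (the channel name is a placeholder); note that SubscribeToChannels blocks, so it belongs on a dedicated background thread or worker process, not inside a web request:

```csharp
using System;
using ServiceStack.Redis;

class NotificationListener
{
    static void Main()
    {
        using (var redis = new RedisClient("localhost"))
        using (var subscription = redis.CreateSubscription())
        {
            // Invoked for every message published to a subscribed channel.
            subscription.OnMessage = (channel, msg) =>
                Console.WriteLine("{0}: {1}", channel, msg);

            subscription.SubscribeToChannels("notifications"); // blocks here
        }
    }
}
```

The publishing side is then a one-liner from any web request: redis.PublishMessage("notifications", "hello").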
For this application, I would strongly suggest using SignalR, which is a .NET framework that enables real-time push to connected clients.
Redis publish/subscribe is not designed for this scenario - it requires a persistent connection to Redis, which you have if you are writing a worker process but not when you are working with stateless web requests.
A publish/subscribe system that works for end users over HTTP takes a little more work, but not too much - the simplest approach is to use a sorted set for each channel and record the time a user last got notifications. You could also do it with a list recording the subscribers for each channel and write to the inbox list of each of those users whenever a notification is added.
With either of those methods a user can retrieve their new notifications very quickly. It will be a form of polling rather than true push notifications, but you aren't really going to get away from that due to the nature of HTTP.
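For example, a sketch of the sorted-set variant with ServiceStack.Redis (the key naming and last-seen bookkeeping are illustrative):

```csharp
using System;
using System.Collections.Generic;
using ServiceStack.Redis;

static class Notifications
{
    // Score each notification by its timestamp so reads can be range queries.
    public static void Add(IRedisClient redis, string channel, string message)
    {
        long now = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
        redis.AddItemToSortedSet("notifications:" + channel, message, now);
    }

    // On each poll, fetch only what arrived after the user's last check.
    public static List<string> GetSince(IRedisClient redis, string channel, long lastSeenMillis)
    {
        return redis.GetRangeFromSortedSetByLowestScore(
            "notifications:" + channel, lastSeenMillis + 1, double.MaxValue);
    }
}
```

The poller just remembers the highest timestamp it has seen per user and passes it back on the next request, so each poll is a cheap ZRANGEBYSCORE.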
Technically you could use Redis pub/sub with long-running HTTP connections, but if every user needs their own thread with active Redis and HTTP connections, scalability won't be very good.