NServiceBus doesn't start picking up messages until an endpoint is "touched" - asp.net

So when I run my two services locally, I can hit service A, which sends a command to service B, which picks it up and processes it. Pretty straightforward. However, when I publish these to my web server and send a request to service A, the command reaches service B (I can see the message in service B's queue) but never gets picked up and processed. I created an endpoint on service B that simply returns an OK response; if I call this endpoint, effectively "touching" the service, everything kicks on and the messages get processed from that point on.
I figured maybe this had something to do with late compiling, so I changed the publish settings to precompile on publish, but I get the same result.
Is there a way to have the service start processing as soon as it is published? Also worth noting that both services are Web API 2.

Another option (probably more “standard”) would be to move the “handlers” into a Windows Service instead of a web application.
For this Windows Service you can leverage the NServiceBus Host, which turns a standard class library into a Windows Service for you. They have a good amount of documentation about this here: https://docs.particular.net/nservicebus/hosting/nservicebus-host/?version=Host_6
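As a rough sketch against the Host 6 API linked above (assuming the NServiceBus.Host package, where MSMQ is the default transport), the host scans the class library for a class like this and runs the endpoint as a console app or installed Windows Service:

    using NServiceBus;

    // Found by assembly scanning; install as a service with MyHost.exe /install.
    // AsA_Server applies typical server defaults for the endpoint.
    public class EndpointConfig : IConfigureThisEndpoint, AsA_Server
    {
        public void Customize(BusConfiguration configuration)
        {
            // Endpoint-specific configuration (serialization, conventions, etc.) goes here.
        }
    }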
I would argue that this is more stable, as it separates sending commands (Web Application / Web API) from processing commands and publishing events (NSB Host). The host can sit on the web server itself, or you can put it on a different server.
Our default architecture is to have a separate server run our NSB Hosts, since you scale web applications and NSB Hosts differently. If you run the NSB Host on the web server, a web app that gets too much traffic can take the NSB Host's processing down with it. You can always start simple with one server for both, monitor it, and move things around as traffic increases.

Not sure if this is the "right way" to do things, but what I ended up doing is setting up each site to be always running and auto-initialize:
App pool set to "Always Running"
Website "Preload Enabled" set to true
Web.config has a webServer entry for application initialization with doAppInitAfterRestart="true" (see the sketch below)
"Application Initialization" role added to the web server
With those things being set, the deployment process is basically to publish the site and iisreset. If there are better options, I'm still looking :)
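For reference, the web.config piece of the list above looks roughly like this (the initializationPage value is a placeholder for whatever URL warms your app):

    <system.webServer>
      <applicationInitialization doAppInitAfterRestart="true">
        <add initializationPage="/" />
      </applicationInitialization>
    </system.webServer>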

Related

Specific IIS user not working with TLS 1.2

We have run into a problem with IIS, TLS 1.2 and domain users. I searched SO and other forums, but none of the possibly related topics led me to a solution.
Please don't judge the configuration, it wasn't invented by me, I just need to solve this problem.
What happens is the following:
We have an old web application that opens an executable with Process.Start, and that executable calls an external web service. This used to work fine with TLS 1.0, but in the near future the external web service will demand TLS 1.2.
So now we are trying to make this work, and we are almost there: we upgraded the executable's .NET Framework version to 4.7.2 and enabled TLS 1.2 on the Windows Server 2008 R2 machine. The web app's .NET Framework version is set to 4.6.1. It seems to me that that should be all there is to it.
And indeed, when we run the executable standalone (not called by the web app) from the server, owned by the domain user logged on to the server (via RDP), everything works as expected; we receive the proper answer from the web service.
Also, when the executable is called by the web app and the IIS application pool identity is set to a built-in account (ApplicationPoolIdentity), everything works as expected as well.
But when we set the application pool identity to a dedicated domain account (a different one than the one that ran the executable earlier), the trouble begins. Connecting to the web service fails with the following exception:
    System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at https://<some url>/<some webservice name>.asmx that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.
    ---> System.Net.WebException: Unable to connect to the remote server
    ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond ...
Now the question is, of course: what could be causing this?
I'd like to believe that the failing domain account is configured correctly, but it seems it is not. Or could it be something else, something I don't even know exists?
EDIT:
I managed to narrow it down to a permissions issue: when the dedicated domain account runs the application standalone, it works as it should. When the dedicated account runs it from within the IIS context (started by the web app), it doesn't work; but when the dedicated account is given admin rights, it again works as expected.
That leaves me with the question: what additional permissions does IIS need to allow this setup? Maybe in combination with TLS 1.2 thingies.
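One thing we can still try is pinning TLS 1.2 in code in the executable before the first request, to rule out protocol-selection differences between the accounts; a minimal sketch (diagnostic only, not part of our current setup):

    using System.Net;

    // Force TLS 1.2 for all outgoing requests from this process.
    // (On .NET 4.7+ the OS defaults normally apply, so this is purely a diagnostic override.)
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;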
Any help would be greatly appreciated.

Configuring Azure's Event Hub to receive events from ASP.NET MVC web application

Can someone point me in the right direction on how to configure the network settings within Event Hubs so I can successfully send data from the ASP.NET MVC application, both when running locally (localhost) and when the application is deployed to Azure's dev/QA/production web environments?
I have built a proof-of-concept console application in .NET locally, added my IP address within the Networking/Firewall settings on the Event Hubs side in Azure, and have no issue sending and receiving data from my local machine.
But when I try the same code in the ASP.NET MVC web application, the page just hangs on the CreateBatchAsync() method and never returns an exception:
    // Requires: using System.Text; using Azure.Messaging.EventHubs; using Azure.Messaging.EventHubs.Producer;
    var producerClient = new EventHubProducerClient(connectionString, eventHubName);
    EventDataBatch eventBatch = await producerClient.CreateBatchAsync(); // hangs here: first network operation
    eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Event 1z at " + DateTime.Now.ToString())));
    await producerClient.SendAsync(eventBatch);
Any help would be appreciated.
Thanks
The call to CreateBatchAsync is the first point in your code that requests a network operation; consequently, it triggers creation of the connection and link to the Event Hubs service. The connection attempt has a timeout associated with it, which is 60 seconds in the default configuration you're using. Depending on the error encountered, you may see retries take place, each of which has its own 60-second timeout. With the default configuration, that looks like a 3-minute hang (60 seconds × 3 attempts).
The most common connection issue in an enterprise environment is that the ports needed for AMQP over TCP (5671/5672) are not open. Switching the transport to AMQP over WebSockets often helps, as it uses port 443 and can be routed through a proxy if needed.
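Both knobs live on the client options; a sketch with illustrative values (Azure.Messaging.EventHubs package):

    using System;
    using Azure.Messaging.EventHubs;
    using Azure.Messaging.EventHubs.Producer;

    var options = new EventHubProducerClientOptions
    {
        ConnectionOptions = new EventHubConnectionOptions
        {
            // Tunnel AMQP over port 443 instead of 5671/5672.
            TransportType = EventHubsTransportType.AmqpWebSockets
        },
        RetryOptions = new EventHubsRetryOptions
        {
            // Shorten the 60-second per-attempt timeout to fail fast while diagnosing.
            TryTimeout = TimeSpan.FromSeconds(20)
        }
    };

    var producerClient = new EventHubProducerClient(connectionString, eventHubName, options);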
For more information, you may want to look at the sample for configuring Event Hubs clients and the Event Hubs network troubleshooting guide.

Where to host SignalR when long-running service via WCF is backend

I'm sure that was a confusing enough title.
I have a long-running Windows service dealing with things happening in the world. This service is my canonical source of truth for the rest of my system. Now I want to slap a web interface onto this so the clients can see what is actually going on. At first this would simply be an MVC5 application with some Web API stuff. Then I plan to use SignalR 2.0 and Ember.js to make this application more interactive and "realtime".
The client communicates with the Windows service over named pipes using WCF. A client (such as a web app) could request an instance of, for example, IEventService, would be given a WCF proxy client, and could read about events through this interface. Simple enough.
However, a web application basically just exists in the sense that it responds to requests from the user. The way I understand it, this is not the optimal environment for a long-lived WCF client proxy to raise events in, and thus I wonder how to host my SignalR stuff. Keep in mind that a user would log in to the MVC5 site, but through the magic of SignalR, they will keep interacting with the service without necessarily making further requests to the website.
The way I see it, there are two options:
1) Host the SignalR stuff as part of the web app. Find a way to keep it "long-running" while it has active clients, so that it can react to events on the WCF client proxy by passing information out to the connected web users.
2) Host the SignalR stuff as part of my Windows service. This is already long-running, but I know nada about OWIN and what this would mean for my project. Also, the SignalR client will have to connect to a different port than the one the web app is served from, I assume.
Any advice on which is the right direction to go in? Keep in mind that in extreme cases, a web user would log in when they get to work in the morning and have only SignalR traffic going back and forth (i.e. no web requests) for a full work day before logging out. I need them to keep up with realtime events all that time.
Any takers? :)
The benefit of self-hosting as part of your Windows service is that you can integrate the calls to clients directly with your existing code and events. If you host the SignalR server separately, you'd have another layer of communication between your service and the SignalR server.
If you've already decided on using WCF named pipes for that, then it probably won't make a difference whether you self-host or host in IIS (as long as it's on the same machine). The SignalR server itself is always "long-running" in the sense that as long as a client is connected, it will receive updates. It doesn't require manual requests from the user.
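For option 2, the OWIN part is small; a minimal self-host sketch (assuming the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages; the URL and port are placeholders):

    using Microsoft.Owin.Hosting;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Maps SignalR hubs at /signalr on the self-hosted endpoint.
            app.MapSignalR();
        }
    }

    // In the Windows service's OnStart:
    //   IDisposable signalRHost = WebApp.Start<Startup>("http://+:8080/");
    // ...and Dispose() it in OnStop.

That also matches the assumption in option 2: browser clients would connect to SignalR on that port, separate from the IIS site.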
In any case, you'll probably need a web server to serve the HTML, scripts and images.
Having clients connected for a day shouldn't be a problem either way, as far as I can see.

Is it possible to start WCF UDP Listener in IIS on a shared host without requiring a user to access HTTP first?

I have created a sweet ASP.NET 4.0 UDP listener via WCF that starts on Application_Start. As usual, everything is hunky dory on my local machine: using the VS dev environment, set not to open any page upon debug, the listener starts without browsing to anything. However, whenever I deploy to my shared host, I must access the site via a web browser before the listener will start. I do not have access to the IIS control panel, but I can make some limited settings changes to IIS via the "WebsitePanel" software. I believe the shared host uses IIS 7.5.
Is there a better way to solve this than creating a polling service on my home PC that sends an HTTP request to the shared host every so often to kick off the listener?
Requirements
The client sends UDP packets over a configurable port. I cannot change anything other than the IP and port that the client uses to connect.
The solution must work with my shared host, since I cannot afford a VPS at this time; otherwise I would've created a Windows service. I got around creating a Windows service before by creating a polling service via WCF in Application_Start, but that only worked because the info the users would see had to be on a webpage, so Application_Start would always be called. In this case, the users/clients are not necessarily accessing the webpage.
Ideas:
Somehow pull this into a .svc, so that when the .svc is accessed by the client, it kicks off the listener for everyone else. But how can a .svc running on port 80 accept UDP calls? I'm also not sure the client can connect to anything more than an IP:PORT (I don't think it would accept a .svc path like URL.com/awesomeListener.svc).
Any suggestions? Thank you so much.
If you are running ASP.NET 4.0, you can set the application to auto-start, which will fire Application_Start:
http://weblogs.asp.net/scottgu/archive/2009/09/15/auto-start-asp-net-applications-vs-2010-and-net-4-0-series.aspx
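The wiring from that article boils down to a provider class that IIS instantiates when the app pool starts, registered under <serviceAutoStartProviders> in applicationHost.config; a sketch (the class name is a placeholder):

    using System.Web.Hosting;

    // IIS calls Preload when the app pool spins up, before any HTTP request arrives.
    public class UdpListenerPreload : IProcessHostPreloadClient
    {
        public void Preload(string[] parameters)
        {
            // Start the WCF UDP listener here instead of waiting for Application_Start.
        }
    }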

Asp.net "background service" listening to MSMQ not working after IIS site is stopped/started

We've implemented a "background service" in our ASP.NET web app that receives messages from MSMQ at random intervals, without requiring an HTTP request to start up the application. We auto-start this web app using a serviceAutoStartProvider.
Everything works when IIS initially starts up, when the server is rebooted, and so on; we receive messages just fine. BUT if we just stop the site in IIS (not touching the application or app pool), the application stops receiving MSMQ messages. And when we start the web site again, the serviceAutoStartProvider is not called again, so our app does not start listening to MSMQ messages again!
If we issue a HTTP request against the web app after the IIS site has been stopped and started again, it starts listening to MSMQ messages again.
Shouldn't our "background service" web app continue to listen to MSMQ messages even if the IIS site is stopped? It won't get any requests, but I think it should continue to run.
What exactly happens in an ASP.NET application/app pool when the IIS site is stopped? Are any events fired that we can hook into? The app pool claims to be "started" in IIS Manager, but no code is running in it.
Why isn't our serviceAutoStartProvider called when the site is started again? I believe it is "by design", since the application isn't really stopped. But the application isn't running either; it has to be woken up by an actual HTTP request.
When the IIS web app shuts down (e.g. because no new HTTP(S) requests arrived within the idle timeout), the .NET app domain (within the app pool worker process) completely closes and unloads. This includes all background threads, including those used by the .NET thread pool.
A web app can be configured with a longer (or no) idle timeout, and then background worker threads can continue to process work.
But it would be better to run such workers in a dedicated service process managed completely separately.
Or, even better, use IIS application hosting with WCF to create the MSMQ listener. I understand that in this case the integration of the Windows Process Activation Service (WAS) with IIS would restart the web app if a new message arrived after it had been shut down.
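For context, a WAS-activated WCF MSMQ service is built around one-way operations over netMsmqBinding; a hedged sketch (contract and operation names are hypothetical):

    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderProcessor
    {
        // Queued (MSMQ) endpoints require one-way operations.
        [OperationContract(IsOneWay = true)]
        void SubmitOrder(string payload);
    }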
I would host the MSMQ listener in a Windows service. Why couple it to IIS?
UPDATE
Actually, what I mean is: why couple the MSMQ listener and ASP.NET in the same app pool?
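If you do move it to a Windows service, the listener itself stays small; a sketch using System.Messaging (the queue path is a placeholder):

    using System.Messaging;

    // Asynchronous receive loop for a private queue; start this from OnStart.
    var queue = new MessageQueue(@".\private$\orders");
    queue.ReceiveCompleted += (sender, e) =>
    {
        Message message = queue.EndReceive(e.AsyncResult);
        // ... process message.Body here ...
        queue.BeginReceive(); // re-arm for the next message
    };
    queue.BeginReceive();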
You can now use the "Application Initialization" feature of IIS 8 (and IIS 7.5); more information, including version availability and usage documentation, can be found at:
http://www.iis.net/learn/get-started/whats-new-in-iis-8/iis-80-application-initialization
This replaces the "Application Warm-Up Module", which is no longer supported, and gives us proper control over component/service initialization in an "always running" scenario.
