Setting WebProxy when using NServiceBus + Azure Service Bus - .net-core

I am currently writing a couple of .NET Core applications using NServiceBus with AzureServiceBusTransport, but I cannot find any way of configuring it with a WebProxy. When using Azure Service Bus directly, the configuration option is available on ServiceBusClientOptions, but I can't find any such property or method on AzureServiceBusTransport.
I've found some threads suggesting that it is possible to set a default proxy, but I'm not sure that is something we want to do, as the same application(s) also make calls that do not require a proxy.
Is there any way to configure a WebProxy when using NServiceBus + Azure Service Bus?
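For reference, this is the option the question refers to when using the Azure.Messaging.ServiceBus SDK directly (a minimal sketch; the proxy address is a placeholder, and this does not go through NServiceBus):

using System.Net;
using Azure.Messaging.ServiceBus;

var options = new ServiceBusClientOptions
{
    // A proxy is only honored when the WebSockets transport is used.
    TransportType = ServiceBusTransportType.AmqpWebSockets,
    WebProxy = new WebProxy("http://proxy.example:8080") // placeholder proxy address
};
var client = new ServiceBusClient("<connection string>", options);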

Related

How does one configure KeepAlive on Grpc.AspNetCore.Client?

I'm trying to configure the KeepAlive settings for a gRPC connection using Grpc.Net.Client. The original SDK supports this by passing ChannelOption objects to the Channel constructor, but I can't see any way to set this up through the new .NET Core 3.1 API. Is this possible?
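For reference, this is the Grpc.Core approach the question mentions (a minimal sketch; the option names are the standard gRPC core channel arguments):

using Grpc.Core;

var channel = new Channel("localhost", 50051, ChannelCredentials.Insecure,
    new[]
    {
        new ChannelOption("grpc.keepalive_time_ms", 30000),         // send a keepalive ping every 30 s
        new ChannelOption("grpc.keepalive_timeout_ms", 10000),      // wait up to 10 s for the ack
        new ChannelOption("grpc.keepalive_permit_without_calls", 1) // ping even when no calls are in flight
    });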
Unfortunately, Grpc.Net.Client does not currently provide configurable keepalive support (it requires feature support from the underlying HttpClient).
This might become available in the future; see https://github.com/dotnet/corefx/issues/41852 for details.

How to configure dynamic routing of gRPC requests with envoy, nomad and consul

We use nomad to deploy our applications - which provide gRPC endpoints - as tasks. The tasks are then registered in Consul, using nomad's service stanza.
The routing for our applications is achieved with Envoy proxy: we run central Envoy instances, load-balanced at IP 10.1.2.2.
The decision which endpoint/task to route to is currently based on the host header, and every task is registered as a service under <$JOB>.our.cloud. This leads to two problems:
1) When accessing a service, the DNS name must be registered for the loadbalancer IP, which leads to /etc/hosts entries like
10.1.2.2 serviceA.our.cloud serviceB.our.cloud serviceC.our.cloud
This is partially mitigated by using dnsmasq, but it is still a bit annoying when we add new services.
2) It is not possible to have multiple services running at the same time which provide the same gRPC service. If we decide, for example, to test a new implementation of a service, we need to run it in the same job under the same name, and all services defined in a gRPC service file need to be implemented.
A possible solution we have been discussing is to use the tags of the service stanza to add tags which define the provided gRPC services, e.g.:
service {
  tags = ["grpc-my.company.firstpackage/ServiceA", "grpc-my.company.secondpackage/ServiceB"]
}
But this is discouraged by Consul:
Dots are not supported because Consul internally uses them to delimit service tags.
Now we were thinking about doing it with tags like grpc-my-company-firstpackage__ServiceA, ... This looks really disgusting, though :-(
So my questions are:
Has anyone ever done something like that?
If so, what are recommendations on how to route to gRPC services which are autodiscovered with Consul?
Does anyone have some other ideas or insights into this?
How is this accomplished in e.g. istio?
I think this is a fully supported use case for Istio. Istio will help you with service discovery with Consul, and you can use route rules to specify which deployment will provide the service. You can start exploring at https://istio.io/docs/tasks/traffic-management/
We do something similar to this, using our own product, Turbine Labs.
We're on a slightly different stack, but the idea is:
Pull service discovery information into our control plane. (We use Kubernetes but support Consul).
Organize this service discovery information by service and by version. We use the tbn_cluster, stage, and version labels (like here).
Since the version for us is the SHA of the release, we don't have formatting issues with it. The versions also don't have to be unique, because the tbn_cluster tag defines the first level of the hierarchy.
Once we have those, we use the UI / API to define all the routes (e.g. app.turbinelabs.io/stats -> stats_service). These rules include the tags, so when we deploy a new version (deploy != release), no traffic is routed to it. Releases are done by updating the rules.
(There are even some nice UI affordances for updating those rules for the common case of "release 10% of traffic to the new version," like a slider!)
Hopefully that helps! You might check out LearnEnvoy.io -- lots of tutorials and best practices on what works with Envoy. The articles on Service Discovery Integration and Incremental Blue/Green Releases may be helpful.

Application insights SDK - Map inter-resource dependencies

I'm trying to create an end-to-end Application Map using Application Insights. Note that all dependencies and metrics are captured and sent using the SDK.
Take the following scenario:
Windows service (batch processing) > (calls) WebAPI > (queries db)
I have two Application Insights resources - Windows Service and WebAPI. Both capture metrics, but in isolation. How can I create a dependency using the SDK between resource 1 (i.e. the service) and resource 2 (i.e. the WebAPI)? I need to be able to view the Application Map for resource 1 and see the entire end-to-end view of windows service > web service > db.
I can currently see only windows service > WebApi (App Map resource 1) or WebApi > db (App Map resource 2). I need to bring the two together somehow.
The Application Insights SDK only collects dependencies automatically for HTTP dependencies, and only when the Application Insights profiler is running on the machine (it is often installed on Azure websites through the Application Insights extension).
If you happen to be in one of the situations where the new beta SDK is not collecting dependencies for you, you can do it on your own by writing a little bit of code yourself.
The SDK's auto-collection code is open source and you can use it as a guide to how to track these dependencies. The idea is to append the hash of the target component's instrumentation key to the dependency telemetry's target field, and to set the dependency type to "Application Insights".
Here is how to compute the hash: Compute Hash
Here is how to add it to the target field and set the right dependency type on the dependency telemetry object: Add component correlation to DependencyTelemetryTarget
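A minimal sketch of that idea (the exact hash encoding and target format are assumptions here; the linked SDK source and the caution below are authoritative):

using System;
using System.Security.Cryptography;
using System.Text;
using Microsoft.ApplicationInsights.DataContracts;

static string ComputeIKeyHash(string instrumentationKey)
{
    // SHA256 + Base64 is assumed, following the SDK's correlation helper.
    using (var sha256 = SHA256.Create())
    {
        var hash = sha256.ComputeHash(Encoding.UTF8.GetBytes(instrumentationKey));
        return Convert.ToBase64String(hash);
    }
}

var dependency = new DependencyTelemetry();
dependency.Type = "Application Insights";                      // marks a cross-component call
dependency.Target += " | " + ComputeIKeyHash("<target ikey>"); // append the target component's ikey hash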
A little word of caution. There may soon be a change to the format in which the target field is captured / the name of the dependency type (see this discussion). If and when that happens, it would be an easy enough change for you too.
My recommendation would be to use the same Application Insights resource (i.e. the same instrumentation key) for both your Windows Service and Web API.
You can separate the telemetry for those two services by adding a custom property, indicating the service, to all telemetry you emit. The easiest way to do this is to implement a telemetry initializer (see here for documentation).
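A minimal sketch of such an initializer (the property name "Service" and its value are placeholders):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class ServiceNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Stamp every telemetry item with the name of the emitting service.
        telemetry.Context.Properties["Service"] = "WindowsService";
    }
}

// Register it once at startup:
// TelemetryConfiguration.Active.TelemetryInitializers.Add(new ServiceNameInitializer());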
It is not possible today. Possible ways:
1) Use a single InstrumentationKey and identify by a custom property (as suggested by @EranG)
2) Export the data for both apps and do your own thing
Please vote on this UserVoice; the product team is already considering implementing this functionality in the future.

Rest service .net operation contracts during runtime

Is it possible to add operation contracts to a REST service at runtime?
For instance, we have a service available under the endpoint www.mywebsite.com which already has one operation contract: getName.
Now I want to add two additional operation contracts at runtime, such as getAddress and getNumber.
The main problem is how to let clients know about the new service contracts.
I think you can add new service endpoints at runtime by creating ServiceHost instances. The service will then be listening on some additional addresses, and those endpoints can use different service contracts (interfaces).
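A minimal sketch of that approach (the contract and class names are hypothetical):

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Web;

[ServiceContract]
public interface IAddressService
{
    [OperationContract, WebGet]
    string GetAddress(string name);
}

public class AddressService : IAddressService
{
    public string GetAddress(string name) { return "some address"; }
}

public static class RuntimeHosting
{
    public static ServiceHost StartAddressEndpoint()
    {
        // Create and open an additional host while the application is running.
        var host = new ServiceHost(typeof(AddressService), new Uri("http://localhost:8080/address"));
        var endpoint = host.AddServiceEndpoint(typeof(IAddressService), new WebHttpBinding(), string.Empty);
        endpoint.Behaviors.Add(new WebHttpBehavior()); // REST-style (non-SOAP) dispatch
        host.Open(); // the new endpoint starts listening immediately
        return host;
    }
}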
I don't think you will want to construct and implement interfaces dynamically (though you can in .NET); you will probably use some predefined interfaces. In that case you should somehow inform clients about the new service contract and address. To pass and consume such metadata you can:
1) Use duplex methods (service-to-client call direction) of some 'static' maintenance service - to bring service metadata to the client in a custom format;
2) Use periodic client calls (polling) to this maintenance service - for the same purpose;
3) Use periodic web scanning (by address range) and parse the WSDL to build a client proxy at runtime. You can run "svcutil /sc ..." at runtime to generate code, or use a custom technique (this or this can help);
4) Use an intermediate service to orchestrate both service and clients (a complex but powerful approach; this can help);
5) Use WCF discovery in addition (also not an easy way);
6) Use a combination of these methods.
P.S. You can use one interface but inform the client when the service supports (implements) a certain method. That would be the easiest way, as sketched below.
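A sketch of that last idea, with hypothetical names: the contract stays fixed, and clients first ask which operations are actually implemented.

using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string[] GetSupportedOperations(); // e.g. { "getName", "getAddress" }

    [OperationContract]
    string GetName();

    [OperationContract]
    string GetAddress(string name); // faults until the service enables it
}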

Newbie question for Flex Remoting with WebOrb

Since Flash Builder does not support WCF over HTTPS, I am considering using WebORB remoting as an alternative, but I'm not really sure how Flash is going to know the WebORB location if they are sitting on different servers. I've looked at the destination and source fields, but can't find a field called url on RemoteObject in Flex. Has anyone done something similar?
I know this is an old question, but I thought I'd answer it anyway. You can expose your WCF services to remoting clients (Flash, Flex) via WebORB. WebORB supports both self-hosted and IIS-hosted WCF services. Here are links to instructions for both models.
Self-hosted: http://www.themidnightcoders.com/fileadmin/docs/dotnet/v4/guide/index.html?standalone_wcf_services.htm
IIS-hosted: http://www.themidnightcoders.com/fileadmin/docs/dotnet/v4/guide/index.html?iis_hosted_wcf_services.htm
Both documents address your questions. Here is an example of one approach:
Invoking Self-Hosted Service From Flex/AIR
Flex and AIR clients can use the RemoteObject API to invoke methods on self-hosted WCF services which use the AMF endpoint. There are two approaches for invoking a self-hosted WCF service. The first approach requires less code but creates a dependency on configuration files declaring destinations and channels (the files located in WEB-INF/flex). The second approach has no dependency on the configuration files but requires a few additional lines of code. Consider the examples of the API below:
Approach 1 (with dependency on configuration files):
var remoteObject:RemoteObject = new RemoteObject("GenericDestination");
remoteObject.endpoint = "http://localhost:8000/WCFAMFExample/amf";
remoteObject.GetQuote.addEventListener( ResultEvent.RESULT, gotResult );
remoteObject.GetQuote.addEventListener( FaultEvent.FAULT, gotError );
remoteObject.GetQuote( "name" );
The endpoint URL uniquely identifies the WCF service. Notice the /amf at the end of the URL; it is required for the AMF endpoint. With the approach demonstrated above, the destination name in the RemoteObject constructor is required but not used. As a result, for the code to work, the Flex/AIR application must be compiled with an additional compiler argument:
-services "C:\Program Files\WebORB for .NET\4.0\web-inf\flex\services-config.xml"
I hope this helps.
K
