Can we bind two different WCF-BasicHttp send ports, pointing to different web services, in the same orchestration?
If the two services differ at the contract level (different operations/message types), you'll have to specify two logical ports in your orchestration.
If you want to call the same web service (on contract level), you have three options. In order of my personal preference:
- Direct binding of the logical port
- Role links, defining the party based on your internal logic
- Create two different ports of the same port type and send your message to the right port based on your logic
cheers
I have set up Akka.NET actors running inside an ASP.NET application to handle some asynchronous, lightweight procedures. I'm wondering how Akka scales when I scale out the website on Azure. Say that in code I have a single actor to process messages of type FooBar. When I have two instances of the website, is there still a single actor, or are there now two actors?
By default, every call to the ActorOf method creates a new actor instance. If you call it in two actor systems, you'll end up with two separate actors that have the same relative path inside their respective systems but different global addresses.
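To make that concrete, here is a minimal sketch using the FooBar message type from the question (the actor class and system names are made up, and two systems in one process stand in for the two website instances):

```csharp
using Akka.Actor;

// Two independent actor systems, standing in for two website instances.
var systemA = ActorSystem.Create("web");
var systemB = ActorSystem.Create("web");

// Each ActorOf call creates a distinct actor instance in its own system.
var actorA = systemA.ActorOf(Props.Create<FooBarActor>(), "foobar");
var actorB = systemB.ActorOf(Props.Create<FooBarActor>(), "foobar");
// Both actors have the relative path /user/foobar inside their own system,
// yet they are two independent actors with different global addresses.

public class FooBar { }  // the message type from the question

public class FooBarActor : ReceiveActor
{
    public FooBarActor() => Receive<FooBar>(msg => { /* process it */ });
}
```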
There are several ways to share information about actors between actor systems:
When using Akka.Remote, you can call actors living in another actor system given their addresses or IActorRefs (a sketch follows these requirements). Requirements:
You must know the path to a given actor.
You must know the actual address (hostname or IP) of the actor system on which that actor lives.
Both actor systems must be able to communicate with each other over TCP (i.e. the necessary ports must be open in any firewalls).
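Given those requirements, a remote call might look like this (system name, host, port, and actor path are hypothetical; the remoting configuration is omitted):

```csharp
using Akka.Actor;

// A local ActorSystem with Akka.Remote enabled in its config (not shown).
var system = ActorSystem.Create("client");

// ActorSelection resolves an actor by its full remote path:
// akka.tcp://<system-name>@<host>:<port>/<relative-path-to-actor>
var remoteFooBar = system.ActorSelection("akka.tcp://web@10.0.0.5:8081/user/foobar");
remoteFooBar.Tell(new FooBar());  // FooBar as in the earlier sketch
```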
When using Akka.Cluster, actor systems (also known as nodes) can form a cluster. They exchange information about their location in the network, track joining nodes, and eventually detect dead or unreachable ones. On top of this you can use higher-level components such as cluster routers. Requirements (a configuration sketch follows the list):
Every node must be able to open a TCP channel to every other node (so again, mind the firewalls etc.).
A new incoming node must know at least one node that is already part of the cluster. This is easily achievable with the pattern known as a lighthouse (a dedicated, well-known seed node) or via plugins and third-party services like Consul.
All nodes must use the same actor system name.
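A hypothetical node configuration satisfying those requirements might look like this (addresses and ports are placeholders, and the transport key varies with the Akka.NET version - helios here, dot-netty in newer releases):

```csharp
using Akka.Actor;
using Akka.Configuration;

var config = ConfigurationFactory.ParseString(@"
    akka {
        actor.provider = ""Akka.Cluster.ClusterActorRefProvider, Akka.Cluster""
        remote.helios.tcp {
            hostname = ""10.0.0.5""
            port = 8081
        }
        # at least one node that is already part of the cluster:
        cluster.seed-nodes = [""akka.tcp://web@10.0.0.10:4053""]
    }");

// The system name ("web" here) must match on every node joining the cluster.
var system = ActorSystem.Create("web", config);
```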
Finally, when using a cluster configuration you can make use of Akka.Cluster.Sharding - essentially a higher-level abstraction over an actor's location inside the cluster. With it, you don't need to explicitly say where to find or when to create an actor; all you need is a unique actor identifier. When you send a message to such an actor, it is created ad hoc somewhere in the cluster if it didn't exist before, and rebalanced to spread the workload evenly across the cluster. The plugin also handles all the logic of routing messages to that actor.
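Continuing the cluster sketch above, a rough outline with Akka.Cluster.Sharding (the entity type name and shard count are arbitrary, and FooBar is assumed to carry a string Id identifying the entity):

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

// 'system' is the clustered ActorSystem from the previous sketch.
var region = ClusterSharding.Get(system).Start(
    typeName: "foobar",
    entityProps: Props.Create<FooBarActor>(),
    settings: ClusterShardingSettings.Create(system),
    messageExtractor: new FooBarExtractor());

// No explicit address or creation step: the entity with id "42" is created
// on demand somewhere in the cluster, and the message is routed to it.
region.Tell(new FooBar { Id = "42" });

// Maps each message to an entity id (and, via hashing, to a shard).
public sealed class FooBarExtractor : HashCodeMessageExtractor
{
    public FooBarExtractor() : base(maxNumberOfShards: 100) { }
    public override string EntityId(object message) => (message as FooBar)?.Id;
}
```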
I'm working on a project that has server and client roles. I would like servers and clients to detect each other automatically. At first glance, zeroconf seems to be the best solution, but it would add a dependency, Bonjour, to the project. I use Qt for the GUI, and Qt has native support for broadcast and multicast, so I'm wondering whether it's feasible to use those features to replace zeroconf.
Here is a decent post about how zeroconf works.
I don't think I need zeroconf's features for obtaining an IP address or a hostname. All I want is for one instance to be aware of another instance's existence.
My current plan is to combine broadcast and multicast. Each server chooses a unique multicast group and broadcasts this group to the others. When a client wants to connect to a specific server, it joins the corresponding group.
Some people mention that it's normal for routers to block local broadcast. If that's true, my plan would not be feasible.
Is there any standard way to implement this rather than using zeroconf?
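For reference, the announce/join mechanics described above take only a few lines with plain sockets. A sketch in C# (the group address and port are arbitrary picks; Qt's QUdpSocket offers the same operations):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

var group = IPAddress.Parse("239.255.42.99"); // hypothetical multicast group
const int port = 5007;                        // hypothetical discovery port

// Server side: announce presence (and connection details) to the group.
var announcer = new UdpClient();
var payload = Encoding.UTF8.GetBytes("server-1 tcp://10.0.0.5:9000");
announcer.Send(payload, payload.Length, new IPEndPoint(group, port));

// Client side: join the group and wait for announcements.
var listener = new UdpClient(port);
listener.JoinMulticastGroup(group);
var from = new IPEndPoint(IPAddress.Any, 0);
var data = listener.Receive(ref from);        // blocks until a server announces
```

Note that multicast, like broadcast, normally stays within the local network segment unless routers are configured to forward it, so discovery built this way is LAN-scoped either way.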
I've read of an ESB being used as an SOA approach. What are some other approaches?
This is a very broad question; you may want to focus it.
If you are asking about alternatives to an ESB, you may consider direct access to services instead of going through a service bus.
This approach is often used with a directory or lookup service, like UDDI, to find a service's endpoint location.
When using an ESB, you send the message to the ESB, which is responsible for routing it to the service provider.
When using direct access, the client must know the address of the service provider in advance and sends the message directly to it.
When using a lookup service, you first query for the address of the service provider (much like using DNS to look up IP addresses), then use that address to send the message to the service provider.
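As a trivial illustration of that lookup-then-invoke pattern (the service name, directory contents, and endpoint are all made up; a real system would use UDDI, DNS, or a registry service):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;

// Stand-in for a real directory/lookup service.
var directory = new Dictionary<string, Uri>
{
    ["OrderService"] = new Uri("http://orders.internal/v1/")
};

// Step 1: look up the provider's address.
var endpoint = directory["OrderService"];

// Step 2: send the message directly to the provider - no bus in between.
using var http = new HttpClient { BaseAddress = endpoint };
var response = await http.PostAsync("submit", new StringContent("<order/>"));
```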
Beyond addressing and routing, the ESB may provide other functions that you lose (or have to implement some other way) if you use the direct access approach:
multicast routing - sending the request to more than one service provider
content-based routing - deciding which service provider should receive the request, based on the content of the request
central logging
central policy enforcement
load balancing / fault tolerance
format or protocol translation
buffering and asynchronous service invocation
First, ask yourself which SOA philosophy you are adhering to. If you are in the IBM camp, there are four different products that provide ESB functionality. Each product is optimized for different scenarios, but they all perform broadly similar functions.
Think of it this way: SOA == a car. IBM is one manufacturer. Different products == different types of cars for different types of drivers.
My app needs to access two network cards. One to receive data (eth0) and another to send data (3G modem).
Normally, the kernel forces the app to work with only one card at a time.
Is there anything I can do to make this work?
Thank you.
The kernel does no such thing.
The kernel will route your traffic to the most appropriate destination based on the routing table and the networks each card is assigned to. However, if you are using TCP, your bidirectional communication will use only one route, as there is only one address associated with that connection.
If you are trying to implement a multi-homed send/receive system, this is not supported in plain TCP - you would need a different protocol, such as SCTP, likely implemented in the kernel.
The kernel is not forcing you to use a single interface; it just chooses a default one if you don't specify otherwise. You can select a specific interface by passing its IP address to the bind() call. To get a list of the available interfaces and their names, use the SIOCGIFCONF ioctl.
Here's an example: http://techpulp.com/2008/10/get-list-of-interfaces-using-siocgifconf-ioctl/
You can bind two different UDP sockets to separate NICs with bind(2), then send on one and listen on the other.
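The same idea sketched in C# (.NET sockets wrap the same bind() call underneath; the two local addresses are hypothetical values for eth0 and the 3G modem):

```csharp
using System.Net;
using System.Net.Sockets;

// Bind the receiving socket to eth0's address and the sending socket to
// the modem's address, so each uses its own interface.
var receiver = new UdpClient(new IPEndPoint(IPAddress.Parse("192.168.1.10"), 9000));
var sender   = new UdpClient(new IPEndPoint(IPAddress.Parse("10.64.0.7"), 0));

var from = new IPEndPoint(IPAddress.Any, 0);
var data = receiver.Receive(ref from);                // arrives via eth0
sender.Send(data, data.Length, "example.com", 9000);  // leaves via the modem
```

One caveat: binding fixes the source address, but on Linux the routing table still picks the egress interface; SO_BINDTODEVICE or policy routing can pin it down if that isn't enough.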
Just a technical question -
Can two or more SNMP agents be run on the same port (on the same machine)?
My first instinct would be no since host:port identifies an instance of an application but I'm not sure.
Thank you!
Technically, if the OS supports it, the SO_REUSEADDR / SO_REUSEPORT options may be set on a socket to allow other processes to bind to the same address/port, letting multiple processes receive messages there. But every process would have to set the option, and I doubt any agent implementations do, because it would make no sense: both agents could end up responding to a single request, and managers aren't equipped to handle that.
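For completeness, setting the option looks roughly like this (shown via .NET, which exposes SO_REUSEADDR as SocketOptionName.ReuseAddress; the exact sharing semantics vary by OS):

```csharp
using System.Net;
using System.Net.Sockets;

var sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
// Must be set before Bind, and by every process sharing the port.
sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
sock.Bind(new IPEndPoint(IPAddress.Any, 161));
// Which process receives a given datagram is now OS-dependent;
// that ambiguity is exactly why this is a bad idea for SNMP agents.
```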
However, you can instead run an SNMP proxy on the primary address/port, configured to forward requests to one of multiple agents based on query, security, or (with SNMPv3) context/engine-ID parameters, and forward the responses back.
Alternatively, using AgentX, you can have an SNMP master agent running on the primary address/port and one or more SNMP sub-agents connected to it. The master agent dispatches requests to the sub-agents as appropriate and merges the results into a single response, so that to the outside world it appears as a single agent. Each sub-agent typically handles a different branch of the OID space (one sub-agent implementing certain modules, another implementing others).
But taking two agents designed to own the address/port exclusively and forcing them to share it through the REUSE options, while possibly workable, would not be wise.
You can run multiple agents on the same host with the same port if they bind to different IP addresses (you can use a netsh script to add the addresses).
Personally, I use the nsoftware DLL (SecureSNMP V8 .NET edition) to do this.
You can look at this post: Multiple SNMP Agents with nsoftware dll
No, two agents cannot both run on the same port as separate applications, for the reasons you assumed (except with a brittle packet-sniffing hack, which we'll not go into).
However, two agents can be reached through the same port if some mechanism owns the actual port and distributes requests based on the MIB being queried. For example, the Windows SNMP service does this, allowing any number of SNMP agents to be added as "extensions" through the registry (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ExtensionAgents) by writing them as DLLs and using the snmp.h headers from the Platform SDK.
You are correct: ports can't be shared.
If both agents were designed by you, the answer can be different.
Consider the HTTP and FTP cases: we can use host names to distinguish multiple sites on the same port, so why can't we do the same for SNMP?
We can create a dispatcher that monitors port 161 for incoming traffic, with multiple real agents behind it to handle that traffic. We are free to design how to distinguish them; personally I prefer the FTP virtual-host-name manner, using | to distinguish agents.
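A bare-bones sketch of such a dispatcher (the backend agent's port is a placeholder, and the selection logic - the interesting part - is left as a comment):

```csharp
using System.Net;
using System.Net.Sockets;

var listener = new UdpClient(161);  // the one real socket on the SNMP port
var backend = new IPEndPoint(IPAddress.Loopback, 16101); // hypothetical agent

while (true)
{
    var manager = new IPEndPoint(IPAddress.Any, 0);
    var request = listener.Receive(ref manager);

    // Parse the request here (community string, context, virtual host
    // marker, etc.) to decide which backend agent should receive it.
    using var relay = new UdpClient();
    relay.Connect(backend);
    relay.Send(request, request.Length);

    var agent = new IPEndPoint(IPAddress.Any, 0);
    var response = relay.Receive(ref agent);
    listener.Send(response, response.Length, manager);
}
```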
Maybe I can create a demo for #SNMP Suite in the future.
But if you need to work with existing agents on the same server, then such flexibility is lost.