The difference between "SDN Platform" and "SDN Controller Platform" - networking

What is the difference between "SDN Platform" and "SDN Controller Platform"? Do these two mean the same thing? Could someone please explain this to me?

There is not much literal difference between these terms, and they are usually used interchangeably. The SDN controller platform is the controller software itself, which takes care of managing the entire network.
The controller is called a platform because it gives the programmer the flexibility to add specific modules (e.g., a routing module that routes only multicast packets using some very different routing algorithm) and run them on the controller. A controller used without any specific modules essentially sends all packets everywhere (although this also depends on the controller implementation). The controller is a platform on which you are expected to do things and make your network behave the way you want.
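For a concrete feel of what such a module looks like, here is a minimal sketch using the Ryu controller framework (Ryu is only one example of a controller platform, and the class name below is made up): it registers a packet-in handler that floods every packet, i.e. the "send everything everywhere" default behaviour mentioned above.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FloodModule(app_manager.RyuApp):
    """Minimal controller module: flood every unmatched packet out of all ports."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # If the switch did not buffer the packet, send its bytes back with the packet-out.
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        # Flood the packet out of every port ("send everywhere").
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions, data=data)
        dp.send_msg(out)
```

A module like this would typically be run with ryu-manager; a real routing module would replace the flood action with its own forwarding decisions.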
Sometimes "SDN platform" is also used for the SDN infrastructure, which includes the topology, the switches, one or more controllers and so on, essentially describing the whole network infrastructure.
What you really need to know are the SDN components: controllers, switches, protocols, etc.

I don't see an exact difference between the two terms you have mentioned.
In brief, the SDN platform simply denotes the separation of the data plane and the control plane. I know this can be confusing, but think of it this way: you separate the brain of a switch and put it in another layer.
Generally SDN consists of three layers (from the top):
1) Applications --- consists of network applications, e.g. firewalls, IPsec, etc.
NB API --- no defined standard yet
2) Control --- consists of the controller
SB API --- this is where OpenFlow is used
3) Infrastructure --- consists of network devices, e.g. switches
The control layer connects to the other two layers using the NB and SB APIs.
SDN simply gives the network admin the advantage of configuring multiple network devices: using an SDN controller, they can configure multiple network devices from a single place.
Hope this helps :D

From a software perspective, the term 'platform' emphasises how a piece of software is used as the enabler for other applications.
An SDN controller can be any application that pushes policies down to the underlying network. An SDN platform, on the other hand, should provide extended tools, APIs and hooks to facilitate running third-party applications on top of it.
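As a small illustration of that distinction, a third-party application written on top of a controller platform typically talks to the controller's northbound API rather than to the switches directly. A minimal sketch, assuming a Floodlight-style controller whose REST API listens on port 8080 (the path below is Floodlight's; other platforms expose different northbound APIs, and the address is hypothetical):

```python
import requests

# Hypothetical controller address; adjust to wherever your controller runs.
CONTROLLER = "http://192.168.110.2:8080"

# Ask the platform which switches it currently manages.
switches = requests.get(f"{CONTROLLER}/wm/core/controller/switches/json").json()
for sw in switches:
    print(sw)
```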

SDN is one of the best open-source platforms for developing projects. The SDN platform is simply the networking platform, which provides numerous benefits for working on topologies; it gives the user the option to install any controller of their choice and work with it.
The SDN controller platform is the part of SDN that provides a base platform for a number of different controllers. They work through APIs, and the architecture of the SDN controller platform differs from controller to controller.

Related

Monolithic to Microservices architecture

My company has a .NET project which serves the following use cases:
It listens on a WebSocket Port with certificate based client authentication.
It listens on another WebSocket Port with authorization header based client authentication.
It listens on a TCP Port with certificate based client authentication.
On the above three ports, different sets of client devices are connected.
Now my company wants to convert this application to .NET Core so that it can be deployed on Linux servers to save deployment cost. As an architect, I am thinking of adopting a microservices architecture along with migrating the application to .NET Core, so the above application could be broken down into three microservices based on the above use cases.
AFAIK, a microservices architecture means breaking down your application into smaller services, each of which serves a particular use case. Is breaking this complete application into three different microservices correct or not?
My organization is very new to microservices architecture. I just want to know whether this thinking is architecturally correct.
Thanks in advance for your help.
Generally I'd try to break things down based on business domains (or business capabilities) instead of technical reasons.
A good place to start might be reading about domain driven design and bounded contexts - see here - there's some good further reading at the end of that link.
Yes, you are thinking in the right direction.
Here are my suggestions:
You should go for .NET Core and Docker to implement your microservices in a better way.
There are multiple cases in which you will want Docker containers in this scenario:
1: Run the same image in multiple containers
2: Manage different containers
3: Run the same image in multiple environments
4: Tag and run images with different versions
And there are other reasons to go for microservices with Docker:
Microservices are smaller in size
Microservices are easier to develop, deploy, and debug, because a fix only needs to be deployed onto the microservice with the bug, instead of across the board
Microservices can be scaled quickly and can be reused among different projects
Microservices work well with containers like Docker
Microservices are independent of each other, meaning that if one of the microservices goes down, there is little risk of the full application shutting down.
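To make the split concrete: each of the three listeners becomes its own small service in its own container, with nothing shared except the contracts. Below is a rough sketch of the third use case (TCP with certificate-based client authentication) as a standalone service. It is written in Python with the standard library purely to keep the sketch short; your real services would of course be ASP.NET Core projects, and the certificate file names are hypothetical.

```python
import asyncio
import ssl

async def handle_device(reader, writer):
    # One service, one responsibility: talk to the TCP devices only.
    data = await reader.read(1024)
    writer.write(b"ack:" + data)
    await writer.drain()
    writer.close()

async def main():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")      # hypothetical paths
    ctx.verify_mode = ssl.CERT_REQUIRED                   # require a client certificate
    ctx.load_verify_locations("device-clients-ca.crt")    # CA that signed device certs
    server = await asyncio.start_server(handle_device, "0.0.0.0", 9000, ssl=ctx)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

The two WebSocket listeners become two more services of the same shape, each with its own Dockerfile, so a bug fix or a scale-out affects only the service that needs it.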
Do some more research on this and you can comfortably go for a microservices architecture.
This may not be answering your question, but I thought it could be useful, especially in light of the fact that your organization is very new to micro-services.
I would recommend carefully evaluating the advantages and especially the disadvantages (complexity) that a microservices architecture introduces.
Just a few examples of things that you will need to think about are log aggregation, communication between services (sync vs async), E2E and integration tests, eventual consistency, etc. Obviously you may end up not having to deal with some of these, but all of them do become a lot more complicated with micro-services.
There should be good business justification to take on the additional complexity (=cost).
Microservices shouldn't be measured on how small they are but on how autonomous and independent they are. Microservices are best designed around business and domain context, as discussed in detail in Identifying domain boundaries.
Since you are starting to build microservices in .NET Core, why not consider serverless microservices? You have plenty of options in the major clouds (AWS, Azure) to build serverless microservices. Serverless functions are quicker to build, you get a generous free tier, and you don't have to manage clusters. Is there a specific reason you would want to use Kubernetes? You can read more about cloud-native and serverless here: Design Cloud native and Serverless.
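If you do go serverless, each use case maps to its own function behind a managed endpoint instead of a long-running listener. A minimal sketch of the authorization-header use case as an AWS Lambda handler behind API Gateway (the handler name and event shape follow standard Lambda conventions; the actual token validation is left out):

```python
import json

def lambda_handler(event, context):
    # API Gateway passes the client's headers through in the event.
    auth = (event.get("headers") or {}).get("Authorization")
    if not auth:
        return {"statusCode": 401,
                "body": json.dumps({"error": "missing Authorization header"})}
    # ... validate the token and handle the device's request here ...
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```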

Does every web service have a service broker?

I am studying SOA and web services for a scientific paper. My current understanding is that every SOA architecture needs a service broker.
Web services are concrete implementations of an SOA, so do they therefore have a service broker? For example, I create a web service in ASP.NET which returns "hello world". By creating it, do I create a service broker too?
Don't be fooled by answers that are copy-pasted from Wikipedia :-)
Webservices are concrete implementations of a SOA
This assumption/statement is wrong. At least, there is no direct relationship between SOA and web services. SOA is an architectural paradigm, whereas a web service is a concrete technology (stack) based on WSDL and its result, the SOAP protocol. Nothing more. Web services may help to establish the loosely coupled service landscape which the SOA paradigm expects, but you could also build up an SOA landscape with other technology stacks (self-written hacks, RMI, even something based on REST, for instance).
Repository
The thing is: when you start building up your SOA landscape, you (or others) will code services (i.e. web services) where each service has a technical contract (WSDL, WADL, ...) as a base for the implementation. Your clients will ask for it and you will want to store it somewhere. This somewhere is usually a service repository. You could develop your own, use the UDDI standard, or just buy one of the products from the big vendors (IBM, TIBCO, Oracle, etc.).
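To illustrate why the contract matters: once a client has located the WSDL (for example through the repository), it can drive its calls directly from it. A minimal sketch using the Python zeep library, with a hypothetical WSDL URL and operation name:

```python
from zeep import Client

# Hypothetical WSDL location, e.g. as published in a service repository.
client = Client("http://repository.example.com/services/HelloService?wsdl")

# Operations come straight from the contract; 'sayHello' is made up here.
print(client.service.sayHello("world"))
```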
Broker
A message broker within the SOA context is a piece of software which supports the decoupling of the connected partner systems. Commonly it's called an ESB (enterprise service bus). One of the goals of the SOA paradigm is that the services can be used by anyone (reusability). Therefore you don't want to connect your services via P2P connections (aka spaghetti architecture): just imagine that one of the service participants changes its hardware/IP; this would be a nightmare for all the connected partner systems. That's why the ESB was invented, which sits between the service consumer and the service provider.
Typically, these ESB products support a lot of technologies, stacks and APIs like HTTP, JMS, REST, etc.
Source: I have been working with a self-claimed SOA landscape and thousands of different (web) services for a big company for a long time now.
A Web service is a set of related application functions that can be programmatically invoked over the Internet. Businesses can dynamically mix and match Web services to perform complex transactions with minimal programming. Web services allow buyers and sellers all over the world to discover each other, connect dynamically, and execute transactions in real time with minimal human interaction.
Web services are self-contained, self-describing modular applications that can be published, located, and invoked across the Web.
A network component in a Web Services architecture can play one or more fundamental roles: service provider, service broker, and service client.
Service brokers register and categorize published services and provide search services. For example, UDDI acts as a service broker for WSDL-described Web services.

Does SOA need a network?

Do the services in a service-oriented architecture need to communicate across a network interface? Or can the services sit on the same computer without even touching localhost? If so, are there any examples of this?
Yes, you can: service orientation is an architectural style which you can implement in many ways. An example of an in-memory implementation is the OSGi standard; see this presentation for an example.
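As a toy illustration (this is not OSGi, just a sketch of the idea): services can be registered against a contract name in an in-process registry and looked up by consumers, with no network or localhost socket involved at all.

```python
# In-process "service registry": providers register under a contract name,
# consumers look the service up; everything stays inside one process.
registry = {}

def provide(contract, implementation):
    registry[contract] = implementation

def lookup(contract):
    return registry[contract]

class GreetingService:
    def greet(self, name):
        return f"Hello, {name}"

provide("greeting", GreetingService())

# A consumer elsewhere in the same process:
print(lookup("greeting").greet("SOA"))
```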

Testing Openflow/SDN Controller Application

OpenFlow/SDN networks give a remote controller the ability to manage the behaviour of network devices, i.e. their configurations: it can forward instruction sets to dynamically change the network configuration. But there is always some room for bugs and failures in your SDN controller application. What I am getting at is that I had to painstakingly dig through logs to find the one or two inputs that led my controller software to break. What are the best testing practices for controller code, e.g. traffic simulation, stress testing, etc.?
Veryx PktBlaster is excellent for this, with the following features:
1. Simulating a mix of OpenFlow switch versions
2. Throughput and latency measurement for both fixed and varying load
and plenty more features.
Reference: http://sdn.veryxtech.com
There is a company called Ixia that provides Ethernet testing software and hardware. They have a comprehensive portfolio for testing OpenFlow/SDN, and they also chair the OpenFlow industry body.
http://www.ixiacom.com/solutions/sdn-openflow/index.php
Why not use Mininet for testing your controller/application? You can direct Mininet not to use its own controller and instead use the controller/application running on localhost.
You can use remote controllers with mininet.
For example,
Start a VM and start the Floodlight controller.
After Floodlight is up and running, you can go to your browser and type in a URL as follows: http://192.168.110.2:8080/ui/index.html (replace 192.168.110.2 with the proper inet address).
Start another VM and start Mininet with the following topology:
sudo mn --topo linear,2 --controller=remote,ip=ip_floodlight_controller,port=6633
So now you can emulate any topology in Mininet and test your controller for bugs and failures.
In summary, you can use any other controller with Mininet, emulate any topology, and test your controller for bugs and failures in different scenarios.
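The same setup can also be scripted with Mininet's Python API, which is handy when you want repeatable test scenarios rather than interactive sessions. A minimal sketch, assuming the controller is reachable at the address used above (replace the IP and port with your own):

```python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import LinearTopo

# Linear topology with 2 switches, handed to an external (remote) controller.
net = Mininet(topo=LinearTopo(2),
              controller=lambda name: RemoteController(name,
                                                        ip='192.168.110.2',
                                                        port=6633))
net.start()
net.pingAll()   # quick sanity check that the controller installs working flows
net.stop()
```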
Try the link below, which contains a tutorial named OpenFlow Tutorial.
http://archive.openflow.org/wk/index.php/OpenFlow_Tutorial
It is one of the best tutorials for SDN implementation.
It explains how to set up Mininet and Wireshark (for monitoring), which controller APIs are available, how to implement a controller, and so on.

Is there a domain-specific enterprise service bus?

I've been thinking about this idea and wanted to know if it's been implemented commercially. Just like there are (external) domain-specific programming languages (where instead of the int's and string's and classes you have business-specific entities and functions that are the primitive types in the language syntax/semantics), is there such a thing as a domain-specific enterprise service bus where instead of routing, orchestrating, and integrating different systems through standard protocols (SOAP/HTTP, JMS, JDBC...etc), you're actually working at more abstract layer of integrating commercial systems (in a specific industry) via their communication protocols? I'm wondering if this pattern has been used as a product for integrating different systems (of different domain standards) within a specific industry (e.g. healthcare, automotive).
For example, in healthcare: you have a central bus that commercial healthcare applications plug into and use to communicate with each other, and through which they are orchestrated and monitored via protocols like HL7, HIE, CCD, etc. The activities, integrations, and workflows done through the bus are authored by business analysts (instead of IT staff), for example health quality officers at a hospital, clinical analysts, physicians, etc.
Yes, there are many such customized ESBs, e.g. BridgeLink by ISGN is a product for the real-estate mortgage domain.
JBoss ESB allows for the customization of transports. There is also the Red Hat-supported version in SOA-P.
IBM has had that for years (and so have others like Microsoft and Oracle). It's called the IBM WebSphere Transformation Extender product: http://www-01.ibm.com/support/docview.wss?uid=swg27008337. They have it for several industries.
In healthcare, this type of integration middleware is called an interface engine.
This is because this type of product is traditionally used by health IT vendors to expose standards-oriented interfaces such as HL7 messaging interfaces.
Consider an EHR vendor who does not have an HL7 expert to implement the interfaces but still wants to integrate with other systems using HL7 or IHE profiles. With an interface engine, along with the expertise provided as a service around it, the vendor can easily convert their database or SOAP interfaces into standard HL7 interfaces.
The market has several players, such as Corepoint, Ensemble, Mirth, etc.
However, these tools are quite focused on technical-level issues, including connecting endpoints, transforming data formats, and routing messages between interfaces, as you would expect from an ESB. I don't think they are meant to be used by business analysts.
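To make that "technical level" concrete, here is a toy sketch of the kind of routing such an interface engine (or a domain-specific bus) performs: it looks at the message type in the MSH segment of an HL7 v2 message and picks a destination. The endpoints are hypothetical, and the parsing is plain string splitting, not a real HL7 parser or any product's API.

```python
# Toy interface-engine routing: inspect the HL7 v2 message type (MSH-9)
# and choose a destination endpoint. Not a real HL7 parser.
ROUTES = {
    "ADT": "http://emr.example.com/adt",      # admit/discharge/transfer events
    "ORU": "http://lab.example.com/results",  # observation results
}

def route(hl7_message: str) -> str:
    msh_fields = hl7_message.splitlines()[0].split("|")
    message_type = msh_fields[8].split("^")[0]   # e.g. "ADT" from "ADT^A01"
    return ROUTES.get(message_type, "http://default.example.com/inbound")

sample = "MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|202401011200||ADT^A01|MSG00001|P|2.5"
print(route(sample))   # -> http://emr.example.com/adt
```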
