I'm researching SDN and NFV.
In the description of NFV on Wikipedia, it says: "Network Functions Virtualization (NFV) is a network architecture concept that proposes using IT virtualization related technologies, to virtualize entire classes of network node functions into building blocks that may be connected, or chained, together to create communication services." The first thing to consider is that it should reduce the cost of facilities.
So in a real-life implementation, for example, how can we virtualize a network node like a router?
NFV was created so that networks can be extended dynamically (virtualize the router) rather than statically (buy a new router); that is, we implement the router functions on a server or a computer instead of buying a new router and then adapting it to the current network. In this case I don't see the difference, because buying a server to implement a virtualized router is not cheaper than buying a new router.
Can anyone explain this to me, or am I misunderstanding the NFV concept?
Thanks.
SDN is just that, software-defined networking. In a hybrid SDN model, SDN decouples the control logic from the physical box, rendering the physical box a simple "forwarding" box. The logic rests with the SDN controller, where developers create APIs that manage these forwarding boxes (we call them network elements now) with flow tables that get pushed to them. The benefit here is that the devices can now be configured and provisioned through this controller, as opposed to having to log into each and every box.
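To make the "logic in the controller, flow tables in the box" idea concrete, here is a tiny, purely illustrative Python sketch; the class names and fields are invented for the example, and a real controller would push entries with a protocol such as OpenFlow rather than in-process calls:

```python
# Toy model only: a "controller" pushing match/action flow entries to dumb
# forwarding elements. Not a real SDN protocol.
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match_dst: str      # destination prefix to match, e.g. "10.0.1.0/24"
    out_port: int       # action: forward out of this port
    priority: int = 0

@dataclass
class NetworkElement:
    name: str
    flow_table: list = field(default_factory=list)

    def install(self, entry: FlowEntry):
        # The element holds no routing logic; it just stores what the
        # controller pushes and forwards packets accordingly.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: -e.priority)

class Controller:
    """All the logic lives here, not in the boxes."""
    def __init__(self, elements):
        self.elements = elements

    def provision(self, match_dst, out_port, priority=10):
        # One call provisions every element, instead of logging into each box.
        for element in self.elements:
            element.install(FlowEntry(match_dst, out_port, priority))

if __name__ == "__main__":
    switches = [NetworkElement("sw1"), NetworkElement("sw2")]
    ctrl = Controller(switches)
    ctrl.provision("10.0.1.0/24", out_port=2)
    print(switches[0].flow_table)
```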
Then you have the cloud. A small office can literally get away with porting all of their apps and services into the cloud, doing away with most of their physical boxes. Of course you still need a LAN in the office and a way to get out to the Internet and eventually the cloud. You can even ask the cloud provider to provision load-balancing on specific applications, firewalls and content delivery services. So basically your office applications and most of the supporting LAN and databases can be safely ported to cloud providers.
When you said "...because buying a server to implement a virtualized router is not cheaper than buying a new router", it depends: as it's a virtualized resource, you can use that new server to run your router and other resources from your infrastructure, if the machine has more hardware capacity than a single router needs.
In fact, you might not even need to buy a new machine. If your resources are in a cloud like AWS (or your own private cloud), then when you need more routers you can flexibly allocate more hardware and spawn a new router instance (scale out), and whenever router demand is lower than what you have allocated, you can reduce the number of routers (scale in) and stop paying for infrastructure you are not using at the moment.
Consider that a really high-level explanation. If you want the details of how a Virtual Network Function scales in and out in an NFV implementation, I recommend reading the ETSI specifications on how it should work: http://www.etsi.org/standards-search#page=1&search=&title=1&etsiNumber=1&content=0&version=1&onApproval=1&published=1&historical=0&startDate=1988-01-15&endDate=2017-04-13&harmonized=0&keyword=&TB=789,,832,,831,,795,,796,,800,,798,,799,,797,,828&stdType=&frequency=&mandate=&collection=&sort=3
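As a very rough illustration of that scale-out / scale-in idea, here is a minimal Python sketch; the thresholds are arbitrary and spawn_router()/stop_router() merely stand in for whatever your cloud or NFV orchestrator actually exposes:

```python
# Purely illustrative sketch of a scale-out / scale-in decision loop.
# Thresholds are made up; the spawn/stop functions stand in for calls to
# a real cloud API or NFV orchestrator (MANO).
running_routers = ["vRouter-1"]

SCALE_OUT_THRESHOLD = 0.80   # average load above which we add an instance
SCALE_IN_THRESHOLD = 0.30    # average load below which we remove one
MIN_INSTANCES = 1

def spawn_router():
    name = f"vRouter-{len(running_routers) + 1}"
    running_routers.append(name)          # here: call the orchestrator instead
    print(f"scale out: started {name}")

def stop_router():
    name = running_routers.pop()          # here: call the orchestrator instead
    print(f"scale in: stopped {name}, no longer paying for it")

def autoscale(average_load):
    """average_load: 0.0-1.0 utilisation across the running router instances."""
    if average_load > SCALE_OUT_THRESHOLD:
        spawn_router()
    elif average_load < SCALE_IN_THRESHOLD and len(running_routers) > MIN_INSTANCES:
        stop_router()

if __name__ == "__main__":
    for load in (0.5, 0.9, 0.95, 0.4, 0.1):
        autoscale(load)
```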
Let me continue with your example of the router. Traditionally, these routers are vendor-specific; the major sellers are companies like Cisco, Juniper, etc. They are implemented on proprietary hardware, so if you want a new router you have to buy it from them. Further, when they run into problems, you need a dedicated engineer to repair them. The telecom operator therefore has to carry high Capital Expenditure (CAPEX) and Operational Expenditure (OPEX).
With NFV, the entire router function is implemented as software and deployed on general-purpose (GPP) servers or in the cloud. These GPP servers are very cheap compared to proprietary hardware. Thanks to cloud computing, even small companies can afford servers on the Amazon and Google clouds. Because of this cheap availability, CAPEX is now relatively lower. Further, you don't need a dedicated engineer when the hardware runs into a problem; the same engineer who handles GPP server maintenance is enough. This way OPEX is reduced.
Now imagine that, like routers, there are many other network elements in a telecom network. If every network element requires a dedicated engineer, think how much money a telco operator would be spending. Apart from this, because of the software implementation, when traffic is much higher than expected you can just roll out a new router (a software network function) on a GPP server or in the cloud instead of buying a whole new router, which is very costly. As you already know, in the cloud you pay based on usage.
There are many more use cases; to learn more, read the research literature.
I was wondering if DDS could be used over the internet, and if it would be a good choice for online gaming.
I have seen on the RTI website that they support WAN, but does that mean I can subscribe to a topic from another participant that is on the other side of the world?
What would happen to the QoS guarantees if this was the case?
Thanks.
Disclaimer: I work on OpenDDS full time, but have no experience in networked games programming.
An internet-enabled DDS could be used for connecting game clients. Whether or not it's a good idea is something I can't answer without more specific information, but the QoS part is a good question. In OpenDDS, as far as I'm aware, we try to adhere to the QoS defined by the user as if it were a normal RTPS connection. This means using it over the Internet might require some tuning of the QoS, depending on which QoS policies you want to use. For example, if the deadline QoS was being used on a local network, the time period might have to be relaxed given the greater latency of the Internet.
For OpenDDS, internet-enabled RTPS is described in Chapter 15 of the OpenDDS Developer's Guide: http://download.objectcomputing.com/OpenDDS/OpenDDS-latest.pdf. In addition to using ICE to overcome NATs, we also have a feature called the RTPS Relay to enable connections when a client can't use ICE.
I'm not familiar with the specific capabilities RTI Connext has here, but as far as I'm aware they are similar, in that they use ICE as well. It should also be noted that internet-enabled RTPS is not standardized, so Connext and OpenDDS wouldn't be able to talk to each other over a WAN.
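As a rough, non-OpenDDS illustration of why the deadline period needs relaxing, here is a back-of-the-envelope sketch; the publish rates and jitter figures are made-up examples, and a real application would set the actual DEADLINE QoS policy through the DDS API rather than with a helper like this:

```python
# Back-of-the-envelope sketch (not the OpenDDS API): picking a DEADLINE QoS
# period that still holds once Internet latency and jitter are added.
def suggested_deadline(publish_period_s, one_way_jitter_s, margin=1.5):
    """The reader's deadline must exceed the publish period plus the worst
    jitter it may see, or samples will be flagged as missed even though the
    writer is healthy."""
    return (publish_period_s + one_way_jitter_s) * margin

# On a LAN, jitter is tiny, so the deadline can sit close to the publish rate.
print(suggested_deadline(0.100, 0.002))   # ~0.153 s for a 10 Hz stream
# Over the Internet the same 10 Hz stream needs a far more relaxed deadline.
print(suggested_deadline(0.100, 0.080))   # ~0.270 s
```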
OpenDDS would only be appropriate for games in very constrained environments because of the bandwidth requirements. If all users are on the same LAN then the UDP multicast approach that RTPS uses would be effective for a peer-to-peer game architecture. However, if remote users are added, then the requirement of every peer having to send every update directly to every other peer will very quickly explode the bandwidth requirements.
Given that the RTPS relay is already another application that needs to be run, a game server that collates updates from peers and sends world state would be far more effective for cases where users are not all on a single LAN segment.
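For a rough sense of the numbers behind that bandwidth explosion, here is a small back-of-the-envelope sketch; the 20 updates per second and the player counts are made-up example figures:

```python
# Rough arithmetic for peer-to-peer vs. client-server update traffic.
def peer_to_peer_msgs(players, updates_per_s):
    # every peer sends its update directly to every other peer: O(n^2)
    return players * (players - 1) * updates_per_s

def client_server_msgs(players, updates_per_s):
    # every peer sends to the server, the server sends collated state back: O(n)
    return 2 * players * updates_per_s

for n in (4, 16, 64):
    print(n, "players:",
          peer_to_peer_msgs(n, 20), "msgs/s peer-to-peer vs",
          client_server_msgs(n, 20), "msgs/s with a game server")
```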
I'm starting to design the networks (VPC, subnetworks and such) as part of moving a rather complex organization's on-premises infrastructure to the cloud.
The chosen provider is GCP, and I have read up and taken the courses for the associate engineer certification. However, the courses I've followed don't go into the technical details of doing something like this; they just present the possible options.
My background is as a senior backend, then full-stack, developer, so unfortunately I lack some of the very interesting and useful knowledge of a sysadmin.
Our case is as follows:
On-premises VMs on several racks, reachable only inside a VPN
Several projects on GCP
Two of them need to connect to the on-premises VPN, but there could be more
Some projects see each other's resources (VMs, SQL, etc.) using VPC Peering
Gradually we will abandon the on-premises infrastructure, unless we find some legacy application that is really messed up
Now, I could just create a new VPN connection for every project from Hybrid Connectivity -> VPN, but I'd rather create a project dedicated to hosting the VPN gateway and allow other projects to use that resource.
Is this a possible configuration? Is it a valid design? From what I've explored of VPN creation, it seems I'll have to create a VM that exposes an IP acting as a gateway; if that's the case, I was thinking of using VPC Peering to allow other projects to reach the on-premises VPN. No idea if I'm talking gibberish here. I'm still waiting for some information (IKE shared key, etc.) before attempting anything, so I'm rather lost at this point.
You have to take several aspects into consideration:
Cost: if you set up a VPN in each project, and you have to double your connectivity for HA, it will be expensive. If you have only one gateway project, it's cheaper.
Cheaper implies a trade-off. A VPN tunnel has limited bandwidth: 3 Gbps (Cloud Interconnect too, but higher and more expensive). If all your projects use the same VPN through mutualization, watch out for this bottleneck.
If you want to mutualize, at least for DEV/UAT projects, I recommend using VPC Peering: one VPN project, and the others connected to it with VPC Peering. Take care with the IP ranges you assign for peering (see the sketch after this list). If you are interested, I wrote an article on this.
It's also possible to use a Shared VPC, which is great! But there is less compatibility with several products (for example, the Serverless VPC Connector for Cloud Functions and App Engine isn't yet compatible with Shared VPC).
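To illustrate the "take care with your IP ranges" point, here is a small sanity-check sketch using only the Python standard library; all the CIDR ranges below are invented examples, to be replaced with your own:

```python
# VPC Peering (and the VPN routes) will not work with overlapping subnets,
# so check your planned ranges before wiring projects together.
import ipaddress
from itertools import combinations

ranges = {
    "on-prem":     "10.0.0.0/16",
    "vpn-project": "10.1.0.0/20",
    "dev-project": "10.1.16.0/20",
    "uat-project": "10.1.16.0/22",   # deliberately overlaps dev-project
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in ranges.items()}
for (a, net_a), (b, net_b) in combinations(nets.items(), 2):
    if net_a.overlaps(net_b):
        print(f"conflict: {a} ({net_a}) overlaps {b} ({net_b})")
```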
I wonder if there are any situations where one would prefer a software load balancer over a hardware load balancer or vice versa. I've played around with F5, A10, Nginx, and HAProxy briefly, and the only marginal difference I was able to notice was the price, apart from slightly better API documentation, etc. So my question is:
Are there any particular use cases where one would prefer software load balancers over hardware load balancers or vice versa?
Feel free to share your experience of where you preferred one over the other and the rationale you used to make that decision.
PS: I have read 5 reasons to prefer S/W load balancers over H/W load balancers and didn't find the explanations there very compelling.
EDIT: Regarding my use case, I'll need a lot of load balancers to secure and load-balance tons of apps. The design decision should therefore cope with an exponentially increasing number of apps behind it (it should be easily scalable). I'm not looking at a 10- or 50-app load balancer, but at a solution with tens of thousands of apps behind load balancers. It would also be great if you could specifically point out features where H/W outweighs S/W or vice versa. For example, with a hardware load balancer's FPGA services one can do SSL offloading and achieve a performance gain of some factor X, given that one has more than Y apps behind it, etc.
There isn't going to be a single answer to this question, as it will always depend on your application requirements and your compliance obligations. Companies like F5, A10, and Citrix offer services that go well past basic load balancing, with features plain LB just cannot touch.
If you're JUST looking for LB services and maybe some SSL bridging or offloading, here are some benefits:
Hardware: Offers hardware-accelerated SSL offloading and bulk encryption through the use of FPGA services. This also depends on which cipher suites you plan to use. With hardware, you're usually placing the appliances in front of hundreds of applications, or you're using them because they may be certified firewalls and you have additional compliance requirements.
Software: If you just need basic LB, HAProxy/Nginx are an easy choice for basic LB services and even some SSL services. Support is mixed if you're not paying for it, since you have to rely only on community examples.
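To make "basic LB" concrete, here is a minimal, purely illustrative sketch of round-robin backend selection with simple health marking; real HAProxy/Nginx setups layer health checks, timeouts, TLS and much more on top of this idea:

```python
# Toy round-robin backend pool: the core idea behind "basic LB".
from typing import Optional

class RoundRobinPool:
    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(backends)
        self._next = 0

    def mark_down(self, backend):
        self.healthy.discard(backend)

    def mark_up(self, backend):
        self.healthy.add(backend)

    def pick(self) -> Optional[str]:
        # walk the ring at most once, skipping unhealthy backends
        for _ in range(len(self.backends)):
            backend = self.backends[self._next]
            self._next = (self._next + 1) % len(self.backends)
            if backend in self.healthy:
                return backend
        return None  # nothing healthy to send traffic to

if __name__ == "__main__":
    pool = RoundRobinPool(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
    pool.mark_down("10.0.0.2:8080")
    print([pool.pick() for _ in range(4)])   # alternates between the healthy two
```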
However, if you have mixed environments and maybe already have 1 vendor in play, that can help decide. All of the hardware vendors offer virtual appliances and have automation tools to help with elastic environments so really it ends up being "Will you only ever need LB services or will you end up having to tack on more later"?
The F5/A10/Citrix ADCs in the cloud still offer more features in a single platform than having to spin up segregated services (think firewall / load balancing / web firewall / global load balancing / fraud prevention / analytics / access management).
Updated 6/21/2017:
Hardware: People are buying hardware solutions not to proxy 1 or 2 applications but 100 or 200, or even 1,000 or 2,000 applications in their data centers (on site or colocated). For these cases it's about performance and services beyond LB. That includes security needs and app protection that are not baked into HAProxy and Nginx.
ADC vendors' software solutions: You have a third option, because F5/A10/Citrix also sell virtual appliances, allowing you to run the same software in Azure/AWS/Google or on VMware... you get the idea. This is unique because you can have hardware in your colocation facility and virtual appliances in your cloud solution from the same vendor, and, as a bonus for your admins, the same support escalation point.
HAProxy/Nginx software: This goes back to the original statement: if you're talking about an LB-only solution and price is a concern, this is your way to go. The feature sets are more limited than the ADC/security solutions above, but they do LB just fine. It can become a bit cumbersome managing hundreds of apps, so you'll have to rely on your dev team a bit more to make sure they're isolating environments OR are REALLY good at automation.
The decision comes down to: will you only need load balancers? If yes, then HAProxy/Nginx. If you need more features to load balance AND protect your app, then ADC software solutions are the way to go.
If you need reliable performance and cannot justify dedicating one VM per host to achieve it, then hardware ADCs are the way to go.
For transparency, I work on the DevCentral team at F5, so I would love to say go hardware, but if you don't need it, don't do it. It's going to come down to your application requirements.
The follow-up question is: what is your application, and what are your requirements for a load balancer?
Generally, hardware LBs have fixed performance and hardware acceleration to assist with SSL offload. Software or virtual performance can fluctuate under increased load, and you can then run into performance bugs, but it's easier to deploy and scale.
Another question to look into is: will you need to modify or redirect traffic based on content, for example rewriting or filtering traffic? If yes, then you may need a full-proxy LB.
I was assigned the re-architecture of a legacy (medical) product that controls several external devices. In the current architecture, we have several such stations in each customer's network, where each station processes its own data, and they all share some of that data via a central server (which talks to the DB and BLOB storage).
I'm planning the new architecture such that it will allow more scenarios, such as monitoring the stations through a web interface, and allowing data processing to be scalable by adding additional servers.
This led me to choose NServiceBus as the messaging and communication infrastructure, and I pretty much have a clear view of the new architecture.
However, another factor was recently added to the equation by my manager. He requires that the machine that communicates with the devices (hardware) not be subject to the customer's IT policies. The reason behind this, as I understand it, is that we don't want the customer's IT to control OS updates, security, permissions and other settings, because we want full control over that machine in order to work properly with our hardware.
My manager thus added a requirement that this machine will be disconnected from the customer's LAN.
If I still want to deploy NServiceBus on that separated machine (because I want to pub/sub async messages to other machines - some on the customer's LAN and some not), will it require some special deployment? Will it require an NServiceBus gateway?
EDIT: I removed the other (1st) question, as it wasn't relevant to the scope of StackOverflow.
Regarding question 2: yes, it would require the use of a "Gateway"; however, the current NServiceBus Gateway implementation does not support pub/sub, so you would have to look at alternatives.
I want to develop a simple serverless LAN chat program just for fun. How can I do this? What type of architecture should I use?
Last year I worked on a TCP/UDP client/server application project. It was simple (the server listens on a certain port/socket and the client connects to the server's port, etc.). But I have no idea how to develop a "serverless" LAN chat program. How can I do this? UDP, TCP, multicast, broadcast? Or should the program behave as both server and client?
The simplest way would be to use UDP and simply broadcast your messages all over the network.
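A minimal sketch of that broadcast approach, assuming an arbitrary port of my own choosing (50000); every node sends to the subnet broadcast address and listens on the same port:

```python
# Minimal UDP-broadcast chat: no server, every node listens and broadcasts.
import socket
import threading

PORT = 50000

def listen():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    while True:
        data, addr = sock.recvfrom(4096)
        print(f"{addr[0]}: {data.decode(errors='replace')}")

def main():
    threading.Thread(target=listen, daemon=True).start()
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    for line in iter(input, "/quit"):          # type /quit to exit
        out.sendto(line.encode(), ("255.255.255.255", PORT))

if __name__ == "__main__":
    main()
```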
A slightly more advanced version would use the broadcast only to discover other nodes in the network (see the sketch after this list):
Every node maintains a list of known peers.
Messages are sent with TCP to all known peers.
When a node starts up, it sends out a UDP broadcast to discover other nodes.
When a node receives a discovery broadcast, it sends "itself" to the source of the broadcast, in order to make itself known. The receiving node adds the broadcaster to its own list of known peers.
When a node drops out of the network, it sends another broadcast in order to inform the remaining nodes that they should remove the dropped client from their list.
You would also have to consider handling the dropping out of nodes without them informing the rest of the network.
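Here is a rough sketch of that discovery scheme; the port number and message format are invented for the example, and the chat messages themselves would then go over TCP to each known peer:

```python
# Discovery-only sketch: UDP broadcast to find peers, TCP for the chat itself.
import json
import socket
import threading

DISCOVERY_PORT = 50001
peers = set()            # addresses of known peers
peers_lock = threading.Lock()

def _broadcast(payload):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(json.dumps(payload).encode(), ("255.255.255.255", DISCOVERY_PORT))
    sock.close()

def announce():
    """On startup: ask who is out there."""
    _broadcast({"type": "HELLO"})

def leave():
    """On shutdown: tell the others to drop us."""
    _broadcast({"type": "BYE"})

def discovery_listener():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DISCOVERY_PORT))
    while True:
        data, (host, _port) = sock.recvfrom(4096)
        msg = json.loads(data)
        # Note: a real implementation would ignore its own address here.
        with peers_lock:
            if msg["type"] == "HELLO":
                peers.add(host)
                # answer directly so the newcomer learns about us too
                reply = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                reply.sendto(json.dumps({"type": "I_AM"}).encode(),
                             (host, DISCOVERY_PORT))
                reply.close()
            elif msg["type"] == "I_AM":
                peers.add(host)
            elif msg["type"] == "BYE":
                peers.discard(host)

if __name__ == "__main__":
    threading.Thread(target=discovery_listener, daemon=True).start()
    announce()
    # ...chat messages would be sent over TCP to every address in `peers`...
```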
The Spread toolkit may be a bit of overkill for what you want, but it's an interesting starting point.
From the blurb:
Spread is an open source toolkit that provides a high performance messaging service that is resilient to faults across local and wide area networks. Spread functions as a unified message bus for distributed applications, and provides highly tuned application-level multicast, group communication, and point to point support. Spread services range from reliable messaging to fully ordered messages with delivery guarantees.
Spread can be used in many distributed applications that require high reliability, high performance, and robust communication among various subsets of members. The toolkit is designed to encapsulate the challenging aspects of asynchronous networks and enable the construction of reliable and scalable distributed applications.
Spread consists of a library that user applications are linked with, a binary daemon which runs on each computer that is part of the processor group, and various utility and demonstration programs.
Some of the services and benefits provided by Spread:
Reliable and scalable messaging and group communication.
A very powerful but simple API simplifies the construction of distributed architectures.
Easy to use, deploy and maintain.
Highly scalable from one local area network to complex wide area networks.
Supports thousands of groups with different sets of members.
Enables message reliability in the presence of machine failures, process crashes and recoveries, and network partitions and merges.
Provides a range of reliability, ordering and stability guarantees for messages.
Emphasis on robustness and high performance.
Completely distributed algorithms with no central point of failure.
Apple's iChat is an example of the very product you are envisioning. It uses Bonjour (Apple's zero-conf networking protocol) to identify peers on a LAN. You can then chat or audio/video chat with them.
I'm not entirely sure how Bonjour works inside, but I know it uses multicast. Clients "register" services on the LAN, and the Bonjour protocol allows for each host to pull up a directory of hosts for a given service (all without central management).
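For a concrete starting point, here is a hedged sketch of Bonjour/zeroconf-style discovery using the third-party python-zeroconf package (pip install zeroconf); the service type, nickname, address and port are all made-up examples, and the exact API can differ slightly between package versions:

```python
# Advertise ourselves as a "_mychat._tcp" service and browse for other peers
# doing the same, roughly how iChat/Bonjour presence works on a LAN.
import socket
import time
from zeroconf import Zeroconf, ServiceInfo, ServiceBrowser

SERVICE_TYPE = "_mychat._tcp.local."

class PeerListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print("peer appeared:", name, info.parsed_addresses(), info.port)

    def remove_service(self, zc, type_, name):
        print("peer left:", name)

    def update_service(self, zc, type_, name):
        pass

if __name__ == "__main__":
    zc = Zeroconf()
    my_info = ServiceInfo(
        SERVICE_TYPE,
        "alice." + SERVICE_TYPE,
        addresses=[socket.inet_aton("192.168.1.10")],   # replace with your LAN IP
        port=9999,                                      # where our chat TCP server listens
        properties={"nick": "alice"},
    )
    zc.register_service(my_info)                        # register our presence
    ServiceBrowser(zc, SERVICE_TYPE, PeerListener())    # browse for everyone else
    try:
        time.sleep(60)
    finally:
        zc.unregister_service(my_info)
        zc.close()
```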