While trying to understand Kubernetes networking, one point has confused me: why doesn't Kubernetes handle pod-to-pod communication itself, built in?
As per the docs - https://kubernetes.io/docs/concepts/cluster-administration/networking/
There are 4 distinct networking problems to address:
1. Highly-coupled container-to-container communications: this is solved by pods and localhost communications.
2. Pod-to-Pod communications: this is the primary focus of this document.
3. Pod-to-Service communications: this is covered by services.
4. External-to-Service communications: this is covered by services.
When Kubernetes can handle all the other networking problems mentioned above, why does pod-to-pod communication need to be handled by other plugins such as ACI, Cilium, Flannel, Jaguar, and so on?
I would like to know: is there a specific reason for this architecture?
The short answer is that networks are complex and highly customized. It's not easy to provide an efficient built-in solution that works everywhere. All of the cloud provider networks are different from bare-metal networks. Rather than pick a bad default, we require that the end user, who really is the only person who could possibly comprehend their network, make that decision.
Doing a built-in VXLAN or something might be possible but would be far from ideal for many users, and defaults tend to stick...
Agree with Tim above. Kubernetes in general is mostly an abstraction and orchestration layer over compute, storage, and networking, so that developers don't have to be aware of the implementation. The implementation itself is tied to the underlying infrastructure, and Kubernetes just defines the interfaces for it (CRI for containers/compute, CSI for storage, and CNI for networking).
By defining only the interface, the implementations can evolve independently without breaking the contract. For example, in the future it might become possible to offload pod-to-pod networking to the NIC, and expecting Kubernetes itself to evolve toward such a technology change would be a big ask. By not being intimately tied to any implementation, it allows technology in each layer to advance at its own pace.
Related
I'm trying to set up a general architecture for a system that I'm moving to Kubernetes (self-hosted, probably on VSphere).
I'm not very well versed in networking and I have the following problem that I cannot seem to be able to conceptually solve:
I have many microservices which were split out of a monolith, but the monolith is still significant. All of it is moving to K8s. It's a clustered application and does a lot of all-to-all networking under high load, which I would like to separate from all the other services in the Kubernetes cluster.
Before moving to K8s we provided a way to specify a network device that was used only for the cluster communication; as such it could be strictly separated from other traffic, and could even use separate networking hardware for clustering.
So this is where I would request your input: is it possible to have completely separate networking for this application-level cluster inside the Kubernetes cluster? The ideal solution would allow me to continue using our existing logic, i.e. to have a separate network (and network adapter) for the chatty bits but it's not a hard requirement to keep it that way. I have looked at Calico, Flannel, and Istio, but haven't been able to come up with a sound concept.
Use Kubernetes NetworkPolicies. By applying these policies, you can allow/deny traffic to pods based on label selectors. You can try Weave Net or Calico; both are good and support NetworkPolicies.
It is good to use the Calico network plugin, because Flannel doesn't support network policies. You can then create NetworkPolicy resources to allow or deny traffic.
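For illustration, here is a minimal NetworkPolicy sketch that only allows ingress to the chatty cluster pods from peers carrying the same label; the namespace, policy name, and label are hypothetical placeholders for your setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cluster-traffic-only   # hypothetical name
  namespace: my-app            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      role: cluster-member     # hypothetical label on the chatty pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: cluster-member  # only peers with the same label may connect
```

Note that this isolates the traffic logically (who may talk to whom), not physically; it does not give you a separate network adapter for the cluster traffic.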
On OpenShift, you can have an isolated network per project (Kubernetes namespace). See https://docs.openshift.com/container-platform/3.5/admin_guide/managing_networking.html#isolating-project-networks
I'm working on an application running on JBoss EAP 6.4 in domain mode. I need to handle multiple (up to 1000) parallel TCP connections from the server to different hardware devices.
The connections have to be opened from the server side and will be kept open for up to 90 seconds. All connections use the same port and protocol, but the destination IP addresses differ.
I wonder if a JCA resource adapter is suitable for this use case. Should I create a separate activation specification for every single device, or use something other than a JCA resource adapter?
The question is a bit old, but JCA is a not-that-popular technology, so some knowledge about it may still be useful, even though you might have implemented everything already.
Firstly, JCA is a spec for writing your own connector; it is not a connector implementation itself. If you need your application to talk to external TCP endpoints, it's up to you to implement a connector.
Whether JCA is suitable for your particular needs depends solely on the nature of your "different hardware devices". Resource adapters have been designed to be used on a per-service basis with possible connection pooling, i.e., all the connections a particular adapter fires should have a quite similar nature and be interchangeable. However, JCA adapters also have sensible advantages outside of this scope, especially in an EJB environment. EJB imposes quite harsh restrictions upon your code: no locking, no thread spawning, etc. It can be quite difficult to implement TCP I/O without those things. Here JCA's threading freedom comes in handy (MDBs are capable of providing an EJB-friendly scope on their activation, regardless of the underlying thread source).
In conclusion: you should consider using JCA for TCP networking if you're working in an EJB environment (or are going to in the foreseeable future). If you're using a Spring-powered environment on top of a Java EE stack, it may be worth skipping such a complicated technology and sticking to plain Netty networking.
If you choose to proceed with JCA, consider inspecting the JCA SocketConnector I am currently maintaining. It's built on the Netty stack, and it could be a good starting point for you. You can also use it as a source of ideas for building your own implementation.
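To make the plain-Netty option concrete, here is a minimal sketch of opening one outbound TCP connection with Netty 4; the address, port, and handler body are placeholders for your device protocol:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class DeviceClient {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup group = new NioEventLoopGroup(); // one group can serve all 1000 connections
        try {
            Bootstrap b = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                            @Override
                            public void channelActive(ChannelHandlerContext ctx) {
                                // connection established; start your device handshake here
                            }
                        });
                    }
                });
            // same port and protocol, different destination IPs: one connect() per device
            ChannelFuture f = b.connect("192.0.2.10", 9000).sync(); // placeholder address
            f.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```

Since the same Bootstrap can be reused for every connect(), fanning out to up to 1000 devices is just a loop over their addresses.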
I'm researching SDN and NFV.
The Wikipedia article on NFV says: "Network Functions Virtualization (NFV) is a network architecture concept that proposes using IT virtualization related technologies, to virtualize entire classes of network node functions into building blocks that may be connected, or chained, together to create communication services." So the first thing to consider is that it should reduce the cost of facilities.
So in a real-life implementation, for example, how can we virtualize a network node like a router?
NFV was created so that networks can be extended dynamically (virtualize the router) rather than statically (buy a new router); that is, we implement the router's functions on a server or computer instead of buying a new router and then adapting it to the current network. In this case I don't see any difference, because buying a server to run a virtualized router is not cheaper than buying a new router.
Can anyone explain this to me? Or am I misunderstanding the NFV concept?
Thanks.
SDN is just that: software-defined networking. In a hybrid SDN model, SDN decouples the logic from the physical box, rendering the physical box a simple "forwarding" box. The logic rests with the SDN controller, where developers create APIs that manage these forwarding boxes (we call them network elements now) via flow tables that get pushed to them. The benefit here is that the devices can now be configured and provisioned through this controller, as opposed to having to log into each and every box.
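As a toy illustration of that controller/forwarding split, here is a sketch in Java; all names are hypothetical, and real controllers (OpenDaylight, ONOS, etc.) expose far richer APIs than this:

```java
import java.util.ArrayList;
import java.util.List;

// A flow-table entry: match a destination, apply an action.
record FlowRule(String matchDstIp, String action) {}

// The "dumb" forwarding box: it only applies rules pushed to it.
class NetworkElement {
    final List<FlowRule> flowTable = new ArrayList<>();
    void install(FlowRule r) { flowTable.add(r); } // invoked by the controller
    String forward(String dstIp) {
        return flowTable.stream()
                .filter(r -> r.matchDstIp().equals(dstIp))
                .map(FlowRule::action)
                .findFirst()
                .orElse("drop"); // no matching rule: default-drop
    }
}

public class Controller {
    public static void main(String[] args) {
        // The logic lives here: provision every element centrally,
        // instead of logging into each box one by one.
        List<NetworkElement> elements = List.of(new NetworkElement(), new NetworkElement());
        FlowRule rule = new FlowRule("10.0.0.7", "out-port-3"); // hypothetical rule
        elements.forEach(e -> e.install(rule));
        System.out.println(elements.get(0).forward("10.0.0.7")); // prints out-port-3
    }
}
```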
Then you have the cloud. A small office can literally get away with porting all of their apps and services into the cloud, doing away with most of their physical boxes. Of course you still need a LAN in the office and a way to get out to the Internet and eventually the cloud. You can even ask the cloud provider to provision load-balancing on specific applications, firewalls and content delivery services. So basically your office applications and most of the supporting LAN and databases can be safely ported to cloud providers.
When you said "...because buying a server to implement a virtualized router is not cheaper than buying a new router", it depends: since it's a virtualized resource, you can use that new server to run your router plus other resources from your infrastructure, if the machine has more hardware capacity than a single router needs.
In fact, you might not even need to buy a new machine. If your resources are in a cloud like AWS (or your own private cloud), then when you need more routers you can flexibly allocate more hardware and spawn a new router instance (scale out); whenever your router demand is lower than what you have allocated, you can reduce the number of routers (scale in) and stop paying for infrastructure you are not using at the moment.
Consider that a really high-level explanation. If you want to know the details of how a Virtual Network Function scales in and out in an NFV implementation, I recommend reading the ETSI specifications on how it should work: http://www.etsi.org/standards-search#page=1&search=&title=1&etsiNumber=1&content=0&version=1&onApproval=1&published=1&historical=0&startDate=1988-01-15&endDate=2017-04-13&harmonized=0&keyword=&TB=789,,832,,831,,795,,796,,800,,798,,799,,797,,828&stdType=&frequency=&mandate=&collection=&sort=3
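As a toy sketch of that scale-out/scale-in decision, consider the following; the capacity figure and names are invented for illustration, and a real NFV orchestrator follows the ETSI MANO specifications rather than anything this simple:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy VNF autoscaler: one "router instance" handles up to MAX_LOAD_PER_ROUTER units of traffic.
public class RouterAutoscaler {
    static final int MAX_LOAD_PER_ROUTER = 100;   // hypothetical per-instance capacity
    static final Deque<String> routers = new ArrayDeque<>();

    static void reconcile(int currentLoad) {
        int needed = Math.max(1, (int) Math.ceil(currentLoad / (double) MAX_LOAD_PER_ROUTER));
        while (routers.size() < needed) {          // scale out: spawn a new software router
            routers.push("router-" + routers.size());
            System.out.println("scale out -> " + routers.size() + " routers");
        }
        while (routers.size() > needed) {          // scale in: release capacity you pay for
            System.out.println("scale in, removing " + routers.pop());
        }
    }

    public static void main(String[] args) {
        reconcile(250); // traffic spike: grows to 3 routers
        reconcile(80);  // quiet period: shrinks back to 1 router
    }
}
```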
Let me continue with your example of the router. Traditionally, these routers are vendor specific. For example, the major vendors are companies like Cisco, Juniper, etc. The routers are implemented on proprietary hardware, so if you want a new router you have to buy it from them. Further, when a router runs into problems, you need a dedicated engineer to repair it. The telecom operator therefore has to bear high capital expenditure (CAPEX) and operational expenditure (OPEX).
With NFV, the entire router function is implemented as software and deployed on general-purpose (GPP) servers or in the cloud. These GPP servers are relatively cheap compared to proprietary hardware. Thanks to cloud computing, even small companies can afford servers on the Amazon and Google clouds. Because of this cheap availability, CAPEX is now relatively lower. Further, you don't need a dedicated engineer when the hardware runs into a problem; the same engineer who maintains the GPP servers is enough. This way OPEX is reduced.
Now imagine: like routers, there are many networking elements in telecommunications. If every networking element required a dedicated engineer, imagine how much money a telco operator would be spending. Apart from this, thanks to the software implementation, when you have much higher traffic than expected you can just roll out a new router (a software network function) on a GPP server or in the cloud, instead of buying a whole new router, which is very costly. As you already know, in the cloud you pay based on usage.
There are many more uses. To learn more, you'll need to read the research papers.
I was thinking about two options:
- Tor
- Mix
But I don't know how to include any of this in my application.
I'm building a protocol for transferring small files (1 to 5 MB) and I want to hide the sender's IP. I will be building a native Windows/Linux/Mac application.
I'm planning to have many computers inside the system, so I was thinking that using a mix network and routing through those machines could be a nice idea.
I found the gMix framework and tried to develop on top of it, but it looks very complicated. I'm thinking about developing my own mix implementation for this project.
Can someone point me in the right direction please?
This is the way to go.
"Tarzan is a peer-to-peer anonymous IP network overlay. Because it provides IP service, Tarzan is general-purpose and transparent to applications. Organized as a decentralized peer-to-peer overlay, Tarzan is fault-tolerant, highly scalable, and easy to manage.Tarzan achieves its anonymity with layered encryption and multi-hop routing, much like a Chaumian mix. A message initiator chooses a path of peers pseudo-randomly through a restricted topology in a way that adversaries cannot easily influence. Cover traffic prevents a global observer from using traffic analysis to identify an initiator. Protocols toward unbiased peer-selection offer new directions for distributing trust among untrusted entities.Tarzan provides anonymity to either clients or servers, without requiring that both participate. In both cases, Tarzan uses a network address translator (NAT) to bridge between Tarzan hosts and oblivious Internet hosts.Measurements show that Tarzan imposes minimal overhead over a corresponding non-anonymous overlay route."
You can find the package in this link.
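For intuition, the layered ("onion") encryption at the heart of a Chaumian mix can be sketched as below. This is a deliberately minimal illustration using AES-GCM with shared per-hop keys; a real mix uses per-hop public-key cryptography, message padding, and batching:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.List;

// Minimal illustration of mix-style layered ("onion") encryption.
// The sender wraps the payload once per hop; each relay peels exactly one layer.
public class OnionDemo {
    static byte[] crypt(int mode, SecretKey key, byte[] iv, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(mode, key, new GCMParameterSpec(128, iv));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        List<SecretKey> hopKeys = List.of(kg.generateKey(), kg.generateKey(), kg.generateKey());

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv); // one IV reused only to keep the demo short; use a fresh IV per layer in practice

        // Sender: encrypt for the last hop first, then wrap outward toward the first hop.
        byte[] onion = "small file chunk".getBytes();
        for (int i = hopKeys.size() - 1; i >= 0; i--)
            onion = crypt(Cipher.ENCRYPT_MODE, hopKeys.get(i), iv, onion);

        // Relays: each peels one layer, so no single relay sees both sender and payload.
        for (SecretKey k : hopKeys)
            onion = crypt(Cipher.DECRYPT_MODE, k, iv, onion);

        System.out.println(new String(onion)); // prints "small file chunk"
    }
}
```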
I want to build a decentralized, reddit-like system using P2P. Basically, I want to retain the basic capabilities of reddit, but make it decentralized, to make it more robust and immune to censorship. This will also allow people to develop different clients to match the way they want to browse it.
Could you recommend good p2p libraries to base my work on? They should be open-source, cross-platform, robust and easy to use. I don't care much about the language, I can adapt.
Disclaimer: warning, self-promotion here !!!
Have you considered JXTA's latest release? It is probably sufficient for what you want to do. Otherwise, we are working on a new P2P framework called Chaupal, but it is not operational yet.
EDIT
There is also what I call the quick-and-dirty UDP solution (which is not so dirty after all, I should call it minimal).
Just implement one server with a public address and have it listen for UDP.
Peers located behind NATs contact the server, which can read from the received datagrams how their private IP address has been translated into a public one.
You send that information back to the peer, who can forward it to other peers. The server can also help exchange this information between peers.
Then peers can communicate directly (one-to-one) by sending datagrams to these translated addresses.
Simple and easy to implement, but it does not compensate for lost datagrams, replays, out-of-order delivery, etc. (i.e., the typical stuff that TCP solves for you in the network stack).
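A minimal sketch of that rendezvous step in Java follows; the addresses and port are placeholders, and a production version needs retries, keep-alives to hold the NAT mapping open, and its own reliability layer:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Rendezvous server: observes each peer's NAT-translated address and echoes it back.
public class RendezvousServer {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(4000)) { // placeholder public port
            byte[] buf = new byte[512];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                // The source seen here is the peer's public (translated) address and port.
                String observed = packet.getAddress().getHostAddress() + ":" + packet.getPort();
                byte[] reply = observed.getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(reply, reply.length, packet.getAddress(), packet.getPort()));
            }
        }
    }
}

// Peer side: learn your public endpoint from the server, then exchange it with other peers.
class Peer {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] hello = "hello".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(hello, hello.length,
                    InetAddress.getByName("203.0.113.1"), 4000)); // placeholder server address
            DatagramPacket resp = new DatagramPacket(new byte[512], 512);
            socket.receive(resp);
            System.out.println("my public endpoint: "
                    + new String(resp.getData(), 0, resp.getLength(), StandardCharsets.UTF_8));
            // Forward this endpoint to the other peer (via the server), then both sides
            // send datagrams to each other's endpoint to punch the hole through the NATs.
        }
    }
}
```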
I haven't had a chance to use it, but Telehash seems to have been made for this kind of application. Peer2Peer apps have a particular challenge dealing with the restrictions of firewalls... since Telehash is based on UDP, it's well suited for hole-punching through firewalls.
EDIT for static_rtti's comment:
If code velocity is a requirement, libjingle has a lot of effort going into it, but it is primarily geared towards XMPP. You can port parts of the ICE code and at least get hole-punching. See the libjingle architecture overview for details about their implementation.
Check out CouchDB. It's a decentralized web app platform that uses an HTTP API. People have used it to create "CouchApps", which are decentralized CouchDB-based applications that can spread virally to other CouchDB servers. All you need to write CouchApps is JavaScript and some knowledge of the CouchDB API. You can read this free online book to learn more: http://guide.couchdb.org
The secret sauce to CouchDB is a Master-to-Master replication protocol that lets information spread like a virus. When I attended the first CouchConf, they demonstrated how efficient this is by throwing a "Couch Party" (which is where you have a room full of people replicating to the person next to them simulating an ad hoc network).
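As an illustration, kicking off that replication between two nodes is just an HTTP POST to CouchDB's _replicate endpoint; the hosts and the "posts" database name below are made-up placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Trigger master-to-master style sync by replicating in both directions.
public class CouchReplicate {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Continuous replication from a neighbor's node into ours (placeholder hosts).
        String body = """
                {"source": "http://neighbor:5984/posts",
                 "target": "http://localhost:5984/posts",
                 "continuous": true}""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5984/_replicate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // CouchDB reports the replication status as JSON
        // Run the mirror-image request on the neighbor (or swap source/target here)
        // to replicate in both directions.
    }
}
```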
Also, all the code that makes a CouchApp work is public by default in special entities known as Design Documents.
P.S. I've been thinking of doing a similar project, but I don't have a lot of time to devote to it at the moment. GOD SPEED MY BOY!