Compare Quagga to XORP - networking

What do you think of Quagga compared to XORP as a dynamic software routing engine? What are the technical merits of each engine comparatively? Additionally, what do most people think of them from a programming view? Who has manipulated networks using these engines? I was wondering from an OSPF, routing, BGP protocol user's perspective.

The following does not answer your question completely, but the Vyatta open source routers and the OpenSolaris customer gateway software for Amazon VPC both use Quagga to implement BGP support.
From the Wikipedia entry for XORP: "The software suite was selected commercially as the routing platform for the Vyatta line of products in its early releases, but later has been replaced with quagga."
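If you do go the Quagga route, BGP support is handled by its bgpd daemon; a minimal, purely illustrative configuration sketch (the ASNs, router ID, prefix, and neighbor address below are made up) looks something like this:

    ! bgpd.conf -- hypothetical minimal BGP peering with Quagga
    router bgp 65001
     bgp router-id 192.0.2.1
     ! advertise a locally originated prefix (illustrative)
     network 198.51.100.0/24
     ! peer with a neighboring AS (illustrative address and ASN)
     neighbor 203.0.113.2 remote-as 65002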

What do you think of Quagga compared to XORP as a dynamic software routing engine?
It is one of many options, but not of much use to you based on the questions/information you posted here. Have you tried looking into some of the alternatives, such as (nothing comes to mind)?
What are the technical merits of each engine comparatively?
Small, fast, oddly placed, optimized, super-heroic and more filler for a resume.
Additionally, what do most people think of them from a programming view?
I can't speak for most people, but I myself do not give it much credit or merit, or... well, you know what I mean.
Who has manipulated networks using these engines?
I could not find specific references, but I do remember reading that both Disney and the 'famous' YUV corporation of South Africa both played with this notion before. I believe Disney abandoned it with the fall of Michael Eisner.
I was wondering from an OSPF, routing, BGP protocol user's perspective.
I can offer a BGP protocol user's perspective. Hopefully we hear from OSPF and routing users' perspectives shortly.
Good question.

Related

I want to build a decentralized, reddit-like system using P2P. What existing p2p library should I base it on?

I want to build a decentralized, reddit-like system using P2P. Basically, I want to retain the basic capabilities of reddit, but make it decentralized, to make it more robust and immune to censorship. This will also allow people to develop different clients to match the way they want to browse it.
Could you recommend good p2p libraries to base my work on? They should be open-source, cross-platform, robust and easy to use. I don't care much about the language, I can adapt.
Disclaimer: warning, self-promotion here !!!
Have you considered JXTA's latest release? It is probably sufficient for what you want to do. Otherwise, we are working on a new P2P framework called Chaupal, but it is not operational yet.
EDIT
There is also what I call the quick-and-dirty UDP solution (which is not so dirty after all, I should call it minimal).
Just implement one server with a public address and start listening for UDP.
Peers located behind NATs contact the server, which can read from the received datagrams how their private IP addresses have been translated into public IP addresses.
You send that information back to the peer who can forward it to other peers. The server can also help exchanging this information between peers.
Then peers can communicate directly (one-to-one) by sending datagrams to these translated addresses.
Simple, easy to implement, but it does not cover lost datagrams, replays, out-of-order delivery, etc. (i.e., the typical stuff that TCP solves for you in the network stack).
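To make that concrete, here is a toy sketch in Python of the rendezvous server part (the port number and message format are invented for illustration; there is no NAT-type detection, keep-alive, or retry logic):

    # rendezvous.py -- toy UDP rendezvous server (illustrative only)
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("0.0.0.0", 9999))           # the public address peers contact

    peers = []                                # (ip, port) as seen after NAT translation
    while True:
        data, addr = server.recvfrom(1024)    # addr is the peer's translated public ip:port
        peers.append(addr)
        if len(peers) == 2:
            a, b = peers
            # tell each peer the other's public endpoint so they can talk directly
            server.sendto("{}:{}".format(*b).encode(), a)
            server.sendto("{}:{}".format(*a).encode(), b)
            peers.clear()

Each peer just sends any datagram to this server, reads back the other peer's translated address, and then starts sending datagrams to that address directly; everything beyond that (ordering, retries, duplicates) is up to you, as noted above.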
I haven't had a chance to use it, but Telehash seems to have been made for this kind of application. Peer2Peer apps have a particular challenge dealing with the restrictions of firewalls... since Telehash is based on UDP, it's well suited for hole-punching through firewalls.
EDIT for static_rtti's comment:
If code velocity is a requirement, libjingle has a lot of effort going into it, but it is primarily geared towards XMPP. You can pull out parts of the ICE code and at least get hole-punching. See the libjingle architecture overview for details about their implementation.
Check out CouchDB. It's a decentralized web app platform that uses an HTTP API. People have used it to create "CouchApps", which are decentralized CouchDB-based applications that can spread virally to other CouchDB servers. All you need to write CouchApps is JavaScript and the CouchDB API. You can read this free online book to learn more: http://guide.couchdb.org
The secret sauce to CouchDB is a master-to-master replication protocol that lets information spread like a virus. When I attended the first CouchConf, they demonstrated how efficient this is by throwing a "Couch Party" (which is where you have a room full of people replicating to the person next to them, simulating an ad hoc network).
Also, all the code that makes a CouchApp work is public by default in special entities known as Design Documents.
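If you want to see what driving that replication looks like, CouchDB exposes it through the same HTTP API; here is a rough sketch (the server URLs and database names are placeholders, and it assumes the classic /_replicate endpoint of CouchDB 1.x-style servers):

    # replicate.py -- ask a local CouchDB to pull a database from a peer (illustrative)
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:5984/_replicate",                  # local CouchDB instance
        data=json.dumps({
            "source": "http://peer.example.org:5984/posts",  # hypothetical remote database
            "target": "posts",                               # local database of the same name
        }).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    print(urllib.request.urlopen(req).read().decode())       # prints the replication result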
P.S. I've been thinking of doing a similar project, but I don't have a lot of time to devote to it at the moment. GOD SPEED MY BOY!

Flow-based routing and OpenFlow

This may not be the typical stackoverflow question.
A colleague of mine has been speculating that flow-based routing is going to be the next big thing in networking. OpenFlow provides the technology to use low-cost switches in large applications, IT data centers, etc., replacing Cisco, HP, etc. switches and routers. The theory is that you can create a hierarchy of these OpenFlow switches with simple configuration, e.g. no spanning tree. OpenFlow will route each flow to the appropriate switch/switch-port, using only the knowledge of the hierarchy of switches (no routers). The solution is supposed to save enterprises money and simplify networking.
Q. He is speculating that this may dramatically change enterprise networking. For many reasons, I am skeptical. I would like to hear your thoughts.
OpenFlow is a research project from Stanford University led by professor Nick McKeown. In the original OpenFlow research paper, the goal of OpenFlow was to give researchers a way "to run experimental protocols in the networks they use every day." For years networking researchers have had an almost impossible task deploying and evaluating their ideas on real networks with real Ethernet switches and IP routers. The difficulty is that real switches and routers from companies like Cisco, HP, and others, are all closed, proprietary boxes that implement standard "protocols", like Ethernet spanning tree and OSPF. There are business reasons why Cisco and HP won't let you run software on their switches and routers; there is no technical reason. OpenFlow was invented to solve a people problem: if Cisco is not willing to let you run code on their switch, maybe they can at least provide a very narrow interface to let you remotely configure their switch, and that narrow interface is called OpenFlow.
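To give a feel for how narrow that interface is from the programmer's side, here is a minimal sketch of an OpenFlow controller application using the open-source Ryu framework (Ryu is not mentioned in this thread; it is just one convenient controller, and the app shown only installs a catch-all rule that forwards unmatched packets to the controller):

    # miss_to_controller.py -- minimal Ryu app for OpenFlow 1.3 (illustrative)
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class MissToController(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # table-miss entry: match everything, send it to the controller
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

Run with ryu-manager and point an OpenFlow-capable switch at the controller; every forwarding decision beyond the rules you push lives in software like this.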
To my knowledge more than a dozen companies are currently implementing OpenFlow support for their switches. Some like HP are only providing the OpenFlow software for research purposes. Others like NEC are actually offering commercial support.
For academic researchers that want to evaluate new routing protocols in real networks, OpenFlow is a huge win. For switch vendors, it is less clear if OpenFlow support will help, hurt, or have no effect in the long run. After all, the academic research market is very small.
The reason why OpenFlow is most often discussed in the context of enterprise networks is that OpenFlow grew out of a previous research project called Ethane that used OpenFlow's mechanism of remotely programming switches in an enterprise network in order to centralize a security policy. Ethane, and by extension OpenFlow, has led directly to two startup companies: Nicira, founded by Martin Casado, and Big Switch Networks, founded by Guido Appenzeller. It would be easier to implement an Ethane-like system if all of the switches in the network supported OpenFlow.
Closely related to enterprise networks are data center networks, the networks that interconnect thousands to tens of thousands of servers in companies such as Google, Facebook, Microsoft, Amazon.com, and Yahoo!. One problem with Ethernet is that it does not scale to this many servers on the same Layer 2 network. We attempted to solve this problem in a research project called PortLand. We used OpenFlow to facilitate programming the switches from a central controller, which we called a Fabric Manager. We released the PortLand source code as open source.
However, we also found a limitation to OpenFlow's functionality. In another data center networking research project called Helios, we were not able to use OpenFlow because it did not provide a mechanism for bonding multiple switch ports into a Link Aggregation Group (LAG). Presumably one could extend the OpenFlow specification indefinitely until all possible switch features become exposed.
There are other networks as well such as the Internet access networks, Internet backbones, home networks, wireless networks, cellular networks, etc. Researchers are trying to see where OpenFlow fits into all of these markets. What it really comes down to is the question, "what problem does OpenFlow solve?" Ethane makes a case for enterprise networks but I have not yet seen a compelling case for any other type of network. OpenFlow might be the next big thing, or it might end up being a case of "don't solve a people problem with a technical solution."
In order to assess the future of flow-based networking and OpenFlow, here’s the way to think about it.
It starts with the silicon trends: Moore’s Law (2X transistors per 18-24 months), and a correlated but slower improvement in the I/O bandwidth available on a single chip (roughly 2X every 30-36 months). You can now buy full-featured 10GbE single chip switches with 64 ports, and chips which have a mix of 40GbE and 10GbE ports with comparable total I/O bandwidth.
There are a variety of ways to physically connect these in a mesh (ignoring the loop-free constraints of spanning tree and the way Ethernet learns MAC addresses). In the high performance computing (HPC) world, a lot of work has been done building clusters with InfiniBand and other protocols using meshes of small switches to network the compute servers. This is now being applied to Ethernet meshes. The geometry of a Clos or fat-tree topology enables a two-stage mesh with a large number of ports. The math is thus: where n is the number of ports per chip, the number of devices you can connect in a two-stage mesh is n²/2, and the number you can connect in a three-stage mesh is n³/4. While with standard spanning tree and learning, the spanning tree protocol will disable the multi-path links to the second stage, most of the Ethernet switch vendors have some sort of multi-chassis link aggregation protocol which gets around the multi-pathing limitation. There is also standards work in this area. Although it might not be obvious, the vast majority of link aggregation schemes allocate traffic so all the frames of any given flow take the same path. This is done in order to minimize out-of-order frames so they don't get dropped by some higher level protocol. They could have chosen to call this "flow-based multiplexing" but instead they call it "link aggregation".
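Plugging in the 64-port chips mentioned above, a quick back-of-the-envelope check of those formulas:

    # mesh_sizes.py -- devices attachable to meshes built from n-port chips
    n = 64                          # ports per single-chip switch, as cited above
    two_stage = n**2 // 2           # leaf-spine: 64**2 / 2 = 2048 devices
    three_stage = n**3 // 4         # three-stage fat-tree: 64**3 / 4 = 65536 devices
    print(two_stage, three_stage)   # -> 2048 65536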
Although the devil is in the details, there are a variety of data center operators and vendors that have concluded they don’t need to have large multi-slot chassis switches in the aggregation/core layer for server connect, instead using meshes of inexpensive 1U or 2U switches.
People have also concluded that eventually you need some kind of management station to set up the configuration of all the switches. Again, drawing from the experience with HPC and InfiniBand, they use what is called an InfiniBand Controller. In the telecom world, most telecom networks have evolved to separate the management and part of the control plane from the boxes that carry the data traffic.
Summarizing the points above: meshes of Ethernet switches with an external management plane, multipath traffic, and flows kept in order are evolutionary, not revolutionary, and are likely to become mainstream. At least one major company, Juniper, has made a big public statement about their endorsement of this approach. I'd call all of these "flow-based routing".
Juniper and other vendors’ proprietary approaches notwithstanding, this is an area that cries out for standards. The Open Networking Foundation (ONF), was founded to promote standards in this area, starting with OpenFlow. Within a couple of months, the sixty+ members of ONF will be celebrating their first year anniversary. Each member has, I am led to believe, paid tens of thousands of dollars to join. While the OpenFlow protocol has a ways to go before it is widely adopted, it has real momentum.
#Nathan: OpenFlow 1.1 actually adds some primitives that enable the use of multiple links via the Multipath Proposal.
An excellent view of OpenFlow by Simon Crosby
http://community.citrix.com/display/ocb/2011/03/21/The+Rise+of+the+Software+Defined+Network
More context on SDN, which discusses the IETF's SDN initiative and ONF's OpenFlow; working in conjunction they are a powerful combination: http://bit.ly/A8xYso
Nathan, excellent historical account and overview of OpenFlow. Thanks!
You've hit on the points that I've been wrapping my head around as to why OpenFlow might not be widely adopted. Since it was designed to be open, to allow researchers the ability to run experimental protocols, and not necessarily to be "compatible with" the big players Cisco/HP/etc., it puts itself into a niche (although potentially big) market; more on this later. And as you've stated, it has received some adoption in the "cloud data centers (CDC)", e.g. Google, Facebook, etc., because they need to exploit experimental protocols to gain a competitive advantage or optimize for their application.
As you've stated, some switch vendors have added OpenFlow capability to capitalize on the niche need in academia and potentially sell into the CDCs: Google, Facebook. This is potentially a big market (or bubble if you're pessimistic).
The problem that I see is that the majority of the market (80% or more) is enterprise IT data centers. The requirement here is for stable, compatible networking. Open and less expensive would be nice, but not at the cost of the former.
One could imagine a day when corporate IT is partially or completely cloud-sourced and QoS is maintained by the cloud provider. In this case, experimental protocols could be leveraged to provide a competitive advantage in speed or QoS, in which case OpenFlow could play a more dominant role. I personally think this scenario is many years off.
So, the conclusion I come to is that other than in research and perhaps CDCs (Google, Facebook), the market is pretty small. I suppose that if researchers use OpenFlow to come up with a better protocol for, say, link aggregation or congestion management, then eventually Cisco and HP will provide those in their standard offering because their customers will demand it. So OpenFlow could be a market influencer (via the research community), but it would not be a market disruptor.
Do you agree with my conclusions? Thanks for your input.

How can I learn _really_ low-level network programming?

So I want to learn all about networks. Well below the socket, down to raw sockets and stuff. And I want to understand hubs, routers, access points, etc. For example, I'd like to be able to write my own software to do this kind of stuff.* Is there a great source for this kind of information?
I know that I'm asking a LOT here, and that to fully explain it all requires from high level down to low level. I guess I'm looking for a source similar in scope and depth to Applied Cryptography, but about networks.
Thanks to anyone who can help to point me (and others like me?) in the right direction.
* Yes, I realize using any of my hand-crafted network stack code would be a huge security issue, and am only looking to do it to learn :)
Similar Question: here. However I'm looking for more than just 'what's below TCP/UDP sockets?'.
Edited for Clarification: The depth I'm talking about is above the driver level. So assuming that the bits can make it to and from the other end of the wire, what next?
I learned IP networking from TCP/IP Illustrated. Highly recommended.
This may not help you learn it, but a packet sniffer like Wireshark will give you some insight into what the data looks like at a pretty low-level protocol (TCP/IP).
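If you want to poke at the same bytes from your own code, a raw socket will hand you whole Ethernet frames; a minimal sketch (Linux-only, needs root, and 0x0003 is the ETH_P_ALL protocol constant):

    # sniff.py -- print a few raw Ethernet frames (Linux, run as root; illustrative)
    import socket

    # AF_PACKET + SOCK_RAW delivers complete layer-2 frames, not just IP payloads
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
    for _ in range(5):
        frame, iface = s.recvfrom(65535)
        dst, src, ethertype = frame[0:6], frame[6:12], frame[12:14]
        print(iface[0], dst.hex(), src.hex(), ethertype.hex(), len(frame))

From there you can start decoding the IP and TCP/UDP headers by hand, which is a good way to internalize what the books and Wireshark are showing you.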
As you have obviously recognised, the universe does not start and end with the IP Protocol. Take a look at the OSI 7 Layer Model where IP is a Layer 3 (Network) protocol. Common IP Routers will operate at this level, but there is more complexity you probably should understand in the Data Link and Physical layers before you start coding your own network stacks.
Start with the fundamentals of data communications in all its myriad forms and work your way up the stack until you get to where you need to stop. Data Communications, Computer Networking and Open Systems is a good foundation text; then look for more detail on each area you need to focus on. Previous answers include good links for IP and TCP/IP, and as mentioned, Wireshark will let you look down through some of the layers.
The Cisco CCNA materials contain a great treatment of network fundamentals, but do not touch on the programming aspect. I'm not sure whether there is an official free link, but you can try to find them.
You should equip yourself with a C compiler and the necessary libs and headers for your OS and play around. You may want to read, for example:
http://snap.nlc.dcccd.edu/learn/fuller3/chap13/chap13.html
I had some more links in my delicious account, but they all went down the digital drain ;-)
Have you any embedded programming experience? If so, I recommend you buy one of these development boards. They are cheap and allow you to work on every part of the networking stack, plus all the software tools required are free.
Note that getting going on it isn't easy; I ended up reading the CS8900 IC datasheet to learn how to make it communicate with the ARM7-based processor. But if you enjoy that sort of thing (as I do) then they are great fun.
Hmmm ... have you looked into Computer Networks by Tanenbaum ?
The TCP/IP Guide
I have found the networking chapter in "Understanding the Linux Kernel" and "Understanding Linux Network Internals" from O'Reilly to be very helpful.
The TCP/IP stack is a very good start but there is a lot more and a good understanding of how ethernet works and how ethernet != IP != the-interweb will go a long way.
Books on network security often do a decent, if not good, job of explaining how networks work in a concise context.
What really did the trick for me was taking a job implementing NAT :)
This course worked for me: COS 461 at Princeton. Note that it assumes system-level programming experience with C.
Pretty much all the readings and lectures are available online under "Syllabus". And you can try the assignments too (unfortunately, you won't have access to the Virtual Network System).
Check this out; it is a good collection of information:
http://www.tcpipguide.com/free/t_toc.htm

Could you recommend some networking paper for me?

I am taking Computer Networking this semester, and I find it quite interesting to learn why the Internet is designed the way it is today. I also enjoy reading the papers referred to in the teaching slides, like End-to-End Arguments in System Design and The Design Philosophy of the DARPA Internet Protocols.
Could you recommend some other interesting papers/articles to me, especially those related to the higher layers, like the TCP/IP protocols?
Many thanks for your reply.
Roy Thomas Fielding wrote his dissertation at UCI about Architectural Styles and the Design of Network-based Software Architectures (2000), which focuses more on applications in a networked environment; but I would recommend it, since it's related to your request about learning why the Internet is designed the way it is today.
Link: Architectural Styles and the Design of Network-based Software Architectures
Generally, the books by Andrew Tanenbaum are worth reading. They are normally used as the standard texts in networking courses.
On the other hand, they are rather basic.
One classic book is "Interconnections 2nd edition" by Radia Perlman.
Another very good one is "Routing in the Internet" by Christian Huitema.
If you are serious about learning how the Internet works, you also need to know about BGP. A decent introduction is "BGP4 Inter-Domain Routing in the Internet" by John Stewart.
A more advanced topic is MPLS. A very good book on this is "MPLS-Enabled Applications" by Ina Minei and Julian Lucek.
Another more advanced topic is multicast. I could recommend "Interdomain Multicast Routing" by Brian Edwards and others.
The books published by Cisco press tend to be good (although obviously vendor-specific) if you are interested in more practical details of how to configure network equipment.
Finally, "Unix Network Programming, Volume 1" by Richard Stevens is a must read if you want to do network programming.
http://www.cs.ucsb.edu/~almeroth/classes/F05.276/papers/vegas.pdf
I'm not sure if this is exactly what you were looking for, but there are some really interesting papers out there on peer-to-peer networking.
Some examples:
Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications
Incentives Build Robustness in BitTorrent
Protecting Free Expression Online with Freenet
If you want an introduction to what (not why) the TCP/IP protocols are, a classic book is TCP/IP Illustrated, Volume 1: The Protocols.

Why isn't bittorrent more widespread? [closed]

I suppose this question is a variation on a theme, but different.
Torrents will never replace HTTP, or even FTP download options. This said, why aren't there torrent links next to those options on more websites?
I'm imagining a web system whereby files can be downloaded via HTTP, say from http://example.com/downloads/files/myFile.tar.bz2, torrents can be cheaply autogenerated and stored in /downloads/torrents/myFile.tar.bz2.torrent, and the tracker might be /downloads/tracker/.
Trackers are a well-defined problem, and not incredibly difficult to implement, and there are many drop-in-place alternatives out there already. I imagine it wouldn't be difficult to customise one to do what is needed here.
The autogenerated torrent file can include the normal HTTP server as a permanent seed; the extensions to do this are very well supported by most, if not all, of the major torrent clients and require no reconfiguration or special things on the server end (it uses stock standard HTTP Range headers).
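As a rough illustration of how cheap that autogeneration can be, here is a sketch that builds a single-file .torrent with an HTTP web seed via the url-list key (the tracker and download URLs are placeholders, and a real deployment would likely use an existing bencode/torrent library instead):

    # make_torrent.py -- toy single-file .torrent generator with an HTTP seed (illustrative)
    import hashlib, os

    def bencode(x):
        if isinstance(x, int):
            return b"i%de" % x
        if isinstance(x, bytes):
            return b"%d:%s" % (len(x), x)
        if isinstance(x, str):
            return bencode(x.encode())
        if isinstance(x, list):
            return b"l" + b"".join(bencode(i) for i in x) + b"e"
        if isinstance(x, dict):  # keys must be sorted
            return b"d" + b"".join(bencode(k) + bencode(v) for k, v in sorted(x.items())) + b"e"
        raise TypeError(type(x))

    def make_torrent(path, tracker, web_seed, piece_len=256 * 1024):
        pieces = b""
        with open(path, "rb") as f:
            while chunk := f.read(piece_len):
                pieces += hashlib.sha1(chunk).digest()   # concatenated SHA-1 piece hashes
        return bencode({
            "announce": tracker,
            "url-list": web_seed,                        # GetRight-style HTTP seed (BEP 19)
            "info": {
                "name": os.path.basename(path),
                "length": os.path.getsize(path),
                "piece length": piece_len,
                "pieces": pieces,
            },
        })

    # e.g. open("myFile.tar.bz2.torrent", "wb").write(make_torrent(
    #     "myFile.tar.bz2",
    #     "http://example.com/downloads/tracker/announce",
    #     "http://example.com/downloads/files/myFile.tar.bz2"))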
Personally, if I setup such a system, I would then speed limit the /downloads/files/ directory to something reasonable, say maybe 40-50kb/s, depending on what exactly you were trying to serve.
Does such a file delivery system exist? Would you use it if it did: for your personal, company, or other website?
First of all: http://torrent.ubuntu.com/ for torrents on Ubuntu.
Second of all: Opera has a built-in torrent client.
Third: I agree with the stigma attached to P2P. So much so that we have sites that need to be called legaltorrents and suchlike, because by default a torrent would be an illegal thing, and let us not kid ourselves, it is.
Getting torrents into the mainstream is an excellent idea. You can't tamper with the files you are seeding, so there is no risk there.
The big reason is not really stigma. The big reason is analytics, and their protection. With torrents, these people (companies like Microsoft and the like) would not be able to gather important information about who is doing the downloads (not personally identifiable information, and quickly aggregated away). With torrents, other people would be able to see this information, at least partially. A company would love to seed the torrent of an evaluation version of a competing company's product, just to get an idea of how popular it is and where it is getting downloaded from. It is not as good as hosting the download on your web servers, but it is the next best thing.
This is possibly the reason why the Vista download on Microsoft's sites, or its many service packs and SDKs, are not offered as torrents.
Another thing is that people just won't participate, and it is not difficult to figure out why: the number of hoops you have to jump through. You have to figure out the firewall, the NAT thing, and then the UPnP thing, and then maybe your ISP is throttling your bandwidth, and so on.
Again, I would (and I do) seed to 1.5 times or beyond for the torrents that I download, but that is because these are Linux, OpenOffice, that sort of thing. I would probably feel funny seeding Adobe Acrobat, or some evaluation version or something, because those guys are making profits and I am not a fool to save money for them. Let them pay for HTTP downloads.
Edit (based on the comment by monoxide):
For the freeware out there and for SF.net downloads, the problem is that they cannot rely on seeders and will need their fallback on mirrors anyway, so for them torrents just add to their expense. One more reason that comes to mind is that even in software shops, Internet access is now thoroughly controlled, and the ports on which torrents rely, plus the upload requirement, are an absolute no-no. Since most people who need these sites and their downloads are in these kinds of offices, they will continue to use HTTP.
BUT even that is not the answer. These people have restrictions on redistribution in their licensing terms. And so their problem is this: if you are seeding their software, you are redistributing it. That is a violation of their licensing terms, so if they host a torrent download and allow you to seed it, that is entrapment and they can be sued (I am not a lawyer, I learn from watching TV). They would have to delicately change their licensing to allow distribution by seeding torrents but not otherwise. This is an easy enough concept for most of us, but the vagaries of the English language and the dumb hard look on the face of the judge make it a very tricky thing to do. The judge may personally understand torrents, but sitting up there in the court he has to frown and pretend not to, because it is not documented in legalese.
That there is the ditch they have dug, and there they fall into it. Let us laugh at them and their misery. Yesterday's smart is today's stupid.
Cheers!
I'm wondering if part of it is the stigma associated with torrents. The only software that I see providing torrent links are Linux distros, and not all of them (for example, the Ubuntu website does not provide torrents to download Ubuntu). However, if I said I was going to torrent something, most people associate it with illegal downloads (music, video, TV shows, etc).
I think this might come from the top. An engineer might propose using a torrent system to provide downloads, yet management shudders when they hear the word "torrent".
That said, I would indeed use such a system. Although I doubt I would be able to seed at home (I found that the bandwidth kills the connection for everyone else in the house). However, at school, I probably would not only use such a system, but seed for it as well.
Another problem, as mentioned in the other question, is that torrent software is not built into browsers. Until it is, you won't see widespread use of it.
Kontiki (which is very similar to bittorrent), makes up about 10% of all internet traffic by volume in the UK, and is exclusively used for legal distribution of "big media" content.
There are people who won't install a torrent client because they don't want the RIAA sending them extortion letters and running up legal fees in court when they (the RIAA) break into your computer and see MP3 files that are completely legal backup copies of CDs that were legally purchased.
There's a lot of fear about torrents out there and I'm not comfortable with any of the clients that would allow even limited access to my PC because that's the "camel's nose in the tent".
The other posters are correct. There is a huge stigma against torrent files in general due to their use by hackers and people who violate copyright law. Look at The Pirate Bay; torrent files are all they "serve". A lot of cable companies in the US have started traffic-shaping torrent traffic on their networks as well, because it is such a bandwidth hog.
Remember that torrents are not a download accelerator. They are meant to offload someone who cannot afford (or maybe just doesn't desire) to pay for all the bandwidth themselves. The users who are seeding take the majority of the load. No one seeding? You get no files.
The torrent protocol is also horrible for being so darn chatty. As much as 40% of your communications on the wire can be control-flow messages and chat between clients asking for pieces. This is why cable companies hate it so much. There are also some problems with the torrent endgame (where it asks a lot of people for the final parts in an attempt to complete the torrent, but can sometimes end up with 0 available parts, so you are stuck at 99% and seeding for everyone).
HTTP is also pretty well established and can be traffic-shaped, load-balanced, etc. So most legit companies that serve up their content can afford to host it, or use someone like Akamai to mirror the data and then load-balance.
Perhaps it's the ubiquity of HTTP-enabled browsers; you don't see as many FTP download links anymore, so that could be the biggest factor (ease of use for the end user).
Still, I think torrent downloads are a valid alternative, even if they won't be the primary download.
I even suggested SourceForge auto-generate torrents for downloads, and they agreed it was a good idea... but they haven't implemented it (yet). Here's hoping they will.
Something like this actually exists at speeddemosarchive.com.
The server hosts a Metroid Prime speedrun and provides a permanent seed for it.
I think that it's a very clever idea.
Contrary to your idea, you don't need an HTTP URL.
I think one of the reasons is that (currently) torrent links are not fully supported inside web browsers... you have to fire up the torrent client and so on.
Maybe it is time for a little Firefox extension/plugin? Damn, now I am at work! :)

Resources