With all of the net neutrality arguments going around, and with the cost of broadband likely to go up in the future, it seems like a good idea to have an open-source protocol that lets standard consumer routers cooperate and form a mesh network with other consumer routers close by. I wanted to know if any such system already exists for the average open-source user.
It seems likely that, with enough nodes in close proximity and a good abstraction, we could get something worthwhile going.
You could always use WDS nodes (kind of like repeaters).
I use it in my Buffalo AirStation with DD-WRT installed (any router that can load DD-WRT would work).
www.dd-wrt.com
I'm not sure about its scalability, though, and the APs would have to be within reach of each other. They could run on separate SSIDs, however.
Edit: here's the DD-WRT Wiki page about WDS: http://www.dd-wrt.com/wiki/index.php/WDS
WDS is not meant for and will not scale to more than a few nodes.
There has been extensive work on mesh routing protocols such as BATMAN-ADV, OLSR, BMX, and 802.11s. These are all supported on OpenWrt, which runs on a very large number of consumer wireless routers.
There are also many large-scale deployments, such as Freifunk and the deployments by the Village Telco.
Just to add more info: batmand (layer 3) and batman-adv (layer 2) can run on almost anything resembling Linux. I have managed to get them working on Android devices (mostly running CyanogenMod), Raspberry Pis, laptops, Foneras... basically anything that has, or accepts, a wireless card with ad-hoc mode and runs a Linux-based operating system.
Freifunk Lübeck runs batman-adv on D-Link 300 routers.
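For a concrete picture of what joining such a mesh involves, here is a minimal sketch (written in Python purely as a scripting convenience) of the usual batman-adv bring-up steps on Linux. It assumes the batman-adv kernel module plus the iw and batctl tools are installed, and that the card supports ad-hoc (IBSS) mode; the interface name, SSID, and frequency are placeholder values.

```python
import subprocess

def sh(*cmd):
    """Run one command, raising if it fails."""
    subprocess.run(cmd, check=True)

IFACE, SSID, FREQ_MHZ = "wlan0", "my-mesh", "2412"  # placeholders

sh("ip", "link", "set", IFACE, "down")
sh("iw", "dev", IFACE, "set", "type", "ibss")         # put the card in ad-hoc mode
sh("ip", "link", "set", IFACE, "up")
sh("iw", "dev", IFACE, "ibss", "join", SSID, FREQ_MHZ)
sh("batctl", "if", "add", IFACE)                      # classic syntax; newer batctl
                                                      # uses "batctl meshif ... interface add"
sh("ip", "link", "set", "bat0", "up")                 # bat0 is the virtual mesh interface

# From here on, bat0 behaves like an ordinary layer-2 interface: assign it an
# IP address and batman-adv handles the multi-hop forwarding underneath.
```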
If Chris and Pat want to exchange a text message, they send and receive via their network providers, which charge them for a connection.
If Chris and Pat are both located in New York City, and there are enough wireless devices between Chris and Pat all close enough to each other to form a continuous chain, is it possible for all those devices to be programmed to cooperatively forward packets amongst each other, bypassing the need for network providers?
It would seem the "address" of each device would have to include current geographic coordinates, and devices would have to report their movements frequently enough so routing attempts could still find them, but the speed and capacity of devices nowadays could handle that, right?
Would such a network be viable? Does it already exist or has it been attempted? Is there some kind of inherent programming problem that is difficult to overcome?
There are a few interesting things here:
Reachability. At a minimum, you need a technology that can do ad-hoc, peer-to-peer networking. Of those technologies, only Bluetooth, NFC, and Wi-Fi are implemented at all commonly, and of those only Wi-Fi currently has the strength to reach devices in other houses or out on the street. Even there, typical ranges are 30-60 m, and that's for APs; it may be lower for end-user devices.
Mobility. Any short-range wireless communication protocol has difficulties with fast-moving devices. It's simple math: if your coverage is 50 m in diameter and you move at about 20 km/h (5.5 m/s), you have less than 10 s to detect the link, connect, and send data while passing through it (see the sketch after this list). And that's before considering received traffic: you also have to let other devices know that, for the next 10 s, you are reachable via this particular access network. For comparison, Wi-Fi connection setup with decent authentication (which you'd need for something like this) alone takes a few seconds. 10 s might be doable, but as soon as we talk about cars, trains, and so on, it becomes almost impossible with current technology. And if you can't connect to those, what are the odds you can bridge a huge boulevard with such limited reachability?
Hop-to-hop delays. You need a lot of hops. We can fairly assume you need a hop every 20-30 m; call it 40 hops/km on average. So to send a packet over, say, 5 km, you'd need 200 hops. Each hop has to take in the packet (L2 processing), route it (L3 processing), and send it out again (L2 processing). Mobile devices are relatively powerful these days, but I wouldn't assume they can do this in the microseconds dedicated routers manage. On top of that, in a wireless network you have to wait for a transmission slot at each hop, which can take on the order of milliseconds. All in all, the odds are high that this would be a terribly slow network.
Loss. This depends a bit on the wireless protocol: either it has its own reliable-delivery mechanism (which makes the previous point worse) or it doesn't. In the latter case, suppose each wireless link has about 0.1% loss, i.e. 99.9% delivery. Over the 200 hops considered above, that compounds to an 18.1% end-to-end loss rate, since 1 - 0.999^200 ≈ 0.181 (worked through in the sketch after this list). That is nearly impossible to work with in day-to-day communications.
Routing. Let's say you need a few million devices, and thus a few million routes. Traditional routing at that scale usually takes very heavy multicore routers with loads of processing power; let's just say mobile devices (today) can't cut that yet. A purely geographically based routing mechanism might work (the sketch below includes a toy greedy-forwarding step), but I personally can't think of any such system, even a theoretical one, that works today. You would still have to distribute routes, deal with very frequent route updates, and avoid routing loops, so I'd guess you would hit the same scaling issues as with, for example, OSPF. All in all, though, I think this is something mobile devices will be able to handle in the not-so-far future; it's just a question of computing capacity.
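To make those numbers concrete, here is a small sketch that reproduces the back-of-the-envelope arithmetic from the mobility, hop-delay, and loss points above, plus a toy greedy geographic forwarding step for the routing point. The constants are the assumptions stated above, except the 2 ms per-hop figure, which is my own illustrative guess.

```python
import math

# Mobility: contact window while crossing one node's coverage.
coverage_m = 50                 # assumed coverage diameter
speed_mps = 20 / 3.6            # 20 km/h, ~5.5 m/s
print(f"contact window: {coverage_m / speed_mps:.1f} s")        # ~9.0 s

# Hop-to-hop delay: 5 km at ~40 hops/km.
hops = 5 * 40                   # 200 hops
per_hop_s = 2e-3                # assumed ~2 ms per hop (processing + airtime slot)
print(f"one-way latency: {hops * per_hop_s * 1000:.0f} ms")     # ~400 ms

# Loss: 0.1% per link compounds over 200 hops.
p_ok = 0.999
print(f"end-to-end loss: {(1 - p_ok ** hops) * 100:.1f} %")     # ~18.1 %

# Routing: toy greedy geographic forwarding. Forward to whichever neighbour is
# closest to the destination; real protocols must also handle dead ends,
# routing loops, and stale position reports.
def next_hop(my_pos, dest, neighbours):
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    name, pos = min(neighbours, key=lambda n: dist(n[1], dest))
    return name if dist(pos, dest) < dist(my_pos, dest) else None  # None = dead end

print(next_hop((0, 0), (100, 100), [("A", (10, 0)), ("B", (40, 30))]))  # -> B
```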
There are other reasons why such a network is very hard today, but these are the major ones I know of. Is it impossible? No, of course not. I just wanted to show why I think it is nearly impossible with current technologies, and why it would require some very significant improvements beyond just building the network.
If everyone has a device with sufficient receive/process/send capability, then backbones (ISPs) aren't really necessary. Start at mesh networking to find the huge web of implementations, devices, projects, and so on that are already in development. The early ARPANET was essentially true peer-to-peer, but the number of nodes grew faster than the nodes' individual capabilities, hence the growth of backbones and those damn fees everyone pays to phone and cable companies.
Eventually someone will realize there are a million teenagers in NYC that would be happy to text and email each other for free. They'll create a 99-cent download to let everyone turn their phones and laptops and discarded devices into routers and repeaters, and it'll go viral.
Someday household rooftop repeaters might become as common as TV antennas used to be.
Please check: Wireless sensor network
A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound, and pressure, and that cooperatively pass their data through the network to a main location.
I read about OpenBTS and it's really amazing... but I was wondering whether we can use it to build a phone-to-phone, provider-less network.
Any clues or experiments would be really appreciated.
One thing to be aware of is that OpenBTS is 2G/GSM only, i.e. it does not support 3G/UMTS.
This may or may not be important to you depending on what you would like to achieve.
There does appear to be some discussion of adding this functionality in the future (effectively building an open Node B/RNC), but it will be tricky, as the authentication mechanism used in 3G requires the network that owns the SIM to provide authentication data for even the most basic communication.
GSM follows a strict client-server model. Mobile phones are intended to be clients.
If you wanted to build phones with phone-to-phone capability, you would need to implement the network-side functionality in the phone. With that, phone-to-phone communication could (theoretically) be done in an ad-hoc network model, with one phone running the network part.
I would suspect that one has to look at the impact on the physical/radio layer as well.
Rather unrealistic, IMHO.
May be of interest:
http://terranet.se/history/
So far this company (TerraNet) seems to offer only software for creating mesh networks over Wi-Fi (I think Wi-Fi is a big disadvantage due to the battery drain; if only we could use GSM), but they seem to share this idea.
I was wondering where I could learn more about decentralized sharing and P2P networks. Ideally, I'd like to create something to help students share files with one another over their university's network, so they could share without fear of outside entities.
I'm not trying to build the next Napster here, just wondering if this idea is feasible. Are there any open source P2P networks out there that could be tweaked to do what I want?
Basically, you need a server (well, you don't NEED a server, but it would make things much simpler) that stores user IPs among other things, like file hash lists.
That server can be in any environment you want (which is very convenient).
Then each client connects to the server (it should have a DNS name; a free one works, I've used no-ip.com before), sends basic information first (such as its IP and a file hash list), and then sends a heartbeat every now and then (say, every 5 minutes or less) to report that it's still reachable.
When a client searches for files or users, it just asks the server.
This is a centralized network, but the file sharing itself happens over P2P client-to-client connections. The reason to do it like this is that you can't know which IP to connect to without some reference (see the sketch after the list below).
Just to clear this server thing up:
- Torrents use trackers.
- eMule's ED2K uses lugdunum servers.
- eMule's "true p2p" Kademlia uses known nodes (clients) (most of the time taken from servers like this).
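To illustrate the centralized-index design described above, here is a minimal in-memory sketch. The message shapes, 5-minute expiry, and function names are all made up for illustration; a real tracker would sit behind a socket or HTTP server, and this only shows the bookkeeping.

```python
import time

PEER_TTL = 5 * 60   # seconds; matches the "report every 5 minutes" idea above

peers = {}  # peer IP -> {"last_seen": timestamp, "files": set of content hashes}

def announce(ip, file_hashes):
    """A client registers itself and the hashes of the files it shares."""
    peers[ip] = {"last_seen": time.time(), "files": set(file_hashes)}

def heartbeat(ip):
    """A client reports that it is still reachable."""
    if ip in peers:
        peers[ip]["last_seen"] = time.time()

def search(file_hash):
    """Return IPs of live peers believed to hold this file, expiring stale ones."""
    now = time.time()
    for ip in [p for p, i in peers.items() if now - i["last_seen"] > PEER_TTL]:
        del peers[ip]
    return [ip for ip, i in peers.items() if file_hash in i["files"]]

announce("203.0.113.5", ["abc123"])
print(search("abc123"))  # -> ['203.0.113.5']
```

The actual file transfer then happens directly between the two clients; the server only plays matchmaker, exactly like a torrent tracker.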
Tribler is what you are looking for!
It's a fully decentralized BitTorrent client from the Delft University of Technology. It's open source and written in Python, so it's also a great starting point for learning.
Use DC++
What is wrong with BitTorrent?
Edit: There is also a pre-built P2P networking stack on Microsoft operating systems that is a pretty cool basis to build something on: http://technet.microsoft.com/en-us/network/bb545868.aspx
So, this appears, on the surface, to be a network admin (serverfault) question, but I'm looking for a lower level answer from a network hacker type.
I was pretty much oblivious to how networks actually work in real life until I started my summer internship. Then, by way of having no other option (the internship is at a pretty networking-centric place and I have to put together testbeds for testing, among other things, networks), I became familiar with them. For one thing, the fact that there's no "this goes out to the internet!" port on commercial switches was kind of surprising, until I reasoned about how it works (it starts out like a hub until it 'learns' where IPs are in terms of the physical port, I guess?).
And after this home-crafted self-discovery (or possibly, error in thinking), I'm back at the extended stay hotel and looking at my cheap little home switch, and it has an uplink port.
Now my question to you, network hackers (in the good way), is: why?
The "uplink" port on your SOHO switch is internally crossed over. It relieves you of having to use a crossover cable to connect two switches. That is the only difference.
BTW: there isn't a "this goes out to the internet" port on SOHO switches either. You're confusing switches and routers/gateways. This confusion may be encouraged by manufacturers putting the two logically separate devices in one piece of hardware, e.g., a router with a 4-port switch. While we're at it, a wireless router with a 4-port switch is actually, logically, three separate devices (router, switch, and access point).
BTW #2: A switch (well, except for layer 3 switches, which arguably are only switches to the marketing department) actually learns where MAC addresses are. It neither knows about nor cares about IP.
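To make that learning behaviour concrete, here is a toy model of the logic: frames to unknown destinations are flooded, hub-style, while source addresses fill in the table, keyed on MACs as noted above. The port numbers are simplified, and real switches also age entries out.

```python
mac_table = {}  # source MAC -> physical port it was last seen on

def handle_frame(src_mac, dst_mac, in_port, num_ports=8):
    mac_table[src_mac] = in_port                  # learn where the sender lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]               # known: forward out one port
    return [p for p in range(num_ports) if p != in_port]  # unknown: flood

print(handle_frame("aa:aa", "bb:bb", in_port=1))  # bb:bb unknown -> flood all but 1
print(handle_frame("bb:bb", "aa:aa", in_port=2))  # aa:aa was learned -> [1]
```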
Uplink ports can be thought of as special ports for inter-switch connections. Sometimes they have a higher speed (1 Gbit/s instead of 100 Mbit/s, for example), or they are interchangeable (laid out as modules).
Some switches have multiple uplink ports (I had one with two), so you can have redundancy, or multiple switches connected this way with the same logic (which other switch is this MAC address on?).
I've recently become aware that there's a distinction between IP multicasting (which apparently doesn't work that well on the public internet) and application multicasting (which is apparently used in IRC and PSYC, per http://en.wikipedia.org/wiki/Multicast).
Is there a good tutorial on implementing application-level multicasting?
I thought the whole point of multicast was to reduce bandwidth for common network segments, so it's hard for me to understand what application-level multicast does.
The purpose of IP-level multicasting is to reduce bandwidth on common network segments where many users wish to receive the same traffic. It's usually limited to one particular subnet, and an IP router won't propagate the multicast beyond that subnet. This is done for scalability reasons: it wouldn't be a good idea to allow one host to originate multicast packets that are propagated to every IP address on the internet.
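For reference, here is what receiving IP-level multicast looks like from an application: the receiver just joins a group address and the network handles the fan-out, within the scope described above. The group address and port are arbitrary examples (239.0.0.0/8 is the administratively scoped range).

```python
import socket
import struct

MCAST_GRP, MCAST_PORT = "239.1.2.3", 5007   # example group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Join the group: the kernel signals membership on the subnet via IGMP.
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(1500)
    print(f"from {addr}: {data!r}")
```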
There are different ways to think about "application-level" multicasting. One approach is to build a multicast tree out of the host computers participating in the multicast; Dijkstra's algorithm can be used to do this (Wikipedia has a reasonable description, and there is a small sketch below). However, maintaining the list of participating computers, and keeping the tree up to date, can be a fair amount of work if hosts join and leave the network at a substantial rate. And you probably don't have a good estimate of hop cost available at the application level.
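As a minimal sketch of that tree-building approach: run Dijkstra over an overlay of the participating hosts and use the resulting shortest-path tree, rooted at the sender, to decide who forwards to whom. The graph and its edge costs below are invented; as noted above, obtaining realistic costs at the application level is the hard part.

```python
import heapq

# Invented overlay of participating hosts; weights are assumed link costs.
graph = {
    "src": {"a": 1, "b": 4},
    "a":   {"src": 1, "b": 2, "c": 5},
    "b":   {"src": 4, "a": 2, "c": 1},
    "c":   {"a": 5, "b": 1},
}

def multicast_tree(graph, source):
    """Dijkstra shortest-path tree rooted at `source`, as child -> parent links."""
    dist, parent = {source: 0}, {}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph[node].items():
            if d + w < dist.get(nbr, float("inf")):
                dist[nbr] = d + w
                parent[nbr] = node
                heapq.heappush(heap, (d + w, nbr))
    return parent

# Invert the child -> parent map to know whom each host forwards data to.
print(multicast_tree(graph, "src"))  # -> {'a': 'src', 'b': 'a', 'c': 'b'}
```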
Another approach you should review is the flooding algorithm used in the Gnutella network's query routing protocol (Wikipedia also has a good description of this, and a sketch follows below). This approach avoids the need to build a multicast tree, but it has the downside of generating more network traffic. In fact, a LOT more: the traffic grows with the square of the number of nodes, i.e. O(n^2).
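And a minimal sketch of the flooding approach in the spirit of Gnutella's query routing: each query carries a TTL and a unique ID, and every node remembers IDs it has already seen, so duplicates are dropped and the flood terminates. The overlay and TTL are arbitrary examples; a real implementation sends over TCP links instead of calling neighbours recursively.

```python
import uuid

neighbours = {            # arbitrary example overlay
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}
seen = {node: set() for node in neighbours}  # message IDs each node has handled

def flood(node, msg_id, query, ttl):
    if ttl == 0 or msg_id in seen[node]:
        return                      # hop limit reached, or duplicate: drop
    seen[node].add(msg_id)
    print(f"{node} handles query: {query}")
    for nbr in neighbours[node]:
        flood(nbr, msg_id, query, ttl - 1)

flood("A", uuid.uuid4().hex, "who has file abc123?", ttl=3)  # each node prints once
```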
Another example of application-level multicasting is using JGroups on Amazon EC2 or Google App Engine: those environments do not support IP multicast, but developers still want multicast-style functionality.