I have been playing with Contiki for some time now, have tried out various examples, and have written my own for both the simulation environment and the real hardware. So far I have only been experimenting with self-contained networks that, for example, measure the temperature difference between two nodes and then communicate that data to other devices (PCs) over a plain-text RS-232 link, blink LEDs, and similarly simple things.
Now I want to build a more complicated system where, instead of just forwarding the data in plain text to be read on a console, I would forward it to an application that would in turn post it to some sort of web service and, vice versa, receive data from the web service to be delivered to nodes on the network. There are quite a lot of examples and tutorials describing this kind of setup, but all of them (as far as I am aware) focus on the IP(v6) stack and SLIP to achieve it. The problem is that I have a really lousy programmer: uploading a 50 kB image takes about 1.5 minutes, so the development cycle is pure hell. I am also out of luck with simulation, since my platform is not really supported at the moment.
That's why I decided to try out the Rime stack: the image size is a third of the IPv6 one, and the development cycle is somewhat acceptable now (I really should get a decent JTAG programmer...). Meanwhile, I am having a bit of trouble wrapping my head around this setup with a different network stack, on which there is very little information around. Although Rime is pretty easy to understand by itself, I am not sure how I would go about connecting a Rime network to an IP network, or whether that is even possible or advised/intended by its designers.
I have some ideas in my head, ranging from ad hoc communication over a serial link between a server application running on the PC and the collector node, to a real Rime border router that is certainly outside of my league, for now.
How would you go about it? For my simple experimental case it would certainly work to just have a collector node that gathers the data from the Rime network and sends the aggregated data over a serial connection to a custom application that does the rest of the magic. But I wouldn't want to be the guy who reinvented the wheel, and I am quite sure Rime wasn't designed to be used in a vacuum, so there must be at least a recommended way of doing this?
Rime is a really simple stack (by simple I mean it offers little functionality), but it is much quicker for simple tasks.
You need to run the Rime stack on your gateway as well. That way, your boards and the gateway communicate over the same stack, and the data ends up on the gateway, which can then send it over IP to whomever you want.
If you want more technical detail, then edit your question with more specific technical context.
By the way, a JTAG programmer is a must-have (for industrial applications).
Edit: Another solution is to simply send the data from your board to the gateway as a Rime broadcast. The gateway then picks the data up and interprets it. The downside of this method is that you have to somehow make sure the gateway interprets only the data from your boards (and not from other boards).
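To make the broadcast variant concrete, here is a minimal sketch modelled on Contiki's stock example-broadcast.c (the channel number 129 and the payload are arbitrary choices; the Rime include path and address type names vary a bit between Contiki versions):

```c
#include "contiki.h"
#include "net/rime.h"   /* Rime stack; path may differ in newer Contiki versions */
#include <stdio.h>

/* Called whenever a broadcast packet arrives. On the gateway node, this is
 * where you would push the payload out over serial/IP instead of printf. */
static void
broadcast_recv(struct broadcast_conn *c, const rimeaddr_t *from)
{
  printf("broadcast from %d.%d: '%s'\n",
         from->u8[0], from->u8[1], (char *)packetbuf_dataptr());
}

static const struct broadcast_callbacks broadcast_call = { broadcast_recv };
static struct broadcast_conn broadcast;

PROCESS(sensor_broadcast_process, "Sensor broadcast");
AUTOSTART_PROCESSES(&sensor_broadcast_process);

PROCESS_THREAD(sensor_broadcast_process, ev, data)
{
  static struct etimer et;

  PROCESS_EXITHANDLER(broadcast_close(&broadcast);)
  PROCESS_BEGIN();

  broadcast_open(&broadcast, 129, &broadcast_call);  /* channel 129 is arbitrary */

  while(1) {
    etimer_set(&et, CLOCK_SECOND * 4);
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));

    packetbuf_copyfrom("hello", 6);   /* a real sensor reading would go here */
    broadcast_send(&broadcast);
  }

  PROCESS_END();
}
```

To address the "only my boards" concern, you could prefix the payload with a magic number that the gateway checks before interpreting the rest.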
I'm looking for, preferably, a device, but I'll settle for an application. I'd like to get an idea of what is out there: something that just gives me a total count per computer, and maybe broken down by common ports.
I'm not looking for detailed reporting like Wireshark provides, unless Wireshark can do summaries and run for a whole month without issues.
Essentially, I just want an idea of where my monthly bandwidth is going and to which computers/devices. E.g., computer A does a lot of website traffic, computer B gets a lot of Steam downloads, device C has some virus sending out on an unknown port. To be clear, I would only know that device C has a virus because the logs would show lots of bandwidth on an unknown port, which would then get me to investigate.
Being able to ignore/filter any traffic that stays behind the router would be nice. This computer, as of this post, has over a gig of traffic, but mostly to my networked drive. I'm really only interested in what uses internet bandwidth, who's using it, and, if possible, what it is.
If you are asking for tool recommendations, Google or Super User may serve you better.
If you are asking how to program this yourself, you can use a raw socket.
A raw socket can work as a network sniffer, so you can use one to measure your bandwidth usage.
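To sketch what that looks like (Linux-only, needs root, and purely illustrative rather than a finished accounting tool), an AF_PACKET raw socket receives a copy of every frame, so you can tally bytes by source address:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_ether.h>   /* ETH_P_ALL */
#include <netinet/ip.h>       /* struct iphdr */
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
    /* Raw packet socket: receives a copy of every frame (requires root). */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    unsigned char buf[65536];
    unsigned long long total = 0;

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n < 14 + (ssize_t)sizeof(struct iphdr))
            continue;
        /* Only account for IPv4 frames (EtherType 0x0800). */
        if (((buf[12] << 8) | buf[13]) != 0x0800)
            continue;

        struct iphdr *ip = (struct iphdr *)(buf + 14);  /* skip Ethernet header */
        struct in_addr src = { .s_addr = ip->saddr };
        total += n;
        printf("%5zd bytes from %-15s (running total: %llu)\n",
               n, inet_ntoa(src), total);
    }
    close(fd);
    return 0;
}
```

To get monthly per-device totals you would aggregate these counts per source IP (or MAC) instead of printing each packet, but this is the core of what sniffer-based accounting tools do.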
I am sorry if my question is obvious, but I need expert suggestions/views. I want to test my client/server game, for which I am currently using localhost, with both the client and the server on the same machine.
That's why I am not getting any fluctuation in the data, nor a meaningful measure of performance and other parameters. To get a real-world scenario, I wanted to ask whether:
I should create a little network with two computers, or
I should test on the LAN I am already on, or
these three cases (localhost included) are equivalent, or
I really need to test on different LANs to get reliable and realistic testing data.
How will these different network setups influence the testing process?
Can somebody please suggest which would be the ideal way, or at least enough, for testing?
Which of the above setups will give me the most variation in the numbers with the least setup/implementation effort?
Note: The game is supposed to be played on a LAN, but it is capable of more.
Thanks,
Jibbylala
P.S.: I'm a newbie at network stuff, so pardon me if I used the wrong terms or vocabulary.
You will want to test your application under different situations. For example, test it using a small LAN where you only have one switch between the two computers. That will ensure that you can, in fact, connect and play over a simple LAN. Then, test different LAN connections such as a slow link (turn a network card down to 10 Mbps), on a wireless LAN, and if possible even a larger or corporate-type LAN. The more testing you can do about different situations, the better. Testing on just your localhost will not be enough.
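If you want hard numbers to compare those setups, even a trivial UDP round-trip probe will make the differences visible. The following is only an illustrative sketch (port 9999 and the ten-ping loop are arbitrary choices): run it with the argument server on one machine and with the server's IP on the other.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/time.h>
#include <unistd.h>

#define PORT 9999   /* arbitrary */

static double now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

int main(int argc, char **argv)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(PORT);
    char buf[64];

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        /* Echo server: bounce every datagram straight back. */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        for (;;) {
            struct sockaddr_in peer;
            socklen_t len = sizeof peer;
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                 (struct sockaddr *)&peer, &len);
            if (n > 0)
                sendto(fd, buf, n, 0, (struct sockaddr *)&peer, len);
        }
    } else if (argc > 1) {
        /* Client: time ten round trips to the given server address. */
        inet_pton(AF_INET, argv[1], &addr.sin_addr);
        for (int i = 0; i < 10; i++) {
            double t0 = now_ms();
            sendto(fd, "ping", 4, 0, (struct sockaddr *)&addr, sizeof addr);
            if (recv(fd, buf, sizeof buf, 0) > 0)
                printf("rtt %.3f ms\n", now_ms() - t0);
            sleep(1);
        }
    } else {
        fprintf(stderr, "usage: %s server | %s <server-ip>\n", argv[0], argv[0]);
    }
    close(fd);
    return 0;
}
```

Running this first over localhost and then across the LAN shows exactly the fluctuation that localhost testing hides; on Linux you can also degrade the link artificially with tc's netem queueing discipline to simulate added latency and packet loss.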
Are there any libraries which put a reliability layer on top of UDP broadcast?
I need to broadcast large amounts of data to a large number of machines as quickly as possible, and generally it seems like such a problem must have already been solved many times over, but I wasn't able to find anything except for the Spread toolkit, which has a somewhat viral license (you have to mention it in all materials advertising the end product, which I'm not sure our customer will be willing to do).
I was already going to write such a thing myself (because it would be extremely fun to do!) but decided to ask first.
I looked also at UDT (http://udt.sourceforge.net) but it does not seem to provide a broadcast operation.
P.S. I'm looking for something as lightweight as a library; no infrastructure changes.
How about UDP multicast? Have a look at the PGM protocol for which there are several commercial and open source implementations.
Disclaimer: I'm the author of OpenPGM, an open source implementation of said protocol.
Though some research has been done on reliable UDP multicasting, I haven't yet used anything like that. You should take into consideration that this might not be as trivial as it first sounds.
If you don't have a list of nodes in the target network you have no idea when and to whom to resend, even if active nodes receiving your messages can acknowledge them. Sending to a large number of nodes, expecting acks from all of them might also cause congestion problems in the network.
I'd suggest rethinking the network architecture of your application, e.g., using some kind of centralized solution where you submit updates to a server and it sends the message to all connected clients. Or, if the original sender node's address is known a priori, just let clients connect to it and let the sender push updates over those connections.
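To make the difficulty concrete, here is a minimal sender-side sketch of the usual starting point for such a layer: each broadcast datagram carries a sequence number so that receivers can detect gaps and request retransmission with a NAK. The port, payload format, and pacing below are my own illustrative assumptions, not any particular library's protocol.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

#define PORT 30001   /* arbitrary */

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;

    /* SO_BROADCAST must be enabled before sending to a broadcast address. */
    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &yes, sizeof yes);

    struct sockaddr_in dst = {0};
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(PORT);
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);

    for (uint32_t seq = 0; seq < 100; seq++) {
        uint8_t pkt[512];
        uint32_t seq_be = htonl(seq);

        memcpy(pkt, &seq_be, 4);         /* 4-byte header: sequence number */
        memcpy(pkt + 4, "payload", 8);   /* application data goes here */
        sendto(fd, pkt, 12, 0, (struct sockaddr *)&dst, sizeof dst);

        /* A real implementation keeps sent packets in a replay buffer;
         * receivers that see a gap in sequence numbers send a NAK and the
         * sender retransmits from that buffer. The pacing below stands in
         * for proper congestion control. */
        usleep(10 * 1000);
    }
    close(fd);
    return 0;
}
```

The hard parts (bounding the replay buffer, suppressing NAK storms when many receivers see the same gap, and congestion control) are precisely what PGM and the IETF Reliable Multicast protocols mentioned in the other answers standardize.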
Have a look around the IETF site for RFCs on Reliable Multicast. There is an entire working group on this. Several protocols have been developed for different purposes. Also have a look around Oracle/Sun for the Java Reliable Multicast Service project (JRMS). It was a research project of Sun, never supported, but it did contain Java bindings for the TRAM and LRMS protocols.
I suppose this question is a variation on a theme, but different.
Torrents will never replace HTTP, or even FTP download options. This said, why aren't there torrent links next to those options on more websites?
I'm imagining a web-system whereby downloaded files are able to be downloaded via HTTP, say from http://example.com/downloads/files/myFile.tar.bz2, torrents can be cheaply autogenerated and stored in /downloads/torrents/myFile.tar.bz2.torrent, and the tracker might be /downloads/tracker/.
Trackers are a well-defined problem, not incredibly difficult to implement, and there are many drop-in alternatives out there already. I imagine it wouldn't be difficult to customise one to do what is needed here.
The autogenerated torrent file can include the normal HTTP server as a permanent seed; the extensions to do this are very well supported by most, if not all, of the major torrent clients and require no reconfiguration or special handling on the server end (it uses stock-standard HTTP Range headers).
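For the curious, what a webseed-capable client does is plain HTTP. Here is an illustrative sketch of fetching the first 16 KiB piece from the hypothetical URL above (the host, path, and offsets are assumptions taken from the question, not a real service):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
    /* Resolve and connect to the HTTP seed (host/path from the question). */
    struct addrinfo hints = {0}, *res;
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (connect(fd, res->ai_addr, res->ai_addrlen) < 0) return 1;
    freeaddrinfo(res);

    long piece = 0;   /* first 16 KiB piece */
    char req[256];
    snprintf(req, sizeof req,
             "GET /downloads/files/myFile.tar.bz2 HTTP/1.1\r\n"
             "Host: example.com\r\n"
             "Range: bytes=%ld-%ld\r\n"
             "Connection: close\r\n\r\n",
             piece * 16384L, (piece + 1) * 16384L - 1);
    write(fd, req, strlen(req));

    /* The server answers "206 Partial Content" with just those bytes. */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);

    close(fd);
    return 0;
}
```

Because the server only has to honour Range requests, which any ordinary web server already does, no special server-side support is needed for the permanent-seed role.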
Personally, if I set up such a system, I would then speed-limit the /downloads/files/ directory to something reasonable, say maybe 40-50 kb/s, depending on what exactly you were trying to serve.
Does such a file delivery system exist? Would you use it if it did: for your personal, company, or other website?
First of all: http://torrent.ubuntu.com/ has torrents for Ubuntu.
Second: Opera has a built-in torrent client.
Third: I agree about the stigma attached to P2P. So much so that we have sites that need to be called LegalTorrents and the like, because by default a torrent is assumed to be an illegal thing, and let us not kid ourselves, it often is.
Getting torrents into the mainstream is an excellent idea. You can't tamper with the files you are seeding, so there is no risk there.
The big reason is not really stigma, though. The big reason is analytics, and the protection thereof. With torrents, these people (companies like Microsoft and such) would not be able to gather important information about who is doing the downloads (not personally identifiable information, and quickly aggregated away), while other people would be able to see this information, at least partially. A company would love to seed the torrent of an evaluation version of a competing company's product, just to get an idea of how popular it is and where it is being downloaded from. It is not as good as hosting the download on your own web servers, but it is the next best thing.
This is possibly the reason why the Vista download on Microsoft's sites, and its many service packs and SDKs, are not offered as torrents.
Another thing is that people just won't participate, and it is not difficult to figure out why: the number of hoops you have to jump through. You have to figure out the firewall, then the NAT thing, then the uPnP thing, and then maybe your ISP is throttling your bandwidth, and so on.
Again, I would (and I do) seed to a ratio of 1.5 or beyond for the torrents that I download, but that is because they are Linux, OpenOffice, that sort of thing. I would probably feel funny seeding Adobe Acrobat, or some evaluation version or other, because those guys are making profits and I am not fool enough to save money for them. Let them pay for HTTP downloads.
Edit (based on the comment by monoxide):
For the freeware out there and for SF.net downloads, the problem is that they cannot rely on seeders and will need their fallback mirrors anyway, so for them torrents just add to the expense. One more reason that comes to mind is that even in software shops, internet access is now thoroughly controlled, and the ports torrents rely on, plus the upload requirement, are an absolute no-no. Since most people who need these sites and their downloads are in these kinds of offices, they will continue to use HTTP.
BUT even that is not the whole answer. These people have restrictions on redistribution in their licensing terms, so their problem is this: if you are seeding their software, you are redistributing it. That is a violation of their licensing terms, so if they host a torrent download and allow you to seed it, that is entrapment and they can be sued (I am not a lawyer; I learn from watching TV). They would have to delicately change their licensing to allow distribution by seeding torrents but not otherwise. This is an easy enough concept for most of us, but the vagaries of the English language and the hard, blank look on the face of the judge make it a very tricky thing to do. The judge may personally understand torrents, but sitting up there in the court he has to frown and pretend not to, because it is not documented in legalese.
That is the ditch they have dug, and there they fall into it. Let us laugh at them and their misery. Yesterday's smart is today's stupid.
Cheers!
I'm wondering if part of it is the stigma associated with torrents. The only software that I see providing torrent links are Linux distros, and not all of them (for example, the Ubuntu website does not provide torrents to download Ubuntu). However, if I said I was going to torrent something, most people associate it with illegal downloads (music, video, TV shows, etc).
I think this might come from the top. An engineer might propose using a torrent system to provide downloads, yet management shudders when they hear the word "torrent".
That said, I would indeed use such a system. Although I doubt I would be able to seed at home (I found that the bandwidth kills the connection for everyone else in the house). However, at school, I probably would not only use such a system, but seed for it as well.
Another problem, as mentioned in the other question, is that torrent software is not built into browsers. Until it is, you won't see widespread use of it.
Kontiki (which is very similar to BitTorrent) makes up about 10% of all internet traffic by volume in the UK, and is exclusively used for legal distribution of "big media" content.
There are people who won't install a torrent client because they don't want the RIAA sending them extortion letters and running up legal fees in court when it breaks into their computers and sees MP3 files that are completely legal backup copies of CDs that were legally purchased.
There's a lot of fear about torrents out there and I'm not comfortable with any of the clients that would allow even limited access to my PC because that's the "camel's nose in the tent".
The other posters are correct. There is a huge stigma against torrent files in general, due to their use by hackers and people who violate copyright law. Look at The Pirate Bay: torrent files are all it "serves". A lot of cable companies in the US have also started traffic-shaping torrent traffic on their networks because it is such a bandwidth hog.
Remember that torrents are not a download accelerator. They are meant to offload someone who cannot afford (or maybe just doesn't want) to pay for all the bandwidth themselves. The users who are seeding take the majority of the load. No one seeding? You get no files.
The torrent protocol is also horribly chatty. As much as 40% of your communications on the wire can be control-flow messages and chat between clients asking for pieces. This is why cable companies hate it so much. There are also problems with the torrent endgame (where a client asks a lot of peers for the final parts in an attempt to complete the torrent, but can sometimes end up with 0 available parts, so you are stuck at 99% and seeding for everyone).
HTTP is also well established and can be traffic-shaped, load-balanced, etc. So most legitimate companies that serve up their own content can afford to host it, or use someone like Akamai to replicate the data and then load-balance.
Perhaps it's the ubiquity of HTTP-enabled browsers; you don't see as many FTP download links anymore, so that could be the biggest factor (ease of use for the end user).
Still, I think torrent downloads are a valid alternative, even if they won't be the primary download.
I even suggested that SourceForge auto-generate torrents for downloads, and they agreed it was a good idea... but haven't implemented it (yet). Here's hoping they will.
Something like this actually exists at speeddemosarchive.com.
The server hosts a Metroid Prime speedrun and provides a permanent seed for it.
I think that it's a very clever idea.
Contrary to your idea, you don't need an HTTP URL.
I think one of the reasons is that (currently) torrent links are not fully supported inside web browsers... you have to fire up the torrent client and so on.
Maybe it is time for a little Firefox extension/plugin? Damn, now I am at work! :)