I'm about to start writing a "server-side" (as in not running in a browser) software that will do a lot of network communication with UDP.
I'm currently leaning towards running the server on Node.js, mostly with the superficial motivation that it's "new and shiny".
As I'm fairly new to JavaScript, I'm looking into using some type of JS framework to help me avoid a few of the most obvious mistakes one often makes as a beginner. I'm looking for experiences with networking and JavaScript. Does any particular framework make UDP networking really easy/convenient?
Both MooTools and Dojo look interesting, as they are both well maintained, while Prototype/script.aculo.us seems to have become stagnant and is less interesting because of that.
MooTools Server could be a nice add-on to any JS software for its utility methods and OOP layer. However, it would not specifically help with UDP in any way.
Actually, all the frameworks you mentioned originate on the client side, and therefore have no reason to manage networking at the transport layer (or below) in any way.
It is not a very common use case, since JS, even with Node, is very oriented towards web apps, which almost always use HTTP over TCP. I don't have any references to give regarding such usage beyond Node's dgram API. It seems to offer enough abstraction over the datagram to be usable, and I don't really know what you would like to have to make UDP "easier": if you're messing with the transport layer, you're asking for your hands to get dirty anyway ;)
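For what it's worth, the dgram module is already fairly compact. Here's a minimal echo-server sketch with it, just to show the shape of the API (the port number and log messages are arbitrary placeholders):

```javascript
// Minimal UDP echo server using Node's built-in dgram module.
// The port (41234) is an arbitrary placeholder.
const dgram = require('dgram');

const server = dgram.createSocket('udp4');

server.on('message', (msg, rinfo) => {
  // Echo each datagram back to whoever sent it.
  console.log(`got ${msg.length} bytes from ${rinfo.address}:${rinfo.port}`);
  server.send(msg, rinfo.port, rinfo.address);
});

server.on('listening', () => {
  console.log(`listening on port ${server.address().port}`);
});

server.bind(41234);
```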
We're looking at splitting up our architecture (and adding new components) using a Service Oriented Architecture (SOA). There will be a number of external APIs used by third parties, which we will expose via a REST HTTP interface. However, I was wondering what would be best to use internally, as all components are within our control and will be on the same network, though potentially built on different technologies (mainly .NET and Ruby on Rails).
Would there be big performance/functionality gains in using a messaging system (Redis, RabbitMQ, EMS, or other notable options I've not heard of...) instead of HTTP (REST, SOAP, etc.)?
I've struggled to find good information on this topic and (as you can probably tell) I'm fairly new to this area, so any advice or good resources would be appreciated!
Thanks!
Messaging tends to give you a more loosely coupled architecture. It can potentially be more robust as well, since individual components can fail without killing the entire infrastructure.
The downside is complexity, the paradigm shift to an asynchronous model, and possibly performance (especially if you're persisting messages everywhere).
You also need to ensure that your messaging system is particularly robust. A single aspect of your logic can go down and restart without affecting everything, but if you lose your core message bus, then ALL of your logic is down waiting for the messaging to come back up.
Fortunately, the message bus can be long-running without humans fiddling with it, and humans are the largest source of errors and instability in any system.
In addition to what @Will Hartung mentioned, I would also say that it depends on what you are going to do with your system. If you have mostly client-server type applications, where you have few servers/services and they tend to be completely independent, then it will probably be easier to implement service contracts via REST over HTTP.
If, on the other hand, your entire system is doing bi-directional communication, or if there are many inter-process calls (and particularly if every participant in the system is going to be both a client and a server at some point), then messaging is your best bet. Of the messaging options, I find AMQP/RabbitMQ the most feature-rich and the easiest to use. It offers you a true asynchronous platform to code against.
The key benefit of using messaging is that you can have queues for each type of message, so as your system expands and changes, the queues/messages can stay the same while the service that handles them changes underneath. It promotes separation of layers.
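To make that concrete, here is a rough sketch of the queue-per-message-type idea using Node's amqplib client against RabbitMQ (equivalent clients exist for .NET and Ruby); the queue names and payload are made up for the example:

```javascript
// Sketch: one queue per message type, via the amqplib RabbitMQ client.
// Queue names and payloads are illustrative only.
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();

  // Each message type gets its own durable queue; the consumer behind a
  // queue can later be swapped out without touching the producers.
  await ch.assertQueue('user.signup', { durable: true });
  await ch.assertQueue('order.placed', { durable: true });

  // A producer somewhere in the system publishes an event...
  ch.sendToQueue('order.placed', Buffer.from(JSON.stringify({ orderId: 42 })));

  // ...and whichever service owns this queue consumes it.
  ch.consume('order.placed', (msg) => {
    console.log('handling', msg.content.toString());
    ch.ack(msg);
  });
}

main().catch(console.error);
```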
Finally, and this is a huge thing in my opinion, the proper use of messaging promotes small, independent pieces of code. These are both more testable and more maintainable, and in general it simplifies your enterprise architecture. If you attempt to handle too many services from HTTP endpoints, you will eventually (over the course of a year or two) end up with either (1) way too many endpoints to keep track of or (2) an unmaintainable mess of spaghetti code.
My company started out with using a message-based framework, and it has worked very well for us. The RabbitMQ server has easily been the most reliable component. Feel free to ask if you have any more questions about messaging or SOA.
We're all familiar with popular protocols like IMAP and POP, used for email messaging.
I have a plan for a new protocol, but I'm not sure how to go about implementing it.
Is the protocol a collection of C source code, for example, that accepts and sends data through ports? Or is a protocol just a thorough description of how data should be sent, which clients then implement?
I'm lost as to where to start here, and I'm not very familiar with how the protocol system works.
Edit:
Also, if I write a protocol and it isn't made official by the standards group, can people/clients still implement it?
The official way is to write an RFC - a Request for Comments. People will respond to that (that's why it's an RFC) and probably try to implement your protocol.
As soon as two independent implementations exist that completely support the protocol, it's a new standard.
Of course, people aren't going to implement a new protocol for someone just for fun. So you should first find a group who is interested in listening to you. Maybe there already is a protocol which does what you want (or can easily be extended).
But you probably don't want to invent a new standard. Standards are a lot of work and - for some - overrated.
So you should describe how it works and create a library that can read and write the protocol, so developers can use it even though it's not an official standard.
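As a sketch of what such a library might look like, here is a toy encode/decode pair for a completely made-up line-based format (nothing here comes from any real standard; it only illustrates that the library is the protocol's read/write layer):

```javascript
// Toy read/write layer for a made-up line-based protocol:
//   "VERB key=value key=value\r\n"
function encode(verb, fields) {
  const parts = Object.entries(fields).map(
    ([k, v]) => `${k}=${encodeURIComponent(v)}`
  );
  return `${verb} ${parts.join(' ')}\r\n`;
}

function decode(line) {
  const [verb, ...parts] = line.trim().split(' ');
  const fields = {};
  for (const part of parts) {
    const [k, v] = part.split('=');
    fields[k] = decodeURIComponent(v);
  }
  return { verb, fields };
}

// decode(encode('SEND', { to: 'alice', body: 'hi' }))
//   -> { verb: 'SEND', fields: { to: 'alice', body: 'hi' } }

module.exports = { encode, decode };
```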
As you are interested in the Replace Email section of the Paul Graham article you linked, IMHO you will need to develop both a protocol definition and an example implementation. The protocol definition does not need to be published as an internet protocol standard in order to be useful.
You will need an implementation so that you can test, refine, and improve the ideas. It is extremely unlikely the protocol will be right on the first attempt, and you'll need something to support the initial users.
You don't need a protocol definition to implement an improved email, but you will need one if you expect others to work with you and adopt it, though it very much depends on your 'business model'. I strongly recommend you have a protocol definition from the start, even if only to keep yourself sane when you try to produce the second implementation.
I recommend having a look at some examples of sneaky approaches to protocols and implementation. My favourite is described in the Viewpoints Research 2008 Progress report on a super-compact approach to TCP/IP.
They did not follow the traditional approach to developing the implementation of a protocol (the protocol stack). Instead they wrote code which parsed the human-readable TCP/IP protocol specification and generated the code of a TCP/IP stack from that protocol document. The usual TCP/IP stack is about 40,000 lines of code or more. Their program, which read the protocol specification and generated the code for a TCP/IP stack 'automatically', was only 160 lines of code. They use extremely powerful programming tools.
If you had an approach like that, you could keep the protocol implementation synchronised with the specification, and potentially make it straightforward for others to adopt your protocol.
HTH
You are confusing a protocol standard with the implementation.
The two are unrelated.
A protocol is described at a high level, but with enough information for someone to understand how it should be implemented.
The idea is that someone reading the document can understand how/what to implement in any language of their preference.
To give an example: the SIP protocol RFC describes the various flows and also the various messages and how they are supposed to be processed, i.e. the semantics are well defined.
You can implement a SIP UA or server in C++ or Java. This is irrelevant to the SIP protocol itself.
For this you don't need to provide any source code (you could though if you think it helps clarify some obscurity of the description).
The most important part is that your protocol is actually reviewed by stakeholders i.e. people that expect it to solve their problems.
This part is the most important, not only because it could help solve problems in your protocol, but because those stakeholders can actually verify that the concept is solid, i.e. that it can be technically implemented.
The only case where one might specify or imply something concrete is if, for example, the protocol describes something demanding specific constraints, e.g. a hard real-time constraint, which could serve as a "hint" about which implementations/languages to avoid.
Also, if I write a protocol and it isn't made official by the
standards group, can people/clients still implement it?
Strange question. What do you mean? How will someone know your protocol exists?
If it is official, they can get it from the standards group and implement it.
Otherwise it is obvious that you have some sort of "proprietary" protocol (which is not uncommon e.g. a company can have an internal protocol for its own software) and people have to get the spec from you.
I want to build a decentralized, reddit-like system using P2P. Basically, I want to retain the basic capabilities of reddit, but make it decentralized, to make it more robust and immune to censorship. This will also allow people to develop different clients to match the way they want to browse it.
Could you recommend good p2p libraries to base my work on? They should be open-source, cross-platform, robust and easy to use. I don't care much about the language, I can adapt.
Disclaimer: warning, self-promotion here !!!
Have you considered JXTA's latest release? It is probably sufficient for what you want to do. Else, we are working on a new P2P framework called Chaupal, but it is not operational yet.
EDIT
There is also what I call the quick-and-dirty UDP solution (which is not so dirty after all, I should call it minimal).
Just implement one server with a public address and start listening for UDP.
Peers located behind NATs contact the server, which can read from the received datagrams how their private IP addresses have been translated into public addresses.
You send that information back to the peer who can forward it to other peers. The server can also help exchanging this information between peers.
Then peers can communicate directly (one-to-one) by sending datagrams to these translated addresses.
Simple and easy to implement, but it does not cover lost datagrams, replays, out-of-order delivery, etc. (i.e., the typical stuff that TCP solves for you at the IP stack level).
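A sketch of the rendezvous-server half of that idea, using Node's dgram module (the port and reply format are invented for the example; retries, loss handling, and the peer-to-peer exchange itself are left out):

```javascript
// Rendezvous sketch: reply to each peer with the public address/port its
// NAT assigned, so the peer can share it with other peers.
const dgram = require('dgram');

const server = dgram.createSocket('udp4');

server.on('message', (msg, rinfo) => {
  // rinfo is the source address as seen *after* NAT translation.
  const reply = JSON.stringify({ yourAddress: rinfo.address, yourPort: rinfo.port });
  server.send(reply, rinfo.port, rinfo.address);
});

server.bind(3478); // arbitrary choice of port for the example
```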
I haven't had a chance to use it, but Telehash seems to have been made for this kind of application. Peer2Peer apps have a particular challenge dealing with the restrictions of firewalls... since Telehash is based on UDP, it's well suited for hole-punching through firewalls.
EDIT for static_rtti's comment:
If code velocity is a requirement, libjingle has a lot of effort going into it, but it is primarily geared towards XMPP. You can port off parts of the ICE code and at least get hole punching. See the libjingle architecture overview for details about their implementation.
Check out CouchDB. It's a decentralized web app platform that uses an HTTP API. People have used it to create "CouchApps" which are decentralized CouchDB-based applications that can spread in a viral nature to other CouchDB servers. All you need to know to write CouchApps is Javascript and learn the CouchDB API. You can read this free online book to learn more: http://guide.couchdb.org
The secret sauce to CouchDB is a Master-to-Master replication protocol that lets information spread like a virus. When I attended the first CouchConf, they demonstrated how efficient this is by throwing a "Couch Party" (which is where you have a room full of people replicating to the person next to them simulating an ad hoc network).
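If you want to see what kicking off that replication looks like, it's a single HTTP call to CouchDB's _replicate endpoint. A sketch using the fetch API (hostnames and database names are made up, and authentication is omitted):

```javascript
// Ask CouchDB node A to continuously replicate the 'posts' database to node B.
const body = {
  source: 'http://couch-a.example.com:5984/posts',
  target: 'http://couch-b.example.com:5984/posts',
  continuous: true, // keep pushing new documents as they arrive
};

fetch('http://couch-a.example.com:5984/_replicate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body),
})
  .then((res) => res.json())
  .then(console.log);
```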
Also, all the code that makes a CouchApp work is public by default in special entities known as Design Documents.
P.S. I've been thinking of doing a similar project, but I don't have a lot of time to devote to it at the moment. GOD SPEED MY BOY!
What is the correct way (or best) way to implement Comet, HTTP Push, or Reverse AJAX?
What .NET implementations would you recommend?
I have heard about WebSync and PokeIn; both are paid implementations. I have used PokeIn and it's pretty straightforward. If you are looking to code your own Comet implementation, I can only say that it's a complex task, because you need to modify the natural behaviour of IIS. It's a hacky way to get around the limitations of the HTTP protocol, and you need to know really well what you are doing so you don't end up breaking things =).
It's also known as long-lived requests. This is also by far the most complex method to implement. Basically, a request is made by the client, and the server very slowly responds, which causes the connection to be maintained. Periodically, when the server has something to push, it'll "burst" send the information, so to speak. This approach gives you real-time push, which is great. But it has a serious downside: holding connections open like that isn't how the underlying protocols are meant to work, and most servers aren't terribly happy about it. If your traffic gets too great, you'll chew up threads on the server and wind up bringing your site down.
ref: http://www.coderanch.com/t/121668/HTML-JavaScript/does-Reverse-Ajax-Works
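To illustrate the mechanics described above (independent of IIS/.NET), here is a minimal long-polling sketch in Node: /poll parks the request until something is published, then the server "bursts" the response. The paths and payloads are made up:

```javascript
// Minimal long-polling sketch with Node's http module.
const http = require('http');

const waiting = []; // responses held open until there is something to push

http.createServer((req, res) => {
  if (req.url === '/poll') {
    // Hold the connection open instead of answering immediately.
    waiting.push(res);
  } else if (req.url === '/publish') {
    // "Burst" all pending responses when an event occurs.
    while (waiting.length) {
      waiting.pop().end(JSON.stringify({ event: 'something happened' }));
    }
    res.end('ok');
  } else {
    res.end('try /poll or /publish');
  }
}).listen(8080);
```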
JOBG is correct re: the complexities; it's probably not a task you want to undertake lightly. I'm one of the authors of WebSync, and I can attest that it's a difficult task.
There are a ton of examples in the download, and the community edition is free.
Microsoft is developing HTTP push in SignalR
https://github.com/SignalR/SignalR
My first question so go easy on me :)
I've been developing for years and have written WAY too many apps (mostly web apps) using web services - I'm happy with SOAP/WSDL/etc... I also used to write TCP/IP client-server apps back in the day using good old winsock.
I'm a bit bored and looking for a new project to expand my skills, so I decided to have a go at either a game or some sort of server monitoring and remote control application.
I haven't decided which and the answer to this question will hopefully inform my decision.
What I'd like is some advice as to which methods I should be looking at to handle the communication.
Let's assume I'm doing the game for the moment - I want 2-way communication with low latency and the ability to handle as many simultaneous connections as possible.
I've considered web services but it seems like a lot of overhead - especially as I'd need the client to expose one as well.
TCP/IP would do the job, but it seems a little low-level and I lose a lot of the advantages, like service definitions. Presumably I'd need to formulate a new protocol for the communications etc... I'm also unsure how I'd have one client use multiple channels for concurrent information - e.g. a chat and location updates. I could attempt to multiplex this in some way, but my initial ideas re: the queuing seem quite messy.
.NET Remoting - I've not really touched this much at all. It seems to have low overhead and more flexibility than web services, but I don't know enough to evaluate it properly.
I'd really appreciate any input you can provide (and a link to a tutorial would be fantastic)
Thanks in advance for your help
EDIT: I've had an answer which points me at a UDP library. Is UDP appropriate for this? For location information/similar which requires no history, I can see how this is advantageous but for a chat, a lost packet could be an issue - or do I manually send back an acknowledgment of receipt? If so, aren't I duplicating TCP/IP functionality for limited advantage?
Apologies if this is an incorrect way to expand on the question - guidance for that appreciated too :)
If you're up to date on .NET 3.5 SP1, then you should use WCF. You say you don't want to use web services, and I assume from that you mean you don't want to use SOAP over HTTP. WCF does a lot more than SOAP over HTTP. In particular, it can do binary over TCP/IP using the same infrastructure. It also has support for peer-to-peer.
Take a look at something like Lidgren and see how that works. It's written in C#, so it can be used from VB.NET.
Lidgren is a socket wrapper. I've used it in a few small-scale multiplayer games, mainly by using a header stating the packet type (i.e. the first byte represents the packet type).
Lidgren