We have been evaluating gRPC with FlatBuffers on an embedded Linux system; the resulting executable is ~6 MB for a very basic example using protobuf. We are looking to strip off as much as possible so we can move to platforms with even fewer resources. All we need is a direct channel over a "secure" serial transport: USB CDC, direct UDP/TCP, or similar.
Is there a way to achieve this with standard gRPC configuration?
Is a custom channel required for this setup? Implementing a custom channel seems rather complicated (very high cyclomatic complexity in the included channels, even the in-memory one).
Is there any other guidance or examples/implementations for a simple custom channel implementation?
I want to fetch the data of a stock. Since the data changes very fast, is there any way to pull the data like 50-100 times a second from trading websites?
And can we implement that using a Raspberry Pi 4 8 GB model?
A RasPi 4 should be more than adequate for this task. Both the Ethernet and WiFi hardware are capable of connections at these speeds (unless you're running a bunch of other stuff on it). Consider where your bottlenecks may be, likely your ISP or other network traffic. Consider avoiding WiFi in favor of Cat5e or Cat6. Consider hanging this device off your router (edge) to keep LAN traffic lower, and consider QoS settings if you think this traffic may compete with other LAN traffic.
This appears to be a general question with no specific platform in mind. For stocks, there are lots of platforms to choose from.
APIs for trading platforms often include a method to open a stream. Instead of a full TCP conversation for each price check, a stream tells the server to just keep on sending data. There are timeout mechanisms of course, but it is good to close that stream gracefully (it's polite, since you're consuming server resources at a different scale; I've seen some financial APIs monitor and throttle stream subscribers who leave sessions open).
For some APIs/languages you can find solid classes already built on GitHub. Although if you are simply pulling and reading a stream, the code snippets in the platform's API docs should be enough to get you going.
Be sure to find out what other overhead may be involved. For example, if an account or API key is needed to open a stream, then either a session must be opened first or the credentials must be passed when the stream is opened. The API docs will say. If you're new to this sort of thing, just be a detective and try to infer what is needed. API docs usually try to be precise and technically correct with the absolute minimum word count.
Simply checking the stream should be easy. Depending on how that stream can be handled by your code/script, it may be harder to perform logic on the stream while it is being updated. That's usually a threading issue or a variable-scope issue, depending on the script/code. For what you're doing I would consider Python or PowerShell, depending on your skill set and other design parameters.
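If it helps, here is a minimal sketch of that pattern in plain Java (the tick generator below is just a stand-in for whatever stream your platform's API actually gives you): a reader thread blocks on the stream and hands each message to the main loop through a thread-safe queue, so the two never share mutable state directly.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PriceFeed {
    // Thread-safe handoff between the stream-reader thread and the main loop.
    private static final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // Reader thread: in a real client this would block on the broker's
        // stream (WebSocket, chunked HTTP response, etc.) and enqueue each tick.
        Thread reader = new Thread(() -> {
            for (int i = 0; ; i++) {
                try {
                    updates.put("TICK " + i);   // stand-in for a real quote message
                    Thread.sleep(20);           // ~50 messages per second
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        reader.setDaemon(true);
        reader.start();

        // Main loop: consume ticks from the queue instead of touching the
        // reader's variables, which avoids the threading/scope problems above.
        while (true) {
            String tick = updates.take();       // blocks until the next update
            System.out.println("got " + tick);
        }
    }
}
```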
The short story: a friend and I are making a multiplayer action game, and we thought PlayN would be great for this. Android, Java, and HTML5 support are the most important to us, but we don't want to cut out the other platforms if it isn't necessary.
The problem comes now that we want to implement the networking part. We have implemented our own capable server and thought we would use long-polling HTTP requests for communication. We estimate that we need one thread running the communication, passing messages through two thread-safe queues: one queue for incoming messages that the update() loop can consume, and one queue for outgoing messages to the server.
Is there any way to implement this without losing platform support? Or any other idea how we can implement this?
PlayN currently has no cross-platform support for persistent socket connections to a server. You will need to implement your own cross-platform abstraction. You can use WebSockets for the HTML5 backend, and you can look for a WebSockets library for Android and whatever other platforms you intend to support.
You can also use the Nexus library, which is designed to work with PlayN and provide client/server communication. However, it raises the level of abstraction substantially beyond passing simple messages between the client and server, so it might be easier to just implement your own simple WebSockets based communication than to learn how Nexus works.
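For what it's worth, here is a rough sketch (plain Java, with invented names; nothing here is part of the PlayN API) of the two-queue design described in the question, with the platform-specific socket code hidden behind a small interface so the game loop only ever touches the queues:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

// Platform-specific backends (WebSocket on HTML5, a socket library on Android, ...)
// would implement this and run their own I/O thread.
interface Connection {
    void connect(String url);
    void send(String message);
    void setListener(MessageListener listener);

    interface MessageListener {
        void onMessage(String message);
    }
}

// Shared, platform-independent part: the game loop only sees the two queues.
class NetworkQueues implements Connection.MessageListener {
    final Queue<String> incoming = new ConcurrentLinkedQueue<>();
    final Queue<String> outgoing = new ConcurrentLinkedQueue<>();
    private final Connection connection;

    NetworkQueues(Connection connection) {
        this.connection = connection;
        connection.setListener(this);
    }

    @Override
    public void onMessage(String message) {   // called from the I/O thread
        incoming.add(message);
    }

    // Called once per frame from update(): drain what has arrived, flush sends.
    void pump(Consumer<String> handler) {
        String in;
        while ((in = incoming.poll()) != null) {
            handler.accept(in);
        }
        String out;
        while ((out = outgoing.poll()) != null) {
            connection.send(out);
        }
    }
}
```

Each backend would supply its own Connection implementation, so the game logic stays identical across platforms.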
Are there any libraries which put a reliability layer on top of UDP broadcast?
I need to broadcast large amounts of data to a large number of machines as quickly as possible, and generally it seems like such a problem must have already been solved many times over, but I wasn't able to find anything except for the Spread toolkit, which has a somewhat viral license (you have to mention it in all materials advertising the end product, which I'm not sure our customer will be willing to do).
I was already going to write such a thing myself (because it would be extremely fun to do!) but decided to ask first.
I also looked at UDT (http://udt.sourceforge.net), but it does not seem to provide a broadcast operation.
PS: I'm looking for something as lightweight as a library, with no infrastructure changes.
How about UDP multicast? Have a look at the PGM protocol, for which there are several commercial and open-source implementations.
Disclaimer: I'm the author of OpenPGM, an open source implementation of said protocol.
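For context, plain (unreliable) UDP multicast is already in the JDK; a reliability layer such as PGM sits on top of something like the receiver below (the group address and port are arbitrary example values):

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.charset.StandardCharsets;

public class MulticastReceiver {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3"); // example group address
        try (MulticastSocket socket = new MulticastSocket(4446)) {
            socket.joinGroup(group);   // start receiving datagrams sent to the group
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                String msg = new String(packet.getData(), 0, packet.getLength(),
                        StandardCharsets.UTF_8);
                System.out.println("received: " + msg);
                // A reliability layer would track sequence numbers here and
                // request or repair missing packets (which is what PGM does).
            }
        }
    }
}
```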
Though some research has been done on reliable UDP multicasting, I haven't yet used anything like that. You should take into consideration that this might not be as trivial as it first sounds.
If you don't have a list of nodes in the target network, you have no idea when and to whom to resend, even if the active nodes receiving your messages can acknowledge them. Sending to a large number of nodes and expecting acks from all of them might also cause congestion problems in the network.
I'd suggest rethinking the network architecture of your application, e.g. using some kind of centralized solution where you submit updates to a server, and it sends the message to all connected clients. Or, if the original sender node's address is known a priori, just let clients connect to it and let the sender push updates over these connections.
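A bare-bones sketch of that centralized idea in Java (blocking I/O, one thread per client, no error handling; purely illustrative): whatever one client sends is pushed to every connected client.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

public class FanOutServer {
    // Writers for all connected clients; safe to iterate while clients come and go.
    private static final Set<PrintWriter> clients = new CopyOnWriteArraySet<>();

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                Socket socket = server.accept();
                new Thread(() -> handle(socket)).start();   // one thread per client
            }
        }
    }

    private static void handle(Socket socket) {
        PrintWriter out = null;
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()))) {
            out = new PrintWriter(socket.getOutputStream(), true);
            clients.add(out);
            String line;
            while ((line = in.readLine()) != null) {
                for (PrintWriter client : clients) {
                    client.println(line);   // push the update to everyone
                }
            }
        } catch (Exception e) {
            // client dropped; fall through to cleanup
        } finally {
            if (out != null) {
                clients.remove(out);
            }
            try { socket.close(); } catch (Exception ignored) { }
        }
    }
}
```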
Have a look around the IETF site for RFCs on Reliable Multicast. There is an entire working group on this. Several protocols have been developed for different purposes. Also have a look around Oracle/Sun for the Java Reliable Multicast Service project (JRMS). It was a research project of Sun, never supported, but it did contain Java bindings for the TRAM and LRMS protocols.
I know that a protocol is a set of rules that governs communication between two computers on a network, but how are those rules implemented on the computer? Is a protocol basically a piece of code or, in other words, software?
Protocols are generally built upon each other. At the risk of sounding pedantic, here's an example of a protocol and where/how it's implemented:
Application Protocol - the way a particular application talks to another instance of itself or a corresponding server; this is implemented in the application code or a shared library
TCP (or UDP, or another layer) - the way that information is sent at the binary level and split up into usable chunks, then reassembled at the destination; this is usually implemented as part of the operating system, but it is still software code
IP - the way that information (having already been split or truncated by something like TCP or UDP) makes its way from one place to another by routing over one or more "hops"; this is always software code, but is sometimes implemented in the OS and sometimes implemented in the network device (your LAN card, for example)
base-T (Ethernet), token ring, etc. - here we are physically getting into how the hardware devices talk to one another, i.e. which wire corresponds to a particular type of signal; this is always implemented in hardware
electricity/photons - the laws that govern (or at least define) how electrons (or photons) flow over a conductive material or over the air; this is usually implemented in hardware ;)
In a sense, these are all "protocols" (a set of rules or expected behaviors that allow communication to take place), and they're built on one another.
Bear in mind that (aside from electricity) this is not an exhaustive list of the sort of protocols that exist at any of these layers!
Edit: Thanks to dmckee for pointing out that electricity isn't the only physical process used in networking ;)
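To make the split concrete, here is a toy client where only the top layer lives in application code. The one-line "HELLO <name>" protocol is invented for the example; everything from TCP down is handled by the OS and hardware behind the Socket call.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ToyProtocolClient {
    public static void main(String[] args) throws Exception {
        // TCP, IP, Ethernet, etc. are handled by the OS and hardware behind this call.
        try (Socket socket = new Socket("localhost", 7000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            // The application protocol is just the rule we invented here:
            // send "HELLO <name>", expect a single text line back.
            out.println("HELLO alice");
            System.out.println("server said: " + in.readLine());
        }
    }
}
```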
Networking protocols are not pieces of code or software; they are only a set of rules. When software uses a specific networking protocol, that software is known as an implementation. There can be many different software implementations of the same protocol (e.g. Windows and UNIX have different TCP/IP implementations). It is possible to understand networking protocols without any knowledge of programming.
EDIT: How are they implemented? Here's a paper on taking an abstract specification of a protocol and implementing it in C. You'll see that less strict protocols leave out certain details that programmers have to guess at, which makes some implementations incompatible with others.
A network protocol is basically like a spoken language. It is implemented by code that sends and receives specially prepared messages over the network/internet, much like the vocal cords you need to speak (the network and hardware) and the brain needed to actually understand what someone said (the protocol stack/software).
Sometimes protocols are implemented directly in hardware for speed reasons (like the Ethernet protocol for LANs), but software/code is always required to do something useful with a protocol.
This might be interesting for you:
The OSI Model
Protocol (Computing)
Software implements the rules defined in the protocol; some protocols are formally defined and some are informal.
A protocol is a set of rules governing the communication between two entities.
In the computer/programming context, a protocol is a set of rules governing the communication between two programs.
In the computer network context, a protocol is a set of rules governing the communication between two programs, well, over a network.
In computers, in the end, everything is embodied in code...
Protocols are basically sets of rules. The way to implement one is to first draw a state machine diagram, since it captures what the current state is, how the state changes based on input, and what output actions are performed.
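As a small illustration of that approach (the three-state handshake below is invented, not a real protocol), the diagram translates almost directly into a transition function from (current state, input) to (next state, output):

```java
public class HandshakeStateMachine {
    enum State { CLOSED, WAITING_FOR_ACK, ESTABLISHED }

    private State state = State.CLOSED;

    // One transition per (state, input) pair, straight from the diagram.
    public void onInput(String input) {
        switch (state) {
            case CLOSED:
                if (input.equals("CONNECT")) {
                    System.out.println("output: send SYN");
                    state = State.WAITING_FOR_ACK;
                }
                break;
            case WAITING_FOR_ACK:
                if (input.equals("ACK")) {
                    System.out.println("output: connection established");
                    state = State.ESTABLISHED;
                }
                break;
            case ESTABLISHED:
                if (input.equals("CLOSE")) {
                    System.out.println("output: send FIN");
                    state = State.CLOSED;
                }
                break;
        }
    }

    public static void main(String[] args) {
        HandshakeStateMachine machine = new HandshakeStateMachine();
        machine.onInput("CONNECT");
        machine.onInput("ACK");
        machine.onInput("CLOSE");
    }
}
```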
Your answer is a very short one:
BY READING THE RFC.
The main networking problem is how to share data between computers. Each networking protocol tries to solve a little part of that major problem. Some of the protocols are implemented as software, some others as hardware. In short, protocols, like algorithms, can be implemented in many programming languages.
Back to TCP: it is implemented by the operating system.
I'm developing a multi-player game and I know nothing about how to connect from one client to another via a server. Where do I start? Are there any whizzy open source projects which provide the communication framework into which I can drop my message data, or do I have to write a load of complicated multi-threaded sockety code? Does the picture change at all if the clients are running on phones?
I am language agnostic, although ideally I would have a Flash or Qt front end and a Java server, but that may be a bit greedy.
I have spent a few hours googling, but the whole topic is new to me and I'm a bit lost. I'd appreciate help of any kind - including how to tag this question.
If latency isn't a huge issue, you could just implement a few web services to do message passing. This would not be as slow as you might think, and it is easy to implement across languages. The downside is that the client has to poll the server to get updates, so you could be looking at a few hundred ms to get from one client to another.
You can also use the built-in Flex messaging interface. There are provisions there to allow client-to-client interactions.
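A rough sketch of that polling approach using the JDK's built-in HTTP client; the /messages endpoint and the poll interval here are made up for the example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/messages?since=latest"))
                .GET()
                .build();

        while (true) {
            // Each poll is a full request/response round trip, which is where
            // the extra few hundred milliseconds of latency come from.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("messages: " + response.body());
            Thread.sleep(250);   // poll a few times per second
        }
    }
}
```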
Typically game engines send UDP packets because of latency. The fact is that TCP is just not fast enough and reliability is less of a concern than speed is.
Web services would compound the latency issues inherent in TCP due to additional overhead. Further, they would eat up memory depending on the number of expected players. Finally, they have a large amount of payload overhead that you just don't need (XML, anyone?).
There are several ways to go about this. One way is centralized messaging (client/server). This means that you would have a Java server listening for UDP packets from the clients. It would then rebroadcast them to any of the relevant users.
A second way is decentralized (peer to peer). A client registers with the server to state what game / world it's in. From that it gets a list of other clients in that world. The server maintains that list and notifies the other clients of people who join / drop out.
From that point forward, clients broadcast UDP packets directly to the other users.
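A stripped-down sketch of the centralized variant in Java (no game/world filtering, no client expiry, single-threaded; everything simplified for illustration): the server remembers every address it has heard from and relays each datagram to all the others.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketAddress;
import java.util.HashSet;
import java.util.Set;

public class UdpRelayServer {
    public static void main(String[] args) throws Exception {
        Set<SocketAddress> clients = new HashSet<>();
        try (DatagramSocket socket = new DatagramSocket(9000)) {
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                SocketAddress sender = packet.getSocketAddress();
                clients.add(sender);   // register (or refresh) the sender

                // Relay the payload to every other known client.
                for (SocketAddress client : clients) {
                    if (!client.equals(sender)) {
                        socket.send(new DatagramPacket(
                                packet.getData(), packet.getLength(), client));
                    }
                }
            }
        }
    }
}
```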
If you are looking for a high-performance communication framework, take a look at the ACE C++ framework (it has Java bindings).
Official web-site is: http://www.cs.wustl.edu/~schmidt/ACE-overview.html
You could also look into Flash Media Interactive Server, or if you want a Java implementation, Wowza or Red5. Those use AMF and provide native functionality for SharedObjects, including syncing of the SharedObjects among connected clients.
Those aren't peer-to-peer though (yet; it's coming soon, I hear). They use centralized messaging managed by the server.
Good luck