Simple question: is it possible to send a raw byte-array packet with Kryonet? The client doesn't use Kryonet and will just read the bytes.
Thanks
Kryonet is based on simple TCP communication via NIO along with built-in Kryo serialization. Kryonet without Kryo serialization is just a TCP client/server, nothing more.
Or, if you want a simple solution, you can just create a wrapper entity with a single byte[] attribute and use a custom serializer to serialize that byte[]. It's the fastest way to get a proof of concept working.
I am a beginner with gRPC, so this may be a naive question.
I am creating a simple chat application using grpc-go, and what I want to achieve is something like this:
Each of clientA, clientB, and clientC connects to serverA and holds a bidirectional streaming connection.
For example, if clientD connects to serverA and I want to notify clientA, B, and C of that, what implementation approaches are there?
How to broadcast in gRPC from server to client?
gRPC: How can I distinguish bi-streaming clients at server side?
I have read these posts, but I would like to know the best practices.
For example, if I keep a list of clients and notify each of them, how should that be coded?
I am writing this down because I solved it myself.
https://github.com/rodaine/grpc-chat
This project was very helpful.
type server struct {
	ClientStreams map[string]chan chat.StreamResponse
}
The idea is a map that holds, per client, the channel feeding that client's stream; a map is used here, but a slice or sync.Map could be used instead.
By registering each client's information in this structure when its stream is opened, I was able to support broadcasting and track the number of connections.
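For illustration, here is a minimal sketch of how such a map can be used to broadcast. It extends the struct above with a mutex, and the method name is my own; chat.StreamResponse comes from the generated chat package, and sync and log are assumed to be imported:
type server struct {
	mu            sync.RWMutex
	ClientStreams map[string]chan chat.StreamResponse
}

// broadcast pushes one message onto every connected client's channel.
// Each stream handler goroutine drains its own channel and calls Send
// on its gRPC stream, so one slow client never blocks the others.
func (s *server) broadcast(res chat.StreamResponse) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	for name, ch := range s.ClientStreams {
		select {
		case ch <- res:
		default:
			log.Printf("dropping message for slow client %s", name)
		}
	}
}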
What is the difference between multistream (yamux, multistream-select, ..) and multiplex (mplex)?
I'd like to use one TCP connection for RPC, HTTP, etc. (one client is behind a firewall), like this:
conn = tcp.connect("server.com:1111")
conn1, conn2 = conn.split()
stream1 = RPC(conn1)
stream2 = WebSocket(conn2)
..
// received packets tagged for conn1 are forwarded to stream1
// received packets tagged for conn2 are forwarded to stream2
// writing to stream1 tags the packets for conn1
// writing to stream2 tags the packets for conn2
Which one suits this case?
The short answer: mplex and yamux are both Stream Multiplexers (aka stream muxers), and they're responsible for interleaving multiple "logical streams" over a single "raw" connection (e.g. TCP). Multistream is used to identify what kind of protocol should be used when sending / receiving data over the stream, and multistream-select lets peers negotiate which protocols are supported by each end and hopefully agree on one to use.
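For the use case in the question, where two logical streams should share one TCP connection outside of libp2p, a standalone stream muxer such as hashicorp/yamux can be used directly. A rough sketch (the address and the per-stream handling are placeholders):
package main

import (
	"log"
	"net"

	"github.com/hashicorp/yamux"
)

func dialSide() {
	// One raw TCP connection...
	conn, err := net.Dial("tcp", "server.com:1111")
	if err != nil {
		log.Fatal(err)
	}
	// ...multiplexed into independent logical streams.
	session, err := yamux.Client(conn, nil)
	if err != nil {
		log.Fatal(err)
	}
	rpcStream, _ := session.OpenStream() // hand this to your RPC layer
	wsStream, _ := session.OpenStream()  // hand this to your WebSocket layer
	_, _ = rpcStream, wsStream
}

func listenSide(ln net.Listener) {
	conn, _ := ln.Accept()
	session, _ := yamux.Server(conn, nil)
	for {
		// Each OpenStream on the dialer shows up as one AcceptStream here.
		stream, err := session.AcceptStream()
		if err != nil {
			return
		}
		go func(s *yamux.Stream) {
			defer s.Close()
			// route the stream to the right consumer here
		}(stream)
	}
}
Within libp2p you normally would not drive the muxer yourself like this; the switch described below does it for you.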
Long answer:
Stream muxing is an interface with several implementations. The "baseline" stream muxer is called mplex - a libp2p-specific protocol with implementations in JavaScript, Go and Rust.
Stream multiplexers are "pluggable", meaning that you add support for them by pulling in a module and configuring your libp2p app to use them. A given libp2p application can support several multiplexers at the same time, so for example, you might use yamux as the default but also support mplex to communicate with peers that don't support yamux.
While having this kind of flexibility is great, it also means that we need a way to figure out what stream muxer to use for any specific connection. This is where multistream and multistream-select come in.
Multistream (despite the name) is not directly related to stream multiplexing. Instead, it acts as a "header" for a stream of binary data that contextualizes the stream with a protocol id. The closely-related multistream-select protocol uses multistream protocol ids to negotiate what protocols to use for the "next phase" of communication.
So, to agree upon what stream muxer to use, we use multistream-select.
Here's an example of the multistream-select back-and-forth:
/multistream/1.0.0 <- dialer says they'd like to use multistream 1.0.0
/multistream/1.0.0 -> listener echoes back to indicate agreement
/secio/1.0.0 <- dialer wants to use secio 1.0.0 for encryption
/secio/1.0.0 -> listener agrees
* secio handshake omitted. what follows is encrypted via secio: *
/mplex/6.7.0 <- dialer would like to use mplex 6.7.0 for stream multiplexing
/mplex/6.7.0 -> listener agrees
This is the simple case where both sides agree upon everything - if e.g. the listener didn't support /mplex/6.7.0, they could respond with na (not available), and the dialer could either try another protocol, ask for a list of supported protocols by sending ls, or give up.
In the example above, both sides agreed on mplex, so future communication over the open connection will be subject to the semantics of mplex.
It's important to note that the details above will be mostly "invisible" to you when opening individual connections in libp2p, since it's rare to use the multistream and stream muxing libraries directly.
Instead, a libp2p component called the "switch" (also called the "swarm" by some implementations) manages the dialing / listening state for the application. The switch handles the multistream negotiation process and "hides" the details of which specific stream muxer is in use from the rest of the libp2p stack.
As a libp2p developer, you generally dial other peers using the switch interface, which will give you a stream to read from and write to. Under the hood, the switch will find the appropriate transport (e.g. TCP / websockets) and use multistream-select to negotiate encryption & stream multiplexing. If you already have an open connection to the remote peer, the switch will just use the existing connection and open another muxed stream over it, instead of starting from scratch.
The same goes for listening for connections - you give the switch a protocol id and a stream handler function, and it will handle the muxing & negotiation process for you.
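For example, in go-libp2p the host exposes this switch behaviour. A rough sketch (the protocol id "/myapp/1.0.0" is a placeholder, and import paths and the libp2p.New signature differ between go-libp2p releases):
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
)

func chatWith(ctx context.Context, remote peer.ID) error {
	// The host wraps the switch/swarm: transport selection, encryption and
	// stream muxing are negotiated via multistream-select under the hood.
	h, err := libp2p.New()
	if err != nil {
		return err
	}
	// Listening side: register a handler for a protocol id.
	h.SetStreamHandler("/myapp/1.0.0", func(s network.Stream) {
		defer s.Close()
		fmt.Println("new stream from", s.Conn().RemotePeer())
	})
	// Dialing side: ask for a muxed stream; an already-open connection
	// to the peer is reused instead of dialing from scratch.
	s, err := h.NewStream(ctx, remote, "/myapp/1.0.0")
	if err != nil {
		return err
	}
	defer s.Close()
	_, err = s.Write([]byte("hello"))
	return err
}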
Our documentation is a work-in-progress, but there is some information at https://docs.libp2p.io that might help clarify, especially the concept doc on Transports and the glossary. You can also find links to example code.
Improving the docs for libp2p is my main quest at the moment, so please feel free to file issues at https://github.com/libp2p/docs to let me know what your most important missing pieces are.
In RPC, the stubs at the client and server need to marshal and unmarshal data, then hand it to the lower layer to send over the network. Does TCP/IP also marshal the data into a binary stream? Why does the middleware need to marshal the invocation request?
I'm trying to understand this and am confused because, as far as I know, with IPC we don't marshal the data; we just use send() and recv().
Thank you.
The job of the proxy is to marshal the call from the client by serializing the arguments to bytes so they can be transmitted across the network. The stub in the server deserializes them again and makes the call. Any return values go back the same way.
There is no marshaling in TCP; it just transmits bytes.
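To make the split concrete, here is a minimal sketch (in Go, using JSON purely for illustration; real RPC middleware uses its own wire format): the stub does the marshaling, and TCP just carries the resulting bytes unchanged.
package main

import (
	"encoding/json"
	"net"
)

// AddArgs is the typed argument the caller works with.
type AddArgs struct {
	A, B int
}

// Client-side stub: marshal the typed arguments into bytes and write them.
// TCP never looks inside; it only sees an opaque byte stream.
func callAdd(conn net.Conn, args AddArgs) error {
	payload, err := json.Marshal(args) // marshaling happens here, not in TCP
	if err != nil {
		return err
	}
	_, err = conn.Write(payload)
	return err
}

// Server-side stub: unmarshal the bytes back into typed arguments and
// make the actual call.
func handleAdd(payload []byte) (int, error) {
	var args AddArgs
	if err := json.Unmarshal(payload, &args); err != nil {
		return 0, err
	}
	return args.A + args.B, nil
}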
I read a raw socket tutorial in order to implement my own bridge (I capture packets from one side and send them out the other interface via a raw socket). I come from the Java world, so low-level programming is strange to me; forgive my ignorance.
I am implementing a bridge, so I need to send traffic from interface A to interface B and vice versa.
I created a single raw socket and used it to send data to both servers via two different interfaces. Is there any reason not to use the same socket to send from interface A or B? Is it good practice? Are there potential problems?
It would be great if you could clarify how a socket is not bound to a physical interface underneath; that is why it seems strange to me that the same raw socket is used to send data over different interfaces.
There is no problem using the same raw socket for two different interfaces; the specific interface is chosen when you call send.
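To make that concrete, here is a minimal sketch (in Go, using golang.org/x/sys/unix on Linux; the interface name and frame contents are placeholders, and running it requires root): one AF_PACKET raw socket is used for every send, and the outgoing interface is selected per call via the ifindex in the destination address.
package main

import (
	"log"
	"net"

	"golang.org/x/sys/unix"
)

// htons converts a 16-bit value to network byte order for the packet-socket API.
func htons(v uint16) uint16 { return v<<8 | v>>8 }

func sendOn(fd int, ifname string, frame []byte) error {
	iface, err := net.InterfaceByName(ifname)
	if err != nil {
		return err
	}
	// The interface is chosen here, at send time, not when the socket is created.
	return unix.Sendto(fd, frame, 0, &unix.SockaddrLinklayer{
		Protocol: htons(uint16(unix.ETH_P_ALL)),
		Ifindex:  iface.Index,
	})
}

func main() {
	fd, err := unix.Socket(unix.AF_PACKET, unix.SOCK_RAW, int(htons(uint16(unix.ETH_P_ALL))))
	if err != nil {
		log.Fatal(err)
	}
	frame := []byte{ /* a full Ethernet frame captured from the other side */ }
	_ = sendOn(fd, "eth0", frame) // the same fd can also be used with "eth1"
}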
I am writing an application in C, using libpcap. My program listens for new packets and parses them according to a grammar. The payload is actually XML.
Sometimes one packet is not enough for an XML document, so the XML data is split across separate packets.
I want to add logic to handle these cases, but I don't know in advance whether a packet contains the whole document. How do I know that more data will be sent next? How do I recognize that a new packet contains the rest of the data?
Do I have to use the TH_FIN flag? Could you please explain it to me?
There's nothing in TCP that defines packets; message boundaries are up to the higher layers to define if they need them - TCP is just a stream.
If this is raw XML over a TCP stream, you actually need to parse the XML - you'll know you have a whole XML document when you've received the end of the document element.
If it's XML packaged over HTTP, you might be able to parse out the Content-Length: header, which should contain the length of the body.
Note that reassembling a TCP stream from captured packets is a very hard problem with a lot of corner cases: you'd need to handle retransmission, out-of-sequence TCP segments and more. http://libnids.sourceforge.net/ might help you.
As Anon says, use a higher-level stream library.
But even then you need to know the chunk size before you start handling it, as you will read from the stream in blocks of n bytes.
So you want to first send, in binary, the number of bytes that follow, then send those bytes, and repeat; that way, when you receive the chunks via select/read, you know when you have all of chunk one and can pass it to the processor.
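A minimal sketch of that length-prefix framing, written in Go only to illustrate the idea (the question itself is in C, and the same scheme translates directly):
package main

import (
	"encoding/binary"
	"io"
)

// writeChunk sends a 4-byte big-endian length followed by the payload.
func writeChunk(w io.Writer, payload []byte) error {
	if err := binary.Write(w, binary.BigEndian, uint32(len(payload))); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}

// readChunk reads the length prefix, then exactly that many bytes.
func readChunk(r io.Reader) ([]byte, error) {
	var n uint32
	if err := binary.Read(r, binary.BigEndian, &n); err != nil {
		return nil, err
	}
	buf := make([]byte, n)
	_, err := io.ReadFull(r, buf)
	return buf, err
}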
If you're using TCP, use a TCP library that gives you the data as a stream instead of trying to handle the packets yourself.
A stream is good. Another option is to store the incoming data in a buffer (e.g. a char*) and search for application-level framing characters or, in the case of XML, the root end tag. Once you've found a complete XML message at the front of the buffer, pull it out and process it.
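A rough sketch of that buffer-and-scan approach (in Go for brevity; </message> is just a placeholder for whatever your root element's end tag is):
package main

import "bytes"

var buf []byte // accumulates everything received so far

// feed appends newly received bytes and returns any complete XML
// documents found at the front of the buffer.
func feed(data []byte) [][]byte {
	buf = append(buf, data...)
	var docs [][]byte
	endTag := []byte("</message>")
	for {
		i := bytes.Index(buf, endTag)
		if i < 0 {
			return docs // no complete document yet; keep buffering
		}
		end := i + len(endTag)
		docs = append(docs, append([]byte(nil), buf[:end]...)) // copy out the message
		buf = buf[end:]
	}
}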
The XMPP instant messaging protocol, used by Jabber, has a way of moving XML chunks over a TCP stream. I don't know exactly how it is done myself, but RFC 3920 is the protocol definition. You should be able to work it out from that.