Protocol Buffer Wireshark Plugin - tcp

I am looking for a Wireshark plugin for Google Protocol Buffers, and I found this GPB Wireshark plugin: http://code.google.com/p/protobuf-wireshark/
Apparently it only supports UDP. Is there a GPB plugin for Wireshark that works over TCP?

You could use the Protobuf dissector shipped with Wireshark instead. Since Wireshark 3.2.0, *.proto files can be configured to enable more precise parsing of serialized Protobuf data (such as gRPC).
Parsing Protobuf data based on UDP port has been supported since that version. You can also write a simple dissector that invokes the Protobuf dissector for TCP by passing the message type through the 'data' parameter in C, or through pinfo.private["pb_msg_type"] in Lua.
You can find details in the release notes (https://www.wireshark.org/docs/relnotes/wireshark-3.2.0.html).
Details on invoking the Protobuf dissector from your own dissector are at https://www.wireshark.org/docs/wsug_html_chunked/ChProtobufUDPMessageTypes.html.
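A minimal Lua sketch of such a wrapper dissector, assuming a TCP port of 8127 and a message type of com.example.SensorMsg (both placeholders you would replace with your own):

```lua
-- Wrapper dissector that hands TCP payloads to Wireshark's built-in
-- Protobuf dissector, telling it which message type to expect.
local protobuf_dissector = Dissector.get("protobuf")

local my_proto = Proto("my_pb_tcp", "My Protobuf over TCP")

function my_proto.dissector(tvb, pinfo, tree)
    -- Placeholder message type; use your own fully-qualified name.
    pinfo.private["pb_msg_type"] = "message,com.example.SensorMsg"
    protobuf_dissector:call(tvb, pinfo, tree)
end

-- Placeholder port; register whatever port your application uses.
DissectorTable.get("tcp.port"):add(8127, my_proto)
```

Note that a raw TCP stream has no message boundaries, so in practice you also need to handle framing (e.g. a length prefix) before handing bytes to the Protobuf dissector; the Wireshark user guide covers this.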

This plugin only supports Wireshark 1.0.2.

Related

Is it possible to use ActionCable to receive TCP data?

Good people,
I am looking to write a Rails 5 app that will receive a TCP data stream from an external party.
Is it possible to receive TCP data using ActionCable?
Are there sample apps online? Google hasn't been helpful.
Or will I need to use EventMachine to receive the TCP data?
My answer might not be much help.
ActionCable forces its own protocol, which is undocumented but simple.
A WebSocket isn't exactly a simple TCP socket, since it uses its own protocol for framing.
So for a raw TCP socket you could use EventMachine; however, it has several issues - for example, no true SSL support (although that might not be an issue if you are running in a server environment), and you don't have much control over what is going on.
Depending on how many connections you have, you could use a simple TCPServer socket or something more advanced based on nio4r, e.g. celluloid-io.

protobuf vs gRPC

I am trying to understand protobuf and gRPC and how I can use both. Could you help me understand the following:
Considering the OSI model, what sits where? For example, is Protobuf at layer 4?
Thinking through a message transfer, what is the "flow"? What does gRPC do that protobuf misses?
If the sender uses protobuf, can the server use gRPC, or does gRPC add something that only a gRPC client can deliver?
If gRPC makes synchronous and asynchronous communication possible, Protobuf is just for the marshalling and therefore has nothing to do with state - true or false?
Can I use gRPC in a frontend application instead of REST or GraphQL?
I already know - or assume I do - that:
Protobuf
Binary protocol for data interchange
Designed by Google
Uses a generated "struct"-like description at client and server to marshal/unmarshal messages
gRPC
Uses protobuf (v3)
Again from Google
Framework for RPC calls
Makes use of HTTP/2 as well
Synchronous and asynchronous communication possible
I assume this is an easy question for someone already using the technology, but I would still ask you to be patient with me and help me out. I would also be really thankful for any networking deep dive into these technologies.
Protocol buffers is (are?) an Interface Definition Language and serialization library:
You define your data structures in its IDL i.e. describe the data objects you want to use
It provides routines to translate your data objects to and from binary, e.g. for writing/reading data from disk
gRPC uses the same IDL but adds "rpc" syntax, which lets you define Remote Procedure Call method signatures using the Protobuf data structures as data types:
You define your data structures
You add your rpc method definitions
It provides code to serve up and call the method signatures over a network
You can still serialize the data objects manually with Protobuf if you need to
In answer to the questions:
gRPC works at layers 5, 6 and 7. Protobuf works at layer 6.
When you say "message transfer", Protobuf is not concerned with the transfer itself. It only works at either end of any data transfer, turning bytes into objects
Using gRPC by default means you are using Protobuf. You could write your own client that uses Protobuf but not gRPC to interoperate with gRPC, or plug other serializers into gRPC - but using gRPC would be easier
True
Yes you can
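To make "Protobuf works at layer 6" concrete, here is a hand-rolled sketch of the Protobuf wire format for a message with a single string field (field number 1), done without the protobuf library; the varint and tag scheme follow the public encoding specification:

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        n, low = n >> 7, n & 0x7F
        if n:
            out.append(low | 0x80)  # high bit set: more bytes follow
        else:
            out.append(low)
            return bytes(out)

def encode_string_field(field_number: int, value: str) -> bytes:
    """Encode a length-delimited (wire type 2) string field."""
    tag = (field_number << 3) | 2       # field number + wire type
    data = value.encode("utf-8")
    return encode_varint(tag) + encode_varint(len(data)) + data

# Equivalent to: message HelloRequest { string myname = 1; }
# serialized with myname = "world"
payload = encode_string_field(1, "world")
print(payload.hex())  # 0a05776f726c64
```

This is just Protobuf's presentation-layer job: turning a typed object into bytes and back. Everything about moving those bytes (connections, streams, calls) is what gRPC adds on top.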
Actually, gRPC and Protobuf are two completely different things. Let me simplify:
gRPC manages the way a client and a server interact (just like a web client/server with a REST API)
protobuf is just a serialization/deserialization tool (just like JSON)
gRPC has two sides: a server side, and a client side that is able to dial a server. The server exposes RPCs (i.e. functions that you can call remotely). And you have plenty of options there: you can secure the communication (using TLS), add an authentication layer (using interceptors), ...
You can use protobuf inside any program; it has no need to be client/server. If you need to exchange data and want it to be strongly typed, protobuf is a nice option (fast & reliable).
That being said, you can combine both to build a nice client/server system: gRPC will be your client/server code, and protobuf your data protocol.
PS: I wrote this paper to show how one can build a client/server with gRPC and protobuf using Go, step by step.
gRPC is a framework built by Google, and it is used in production projects by Google itself; Hyperledger Fabric is built with gRPC, and there are many open-source applications built with gRPC.
protobuf is a data representation, like JSON. It is also from Google; in fact, they have thousands of .proto files generated in their production projects.
grpc
gRPC is an open-source framework developed by Google
It allows us to define the request and response for RPCs and lets the framework handle the rest
REST is CRUD-oriented, but gRPC is API-oriented (no constraints)
Built on top of HTTP/2
Provides authentication, load balancing, monitoring, and logging
[HTTP/2]
HTTP/1.1 was released in 1997, a long time ago
HTTP/1.x opens a new TCP connection to the server for each request
It doesn't compress headers
No server push; it just works with request/response
HTTP/2 was released in 2015 (it evolved from SPDY)
Supports multiplexing
Client & server can push messages in parallel over the same TCP connection
Greatly reduces latency
HTTP/2 supports header compression
HTTP/2 is binary
protobuf is binary, so it is a great match for HTTP/2
[TYPES]
Unary
client streaming
server streaming
Bi directional streaming
gRPC servers are async by default
gRPC clients can be sync or async
protobuf
Protocol buffers are language agnostic
Parsing protocol buffers (a binary format) is less CPU-intensive than parsing text formats
[Naming]
Use CamelCase for message names
underscore_separated for field names
Use CamelCase for enums and CAPITALS_WITH_UNDERSCORES for value names
[Comments]
Support //
Support /* */
[Advantages]
Data is fully typed
Serialized data is compact (less bandwidth usage)
The schema (message) is needed to generate code and to read the data
Documentation can be embedded in the schema
Data can be read across any language
The schema can evolve over time in a safe manner
Faster than XML
Code is generated for you automatically
Google invented protobuf; they use some 48,000 protobuf message types across 12,000 .proto files
Lots of RPC frameworks, including gRPC, use protocol buffers to exchange data
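Putting the naming and comment conventions above together, a .proto file might look like this (the message, enum, and field names here are made up for illustration):

```proto
syntax = "proto3";

/* Block comments are supported,
   and can double as embedded documentation. */
// Line comments too.

enum WeatherCondition {            // CamelCase enum name
  WEATHER_CONDITION_UNKNOWN = 0;   // CAPITALS_WITH_UNDERSCORES values
  WEATHER_CONDITION_SUNNY = 1;
}

message WeatherReport {            // CamelCase message name
  string station_id = 1;           // underscore_separated field names
  WeatherCondition condition = 2;
}
```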
gRPC is an instantiation of the RPC integration style, based on the protobuf serialization library.
There are five integration styles: RPC, File Transfer, MOM (message-oriented middleware), Distributed Objects, and Shared Database.
RMI is another instantiation of the RPC integration style; there are many others. MQ is an instantiation of the MOM integration style, as is RabbitMQ. An Oracle database schema is an instantiation of the Shared Database integration style. CORBA is an instantiation of the Distributed Objects integration style. And so on.
Avro is an example of another (binary) serialization library.
gRPC (google Remote Procedure Call) is a client-server structure.
Protocol buffers are a language-neutral, platform-neutral extensible mechanism for serializing structured data.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloResponse) {}
}

message HelloRequest {
  string myname = 1;
}

message HelloResponse {
  string responseMsg = 1;
}
A protocol buffer is used to exchange data between the gRPC client and gRPC server; it is the protocol between them. The protocol buffer is implemented as a .proto file in a gRPC project. It defines the interface, i.e. the service provided by the server side, the message formats exchanged between client and server, and the rpc methods the client uses to access the server.
Both the client and server sides have the same proto files. (One real example: the envoy xDS gRPC client-side proto files and server-side proto files.) This means that both the client and server know the interface, the message format, and the way the client accesses services on the server side.
The proto files (e.g. protocol buffer) will be compiled into real language.
The generated code contains both stub code for clients to use and an abstract interface for servers to implement, both with the method defined in the service.
A service defined in the proto file (i.e. the protocol buffer) is translated into an abstract class xxxxImplBase (i.e. the interface on the server side).
Stub factory methods such as newBlockingStub() (synchronous) or newStub() (asynchronous) are how the client makes the remote procedure calls (i.e. the rpc methods in the proto file).
And the methods which build request and response messages are also implemented in the generated files.
I re-implemented simple client and server-side samples based on samples in the official doc. cpp client, cpp server, java client, java server, springboot client, springboot server
Recommended Useful Docs:
cpp/helloworld/README.md#generating-grpc-code,
cpp/basics/#generating-client-and-server-code,
cpp/basics/#defining-the-service,
generated-code/#client-stubs,
a blocking/synchronous stub
StreamObserver
how-to-use-grpc-with-spring-boot
Others: core-concepts,
gRPC can use protocol buffers as both its Interface Definition Language (IDL) and as its underlying message interchange format
In its simplest form, gRPC is like a public vehicle: it exchanges data between client and server.
The protocol buffer is the protocol, like your bus ticket, that decides where you can and cannot go.

WSO2 ESB TCP/UDP Axis2 server (case study)

I have been studying WSO2 ESB for this particular case:
We have some remote devices that monitor various types of data (temperature, wind, warnings, alarms, panic, etc.). These devices send data packages to a server over UDP and TCP, mostly in binary format (start bit, protocol, values, time, stop bit).
I know that WSO2 ESB can support TCP and UDP transport via an Axis2 server; however, all the examples I have found need the data to be in SOAP (or XML-like) format.
Is there any way to configure the Axis2 server to receive the raw packages?
Thanks in advance.
AFAIK this can be achieved by configuring message builders and formatters in the axis2.xml.
Apache Axis2, which is the base for Apache Synapse's SOAP processing, lets users add their own custom message formats through Builders and Formatters.
A Builder accepts a binary data stream and creates an XML message, and a Formatter accepts an XML message and converts it to bytes.
Add the following message builders and formatters to the corresponding sections in conf/axis2.xml (for both Apache Synapse and WSO2 Enterprise Service Bus).
<messageBuilder contentType="text/html"
class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
<messageFormatter contentType="text/html"
class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
The above example shows how to enable Binary Relay for text/html content type.
You need to repeat the above pair of configurations for each content type you want to be handled at the byte level.
For more information on setting up the Binary Relay, please refer to this documentation.
Hope this information will help you.

Simple Http alternative for Wireshark

In web development, I usually use Firebug. But now I have to use Wireshark to monitor HTTP requests sent by an Android emulator. Wireshark is a fantastic tool; however, it is too heavyweight for what I'm doing, and copying/pasting requests out of it is quite painful.
So I'm looking for a simpler alternative on Linux (Ubuntu).
Wireshark is mostly bloated due to the GUI front-end; however, it has a text version called tshark that uses substantially less memory. The syntax is very similar to tcpdump's.
To capture packets sent to and from a web server at 192.168.12.14, use this:
tshark -n -i eth0 tcp and host 192.168.12.14 and port 80
You may also consider using ngrep http://ngrep.sourceforge.net/usage.html#http

A Question regarding wget

When I type wget http://yahoo.com:80 in a Unix shell, can someone explain to me what exactly happens from entering the command to reaching the Yahoo server? Thank you very much in advance.
The RFCs provide you with all the details you need and are not tied to a tool or OS.
In your case, wget uses HTTP, which is based on TCP, which in turn uses IP; below that it depends on your link, but most of the time you will encounter Ethernet frames.
In order to understand what happens, I urge you to install Wireshark and have a look at the dissected frames; you will get an overview of which data belongs to which network layer. That is the easiest way to visualize and learn what happens. Besides this, if you really like (irony) funny documents (/irony), have a look at the corresponding RFCs - RFC 2616 for HTTP, for example; for the others, have a look at the external links at the bottom of the Wikipedia articles.
The program uses DNS to resolve the host name to an IP address. The classic API call is gethostbyname, although newer programs should use getaddrinfo to be IPv6-compatible.
Since you specify the port, the program can skip looking up the default port for http. But if you hadn't, it would use getservbyname to look up the default port (then again, wget may just hard-code port 80).
The program uses the network API to connect to the remote host. This is done with socket and connect.
The program writes an HTTP request to the connection with a call to write.
The program reads the http response with one or more calls to read.
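The steps above can be sketched with raw sockets in Python. This is a simplified stand-in for what wget does, using a minimal HTTP/1.1 request (the host and port are whatever you pass in):

```python
import socket

def http_get(host: str, port: int = 80) -> bytes:
    # 1. Resolve the host name (DNS lookup when it isn't an IP literal),
    #    getaddrinfo-style so it works for both IPv4 and IPv6.
    family, type_, proto, _, addr = socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM)[0]
    # 2. Connect to the remote host (socket + connect).
    with socket.socket(family, type_, proto) as sock:
        sock.connect(addr)
        # 3. Write the HTTP request to the connection.
        request = (f"GET / HTTP/1.1\r\nHost: {host}\r\n"
                   "Connection: close\r\n\r\n")
        sock.sendall(request.encode("ascii"))
        # 4. Read the HTTP response until the server closes the connection.
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
        return b"".join(chunks)

# e.g. http_get("example.com") returns the status line, headers, and body
```

Real wget adds much more on top (redirects, retries, saving to disk), but the four system-level steps are the same.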
