Translate TCP binary to human readable - tcp

By using Wireshark, I want to read the conversation between my machine (running MetaTrader) and the MetaTrader server. It's a TCP conversation, but unfortunately I couldn't decode the binary payload (I tried base64 decoding and other things, playing with the hex, but nothing worked).
Is there any way to decode this conversation?
Big thanks in advance for your time/reply
Respectfully

It's almost certainly going to be impossible without going to extreme measures that require deep knowledge. Unless the protocol was designed by a complete imbecile, the information will be encrypted, so trying to decode it by observation alone is extremely unlikely to work.
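If you still want to poke at the captured bytes, a hex/ASCII dump at least shows whether any readable strings survive. A minimal Python sketch, assuming you have saved the raw stream from Wireshark (Follow TCP Stream, then save the data as raw); the file name is a placeholder:

    # Hex/ASCII dump of a payload exported from Wireshark.
    import string

    PRINTABLE = set(string.printable) - set("\t\n\r\x0b\x0c")

    def hexdump(data: bytes, width: int = 16) -> None:
        for offset in range(0, len(data), width):
            chunk = data[offset:offset + width]
            hex_part = " ".join(f"{b:02x}" for b in chunk)
            ascii_part = "".join(chr(b) if chr(b) in PRINTABLE else "." for b in chunk)
            print(f"{offset:08x}  {hex_part:<{width * 3}} {ascii_part}")

    if __name__ == "__main__":
        with open("stream.raw", "rb") as f:   # placeholder file name
            hexdump(f.read())

If the dump is uniformly random-looking with no recognizable strings, that is consistent with the payload being encrypted or compressed.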

Related

Why do we still use base64 but only in limited contexts, like SMTP?

I'm trying to explain base64 to someone, but I'm not grasping the fundamental need for it. And/or it seems like all network protocols would need it, meaning base64 wouldn't be a special thing.
My understanding is that base64 is used to convert binary data to text data so that the protocol sending the text data can use certain bytes as control characters.
This suggests we'll still be using base64 as long as we're sending text data.
But why do I only see base64 being used for SMTP and a few other contexts? Don't most commonly-used protocols need to:
support sending of arbitrary binary data
reserve some bytes for themselves as control chars
Maybe I should specify that I'm thinking of TCP/IP, FTP, SSH, etc. Yet base64 is not used in those protocols, to my knowledge. Why not? How are those protocols solving the same problems? Which then raises the reverse question: why doesn't SMTP use that solution instead of base64?
Note: obviously I have tried looking at WP, other Stack-O questions about base64, etc, and not found an answer to this specific question.
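As a quick illustration of the point in the question: base64 maps arbitrary bytes onto a small, safe ASCII alphabet (at roughly 33% size overhead), so the result can travel through channels that treat certain bytes as control characters. A minimal sketch with Python's standard library:

    import base64

    raw = bytes([0x00, 0xFF, 0x10, 0x0D, 0x0A])   # bytes a text channel might mangle
    encoded = base64.b64encode(raw)                # b'AP8QDQo=' -- plain ASCII only
    assert base64.b64decode(encoded) == raw
    print(encoded.decode("ascii"))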

How to read TCP packet? [duplicate]

Closed 10 years ago.
Possible Duplicate:
TCP IP: Is it possible to read what TCP/UDP data a program is sending remotely?
I want to read a packet I've captured with Wireshark. The packet contains data, 133 bytes in length. It is not encrypted. Yet the hex form of the data decodes in Wireshark to a string of mostly unintelligible gibberish.
Is there any way to read this data in human-readable form? I'm just trying to figure out how a game client works, that's all.
You would have to know the format to convert it into human-readable form. It's like a book written in Chinese -- if you don't know Chinese, it's going to look like unintelligible gibberish. But it makes perfect sense to anyone who does know Chinese.
Figuring out the format from just the data is as difficult as learning Chinese just from a book written in Chinese. It can be done, but it's a highly-specialized art.
For example, you can try not moving and seeing which numbers stay the same. Then move, and see which numbers change. That might clue you in to where the position information is. However, the entire packet might be scrambled with a pseudo-random sequence, in which case, it will be nearly impossible without reverse-engineering the software.
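A minimal sketch of that compare-two-captures approach, assuming you have exported two payloads of the same message type to files (the file names are placeholders):

    # Show which byte offsets differ between two captures of the "same" message.
    def diff_payloads(a: bytes, b: bytes) -> None:
        for offset, (x, y) in enumerate(zip(a, b)):
            if x != y:
                print(f"offset {offset:4d}: {x:02x} -> {y:02x}")
        if len(a) != len(b):
            print(f"lengths differ: {len(a)} vs {len(b)}")

    with open("standing_still.bin", "rb") as f1, open("after_moving.bin", "rb") as f2:
        diff_payloads(f1.read(), f2.read())

If almost every byte differs between otherwise identical actions, that points toward the scrambling/encryption case described above.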

How to read data in elementary stream

I have to read the EISS table in the elementary stream. How do I read the elementary stream and get access to the data? Are there any APIs available in DVB, JavaTV, or OCAP?
Do you have a standard that defines the format of the data you are trying to read? Something like ISO/IEC 13818-1 for the Transport Stream and Packetized Elementary Stream. If you do, you can find out exactly how to read the data and what it means. If that is not what you are asking, please be more specific in your question.
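Not EISS-specific, but as a sketch of what "reading the data per the standard" looks like: ISO/IEC 13818-1 transport packets are 188 bytes, start with sync byte 0x47, and carry a 13-bit PID in the next two bytes. A minimal Python walk over a capture file (the file name is a placeholder, and finding which PID carries your table is a separate signalling step):

    TS_PACKET_SIZE = 188
    SYNC_BYTE = 0x47

    def iter_ts_packets(path: str):
        with open(path, "rb") as f:
            while True:
                packet = f.read(TS_PACKET_SIZE)
                if len(packet) < TS_PACKET_SIZE:
                    break
                if packet[0] != SYNC_BYTE:
                    raise ValueError("lost sync: not aligned to a packet boundary")
                payload_unit_start = bool(packet[1] & 0x40)
                pid = ((packet[1] & 0x1F) << 8) | packet[2]
                yield pid, payload_unit_start, packet

    for pid, pusi, _ in iter_ts_packets("capture.ts"):
        print(f"PID 0x{pid:04x}  payload_unit_start={pusi}")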
There are a couple open source projects related to DVB that I know of:
TSFileSource
MediaPortal
Also possibly of use, but I'm not sure how open the source is:
DVBCore
Note, I've never used any of these personally, so I'm not sure how useful they will be, especially at the elementary stream level. They could be a starting point, though.
You might also check Project-X.

Instead of using common ciphers such as AES, Blowfish, or Twofish, what about creating my own cipher?

I don't know much about the heavy math behind cryptosystems; I get stuck when it gets hard with the Z/nZ algebra, and sometimes with all these exponents of exponents. It's not that I don't like it, it's just that the information you find on the web is not easy to follow blindly.
I was wondering: how reliable can an algorithm be when it encodes a message into plain binary? If my algorithm is arbitrary and known only to me, how can a cryptanalyst study an encrypted file and decrypt it, with or without having the decoded file?
I'm thinking about not using ASCII text to encode my message, and I have some ideas for making this algorithm/program.
Attacking an AES- or Blowfish-encrypted file is easier for a cryptanalyst than attacking a file whose encryption algorithm is unknown to him, but how does he do it then?
I don't know if I understood this correctly, but a CS teacher once told me that codes are harder to crack than ciphers.
What do you think?
Attacking an AES- or Blowfish-encrypted file is easier for a cryptanalyst than attacking a file whose encryption algorithm is unknown to him...
What about:
Attacking an untested, self-written algorithm with no real research behind it is easier for a cryptanalyst than attacking a file encrypted with a well-known, proven algorithm that has been used correctly...
In short, DO NOT roll your own cryptography unless you're an expert; no, not even then, only if you're part of an expert group in that field.
Nintendo failed when they implemented RSA on their own in the Wii, Sony failed too when using it in the PS3 (they pretty much used XKCD's random number function for M...)
And you really think you can win by using security by obscurity?
PS: That doesn't mean you should take the Wikipedia entry on RSA and roll your own implementation from it (that's exactly where Sony and Big-N failed); no, use a tested, open-source implementation.
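As one concrete illustration of "use a tested, open-source implementation": in Python, the third-party cryptography package's Fernet recipe gives authenticated encryption in a few lines (this assumes pip install cryptography and is just a sketch, not a full key-management story):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()               # keep this secret and store it safely
    f = Fernet(key)
    token = f.encrypt(b"my secret message")   # AES + HMAC under the hood
    print(token)                              # opaque, URL-safe base64 text
    print(f.decrypt(token))                   # b'my secret message'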
You seem to be using two words interchangeably but remember that Encoding is Not Encryption
When the attacker has no idea which algorithm you used and the algorithm is safe, the cryptanalyst has a hard job. So it is unimportant whether you use AES or your own cipher, as long as yours is as strong and safe as AES. Here is the "but": cryptography is a bit demanding, and therefore there are many ways to shoot yourself in the foot without knowing it. I would suggest using standard algorithms, maybe with some safe variations.
Common wisdom is that you should not build your own algorithms, and especially not rely on these algorithms remaining secret.
The conceptual reason is that good encryption is about quantified confidentiality. We do not want our secrets to get cracked, but in a more precise way we want to be able to tell how much it would cost to crack our secrets (and hopefully show that the cost is way too high to be envisioned by any entity on Earth). This is the real advance which occurred a few years after World War II: to understand the distinction between key and algorithm. The key concentrates the secret. The algorithm becomes the implementation.
Since the implementation is, well, implemented, it exists as some code or a device, which is tangible and stored even when it is not used. Keeping an implementation secret requires keeping track of the hard disk on which the code resides at all times. If the attacker sees the binary code, he may be able to reverse-engineer it, something which depends on his wits and patience. The point here is that it is very difficult to be able to say: "it costs X dollars to recover a description of the algorithm".
On the other hand, the key is short. It can be stored safely much more easily; e.g. you could memorize it, and avoid committing it to any permanent storage device. You then have to worry about your key only at times when you use it (and not when you do not, e.g. in the middle of the night, when you sleep). The number of possible keys is a simple mathematical problem. You can easily and accurately estimate the average cost of enumerating the possible keys until your key is found. The key is a sturdy foundation for quantified security.
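To make that estimate concrete, here is the back-of-the-envelope arithmetic for a 128-bit key; the guess rate is an assumed figure, chosen generously for illustration:

    key_bits = 128
    guesses_per_second = 10**12                      # assumed attacker speed
    expected_guesses = 2**(key_bits - 1)             # on average, half the key space
    seconds = expected_guesses / guesses_per_second
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{years:.2e} years")                      # roughly 5e+18 years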
So you should not roll your own algorithms because then you do not know how much security you get.
Also, most people who rolled their own algorithms found out, usually the hard way, that they did not get much security at all. Designing a good encryption algorithm is hard, because it cannot be automatically tested. Your code may run, and properly decrypt data that it encrypted, but it tells you nothing about how secure the algorithm is. The design of the AES was the result of a process which took several years and involved hundreds of skilled cryptographers (most of whom had a PhD and years of experience in academic research on symmetric encryption). That a lone developer could do as well, let alone better, in the secrecy of his own workshop, looks kind of... implausible.
The biggest part of your strategy is called "security through obscurity." You're making the gamble that, since nobody knows the precise details of your little variation on an idea, they won't be able to figure it out.
I'm not a security expert, but I can tell you that you probably won't come up with something incredibly new. Cryptography has been studied by people for millennia and your idea is highly unlikely to be original. Even if you're a relatively good programmer and code something really tricky, the question will come down to who you're up against. If you're just trying to protect your data from your kid sister, then it will probably be fine. On the other hand, if you're using it to send credit card numbers across the internet, then you're doomed to fail. It will be analysed in ways you didn't think of or don't know, and ultimately cracked.
Another way to think of it: algorithms like AES have been extensively studied by professionals in the field and their level of security is pretty well understood. Anything you come up with by yourself will not have the benefit of having been attacked by the best and brightest minds out there. You will have almost no idea of how good it actually is until people start reporting identity theft.

Are binary protocols dead? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
It seems like there used to be way more binary protocols because of the very slow internet speeds of the time (dialup). I've been seeing everything being replaced by HTTP and SOAP/REST/XML.
Why is this?
Are binary protocols really dead or are they just less popular? Why would they be dead or less popular?
You Just Can't Beat the Binary
Binary protocols will always be more space efficient than text protocols. Even as internet speeds drastically increase, so does the amount and complexity of information we wish to convey.
The text protocols you reference are outstanding in terms of standardization, flexibility and ease of use. However, there will always be applications where the efficiency of binary transport will outweigh those factors.
A great deal of information is binary in nature and will probably never be replaced by a text protocol. Video streaming comes to mind as a clear example.
Even if you compress a text-based protocol (e.g. with GZip), a general purpose compression algorithm will never be as efficient as a binary protocol designed around the specific data stream.
But Sometimes You Don't Have To
The reason you are seeing more text-based protocols is because transmission speeds and data storage capacity have indeed grown fast compared to the data size for a wide range of applications. We humans find it much easier to work with text protocols, so we designed our ubiquitous XML protocol around a text representation. Certainly we could have created XML as a binary protocol, if we really had to save every byte, and built common tools to visualize and work with the data.
Then Again, Sometimes You Really Do
Many developers are used to thinking in terms of multi-GB, multi-core computers. Even your typical phone these days puts my first IBM PC-XT to shame. Still, there are platforms such as embedded devices, that have rather strict limitations on processing power and memory. When dealing with such devices, binary may be a necessity.
A parallel with programming languages is probably very relevant.
While high-level languages are the preferred tools for most programming jobs, and have been made possible (in part) by the increases in CPU speed and storage capacity, they haven't removed the need for assembly language.
In a similar fashion, non-binary protocols introduce more abstraction and more extensibility and are therefore the vehicle of choice, particularly for application-level communication. They too have benefited from increases in bandwidth and storage capacity. Yet at lower levels it is still impractical to be so wasteful.
Furthermore unlike with programming languages where there are strong incentives to "take the performance hit" in exchange for added simplicity, speed of development etc., the ability to structure communication in layers makes the complexity and "binary-ness" of lower layers rather transparent to the application level. For example so long as the SOAP messages one receives are ok, the application doesn't need to know that these were effectively compressed to transit over the wire.
Facebook, Last.fm, and Evernote use the Thrift binary protocol.
I rarely see this talked about but binary protocols, block protocols especially can greatly simplify the complexity of server architectures.
Many text protocols are implemented in such a way that the parser has no basis upon which to infer how much more data is necessary before a logical unit has been received (XML and JSON can provide the minimum bytes necessary to finish, but can't provide meaningful estimates). This means that the parser may have to periodically cede to the socket-receiving code to retrieve more data. This is fine if your sockets are in blocking mode, not so easy if they're not. It generally means that all parser state has to be kept on the heap, not the stack.
If you have a binary protocol where very early in the receive process you know exactly how many bytes you need to complete the packet, then your receiving operations don't need to be interleaved with your parsing operations. As a consequence, the parser state can be held on the stack, and the parser can execute once per message and run straight through without pausing to receive more bytes.
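A minimal sketch of that length-first pattern; the 4-byte big-endian prefix is an assumed framing choice for illustration, not any particular protocol's format:

    import socket
    import struct

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = bytearray()
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf.extend(chunk)
        return bytes(buf)

    def recv_message(sock: socket.socket) -> bytes:
        (length,) = struct.unpack(">I", recv_exact(sock, 4))  # size known up front
        return recv_exact(sock, length)                       # then parse in one pass

Because the full message is in hand before parsing starts, the parser can run straight through on the stack instead of being interleaved with socket reads.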
There will always be a need for binary protocols in some applications, such as very-low-bandwidth communications. But there are huge advantages to text-based protocols. For example, I can use Firebug to easily see exactly what is being sent and received from each HTTP call made by my application. Good luck doing that with a binary protocol :)
Another advantage of text protocols is that even though they are less space efficient than binary, text data compresses very well, so the data may be automatically compressed to get the best of both worlds. See HTTP Compression, for example.
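A rough illustration of how well repetitive text compresses, using Python's standard gzip module on a made-up JSON payload:

    import gzip, json

    payload = json.dumps(
        [{"sensor": i, "temperature": 21.5, "unit": "C"} for i in range(100)]
    ).encode()
    compressed = gzip.compress(payload)
    print(len(payload), "bytes raw ->", len(compressed), "bytes gzipped")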
Binary protocols are not dead. It is much more efficient to send binary data in many cases.
WCF supports binary encoding using TCP.
http://msdn.microsoft.com/en-us/library/ms730879.aspx
So far the answers all focus on space and time efficiency. No one has mentioned what I feel is the number one reason for so many text-based protocols: sharing of information. It's the whole point of the Internet and it's far easier to do with text-based, human-readable protocols that are also easily processed by machines. You rid yourself of language dependent, application-specific, platform-biased programming with text data interchange.
Link in whatever XML/JSON/*-parsing library you want to use, find out the structure of the information, and snip out the pieces of data you're interested in.
Some binary protocols I've seen in the wild for Internet applications:
Google Protocol Buffers, which are used for internal communications but also in, for example, Google Chrome bookmark syncing
Flash AMF, which is used for communication with Flash and Flex applications. Both Flash and Flex have the capability of communicating via REST or SOAP; however, the AMF format is much more efficient for Flex, as some benchmarks show
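As a taste of what such a binary encoding looks like on the wire, here is the varint scheme Protocol Buffers uses for integers (7 payload bits per byte, high bit set on every byte except the last); this is a from-scratch sketch, not the official library:

    def encode_varint(n: int) -> bytes:
        out = bytearray()
        while True:
            byte = n & 0x7F
            n >>= 7
            if n:
                out.append(byte | 0x80)   # more bytes follow
            else:
                out.append(byte)          # last byte
                return bytes(out)

    print(encode_varint(1).hex())      # 01
    print(encode_varint(300).hex())    # ac02 -- the classic example from the protobuf docs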
I'm really glad you have raised this question, as non-binary protocols have multiplied in usage manyfold since the introduction of XML. Ten years ago, you would see virtually everybody touting their "compliance" with XML-based communications. However, this approach, one of several alternatives to binary protocols, has many deficiencies.
One of the touted values, for example, was readability. But readability is important for debugging, when humans should read the transaction. XML-based transfers are very inefficient when compared with binary transfers, because the XML itself is still a binary stream that has to be translated by another layer into textual fragments ("tokens"), and then back into binary for the contained data.
Another value people found was extensibility. But extensibility can be easily maintained if a protocol version number for the binary stream is used at the beginning of the transaction. Instead of sending XML tags, one could send binary indicators. If the version number is an unknown one, then the receiving end can download the "dictionary" of this unknown version. This dictionary could, for example, be an XML file. But downloading the dictionary is a one time operation, instead of every single transaction!
So efficiency could be kept together with extensibility, and very easily! There are a good number of "compiled XML" protocols out there which do just that.
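A minimal sketch of that version-plus-indicators idea; the version number, field indicators, and layout here are invented purely for illustration:

    import struct

    PROTOCOL_VERSION = 3
    FIELD_TEMPERATURE = 1    # numeric "dictionary" entries agreed per protocol version
    FIELD_HUMIDITY = 2

    def build_message(temperature: float, humidity: float) -> bytes:
        header = struct.pack(">BB", PROTOCOL_VERSION, 2)           # version, field count
        body = struct.pack(">Bf", FIELD_TEMPERATURE, temperature)  # 1-byte indicator + value
        body += struct.pack(">Bf", FIELD_HUMIDITY, humidity)
        return header + body

    msg = build_message(21.5, 40.0)
    print(len(msg), msg.hex())   # 12 bytes, versus dozens of characters of XML tags

A receiver that sees an unknown version byte can fetch that version's field dictionary once, as described above, instead of paying for tag names in every message.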
Last, but not least, I have even heard people say that XML is a good way to overcome the little-endian and big-endian differences between binary systems, for example Sun computers vs Intel computers. But this is incorrect: if both sides can accept XML (ASCII) in the right way, surely both sides can accept binary in the right way, since XML and ASCII are themselves transmitted as binary.
Hope you find this interesting reading!
Binary protocols will continue to live wherever efficiency is required. Mostly, they will live at the lower levels, where hardware implementation is more common than software implementation. Speed isn't the only factor; the simplicity of implementation is also important. Making a chip process binary data messages is much easier than parsing text messages.
Surely this depends entirely on the application? There have been two general types of example so far: XML/HTML-related answers, and video/audio. One is designed to be 'shared', as noted by Jonathon, and the other is efficient in its transfer of data (and without Matrix vision, 'reading' a movie would never be useful the way reading an HTML document is).
Ease of debugging is not a reason to choose a text protocol over a 'binary' one - the requirements of the data transfer should dictate that. I work in the Aerospace industry, where the majority of communications are high-speed, predictable data flows like altitude and radio frequencies, thus they are assigned bits on a stream and no human-readable wrapper is required. It is also highly efficient to transfer and, other than interference detection, requires no meta data or protocol processing.
So certainly I would say that they are not dead.
I would agree that people's choices are probably affected by the fact that they have to debug them, but will also heavily depend on the reliability, bandwidth, data type, and processing time required (and power available!).
They are not dead because they are the underlying layers of every communication system. Every major communication system's data link and network layers are based on some kind of "binary protocol".
Take the internet for example, you are now probably using Ethernet in your LAN, PPPoE to communicate with your ISP, IP to surf the web and maybe FTP to download a file. All of which are "binary protocols".
We are seeing this shift towards text-based protocols in the upper layers because they are much easier to develop and understand when compared to "binary protocols", and because most applications don't have strict bandwidth requirements.
Depends on the application...
I think in real-time environments (FireWire, USB, field buses...) there will always be a need for binary protocols
Are binary protocols dead?
Two answers:
Let's hope so.
No.
At least a binary protocol is better than XML, which provides all the readability of a binary protocol combined with even less efficiency than a well-designed ASCII protocol.
Eric J's answer pretty much says it, but here's some more food for thought and facts. Note that the stuff below is not about media protocols (videos, images). Some items may be clear to you, but I keep hearing myths every day so here you go ...
There is no difference in expressiveness between a binary protocol and a text protocol. You can transmit the same information with the same reliability.
For every optimum binary protocol, you can design an optimum text protocol that takes just around 15% more space, and that protocol you can type on your keyboard.
In practice (in the protocols I see every day), the difference is often even less significant due to the static nature of many binary protocols.
For example, take a number that can become very large (e.g., in the 32-bit range) but is usually very small. In binary, people usually model this as four bytes. In text, it's often done as a printed number followed by a colon. In this case, numbers below ten become two bytes and numbers below 100 three bytes. (You can of course claim that the binary encoding is bad and that you could use some size bits to make it more space efficient, but that's another thing you have to document, implement on both sides, and be able to troubleshoot when it comes over your wire.)
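The size trade-off in that example, made concrete with Python's struct module (a fixed 4-byte field versus a decimal string plus a one-byte delimiter):

    import struct

    for n in (7, 42, 30000, 2_000_000_000):
        binary = struct.pack(">I", n)        # always 4 bytes
        text = f"{n}:".encode("ascii")       # digits plus the delimiter
        print(n, len(binary), len(text))     # text wins for small values, loses for large ones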
For example, messages in binary protocols are often framed by length fields and/or terminators, while in text protocols you just use a CRLF.
In practice, the difference is often less significant due to required redundancy.
You want some level of redundancy, no matter whether it's binary or text. Binary protocols often leave no room for error. You have to document every bit that you send 100% correctly, and since most of us are humans, that happens rarely, and you can't read the data well enough to draw a safe conclusion about what is correct.
So in summary: binary protocols are theoretically more space- and compute-efficient, but in practice the difference is often smaller than you think, and the deal is often not worth it. I work in the Internet of Things area and have to deal almost daily with custom, badly designed binary protocols which are really hard to troubleshoot, annoying to implement, and no more space efficient. If you don't absolutely need to squeeze the last milliampere out of your battery and count microcontroller cycles (or transmit media), think twice.
