I realized there is no encryption in CAN bus communication. So my question is: is it worth adding an encryption method to CAN bus communication in a vehicle? Since the communication takes place between the ECUs in the same vehicle, I believe the CAN bus is safe against remote attacks/attackers, so I think encryption is not necessary. However, encrypting CAN bus traffic might make it harder to reverse engineer your vehicle. Am I correct? To sum up, I have 3 questions:
Does the CAN bus need an encryption method, and is adding one worth the effort?
If yes, what kind of encryption method would be best for CAN bus communication in a vehicle?
Would adding encryption to the CAN bus in your vehicle make it harder to reverse engineer?
Not really. It depends on what you want to protect against. To prevent a car thief from hacking into your CAN bus, use the usual protection measures against thieves: locks and alarms.
A couple of horrible car designs connect an MCU on the bus to the Internet for firmware updates. If you do that, you risk getting it hacked remotely, if an attacker can figure out how to download their own firmware into that MCU. Jeep had a design flaw like that, IIRC. But the flaw is not in the CAN bus itself; it is the Internet access provided to an MCU which also has access to the bus.
I suppose whatever is currently regarded as safe. But if it involves a lot of heavy math, it might be unsuitable for the purpose, since these are hard real-time systems, often with low-end, decentralized CPUs. They won't be able to do heavy number-crunching fast enough.
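For what it's worth, when CAN traffic is protected in practice, the priority is usually authentication rather than secrecy, and it can be done with cheap, integer-only primitives that fit real-time budgets. A minimal sketch follows; the key, CAN ID, and 4-byte truncation are illustrative choices, and production schemes such as AUTOSAR SecOC differ in detail:

```python
import hmac
import hashlib

# Hypothetical shared key provisioned into each ECU at manufacture.
KEY = bytes.fromhex("000102030405060708090a0b0c0d0e0f")

def authenticate_frame(can_id: int, payload: bytes, counter: int) -> bytes:
    """Append a truncated MAC to a CAN payload. Classic CAN frames carry at
    most 8 data bytes, so designs in this spirit truncate the MAC to a few
    bytes and include a freshness counter to stop replays."""
    msg = can_id.to_bytes(4, "big") + counter.to_bytes(4, "big") + payload
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()[:4]
    return payload + tag

def verify_frame(can_id: int, data: bytes, counter: int) -> bool:
    payload, tag = data[:-4], data[-4:]
    msg = can_id.to_bytes(4, "big") + counter.to_bytes(4, "big") + payload
    return hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()[:4])

frame = authenticate_frame(0x123, b"\x01\x02\x03\x04", counter=7)
assert verify_frame(0x123, frame, counter=7)      # genuine frame accepted
assert not verify_frame(0x123, frame, counter=8)  # replayed/stale counter rejected
```

This buys tamper and replay detection without any per-byte secrecy, which also answers the reverse-engineering question: the payloads remain readable, but forging them requires the key.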
Of course.
I currently have an idea rolling around in my head about how to abstract away (to some degree) common data transfer mechanisms of embedded systems such as CAN, UART, SPI, I2C, Ethernet, etc. Ideally I would want to have something like the concept of a Pipe, but that the interface doesn't really care about what physical medium/protocol the data is flowing over. If I say "transfer this data through the pipe", it just works. Obviously there would have to be some protocol specific details in the construction of this pipe object but outside of that it shouldn't matter.
Is there an industry accepted name for what I'm trying to do?
Is this concept even a good idea? I feel like it will be useful for my purposes but I don't know if it's pointless in the grand scheme of the embedded engineering world.
Is there an industry accepted name for what I'm trying to do?
Hardware Abstraction Layer (HAL) comes closest. Keep in mind that the mentioned buses are hardware standards that don't define higher-layer protocols. At higher levels, this might be called "sockets", though that typically refers to IP specifically.
Is this concept even a good idea?
Probably not, unless you have specific requirements.
For example, it would be a good idea if you wish to replace an old RS-485 network with CAN but maintain the old hardware. It would then make sense to have as much of the software compatible as possible in that specific project.
Otherwise, from a general point of view, each of these buses is picked to suit quite different requirements: CAN when you need rugged and reliable, UART when you need backwards compatibility, SPI when you need fast, synchronous, close-to-metal on-board communication, Ethernet when you need fast communication over long distances, etc. The hardware requirements of one bus might exclude another.
For example, if I want my MCU to communicate with a "dumb" LCD, only SPI makes sense. I might have to toggle additional I/O pins together with the SPI signals. I might want to use DMA. Etc. In that context, there is no benefit for me if I have to use an abstract communication API which is portable to CAN, Ethernet etc. That's just bloat - this code will never run on any of those buses!
It makes far more sense to develop a HAL per bus type, so that you have one SPI HAL which is portable between microcontrollers. That's common and actually useful.
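To make the trade-off concrete, here is roughly what the question's Pipe idea looks like; a hypothetical sketch, with an in-memory loopback transport standing in for a real bus driver. Note that everything bus-specific (bitrate, chip-select pins, filters) would still leak into the constructors:

```python
from abc import ABC, abstractmethod

class Pipe(ABC):
    """Transport-agnostic byte pipe; concrete subclasses hide the bus details."""
    @abstractmethod
    def send(self, data: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...

class LoopbackPipe(Pipe):
    """In-memory stand-in; a hypothetical CanPipe or UartPipe would wrap a
    driver and take bus-specific settings (bitrate, filters, CS pin, ...)
    in its constructor."""
    def __init__(self) -> None:
        self._queue: list[bytes] = []
    def send(self, data: bytes) -> None:
        self._queue.append(data)
    def receive(self) -> bytes:
        return self._queue.pop(0)

# Calling code depends only on the abstract interface.
pipe: Pipe = LoopbackPipe()
pipe.send(b"sensor reading")
assert pipe.receive() == b"sensor reading"
```

The abstraction holds up exactly as long as callers never need the bus-specific knobs; the moment they do (DMA, extra GPIO toggling, filtering), the interface either grows bus-specific escape hatches or gets bypassed.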
Pipes are commonly used for IPC (inter-process communication) or for redirecting output to a file, etc. Remote versions of all of these exist, where "remote" means over a network or over an interface or bus like RS-232, SPI, etc. The name for remote IPC is remote procedure call (RPC); see https://os.mbed.com/cookbook/Interfacing-Using-RPC and https://github.com/EmbeddedRPC/erpc . As with all IPC, security is a major concern, especially over a network.
For example, writing a remote file (over TCP/IP) can be done as in https://askubuntu.com/questions/917200/how-to-redirect-command-output-from-remote-machine-to-local-file-via-ssh
You can wrap the SSH login into a function to keep the commands shorter (macros can also be used to wrap a function call).
There are also various implementations of communication protocols tunneled over one another, e.g. Ethernet over USB (https://en.wikipedia.org/wiki/Ethernet_over_USB).
I'm in a situation where, logically, UDP would be the perfect choice (I need to be able to broadcast to hundreds of clients). This is in a very small and controlled environment (the whole network spans a few square metres, all devices are local, and the network is way oversized, with gigabit Ethernet and switches everywhere).
Can I simply "ignore" all of the added reliability that usually has to be layered on top of UDP (checking that messages arrived, resending them, etc.), since that mostly applies where packet loss is expected (the Internet)? Or is it really advisable to treat UDP as "may not arrive" even in such conditions?
I'm not asking for theorycrafting; I'm really wondering if anyone can tell me from experience whether I'm actually likely to lose UDP packets in such an environment, or whether it's going to be a really rare event. Obviously, sending things and assuming they arrived is much simpler than handling all possible errors.
This is a matter of stochastics. Even in small local networks, packet losses will occur. Maybe they have an absolute probability of 1e-10 in a normal usage scenario. Maybe more, maybe less.
So, now comes real-world experience: network controllers and operating systems have a tough life in high-throughput scenarios, and the same applies, even more so, to switches. So, if you're near the capacity of your network infrastructure or your computational power, losses become far more likely.
So, in the end it's just a question of how high up in the networking stack you want to deal with errors: if you don't want to risk your application failing in 1 in 1e6 cases, you will need to add some flow/data-integrity control, which really isn't that hard. If you can live with the average program having to be restarted every once in a while, well, that's error correction at the user level...
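Detecting loss really is cheap: a per-datagram sequence number is enough to know exactly what went missing, after which you can re-request it or resynchronize. A minimal stdlib sketch (the loopback addresses and the deliberately skipped sequence number are illustrative):

```python
import socket
import struct

# Receiver bound to an ephemeral loopback port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(5)
addr = recv.getsockname()

# Sender deliberately skips sequence number 2 to simulate a lost datagram.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in (0, 1, 3):
    send.sendto(struct.pack("!I", seq) + b"payload", addr)

expected, lost = 0, []
for _ in range(3):
    data, _ = recv.recvfrom(1500)
    (seq,) = struct.unpack("!I", data[:4])
    while expected < seq:        # gap in the sequence -> something went missing
        lost.append(expected)
        expected += 1
    expected = seq + 1

assert lost == [2]               # receiver knows exactly which datagram to re-request
```

That is the entire detection half of "flow/data integrity control"; the recovery half (re-request or skip) is a policy decision your application gets to make, which is precisely the flexibility raw UDP gives you over TCP.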
Generally, I'd encourage you not to take risks. CPU power is just too cheap, and bandwidth too, in most cases. Try ZeroMQ, which has broadcast communication models, ensures data integrity (resending if necessary), is available for practically all relevant languages, runs on all relevant OSes, and is (at least from my perspective) easier to use than raw UDP sockets.
I am working on an application that needs to encrypt all of its traffic on a LAN environment and so the speed of encryption is important and the cpu time needs to be reduced to let the application have more cpu cycles for itself. I am thus trying to understand what are my existing options besides rolling my own since I'm not a cryptographer.
I am now trying to compile a comprehensive list of all semi-valid options so I can measure and test them:
TLS -- Not considered fast, maybe possible to tune the ciphers
SSH -- Maintaining ssh tunnels may be a burden
UDT -- Should be high performance, how is the optional encryption?
CurveCP -- By DJB so encryption is good, not sure about the transport part
MinimaLT -- DJB contributed crypto know-how, others did the transport
IPSec -- non-trivial to configure
What else have I missed?
Go with TLS. The chances that the provider has heard of it and that acceleration is already present are rather high. SSH would also be an option, but it is generally used for administration.
About the other options:
UDT -- Good question about the optional encryption; a quick search did not turn up much information, so avoid it.
CurveCP -- Anything mainly done by DJB requires a university-grade understanding of cryptography.
MinimaLT -- See above. The main documentation seems to be a paper about MinimaLT.
IPSec -- Non-trivial to configure, and possibly security at the wrong level. Personally I would avoid it; it may be tricky to set up on a cloud provider.
So there you are, in the end transport level security always seems to gravitate towards TLS.
Try to go for a ciphersuite with AES and ECDSA/ECDH(E) if you want a high chance of a speedy implementation and a high level of security.
I want to build a decentralized, reddit-like system using P2P. Basically, I want to retain the basic capabilities of reddit, but make it decentralized, to make it more robust and immune to censorship. This will also allow people to develop different clients to match the way they want to browse it.
Could you recommend good p2p libraries to base my work on? They should be open-source, cross-platform, robust and easy to use. I don't care much about the language, I can adapt.
Disclaimer: warning, self-promotion here !!!
Have you considered JXTA's latest release? It is probably sufficient for what you want to do. Else, we are working on a new P2P framework called Chaupal, but it is not operational yet.
EDIT
There is also what I call the quick-and-dirty UDP solution (which is not so dirty after all, I should call it minimal).
Just implement one server with a public address and start listening for UDP datagrams.
Peers located behind NATs contact the server which can read how their private IP address has been translated into a public IP address from the received datagrams.
You send that information back to the peer, who can forward it to other peers. The server can also help exchange this information between peers.
Then peers can communicate directly (one-to-one) by sending datagrams to these translated addresses.
Simple and easy to implement, but it does not handle lost datagrams, replays, out-of-order delivery, etc. (i.e., the typical things TCP solves for you at the transport level).
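The server half of that scheme is only a few lines, since recvfrom() already hands you the translated address. A sketch using loopback sockets, where "translated" and local addresses coincide; on a real network, the observed address would be the NAT's public mapping:

```python
import socket

# Rendezvous "server" with a known address (loopback here for demonstration).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(5)

# A peer sends any datagram; recvfrom() on the server reveals the address the
# peer appears to come from (behind a NAT: its translated public address).
peer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer.bind(("127.0.0.1", 0))
peer.settimeout(5)
peer.sendto(b"hello", server.getsockname())

_, observed = server.recvfrom(1500)
server.sendto(f"{observed[0]}:{observed[1]}".encode(), observed)

reply, _ = peer.recvfrom(1500)
host, port = reply.decode().rsplit(":", 1)
# On loopback there is no NAT, so the observed address equals the local one.
assert (host, int(port)) == peer.getsockname()
```

Once two peers have exchanged their observed addresses through the server, they send datagrams straight to each other; the outbound packets also open the NAT mappings in each direction (the hole-punching mentioned above).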
I haven't had a chance to use it, but Telehash seems to have been made for this kind of application. Peer2Peer apps have a particular challenge dealing with the restrictions of firewalls... since Telehash is based on UDP, it's well suited for hole-punching through firewalls.
EDIT for static_rtti's comment:
If code velocity is a requirement, libjingle has had a lot of effort put into it, but it is primarily geared towards XMPP. You can port parts of the ICE code and at least get hole-punching. See the libjingle architecture overview for details about their implementation.
Check out CouchDB. It's a decentralized web app platform that uses an HTTP API. People have used it to create "CouchApps" which are decentralized CouchDB-based applications that can spread in a viral nature to other CouchDB servers. All you need to know to write CouchApps is Javascript and learn the CouchDB API. You can read this free online book to learn more: http://guide.couchdb.org
The secret sauce to CouchDB is a Master-to-Master replication protocol that lets information spread like a virus. When I attended the first CouchConf, they demonstrated how efficient this is by throwing a "Couch Party" (which is where you have a room full of people replicating to the person next to them simulating an ad hoc network).
Also, all the code that makes a CouchApp work is public by default in special entities known as Design Documents.
P.S. I've been thinking of doing a similar project, but I don't have a lot of time to devote to it at the moment. GOD SPEED MY BOY!
Two types of problems I want to talk about:
Say you wrote a program you want to encrypt for copyright purposes (e.g. denying unlicensed users the ability to read a certain file, or disabling certain features of the program), but most software-based encryption can be broken by hackers (just look at the number of programs available to crack software into "full versions").
Say you want to distribute software to other users but want to protect against piracy (i.e. another user making a copy of this software and selling it as their own). What effective way is there to guard against this (similar to music protection on CDs, like DRM)? Both from a software perspective and a hardware perspective?
Or do those two belong to the same class of problems (dongles being the hardware/chip-based solution, as many noted below)?
So, can chip- or hardware-based encryption be used? And if so, what exactly is needed? Do you purchase a special kind of CPU or special hardware? What do we need to do?
Any guidance is appreciated, thanks!
Unless you're selling this program for thousands of dollars a copy, it's almost certainly not worth the effort.
As others have pointed out, you're basically talking about a dongle, which, in addition to being a major source of hard-to-fix bugs for developers, is also a major source of irritation for users, and there's a long history of these supposedly "uncrackable" dongles being cracked. AutoCAD and Cubase are two examples that come to mind.
The bottom line is that a determined enough cracker can still crack dongle protection; and if your software isn't an attractive enough target for the crackers to do this, then it's probably not worth the expense in the first place.
Just my two cents.
Hardware dongles, as other people have suggested, are a common approach for this. This still doesn't solve your problem, though, as a clever programmer can modify your code to skip the dongle check - they just have to find the place in your code where you branch based on whether the check passed or not, and modify that test to always pass.
You can make things more difficult by obfuscating your code, but you're still back in the realm of software, and that same clever programmer can figure out the obfuscation and still achieve his desired goal.
Taking it a step further, you could encrypt parts of your code with a key that's stored in the dongle, and require the bootstrap code to fetch it from the dongle. Now your attacker's job is a little more complicated - they have to intercept the key and modify your code to think it got it from the dongle, when really it's hard-coded. Or you can make the dongle itself do the decryption, passing in the code and getting back the decrypted code - so now your attacker has to emulate that, too, or just take the decrypted code and store it somewhere permanently.
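To illustrate that last scheme and why it fails, here is a toy sketch in which the "dongle" merely holds a key used to decrypt a code blob at startup. The keystream construction is illustrative only (a real product would use a vetted cipher such as AES-CTR); the point is that once the key or the decrypted code crosses into the attacker's hands, the protection is gone:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode. Illustrative only; a real
    product would use a vetted cipher such as AES-CTR."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

DONGLE_KEY = b"secret-held-inside-the-dongle"   # hypothetical secret

code_blob = b"print('licensed feature unlocked')"
shipped = xor(code_blob, keystream(DONGLE_KEY, len(code_blob)))  # what you distribute

# Bootstrap at startup: fetch the key from the dongle and decrypt the blob.
recovered = xor(shipped, keystream(DONGLE_KEY, len(shipped)))
assert recovered == code_blob

# The attack described above: once the key (or the recovered plaintext) is
# observed crossing the bus, it can simply be captured and hard-coded.
```

The asymmetry is structural: your bootstrap must eventually produce runnable plaintext on hardware the attacker fully controls, so every variant of this scheme reduces to making the capture step more tedious, not impossible.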
As you can see, just like software protection methods, you can make this arbitrarily complicated, putting more burden on the attacker, but history shows that the tables are tilted in favor of the attacker. While cracking your scheme may be difficult, it only has to be done once, after which the attacker can distribute modified copies to everyone. Users of pirated copies can now easily use your software, while your legitimate customers are saddled with an onerous copy protection mechanism. Providing a better experience for pirates than legitimate customers is a very good way to turn your legitimate customers into pirates, if that's what you're aiming for.
The only - largely hypothetical - way around this is called Trusted Computing, and relies on adding hardware to a user's computer that restricts what they can do with it to approved actions. You can see details of hardware support for it here.
I would strongly counsel you against this route for the reasons I detailed above: You end up providing a worse experience for your legitimate customers than for those using a pirated copy, which actively encourages people not to buy your software. Piracy is a fact of life, and there are users who simply will not buy your software even if you could provide watertight protection, but will happily use an illegitimate copy. The best thing you can do is offer the best experience and customer service to your legitimate customers, making the legitimate copy a more attractive proposition than the pirated one.
They are called dongles, they fit in the USB port (nowadays) and contain their own little computer and some encrypted memory.
You can use them to check the program is valid by testing whether the hardware dongle is present; you can store encryption keys and other info in the dongle, or sometimes have some program functions run in the dongle. It all relies on the dongle being harder to copy and reverse engineer than your software.
See DESkey or HASP (which seem to have been taken over).
Back in the day I saw hardware dongles on the parallel port. Today you'd use USB dongles like this. Wikipedia link.