How to protect OpenCL code from being stolen?

I use OpenCL in my program, and I need to protect the OpenCL code from being read by other users. Compiling the code to a binary may be an option, but if there is some way to decompile it, that option is useless. I can't allow anybody to steal my OpenCL code. How can I protect it? Thanks!

It really depends on how determined an attacker you need to guard against. Generally, any sufficiently determined attacker can reverse-engineer your code if they have access to the compiled form, but it may not be worth their effort.
It might be a performance hit, but you could do things like have the binary image use some self-extracting encryption at runtime, for example communicating with a licence server and only decrypting the rest of the code if the licence is valid. That can suffer from man-in-the-middle or replay attacks, etc. (especially if the underlying hardware can be virtualized), unless you specifically guard against those too. Presumably you don't have the cooperation of the hardware in keeping your code secure, but do you need to consider runtime vulnerability, or just protection of the code in storage?
Chances are compilation is sufficient protection against casual IP theft; obfuscation is a slightly higher bar, and options get progressively more expensive to implement from there.
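As a rough sketch of the runtime-decryption idea (not a definitive implementation): ship the pre-built kernel binary in encrypted form, obtain the key only after a successful licence check, decrypt in memory, and feed the bytes to clCreateProgramWithBinary. The two helper functions below are hypothetical placeholders, and the XOR routine merely stands in for a real authenticated cipher such as AES-GCM.

```cpp
// Sketch only: both helpers are hypothetical placeholders, and the XOR routine
// stands in for a real authenticated cipher (e.g. AES-GCM).
#include <CL/cl.h>
#include <cstddef>
#include <vector>

std::vector<unsigned char> fetch_key_from_licence_server();   // hypothetical
std::vector<unsigned char> load_encrypted_kernel_binary();    // hypothetical

static std::vector<unsigned char> xor_decrypt(std::vector<unsigned char> data,
                                              const std::vector<unsigned char>& key)
{
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] ^= key[i % key.size()];
    return data;                       // placeholder for real decryption
}

cl_program build_protected_program(cl_context ctx, cl_device_id dev)
{
    const std::vector<unsigned char> key  = fetch_key_from_licence_server();
    const std::vector<unsigned char> blob = xor_decrypt(load_encrypted_kernel_binary(), key);

    const unsigned char* bin = blob.data();
    const std::size_t    len = blob.size();
    cl_int status = CL_SUCCESS, err = CL_SUCCESS;

    // The decrypted binary must match the exact device/driver it was built for.
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &dev, &len, &bin, &status, &err);
    if (err != CL_SUCCESS)
        return nullptr;
    if (clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr) != CL_SUCCESS) {
        clReleaseProgram(prog);
        return nullptr;
    }
    return prog;
}
```

Note that the plaintext binary still exists in process memory after decryption, so this only raises the bar rather than removing the risk.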

Use a dynamic compilation method whose authorization is tied to an account (for example a Facebook login) associated with the original OpenCL string, so you'd at least know who stole it. You could even use an encryption scheme of your own to mix the string into a meaningless, clobbered char array.
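A minimal sketch of the "clobbered char array" idea: the kernel source is stored XOR-scrambled and only reassembled in memory right before clCreateProgramWithSource. The byte values and the key below are placeholders for the output of a small offline scrambling tool, and the plaintext still exists in memory at build time, so this is obfuscation rather than real protection.

```cpp
#include <CL/cl.h>
#include <cstddef>
#include <string>

// Placeholder bytes: in a real build these would be the XOR-ed bytes of the
// .cl source, emitted by an offline scrambler. The key 0x5A is arbitrary.
static const unsigned char scrambled_kernel[] = { 0x31, 0x3F, 0x28 /* ...rest of the scrambled source... */ };
static const unsigned char kKey = 0x5A;

static std::string descramble()
{
    std::string src(sizeof(scrambled_kernel), '\0');
    for (std::size_t i = 0; i < sizeof(scrambled_kernel); ++i)
        src[i] = static_cast<char>(scrambled_kernel[i] ^ kKey);
    return src;                        // plaintext source now exists in memory, briefly
}

cl_program create_scrambled_program(cl_context ctx)
{
    const std::string src = descramble();
    const char*  text = src.c_str();
    const size_t len  = src.size();
    cl_int err = CL_SUCCESS;
    return clCreateProgramWithSource(ctx, 1, &text, &len, &err);
}
```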

You can never protect against decompilation.

Today, you can compile and ship binaries, but the format is different for each platform, and perhaps even between devices within that platform, so you'd be shipping a lot of binaries.
In the future, you will be able to use SPIR. It's nascent today, but it is likely to be the long-term solution.
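For the "ship binaries" route, here is a hedged sketch of how the per-device binaries are typically produced: build once from source on each target device, then pull that device's binary out with clGetProgramInfo and write it to a file (the naming scheme below is just an example). The matching load step on the user's machine is clCreateProgramWithBinary, as in the earlier sketch.

```cpp
#include <CL/cl.h>
#include <fstream>
#include <string>
#include <vector>

// Sketch: `prog` was built from source for exactly one device; extract that
// device's binary so the .cl source never has to ship. File naming is illustrative.
bool dump_program_binary(cl_program prog, const std::string& device_name)
{
    std::size_t bin_size = 0;
    if (clGetProgramInfo(prog, CL_PROGRAM_BINARY_SIZES,
                         sizeof(bin_size), &bin_size, nullptr) != CL_SUCCESS)
        return false;

    std::vector<unsigned char> binary(bin_size);
    unsigned char* ptr = binary.data();
    if (clGetProgramInfo(prog, CL_PROGRAM_BINARIES,
                         sizeof(ptr), &ptr, nullptr) != CL_SUCCESS)
        return false;

    // One file per device/driver combination, e.g. "kernels.gfx1030.bin".
    std::ofstream out("kernels." + device_name + ".bin", std::ios::binary);
    out.write(reinterpret_cast<const char*>(binary.data()),
              static_cast<std::streamsize>(binary.size()));
    return out.good();
}
```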

If your app already has some protection (obfuscation, etc.), then as long as the OpenCL code is a static string inside your app it will be protected. However, a clever attacker may be able to get it out as well.
The best way is to precompile and distribute the kernel binaries, but that will be a tedious task, and it, too, may be reverse-engineered by an attacker.

Related

Encrypting programs

So I'm trying to understand the process of encrypting a program I wrote. How does it work? When you encrypt something, can that executable be run without a key? Is there a key that is used?
If you can explain this or add some links, that would be great.
There are many different approaches to protecting code. They all fall under the category of DRM (Digital Rights Management).
These are what come to mind for me:
Encryption: actually modifying the byte code in such a way that it can only be executed if a key or password is provided.
Obfuscation: rearranging code in a way that is still fully executable as is, but tedious to reverse by hand because the code is purposely arranged in a non-standard, confusing order.
Shielding: protecting active code that has been loaded into runtime memory. This can be done either with another process that performs real-time memory checking with checksums, or via in-memory code encryption with the key stored somewhere in memory that only the application knows how to find (a minimal checksum sketch follows below).
There are so many options for DRM that I'd have trouble picking any implementations that stand out to list here. A simple Google search should help point you in the direction of actual implementations.
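To make the shielding bullet a bit more concrete, here is a deliberately simple sketch of checksumming a memory region and comparing it against a known-good value. FNV-1a is used only because it is tiny; a real scheme would use a cryptographic hash and would hide the expected value far more carefully. The region and the expected value are assumptions supplied by the caller.

```cpp
#include <cstddef>
#include <cstdint>

// Minimal illustration of the "shield" idea: periodically hash a region of
// memory that must not change and compare against a known-good value.
// FNV-1a is chosen only for brevity; it is not cryptographically strong.
static std::uint64_t fnv1a(const unsigned char* data, std::size_t len)
{
    std::uint64_t h = 0xcbf29ce484222325ULL;   // FNV offset basis
    for (std::size_t i = 0; i < len; ++i) {
        h ^= data[i];
        h *= 0x100000001b3ULL;                 // FNV prime
    }
    return h;
}

// `region`, `len` and `expected` are supplied by the caller, e.g. a checksum of
// a code section computed at build time and re-verified on a timer at runtime.
bool region_is_intact(const unsigned char* region, std::size_t len,
                      std::uint64_t expected)
{
    return fnv1a(region, len) == expected;
}
```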

Adding Encryption to Solr/lucene indexes

I am currently using Solr to perform search services over some sensitive records.
Since Solr/Lucene provides fast searching by storing inverted indexes of the sensitive information in plain text on disk, there is a requirement to encrypt these index files so that unauthorized people can't access them by bypassing the system's security.
I found that there are similar patches open in the Apache JIRA: AES encrypted directory and Codec for index-level encryption.
AES encrypted directory looks promising, but that patch was implemented for Lucene 3.1; as I am using a newer version, I am not sure whether it can be used with Lucene 5 or higher.
I was wondering if there is a way to implement a security measure that encrypts the indexes, or whether it is possible to write some custom plugin which can encrypt/decrypt the indexes at the I/O level (i.e. FsDirectory)?
The discussion in the comment section of LUCENE-6966 that you have shared is really interesting. Judging from this quote by Robert Muir, I would reason that there is nothing baked into Solr and probably never will be:
More importantly, with file-level encryption, data would reside in an unencrypted form in memory which is not acceptable to our security team and, therefore, a non-starter for us.
This speaks volumes. You should fire your security team! You are wasting your time worrying about this: if you are using lucene, your data will be in memory, in plaintext, in ways you cannot control, and there is nothing you can do about that!
Trying to guarantee anything better than "at rest" is serious business, sounds like your team is over their head.
So you should consider encrypting the storage Solr is using at the OS level. This should be transparent to Solr, but someone who gets into your system should still not be able to copy the Solr data.
This is also the conclusion that the article Encrypting Solr/Lucene indexes by Erick Erickson of Lucidworks draws in the end:
The short form is that this is one of those ideas that doesn't stand up to scrutiny. If you're concerned about security at this level, it's probably best to consider other options, from securing your communications channels to using an encrypting file system to physically divorcing your system from public networks. Of course, you should never, ever, let your working Solr installation be accessible directly from the outside world, just consider the following: http://server:port/solr/update?stream.body=<delete><query>*:*</query></delete>!

Scheduled process - providing key for encrypted config

I have developed a tool that loads a configuration file at runtime. Some of the values are encrypted with an AES key.
The tool will be scheduled to run on a regular basis from a remote machine. What is an acceptable way to provide the decryption key to the program? It has a command-line interface which I can pass it through. I can currently see three options:
Provide the full key via the CLI, meaning the key is available in the clear at the OS config level (i.e. in the cron job)
Hardcode the key into the binary via source code. Not a good idea for a number of reasons (decompiling, and less portable)
Use a combination of 1 and 2, i.e. have a base key in the exe and then accept a partial key via the CLI. This way I can use the same build for multiple machines, but it doesn't solve the problem of decompiling the exe.
It is worth noting that I am not too worried about someone decompiling the exe to get the key; I'm sure there are ways I could address that via obfuscation etc.
Ultimately, if I were really security-conscious, I wouldn't be storing the password anywhere.
I'd like to hear what is considered best practice. Thanks.
I have added the Go tag because the tool is written in Go, just in case there is a magical Go package that might help; other than that, this question is not really specific to a technology.
UPDATE: I am trying to protect the key from external attackers, not the regular physical user of the machine.
Best practice for this kind of system is one of two things:
A sysadmin authenticates during startup, providing a password at the console. This is often extremely inconvenient, but is pretty easy to implement.
A hardware device is used to hold the credential. The most common and effective are called HSMs (Hardware Security Modules). They come in all kinds of formats, from USB keys to plug-in boards to external rack-mounted devices. HSMs come with their own API that you would need to interface with. The main feature of an HSM is that it never divulges its key, and it has physical safeguards to protect against the key being extracted. Your app sends it some data, and it signs the data and returns it. That proves that the hardware module was connected to this machine.
For specific OSes, you can make use of the local secure credential storage, which can provide some reasonable protection. Windows and OS X in particular have these, generally keyed to some credential the admin is required to type at startup. I'm not aware of a particularly effective one for Linux, and in general this is pretty inconvenient in a server setting (because of manual sysadmin intervention).
In every case that I've worked on, an HSM was the best solution in the end. For simple uses (like starting an application), you can get them for a few hundred bucks. For a little more "roll-your-own," I've seen them as cheap as $50. (I'm not reviewing these particularly. I've mostly worked with a bit more expensive ones, but the basic idea is the same.)
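A minimal sketch of the first option, an admin typing a passphrase at the console during start-up, shown in C++ with OpenSSL's PBKDF2 and POSIX termios for consistency with the other examples here; the same pattern carries over directly to Go. The salt and iteration count are placeholders, not recommendations.

```cpp
#include <openssl/evp.h>
#include <termios.h>
#include <unistd.h>
#include <array>
#include <cstring>
#include <iostream>
#include <string>

// Sketch of the "console passphrase" option: the admin types a passphrase at
// start-up (echo disabled), and the AES key is derived from it with PBKDF2,
// so the key itself is never stored on disk.
static std::string read_passphrase()
{
    termios oldt{};
    tcgetattr(STDIN_FILENO, &oldt);
    termios noecho = oldt;
    noecho.c_lflag &= ~ECHO;                      // do not echo keystrokes
    tcsetattr(STDIN_FILENO, TCSANOW, &noecho);

    std::cout << "Passphrase: " << std::flush;
    std::string pass;
    std::getline(std::cin, pass);

    tcsetattr(STDIN_FILENO, TCSANOW, &oldt);      // restore terminal settings
    std::cout << "\n";
    return pass;
}

int main()
{
    const std::string pass = read_passphrase();
    const char salt[] = "example-salt";           // illustrative; use a random, stored salt
    std::array<unsigned char, 32> key{};          // 256-bit AES key

    if (PKCS5_PBKDF2_HMAC(pass.c_str(), static_cast<int>(pass.size()),
                          reinterpret_cast<const unsigned char*>(salt),
                          static_cast<int>(std::strlen(salt)),
                          200000, EVP_sha256(),
                          static_cast<int>(key.size()), key.data()) != 1) {
        std::cerr << "key derivation failed\n";
        return 1;
    }
    // `key` can now decrypt the AES-encrypted values in the configuration file.
    return 0;
}
```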

Qt/C++ store IM Messages offline

I have developed a client/server application for IM with Qt. So far messages are sent and displayed on the client side, but when the program is closed the messages are no longer available, since proper storage is missing.
I would like to keep the messages on the client devices and avoid storing everything on the server. I don't want to use a DB either, since it would need to be installed, and I would like to keep everything quite simple.
Therefore I was thinking of simply storing everything in an encrypted file, but I couldn't think of a proper format for doing that.
Has anyone experience with that or any suggestions how to save the messages from different clients?
You do have a concern with data integrity in the face of unplanned termination of your software, due to bugs in your code, transient hardware errors, power outages, etc. That's the problem that everyone using "plain files" usually ignores, as it's a hard problem to solve and requires extensive testing and know-how.
That's why you should use an embedded database. It will solve that, and many other problems as well. SQLite is a de-facto standard for applications such as yours. You can add any encryption you wish, as SQLite provides hooks that let you implement writing and reading of the pages. You'd do the encryption there.
One little-appreciated aspect of SQLite specifically is the amount of testing it gets during development. The test harness, most of it non-public, is probably worth way more than the published SQLite code (>1M USD). SQLite is used in aerospace applications, e.g. IIRC in code classified as DAL-B under DO-178B.
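If you go the embedded-database route, a minimal sketch with Qt's bundled SQLite driver might look like the following. The file name and schema are made up for illustration, and the encryption itself is not shown: it would come from an encrypting SQLite build (for example SQLCipher) or from your own page-level read/write hooks, as described above.

```cpp
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QString>

// Sketch of local message storage using Qt's bundled SQLite driver.
QSqlDatabase openMessageStore()
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
    db.setDatabaseName("messages.db");               // illustrative file name
    if (!db.open())
        return db;                                   // caller checks db.isOpen()

    QSqlQuery create(db);
    create.exec("CREATE TABLE IF NOT EXISTS messages ("
                "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
                "  peer TEXT NOT NULL,"
                "  sent_at TEXT NOT NULL,"
                "  body TEXT NOT NULL)");
    return db;
}

bool storeMessage(QSqlDatabase& db, const QString& peer,
                  const QString& sentAt, const QString& body)
{
    QSqlQuery insert(db);
    insert.prepare("INSERT INTO messages (peer, sent_at, body) VALUES (?, ?, ?)");
    insert.addBindValue(peer);
    insert.addBindValue(sentAt);
    insert.addBindValue(body);
    return insert.exec();                            // false on constraint/IO errors
}
```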

Protecting hard-coded data that cannot be available to the user, such as a pass phrase

My program needs to decrypt an encrypted file after it starts up to load data it requires to function. This data cannot be available to the user.
I'm not a cryptography expert, so what is the best way to protect hardcoded passphrases and other tidbits of data from users, debugging software and disassembling software?
I understand that this is probably bad practice but it's essential for me (at least for now).
If there are other ways to protect my data from the above 3, could you let me know what those are?
Short answer: you can't. Once the software is on the user's disk, a sufficiently smart and determined user will be able to extract the secret data from it.
For a longer answer, see "Storing secrets in software" on the security.SE blog.
what is the best way to protect hardcoded passphrases and other tidbits of data from users, debugging software and disassembling software?
Request the password from the user and don't hardcode the passphrase. This is the ONLY way to be safe.
If you can't do that and the passphrase must be hardcoded in the app, then all bets are off.
The simplest thing you can do (if you don't have the luxury of doing something elaborate, which will only delay the inevitable) is to delegate the responsibility to the user of the system.
I mean explicitly state that your software is only as secure as the "machine" it runs on.
If the attacker has access to start poking around the file system, then your app would be the least of the user's concerns.
In my experience, this type of question is often motivated by one of four reasons:
Your application is connecting to a restricted remote service, such as a database server.
You do not want your users to mess with configuration settings, which in turn do not really have to be kept confidential as long as they are unmodified.
Copy protection of your own software.
Copy protection of data.
As Illmari Karonen wrote in his answer, you can't do exactly what you are asking for, and this means in particular that 3 and 4 cannot be solved by cryptography alone.
However, if your reason for asking is either 1 or 2, you have ended up asking the questions you do because you made some bad decisions earlier in your design process. For instance, in the case of 1, you should not make a restricted service accessible from systems you do not trust completely. The typical safe solution is to introduce a middle tier that is the only client of your restricted resource, and which you can make public.
In the case of 2, the best solution is often to use exactly the same logic for checking your configuration files (or registry settings, or whatever) when they are loaded at start-up as you use for checking consistency when the user enters them through your preferred configuration UI. If you spot an inconsistency, just bring up your configuration UI and highlight the problem.
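As a sketch of that last idea (the Config fields and checks here are invented purely for illustration): a single validate() routine is shared by the start-up loader and the configuration UI, so a tampered or inconsistent file simply triggers the same error handling the UI already has, with no cryptography involved.

```cpp
#include <string>
#include <vector>

// Illustrative only: a made-up Config struct and checks. The point is that the
// same validate() runs both when the file is loaded at start-up and when the
// user edits values in the configuration UI.
struct Config {
    int         port = 0;
    std::string cacheDir;
};

std::vector<std::string> validate(const Config& cfg)
{
    std::vector<std::string> problems;
    if (cfg.port < 1 || cfg.port > 65535)
        problems.push_back("port must be between 1 and 65535");
    if (cfg.cacheDir.empty())
        problems.push_back("cache directory must not be empty");
    return problems;
}

// At start-up: if validate() reports problems, open the configuration UI and
// highlight them instead of trusting the file.
// In the UI: run validate() again before accepting the user's changes.
```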
