Can anyone help me out with this issue? When I run my decryption code, it asks for the passphrase every time and fails if it isn't entered. I want to automate this so that the passphrase is not requested every time. I'm using Kleopatra to import/export public/secret keys, with Gpg4win installed.
I can understand your frustration; however, I believe that behaviour is intentional. If the private key were kept unlocked for a period of time, or the passphrase were cached, that would open a security hole through which either one could leak.
While OSs have security measures in place to prevent other processes from reading memory that doesn't belong to them, there are still ways around this. The ones that come to mind first are more likely to be pulled off in a research lab than in real life, but security experts would still raise red flags.
It is possible to use the command line to decrypt multiple files at once; see how-to-use-gpg-to-decrypt-multiple-files-to-a-different-directory. Hope that helps.
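If the goal is lights-out automation despite that trade-off, one option is to script the decryption and feed the passphrase to gpg yourself. Below is a minimal Go sketch of that idea, shelling out to the GnuPG 2.x command-line tool with flags that suppress the pinentry prompt; the directory layout and function name are made up, and the passphrase still has to be stored somewhere, which is exactly the risk described above.

```go
package batchdecrypt

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// DecryptAll shells out to gpg for every *.gpg file in dir, writing the
// plaintext next to each file. The passphrase is supplied on stdin so gpg
// never prompts interactively (GnuPG 2.x flags).
func DecryptAll(dir, passphrase string) error {
	files, err := filepath.Glob(filepath.Join(dir, "*.gpg"))
	if err != nil {
		return err
	}
	for _, f := range files {
		out := strings.TrimSuffix(f, ".gpg")
		cmd := exec.Command("gpg",
			"--batch", "--yes",
			"--pinentry-mode", "loopback",
			"--passphrase-fd", "0", // read the passphrase from stdin
			"--output", out,
			"--decrypt", f)
		cmd.Stdin = strings.NewReader(passphrase + "\n")
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("decrypting %s: %w", f, err)
		}
	}
	return nil
}
```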
So I’m trying to understand the process of encrypting a program I wrote. How does it work? When you encrypt something, can that executable be run without a key? Is there a key that is used?
If you can explain this or add some links that would be great.
There are many different approaches to protecting code. They all fall under the category of DRM (Digital Rights Management).
These are what come to mind for me:
Encryption: actually modifying the bytecode in such a way that it can only be executed if a key or password is provided.
Obfuscation: rearranging code in a way that is still fully executable as is, but tedious to reverse by hand because it is purposely arranged in a non-standard, confusing order.
Shielding: protecting active code that has been loaded into memory at runtime. This can be done either with another process that performs real-time memory checking with checksums, or via in-memory code encryption, with the key stored somewhere in memory that only the application knows how to find.
There are so many options for DRM that I'd have trouble picking any implementations that stand out enough to list here. A simple Google search should point you in the direction of actual implementations.
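As a toy illustration of the "shielding" idea, here is a hedged Go sketch that hashes the running executable at startup and compares it against an embedded checksum. The expected value is a placeholder that a build pipeline would have to stamp in after compilation, and a determined attacker can of course patch both the code and the constant; like all DRM, this only raises the bar.

```go
package shield

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"io"
	"os"
)

// expectedSum is a placeholder; a real build pipeline would stamp the
// binary's hash in here after compilation (hypothetical scheme).
var expectedSum = "0000000000000000000000000000000000000000000000000000000000000000"

// VerifySelf hashes the running executable and compares it to the
// embedded checksum, refusing to continue if the file was modified.
func VerifySelf() error {
	path, err := os.Executable()
	if err != nil {
		return err
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if hex.EncodeToString(h.Sum(nil)) != expectedSum {
		return errors.New("binary appears to have been modified")
	}
	return nil
}
```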
You compile your program into a binary executable, and you want to protect it before you send it to someone. Even better, make the executable runnable only with a password and only within a certain period of time -- when the period expires, the program can no longer be run. How can you achieve such goals?
I've read some posts on this forum; the closest is to hard-wire the password inside your source code and do a comparison when the program is run. However, I don't think this is secure, especially when your source code is in Perl or Java.
Thanks in advance!
Short of using the TPM, there is no secure solution. Both the password and the time check can be bypassed by modifying the program to not perform the checks at all. You can only utilize obscurity, meaning you can only raise the bar for the amount of effort needed to bypass your measures.
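To make that concrete, here is a minimal Go sketch of the kind of obscurity-only gate being discussed: a hardcoded expiry date plus a hashed password baked into the binary. Every value is invented for the example, and anyone with a hex editor or debugger can simply remove both checks, which is exactly the point above.

```go
package gate

import (
	"bufio"
	"crypto/sha256"
	"crypto/subtle"
	"errors"
	"fmt"
	"os"
	"strings"
	"time"
)

// Both values are baked into the binary, so this only raises the bar for
// an attacker; it is obscurity, not security.
var (
	expiry       = time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)    // hypothetical cut-off date
	passwordHash = sha256.Sum256([]byte("correct horse battery")) // hash of the expected password
)

// Gatekeeper refuses to run after the expiry date or without the password.
func Gatekeeper() error {
	if time.Now().After(expiry) {
		return errors.New("this build can no longer be run")
	}
	fmt.Print("Password: ")
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return err
	}
	entered := sha256.Sum256([]byte(strings.TrimSpace(line)))
	if subtle.ConstantTimeCompare(entered[:], passwordHash[:]) != 1 {
		return errors.New("wrong password")
	}
	return nil
}
```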
I have developed a tool that loads a configuration file at runtime. Some of the values are encrypted with an AES key.
The tool will be scheduled to run on a regular basis from a remote machine. What is an acceptable way to provide the decryption key to the program? It has a command-line interface which I can pass it through. I can currently see three options:
1. Provide the full key via the CLI, meaning the key is available in the clear at the OS configuration level (e.g. in a cron job).
2. Hardcode the key into the binary via the source code. Not a good idea for a number of reasons (decompiling, and it's less portable).
3. Use a combination of 1 and 2, i.e. have a base key in the exe and then accept a partial key via the CLI. This way I can use the same build for multiple machines, but it doesn't solve the problem of decompiling the exe.
It is worth noting that I am not too worried about someone decompiling the exe to get the key; I'm sure there are ways I could address that via obfuscation etc.
Ultimately, if I were really security-conscious, I wouldn't be storing the password anywhere.
I'd like to hear what is considered best practice. Thanks.
I have added the Go tag because the tool is written in Go, just in case there is a magical Go package that might help; other than that, this question is not really specific to a technology.
UPDATE: I am trying to protect the key from external attackers, not the regular physical user of the machine.
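For readers unfamiliar with the setup, here is a minimal sketch of what the decryption side of such a tool might look like in Go, independent of where the key ultimately comes from (a CLI flag, an environment variable, or the combined scheme from option 3). The hex/base64 encoding and the names are assumptions for the example, not the asker's actual format.

```go
package cfg

import (
	"crypto/aes"
	"crypto/cipher"
	"encoding/base64"
	"encoding/hex"
	"errors"
)

// DecryptValue decrypts one base64-encoded config value with AES-256-GCM.
// keyHex is the 32-byte key in hex, obtained however the deployment
// decides to provide it (hypothetical convention).
func DecryptValue(keyHex, encoded string) (string, error) {
	key, err := hex.DecodeString(keyHex)
	if err != nil {
		return "", err
	}
	blob, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	if len(blob) < gcm.NonceSize() {
		return "", errors.New("ciphertext too short")
	}
	nonce, ciphertext := blob[:gcm.NonceSize()], blob[gcm.NonceSize():]
	plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
	if err != nil {
		return "", err
	}
	return string(plaintext), nil
}
```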
Best practice for this kind of system is one of two things:
A sysadmin authenticates during startup, providing a password at the console. This is often extremely inconvenient, but is pretty easy to implement.
A hardware device is used to hold the credential. The most common and effective are called HSMs (Hardware Security Modules). They come in all kinds of formats, from USB keys to plug-in boards to external rack-mounted devices. HSMs come with their own API that you would need to interface with. The main feature of an HSM is that it never divulges its key, and it has physical safeguards to protect against it being extracted. Your app sends it some data, and it signs the data and returns it. That proves that the hardware module was connected to this machine.
For specific OSes, you can make use of the local secure credential storage, which can provide some reasonable protection. Windows and OS X in particular have these, generally keyed to some credential the admin is required to type at startup. I'm not aware of a particularly effective one for Linux, and in general this is pretty inconvenient in a server setting (because of manual sysadmin intervention).
In every case that I've worked on, an HSM was the best solution in the end. For simple uses (like starting an application), you can get them for a few hundred bucks. For a little more "roll-your-own," I've seen them as cheap as $50. (I'm not reviewing these particularly. I've mostly worked with a bit more expensive ones, but the basic idea is the same.)
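For completeness, here is a rough Go sketch of the first option, prompting a sysadmin for the passphrase once at service start; it uses the golang.org/x/term package so nothing is echoed, and the passphrase lives only in the process's memory.

```go
package startup

import (
	"fmt"
	"os"

	"golang.org/x/term"
)

// AskPassphrase prompts on the console at start-up. Nothing is echoed and
// nothing is written to disk; if the process restarts, someone has to
// type the passphrase again.
func AskPassphrase() ([]byte, error) {
	fmt.Fprint(os.Stderr, "Decryption passphrase: ")
	pw, err := term.ReadPassword(int(os.Stdin.Fd()))
	fmt.Fprintln(os.Stderr)
	return pw, err
}
```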
I edited this question to clarify why I am asking it again (I had weak Google-Fu and found these rather old 1 2 3 pretty-much-duplicates only after posting).
Approaches to accessing a password-protected resource that I've seen in the wild:
Plaintext storage in script (might often end up being shared, or in a Dropbox)
Plaintext storage in a config script
You can do password = readline("Password: "), but of course the password ends up in plaintext in the console (and thus in console logs etc.), so you might as well store it in a plaintext config file.
I found this little trick to avoid displaying the password in the Terminal, but running system("stty -echo") on OS X Mavericks leads to the error stty: stdin isn't a terminal, so I guess it wouldn't be particularly portable.
Using tcltk. Has the unfortunate effect of making RStudio crash, and it is difficult to install.
keychain. It's not on CRAN, so I don't think I can use it as a first-line approach; I'd also like a bit more detail about where and how passwords are stored on various systems (i.e. will it end up in plaintext on Windows?).
Access tokens, OAuth etc. seem to have similar problems.
I don't know of any R packages that use PGP for connections. That would probably also be a bit difficult for newbie users.
I'm not asking mainly for myself; I want to provide somewhat sensible defaults for nontechnical users who might otherwise store plaintext passwords enabling access to sensitive data in their Dropbox.
Unlike others who asked similar questions, I could also change the server-side of things if I had a better approach.
Are there best-practice approaches that I'm currently missing? My focus on interactive sessions is because I assume that's how most nontechnical types use R, but of course it would be nice if it worked during e.g. knitr report generation too.
Some suggestions for solving your problem securely. These solutions apply to all programming languages.
1. Establish a secure connection to your resource outside of R, such as an SSL tunnel.
2. If you need a password in R to establish a secure connection, read it from a secure config file and remove the password variable as soon as you no longer need it. A secure config file is one that is not part of your code repository (Git, SVN, ...): manage your secrets independently of your code, i.e. keep your code and your secrets separate. One simple way is to put the secret in your private user home directory. Then you have delegated the problem to your operating system, and your secret is as safe as your OS and your home directory. Please check the permissions on your home directory and enable file-system encryption if it is not already on. Note that this is how Maven handles passwords.
3. You get more security if you also encrypt the password/secret config file; then you have a second line of defense.
For most applications, point 2 is enough.
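Since the advice is language-agnostic, here is a rough sketch of point 2 in Go: read the secret from a file kept in the user's home directory, outside the code repository, and refuse to use it if its (Unix-style) permissions are too open. The path is hypothetical.

```go
package secrets

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// LoadSecret reads a secret from ~/.myapp/secret (a made-up location) and
// rejects the file if anyone other than the owner can read it.
func LoadSecret() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	path := filepath.Join(home, ".myapp", "secret")
	info, err := os.Stat(path)
	if err != nil {
		return "", err
	}
	if info.Mode().Perm()&0o077 != 0 {
		return "", fmt.Errorf("%s must only be readable by its owner (chmod 600)", path)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}
```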
Note: make sure that your secret is not deployed with your code. You need a separate way to manage and deploy your secrets to production systems.
Note: make sure that if your program crashes, your secret is no longer sitting in memory.
Note: always use strong encryption algorithms. Don't implement your own security algorithm; that is a highly complex task. It is better to use standard implementations of strong encryption algorithms.
My program needs to decrypt an encrypted file after it starts up to load data it requires to function. This data cannot be available to the user.
I'm not a cryptography expert, so what is the best way to protect hardcoded passphrases and other tidbits of data from users, debugging software and disassembling software?
I understand that this is probably bad practice but it's essential for me (at least for now).
If there are other ways to protect my data from the above 3, could you let me know what those are?
Short answer: you can't. Once the software is on the user's disk, a sufficiently smart and determined user will be able to extract the secret data from it.
For a longer answer, see "Storing secrets in software" on the security.SE blog.
what is the best way to protect hardcoded passphrases and other tidbits of data from users, debugging software and disassembling software?
Request the password from the user and don't hardcode the passphrase. This is the ONLY way to be safe.
If you can't do that and the passphrase must be hardcoded in the app, then all bets are off.
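If you do take the recommended route, the usual pattern is to derive the encryption key from whatever the user types rather than storing any key at all. A minimal Go sketch, assuming PBKDF2 with a per-file salt stored next to the ciphertext (the iteration count and key length are illustrative):

```go
package kdf

import (
	"crypto/sha256"

	"golang.org/x/crypto/pbkdf2"
)

// DeriveKey turns a user-supplied passphrase into a 32-byte AES key.
// The salt is stored alongside the ciphertext (it is not secret); the
// iteration count is a deliberate slowdown against brute force.
func DeriveKey(passphrase, salt []byte) []byte {
	return pbkdf2.Key(passphrase, salt, 200000, 32, sha256.New)
}
```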
The simplest thing you can do (if you don't have the luxury of doing something elaborate, which would anyway only delay the inevitable) is to delegate the responsibility to the user of the system.
I mean explicitly state that your software is only as secure as the "machine" it runs on.
If an attacker has enough access to start poking around the file system, then your app would be the least of the user's concerns.
In my experience, this type of question is often motivated by one of four reasons:
1. Your application is connecting to a restricted remote service, such as a database server.
2. You do not want your users to mess with configuration settings, which in turn do not really have to be kept confidential as long as they are unmodified.
3. Copy protection of your own software.
4. Copy protection of data.
Like Illmari Karonen wrote in his answer, you can't do exactly what you are asking for, and this means in particular that 3 & 4 cannot be solved by cryptography alone.
However, if your reason for asking is either 1 or 2, you have ended up asking the questions you do, because you have made some bad decisions earlier in your design process. For instance, in case of 1, you should not make a restricted service accessible from systems you do not trust completely. The typical safe solution is to introduce a middle tier that is the only client to your restricted resource, and which you can make public.
In case of 2, the best solution is often to use exactly the same logic for checking your configuration files (or registry settings or whatever) when they are loaded at startup as you use for checking consistency when the user enters them through your preferred configuration user interface. If you spot an inconsistency, just bring up your configuration UI and highlight the problem.
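A small sketch of that idea in Go: a single validation routine shared by the start-up loader and the settings UI, so there is nothing secret to protect in the file itself. The fields are invented for the example.

```go
package settings

import "fmt"

// Config holds the settings the application actually needs; the fields
// here are made up for illustration.
type Config struct {
	Port     int
	CacheDir string
}

// Validate is the one consistency check, called both when the file is
// loaded at start-up and when the user edits values in the settings UI.
// If it fails at start-up, open the configuration UI and highlight the
// offending field instead of trying to "protect" the file.
func (c Config) Validate() error {
	if c.Port < 1 || c.Port > 65535 {
		return fmt.Errorf("port %d is out of range", c.Port)
	}
	if c.CacheDir == "" {
		return fmt.Errorf("cache directory must be set")
	}
	return nil
}
```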