I have an encrypted file and I know its password, but I cannot find any properties of the file, such as the algorithm or program that was originally used to encrypt it.
I am thinking of trying 'gpg', 'openssl', and other tools that might decrypt this file without corrupting it. Although I have taken a backup, it is a huge file that takes roughly 3 hours to back up, so I am being extra careful that it does not get corrupted.
Thanks,
The general idea of encryption is that the result should be indistinguishable from noise.
That automatically means that, unless you already know all the parameters, you won't be able to infer them from the encrypted file if the encryption was done right.
Unless, of course, you brute-force all the possible types of encryption and their parameters (good luck with that!).
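If you do go down the brute-force road, automate it and only ever write to new output files so the original is never touched. A rough sketch of that idea, assuming the file might have been produced by the `openssl enc` command line (the file name, password, and cipher list below are placeholders):

    # Hypothetical sketch: try a few common "openssl enc" cipher/digest
    # combinations against a COPY of the file, never the original.
    import itertools
    import subprocess

    ENCRYPTED = "mystery.bin"          # placeholder: path to a copy of the file
    PASSWORD = "the-known-password"    # placeholder

    ciphers = ["aes-256-cbc", "aes-128-cbc", "des3"]
    digests = ["md5", "sha256"]        # openssl's default key-derivation digest changed over time

    for cipher, digest in itertools.product(ciphers, digests):
        out = f"try_{cipher}_{digest}.bin"
        result = subprocess.run(
            ["openssl", "enc", "-d", f"-{cipher}", "-md", digest,
             "-in", ENCRYPTED, "-out", out,
             "-pass", f"pass:{PASSWORD}"],
            capture_output=True,
        )
        # A zero exit code usually just means the padding checked out; it is a
        # hint worth inspecting by hand, not proof of the right parameters.
        if result.returncode == 0:
            print("possible match:", cipher, digest, "->", out)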
In a nutshell, you are going to have to find out what program was used to encrypt the file, and maybe, possibly (but probably not) have to know how the program was configured. People have been using computers for encryption almost as long as computers have existed. There are hundreds of different encryption programs that have been published over the years.
Your best bet is to get the developer to tell you which program he used and how to use it.
Your next best bet is to search through your backups of every machine he used looking for some clue as to what program he used.
If that doesn't help, it's time to start trying every encryption program you can get hold of. Obviously, you'll want to start with the newer ones, and the more popular ones, and the ones that run on whatever operating systems he was known to use.
Considering the size of the file, it's likely that you're dealing with an encrypted archive or an image of an encrypted file system. So don't limit your search to specialized encryption utilities. You'll also want to try all of the different archivers and all of the different file systems and operating systems that offer encryption as a feature.
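Before trying programs blindly, it is also worth peeking at the first bytes of the file: many containers announce themselves with a recognizable signature (running the `file` utility on it is the zero-effort version of this check). A small, non-exhaustive sketch with a placeholder file name:

    # Compare the file header against a few well-known container signatures.
    # The signature list is illustrative, not exhaustive.
    SIGNATURES = {
        b"Salted__": "openssl enc with a password-derived key",
        b"-----BEGIN PGP MESSAGE-----": "ASCII-armored OpenPGP (gpg)",
        b"7z\xbc\xaf\x27\x1c": "7-Zip archive (possibly AES-encrypted)",
        b"PK\x03\x04": "ZIP archive (possibly password-protected)",
        b"LUKS\xba\xbe": "LUKS encrypted volume",
    }

    with open("mystery.bin", "rb") as f:    # placeholder file name
        header = f.read(64)

    for magic, description in SIGNATURES.items():
        if header.startswith(magic):
            print("Looks like:", description)
            break
    else:
        print("No known signature; first bytes:", header[:16].hex())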
If you've tried every encryption program without success, and you still haven't blown your budget; then the next step is going to blow your budget. I'm pretty confident in saying that because if your organization was the kind that could afford to take the next step, then you wouldn't be asking how to do it on StackOverflow. Heck! You probably would not even be allowed to use StackOverflow without written permission from three levels up the hierarchy.
So I'm trying to understand the process of encrypting a program I wrote. How does it work? When you encrypt something, can that executable be run without a key? Is there a key that is used?
If you can explain this or add some links that would be great.
There are many different approaches to protecting code. They all fall under the category of DRM (Digital Rights Management).
These are what come to mind for me:
Encryption, actually modifying the bytecode in such a way that it can only be executed if a key or password is provided.
Obfuscation, rearranging code in a way that is still fully executable as is, but reversing it by hand is tedious because the code is purposely arranged in a non-standard, confusing order.
Shielding, protecting active code that has been loaded into runtime memory. This can be done either with a separate process that performs real-time memory checking with checksums, or via in-memory code encryption with the key stored somewhere in memory that only the application knows how to find.
There are so many options for DRM that I'd have trouble picking any implementations that stand out to list here. A simple Google search should help point you in the direction of actual implementations.
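As a toy illustration of the first item only (this is XOR with a hashed password, not real encryption, and every name and the payload are made up): the shipped bytes are unreadable on disk, and the code only runs if the right password is supplied.

    # Toy sketch of key-gated execution; do NOT use XOR like this for real DRM.
    import hashlib

    def keystream(password: str, length: int) -> bytes:
        # Derive a repeatable byte stream from the password.
        key = hashlib.sha256(password.encode()).digest()
        return (key * (length // len(key) + 1))[:length]

    def xor(data: bytes, password: str) -> bytes:
        return bytes(a ^ b for a, b in zip(data, keystream(password, len(data))))

    plain_code = b'print("payload is running")'
    locked = xor(plain_code, "correct horse")    # only the locked bytes are shipped

    # At runtime: the wrong password yields garbage and the exec simply fails.
    unlocked = xor(locked, input("password: "))
    exec(unlocked.decode(errors="replace"))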
I'm using an Instant Messaging software, and I suspect that it is retaining a lot of information about my machine (such as my MAC address) and possibly leaking it. I decided I want to check the software's local DBs and see what it saves locally.
I have been able to locate, using the software's own log dump and Procmon, the interesting DBs. However, they are SQLite DBs that are key-protected.
Do I have any way of knowing what the format and size of the key will be? Will it be hex?
How can I efficiently continue my research? Using Procmon, I was able to observe the software using the key-protected DB from the first time it is opened. However, I couldn't detect any 'interesting' local file that the software uses that could hint at the key's location - apart from several Windows Registry values that are being used - but I'm not sure how to approach that.
Sorry if I have mistakes in my English, and thanks in advance.
Do I have any way of knowing what the format and size of the key will be? Will it be hex?
The key is just in plaintext (just like normal passwords) and the size is (also like passwords) defined by the creator of the database.
How can I efficiently continue my research?
I would recommend reverse engineering the application and looking for the part where the connection to the database gets initiated. For that, you can use dynamic analysis (with a debugger) or static analysis (analysing the binary with a disassembler).
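One quick sanity check before diving into a disassembler: confirm the database file itself is encrypted rather than merely access-protected. A plaintext SQLite file starts with the 16-byte magic "SQLite format 3\0"; wrappers such as SQLCipher (a guess about what the software might use) encrypt the whole file, so that magic is absent. A minimal sketch with a placeholder path:

    SQLITE_MAGIC = b"SQLite format 3\x00"

    with open("messages.db", "rb") as f:    # placeholder path to the suspect DB
        header = f.read(16)

    if header == SQLITE_MAGIC:
        print("Plain SQLite file - the file itself is not encrypted.")
    else:
        print("No SQLite magic; likely SQLCipher or another encrypted wrapper.")
        print("First bytes:", header.hex())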
Can anyone help me out with this issue? When I run my decryption code, it asks for the passphrase every time and fails if it is not entered. I want to automate this so that the passphrase is not asked for every time. I'm using Kleopatra to import/export public/secret keys, with Gpg4win installed.
I can understand your frustration; however, I believe that functionality is intended. If the private key were kept open for a period of time, or the passphrase cached, that would pose a security hole through which either one could leak.
While OSs have security measures in place to prevent other processes from reading memory that doesn't belong to them, there are still ways around this. The ways that come to mind first are more likely to be done in a research lab than real life, but security experts would still throw up red flags.
It is possible to use a command line to decrypt multiple files all at once, see how-to-use-gpg-to-decrypt-multiple-files-to-a-different-directory. Hope that helps.
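If you accept that trade-off and still want unattended decryption, one hedged approach (assuming GnuPG 2.1 or later is on the PATH; file and directory names below are placeholders) is to keep the passphrase in a file that only your account can read and pass it with --passphrase-file:

    # Sketch: batch-decrypt every *.gpg file in a folder without a pinentry prompt.
    import pathlib
    import subprocess

    passphrase_file = pathlib.Path("passphrase.txt")   # restrict permissions, e.g. chmod 600

    for encrypted in pathlib.Path("incoming").glob("*.gpg"):
        output = encrypted.with_suffix("")              # strip the .gpg extension
        subprocess.run(
            ["gpg", "--batch", "--yes",
             "--pinentry-mode", "loopback",
             "--passphrase-file", str(passphrase_file),
             "--output", str(output),
             "--decrypt", str(encrypted)],
            check=True,
        )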
My program needs to decrypt an encrypted file after it starts up to load data it requires to function. This data cannot be available to the user.
I'm not a cryptography expert, so what is the best way to protect hardcoded passphrases and other tidbits of data from users, debugging software and disassembling software?
I understand that this is probably bad practice but it's essential for me (at least for now).
If there are other ways to protect my data from the above 3, could you let me know what those are?
Short answer: you can't. Once the software is on the user's disk, a sufficiently smart and determined user will be able to extract the secret data from it.
For a longer answer, see "Storing secrets in software" on the security.SE blog.
what is the best way to protect hardcoded passphrases and other tidbits of data from users, debugging software and disassembling software?
Request the password from the user and don't hardcode the passphrase. This is the ONLY way to be safe.
If you can't do that and the passphrase must be hardcoded in the app, then all bets are off.
The simplest thing you can do (if you don't have the luxury of doing something elaborate, which would only delay the inevitable) is to delegate the responsibility to the user of the system.
I mean explicitly state that your software is only as secure as the machine it runs on.
If the attacker has enough access to start poking around the file system, then your app would be the least of the user's concerns.
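To make the first suggestion concrete, here is a minimal sketch (the iteration count and salt handling are illustrative, not a recommendation) of prompting for the passphrase at startup and deriving the working key from it, so nothing secret ships in the binary:

    import getpass
    import hashlib
    import os

    def derive_key(passphrase: str, salt: bytes) -> bytes:
        # PBKDF2-HMAC-SHA256; tune the iteration count for your environment.
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

    salt = os.urandom(16)           # in practice, store the salt alongside the data
    passphrase = getpass.getpass("Passphrase: ")
    key = derive_key(passphrase, salt)
    # `key` can now feed whatever decryption routine the application uses.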
In my experience, this type of question is often motivated by one of four reasons:
Your application is connecting to a restricted remote service, such as a database server.
You do not want your users to mess with configuration settings, which in turn do not really have to be kept confidential as long as they are unmodified.
Copy protection of your own software.
Copy protection of data.
Like Illmari Karonen wrote in his answer, you can't do exactly what you are asking for, and this means in particular that 3 & 4 cannot be solved by cryptography alone.
However, if your reason for asking is either 1 or 2, you have ended up asking the questions you do, because you have made some bad decisions earlier in your design process. For instance, in case of 1, you should not make a restricted service accessible from systems you do not trust completely. The typical safe solution is to introduce a middle tier that is the only client to your restricted resource, and which you can make public.
In case of 2, the best solution is often to use exactly the same logic for checking your configuration files (or registry settings or what ever) when they are loaded at start up, as you use for checking consistency when the user enters them using your preferred configuration user interface. If you spot an inconsistency, just bring up your configuration UI and highlight the problem.
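For case 2, that can be as simple as one validation routine shared by the start-up loader and the settings UI, so a hand-edited file is caught and surfaced rather than silently trusted. A made-up sketch (the field names and rules are invented for the example):

    import json

    def validate_config(cfg: dict) -> list:
        problems = []
        if not isinstance(cfg.get("port"), int) or not 1 <= cfg["port"] <= 65535:
            problems.append("port must be an integer between 1 and 65535")
        if cfg.get("mode") not in ("local", "remote"):
            problems.append("mode must be 'local' or 'remote'")
        return problems

    def load_config(path: str) -> dict:
        with open(path) as f:
            cfg = json.load(f)
        problems = validate_config(cfg)     # the same checks the settings UI runs
        if problems:
            # Rather than refusing to start, bring up the configuration UI and
            # highlight the offending fields (represented here by a print).
            print("Configuration problems:", problems)
        return cfg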
In some of the books I've read, it is stated that it is good to hide yellow screens of death (obviously), not only because they look quite informal to users, but also because hackers can use the information to attack your website.
My question is this: how can a hacker use this information? How does a stack trace of basic .NET operations help hackers?
I attached a yellow screen of death that I encountered on one of the websites that I created a long time ago and it sparked my interest.
(The error is that it fails when attempting to cast a query string parameter to an int. Yeah, I know it's bad code, I wrote it many years ago ;)
If you're writing secure code, the YSOD shouldn't provide a hacker with the ability to hack your application. If however, your code is insecure, then the YSOD could provide the attacker with essential information to allow them to carry out their attack.
Say, for example, you have written your own forum software. You have put in lots of validation for when the user writes posts to prevent XSS attacks and such, but your validation is faulty. If a hacker can bring up the YSOD when they make a post, the stack trace shown could potentially show them the cracks in your validation and exploit them to create XSS attacks or obtain member details or passwords and such.
The YSOD on its own is no threat, but to a hacker, it can be a very useful way of finding flaws in your application's security.
There are several different ways this could compromise your application... but most of them would only make an attack easier; the vulnerability would probably already have to be there. For example, you could easily reveal a hard-coded password or salt, or reveal a line of code accepting user input without properly sanitizing it.
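As a toy illustration of that last point (in Python with sqlite3 rather than ASP.NET, with made-up table and column names): validating the type the page expects and binding the value as a parameter removes exactly the kind of mistake a leaked stack trace would advertise.

    import sqlite3

    conn = sqlite3.connect(":memory:")                  # stand-in for the real database
    conn.execute("CREATE TABLE articles (artId INTEGER, title TEXT)")
    conn.execute("INSERT INTO articles VALUES (42, 'hello')")

    raw_id = "42"                                       # imagine this came from ?id= in the URL

    # Vulnerable pattern: user input pasted straight into the SQL string.
    # query = f"SELECT title FROM articles WHERE artId = {raw_id}"

    # Safer: validate the expected type, then bind it as a parameter.
    try:
        art_id = int(raw_id)
    except ValueError:
        art_id = None                                   # handle it instead of surfacing an error page
    if art_id is not None:
        print(conn.execute(
            "SELECT title FROM articles WHERE artId = ?", (art_id,)
        ).fetchone())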
As mentioned by others, the YSOD itself is not necessarily always helpful to a hacker, but assume that on line 13 of your code above you had a hard-coded connection string or an inline SQL query.
I now know from your YSOD that the "id" in your query string is actually artId and not just some random id number, which may be of some use to a hacker.
Also, if a hacker were able to trigger more than one YSOD, together they might reveal enough information to damage your app.
Some time back, Microsoft reported a security vulnerability in ASP.NET where the suggested workaround was to enable customErrors and hide the error code and any particular error-related detail from the user.
One thing that hasn't been mentioned yet is that an attacker would now have good reason to believe that you're using a MySQL database (which they wouldn't be likely to guess about an ASP.NET app otherwise), helping them to narrow the range of potential attacks. No sense in making their job easier than it has to be.