How can I always keep encrypted data bags "encrypted"?

In one of my Chef recipes, I am using encrypted data bags to hide the download path for a remote file resource that I have defined.
However, when converging on a node, if the download fails for whatever reason, I can see all my secrets in the log.
Since I'm planning to deploy this on a CI server, I really don't want to have it displayed.
Is there any way to keep the data encrypted even on error?

You can try setting the sensitive attribute on the resource. This suppresses a lot of log data for some resources. For example, template resources will not log their contents when the sensitive attribute is set to true. I doubt it will suppress the URL of a remote_file, but it's worth a shot.
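For illustration, here is a minimal recipe sketch (the data bag name, item name and path are made-up placeholders): it loads the secret URL from an encrypted data bag and marks the remote_file resource as sensitive so Chef suppresses the resource's details in the converge log.

    # Hypothetical names; adjust the bag/item and secret handling to your setup.
    # With no explicit secret, Chef falls back to the node's configured
    # encrypted_data_bag_secret file.
    secrets = Chef::EncryptedDataBagItem.load('downloads', 'artifact')

    remote_file '/tmp/artifact.tar.gz' do
      source secrets['url']   # the secret download path from the encrypted data bag
      sensitive true          # tell Chef not to print this resource's details in the log
      action :create
    end

As noted above, whether this also hides the URL in failure output depends on the resource and the Chef version, so test it against a deliberately broken download before relying on it on your CI server.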

Related

Why JMeter can't record sites using Firebase as the data connection

I tried to record a site that uses Firebase for data storage with JMeter, but it fails to access Firebase and I cannot log into the site while recording. Is there any way to access Firebase while recording a load test in JMeter? I imported the JMeter certificate, but the problem is still there. I also tried the Chrome extension, but it didn't give the expected output either. [Error description image]
Most probably it's due to incorrect JMeter configuration for recording: you need to import JMeter's certificate into your browser. The file is called ApacheJMeterTemporaryRootCA.crt; JMeter generates it under its "bin" folder when you start the HTTP(S) Test Script Recorder.
See the HTTPS recording and certificates documentation chapter for more details.
Going forward, consider looking at the View Results Tree listener output and the jmeter.log file; they should provide a sufficient amount of information to get to the bottom of the issue. If you cannot interpret what you see there yourself, add at least the essential parts of the response/log to your question.
Also be aware of an alternative, "non-invasive" way of recording a JMeter test: the JMeter Chrome Extension. In that case you won't have to worry about proxies and certificates and should be able to record whatever HTTP(S) traffic your browser generates.

Does Firebase guarantee that data set using updateChildren or setValue is available in the backend as one atomic unit?

We have an application that uses base64-encoded content to transmit attachments to the backend. The backend then moves the content to Storage after some manipulation. This way we can enjoy world-class offline support and sync, and at the same time use the much cheaper Storage to store the files in the end.
Initially we used updateChildren to set the content in one go. This works fairly well, but then users started to upload bigger and more files at the same time, resulting in silent freezing of the database on end-user devices.
We then changed the code to write the files one by one using FirebaseDatabase.getInstance().getReference("/full/uri").setValue(base64stuff), and then using updateChildren to only set the metadata.
This allowed a seemingly endless number of files (provided each is chopped into chunks of at most 9 MB), but now we're facing another problem.
Our backend uses a Firebase listener to start working once new content is available. The trigger waits for the metadata and then starts to process the attachments. It seems that even though the client device writes the files before we set the metadata, the backend usually receives the metadata before the content from the files is available. This forced us to change the backend code to stop processing and check again later whether the attachment base64 data is available.
This works, but it is not elegant, wastes CPU cycles and increases latency.
I haven't found anything in the docs about whether Firebase guarantees anything about the order in which the data is received by the backend. It seems that everything written in one go (using setValue or updateChildren) is available in the backend as one atomic unit.
Is this correct? Can I depend on that as a fact that will not change in the future?
The way I'm going to go about this (if the assumptions above are correct) is to write the metadata first using updateChildren in the client, like this:
"/uri/of/metadata/uid/attachments/attachment_uid1" = "per attachment metadata"
"/uri/of/metadata/uid/attachments/attachment_uid2" = "per attachment metadata"
and then each base64 chunk using updateChildren with the following payload:
"/uri/of/metadata/uid/uploaded_attachments/attachment_uid2" = true
"/uri/of/base64/content/attachment_uid" = "base64content"
I can't use setValue for any data, to prevent accidental overwrites depending on the order in which the writes will happen in the end.
This would allow me to listen to /uri/of/base64/content and try to start handling the metadata package every time a new attachment completes loading. The only thing needed to determine whether all files have already been uploaded is to grab the metadata and check that all attachment uids found under /attachments/ are also present under /uploaded_attachments/.
Writes from a single Firebase Database client are delivered to the server in the same order as they are executed on the client. They are also broadcast out to any listening clients in the same order.
There is no chance that another client will see the results of write B without seeing the results from write A (unless A was rejected by security rules)
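To illustrate how that ordering guarantee can be used, here is a rough sketch assuming the Android client SDK; the paths mirror the ones above but are placeholders, not the real schema. The heavy base64 payload is written first and the metadata last, so a backend listener that fires on the metadata can rely on the content already being visible, without the polling workaround.

    import com.google.firebase.database.DatabaseReference;
    import com.google.firebase.database.FirebaseDatabase;
    import java.util.HashMap;
    import java.util.Map;

    public class AttachmentUploader {
        // Illustrative sketch: paths and field names are placeholders.
        void upload(String attachmentUid, String base64content, String perAttachmentMetadata) {
            DatabaseReference root = FirebaseDatabase.getInstance().getReference();

            // 1. Write the heavy base64 payload first.
            Map<String, Object> content = new HashMap<>();
            content.put("/uri/of/base64/content/" + attachmentUid, base64content);
            root.updateChildren(content);

            // 2. Only then write the metadata the backend trigger listens on.
            Map<String, Object> metadata = new HashMap<>();
            metadata.put("/uri/of/metadata/uid/attachments/" + attachmentUid, perAttachmentMetadata);
            metadata.put("/uri/of/metadata/uid/uploaded_attachments/" + attachmentUid, true);
            root.updateChildren(metadata);

            // Writes from a single client reach the server, and any listeners, in this order,
            // so a listener triggered by the metadata can assume the content is already there.
        }
    }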

Handle ActionResults as cacheable, "static content" in ASP.NET MVC (4)

I have a couple of action methods that return content from the database that does not change very often (e.g. a polygon list of available ZIP areas, returned as JSON; it changes twice per year).
I know there is the [OutputCache(...)] attribute, but it has some disadvantages (long client-side caching is not good; if the server/IIS/process gets restarted, the server-side cache is gone as well).
What I want is for MVC to store the result in the file system, calculate its hash, and return an HTTP status code 304 if the hash hasn't changed, like it does for images by default.
Does anybody know a solution for that?
I think it's a bad idea to try to cache data on the file system because:
It is not going to be much faster to read your data from the file system than to get it from the database, even if you already have it in JSON format.
You are going to add a lot of logic to calculate and compare the hash, and to read the data from a file. That means new bugs and more complexity.
If I were you I would keep it as simple as possible. Store your data in the Application container. Yes, you will have to reload it every time the application starts, but that should not be a problem at all, as the application is not supposed to restart often. Also consider using a distributed cache like AppFabric if you have a web farm, so that you don't end up with different data in the Application containers on different servers.
And one more important note: caching means really fast access, and you can't achieve that with file system or database storage; it is in-memory storage you should consider.
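If you do want the 304 behaviour from the question while following this advice to keep the data in memory, a rough sketch could look like the following. The controller, action and data-access names are made up, and the static fields are a deliberately naive stand-in for a real cache; the point is only to show serving an ETag and answering conditional requests with 304.

    using System;
    using System.Security.Cryptography;
    using System.Text;
    using System.Web;
    using System.Web.Mvc;

    public class ZipAreasController : Controller
    {
        // Simplistic in-memory cache; rebuilt whenever the application restarts.
        private static string _json;
        private static string _etag;

        public ActionResult Polygons()
        {
            if (_json == null)
            {
                _json = LoadPolygonsAsJson(); // hypothetical database call
                using (var sha = SHA256.Create())
                {
                    var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(_json));
                    _etag = "\"" + Convert.ToBase64String(hash) + "\"";
                }
            }

            // If the client already holds the current version, answer 304 with no body.
            if (Request.Headers["If-None-Match"] == _etag)
                return new HttpStatusCodeResult(304);

            Response.Cache.SetCacheability(HttpCacheability.Public);
            Response.Cache.SetETag(_etag);
            return Content(_json, "application/json");
        }

        private string LoadPolygonsAsJson()
        {
            // Placeholder for the real database query that builds the polygon JSON.
            return "[]";
        }
    }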

How does one prevent passwords and other sensitive information from appearing in an ASP.NET dump?

How does one prevent passwords and other sensitive data submitted to and received from ASP.NET web pages from appearing in IIS/ASP.NET dump files?
Steps to reproduce
Using Visual Studio 2010, create an ASP.NET MVC 3 intranet application.
Configure it to use IIS 7.5.
Fire it up and register an account (say bob123 as the user and Pa$$w0Rd as the password). I'm assuming that the SQL Express database is created and the site is fully functional.
Using task manager, right click on the w3wp process and create a dump.
Open the dump in an editor capable of displaying its contents as hex, such as SlickEdit.
Search for "Pa$$w0Rd" and "Pa%24%24w0Rd" in the hex dump. You should be able to find several copies of it stored as ASCII, Unicode, or encoded.
Note that it doesn't matter whether you use HTTPS because it only encrypts the communication. ASP.NET stores that data in the clear in memory or disk.
The problem
Common wisdom has it that sensitive data should be encrypted and not stored in the clear. However, an employee may receive a dump of an IIS/ASP.NET application and discover passwords and other confidential data of users, because this information is neither encrypted nor is the memory used by ASP.NET cleared after use.
This puts them at risk simply because they have access to it. Dumps are sometimes shared with partners (such as Microsoft) to help them diagnose issues in their code. It is a necessary part of diagnosing some really complex problems in one's application.
Things I looked at
Use SecureString for passwords and other sensitive data (see the sketch after this question). However, the ASP.NET Membership provider, along with other frameworks like WCF, often accepts passwords as System.String, which means that those copies will still be in the dump.
Looked to see if there is anything in the framework to clear out a copy of System.String when it is no longer being used. I couldn't find anything.
Investigated whether one can zero out the memory used for requests and responses once IIS is done with it, but I was unable to find anything.
I investigated whether one can encrypt files IIS receives (as HttpPostedFile) so that they are not stored in the clear. We may receive documents that are extremely confidential, and every step is taken to encrypt and protect them on the server. However, someone can extract them in the clear from an IIS dump.
What I was hoping for is to tell IIS/ASP.NET that a specific request/response contains sensitive data and that IIS/ASP.NET will clear out the memory when it is done using it.
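To make the SecureString limitation above concrete, here is a small hedged sketch (the class and method names are illustrative): the unmanaged copy can be zeroed, but the managed System.String that frameworks such as the Membership provider demand is immutable and may survive into a dump.

    using System;
    using System.Runtime.InteropServices;
    using System.Security;

    static class SecureStringDemo
    {
        // Converts a SecureString into the plain string most framework APIs demand.
        static string ToPlainString(SecureString secure)
        {
            IntPtr ptr = IntPtr.Zero;
            try
            {
                ptr = Marshal.SecureStringToGlobalAllocUnicode(secure);
                // This managed copy is immutable and cannot be wiped afterwards,
                // so it can still show up in a memory dump.
                return Marshal.PtrToStringUni(ptr);
            }
            finally
            {
                if (ptr != IntPtr.Zero)
                    Marshal.ZeroFreeGlobalAllocUnicode(ptr); // the unmanaged copy *can* be zeroed
            }
        }
    }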
A dump file by definition dumps all the memory the application uses at the moment it is taken. If you were to create a filter so that certain things were excluded, you could never be sure that you had enough data to zero in on a problem.
Would you be comfortable handing over your databases / configuration settings to a third party? If not, then you probably shouldn't be handing over dump files either (IMHO).
I know this doesn't answer the question directly, but why not look at this problem differently?
You could easily put together some JavaScript to do the hashing client-side. I would combine the password with something random, such as a GUID, that is sent down by the server and is valid only for a single use. The data exchange would no longer contain the password, and the hash could easily be compared server-side. It would only be valid for the session, so someone looking at the dump data couldn't use the hash to "authenticate" themselves in the future.
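A hedged sketch of the server-side half of that idea (all names are made up): the server issues a one-time GUID, and later compares a hash of the stored credential plus that GUID against what the client's JavaScript computed, so the plain-text password never has to cross the wire or sit in a request buffer.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class ChallengeLogin
    {
        // Issue a single-use nonce; store it in the user's session or a short-lived cache.
        public static string CreateNonce()
        {
            return Guid.NewGuid().ToString("N");
        }

        // Compare the client's hash against one computed from the stored credential and the nonce.
        // The client must hash exactly the same input (credential + nonce) in the same encoding.
        public static bool Verify(string storedCredential, string nonce, string clientResponse)
        {
            using (var sha = SHA256.Create())
            {
                var bytes = Encoding.UTF8.GetBytes(storedCredential + nonce);
                var expected = Convert.ToBase64String(sha.ComputeHash(bytes));
                return expected == clientResponse; // the password itself never appears in the request
            }
        }
    }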
On the file side, how are these files being uploaded? Directly from a web page? There are quite a few JavaScript libraries that do encryption (Blowfish), and I would definitely use this option when sending a sensitive file. It does have a speed penalty, but you can be sure that the data is secure.

Is the filesystem for Raven DB encrypted?

I'm just trying to determine if the files on the filesystem used by Raven DB are encrypted or not? Can someone just open the files on the filesystem and convert them from binary to ASCII directly, or are they encrypted?
I am trying to convince our management to give RavenDB a shot, but they have concerns about security. They gave the example that you can't just open an MS SQL DB file, convert it from binary to ASCII, and read it. So I am trying to verify whether RavenDB prevents that kind of thing as well.
Well, personally I think that your management sucks if they come up with such straw-man arguments.
To answer your question: No, you can't just open any file inside ravens data folder with Notepad and expect to see something meaningful. So, for the ones that don't know how to program, yes they are encrypted.
To convince your management, you can tell them that Raven stores its data in the same storage format Microsoft's Exchange Server uses. If they want to dig deeper, it's called Esent.
RavenDb storage is not encrypted. You can open it with notepad and see some pieces of data. At the same time I do not think that MS SQL encrypts files by default either.
RavenDB added encryption in mid-2012. Enable RavenDB's encryption bundle ("bundle:encryption") and then make sure the key in your .NET config file is itself properly protected.
http://ravendb.net/docs/article-page/3.0/csharp/server/bundles/encryption
http://ayende.com/blog/157473/awesome-ravendb-feature-of-the-day-encryption
SQL Server 2008 does have encryption, but you need to prepare the DB instance beforehand to enable it, then create the DB with encryption enabled and then store data.
If you haven't, you could just copy the DB off the machine and open it in a tool that does have access to it.
With RavenDB, you can tick the box and off you go! (although I do not know the intricacies of moving backups to another machine and restoring them).
In relation to the point your management made, this is a relatively pointless argument.
If you had access directly to the file of a DB, it's game over. Encryption is your very last line of defence.
[I don't think hackers are going to be opening a 40GB file in Notepad... that's just silly :-)]
So instead of ending up at the worst case, you have to look at the controls you can implement to even get to that level of concern.
You need to work out how someone would even get to that file (and the costs associated with all of the mitigation techniques):
What if they steal the server, or the disk inside it?
What if they can get to the DB via a file share?
What if they can log onto the DB server?
What if a legitimate employee siphons off the data?
Physical Access
Restricting direct access to a server mitigates stealing it. You have to think about all of the preventative controls (door locks, ID cards, iris scanners), detective controls (alarm systems, CCTV) and how much you want to spend on that.
Hence why cloud computing is so attractive!
Access Controls
You then have to get onto the machine via RDP or connect remotely to its file system via Active Directory, so that only a select few could access it - probably IT support and database administrators. Being administrators, they should be vetted and trusted within the organisation (through an Information Security Governance Framework).
If you also wanted to reduce the risk even further, maybe implement 2 Factor Authentication like banks do, so that even knowing the username and password doesn't get you to the server!
Then there's the risk of employees of your company accessing it - legitimately and illegitimately. I mean why go to all of the trouble of buying security guards, dogs and a giant fence when users can query it anyway! You would only allow certain operations on certain parts of the data.
In summary ... 'defence in depth' is how you respond to it. There is always a risk that can be identified, but you need to consider the number of controls in place, add more if the risk is too high. But adding more controls to your organisation in general makes the system less user friendly.
