What happens when you apply a label using the Microsoft Information Protection (MIP) SDK? - encryption

According to the File Parsing section on the MIP SDK FAQs and known issues page, applying a sensitivity label results in a protected copy of the original being made:
Any labeled files will result in a copy of the input file with the label actions applied.
This raises a few questions:
Does the labeled copy ever touch the filesystem in an unprotected state? For example, does the SDK only begin applying label actions after making a full, unprotected copy of the original?
Does the MIP SDK allocate the disk space required to store an encrypted copy ahead of time (e.g. using fallocate(2) on Linux)?
Is there any risk of leaving behind partially encrypted or corrupted copies if the protecting process is suddenly terminated?
The version release history page also mentions a Timeout category of NetworkError:
Category=Timeout: Failed to establish HTTP connection after timeout (retry will probably succeed)
What is the HTTP connection timeout, and is it configurable?

Related

Qt: Catch external changes on an SQLite database

I'm developing a program that uses an SQLite database which I access via QSqlDatabase. I'd like to handle the (hopefully rare) case where changes are made to the database that are not caused by the program while it's running (e.g. the user could remove write access, move or delete the file, or modify it manually).
I tried to use a QFileSystemWatcher. I let it watch the database file, and in all functions writing something to it, I blocked its signals, so that only "external" changes would trigger the fileChanged signal.
The problem is that the check done by the QFileSystemWatcher and/or the actual writing to disk by QSqlDatabase::commit() doesn't seem to happen at the exact moment I call commit(). So in practice, first the QFileSystemWatcher's signals are blocked, then I change some data, then I unblock the signals, and only then does it report the file as changed.
I then tried to set a bool variable (m_writeInProgress) to true each time a function requests a change. The "changed" slot then checks whether a write action has been requested and, if so, sets m_writeInProgress to false again and exits. This way, it would only handle "external" changes.
The problem remains that if the external change happens at the exact moment the actual writing is going on, it isn't caught.
So possibly, using a QFileSystemWatcher is the wrong way to implement this.
How could this be done in a safe way?
Thanks for all help!
Edit:
I found a way to solve part of the problem. Acquiring an exclusive lock on the database file prevents other connections from changing it. It's quite simple; I just have to execute
PRAGMA locking_mode = EXCLUSIVE
BEGIN EXCLUSIVE
COMMIT
and handle the error that occurs if another instance of my program tries to access the database.
What's left is to know if the user (accidentally) deleted the file during runtime ...
First of all, there's no SQLite support for this: SQLite only supports monitoring changes made through a database connection within your direct control. Whatever happens in a separate process concurrently with your process, or while your process is not running, is by design completely out of your control.
The canonical solution to this problem is to encrypt the database with a key specific to your application (and perhaps user, etc.). Then no third-party process can modify the database using SQLite. Of course, any process can still corrupt your database or get rid of it; that can't be prevented. You can detect corruption trivially by using cryptographic signatures, perhaps even error-correcting codes so as to be able to restore the data should a certain amount of corruption happen. You don't need notifications of someone moving or deleting the database file: you will know when you attempt to open the database and a "file not found" error is given back to you.
Of course all of the above requires a custom VFS implementation. That's very much par for the course.

Does Firebase guarantee that data set using updateValues or setValue is available in the backend as one atomic unit?

We have an application that uses base64-encoded content to transmit attachments to the backend. The backend then moves the content to Storage after some manipulation. This way we can enjoy world-class offline support and sync, and at the same time use the much cheaper Storage to hold the files in the end.
Initially we used updateChildren to set the content in one go. This worked fairly well, but then users started to upload bigger files, and more of them at the same time, resulting in the database silently freezing on end-user devices.
We then changed the code to write the files one by one using FirebaseDatabase.getInstance().getReference("/full/uri").setValue(base64stuff), and then use updateChildren to set only the metadata.
This allowed a seemingly endless number of files (provided that the content is chopped into chunks of at most 9 MB), but now we're facing another problem.
Our backend uses a Firebase listener to start working once new content is available. The trigger waits for the metadata and then starts to process the attachments. It seems that even though the client device writes the files before we set the metadata, the backend usually receives the metadata before the content of the files is available. This forced us to change the backend code to stop processing and check again later whether the attachment base64 data is available.
This works, but it is not elegant, wastes CPU cycles, and increases latency.
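A rough sketch of that retry workaround on a Java backend (the class, helper method, and retry interval below are purely illustrative, not our actual code; "/full/uri" is the same placeholder path used above):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;

public class AttachmentProcessor {

    private final ScheduledExecutorService retryPool =
            Executors.newSingleThreadScheduledExecutor();

    /** Called by the metadata listener for each attachment uid it sees. */
    void processWhenAvailable(String attachmentUid) {
        // Placeholder path, matching the question's "/full/uri" example.
        DatabaseReference contentRef = FirebaseDatabase.getInstance()
                .getReference("/full/uri/" + attachmentUid);

        contentRef.addListenerForSingleValueEvent(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot snapshot) {
                if (snapshot.exists()) {
                    String base64 = snapshot.getValue(String.class);
                    // ... decode and move the payload to Storage ...
                } else {
                    // Payload not written yet: poll again later (the inelegant part).
                    retryPool.schedule(() -> processWhenAvailable(attachmentUid),
                            5, TimeUnit.SECONDS);
                }
            }

            @Override
            public void onCancelled(DatabaseError error) {
                // log and decide whether to retry or give up
            }
        });
    }
}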
I haven't found anything in the docs about whether Firebase guarantees anything about the order in which the data becomes available to the backend. It seems that everything written in one go (using setValue or updateChildren) becomes available in the backend as one atomic unit.
Is this correct? Can I depend on that as a fact that will not change in the future?
The way I'm going to go about this (if the assumptions above are correct) is to write the metadata first using updateChildren on the client, like this:
"/uri/of/metadata/uid/attachments/attachment_uid1" = "per attachment metadata"
"/uri/of/metadata/uid/attachments/attachment_uid2" = "per attachment metadata"
and then each base64 chunk using updateChildren with the following payload:
"/uri/of/metadata/uid/uploaded_attachments/attachment_uid2" = true
"/uri/of/base64/content/attachment_uid" = "base64content"
I can't use setValue for any of this data, to prevent accidental overwrites depending on the order in which the writes end up happening.
This would allow me to listen to /uri/of/base64/content and try to start handling the metadata package every time a new attachment finishes loading. The only thing needed to determine whether all files have already been uploaded is to grab the metadata and check that every attachment uid found under /attachments/ is also present under /uploaded_attachments/.
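If the assumptions hold, the two write steps could look roughly like this on an Android client (a minimal sketch; the class and method names are invented for illustration, and the map keys mirror the paths above, written relative to the root reference):

import java.util.HashMap;
import java.util.Map;

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

public class AttachmentUploader {

    private final DatabaseReference root =
            FirebaseDatabase.getInstance().getReference();

    /** Step 1: write all per-attachment metadata in one multi-path update. */
    void writeMetadata(String uid, Map<String, String> metadataByAttachmentUid) {
        Map<String, Object> update = new HashMap<>();
        for (Map.Entry<String, String> e : metadataByAttachmentUid.entrySet()) {
            update.put("uri/of/metadata/" + uid + "/attachments/" + e.getKey(),
                    e.getValue());
        }
        root.updateChildren(update);
    }

    /** Step 2: write each payload together with its "uploaded" flag. */
    void writeAttachment(String uid, String attachmentUid, String base64content) {
        Map<String, Object> update = new HashMap<>();
        update.put("uri/of/metadata/" + uid + "/uploaded_attachments/" + attachmentUid, true);
        update.put("uri/of/base64/content/" + attachmentUid, base64content);
        root.updateChildren(update);
    }
}

Each updateChildren call is applied atomically on the server, which is what makes the uploaded_attachments flag a reliable signal that the corresponding payload has also been written.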
Writes from a single Firebase Database client are delivered to the server in the same order as they are executed on the client. They are also broadcast out to any listening clients in the same order.
There is no chance that another client will see the results of write B without seeing the results from write A (unless A was rejected by security rules)

How does QFileSystemWatcher determine if a file is modified?

I am trying to watch a log file using QFileSystemWatcher, but the fileChanged signal is not consistently emitted every time the log file is modified. Any idea how QFileSystemWatcher determines whether a file is modified (on Windows)?
QFileSystemWatcher's performance is entirely dependent on what the underlying platform provides. There are in general absolutely no guarantees that if one process is writing to a file, some other process will see these changes immediately. The behavior of QFileSystemWatcher may be informing you of that fact. The log writing process might elect to flush the file. Depending on the platform, the semantics of a flush might be such that when flush() returns, other processes are guaranteed to be able to see the changes made to the file prior to flush(). If so, then you'd expect QFileSystemWatcher to notify you of the changes.
As the platforms get new features, QFileSystemWatcher may lag in its implementation of new filesystem notification APIs. You'd need to read its source to figure out if it supports everything your platform of choice provides in this respect.
You need to qualify QFileSystemWatcher's behavior on each platform you intend to support. You may find that explicitly polling the file's information periodically works better in some cases; again, the choice between polling and QFileSystemWatcher should be made on a platform-by-platform basis, as polling might incur unnecessary overhead if the watcher works OK on a given platform.
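As an illustration of the polling fallback, here is a minimal sketch of the idea (written in Java purely to show the technique; in Qt the natural equivalents would be a QTimer plus QFileInfo's lastModified() and size()):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;

public class LogPoller {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path log = Paths.get("app.log");           // hypothetical log file name
        FileTime lastSeen = Files.getLastModifiedTime(log);
        long lastSize = Files.size(log);

        while (true) {
            Thread.sleep(1000);                    // poll interval: a tuning knob
            FileTime now = Files.getLastModifiedTime(log);
            long size = Files.size(log);
            // Compare both timestamp and size: some platforms update the
            // modification time lazily, so the size is a useful second signal.
            if (!now.equals(lastSeen) || size != lastSize) {
                lastSeen = now;
                lastSize = size;
                System.out.println("log file changed, re-reading tail...");
            }
        }
    }
}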

Possible reasons for javax.crypto.IllegalBlockSizeException

I am using Amazon S3 to store files. While storing, I encrypt the stream on the fly; on download, I decrypt the stream on the fly. This setup works very well, but occasionally I get the following exception:
javax.crypto.IllegalBlockSizeException: Input length must be multiple of 16 when decrypting with padded cipher
What could be the possible reasons for this error? Is corruption of data during upload/download one of the possibilities? If yes, does this happen only when the padding bytes are corrupted, or when any of the bytes in the file get corrupted?
[EDIT] But the strange thing is that the file size stored in S3 is correct; it's not as if only half of the file got stored.
Yes, it is. It's most likely that you are receiving partial files. You should be able to check whether the connection was aborted before completion. To be sure you get the full, unchanged file, add an (H)MAC or use a cipher mode with integrity validation (e.g. GCM).
[EDIT]: No, this particular decryption exception should only happen when the full file is not available, not when the file itself is corrupted. Better check the file handling on the receiving side (forgetting to close a stream or to delete partial files).
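As a minimal, self-contained sketch of the GCM suggestion (key management, IV transport, and the S3 plumbing are deliberately left out; the point is that a truncated or tampered ciphertext fails the authentication-tag check instead of surfacing as a misleading padding error):

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

import javax.crypto.AEADBadTagException;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class GcmIntegrityDemo {

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];                  // 96-bit nonce, the usual choice for GCM
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = enc.doFinal("attachment bytes".getBytes(StandardCharsets.UTF_8));

        // Simulate a partial download, e.g. an S3 transfer aborted near the end.
        byte[] truncated = Arrays.copyOf(ciphertext, ciphertext.length - 4);

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        try {
            dec.doFinal(truncated);
        } catch (AEADBadTagException e) {
            // With GCM the incomplete stream fails loudly here,
            // instead of decrypting to garbage or throwing a padding exception.
            System.out.println("Integrity check failed: " + e);
        }
    }
}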

ASP.net file operations delay

Ok, so here's the problem: I'm reading the stream from a FileUpload control, reading in chunks of n bytes and writing the array in a loop until I reach the stream's end.
The reason I do this is that I need to check several things while the upload is still going on (rather than calling Save(), which does the whole thing in one go). Here's the problem: when doing this from the local machine, I can see the file just fine as it's uploading, and its size increases (I had to add a Sleep() call in the loop to actually get to see the file being written).
However, when I upload the file from a remote machine, I don't get to see it until the file has completed uploading. Also, I've added another call to write the progress to a text file as the upload is going on, and I get the same thing. Local: the file updates as the upload goes on; remote: the token file only appears after the upload is done (which is somewhat useless, since I need it while the upload is still happening).
Is there some sort of security setting in IIS (or ASP.NET) that saves files in a temporary location for remote machines, as opposed to the local machine, and then moves them to the specified destination? I would liken this to ASP.NET displaying detailed error messages when browsing from the local machine (even on the public hostname), as opposed to the generic compilation error page/generic exception page that is shown when browsing from a remote machine (when customErrors is not off).
Any clues on this?
Thanks in advance.
The FileUpload control renders as an <input type="file"> HTML element; this means your browser will open that file, read ALL of its content, encode it, and send it.
Your ASP.NET request only starts after IIS has received all of the browser's data.
Because of this, you'll need to code a client component (Flash, Java applet, Silverlight) to send the file in small chunks and rebuild it server-side.
EDIT: Some information on MSDN:
To control whether the file to upload is temporarily stored in memory or on the server while the request is being processed, set the requestLengthDiskThreshold attribute of the httpRuntime element. This attribute enables you to manage the size of the input stream buffer. The default is 256 bytes. The value that you specify should not exceed the value that you specify for the maxRequestLength attribute.
I understand that you want to check the file which is being uploaded for its content.
If this is your requirement, then why not add a textbox and populate it while you are reading the file from HttpPostedFile?

Resources