We have an ASP.NET application that allows users to upload files. The files are saved to a temporary disk location and later attached to a record and saved in the DB.
My question pertains to security and/or virus issues. Are there any security holes in this approach? Can a virus cause harm if it is never executed? (The file is saved, then opened with a FileStream, converted to a byte array, and saved to the DB.)
Later, when the file is needed we stream the file back to user.
The files are saved to a folder on the web server like this:
context.Request.Files[0].SaveAs(savePath); // savePath points into a folder under App_Data/files
Later, when the same user creates a record, we read the file from disk and store it in the DB like this:
// File.ReadAllBytes reads the whole file and avoids the partial-read pitfall
// of a single Stream.Read call (Read may return fewer bytes than requested)
byte[] ba = File.ReadAllBytes(currentFilePath);
//ba saved to DB here as varbinary(max)
We limit the files that can be uploaded to this list:
List<string> supportedExtensions = new List<string>(10) {".txt", ".xls", ".xlsx", ".doc", ".docx", ".eps", ".jpg", ".jpeg", ".gif", ".png", ".bmp", ".rar", ".zip", ".rtf", ".csv", ".psd", ".pdf" };
The file is streamed back to the user's web browser like this:
//emA = entity object loaded from DB
context.Response.AppendHeader("Content-Disposition", "inline; filename=\"" + emA.FileName + "\"");
context.Response.AddHeader("Content-Type", emA.ContentType);
context.Response.BinaryWrite(emA.FileContent);
There's always a security risk when accepting files from unknown users. Anyone could potentially embed a virus written in VBA (Visual Basic for Applications) in an Office document.
Your approach is no more or less of a security risk than saving them directly on the file system or directly in the database except for one concern...
If the files are saved to disk, they can be scanned by traditional virus scanners. As far as I know, most virus scanners don't scan files stored in a DB as byte arrays.
If it were my server, I would be storing them on the file system for performance reasons, not security reasons, and you can bet I would have them scanned by a virus scanner if I were allowing potentially dangerous files, such as office documents, executables, etc.
Have your users create logins before allowing them to upload files. Unchecked anonymous uploads are asking for trouble... not that this is a solution in and of itself, but like all good security measures it adds an extra layer :-)
I can't see there being any more security risk than saving the files to disk. The risks here often have little to do with where you store the data, since, as you've already pointed out, the stored file doesn't get executed.
The risk is usually in how the data is transferred. Worms exploit circumstances in which what was just data on its way through the system gets treated as if it were code and starts being executed. Such exploits don't require that anything resembling a "file" be transferred; in the past, a specially formatted URL could suffice.
That said, I've never understood the desire to store large binary data in a SQL database. Why not just save the files on disk and store the file path in the DB? You can then use features such as WriteFile or URL rewriting to let IIS do what it's good at.
I'm uploading blobs to my Azure cloud storage in the following way. The problem I'm facing is that if a user exits the web application or the upload gets interrupted, the partially uploaded blob remains in storage. What is the way to handle interrupted blob uploads in Azure?
Code:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(cloudString);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference(containerName);
container.CreateIfNotExists();
container.SetPermissions(
new BlobContainerPermissions
{
PublicAccess = BlobContainerPublicAccessType.Blob
});
CloudBlockBlob blockBlob = container.GetBlockBlobReference(uniqueBlobName);
blockBlob.UploadFromByteArray(f, 0, f.Length);
When it comes to uploading files as block blobs, there are two possible scenarios:
File is uploaded without being split into chunks - say a user uploads a file in a single request and closes the browser mid-upload. In this case nothing is left behind, because the blob was never committed to blob storage.
File is uploaded in chunks - this is the case with large files, where the upload happens in chunks. Suppose some chunks have been uploaded and then the user terminates the upload process. In this case there are two possible solutions:
1) You do nothing - chunks that are uploaded but not committed get deleted by the storage service automatically after 7 (or 14) days. The downside of this approach is that you pay for those bytes in the meantime.
2) You programmatically delete uncommitted blobs - you can get a list of uncommitted blobs in a container and delete them. One thing I would suggest is that you only target uncommitted blobs that have not been modified for a certain time, so that you're not deleting blobs that are still being uploaded.
UPDATE
I had a chance to play with uncommitted blobs. When you list blobs with BlobListingDetails.UncommittedBlobs, it returns both committed and uncommitted blobs. One way to identify an uncommitted blob is to check its ETag property. In my little experiment I found that the ETag property is null and the blob length is 0 bytes for an uncommitted blob.
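The age-based cleanup from option 2, combined with the ETag observation above, can be sketched like this. This is a Python illustration only: ListedBlob is a made-up stand-in for the SDK's listing results, not a real Azure type.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List, Optional

# Hypothetical minimal view of a listed blob; the real SDK's listing
# results expose similar properties (name, etag, length, last-modified).
@dataclass
class ListedBlob:
    name: str
    etag: Optional[str]        # None/null for uncommitted blobs
    length: int
    last_modified: datetime

def stale_uncommitted(blobs: List[ListedBlob],
                      max_age: timedelta = timedelta(days=1),
                      now: Optional[datetime] = None) -> List[ListedBlob]:
    """Select uncommitted blobs untouched for at least max_age, so that
    blobs whose upload is still in progress are left alone."""
    now = now or datetime.now(timezone.utc)
    return [b for b in blobs
            if b.etag is None and now - b.last_modified >= max_age]
```

You would then issue a delete for each blob the filter returns; the max_age threshold is the safety margin that protects in-flight uploads.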
I have mp3 players set up on my site to play mp3s. At the moment, users can easily look through the source, run a search for "mp3" and download all of the music on my site. I know it's virtually impossible to completely prevent a determined user from downloading the music but I want to make it harder for the average user. Is there any way I can obfuscate the links to the mp3s?
Relevant site: http://boyceavenue.com/music
You did not specify the language you are using. To expand on what Marc B wrote, I would recommend using PHP's http_send_file function along with a checksum of the file.
To send the file, use the following:
$filename = "/absolute/or/relative/path/to/file.ext";
$mime_type = "audio/mpeg"; // See note below
http_send_content_disposition($filename, true);
http_send_content_type($mime_type);
http_throttle(0.1, 2048);
http_send_file($filename);
If you are serving up multiple types of files using PHP 5.3.0 or later, you could determine the mimetype this way:
$filename = "/absolute/or/relative/path/to/file.ext";
$finfo = finfo_open(FILEINFO_MIME_TYPE);
$mime_type = finfo_file($finfo, $filename);
finfo_close($finfo);
Calculating the checksum is simple enough. Just use md5_file.
So, what can you do with the checksum?
Well, you could create an array of checksums and filenames that cross-reference each other: include the checksum in the links, and have a little routine that looks up the checksum and delivers the mp3 file. You could also do this in a database. Or you could do as some apps do and store files in a directory structure based on their checksums (music/3/3a/song.mp3 for a checksum of 3a62f6, or whatever).
If you don't mind the filenames being mangled, you could save the files with the checksum as the filename. That could be done at upload time (if your files are being uploaded) or with a batch script (using the CLI).
Another thing you should do is put a default document (index.php or whatever) in the directory that tells people to look elsewhere, and disable directory browsing. If only a very small number of people need access, you could also put a password on the directory, requiring a login to access the files.
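As an illustration of the checksum lookup idea (sketched in Python rather than PHP; the function names and directory layout are made up for the example):

```python
import hashlib
import os

def file_checksum(path: str) -> str:
    """Equivalent of PHP's md5_file(): hash the file's bytes in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def sharded_path(root: str, checksum: str, ext: str = ".mp3") -> str:
    """Build a music/3/3a/<checksum>.mp3 style path from the checksum,
    so links expose only the hash, never the real filename."""
    return os.path.join(root, checksum[:1], checksum[:2], checksum + ext)
```

A serving script would then map the checksum from the URL back to the real file (via this path scheme or a lookup table) and stream it out.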
I need to pick an underlying method of saving data collected in the field (offline and remote locations). I want to use the HTML5 database with SQLite, but can I pick the location? So far I haven't been able to accomplish that. Here is some sample code I was using:
var dbName = "";
var Dir = blackberry.io.dir;
var path = Dir.appDirs.shared.documents.path;
dbName = path + "/" + "databasetest.db";
var db = openDatabase(dbName, '1.0', 'Test', 50 * 1024);
I used an alert() to see that the file was "supposedly" created, but when I opened the folder in Explorer I could not find it. Not really sure why, hence my question.
My application is for data entry. Without getting into specifics, users may end up collecting a lot of data or a little. Either way, I want some way of downloading the SQLite database.
Is this the intention of the SQLite database, or will I have to use another solution?
Thanks!
Chris
The Web SQL Database specification was designed for browsers where it would not have been appropriate to allow web pages to access arbitrary file paths.
The intended way to download data is to upload it to a web server in the cloud.
If you want to know the file name of your database, try executing the PRAGMA database_list. (Whether your app can access that path is a different question.)
I am completely new to SQLite and I intend to use it in a M2M / client-server environment where a database is generated on the server, sent to the client as a file and used on the client for data lookup.
The question is: can I replace the whole database file while the client is using it at the same time?
The question may sound silly, but the client is a Linux thin client, and to replace the database file a temporary file would be renamed to the final file name. In Linux, a program that still has the older version of the file open will continue to access the old data, since the old file is preserved by the OS until all file handles have been closed. Only new open() calls will see the new version of the file.
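This behaviour is easy to demonstrate (a small Python sketch; any language on a POSIX system would show the same):

```python
import os
import tempfile

# A process that still has the old file open keeps reading the old
# contents, even after the file has been atomically replaced by rename.
workdir = tempfile.mkdtemp()
db_path = os.path.join(workdir, "data.db")

with open(db_path, "w") as f:
    f.write("old version")

reader = open(db_path)              # stands in for the running client

tmp_path = db_path + ".tmp"         # server writes the new version...
with open(tmp_path, "w") as f:
    f.write("new version")
os.replace(tmp_path, db_path)       # ...and renames it into place atomically

old_seen = reader.read()            # the open handle still sees the old data
reader.close()
new_seen = open(db_path).read()     # a fresh open() sees the new version
```

This is exactly why the rename itself is safe at the filesystem level, and also why SQLite, which keeps its file descriptor open, never notices the new file.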
So, in short:
client randomly accesses the SQLite database
a new version of the database is received from the server and written to a temporary file
the temporary file is renamed to the SQLite database file
I know it is a very specific question, but maybe someone can tell me if this would be a problem for SQLite or if there are similar methods to replace a database while the client is running. I do not want to send a bunch of SQL statements from the server to the client to update the database.
No, you cannot just replace an open SQLite3 DB file. SQLite will keep using the same file descriptor (or handle in Windows-speak), unless you close and re-open your database. More specifically:
Deleting and replacing an open file is either useless (Linux) or impossible (Windows). SQLite will never get to see the contents of the new file at all.
Overwriting an SQLite3 DB file is a recipe for data corruption. From the SQLite3 documentation:
    Likewise, if a rogue process opens a database file or journal and writes malformed data into the middle of it, then the database will become corrupt.
Arbitrarily overwriting the contents of the DB file can cause a whole pile of issues:
If you are very lucky it will just cause DB errors, forcing you to reopen the database anyway.
Depending on how you use the data, your application might just crash and burn.
Your application may try to apply an existing journal to the new file. Sounds painful? It is!
If you are really unlucky, the user will just get back invalid results from any queries.
The best way to deal with this would be a proper client-server implementation where the client DB file is updated from data coming from the server. In the long run that would allow for far more flexibility, while also reducing the bandwidth requirements by sending updates, rather than the whole file.
If that is not possible, you should update the client DB file in three discrete steps:
Send a message to the client application to close the DB. This allows the application to commit any changes, remove any journal files and clean up its internal state.
Replace/Overwrite the file.
Send a message to the client application to re-open the DB. You will have to set up all prepared statements again, though.
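The three steps can be sketched like this (Python with its built-in sqlite3 module; the file names and schema are made up for the example):

```python
import os
import sqlite3
import tempfile

def make_db(path: str, value: str) -> None:
    """Create a tiny one-table database holding a single value."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE kv (v TEXT)")
    conn.execute("INSERT INTO kv VALUES (?)", (value,))
    conn.commit()
    conn.close()

workdir = tempfile.mkdtemp()
db_path = os.path.join(workdir, "client.db")
incoming = os.path.join(workdir, "incoming.db")
make_db(db_path, "old")        # the client's current database
make_db(incoming, "new")       # the file received from the server

conn = sqlite3.connect(db_path)   # the client's open connection

# Step 1: close the DB so changes are committed and journals cleaned up.
conn.close()

# Step 2: replace/overwrite the file.
os.replace(incoming, db_path)

# Step 3: re-open; any prepared statements must be recreated.
conn = sqlite3.connect(db_path)
value = conn.execute("SELECT v FROM kv").fetchone()[0]
conn.close()
```

The ordering is the whole point: the file is only swapped while no connection holds it open, so SQLite never sees the contents change underneath it.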
If you do not want to close the DB file for some reason, then you should have your application - or even a separate process - update the original DB file using the new file as input. The SQLite3 backup API might be of interest to you in that case.
I'm writing an application with a dBASE database file in Borland Delphi 7.
Note: I think this question is file-security related and you can forget the dBASE thing (consider it as a TXT file) in this question.
The database must be accessible only to the application, so it must be encrypted. Unfortunately, dBASE doesn't support any password mechanism, so I have to encrypt the file myself (and I also HAVE to use dBASE).
What approach do you suggest to secure the database file?
The simple one is:
Encrypting the database file and placing it near beside the application EXE file.
When the application runs, it should decrypt the file (with a hard-coded password) and copy the result to a temporary file that has DeleteOnClose and NoSharingPermission flags.
When closing, the application should encrypt the temp dBASE file and replace the old encrypted file with the new one.
I think this is a fairly secure approach, but it has two big problems:
With an undelete tool, the user can restore and access the deleted temp file.
Worse: while the application is running, if the system reboots suddenly, the DeleteOnClose flag never takes effect and the temp file remains on disk, where the user can access it.
Is there any solution for, at least, the second part?
Is there any other solution?
You could also create a TrueCrypt file-based container, mount it, and then put the dBase file inside the mounted encrypted volume. TrueCrypt is free (in both senses) and accessible via command-line parameters from your application (mount before start, unmount before quit).
Depending on what you're doing with the database, you may be able to get away with decrypting only the records you actually need. For example, you could build indexes based on hash codes (rather than real data); this reduces seeks into the database to a smaller subset of records. Each record in the subset still has to be decrypted, but that can be much better than decrypting the entire database.
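A sketch of the hash-code index idea (Python; the string-reversal "cipher" is a toy placeholder for real encryption, and all names here are invented for the example):

```python
import hashlib

def key_hash(value: str) -> str:
    # Index on a hash of the key, so the index itself leaks no plaintext.
    return hashlib.sha256(value.encode()).hexdigest()[:8]

# Toy stand-in for a real cipher, purely for illustration.
def encrypt(s: str) -> str: return s[::-1]
def decrypt(s: str) -> str: return s[::-1]

# Build the "database": every field is stored encrypted, and the index
# maps hash codes (built at write time, from the plaintext) to row numbers.
data = [("alice", "record 1"), ("bob", "record 2")]
rows = []
index = {}
for k, v in data:
    index.setdefault(key_hash(k), []).append(len(rows))
    rows.append((encrypt(k), encrypt(v)))

def lookup(key: str):
    # Only the rows in the matching hash bucket get decrypted;
    # the rest of the database is never touched.
    hits = []
    for i in index.get(key_hash(key), []):
        ek, ev = rows[i]
        if decrypt(ek) == key:   # confirm, since hashes can collide
            hits.append(decrypt(ev))
    return hits
```

The confirm step matters because the index stores truncated hashes: a bucket may contain false positives, and only the decrypted key tells you which rows actually match.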