I have an ASP.NET Core API project which lets users upload images. My first implementation was to encode the images as Base64 and save them in SQL Server. However, I decided against that because of performance issues. The second implementation was to use Azure Blob Storage and upload the files directly into blob storage.
I am not sure if this is a good idea, but instead of using Azure Blob Storage, I would like to upload the images somewhere on my Linux server. Is there any special directory for saving files, and would it be safe for me to do that?
As far as I know, there is no special directory on a Linux server that is inherently safer for saving files.
Every folder in Linux works the same way: if the application has sufficient permissions on it, it can read and write images there.
Normally, we add a folder inside the application to store the uploaded images, so that we can use relative paths in our code.
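For illustration, here is a minimal sketch of that approach in an ASP.NET Core controller; the uploads folder name, the route, and the controller name are my own assumptions, not something from the question:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ImagesController : ControllerBase
{
    private readonly IWebHostEnvironment _env;

    public ImagesController(IWebHostEnvironment env) => _env = env;

    [HttpPost]
    public async Task<IActionResult> Upload(IFormFile file)
    {
        if (file == null || file.Length == 0)
            return BadRequest("No file uploaded.");

        // Folder relative to the content root, e.g. <app>/uploads
        var uploadDir = Path.Combine(_env.ContentRootPath, "uploads");
        Directory.CreateDirectory(uploadDir);

        // Generate the stored name to avoid collisions and to ignore any
        // path segments in the client-supplied file name.
        var safeName = $"{Guid.NewGuid()}{Path.GetExtension(file.FileName)}";
        var fullPath = Path.Combine(uploadDir, safeName);

        using (var stream = System.IO.File.Create(fullPath))
        {
            await file.CopyToAsync(stream);
        }

        return Ok(new { fileName = safeName });
    }
}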
In my opinion, using blob storage is a good option. The blob can be accessed directly via its URL if you have sufficient permission, and it is secure: you can generate a SAS to allow only specific users to access it. It also offers high durability and is cheap enough.
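As a hedged sketch of the SAS point, using the current Azure.Storage.Blobs SDK (the account name, key, container, and blob name below are placeholders):
using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class SasExample
{
    // Returns a URL that grants read-only access to one blob for one hour.
    static Uri GetReadOnlySasUri()
    {
        var credential = new StorageSharedKeyCredential("<account>", "<account-key>");
        var blobClient = new BlobClient(
            new Uri("https://<account>.blob.core.windows.net/images/photo.jpg"),
            credential);

        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = blobClient.BlobContainerName,
            BlobName = blobClient.Name,
            Resource = "b", // "b" = blob
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);

        // GenerateSasUri works because the client was constructed with a shared key credential.
        return blobClient.GenerateSasUri(sasBuilder);
    }
}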
Is there any way to open and read a SQLite database file on GAE?
I am currently uploading dbs to blobstore as admin and serving them publicly to user clients. I just can't read them in the GAE admin interface.
You can use SQLite on Google App Engine. The problem has nothing to do with the support of certain libraries; it has to do with the read-only file system. There is, however, a writable /tmp directory. If your app, on startup, first copies the db.sqlite3 file to /tmp/db.sqlite3 and references this path as the database path, it will work.
There are, however, drawbacks.
1. /tmp is not a "real" directory, i.e. it is stored in memory. If the database is too large, you will run into problems.
2. Each instance has its own copy of the db.sqlite3 file, so changes are not shared and this does not scale well.
Here is a Django example:
Using SQLITE for local Django development for Google App Engine?
Short answer: no, it is not possible to use a SQLite database in a standard Google App Engine application, as it is not currently supported. However, you can try implementing your own configuration with the App Engine Flexible Environment, which allows you to take advantage of custom libraries through Infrastructure Customization.
If you want to experiment, here is a sample Django application designed to run with its default SQLite database on the App Engine Flexible Environment. Still, make sure to read the database notice, which provides alternative data storage options and explains that SQLite data does not persist upon instance restart.
I'm working with FileUpload in my project, and the project will receive heavy traffic (not out of ambition; the web application works with a payment system, which is why it will be under high load). I wonder what is better for storing users' files. My project is based on ASP.NET.
I suggest two variants:
save/load a BLOB object into/from the database
save/load files to/from a folder, and save info about the files in a table for recognizing their owner; the table design in BNF:
<user_files> ::= ( <id ::= int, primary_key, auto_increment, indexed><user_id ::= int><file_guid ::= varchar(255)>) | nil
I prefer BLOB, but I am afraid of future high load, because fetching data from the database requires more CPU time and memory allocations:
I need to use a connector, which will open a new socket to connect to the DB on localhost
then I must call a stored procedure to get the BLOB object
on the client side, I must get the result from the connector's classes
I must deserialize it
and only then send the file to the user, uncompressed and uncorrupted, so they can later open it in some editor (the files will mostly be images and MS Office documents)
All these operations add server load and take more time, so I think it would be slow with 2000 users online exchanging documents very quickly.
As for storing files on the filesystem, I see only one problem:
correctly securing access to the files, because different users must not see each other's documents; they must be hidden from other users. I'm concerned because the folder users upload files into must be accessible to the Windows account IIS runs under (IISUser...), otherwise users won't be able to upload anything, so the folder will effectively be public. The only solution I see is to write a Windows service and use the IIS folder only as a temporary location: the service would take files from it and move them to a secure folder that web users cannot see.
But maybe my ideas are heading in the wrong direction; that's why I'm asking for your advice, because I want to make the system as good as possible.
Thank you!
correctly securing access to the files
If you run into this situation, you are already violating the OWASP security guidelines, because your files are insecure direct object references. This means users can access the files directly, because you exposed a complete subfolder on IIS (like www.mysite.com/files/some_file.pdf) and your files probably have predictable names.
What you should do instead is:
Register the file in the database with a unique ID; not its data, just its name and the user who uploaded it (optionally including rights or roles).
Store the file on disk where the file name is the database identifier.
Don't allow direct access, but write a special HttpHandler that takes the ID of the document (just as you would do when storing the files inside your database).
When taking this approach, you achieve the following:
Files have a unique number, which prevents them from having naming conflicts on disk.
The HttpHandler can check in the database whether the user downloading the file has the proper rights to do so.
Because IDs are used, you are not vulnerable to canonical representation attacks, where the attacker does a request like this: www.mysite.com/file.ashx?file=..\web.config.
So from a security perspective, there is no problem in storing files on disk instead of in your database.
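A hedged sketch of such a handler in classic ASP.NET; FileRecord, FileRepository, and the storage path are hypothetical stand-ins for your own data access code, not part of the original answer:
using System;
using System.IO;
using System.Web;

// Hypothetical record and repository types used only for this sketch.
public class FileRecord
{
    public int Id;
    public string OriginalName;
    public int OwnerUserId;
}

public static class FileRepository
{
    // Hypothetical lookup: fetch name and owner for the given id, or null if unknown.
    public static FileRecord LookupFile(int id) { /* query the database here */ return null; }

    // Placeholder rights check; implement the real comparison against the logged-in user.
    public static bool UserCanAccess(System.Security.Principal.IPrincipal user, FileRecord record)
    { return false; }
}

public class FileDownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Only an integer id is accepted, so input like "..\web.config" is rejected outright.
        int id;
        if (!int.TryParse(context.Request.QueryString["id"], out id))
        {
            context.Response.StatusCode = 400;
            return;
        }

        var record = FileRepository.LookupFile(id);
        if (record == null || !FileRepository.UserCanAccess(context.User, record))
        {
            context.Response.StatusCode = 404; // don't reveal whether the file exists
            return;
        }

        // Files live outside the web root, named by their database id.
        var path = Path.Combine(@"D:\SecureFiles", id.ToString());
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=\"" + record.OriginalName + "\"");
        context.Response.TransmitFile(path);
    }
}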
Storing in a database will scale much better over time. If you use the folder solution, and someday you need or decide to use a cluster, synchronizing the files throughout the server farm will be hellish.
Even though fetching stuff from a database may be more CPU intensive, it does simplify a lot of things (your code will surely be more maintainable and portable), and you can always count on hosting and processing costs diminishing over time.
You can also cache stuff for speed. Either way I hope those files don't change a lot after being uploaded.
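As a hedged illustration of the caching remark (the one-hour duration is an arbitrary choice of mine), the response for a served file can be made cacheable by the client:
using System;
using System.Web;

public static class DownloadCaching
{
    // Illustrative only: allow the browser to cache the served file for an hour.
    public static void ApplyClientCaching(HttpResponse response)
    {
        response.Cache.SetCacheability(HttpCacheability.Private);
        response.Cache.SetExpires(DateTime.UtcNow.AddHours(1));
        response.Cache.SetMaxAge(TimeSpan.FromHours(1));
    }
}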
I am developing a web application in ASP.NET 4.5. One part of the application gives the user the option to upload images. The images are stored on Amazon S3. Right now the path I'm thinking of choosing is to use the Amazon SDK to upload the images to the bucket on S3 and serve them via CloudFront. The thing is, I think that using s3fs might be a better option.
If I mount an S3 bucket as a folder, then when the user uploads a photo I can continue the application's operation, knowing that the image will be transferred over the network to S3, so I don't need to wait until that process completes before continuing the code. All I have to do is wait until the image has finished uploading to my server and then continue.
I want to know if this is a good way to do this. Waiting for images to upload can take time, and I don't want the user to wait until all the images have been uploaded.
Any suggestions for the best implementation of image uploading?
That is a suitable approach if you will have multiple application servers that need to interact with a single bucket. You might want to consider configuring s3fs to use a local storage directory as a cache to improve performance, as writing directly to your s3fs mount will typically take longer than writing to local storage.
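If you instead stay with the SDK route mentioned in the question, here is a hedged sketch of a non-blocking upload with the AWS SDK's TransferUtility; the bucket name, region, and paths are placeholder assumptions:
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

public static class S3Uploader
{
    // Credentials come from the environment or an AWS profile; region is an example.
    private static readonly IAmazonS3 Client = new AmazonS3Client(RegionEndpoint.USEast1);

    public static Task UploadImageAsync(string localPath, string keyName)
    {
        var transfer = new TransferUtility(Client);
        // Start the upload as a task; the caller can await it later or let it
        // complete in the background while the response is returned to the user.
        return transfer.UploadAsync(localPath, "my-image-bucket", keyName);
    }
}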
I want to import a CSV file (already uploaded to blob storage) in Azure.
For example, I have uploaded test.csv to blob storage; now I just want to import that test.csv file in .NET (Azure), and after importing I will insert the data into an Azure database. I am using C# .NET. Please suggest how I can achieve this. I want to follow the steps below:
Create a CSV file with all rows.
Upload it as a blob.
Parse it with a worker role and insert it into the SQL Azure DB.
Thanks.
A bit more clarification around your question would be helpful. Are you trying to upload a file to Azure blob storage? Download it from there for your app to consume? What language(s) are you using?
There are plenty of examples of loading files into and pulling them from Azure blob storage using .NET, and at least a handful for doing it with Java or PHP.
If you can clarify what you're trying to do, I'd be happy to point you at the appropriate ones. :)
-- answer based on comment update --
The steps for retrieving the blob are fairly straight forward:
1) create your Azure storage client using your Azure storage credentials
add a using clause:
using Microsoft.WindowsAzure.StorageClient;
get a client for accessing blob storage:
CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("<nameofyourconfigsetting>");
CloudBlobClient tmpClient = account.CreateCloudBlobClient();
get a reference to the blob you want to download:
CloudBlob myBlob = tmpClient.GetBlobReference("container/myblob.csv");
2) read the blob & save to a file
myBlob.DownloadToFile("<path>/myblob.csv");
The save location can be the %temp% location, or if it's a large file you may want to allocate some local storage space and put it there. The other thing to keep in mind is that if you are doing this in a role instance, you'll need to have measures in place to prevent two instances from concurrently trying to process the same file. If the file is small enough, you can probably even keep it as a memory stream and process it that way; in that case, you can use the DownloadToStream method of the CloudBlob object.
For additional reading, I'd recommend checking out the MSDN library for the details of the StorageClient namespace and the CloudBlob class. Additionally, the Windows Azure Platform Training Kit has some good labs to help you get a better understanding of how Azure Storage works.
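For the parse-and-insert step, here is a hedged sketch assuming the CSV has already been downloaded locally; the table name, columns, and connection string are placeholder assumptions:
using System;
using System.Data.SqlClient;
using System.IO;

class CsvImporter
{
    static void Import(string csvPath, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            foreach (var line in File.ReadLines(csvPath))
            {
                // Naive split; use a real CSV parser if fields can contain commas or quotes.
                var fields = line.Split(',');
                using (var cmd = new SqlCommand(
                    "INSERT INTO ImportedRows (Col1, Col2) VALUES (@c1, @c2)", conn))
                {
                    cmd.Parameters.AddWithValue("@c1", fields[0]);
                    cmd.Parameters.AddWithValue("@c2", fields[1]);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}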
I am currently working on a project where I need to store a few files and folders in an encrypted manner. The project must be platform independent and hence will be written in Java.
Instead of encrypting individual files and folders, we have been thinking of using a virtual file system in which a single container file holds the complete file system.
Most of the open source virtual encrypted file system tools we studied work on the following principle:
mount the virtual file system (using a secure password)
use this file system
finally, unmount it
But the main problem we face is that anyone who has access to the PC (e.g. a network admin) will be able to see the decrypted files while the virtual drive is mounted. We want to restrict access to the encrypted file system at the process level: no one else in the same OS session should be able to see the contents, hence no drive mounting, etc.
So we are looking for an open source tool that provides APIs with which we can access files in the encrypted container without mounting it.
Can anyone point us to such a library?
Normally I'd say this is pretty cool:
http://www.pismotechnic.com/pfm/
But I recently accidentally copied a sub-repository of a Mercurial repository to another folder, and when that happened a lot of files got magically messed up. If you don't mind possible issues like that (e.g. by keeping backups), this could be a solution for you.
I stumbled upon this question while hunting for an alternative, because corrupted files are definitely not on my requirements list.