We have a legacy system we're planning to migrate to Azure. The system uses SQLite files to store the data we need to access. After bouncing around between numerous solutions, we've decided to store the SQLite files in Azure File Storage and access them via a UNC path from a cloud worker role (we can't use Azure Functions or App Services, as they don't have the ability to use SMB).
This all seems to work OK, but what I'm nervous about is how SQLite is likely to react when trying to access a large file (effectively over a network) this way.
Does anyone have experience with this sort of thing, and if so, did you come across any problems?
The alternative plan was to use a web role and to store the SQLite files in blob storage. In order to access the data, though, we'd have to copy the blob to a temp file on the web server machine.
You can certainly use Azure File Storage, since it's effectively an SMB share backed by Azure Storage (which means it's durable). And because it's an SMB share, you can access it from your various worker role instances.
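From the role's point of view, the database is then just a file at a UNC path. A minimal sketch of opening it (Python purely for brevity; the storage account, share, and file name are made up) might look like:

```python
import sqlite3

# Hypothetical UNC path to the database file on the Azure file share.
DB_PATH = r"\\mystorageaccount.file.core.windows.net\myshare\legacy.db"

conn = sqlite3.connect(DB_PATH)

# Over SMB, lock acquisition is slower than on a local disk, so waiting a few
# seconds before giving up avoids spurious "database is locked" errors.
conn.execute("PRAGMA busy_timeout = 5000")

for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    print(name)

conn.close()
```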
As for your alternative choice (storing in blob storage and copying to temporary storage): that won't work, since each worker role instance is independent, so you'd end up with multiple, unsynchronized copies of your database, one per VM. And if a VM rebooted, you would immediately lose all data on that temporary drive.
Note: with web/worker role instances, as well as VMs, you can attach a blob-backed disk and store content durably there. However, you'd still have the issue of dealing with multiple instances, because a disk can only be attached to one VM at a time.
Assuming my rules are set up so that a user can read/write only the objects they own, I want to know what data the Firebase client (iOS/Android) stores on the device. In this example, does it also download the data that doesn't belong to the user onto the device and just block access to it, or will only the objects owned by the user be downloaded to the device?
Is there a way to keep some of the child objects in the cloud only, not saved locally? I am worried about the database size getting too large on devices.
Thanks!
Your Firebase app will only have access to data in the database that the rules permit. Security is enforced by the Firebase Realtime Database (not the app), so only data that the user is allowed to access will be downloaded.
In order for your app to work with data stored in the database, that data needs to be downloaded to the device. By default, data is cached in memory so that your app still works even if your device temporarily loses its network connection. The app only stores this data on disk if you enable offline persistence, which allows the app to continue working when no network is available.
Firebase apps automatically handle temporary network interruptions. Cached data is available while offline and Firebase resends any writes when network connectivity is restored.
When you enable disk persistence, the data is written locally to the device so your app can maintain state while offline, even if the user or the operating system restarts the app.
The Firebase app will automatically handle all of this functionality for you.
The size of the local cache will rarely be large enough to worry about, unless you are storing or downloading huge amounts of data, which is not recommended. If your database is large, you should implement strategies to restrict queries to only retrieve relevant data by filtering or paginating your queries.
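The exact client code is platform-specific (Swift on iOS, Java/Kotlin on Android), but to illustrate the filtering/pagination idea in one place, here is roughly what restricted queries look like with the Firebase Admin SDK for Python; the service-account file, database URL, path, and field names below are all made up:

```python
import firebase_admin
from firebase_admin import credentials, db

# Hypothetical service-account file and database URL.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(
    cred, {"databaseURL": "https://example-project.firebaseio.com"}
)

messages = db.reference("messages")

# Only fetch the 50 most recent items instead of the whole node.
recent = messages.order_by_child("timestamp").limit_to_last(50).get()

# Or page through the node a fixed number of items at a time.
first_page = messages.order_by_key().limit_to_first(100).get()
```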
The documentation shows how to connect using Storage Account Key:
https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows
That does work. However, I'd like to mount the file storage using a read-only SAS token.
Is this possible?
Unfortunately, no. You must use the storage account key when mounting Azure File shares, and anyone who has the storage account name and account key has full permission to manage and operate the file shares. From the feedback we can see that Microsoft has no plans to support this:
At the moment, Microsoft does not have plans to support SAS tokens with SMB access. Instead, we are looking into supporting AD integration for mounted file shares.
It's possible with a different approach, and it's secure. You still mount with CIFS (net use on Windows), but you store the credentials in Key Vault. You should mount the share at boot (with systemd), using the curl technique to fetch the credentials. You need to grant the VM access to Key Vault through an access policy; that part is also tricky to automate, but it's possible.
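As a rough sketch of the "fetch the credentials, then mount" step, assuming the VM has a managed identity that the Key Vault access policy allows; the vault name, secret name, share, and mount point below are placeholders:

```python
import subprocess
import requests

VAULT = "https://my-vault.vault.azure.net"      # placeholder vault
SECRET = "storage-account-key"                  # placeholder secret name
SHARE = "//mystorageaccount.file.core.windows.net/myshare"
MOUNT_POINT = "/mnt/sqlite"
STORAGE_ACCOUNT = "mystorageaccount"

# 1. Get an access token for Key Vault from the instance metadata service
#    (this is what the "curl technique" boils down to).
token = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={"api-version": "2018-02-01", "resource": "https://vault.azure.net"},
    headers={"Metadata": "true"},
).json()["access_token"]

# 2. Read the storage account key out of Key Vault.
key = requests.get(
    f"{VAULT}/secrets/{SECRET}",
    params={"api-version": "7.4"},
    headers={"Authorization": f"Bearer {token}"},
).json()["value"]

# 3. Mount the share with CIFS (on Windows this would be a `net use` call instead).
subprocess.run(
    ["mount", "-t", "cifs", SHARE, MOUNT_POINT,
     "-o", f"vers=3.0,username={STORAGE_ACCOUNT},password={key},serverino"],
    check=True,
)
```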
I wrote an app that contains data that is sensitive for certain users who do not want it to end up online. I want to allow them to use the app with Firebase offline only, with the option to sync at a later time. Is this possible with the current iOS and Android Firebase implementations, as a replacement for an SQLite database?
The Firebase Database is primarily an online database that can handle intermittent and medium-term lack of connectivity.
While the user is not connected, Firebase will keep a queue of pending write operations. It will aggregate those operations locally when it loads the data from disk into memory. This means that the larger the number of write operations while the user is offline, the longer loading will take and the more memory the database will use.
This is not a problem in the intended use case: online apps that need to handle short- or medium-term lack of connectivity. But it is not a suitable database for long-term offline use.
I am aware of how the permission system works in AWS:
By giving an EC2 instance a specific IAM role, it is possible to give all programs running on that specific EC2 instance some set of permissions for accessing other AWS services (e.g. permission to delete an EBS volume).
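For comparison, this is roughly what that looks like on AWS with boto3: no credentials appear in the code, because the instance role supplies them automatically (the region and volume id below are made up):

```python
import boto3

# On an EC2 instance with an IAM role attached, boto3 picks up the
# instance-profile credentials automatically -- no keys in the code.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Allowed or denied purely by the role's policy (e.g. ec2:DeleteVolume).
ec2.delete_volume(VolumeId="vol-0123456789abcdef0")  # placeholder volume id
```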
Is there something similar for OpenStack? If you would like a program that is running on an OpenStack server to be able to programmatically make changes through the OpenStack APIs, how do you solve that?
The scenario I am thinking of is this:
You create a new Rackspace OnMetal cloud server together with an extra Rackspace Cloud Block Storage volume, and copy a big input data file to it with scp. You log in to the server with ssh and start a long-running compute job. It would be great if the compute job by itself were able to copy the result files to Rackspace Cloud Files and then unmount and delete the Rackspace Cloud Block Storage volume that was used as temporary storage during the computation.
Rackspace's Role-Based Access Control (RBAC) system is similar to AWS IAM roles. It lets you create users that are restricted to specific APIs and capabilities, for example a read-only Cloud Files user or a Cloud Block Storage administrator.
You could create a new user that only has access to the areas required for this compute job, e.g. Cloud Block Storage and Cloud Files. Then your job would use that user's API key to request a token and call the Cloud Block Storage and Cloud Files APIs.
You did not mention a specific language, but I recommend using an SDK, as it will handle the API specifics and quirks and get you up and running more quickly.
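If it helps to see what an SDK wraps, the raw flow the job would follow is roughly this (the username and API key are placeholders; the token request is the standard Rackspace Identity v2.0 call):

```python
import requests

IDENTITY_URL = "https://identity.api.rackspacecloud.com/v2.0/tokens"
USERNAME = "compute-job-user"   # the restricted RBAC user
API_KEY = "xxxxxxxxxxxxxxxx"    # that user's API key (placeholder)

# 1. Exchange the API key for a token.
resp = requests.post(
    IDENTITY_URL,
    json={"auth": {"RAX-KSKEY:apiKeyCredentials": {
        "username": USERNAME, "apiKey": API_KEY}}},
)
resp.raise_for_status()
access = resp.json()["access"]
token = access["token"]["id"]

# 2. The service catalogue in the same response lists the endpoints this user
#    may call, e.g. Cloud Files ("cloudFiles") and Cloud Block Storage
#    ("cloudBlockStorage").
catalogue = {svc["name"]: svc["endpoints"] for svc in access["serviceCatalog"]}

# 3. Subsequent API calls pass the token in the X-Auth-Token header, e.g. to
#    list Cloud Files containers in the first available region.
files_url = catalogue["cloudFiles"][0]["publicURL"]
containers = requests.get(files_url, headers={"X-Auth-Token": token})
print(containers.text)
```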
I'd like to upload my SQLite database to some remote storage to have programmatic access to my database from various computers and mobile devices.
Is there a solution that is secure (so the data won't be stolen), provides good information privacy, and offers a programming interface for various languages (e.g. Python, C, Java on Android, etc.)?
SQLite has an Encryption Extension (SEE).
The SQLite Encryption Extension (SEE) is an add-on to the public domain version of SQLite that allows an application to read and write encrypted database files.
It is a commercial product, not public domain like SQLite itself.
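As a rough illustration of how an application would use it, assuming a SEE-enabled build of SQLite underneath (a stock build simply ignores the key pragma; the file name and key below are made up):

```python
import sqlite3

# Assumes the underlying SQLite library was built with the SQLite
# Encryption Extension (SEE); the stock library does not encrypt anything.
conn = sqlite3.connect("secrets.db")

# Supply the key before touching any data; SEE then encrypts/decrypts pages
# transparently as they are written and read.
conn.execute("PRAGMA key = 'correct horse battery staple'")

conn.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
conn.execute("INSERT INTO notes VALUES ('encrypted at rest')")
conn.commit()
conn.close()
```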
SQLite is an embedded database which must be stored on a filesystem accessible by the client application. SQLite doesn't natively support multiple concurrent clients, remote access, access control, or encryption. The requirements you list are much better served by a more traditional database server, such as MySQL or PostgreSQL. You can easily export SQLite data and import it into one of these databases.
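A minimal sketch of that export/import path, assuming PostgreSQL as the target and one hypothetical table (using psycopg2 for the PostgreSQL side):

```python
import sqlite3
import psycopg2

# Source: the existing SQLite file; target: a PostgreSQL server (placeholders).
src = sqlite3.connect("app.db")
dst = psycopg2.connect("host=db.example.com dbname=app user=app password=secret")

with dst, dst.cursor() as cur:
    # Recreate a hypothetical table and copy its rows across.
    cur.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    for row in src.execute("SELECT id, body FROM notes"):
        cur.execute("INSERT INTO notes (id, body) VALUES (%s, %s)", row)

src.close()
dst.close()
```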
If you are dead set on using SQLite, you can try storing the database on a shared, remote filesystem, à la Dropbox. You'll still have to worry about concurrent access and you'll lose many of the speed advantages of using SQLite, but the database will be accessible from multiple machines.