SQL MI restore DB from Blob over Private Link - RBAC

I'm able to restore a database backup file from Blob storage to a SQL MI database (part of a VNet) using managed identity, but only with public network access allowed on the storage account. I have a private link set up for my storage account on the same VNet as the SQL MI, so I would think that turning off public access on the storage account would still work, but instead I get the error "Operating system error 5 (Access is denied)". My SQL MI system-assigned managed identity has the Storage Blob Data Reader role, so I'm not quite sure what I'm missing here.
Any help would be most appreciated. Thank you
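For reference, a minimal sketch of the restore pattern being described. All names below are placeholders, not the asker's real resources: SQL MI authenticates to Blob storage with a credential whose name is the container URL and whose identity is `Managed Identity`, and with public access disabled the blob hostname must also resolve to the private endpoint IP from the SQL MI subnet.

```python
# Sketch only: builds the two T-SQL statements used for a managed-identity
# restore from a blob container. STORAGE_URL is a placeholder.
STORAGE_URL = "https://mystorageacct.blob.core.windows.net/backups"

create_credential = f"""
CREATE CREDENTIAL [{STORAGE_URL}]
WITH IDENTITY = 'Managed Identity';
"""

restore_db = f"""
RESTORE DATABASE [MyDb]
FROM URL = '{STORAGE_URL}/mydb.bak';
"""

print(create_credential)
print(restore_db)
```

Running these against the instance (via SSMS or `sqlcmd`) is the part the question already has working; the private-endpoint case additionally depends on DNS inside the VNet resolving the storage hostname privately.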

Related

Restrict user access to Azure CosmosDB MongoDB Database

Azure Cosmos DB's MongoDB API provides Read-Write and Read-Only keys at the account level.
The Cosmos DB SDKs and REST API let you create users and define access at the database and document level.
• But what I need is to create a username/password pair with access restricted to a single MongoDB database, similar to what an installable MongoDB provides.
• How can a user connect to only one Cosmos DB MongoDB database using RoboMongo?
Any help is highly appreciated.
Amit -
Today, Cosmos DB access is provided through two keys, a Master key and a Read-Only key. However, if you want to restrict user access per collection, per document, etc., you have to use Resource Tokens. You can read more about them here, and please take a look at the Channel 9 video to see the implementation details. A resource token service can be implemented as an Azure Function. Here is code to get you started.
But if you are using RoboMongo you have to use the keys as defined in this document. At this time you cannot define different users and keys per database.
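To illustrate the mechanism a resource-token broker builds on, here is a sketch of the HMAC request signing that the Cosmos DB REST API uses for key-based auth. The key and resource link below are made-up placeholders.

```python
# Sketch of Cosmos DB's key-based request signing: an HMAC-SHA256 over a
# lowercase verb/resource-type/date payload, using the base64-decoded key.
import base64
import hashlib
import hmac
import urllib.parse

def cosmos_auth_token(verb, resource_type, resource_link, date_http, master_key_b64):
    # Payload per the Cosmos DB REST API: lowercase verb, resource type and
    # date, each field newline-terminated, with a trailing empty field.
    payload = (f"{verb.lower()}\n{resource_type.lower()}\n"
               f"{resource_link}\n{date_http.lower()}\n\n")
    key = base64.b64decode(master_key_b64)
    sig = base64.b64encode(
        hmac.new(key, payload.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    # The whole token is URL-encoded into the Authorization header.
    return urllib.parse.quote(f"type=master&ver=1.0&sig={sig}", safe="")

token = cosmos_auth_token(
    "GET", "docs", "dbs/mydb/colls/mycoll",         # placeholder resource link
    "Tue, 01 Apr 2025 00:00:00 GMT",
    base64.b64encode(b"fake-master-key").decode(),  # placeholder key
)
print(token)
```

A resource-token service hands clients short-lived tokens of the same general shape (with `type=resource`) instead of the account keys.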

Mount Azure File Storage using SAS token for authentication

The documentation shows how to connect using Storage Account Key:
https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows
That does work. However, I'd like to mount the file storage using a read-only SAS token.
Is this possible?
Unfortunately, no. The storage account key must be used when mounting Azure File shares, and anyone who has the storage account name and account key has full permissions to manage and operate the file shares. From the feedback we can see that Microsoft has no plan to change that.
At the moment, Microsoft does not have plans to support SAS tokens with SMB access. Instead, we are looking into supporting AD integration for mounted file shares.
A different, secure approach is possible. You still use a cifs mount (`net use` on Windows), but you store the credentials in Key Vault. You can mount on boot (with a systemd unit) using curl to fetch the credentials. You need to grant the VM access through a Key Vault access policy; automating that is tricky too, but it's possible.
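A sketch of that Key Vault approach, with placeholder names throughout: `get_secret` stands in for the curl/IMDS token dance (or the Azure SDK), and the result is the `mount` command a boot-time unit would run.

```python
# Sketch: fetch the storage key from Key Vault at boot, then build the
# cifs mount command for the Azure File share. All names are placeholders.
import shlex

def get_secret(vault: str, name: str) -> str:
    # Placeholder: in practice, obtain a managed-identity token from IMDS
    # and GET the secret from the vault's /secrets endpoint.
    return "fake-storage-account-key"

def build_mount_command(account: str, share: str, key: str) -> str:
    source = f"//{account}.file.core.windows.net/{share}"
    opts = f"vers=3.0,username={account},password={key},serverino"
    return f"mount -t cifs {shlex.quote(source)} /mnt/{share} -o {shlex.quote(opts)}"

key = get_secret("my-vault", "storage-key")
cmd = build_mount_command("mystorageacct", "data", key)
print(cmd)
```

The win over a SAS token is operational rather than cryptographic: the account key never lives on the VM's disk, and rotating it only means updating the vault secret.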

Firebase database validation connection to other databases and/or storage

Could I have server-side code on the Firebase database that references other databases or storage that I created? For example, could the URL of an image in Firebase Storage be verified to exist when the URL is written to the database?

How do I give an OpenStack server permissions to call the OpenStack APIs?

I am aware of how the permission system works in AWS:
By giving an EC2 instance a specific IAM role, it is possible to give all programs running on that EC2 instance some set of permissions for accessing other AWS services (e.g. permission to delete an EBS volume).
Is there something similar for OpenStack? If you would like a program running on an OpenStack server to be able to programmatically make changes through the OpenStack APIs, how do you solve that?
The scenario I am thinking of is this:
You create a new Rackspace OnMetal cloud server together with an extra Rackspace Cloud Block Storage volume, and copy a big input data file to it with scp. You log in to the server with ssh and start a long-running compute job. It would be great if the compute job could, by itself, copy the result files to Rackspace Cloud Files and then unmount and delete the Rackspace Cloud Block Storage volume that was used as temporary storage during the computation.
Rackspace's Role-Based Access Control (RBAC) system is similar to AWS IAM roles. It lets you create users that are restricted to specific APIs and capabilities: for example, a read-only Cloud Files user, or a Cloud Block Storage administrator.
You could create a new user that only has access to the areas required for this compute job, e.g. Cloud Block Storage and Cloud Files. Your job would then use that user's API key to request a token and call the Cloud Block Storage and Cloud Files APIs.
You did not mention a specific language, but I recommend using an SDK, as it will handle the API specifics and quirks and get you up and running more quickly.
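For the "request a token" step above, here is a sketch of the auth request body for the Rackspace Identity v2.0 API; the username and API key are placeholders. The body is POSTed as JSON to the Identity endpoint's `/v2.0/tokens` path, and the token comes back in the response.

```python
# Sketch: build the JSON body for a Rackspace Identity v2.0 token request
# using API-key credentials. Username and key are placeholders.
import json

def auth_payload(username: str, api_key: str) -> str:
    return json.dumps({
        "auth": {
            "RAX-KSKEY:apiKeyCredentials": {
                "username": username,
                "apiKey": api_key,
            }
        }
    })

body = auth_payload("compute-job-user", "0123456789abcdef")
print(body)
```

The restricted user's credentials can live in a file readable only by the job, so a compromise of the job exposes only the Cloud Files and Cloud Block Storage capabilities it was granted.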

Using redis with SQL server

I am developing a web app and came across Redis for key-value storage. I do have a relational DB, SQL Server, but as I have a multi-tenant system there will be a separate schema for each customer.
I was wondering how viable it would be to use both Redis and SQL Server together. I was thinking of storing the user ID and schema name in Redis, so the app can then connect to the right SQL Server schema for that user.
It's perfectly viable to use both Redis and SQL Server together.
With more details about the kinds of schema differences you expect, we might be able to provide more insight.
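A minimal sketch of the pattern the question describes: Redis caches the user-id-to-schema mapping so the app knows which SQL Server schema to use without a lookup query on every request. The in-memory class below stands in for a `redis.Redis` client (same `get`/`set` call shape), and the names are placeholders.

```python
# Sketch: cache tenant schema lookups in Redis, falling back to SQL Server
# on a cache miss. FakeRedis stands in for redis.Redis(decode_responses=True).
class FakeRedis:
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value, ex=None):
        self._data[key] = value  # `ex` (TTL) ignored in the stand-in

def load_schema_from_sql(user_id: str) -> str:
    # Placeholder for a SELECT against the tenant table in SQL Server.
    return f"tenant_{user_id}"

def schema_for_user(cache, user_id: str) -> str:
    key = f"user:{user_id}:schema"
    schema = cache.get(key)
    if schema is None:
        schema = load_schema_from_sql(user_id)
        cache.set(key, schema, ex=3600)  # cache for an hour
    return schema

cache = FakeRedis()
print(schema_for_user(cache, "42"))  # tenant_42
```

The TTL matters here: if a tenant's schema assignment can ever change, a bounded expiry keeps Redis from serving the stale mapping forever.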
