SQL Server Always Encrypted and dumping migration scripts for validation - Flyway

I have two issues I can't figure out how to solve in Flyway without forking the repo, which we'd like to avoid.
Issue #1: SQL Server Always Encrypted connections. How do we override or inject enough information so that Flyway can set up a connection to an Always Encrypted database? The connection needs to reach Azure Key Vault to get a token to use for encryption/decryption, but this setup goes beyond the standard user name/password that the connection string needs. Also, you can't pass these values on the connection string.
More details here on how this would be done in JDBC, as I'm not a Java person.
Issue #2: Is there a way to retrieve the full list of SQL statements that are about to run during the migration, after all the "placeholders" are resolved? We need a way to check all the SQL scripts to ensure they don't run specific commands such as CREATE USER, DROP DATABASE, etc. We run this in a controlled environment, and while those commands work great during development, they can't be run in production. In production the database user will have elevated privileges, so we need to check the scripts before running them. I see the Dry Runs Pro feature, but that just writes to a file. We'd like to get this data back in a callback so we can validate it prior to the migration running.
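To illustrate the kind of check we want to run, here is a rough sketch in Python (outside Flyway itself) that scans a placeholder-resolved dry-run output file for forbidden statements before the real migrate would be invoked. The output file name and the forbidden-statement list are examples only, and it assumes the dry-run script has already been generated with the Pro feature.

import re
import sys

# Statements that must never run in production (examples only).
FORBIDDEN = [r"\bCREATE\s+USER\b", r"\bDROP\s+DATABASE\b", r"\bALTER\s+SERVER\s+ROLE\b"]

def validate(dry_run_file):
    # Read the placeholder-resolved SQL produced by the dry run.
    with open(dry_run_file, encoding="utf-8") as f:
        sql = f.read()
    violations = [p for p in FORBIDDEN if re.search(p, sql, re.IGNORECASE)]
    for pattern in violations:
        print("Forbidden statement matched:", pattern)
    return not violations

if __name__ == "__main__":
    # Exit non-zero so the pipeline can block the real flyway migrate.
    if not validate("dryrun.sql"):
        sys.exit(1)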

Related

How can I filter out a dependency failure from App Insights

Since moving to Azure Key Vault to manage some keys and connection strings, my App Insights Failures blade has been showing errors when attempting to connect to Key Vault.
The error is specifically: InProc | Microsoft.ManagedIdentity: EnvironmentCredential.GetToken
Azure.Identity.CredentialUnavailableException: EnvironmentCredential authentication unavailable. Environment variables are not fully configured.
I know what this means, and it's by design. We have chosen to use Managed Identities to handle the Key Vault connection, and as such we do not have any environment variables set in the app to connect to the Key Vault.
This image shows the default credential chain an App Service uses to reach Key Vault, as per the MS Docs. You can see the check for Environment Credentials fails before the successful call for the Managed Identity.
So while I realize this is just tracking "how it works", I don't like seeing all those failures. My question is two-fold:
Does this failure REALLY take no time at all? It seems that it's just "how it works".
Can I suppress this from being collected or reported in App Insights, without having to extract the data myself and use some other reporting system?
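Not an App Insights answer as such, but one way to stop the failure ever being generated is to skip the EnvironmentCredential step in the chain. A rough sketch in Python using the azure-identity and azure-keyvault-secrets packages (equivalent credential types and exclude options exist in the .NET SDK); the vault URL and secret name are placeholders:

from azure.identity import DefaultAzureCredential, ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Option 1: keep the default chain but skip the EnvironmentCredential probe entirely.
credential = DefaultAzureCredential(exclude_environment_credential=True)

# Option 2: Managed Identity is the intended path anyway, so use it directly.
# credential = ManagedIdentityCredential()

# Placeholder vault URL and secret name.
client = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)
secret = client.get_secret("my-connection-string")
print(secret.name)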

Azure Key Vault linked service not working: debug fails, trigger succeeds

I have created a linked service using Key Vault and then used that linked service in a data linked service (Azure SQL Database). Both linked services tested successfully on their own. I used them in a very simple pipeline, but when I debug the pipeline, it fails with an error:
'Invalid linked service reference. Name: '.
This refers to the Key Vault linked service.
When I trigger the pipeline, it works fine. I have published my changes many times, but with no success.
So my basic question is: my pipeline is not working on debug, but it works fine when triggered.
I faced exactly the same problem and performed the following actions:
Saved all existing pipelines
Validated all
Published all
Closed the Data Factory browser window/tab
Logged back into Data Factory
Opened the pipeline again, and debug worked fine. I didn't have to touch the Azure Key Vault configuration. It's most likely to do with cached vault configuration (or a sync issue with the cached vault config).
When a pipeline works by trigger but not by debug, that suggests either that there is a difference between the published version and the version in the UI, or that you have parameters that depend on the trigger.
It's a very strange thing I have noticed with linked services in ADF. I selected Azure Key Vault next to the password field, passed the AKV linked service name there, and it worked.
That suggests the JSON is not working properly with Azure Key Vault references in linked services. My issue has been resolved, although I am still unclear on the reason.
If anyone is looking for a resolution to the same problem, please refer below. Thank you.
Key Vault Linked Service

How do I view the contents of my "default" type Cloud SQL database in AppMaker?

I created a new AppMaker app and selected the "default" (as opposed to "custom") Cloud SQL database backend.
While I'm prototyping my app, I'd like to be able to inspect the contents of my database periodically as an admin to debug issues. With a custom Cloud SQL database this is easy because you can access it from the Cloud Console, but I don't see how to manually query a default Cloud SQL database.
I know that I can export my database to a Google Sheet, but that's inconvenient to do frequently.
How do I inspect the contents of my App Maker default Cloud SQL database for debugging (e.g. via a SQL command line, a UI tool, etc.)?
I believe it would be the same as with the custom one. The documentation explains:
A G Suite administrator can set up a Google Cloud SQL instance that is shared among App Maker apps in an organization. When this is enabled, a new database is automatically created for your app when you add at least one Cloud SQL data model. Choose this option if your app needs a database that is easy to use and requires no set up.
This means that you have correctly set up the instance information in the G Suite Admin console:
So to connect to your SQL instance, you just need to follow the instructions here. Then simply use the instance connection name where required. You will also need the database name, which you can get from the app settings or deployment settings in App Maker.
For preview mode it will be in the app settings. For any deployed version, it will be in the deployment settings.
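For example, once the Cloud SQL Auth Proxy is running locally against your instance connection name (e.g. ./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306), any MySQL client will do. A rough Python sketch with PyMySQL, where the user, password, and database name are placeholders you take from the settings above:

import pymysql  # pip install pymysql

# The proxy listens locally; user, password, and database name are placeholders.
connection = pymysql.connect(
    host="127.0.0.1",
    port=3306,
    user="root",
    password="your-password",
    database="your-appmaker-db-name",
)
try:
    with connection.cursor() as cursor:
        cursor.execute("SHOW TABLES")  # list the tables App Maker created
        for (table_name,) in cursor.fetchall():
            print(table_name)
finally:
    connection.close()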

How to establish a connection to DynamoDB in Python using boto3

I am a bit new to AWS and DynamoDB.
My aim is to embed a small piece of code.
The problem I am facing is how to make the connection in Python code. I made a connection using the AWS CLI by entering my access key ID and secret key.
But how do I do this in my code, given that I want to deploy it on other systems?
Thanks in advance!
First of all, read the documentation for boto3 DynamoDB; it's pretty simple:
http://boto3.readthedocs.io/en/latest/reference/services/dynamodb.html
If you want to provide access keys while connecting to DynamoDB, you can do the following:
client = boto3.client('dynamodb', aws_access_key_id='yyyy', aws_secret_access_key='xxxx', region_name='***')
But remember, it is against security best practices to store such keys in the code.
For best security, use IAM roles.
The boto3 driver will automatically use the IAM role if it is attached to the instance.
Link to the docs: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Also, if IAM roles are too complicated, you can install the AWS CLI and run aws configure on your server, and boto3 will use the keys from there (less secure than the previous approach).
After implementing one of these options, you can connect to DynamoDB without passing keys in code:
client = boto3.client('dynamodb', region_name='***')
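Once credentials are resolved that way, a short usage sketch; the table name, key, and region below are placeholders:

import boto3

# Credentials come from the IAM role or the aws configure profile; none are passed here.
dynamodb = boto3.resource('dynamodb', region_name='us-east-1')  # region is a placeholder
table = dynamodb.Table('my-table')  # placeholder table name

table.put_item(Item={'id': '123', 'name': 'example'})
response = table.get_item(Key={'id': '123'})
print(response.get('Item'))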

Confused about local data storage for an occasionally connected application in .NET

Can I use a SQL Server Express database as my local database for an occasionally connected application (OCA) written using Visual Studio? Would that require SQL Server to be installed on the client machine? It looks like the default architecture for OCAs in .NET is to use SQL Server Compact. However, SQL Server Compact doesn't permit the use of stored procedures. I use stored procedures for all the data access in my application, so I am confused about the best way to create an occasionally connected client to extend the functionality.
I currently have an ASP.NET web application that connects to a web service (WCF). The web service connects to the DB and calls stored procedures to get data and submit changes to data. Now I am trying to write a desktop application that can connect to the web service when a connection is available, and work locally when a connection is not available, using the MS Sync Framework. I don't quite understand how to architect this part.
Yes, the Local Data Cache works with SQL CE 3.5, and you cannot use stored procedures on the cache. Once you add a Local Data Cache item to your project, it automatically prepares all the necessary MS Sync Framework code for data synchronization with the main data source, plus all the necessary SQL scripts for the local database, and it will also offer to create either typed datasets or an entity data model to access the cache from your application.
The item doesn't work with SQL Server Express - it doesn't offer any data provider other than SQL Compact 3.5. Anyway, if you want to use SQL Server Express, you will have to either install it on the client machine or use another machine as the DB server, which defeats the whole purpose of the Local Data Cache.
By the way, I think the Local Data Cache works only against a database as the main data source, so you cannot use it if you want WCF services as the data source; you will have to write the store and synchronization yourself.
