Azure SQL deployment task with AAD authentication - azure-deployment

This suggests that it's possible to use AAD rather than SQL Server Authentication to do the deployment. Is that true?
I turned on the managed identity on the Azure SQL server, but none of these three AAD options work. Do I need to do something else? What?

I tried to reproduce this in my environment and got the results below:
To use Azure AD authentication, you need to set an Azure AD user as the admin of your SQL server, like below:
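(As an aside, the same Azure AD admin assignment can also be done from the Azure CLI; this is only a rough sketch, the resource group, server name, user, and object ID are placeholders and exact parameter names may vary by CLI version:)
az sql server ad-admin create \
  --resource-group myResourceGroup \
  --server-name siliconserver \
  --display-name aaduser@contoso.onmicrosoft.com \
  --object-id 00000000-0000-0000-0000-000000000000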
Now you can run the Azure DevOps SQL Database Deployment task by selecting Active Directory - Password authentication, which uses your Azure AD user credentials, like below:
You can add your Azure subscription and Azure SQL details like below:
When you select Add, the Azure SQL task is added to the YAML pipeline like below:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
- main

pool:
  vmImage: windows-latest

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'

- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'SID subscription(01xxx365-f598-44d6-b4xd-e2b6xxxxa7)'
    AuthenticationType: 'aadAuthenticationPassword'
    ServerName: 'siliconserver.database.windows.net'
    DatabaseName: 'silicondb'
    aadSqlUsername: 'spuser@xxxxxxxxxxx.onmicrosoft.com'
    aadSqlPassword: 'xxxxxxxxxxx'
    deployType: 'InlineSqlTask'
    SqlInline: |
      SELECT * FROM Products
    IpDetectionMethod: 'IPAddressRange'
    StartIpAddress: '0.0.0.0'
    EndIpAddress: '255.255.255.255'
Response:
When I checked the same in SSMS, the table was created in the database, like below:
Reference:
azure-pipelines-tasks/README.md at master · microsoft/azure-pipelines-tasks (github.com)

Related

Create Insights Log query from Azure cli not Log Monitor saved search

Using the latest Azure CLI (2.28.1).
The creation of Kusto queries against Log Analytics with the azure cli is documented here:
https://learn.microsoft.com/en-us/cli/azure/monitor/log-analytics/workspace/saved-search?view=azure-cli-latest
using the saved-search directive. A minor irritation is that the CLI always creates legacy categories rather than non-legacy ones, and tags are sometimes not applied correctly.
But what I cannot find is how to create queries against Insights with the CLI. I have combed the Microsoft docs without a hit. Insights is a subset of Log Analytics (Monitor), but the queries are stored separately. Alarms can target both (i.e. Insights and Log Analytics).
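(For context, the saved-search creation mentioned above looks roughly like this with the CLI; the resource names and the query are placeholders, and exact parameters may vary by CLI version:)
az monitor log-analytics workspace saved-search create \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --name MySavedSearch \
  --category "MyCategory" \
  --display-name "Heartbeats by computer" \
  --saved-query "Heartbeat | summarize count() by Computer" \
  --tags project=mydomain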
You need to use the az monitor app-insights query command to run Kusto queries against Application Insights with the Azure CLI.
We have tested this in our environment. Using the cmdlets below, you can pull the appId for an Application Insights resource and also the total number of requests for that application over a time period of one day.
az monitor app-insights component show --app <app-insightsName> --resource-group <resource-Name> --query appId
az monitor app-insights query --app <appId> --analytics-query 'requests | summarize count() by bin(timestamp, 24h)' --offset 1h30m
Here is the reference document for more information about running Application Insights analytics queries using the Azure CLI.
With Bicep (az bicep build --file <bicep file>), resource definitions can be defined in a template (JSON) and then deployed with the Azure CLI (az deployment group create --resource-group <name> --template-file <bicep-generated template>).
The hard part was making parent and child resources in Bicep. I needed a parent query pack and a child query:
resource querypack 'Microsoft.OperationalInsights/queryPacks@2019-09-01-preview' = {
  name: 'DefaultQueryPack'
  location: 'northeurope'
  properties: {}
}

resource query 'Microsoft.OperationalInsights/queryPacks/queries@2019-09-01-preview' = {
  parent: querypack
  name: '6967c00c-9b46-4270-bee0-5a27b8b85cef'
  properties: {
    displayName: 'BadEventsBySdcFileId'
    description: ''
    body: '<kusto query>'
    related: {
      categories: [
        'applications'
      ]
      resourceTypes: [
        'microsoft.insights/components'
      ]
    }
    tags: {}
  }
}
Also, the query resource name has to be a GUID, which is not at all clear in the documentation. Tags are helpful for grouping by topic when hunting around for queries that belong, say, to a particular project domain.
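(Putting the two commands mentioned above together, the build-and-deploy step might look like this, assuming the file is named queries.bicep and the resource group already exists:)
az bicep build --file queries.bicep
az deployment group create --resource-group myResourceGroup --template-file queries.json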

BAD_GATEWAY when connecting Google Cloud Endpoints to Cloud SQL

I am trying to connect from GCP endpoints to a Cloud SQL (PostgreSQL) database in a different project. My endpoints backend is an app engine in the flexible environment using Python.
The endpoints API works fine for non-db requests and for db requests when run locally. But the deployed API produces this result when requiring DB access:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but take the connection string from a config file to use with psycopg2 SQL strings. This code works on CE servers in the same project.
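(For illustration, a minimal sketch of that kind of psycopg2 connection over the Cloud SQL unix socket; the values below are placeholders taken from the app.yaml further down:)
import psycopg2

# Connection parameters come from a config file in the real app; placeholders here.
conn = psycopg2.connect(
    host='/cloudsql/cloudsql-project-id:us-east1:instance-id',  # unix socket exposed in the flexible environment
    dbname='postgres',
    user='postgres',
    password='password',
)
with conn.cursor() as cur:
    cur.execute('SELECT 1')
    print(cur.fetchone())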
I also double-checked that the service account of the Endpoints project was given Cloud SQL Editor access in the project with the PostgreSQL db. And the db connection string works fine if the app engine is in the same project as the Cloud SQL db (i.e. not coming from the Endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the endpoints logfile and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id

beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id

endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
(This should be a comment, but the formatting really hurts readability, so I will update here.)
I am trying to reproduce your error and have come up with some questions:
How are you handling the environment variables in the tutorials? Have you hard-coded them or are you using environment variables? They are reset with the Cloud Shell (if you are using Cloud Shell).
This is not clear to me: do you see any kind of log file in Cloud SQL (without errors), or do you not see logs at all?
The Cloud SQL, app.yaml, and requirements.txt configurations are related. Could you provide more information on this? If you update the post, be careful not to post usernames, passwords, or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to this in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post to better understand where the issue comes from.

boxfuse dev db not provisioned correctly

I'm just starting with boxfuse and can't seem to find a way to get my dev database to be provisioned.
In my boxfuse.yml I have (for the database section):
database:
  # the name of your JDBC driver
  driverClass: com.mysql.jdbc.Driver
  # the username
  user: root
  # the password
  password: <password>
  # the JDBC URL
  url: jdbc:mysql://10.0.0.84:3306/dmsdb
  # any properties specific to your JDBC driver:
  properties:
    charSet: UTF-8
    hibernate.dialect: org.hibernate.dialect.MySQLInnoDBDialect
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness
  validationQuery: "/* MyApplication Health Check */ SELECT 1"
  # the minimum number of connections to keep open
  minSize: 8
  # the maximum number of connections to keep open
  maxSize: 32
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: false
If I try running it (boxfuse run), my application doesn't work at all.
boxfuse info produces the following:
Boxfuse client v.1.18.7.938
Copyright 2016 Boxfuse GmbH. All rights reserved.
Account: mlr11 (mlr11)
Info about mlr11/dms-service in the dev environment:
App Type : Single Instance with Zero Downtime updates
App URL : http://127.0.0.1:8082
DB Type : MySQL database
DB URL : jdbc:mysql://localhost:3306/boxfuse-dev-db
DB Host : localhost
DB Port : 3306
DB Database : boxfuse-dev-db
DB User : boxfuse-dev-db
DB Password : boxfuse-dev-db
DB Status : available
This is very different from what I was expecting. The URL, Database, User, and Password are not matching my boxfuse.yml file.
What am I missing? I know it must be something simple. I did all kinds of searches and read the docs a few times, but I can't seem to find what's wrong. Any pointers will be appreciated.
From the config file you posted, I am assuming this is a Dropwizard app.
Since your Boxfuse app was configured to use a MySQL database, Boxfuse automatically provisions a database in each environment when you first deploy your application there. In your case you can see the connection info for that database in the dev environment in the output you posted in your question.
Boxfuse exposes these values (db url, user, password, ...) as environment variables (https://cloudcaptain.sh/docs/databases#envvars) and automatically configures your framework (Dropwizard I assume) to use those instead of the ones included in your config file. It will do so by passing -Ddw.database.url=$BOXFUSE_DATABASE_URL -Ddw.database.user=$BOXFUSE_DATABASE_USER -Ddw.database.password=$BOXFUSE_DATABASE_PASSWORD as arguments to the JVM.
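(To illustrate, the resulting launch is roughly equivalent to something like the following; the jar and config file names are placeholders:)
java -Ddw.database.url=$BOXFUSE_DATABASE_URL \
     -Ddw.database.user=$BOXFUSE_DATABASE_USER \
     -Ddw.database.password=$BOXFUSE_DATABASE_PASSWORD \
     -jar dms-service.jar server boxfuse.yml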
Also double-check in the VirtualBox GUI that your VirtualBox installation is fully functional and able to start VMs, and that both the Boxfuse Dev VM and the instance of your application start properly.

Azure Deployment Task Fails on Powershell Credential setting

I have a Visual Studio Team Services build definition to deploy an ASP.NET MVC application to an Azure Web Site. I used the wizards to create my build definition, so it is a pretty vanilla implementation.
Most of the build goes well. The 'Get Source', 'Build Solution', and 'Test Assemblies' tasks all pass. But the 'Azure Deployment' task is failing, and it looks to me as though it is having problems with the PowerShell credentials.
The error states:
AADSTS50034: To sign into this application the account must be added to the mydomain.org directory.
Since this is running in the cloud, I don't know what account it is trying to use, so I am looking for some ideas on how to get past this step.
Here is the output of the Azure Deployment task.
******************************************************************************
Starting task: Azure Deployment: http://superpoolsquares.azurewebsites.net
******************************************************************************
Executing the powershell script: C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agents\default\tasks\AzureWebPowerShellDeployment\1.0.23\Publish-AzureWebDeployment.ps1
Importing Azure Powershell module.
Importing AzureRM Powershell module.
AzurePSCmdletsVersion= 1.0.0
Get-ServiceEndpoint -Name edb1710a-25b3-4037-93b0-58c00f83c038 -Context Microsoft.TeamFoundation.DistributedTask.Agent.Worker.Common.TaskContext
Username= ********
azureSubscriptionId= b4d2fa61-92ff-494a-9ff1-d1362895fc78
azureSubscriptionName= Visual Studio Professional with MSDN
Add-AzureAccount -Credential $psCredential
AADSTS50034: To sign into this application the account must be added to the mydomain.org directory.
Trace ID: 2cb051b9-6e76-4789-8a5d-e95a9486b731
Correlation ID: 22162659-23fa-4858-b957-9ccbf120654d
Timestamp: 2016-02-10 00:19:27Z: The remote server returned an error: (400) Bad Request.
Add-AzureRMAccount -Credential $psCredential
AADSTS50034: To sign into this application the account must be added to the mydomain.org directory.
Trace ID: ed10284e-87b6-4d45-8bd3-9ed1b25f4498
Correlation ID: 88960dea-0434-4eba-9f17-e4d6ceba1a41
Timestamp: 2016-02-10 00:20:21Z: The remote server returned an error: (400) Bad Request.
There was an error with the Azure credentials used for deployment.
It uses the account you configured in the Service Endpoints dialog, as follows:
According to the error message, the account you use hasn't been added to the mydomain.org directory, which is the AD directory trusted by the subscription. So you need to add your account to that directory from the Azure Portal and then try the deployment again.
If you don't want to make any changes in Azure, you can use "Certificate Based" authentication when configuring the connection.

Checklist When IBM WebSphere Application server is running up

I have an IBM WebSphere Application Server v8.5 (WAS) installed on Linux RedHat 6.
My question: how can I check the following from the command line (if such commands exist):
Is the application server running or not?
Is the web application deployed on it running or not?
Is the database connection (using a datasource) successful or not?
The easiest and quickest way to check all these things is to use the web administrative console available at http://yourHost:9060/ibm/console.
If you want to use commands, then:
Is the application server running or not?
You can check that by issuing the serverStatus command (it will check all servers):
%PROFILE_ROOT%/bin/serverStatus.sh -all
or for a specific server:
%PROFILE_ROOT%/bin/serverStatus.sh serverName
the output will be something like:
C:\IBM\WebSphere\AppServer85\profiles\AppSrv02\bin>serverstatus server1
ADMU0116I: Tool information is being logged in file
           C:\IBM\WebSphere\AppServer85\profiles\AppSrv02\logs\server1\serverStatus.log
ADMU0128I: Starting tool with the AppSrv02 profile
ADMU0500I: Retrieving server status for server1
ADMU0508I: The Application Server "server1" is STARTED
Is the web application deployed on it running or not?
There is no direct command for this. You can use a wsadmin script for that. A simple one could be like the one below; if it returns an entry, the application is running:
print AdminControl.completeObjectName('type=Application,name=myApplication,*')
For more details check this question How do I determine if an application is running using wsadmin?
Is the database connection (using a datasource) successful or not?
There is no direct command for this either. You can also use a wsadmin script for that. Here is a sample script:
ds = AdminConfig.getid('/DataSource:Default Datasource/')
AdminControl.testConnection(ds)
For more details check this page Testing data source connections using wsadmin scripting
The serverStatus.sh command is s..l..o..w.. If you want an answer today, there is a file in the logs folder with the process PID:
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/servername/servername.pid
That file contains the PID of the server process. If it is running:
ps -p pid
Then the server is up.
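(Put together, a small sketch; the profile path and server name here follow the examples above and should be adjusted to your installation:)
PID=$(cat /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/server1.pid)
if ps -p "$PID" > /dev/null 2>&1; then
  echo "server1 is running (PID $PID)"
else
  echo "server1 is not running"
fi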
1. ps -ef | grep dmgr
2. ps -ef | grep <application name>
Also grep SystemOut.log for e-business and verify the latest timestamp.
Log into the admin console, browse to the data sources, display from all scopes, select your datasource, and then click Test. As long as the node agent is running and has been restarted at least once since the datasource config and credentials were added, this test should be fairly accurate.