Deployment of ARM: Authorization failed for template resource 'sql - azure-resource-manager

I am trying to deploy a SQL Server logical server with PowerShell and an ARM template. I can successfully create a logical server in the portal with Contributor rights, but I cannot figure out what is wrong here.
I am using PowerShell ISE on Windows.
The ARM template is copied from https://github.com/Azure/azure-quickstart-templates/tree/master/101-sql-logical-server/
CODE:
Connect-AzAccount -Credential $Credential -Tenant $tenant -Subscription $subscription
#ARM Deployment
$templateFile = "C:\Azure\SQLServer\azuredeploy.json"
New-AzResourceGroupDeployment `
-Name SQLDeployment `
-ResourceGroupName my-rg `
-TemplateFile $templateFile
ERROR:
New-AzResourceGroupDeployment : 17.35.18 - Error: Code=InvalidTemplateDeployment; Message=The template deployment failed with error:
'Authorization failed for template resource 'sqlvasvtmcp42o3wko/Microsoft.Authorization/11fd61df-2336-5b96-9b45-ffc7160df111' of type 'Microsoft.Storage/storageAccounts/providers/roleAssignments'.
The client 'john.smith@mycompany.com' with object id '1115f3de-834b-4d28-a48f-ecaad01e3111' does not have permission to perform action 'Microsoft.Authorization/roleAssignments/write' at scope
'/subscriptions/111111111111111111111/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/sqlvasvtmcp42o3wko/providers/Microsoft.Authorization/roleAssignments/11111df-2336-5b96-9b45-ffc7160df168'.'.

The template you used enables Advanced Data Security for you. This creates a storage account and a service principal for your SQL server, then automatically assigns the service principal the Storage Blob Data Contributor role on the storage account.
To perform this operation, your user account needs to be an Owner or User Access Administrator on the resource group or subscription. Alternatively, you can create a custom role that includes Microsoft.Authorization/roleAssignments/write in its actions; that role will also be able to do this.
In conclusion, you have two options to fix the issue.
1. Navigate to the resource group or subscription in the portal -> Access control (IAM) -> Add -> add your user account to one of the roles mentioned above, e.g. Owner, and it will work fine (a PowerShell alternative is sketched after the parameters file below).
2. When you deploy the template, set enableADS to false in the azuredeploy.parameters.json file. The template will then not enable Advanced Data Security, and you will be able to create the SQL server with the Contributor role.
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "serverName": {
      "value": "GEN-UNIQUE"
    },
    "administratorLogin": {
      "value": "GEN-UNIQUE"
    },
    "administratorLoginPassword": {
      "value": "GEN-PASSWORD"
    },
    "enableADS": {
      "value": false
    }
  }
}
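For option 1, a hedged PowerShell sketch of the equivalent role assignment (someone who is already an Owner or User Access Administrator has to run it; the object ID and names below are the placeholders from the error above):
# Grant the deploying user Owner on the resource group
New-AzRoleAssignment `
    -ObjectId "1115f3de-834b-4d28-a48f-ecaad01e3111" `
    -RoleDefinitionName "Owner" `
    -ResourceGroupName "my-rg"
For option 2, the parameter should also be overridable at deployment time instead of editing the parameters file, since New-AzResourceGroupDeployment exposes template parameters as dynamic cmdlet parameters (e.g. add -enableADS $false to the deployment command above).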

The error clearly states that the account being used for the action doesn't have the proper role assignment to perform it:
the client 'john.smith@mycompany.com' with object id '1115f3de-834b-4d28-a48f-ecaad01e3111' does not have permission to perform action 'Microsoft.Authorization/roleAssignments/write' at scope '/subscriptions/111111111111111111111
This means your next step should be validating which role assignments that user has, and then checking whether the assigned role actually has permission to perform Microsoft.Authorization/roleAssignments/write.
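A minimal Az PowerShell sketch for that validation (the sign-in name and resource group are the placeholders from the error):
# List the role assignments the user holds at the resource group scope
Get-AzRoleAssignment -SignInName "john.smith@mycompany.com" -ResourceGroupName "my-rg"

# Inspect what a returned role actually allows
(Get-AzRoleDefinition -Name "Contributor").Actions
(Get-AzRoleDefinition -Name "Contributor").NotActions
Contributor, for example, lists Microsoft.Authorization/*/Write under its NotActions, which is why the role assignment in this template fails for a Contributor-only account.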

Related

AWS Amplify Build Issue - StackUpdateComplete

When running amplify push -y in the CLI, my project errors with this message:
["Index: 0 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]
How do I resolve this error?
The "Resource is not in the state stackUpdateComplete" is the message that comes from the root CloudFormation stack associated with the Amplify App ID. The Amplify CLI is just surfacing the error message that comes from the update stack operation. This indicates that the Amplify's CloudFormation stack may have been still be in progress or stuck.
Solution 1 – “deployment-state.json”:
To fix this issue, go to the S3 bucket containing the project settings and delete the “deployment-state.json” file in the root folder, as this file holds the app deployment states. The bucket name should end with, or contain, the word “deployment”.
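A hedged AWS CLI sketch of that cleanup (the bucket name below is a made-up example; yours will look something like amplify-<appname>-<env>-<id>-deployment):
# Find the Amplify deployment bucket, then delete the cached deployment state
aws s3 ls | grep deployment
aws s3 rm s3://amplify-myapp-dev-123456-deployment/deployment-state.json
After that, run amplify push -y again.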
Solution 2 – “Requested resource not found”:
Check the status of the CloudFormation stack and see whether it failed because of a “Requested resource not found” error indicating that the DynamoDB table “tableID” is missing, and confirm whether you deleted that table (possibly accidentally). Manually recreate the DynamoDB table and retry the push.
Solution 3A – “@auth directive with 'apiKey'”:
You may receive an error stating that “@auth directive with 'apiKey' provider found, but the project has no API Key authentication provider configured”. This error appears when you define a public authorisation rule in your GraphQL schema without specifying a provider. Public authorization means that everyone is allowed to access the API; behind the scenes the API is protected with an API key, so to use the public API you must have an API key configured.
The @auth directive allows you to override the default provider for a given authorization mode. To fix the issue, specify “iam” as the provider, which lets you use an "Unauthenticated Role" from Cognito Identity Pools for public access instead of an API key.
Below is sample code for a public authorisation rule:
type Todo @model @auth(rules: [{ allow: public, provider: iam, operations: [create, read, update, delete] }]) {
id: ID!
name: String!
description: String
}
After making the above changes, run “amplify update api” and add IAM as an auth provider; the CLI generates scoped-down IAM policies for the "UnAuthenticated" role automatically.
Solution 3B – “Parameters: [AuthCognitoUserPoolId] must have values”:
Another issue can occur when the default authorization type is API Key, which happens when you run “amplify add api” without specifying the API type. To fix this issue, follow these steps:
1. Delete the API.
2. Recreate it, specifying “Amazon Cognito User Pool” as the authorization mode.
3. Add IAM as an additional authorization type.
4. Re-enable the @auth directive in the newly created API schema.
5. Run “amplify push”.
Documentation:
Public Authorisation
Troubleshoot CloudFormation stack issues in my AWS Amplify project

Need of scope parameter in Microsoft.Identity.Web downstream API

I am using the Microsoft.Identity.Web package in my .NET Core API project, which calls the Graph API to get the directory objects of the user.
In the appsettings file, the downstream API settings are provided as below:
"DownstreamApi": {
"BaseUrl": "https://graph.microsoft.com/v1.0",
"Scopes": "Directory.Read.All"
},
The relevant permission (Directory.Read.All) is set up in the app registration.
But even if I leave the "Scopes" parameter blank, the API still gives me the directory objects.
So even with settings in the format below, it still works. Then what is the need of this scope parameter?
"DownstreamApi": {
"BaseUrl": "https://graph.microsoft.com/v1.0",
"Scopes": ""
},
The scope claim might not have been reflected in the token, so you might not be seeing any difference with the scope assigned.
user_impersonation is the default delegated permission/scope that exists initially for every web app or API in Azure AD.
Please make sure to add the required delegated or application permissions in the portal, and grant consent if required.
In your case, add the Directory.Read.All application permission.
For example, I added user.read:
Appsettings:
"DownstreamApi": {
"BaseUrl": "https://graph.microsoft.com/v1.0",
"Scopes": "user.read"
},
In Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
    string[] initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');

    services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
        .AddMicrosoftIdentityWebApp(Configuration)
        // acquire a token to call a protected web API
        .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
        .AddMicrosoftGraph(Configuration.GetSection("DownstreamApi"))
        .AddInMemoryTokenCaches();

    // other code
    ...
}
And in the controller we need to specify the scopes so that an access token is acquired for the required scopes when the request is made.
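A minimal sketch of such a controller action, assuming the GraphServiceClient registered by AddMicrosoftGraph above is injected (the controller and view names here are illustrative, not from the original post):
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Graph;
using Microsoft.Identity.Web;
using System.Threading.Tasks;

[Authorize]
public class HomeController : Controller
{
    private readonly GraphServiceClient _graphServiceClient;

    public HomeController(GraphServiceClient graphServiceClient)
    {
        _graphServiceClient = graphServiceClient;
    }

    // If the cached token does not cover the configured scopes, this attribute
    // triggers an incremental-consent challenge for "DownstreamApi:Scopes"
    [AuthorizeForScopes(ScopeKeySection = "DownstreamApi:Scopes")]
    public async Task<IActionResult> Index()
    {
        // A token for the configured scopes is acquired for this Graph call
        var me = await _graphServiceClient.Me.Request().GetAsync();
        return View(me);
    }
}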
References:
call Microsoft Graph | Microsoft Docs
(OR) active-directory-aspnetcore-webapp-openidconnect-v2 (github.com)
How can I create a new Azure App Registration without the user_impersonation OAuth2Permission? - Stack Overflow
If client_credentials is the grant type, you may need to use https://graph.microsoft.com/.default as the scope in the application settings, which will give you the permissions defined for your app:
"DownstreamApi": {
"BaseUrl": "https://graph.microsoft.com/v1.0",
"Scopes": "https://graph.microsoft.com/.default"
}
Also try using the tenant-specific /token endpoint in the request, rather than /common.
Please see:
ASP.NET Core - Call Graph API Using Azure Ad Access Token - Stack Overflow-Reference

Google_Drive_API_comments_error

Good afternoon. I am trying to write a function that will read the comments on a JPG file in Google Drive. However, when I try to run it, it gives me the following error:
An error occurred:
<HttpError 403 when requesting https://www.googleapis.com/drive/v2/files/1SbB4VwCIhaS9mdJ_xqcyjenZfxxrpTsY/comments?alt=json returned "Insufficient Permission: Request had insufficient authentication scopes.". Details: "[{'domain': 'global', 'reason': 'insufficientPermissions', 'message': 'Insufficient Permission: Request had insufficient authentication scopes.'}]">
from google.oauth2.credentials import Credentials
from googleapiclient import errors
from googleapiclient.discovery import build


def retrieve_comments(service, file_id):
    """Retrieve a list of comments.
    Args:
        service: Drive API service instance.
        file_id: ID of the file to retrieve comments for.
    Returns:
        List of comments.
    """
    try:
        comments = service.comments().list(fileId=file_id).execute()
        return comments.get('items', [])
    except errors.HttpError as error:
        print('An error occurred: %s' % error)
        return None


SCOPES = ['https://www.googleapis.com/auth/drive.file', 'https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/drive.file', ]
credentials = Credentials.from_authorized_user_file('token.json', SCOPES)
service = build('drive', 'v2', credentials=credentials)
print(retrieve_comments(service, '1SbB4VwCIhaS9mdJ_xqcyjenZfxxrpTsY'))
Update: this is what my token.json file looks like:
{"token": "ya29.a0ARrdaM-lbQRcrOHcWXHXVCZ--FHEBFmhetZy5mtKyE-KYg7kkqc7DCB3ELoGWm7DSFFqZ5n7MZ2qtpomhhhh3YjyPlDmFNiBFqW8jfzQcq2bUboJVHWly7w5KajgYBW6vXfpUG7XB-NiSRIGbgGXg7pADS9E", "refresh_token": "1//03RuSdM4_a83LCgYIARAAGAMSNwF-L9Ir99uSssRC7-EDBGOchESXQuY8uQh3BIAUSnUFmT60dipjtvqGslz9wyAl_OnLkoLWdko", "token_uri": "https://oauth2.googleapis.com/token", "client_id": "936594993582-hm55manlg9g4hkdeeisq6i4ogqk6are2.apps.googleusercontent.com", "client_secret": "irvWegrf57dztuP6_OigoGIT", "scopes": ["https://www.googleapis.com/auth/drive.metadata.readonly", "https://www.googleapis.com/auth/drive.file"], "expiry": "2021-08-19T12:26:14.658525Z"}
This is what my code looks like. Any ideas why this might be happening and what I can do to solve it?
Edit: For anyone who runs into the same problem, remember that the scopes in the quickstart must be the same as the ones in your Python file.
"Insufficient Permission" means that the user you are authenticated as does not have permission to do what you are trying to do, or that the user has not granted your application permission.
You are trying to use comments.list; this method requires that you have been authorized with one of the scopes listed in its documentation.
Now you appear to be using the following scopes:
'https://www.googleapis.com/auth/drive.file', 'https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/drive.file',
I'm not sure why you have drive.file twice, but let's ignore that for now.
As you can see, you appear to be using the proper scopes needed by this endpoint. What I suspect has happened is that you already authorized the user with a different set of scopes and then changed the scopes in your application. When you change the scopes, you need to make sure that you revoke the user's access in your application and prompt the user to authorize it again. You are probably running on a stored access token and/or refresh token that carries the old scopes (your token.json confirms this: its scopes are drive.metadata.readonly and drive.file, not the ones in your script).
The solution is simply to force your application to authorize the user again; make sure the consent screen pops up.
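A minimal sketch of forcing re-authorization, assuming the standard google-auth-oauthlib installed-app flow and a credentials.json client-secrets file (the file names follow the usual quickstart conventions and are not from the original post):
import os
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ['https://www.googleapis.com/auth/drive']

# Drop the cached token that still carries the old scopes
if os.path.exists('token.json'):
    os.remove('token.json')

# Run the consent flow again; prompt='consent' forces the consent screen to appear
flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
creds = flow.run_local_server(port=0, prompt='consent')

# Save the new token with the new scopes for later runs
with open('token.json', 'w') as token_file:
    token_file.write(creds.to_json())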

code for using O365 groups in a .net core app

I am working on a .NET Core app and have to integrate O365 security groups for role assignment. Does someone have sample code to share? It would be very helpful.
I have already used an Azure AD app registration for O365 authentication and it is working perfectly. The .NET Core app is hosted on IIS; when accessed by typing the URL into a browser, it redirects users to login.microsoftonline.com, and once authenticated, users then see the dashboard part of the app.
I am not so sure how O365 groups can be used in a .NET Core app for permissions management, so I am looking for a sample snippet. Thanks in advance.
You can query the Graph API either as your app or by impersonating the user, to read which groups the user is in, and then use those IDs to filter views or whatever else you need to do.
You can use the "List memberOf" endpoint:
https://learn.microsoft.com/en-us/graph/api/user-list-memberof?view=graph-rest-1.0
Hope it helps.
Office 365 security groups can be used for permissions management in your app by verifying whether a user is a member of a security group. You can achieve that by using the Microsoft Graph API, as MohitVerma suggested.
First, define a groups-to-roles mapping in your app (a configuration file seems to be a good place for that). Each group has a unique id, which you can get using e.g. Office 365 or Microsoft Graph and map to a custom role in your config.json file:
{
  "AppRoles": {
    "Admin": "d17a5f86-57f4-48f8-87a0-79761dc8e706",
    "Manager": "9a6a616e-5637-4306-b1fe-bceeaa750873"
  }
}
Then, after a successful login to the app, call the Graph API to get all groups the user belongs to. You will get a list of directory objects, each containing an id property:
GET https://graph.microsoft.com/v1.0/me/memberOf
{
  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#directoryObjects",
  "value": [
    {
      "@odata.type": "#microsoft.graph.directoryRole",
      "id": "43a63cc2-582b-4d81-a79d-1591f91d5558",
      "displayName": "Company Administrator",
      "roleTemplateId": "62e90394-69f5-4237-9190-012177145e10"
    },
    {
      "@odata.type": "#microsoft.graph.group",
      "id": "d17a5f86-57f4-48f8-87a0-79761dc8e706",
      "createdDateTime": "2017-07-31T17:36:25Z",
      "displayName": "Admins group",
      "securityEnabled": true
    }
  ]
}
You can use the MS Graph SDK for .NET to make the request and build Group objects from the response:
var memberOf = await graphServiceClient.Me.MemberOf.Request().GetAsync();
var userGroups = memberOf.OfType<Group>();
Finally, check the Id of each group against your custom roles, e.g.:
public string GetRole(IEnumerable<Group> userGroups, IConfiguration config)
{
    foreach (var group in userGroups)
    {
        if (group.Id == config["AppRoles:Admin"])
        {
            return "Admin";
        }
        if (group.Id == config["AppRoles:Manager"])
        {
            return "Manager";
        }
    }
    return "Unknown";
}
Make sure to grant permissions for your app to access Microsoft Graph.

CAS authentication failing with Drupal but working as standalone

I am trying to set up a CAS server locally, and I have Drupal running locally as well. I am using MongoDB for the CAS ticket registry and user authentication. For the CAS service registry I am using the file-based JSON service registry.
My service registry:
{
  "@class": "org.apereo.cas.services.RegexRegisteredService",
  "id": 3,
  "serviceId": "http(s)?:\\/\\/relo.local(:\\d{4,5})?(\\/.*)?$",
  "name": "relo.local",
  "evaluationOrder": 10,
  "accessStrategy": {
    "@class": "org.apereo.cas.services.DefaultRegisteredServiceAccessStrategy",
    "enabled": true,
    "ssoEnabled": true
  },
  "attributeReleasePolicy": {
    "@class": "org.apereo.cas.services.ReturnAllAttributeReleasePolicy"
  }
}
In MongoDB I created a collection called accounts in which I have created some dummy user records like this:
/* 1 */
{
  "_id" : ObjectId("5c24f234e51c56a02af5873f"),
  "username" : "casuser",
  "password" : "casuser",
  "firstname" : "wohn",
  "lastname" : "smith",
  "mail" : "casuser@test.com"
}
/* 2 */
{
  "_id" : ObjectId("5c24f24de51c56a02af58757"),
  "username" : "wasuser",
  "password" : "wasuser",
  "firstname" : "wohn",
  "lastname" : "smith",
  "mail" : "wasuser@test.com"
}
For the ticket registry, I do not need to create any collection; CAS takes care of creating the ticket registry collection and inserts a record into it when I try to log in.
Drupal is using the cas module, which uses the phpCAS library to connect to the CAS server.
Drupal version: 7.34
phpCAS version: 1.3.0
CAS Version: 6.1.0-RC1-SNAPSHOT
CAS provides its own login screen. After all these configurations, I am able to log in to the CAS server with the casuser and wasuser accounts.
My cas.properties file:
# Required CAS settings
cas.server.name=https://localhost:8443
cas.server.prefix=${cas.server.name}/cas
# Log4j config file location
logging.config: file:/etc/cas/config/log4j2.xml
# Control log levels via properties
logging.level.org.apereo.cas=DEBUG
# Restrict admin endpoints (like /status) to localhost
# cas.adminPagesSecurity.ip=127\.0\.0\.1
# Authenticate if any handler succeeds
cas.authn.policy.any.tryAll=false
# Disable authentication with a static list of credentials
# If below line is commented then you can use default
# username/password:casuser/Mellon
cas.authn.accept.users=ram::ram,shyam::shyam
# Ticket Grant Cookie (TGC) encryption key
cas.tgc.crypto.encryption.key=<my key>
# Ticket Grant Cookie (TGC) Signing key
cas.tgc.crypto.signing.key=<my key>
# Webflow encryption key
cas.webflow.crypto.encryption.key=<my key>
# Webflow signing key
cas.webflow.crypto.signing.key=<my key>
# Embedded Tomcat settings
server.servlet.context-path=/cas
server.port=8443
server.ssl.keyStore=file:/etc/cas/thekeystore
server.ssl.keyStorePassword=changeit
server.ssl.keyPassword=changeit
# JSON Service Registry
cas.serviceRegistry.json.location=file:/etc/cas/config/services-staging
# MongoDb Ticket registry
cas.ticket.registry.mongo.host=localhost
cas.ticket.registry.mongo.port=27017
cas.ticket.registry.mongo.userId=casDbAdmin
cas.ticket.registry.mongo.password=admin
cas.ticket.registry.mongo.databaseName=casdb
cas.ticket.registry.mongo.authenticationDatabaseName=casdb
# MongoDb Authentication
cas.authn.mongo.host=localhost
cas.authn.mongo.port=27017
cas.authn.mongo.userId=casDbAdmin
cas.authn.mongo.password=admin
cas.authn.mongo.databaseName=casdb
cas.authn.mongo.authenticationDatabaseName=casdb
cas.authn.mongo.usernameAttribute=username
cas.authn.mongo.attributes=
cas.authn.mongo.passwordAttribute=password
cas.authn.mongo.collection=accounts
# Authentication Policy
cas.authn.policy.requiredHandlerAuthenticationPolicyEnabled=true
# Default attributes.
cas.authn.attributeRepository.defaultAttributesToRelease=firstname,lastname,mail
# Spring Webflow
cas.webflow.autoconfigure=true
cas.webflow.alwaysPauseRedirect=false
cas.webflow.refresh=true
cas.webflow.redirectSameState=false
cas.webflow.session.lockTimeout=30
cas.webflow.session.compress=false
cas.webflow.session.maxConversations=5
cas.webflow.session.storage=true
I have configured Drupal to use the local CAS server for authentication. When I try to access Drupal, it redirects me to the CAS login screen. After I enter credentials in the login form and submit, authentication fails and shows me the error below, which I am not able to figure out. I am not very good at Java. CAS has embedded Spring Webflow, and I think the error is related to the webflow. During the authentication process, CAS has something called principal resolution and attribute resolution, which decide which authentication handler to use and how many attributes to attach to the response.
Error:
2019-01-24 14:35:32,348 INFO [org.apereo.inspektr.audit.support.Slf4jLoggingAuditTrailManager] - <Audit trail record BEGIN
=============================================================
WHO: casuser
WHAT: TGT-5-*****b6YD2V8OBQ4X-jet
ACTION: TICKET_GRANTING_TICKET_CREATED
APPLICATION: CAS
WHEN: Thu Jan 24 14:35:32 IST 2019
CLIENT IP ADDRESS: 0:0:0:0:0:0:0:1
SERVER IP ADDRESS: 0:0:0:0:0:0:0:1
=============================================================
>
2019-01-24 14:35:32,353 ERROR [org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/cas].[dispatcherServlet]] - <Servlet.service() for servlet [dispatcherServlet] in context with path [/cas] threw exception [Request processing failed; nested exception is org.springframework.webflow.execution.ActionExecutionException: Exception thrown executing org.apereo.cas.web.flow.GenerateServiceTicketAction#147375b3 in state 'generateServiceTicket' of flow 'login' -- action execution attributes were 'map[[empty]]'] with root cause>
java.lang.NullPointerException: null
Full Error Stack Trace: https://pastebin.com/vEvcvFte
Any help is appreciated. I have been struggling with this error for days and cannot figure out the issue. Please help.
You may want to consider force-updating your SNAPSHOT. While your logs don't show the commit id, it's possible you are running an old version with a bug that has since been fixed. If you examine the readme file of the project, you will find instructions on how to update the snapshot version via Gradle.
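A hedged sketch of that update, assuming the standard CAS Gradle overlay (check your overlay's README for the exact tasks):
# Force Gradle to re-resolve SNAPSHOT dependencies and rebuild the web application
./gradlew clean build --refresh-dependencies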
