Updating an existing security group using Heat - OpenStack

I have created a few security groups using the CLI in my OpenStack tenant. I tried to update the rules of these security groups using Heat, but instead of an update, a new security group got created. Is there a way I can update the rules in these existing security groups using Heat?

If security groups are created via Heat stack-create, then those resource IDs are maintained by Heat. In that case, the security groups can be updated with a new set of rules via Heat's stack-update.

Resources not created by a Heat stack can't be managed by Heat. Since a security group together with its rules is a resource in itself, an existing security group can't simply be updated via Heat; you have to create the security group in Heat too.
For most use cases, you can simply create a new security group as part of your Heat stack and use it for the VMs created in that stack.
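As a minimal sketch (the resource name, ports, and CIDR below are placeholders, not values from the question), a security group whose rules live in the Heat template might look like this; once the stack owns the resource, editing the rules list and running stack-update changes the rules in place:

heat_template_version: 2016-10-14
resources:
  web_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      name: web-secgroup
      description: Security group managed by this stack
      rules:
        # Allow inbound HTTPS from anywhere; edit this list and run
        # stack-update to apply a new rule set.
        - protocol: tcp
          port_range_min: 443
          port_range_max: 443
          remote_ip_prefix: 0.0.0.0/0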

Related

How to get the role assignments of a resource through Resource Graph API?

I want to use the Azure Resource Graph API to get the role assignments of a resource (who are owners, contributors, etc.). That is, I want to create a query that finds the role assignments for a specific resource id that I provide. I've been going through the documentation, but I haven't found any way to get this information.
The only thing I found was this question from a couple of years ago, where it is mentioned as something that could be done somehow ("query the RBAC of each one of those resources").
Could anyone point me to how this could be done? Or is it not possible to do in Resource Graph API, and I need to use the Management API or something else?
I searched through the "Azure Resource Graph table and resource type reference" and the "Advanced Resource Graph query samples" pages, but didn't find an answer.
I tried to reproduce this in my environment and got the following results:
I created an Azure AD application and added the required API permissions.
I generated an access token using the parameters below:
https://login.microsoftonline.com/TenantID/oauth2/v2.0/token
client_id:xxxxxx-xxx-xxx-xxxx-xxxxxxxx
client_secret:ClientSecret
scope:https://management.azure.com//.default
grant_type:client_credentials
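For reference (TenantID and the credential values above are placeholders), those parameters translate into a token request along these lines; the access_token field of the JSON response is then used as a Bearer token on subsequent calls:

curl -X POST "https://login.microsoftonline.com/TenantID/oauth2/v2.0/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=xxxxxx-xxx-xxx-xxxx-xxxxxxxx" \
  -d "client_secret=ClientSecret" \
  -d "scope=https://management.azure.com//.default" \
  -d "grant_type=client_credentials"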
To list the role assignments at the subscription scope, I used the request below:
GET https://management.azure.com/subscriptions/subscriptionId/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01
Based on your requirement, you can change the scope and add a filter to narrow down the role assignments. Refer to the Microsoft documentation below:
List Azure role assignments using the REST API - Azure RBAC
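For example (the subscription, resource group, and resource path segments are placeholders), the same call scoped to a single resource, filtered to assignments made at or above that scope rather than at child scopes, would look like this:

GET https://management.azure.com/subscriptions/subscriptionId/resourceGroups/RGName/providers/Providername/ResourceType/Resource/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01&$filter=atScope()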
Currently it is not possible to retrieve role assignments via Azure Resource Graph. Alternatively, you can use Azure PowerShell or the Azure CLI, as shown below:
Get-AzRoleAssignment -Scope "/subscriptions/SubscriptionId/resourcegroups/RGName/providers/Providername/ResourceType/Resource"
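The equivalent Azure CLI command (same placeholder scope) would be:

az role assignment list --scope "/subscriptions/SubscriptionId/resourcegroups/RGName/providers/Providername/ResourceType/Resource"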

How to add an Azure custom policy so Azure Data Factory only uses Azure Key Vault during linked service creation?

How can I add an Azure custom policy that requires Azure Data Factory linked services to fetch data store credentials from Azure Key Vault, instead of having credentials placed directly in the ADF linked service? Please suggest ARM or PowerShell methods for implementing the policy.
As of yesterday, the Data Factory Azure Policy integration is available, which means you can now find some built-in policies that can be assigned to ADF.
One of those is exactly what you're asking for. You can find more information here.
Edit: Based on your comment, I'm updating this answer with the information you want. When it comes to custom policies, it's largely up to you to write a definition that fits your needs. For your particular case, I've created a policy that does what you want; please see here.
This policy audits your Data Factory linked services and checks whether they're using a self-hosted integration runtime. Currently, that check is only done for a few linked service types (if you look at the policy, you can see five of them), which means that if you want to check more types, you'll need to add them to the list of allowed values and select them when assigning the policy definition.
Bear in mind that for some linked service types, such as Key Vault, that check won't make sense, since that service can't use a self-hosted IR.
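To give an idea of the shape of such a definition (the Microsoft.DataFactory field alias and the parameter name here are illustrative assumptions, not the exact policy linked above), a custom policy that audits selected linked service types looks roughly like this:

{
  "mode": "All",
  "parameters": {
    "allowedLinkedServiceTypes": {
      "type": "Array",
      "metadata": { "description": "Linked service types the audit should cover" }
    }
  },
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.DataFactory/factories/linkedservices" },
        { "field": "Microsoft.DataFactory/factories/linkedservices/properties.type",
          "in": "[parameters('allowedLinkedServiceTypes')]" }
      ]
    },
    "then": { "effect": "audit" }
  }
}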

Grant one IAM role access to a large number of DynamoDB tables

I have an AppSync app defined using a master CloudFormation stack and more than a dozen nested stacks. Each nested stack defines a DynamoDB table, an AppSync DataSource for that table, and an IAM role for that DataSource to access that table. The DataSource depends on the role, which depends on the table.
I would like to consolidate these IAM roles, for three reasons:
The role definitions are very repetitive and boilerplate-y.
There are many copies of this app, and it adds up to a lot of IAM roles — enough that we're running close to the soft limits.
Some resolvers use DynamoDB batch operations to access multiple tables, so at least some of the IAM roles must grant access to multiple tables anyway.
I do not want to give the role blanket access to all DynamoDB tables in the account.
The simplest way to grant one role access to every required table would be to list them manually in the policy document. This has the obvious downside of requiring that the policy be manually kept in sync when new tables are added. However, there is also a dependency problem: the DataSource in a nested stack depends on a role in the master stack, which depends on tables in the nested stacks.
I would have liked to use tags: grant access to all DynamoDB tables that carry a certain tag, then set that tag on each table. That way, the IAM role would not need to be edited when a new table was added. But apparently DynamoDB does not support tag-based conditions.
Is there an easy way to grant a single IAM role access to many DynamoDB tables without granting access to all of DynamoDB and without individually listing the tables in the role?
If you can name your tables in a way that gives them a common prefix, you can use a wildcard in the resource ARN:
arn:aws:dynamodb:<Region>:<Account>:table/MyPrefix-*
That will match all tables whose names start with MyPrefix-.
If you are using generated names, you can probably use the AWS::StackName value in place of MyPrefix, but be aware that with nested stacks that value may get shortened.
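As a CloudFormation sketch (the role name, the action list, and MyPrefix are placeholders), a single role shared by the AppSync data sources could grant access by prefix like this:

SharedDataSourceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: appsync.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: TablesByPrefix
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:GetItem
                - dynamodb:PutItem
                - dynamodb:UpdateItem
                - dynamodb:DeleteItem
                - dynamodb:Query
                - dynamodb:BatchGetItem
                - dynamodb:BatchWriteItem
              Resource:
                # Every table in this account/region starting with MyPrefix-,
                # plus their indexes so Query can target GSIs/LSIs.
                - !Sub "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/MyPrefix-*"
                - !Sub "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/MyPrefix-*/index/*"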

Cross-account DynamoDB access

I want to migrate data in DynamoDB from one AWS account to another. Could you please advise whether this is possible using AWS Data Pipeline? Otherwise, what are the other options to do this?
I have tried migrating data within one account using Data Pipeline with HiveCopyActivity, but I need more details on how it can be done across accounts.
Yes, you can use Data Pipeline:
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-importexport-ddb.html
For cross-account migration, you need to share the S3 bucket in the source account with the destination account, along the lines sketched below.
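As a rough sketch (the bucket name and DESTINATION_ACCOUNT_ID are placeholders), a bucket policy on the export bucket in the source account could grant the destination account access like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDestinationAccount",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::DESTINATION_ACCOUNT_ID:root" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-ddb-export-bucket",
        "arn:aws:s3:::my-ddb-export-bucket/*"
      ]
    }
  ]
}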

Meteor, get LoginWithService data without creating account

In Meteor, how can I proceed with LoginWithService (or LinkWithService) and get the service data without actually creating an account?
In my app, I use the service API keys to do certain tasks; for that, I use LinkWithService().
But I also allow users to log in / create accounts with the LoginWithService() function. These two functions conflict with each other, because if an account already exists with password + service, it will force a log-out followed by another log-in.
I'm not sure if that made sense; anyhow, I would like to just get the login service data without actually creating an account. How can I do this?
