set MFA options for user using keystoneclient module - openstack

I am trying to enable MFA by default for an OpenStack user using the Python keystoneclient API
keystoneclient.users.update
I have a sample curl command from the OpenStack documentation, where you update the "options" attribute of the user account with a JSON object:
{
"user": {
"options": {
"multi_factor_auth_enabled": true,
"multi_factor_auth_rules": [
["password", "totp"]
]
}
}
}
When I try to do the same update in Python, I get the error below:
keystoneauth1.exceptions.http.BadRequest: Invalid input for field 'options': u'{ "multi_factor_auth_enabled": true,
"multi_factor_auth_rules": [["password", "totp"]]}' is not of type
'object'
Failed validating 'type' in schema['properties']['options']:
My code looks like this:
MFA_dict = '{ "multi_factor_auth_enabled": true, "multi_factor_auth_rules": [["password", "totp"]]}'
user = keystone.users.update(user_id, options=MFA_dict)

Never mind, I figured it out.
MFA_dict was a string, so I just had to remove the single quotes to make the object a dictionary,
and change lowercase true to uppercase True to make it a Python boolean:
MFA_dict = { "multi_factor_auth_enabled": True, "multi_factor_auth_rules": [["password", "totp"]]}
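Equivalently, if you are starting from a JSON string (for example, one copied from the documentation), json.loads produces the dictionary the API expects and converts JSON true to Python True. A small sketch; the keystone.users.update call is left commented out since it needs an authenticated client:

```python
import json

# The JSON payload as a string, as copied from the docs
MFA_json = '{"multi_factor_auth_enabled": true, "multi_factor_auth_rules": [["password", "totp"]]}'

MFA_dict = json.loads(MFA_json)  # now a dict, with JSON true mapped to Python True
print(type(MFA_dict).__name__)   # dict
print(MFA_dict["multi_factor_auth_enabled"])  # True

# user = keystone.users.update(user_id, options=MFA_dict)  # as in the question
```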


How can I filter a subscription using a custom resolver

I am working on a messaging app using AWS AppSync.
I have the following message type...
type Message
@model
@auth(
rules: [
{ allow: groups, groups: ["externalUser"], operations: [] }
]
) {
id: ID!
channelId: ID!
senderId: ID!
channel: Channel @connection(fields: ["channelId"])
createdAt: AWSDateTime!
text: String
}
And I have a subscription onCreatemessage. I need to filter the results to only channels that the user is in. So I get a list of channels from a permissions table and add the following to my response mapping template.
$extensions.setSubscriptionFilter({
"filterGroup": [
{
"filters" : [
{
"fieldName" : "channelId",
"operator" : "in",
"value" : $context.result.channelIds
}
]
}
]
})
$util.toJson($messageResult)
And it works great. But if a user is in more than 5 channels, I get the following error.
{
"message": "Connection failed: {"errors":[{"message":"subscription exceeds maximum value limit 5 for operator `in`.","errorCode":400}]}"
}
I am new to VTL. So my question is: how can I break that filter up into multiple OR'd filters?
According to Creating enhanced subscription filters, "multiple rules in a filter are evaluated using AND logic, while multiple filters in a filter group are evaluated using OR logic".
Therefore, as I understand it, you need to split $context.result.channelIds into chunks of 5 and add a separate filter (each holding a single in rule) to the filter group for each chunk, so that the chunks are OR'd together.
Here is a VTL template that will do this for you:
#set($filterGroup = [])
#foreach($channelId in $context.result.channelIds)
#set($group = $foreach.index / 5)
#if($filterGroup.size() < $group + 1)
$util.qr($filterGroup.add({
"filters" : [{
"fieldName" : "channelId",
"operator" : "in",
"value" : []
}]
}))
#end
$util.qr($filterGroup.get($group).filters.get(0).value.add($channelId))
#end
$extensions.setSubscriptionFilter({
"filterGroup": $filterGroup
})
You can see this template running here: https://mappingtool.dev/app/appsync/042769cd78b0e928db31212f5ee6aa17
(Note: the Mapping Tool reports errors on the lines where the filter array is populated dynamically. You can safely ignore them.)
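For clarity, the grouping logic can also be sketched outside VTL. Here is a Python version (the channel IDs are made up) that builds one filter per chunk of five IDs, so the chunks are OR'd together per the documentation quoted above:

```python
def build_filter_group(channel_ids, limit=5):
    # One filter per chunk of `limit` IDs; AppSync ORs the filters in a
    # filter group, while the rules inside a single filter are ANDed.
    return [
        {"filters": [{"fieldName": "channelId",
                      "operator": "in",
                      "value": channel_ids[i:i + limit]}]}
        for i in range(0, len(channel_ids), limit)
    ]

group = build_filter_group(["channel-%d" % n for n in range(7)])
print(len(group))  # 2 filters: the first holds 5 IDs, the second holds 2
```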
If you want server-side filtering for GraphQL subscriptions: Amplify now supports server-side filters for subscriptions. This blog post explains the feature:
https://aws.amazon.com/blogs/mobile/announcing-server-side-filters-for-real-time-graphql-subscriptions-with-aws-amplify/

Read write AD user custom attributes

I want to store some additional user attributes as key/value pairs on all AD users, for example: 'colorTheme:red', 'userLang:english', etc.
I have added these custom attributes via Azure AD B2C > User Attributes.
I am trying to read and write them as described at the link below:
https://learn.microsoft.com/en-us/graph/extensibility-open-users
I did try using the Graph API calls:
GET https://graph.microsoft.com/v1.0/users?$select=displayName&$expand=extensions
I do get the user details, but the custom attributes are not returned.
GET https://graph.microsoft.com/v1.0/me/extensions
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('ad-user-id')/extensions",
"value": []
}
How do I get and set the value of a custom attribute?
Is there any other way of storing additional user properties?
The following steps can be used to get the extension properties (custom attributes) defined for a user in Azure AD B2C.
Call the following endpoint to get all the existing extension properties. Replace the {{extensionappobjectidwithoutdashes}} with your extension app's object Id without dashes.
https://graph.microsoft.com/v1.0/applications/{{extensionappobjectidwithoutdashes}}/extensionProperties
This will return a result that looks something like this (I have removed the GUIDs):
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#applications('extensionappobjectidwithoutdashes')/extensionProperties",
"value": [
{
"id": "",
"deletedDateTime": null,
"appDisplayName": "",
"dataType": "String",
"isSyncedFromOnPremises": false,
"name": "extension_<extensionappIdwithoutdashes>_extensionAttribute1",
"targetObjects": [
"User"
]
},
{
"id": "",
"deletedDateTime": null,
"appDisplayName": "",
"dataType": "String",
"isSyncedFromOnPremises": false,
"name": "extension_<extensionappIdwithoutdashes>_extensionAttribute2",
"targetObjects": [
"User"
]
}
]
}
When calling the Graph API to get user details, add the name of the extension attribute to the $select query:
https://graph.microsoft.com/v1.0/users?$select=displayName,extension_<extensionappIdwithoutdashes>_extensionAttribute1,extension_<extensionappIdwithoutdashes>_extensionAttribute2
Notes
Use the following docs to see how to create extension properties using the MS Graph APIs:
extensionProperty resource type
The extensionappobjectidwithoutdashes and extensionappIdwithoutdashes are different GUIDs. Find both under App Registrations > b2c-extensions-app.
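As a sanity check for the naming, the Graph property name of a custom attribute can be computed from the extension app's application (client) ID. A Python sketch; the ID below is made up:

```python
def extension_attr(app_client_id, attribute):
    # Graph exposes B2C custom attributes as
    # extension_<appIdWithoutDashes>_<attributeName>
    return "extension_%s_%s" % (app_client_id.replace("-", ""), attribute)

# Hypothetical application (client) ID of the b2c-extensions-app
app_id = "25883231-668a-43a7-80b2-5685c3f874bc"
name = extension_attr(app_id, "extensionAttribute1")
select_url = "https://graph.microsoft.com/v1.0/users?$select=displayName," + name
print(name)  # extension_25883231668a43a780b25685c3f874bc_extensionAttribute1
```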

How to return true if x data exists in JSON or CSV from API on Wordpress website

Is there any easy method to call an API from a WordPress website and return true or false, depending on whether some data is there?
Here is the API:
https://api.covalenthq.com/v1/137/address/0x3FEb1D627c96cD918f2E554A803210DA09084462/balances_v2/?&format=JSON&nft=true&no-nft-fetch=true&key=ckey_docs
Here is the JSON:
{
"data": {
"address": "0x3feb1d627c96cd918f2e554a803210da09084462",
"updated_at": "2021-11-13T23:25:27.639021367Z",
"next_update_at": "2021-11-13T23:30:27.639021727Z",
"quote_currency": "USD",
"chain_id": 137,
"items": [
{
"contract_decimals": 0,
"contract_name": "PublicServiceKoalas",
"contract_ticker_symbol": "PSK",
"contract_address": "0xc5df71db9055e6e1d9a37a86411fd6189ca2dbbb",
"supports_erc": [
"erc20"
],
"logo_url": "https://logos.covalenthq.com/tokens/137/0xc5df71db9055e6e1d9a37a86411fd6189ca2dbbb.png",
"last_transferred_at": "2021-11-13T09:45:36Z",
"type": "nft",
"balance": "0",
"balance_24h": null,
"quote_rate": 0.0,
"quote_rate_24h": null,
"quote": 0.0,
"quote_24h": null,
"nft_data": null
}
],
"pagination": null
},
"error": false,
"error_message": null,
"error_code": null
}
I want to check if there is "PSK" in contract_ticker_symbol; if it exists and "balance" is > 0 ... then return true.
Is there any painless method? Because I'm not a programmer...
The Python requests library can handle this. You'll have to install it with pip first (package installer for Python).
I also used a website called JSON Parser Online to see what was going on with all of the data first so that I would be able to make sense of it in my code:
import requests

def main():
    url = ("https://api.covalenthq.com/v1/137/address/0x3FEb1D627c96cD918f2E554A803210DA09084462/"
           "balances_v2/?&format=JSON&nft=true&no-nft-fetch=true&key=ckey_docs")
    try:
        response = requests.get(url).json()
        for item in response['data']['items']:
            # First, find 'PSK' in the list
            if item['contract_ticker_symbol'] == "PSK":
                # The balance comes back as a string, so convert before comparing
                return int(item['balance']) > 0
        return False  # 'PSK' was not found at all
    except requests.ConnectionError:
        print("Exception")

if __name__ == "__main__":
    print(main())
This is what is going on:
I am pulling all of the data from the API.
I am using a try/except clause because the code needs to handle the case where it can't connect to the site.
I am looping through all of the 'items' to find the one whose contract ticker symbol is 'PSK'.
I am checking the balance in that item and returning the boolean you asked for (true only if 'PSK' exists and its balance is greater than 0).
The script runs itself at the end, but you can always rename the function and have some other code call it.

AppSync BatchResolver AssumeRole Error

I’m trying to use the new DynamoDB BatchResolvers to write to two DynamoDB tables in an AppSync resolver (currently using a Lambda function to do this). However, I’m getting the following permission error when looking at the CloudWatch logs:
“User: arn:aws:sts::111111111111:assumed-role/appsync-datasource-ddb-xxxxxx-TABLE-ONE/APPSYNC_ASSUME_ROLE is not authorized to perform: dynamodb:BatchWriteItem on resource: arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException;
I’m using TABLE-ONE as my data source in my resolver.
I added "dynamodb:BatchWriteItem" and "dynamodb:BatchGetItem" to TABLE-ONE’s permissions policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"dynamodb:BatchGetItem",
"dynamodb:BatchWriteItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:UpdateItem"
],
"Resource": [
"arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-ONE",
"arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-ONE/*",
"arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO",
"arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO/*"
]
}
]
}
I have another resolver that uses the BatchGetItem operation and was getting null values in my response; changing the table’s policy access level fixed the null values.
However, checking the box for BatchWriteItem doesn’t seem to solve the issue, and neither does adding the permissions to the data source table’s policy.
I also tested the resolver with AppSync’s test feature; the evaluated request and response work as intended.
Where else could I set the permissions for a BatchWriteItem operation between two tables? It seems like it's invoking the user's assumed-role instead of the table's role - can I 'force' it to use the table's role?
It is using the role that you configured for the table in the AppSync console. Note that that particular role should have AppSync as a trusted entity.
Alternatively, if you use the new-role tick box when creating the data source in the console, it takes care of this for you.
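For reference, "AppSync as a trusted entity" means the role's trust policy lets the AppSync service principal assume it. A minimal sketch of such a trust policy (account-specific conditions omitted):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "appsync.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```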
variable "dynamodb_table" {
description = "Name of DynamoDB table"
type = any
}
# value of var
dynamodb_table = {
"dyn-notification-inbox" = {
type = "AMAZON_DYNAMODB"
table = data.aws_dynamodb_table.dyntable
}
"dyn-notification-count" = {
type = "AMAZON_DYNAMODB"
table = data.aws_dynamodb_table.dyntable2
}
}
locals {
roles_arns = {
dynamodb = var.dynamodb_table
kms = var.kms_keys
}
}
data "aws_iam_policy_document" "invoke_dynamodb_document" {
statement {
# include the Batch* actions that BatchGetItem/BatchWriteItem resolvers need
actions = [
"dynamodb:BatchGetItem",
"dynamodb:BatchWriteItem",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:UpdateItem",
"dynamodb:Query"
]
# dynamic dynamodb table
# for dynamodb table v.table.arn and v.table.arn/*
resources = flatten([
for k, v in local.roles_arns.dynamodb : [
v.table.arn,
"${v.table.arn}/*"
]
])
}
}
# make policy
resource "aws_iam_policy" "iam_invoke_dynamodb" {
name = "policy-${var.name}"
policy = data.aws_iam_policy_document.invoke_dynamodb_document.json
}
# attach role
resource "aws_iam_role_policy_attachment" "invoke_dynamodb" {
role = aws_iam_role.iam_appsync_role.name
policy_arn = aws_iam_policy.iam_invoke_dynamodb.arn
}
Result:
resources: [
'arn:aws:dynamodb:eu-west-2:xxxxxxxx:table/my-table',
'arn:aws:dynamodb:eu-west-2:xxxxxxxx:table/my-table/*'
]

Is it insecure to just validate with SimpleSchema, and not use allow/deny rules?

I am using SimpleSchema (the node-simpl-schema package) in an isomorphic way. Validation messages show up on the client as well as from meteor shell.
My question is whether or not this set up is actually secure, and if I need to also write allow/deny rules.
For example:
SimpleSchema.setDefaultMessages
messages:
en:
"missing_user": "cant create a message with no author"
MessagesSchema = new SimpleSchema({
content: {
type: String,
label: "Message",
max: 200,
},
author_id: {
type: String,
autoform:
defaultValue: ->
Meteor.userId()
custom: ->
if !Meteor.users.findOne(_id: @obj.author_id)
"missing_user"
},
room_id: {
type: String,
}
}, {tracker: Tracker})
In meteor shell I test it out and it works as intended.
> Messages.insert({content: "foo", author_id: "asd"})
/home/max/Desktop/project/meteor/two/.meteor/local/build/programs/server/packages/aldeed_collection2-core.js:501
throw error; // 440
^
Error: cant create a message with no author
Should I duplicate this validation logic in my allow/deny rules? Or can I let my allow function always return true, like I'm doing now?
I follow some very simple rules that ensure the application is secure:
Do not use allow/deny rules - deny all client-side write requests.
If the client needs to write something in the database, they must do so through Meteor methods.
Ideally, the Meteor methods would call a function (which can be shared or server-specific code), and the validity of the database modifier would then be checked (using the schema) inside that function.
Optionally, you can also create client-side methods, which would clean the object and carry out its own validation using the schema before calling the server-side method.
