Not able to connect to Cosmos DB Table API using Data Factory - azure-cosmosdb

The key for the Cosmos Table API is stored in Key Vault as the secret "cosmostablekey". I then created the linked service below, which references that secret through the Key Vault linked service.
{
    "name": "CosmosDbSQLAPILinkedService",
    "properties": {
        "type": "CosmosDb",
        "typeProperties": {
            "connectionString": "AccountEndpoint=https://XXXXXXX.table.cosmos.azure.com:443/;Database=TablesDB",
            "accountKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "ls_cosmos_key",
                    "type": "LinkedServiceReference"
                },
                "secretName": "cosmostablekey"
            }
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}
When creating the linked service in ADF I used key authentication as the authentication type, and the test connection failed with the error below.
Error code: 9082
Details: The CosmosDb key is in a wrong format. The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.
Activity ID: f0c9c682-12de-4b53-95e9-7abe7ea722b7.
I'm sure I copied the key string to Key Vault correctly.
I used this Microsoft doc as a reference for connecting to Cosmos DB from ADF:
microsoftdoctoconnectcosmosDB
Thanks for the quick help.

The issue is resolved.
For key authentication, the linked service only needs to reference the cosmostablekey secret. Moreover, the endpoint must be specified as
https://XXXX.documents.azure.com:443/
instead of https://XXXX.table.cosmos.azure.com:443/
It is working fine for me now.
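For reference, the working linked service is the same template as above with only the endpoint changed (placeholders kept as in the question):
{
    "name": "CosmosDbSQLAPILinkedService",
    "properties": {
        "type": "CosmosDb",
        "typeProperties": {
            "connectionString": "AccountEndpoint=https://XXXXXXX.documents.azure.com:443/;Database=TablesDB",
            "accountKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "ls_cosmos_key",
                    "type": "LinkedServiceReference"
                },
                "secretName": "cosmostablekey"
            }
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}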

Related

Azure Data Factory Cosmos DB sql api 'DateTimeFromParts' is not a recognized built-in function name

I am using a Copy Activity in my Data Factory (V2) to query Cosmos DB (NoSQL/SQL API). I have a WHERE clause that builds a datetime from parts using the DateTimeFromParts function. This query works fine when I execute it in the Cosmos DB Data Explorer query window, but when I use the same query in my Copy Activity I get the following error:
"message":"'DateTimeFromParts' is not a recognized built-in function name."}]}
ActivityId: ac322e36-73b2-4d54-a840-6a55e456e15e, documentdb-dotnet-sdk/2.5.1 Host/64-bit
I am trying to convert a string attribute like '20221231' (which translates to Dec 31, 2022) to a date so I can compare it with the current date, and I use DateTimeFromParts to build that date. Is there another way to convert '20221231' to a valid date?
SELECT * FROM c WHERE
DateTimeFromParts(StringToNumber(LEFT(c.userDate, 4)), StringToNumber(SUBSTRING(c.userDate, 4, 2)), StringToNumber(RIGHT(c.userDate, 2))) < GetCurrentDateTime()
I suspect the error might be because documentdb-dotnet-sdk is an old version. Is there a way to specify which SDK version the activity uses?
I tried to repro this and got the same error.
Instead of changing the format of the userDate column with the DateTimeFromParts function, try converting the GetCurrentDateTime() value to the userDate column's format. Both sides are then fixed-width yyyyMMdd strings, so a plain string comparison orders them the same way the underlying dates would.
Workaround query:
SELECT * FROM c
WHERE c.userDate < replace(left(GetCurrentDateTime(), 10), '-', '')
Input data
[
    {
        "id": "1",
        "userDate": "20221231"
    },
    {
        "id": "2",
        "userDate": "20211231"
    }
]
Output data
[
    {
        "id": "2",
        "userDate": "20211231"
    }
]
Apologies for the slow reply here. Holidays slowed getting an answer for this.
There is a workaround that allows you to use the .NET SDK v3, which in turn gives you access to the DateTimeFromParts() system function released in .NET SDK v3.13.0.
Option 1: Use AAD authentication (i.e. a Service Principal, or a System- or User-Assigned Managed Identity) for the Linked Service object in ADF to Cosmos DB. This automatically picks up the .NET SDK v3.
Option 2: Modify the linked service template. First, click Manage in the ADF designer, then click Linked Services, select the connection, and click the {} icon to open the JSON template; you can then set useV3 to true. Here is an example.
{
    "name": "<CosmosDbV3>",
    "type": "Microsoft.DataFactory/factories/linkedservices",
    "properties": {
        "annotations": [],
        "type": "CosmosDb",
        "typeProperties": {
            "useV3": true,
            "accountEndpoint": "<https://sample.documents.azure.com:443/>",
            "database": "<db>",
            "accountKey": {
                "type": "SecureString",
                "value": "<account key>"
            }
        }
    }
}

Insert a Map in DynamoDB

I'm trying to insert map data into a DynamoDB table using API Gateway. Here's my payload:
{
    "TableName": "OrderDB",
    "Item": {
        "_id": {"S": "04FA887FP2S5R"},
        "_rev": {"S": "9-12e098e2490e1b7c9782597226689403"},
        "_merchantId": {"S": "AXN3EKXT0SJ61"},
        "doc": {"M": {"something": "storedHere"}}
    }
}
And my body mapping template in API Gateway:
{
    "TableName": "$input.path('$.TableName')",
    "Item": $input.json('$.Item')
}
Everything works as expected if I remove the doc item from the payload. When I try to post with the map included, I get the following error:
{
    "__type": "com.amazon.coral.service#SerializationException",
    "Message": "Unexpected value type in payload"
}
All of the examples I see suggest using the DynamoDB document mapping object, but I don't think that is possible here because I'm using API Gateway to connect directly to DynamoDB. Is it possible to insert a map this way?
Each entry inside a map needs the same type-descriptor format as the rest of the items, so instead of "doc": {"M": {"something": "storedHere"}} it should be "doc": {"M": {"something": {"S": "storedHere"}}}.
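As a quick sketch (the API Gateway invoke URL below is a hypothetical placeholder), posting the corrected payload through the mapping template above looks like this:
import requests

# Hypothetical invoke URL for the API Gateway stage fronting DynamoDB.
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/order"

payload = {
    "TableName": "OrderDB",
    "Item": {
        "_id": {"S": "04FA887FP2S5R"},
        "_rev": {"S": "9-12e098e2490e1b7c9782597226689403"},
        "_merchantId": {"S": "AXN3EKXT0SJ61"},
        # Each nested map attribute carries its own type descriptor.
        "doc": {"M": {"something": {"S": "storedHere"}}},
    },
}

r = requests.post(url, json=payload)
print(r.status_code, r.text)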

Simple GetItem with ctx.identity.username returns null

I'm using AppSync with IAM auth, a DynamoDB resolver, and Cognito. I'm trying to do the following.
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "userId": $util.dynamodb.toDynamoDBJson($ctx.identity.username)
    }
}
$ctx.identity.username is supposed to contain the userId generated by Cognito, and I'm trying to use it to fetch the current user's data.
Client side, I'm using AWS Amplify, which tells me I'm currently logged in:
this.amplifyService.authStateChange$.subscribe(authState => {
    if (authState.state === 'signedIn') {
        this.getUserLogged().toPromise();
        this._isAuthenticated.next(true);
    }
});
getUserLogged is the Apollo query that is supposed to return the user data.
What I've tried:
If I leave it like this, getUserLogged returns null.
If I replace $util.dynamodb.toDynamoDBJson($ctx.identity.username) in the resolver with a known userId, like $util.dynamodb.toDynamoDBJson("b1ad0902-2b70-4abd-9acf-e85b62d06fa8"), it works: I get that user's data.
I tried the test tool on the resolver page, but it only supplies fake data, so I can't rely on it.
Did I make a mistake? Everything looks good to me, but I guess I'm missing something.
Can I see exactly what $ctx.identity contains?
You'll want to use $ctx.identity.cognitoIdentityId to identify Cognito IAM users:
https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference.html#aws-appsync-resolver-context-reference-identity
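Applied to the resolver above (assuming the table's userId values are Cognito identity IDs rather than usernames), that is the only change needed:
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "userId": $util.dynamodb.toDynamoDBJson($ctx.identity.cognitoIdentityId)
    }
}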
You can see the contents of $ctx.identity by creating a Lambda resolver and logging the event, or by creating a local resolver and returning the input that the mapping template receives:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-local-resolvers.html
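For example, a minimal sketch of the local-resolver approach (assuming a resolver attached to a None data source): the request mapping template echoes the identity as the payload, and the response template returns it unchanged.
Request mapping template:
{
    "version": "2017-02-28",
    "payload": $util.toJson($ctx.identity)
}
Response mapping template:
$util.toJson($ctx.result)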
My cognitoIdentityId looks like this: eu-west-1:27ca1e79-a238-4085-9099-9f1570cd5fcf

Post request to firebase without unique key

I want to post new data to my Firebase API, but every time I do so, a new key like -L545gZW7E6Ed6iqXRok is generated with my object inside it. I would like to save my object directly, without this new key. This SO question explains how to do it using the set() method, but I would like to achieve the same with Postman, since I am posting directly to Firebase.
URL: https://my-firebase-project.firebaseio.com/galaxies.json with method POST.
// current saving like this in firebase
"0000001": {
    "active": false,
    "name": "tp-milky-way",
    "time": 60
},
"-L545gZW7E6Ed6iqXRok": {
    "0000011": {
        "active": false,
        "name": "tp-andromeda",
        "time": 60
    }
}
// I want it without the key
"0000001": {
    "active": false,
    "name": "tp-milky-way",
    "time": 60
},
"0000011": {
    "active": false,
    "name": "tp-andromeda",
    "time": 60
}
EDIT: I found out I can use PUT with the entire JSON object that was originally 'put' to Firebase, with the additions or deletions, and Firebase compares the new PUT request with what's already there and updates accordingly. I don't know whether the behaviour really works the way I understand it, or whether there is a better way to add data without auto-generated keys.
When you use the POST verb, Firebase generates a new location. This is in line with REST-ful idioms: POST is used to create a new object in a server-defined new location.
If you want to write to an existing location, or a new location you control, use the PUT verb. In this case the data will be written to exactly the location you specify in the URL, and it will overwrite any existing data at that location.
If you want to update part of the data at an existing location, but leave other pieces of the data unmodified, use the PATCH verb.
If your HTTP client doesn't support specifying a verb, you can optionally pass the verb in the X-HTTP-Method-Override header.
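For example, a minimal sketch using Python's requests against the URL from the question:
import requests

base = "https://my-firebase-project.firebaseio.com"

# PUT writes to exactly the location named in the URL -- no generated key.
galaxy = {"active": False, "name": "tp-andromeda", "time": 60}
r = requests.put(f"{base}/galaxies/0000011.json", json=galaxy)
print(r.status_code, r.json())

# PATCH updates only the fields you send and leaves siblings unmodified.
requests.patch(f"{base}/galaxies/0000011.json", json={"active": True})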

Can't create cloudsql role for Service Account via API

I have been trying to use the API to create service accounts in GCP.
To create a service account, I send the following POST request:
base_url = f"https://iam.googleapis.com/v1/projects/{project}/serviceAccounts"
auth = f"?access_token={access_token}"
data = {"accountId": name}
# Create a service Account
r = requests.post(base_url + auth, json=data)
This returns a 200 and creates a service account.
Then, this is the code that I use to assign the specific roles:
sa = f"{name}@dotmudus-service.iam.gserviceaccount.com"
sa_url = base_url + f'/{sa}:setIamPolicy' + auth
data = {
    "policy": {
        "bindings": [
            {
                "role": roles,
                "members": [f"serviceAccount:{sa}"]
            }
        ]
    }
}
# Set the policy on the service account
r = requests.post(sa_url, json=data)
If roles is set to one of roles/viewer, roles/editor, or roles/owner, this approach works.
However, if I want to use a specific role such as roles/cloudsql.viewer, the API tells me that this option is not supported.
Here are the roles:
https://cloud.google.com/iam/docs/understanding-roles
I don't want to give this service account full viewer rights to my project; that's against the principle of least privilege.
How can I set specific roles from the API?
How can I set specific roles from the api?
EDIT:
Here is the response using the Resource Manager API, with roles/cloudsql.viewer as the role:
POST https://cloudresourcemanager.googleapis.com/v1/projects/{project}:setIamPolicy?key={YOUR_API_KEY}
{
    "policy": {
        "bindings": [
            {
                "members": [
                    "serviceAccount:sa@{project}.iam.gserviceaccount.com"
                ],
                "role": "roles/cloudsql.viewer"
            }
        ]
    }
}
{
    "error": {
        "code": 400,
        "message": "Request contains an invalid argument.",
        "status": "INVALID_ARGUMENT",
        "details": [
            {
                "@type": "type.googleapis.com/google.cloudresourcemanager.projects.v1beta1.ProjectIamPolicyError",
                "type": "SOLO_REQUIRE_TOS_ACCEPTOR",
                "role": "roles/owner"
            }
        ]
    }
}
With the code provided, it appears you are appending to the first base_url, which is not the correct context for modifying project roles.
That places the request at https://iam.googleapis.com/v1/projects/{project}/serviceAccounts/{sa}:setIamPolicy, which sets the policy on the service account resource itself, not on the project.
The POST path for granting project-level roles needs to be: https://cloudresourcemanager.googleapis.com/v1/projects/{project}:setIamPolicy
If you drop /serviceAccounts from the base_url and target the Resource Manager endpoint instead, it should work.
Edited response to add more information due to your edit.
OK, I see the issue here; sorry, but I had to set up a new project to test this.
cloudresourcemanager.projects.setIamPolicy replaces the entire policy. You can add constraints to what you change, but you have to submit the complete policy in JSON for the project.
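A minimal read-modify-write sketch with requests (project, access_token, and the service account email are assumptions to fill in, following the question's variables):
import requests

project = "my-project"                        # assumption: your project ID
access_token = "..."                          # assumption: a valid OAuth2 access token
sa = f"sa@{project}.iam.gserviceaccount.com"  # assumption: the target service account

crm = f"https://cloudresourcemanager.googleapis.com/v1/projects/{project}"
headers = {"Authorization": f"Bearer {access_token}"}

# 1. Fetch the full current policy (getIamPolicy is a POST in this API).
policy = requests.post(f"{crm}:getIamPolicy", headers=headers).json()

# 2. Append the new binding locally.
policy.setdefault("bindings", []).append({
    "role": "roles/cloudsql.viewer",
    "members": [f"serviceAccount:{sa}"],
})

# 3. Write the complete, merged policy back.
r = requests.post(f"{crm}:setIamPolicy", headers=headers, json={"policy": policy})
print(r.status_code, r.json())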
Note that gcloud has a --log-http option that will help you dig through some of these issues. If you run
gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:$NAME --role roles/cloudsql.viewer --log-http
it will show you how gcloud pulls the existing policy, appends the new role, and writes the merged policy back.
I would recommend using the example code provided here to make these changes if you don't want to use gcloud or the console to add the role, as a mistaken policy write can impact the entire project.
Hopefully the API improves for this need.
