Unable to create a container in Cosmos DB Emulator - azure-cosmosdb

I am running the Cosmos DB Emulator and trying to add a new container from the Explorer, but after filling in all the details I get the error below:
Error while creating container loyalty-vd:
The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get
dbs
dbs/practice
tue, 23 feb 2021 18:39:51 gmt
'
ActivityId: 3505c9a5-a872-49e7-b30d-42ae48a3c21d, Microsoft.Azure.Documents.Common/2.11.0
Can someone tell me what I am doing wrong here, and how to create a new container and database in the Azure Cosmos DB Emulator?
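For context, this error means the emulator recomputed the signature over the payload shown above and it did not match the authorization header the client sent; the usual causes are a wrong key or a stale/skewed date header. As a rough sketch of the signing scheme (per the Cosmos DB REST API docs; the key below is the emulator's well-known documented default, and the helper name is my own):

import base64, hashlib, hmac, urllib.parse
from email.utils import formatdate

# Well-known, publicly documented Cosmos DB Emulator master key.
MASTER_KEY = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="

def auth_token(verb, resource_type, resource_link, date_http):
    # The string-to-sign must match the server's payload exactly:
    # lowercase verb, lowercase resource type, resource link (case preserved),
    # lowercase RFC 1123 date, and two trailing newlines.
    payload = f"{verb.lower()}\n{resource_type.lower()}\n{resource_link}\n{date_http.lower()}\n\n"
    key = base64.b64decode(MASTER_KEY)
    sig = base64.b64encode(hmac.new(key, payload.encode("utf-8"), hashlib.sha256).digest()).decode()
    return urllib.parse.quote(f"type=master&ver=1.0&sig={sig}", safe="")

# The GET that the emulator logged in the error above:
date_http = formatdate(usegmt=True)  # e.g. 'Tue, 23 Feb 2021 18:39:51 GMT'
print(auth_token("GET", "dbs", "dbs/practice", date_http))
# The result goes into the 'authorization' header; 'x-ms-date' must be set to date_http.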

Related

AWS Amplify Build Issue - StackUpdateComplete

When running amplify push -y in the CLI, my project errors with this message:
["Index: 0 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]
How do I resolve this error?
The "Resource is not in the state stackUpdateComplete" is the message that comes from the root CloudFormation stack associated with the Amplify App ID. The Amplify CLI is just surfacing the error message that comes from the update stack operation. This indicates that the Amplify's CloudFormation stack may have been still be in progress or stuck.
Solution 1 – “deployment-state.json”:
To fix this issue, go to the S3 bucket containing the project settings and delete the “deployment-state.json” file in the root folder, as this file holds the app's deployment state. The bucket name should end with, or contain, the word “deployment”.
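A minimal sketch of that cleanup with boto3 (the bucket name here is a placeholder; look up the actual deployment bucket in your account):

import boto3

s3 = boto3.client("s3")

# Placeholder name -- find the real bucket by looking for one whose name
# contains "deployment" for this Amplify app.
bucket = "amplify-myapp-dev-123456-deployment"

# deployment-state.json in the bucket root records the deployment state;
# removing it lets the next `amplify push` start clean.
s3.delete_object(Bucket=bucket, Key="deployment-state.json")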
Solution 2 – “Requested resource not found”:
Check the status of the CloudFormation stack: you may notice that the stack failed because of a “Requested resource not found” error, indicating that the DynamoDB table “tableID” is missing; confirm whether you deleted it (possibly accidentally). Manually recreate that DynamoDB table and retry the push.
Solution 3A – “@auth directive with 'apiKey'”:
If you receive an error stating that “@auth directive with 'apiKey' provider found, but the project has no API Key authentication provider configured”, it appears when you define public authorization in your GraphQL schema without specifying a provider. Public authorization specifies that everyone is allowed to access the API; behind the scenes the API is protected with an API Key. To be able to use public access, you must have an API Key configured.
The @auth directive allows overriding the default provider for a given authorization mode. To fix the issue, specify “iam” as the provider, which lets you use an "Unauthenticated Role" from Cognito Identity Pools for public access instead of an API Key.
Below is sample code for a public authorization rule:
type Todo @model @auth(rules: [{ allow: public, provider: iam, operations: [create, read, update, delete] }]) {
  id: ID!
  name: String!
  description: String
}
After making the above changes, run “amplify update api” and add an IAM auth provider; the CLI generates scoped-down IAM policies for the "UnAuthenticated" role automatically.
Solution 3B – “Parameters: [AuthCognitoUserPoolId] must have values”:
Another issue can occur here: the default authorization type is API Key when you run “amplify add api” without specifying the API type. To fix this issue, follow these steps:
Delete the existing API
Recreate it, specifying “Amazon Cognito user pool” as the authorization mode
Add IAM as an additional authorization type
Re-enable the @auth directive in the newly created API schema
Run “amplify push”
Documentation:
Public Authorisation
Troubleshoot CloudFormation stack issues in my AWS Amplify project

Use Realm GraphQL Client with a global/shared realm in Realm Cloud

I can successfully use the Realm GraphQL Client with a realm path like myInstance.us1.cloud.realm.io/~/realmName but when trying to use a global path, i.e., myinstance.us1.cloud.realm.io/realmName, I always get a 502 response from the server.
Any thoughts?
TLDR;
I have been fighting with getting data from a global/shared realm, i.e., no /~/ in the realm path, with no luck. I always get a 502 Bad Gateway in response to executing a query. If I add the /~/ to the realm path, a connection is established and a new, empty user-specific realm is created (as expected), but then queries fail because the realm is empty (also expected).
Does the GraphQL Service provided by Realm Cloud support connecting to global/shared realms? I’ve skimmed over the source for both the server and client and did not see any specific reason why global/shared would not be supported.
I also tried passing isQueryBasedSync to the GraphQLConfig, which results in a connection and a successfully executed query, but the query responses are always empty.
Any advice is greatly appreciated.
I got past the 502 Bad Gateway error using the undocumented API(s) shown below (I had to find them by reading the current code in the realm-graphql repo):
// Undocumented call signature found by reading the realm-graphql source;
// the fourth argument appears to correspond to query-based sync (false here).
const credentials = Credentials.usernamePassword(<username>, <password>);
const user = await User.authenticate(credentials, <server>);
const config = await GraphQLConfig.create(user, <realm_name>, undefined, false);
const client = config.createApolloClient();
However, I now frequently receive the following error during GraphQLConfig.create execution:
network timeout at: https://.cloud.realm.io/auth
Additionally, I posted this question on the Realm Forums (which you may want to follow) and received the following response:
Getting a 502 in the GraphQL service usually means you were trying to open a very large Realm that runs into some resourcing limits.
I am still waiting for more information from the Realm team and will update this answer accordingly.

Cosmos DB, generate authentication key on client for Azure Table endpoint

Cosmos DB with the Azure Tables API gives you two endpoints in the Overview blade:
Document Endpoint
Azure Table Endpoint
An example of (1) is
https://myname.documents.azure.com/dbs/tempdb/colls
An example of (2) is
https://myname.table.cosmosdb.azure.com/FirstTestTable?$filter=PartitionKey%20eq%20'car'%20and%20RowKey%20eq%20'124'
You can create the authorization code for (1) on the client using the prerequest code from this Postman script: https://github.com/MicrosoftCSA/documentdb-postman-collection/blob/master/DocumentDB.postman_collection.json
Which will give you a code like this:
Authorization: type%3Dmaster%26ver%3D1.0%26sig%3DavFQkBscU...
This is useful for playing with the REST URLs.
For (2), the only code I could find that generates a working token was on the server side, and it gives you a header like this:
Authorization: SharedKey myname:JXkSGZlcB1gX8Mjuu...
I had to get this out of Fiddler
My questions
(i) Can you generate a token for case (2) above on the client, like you can for case (1)?
(ii) Can you securely use Cosmos DB from the client?
If you go to the Azure Portal for a GA Table API account you won't see the document endpoint anymore. Instead only the Azure Table Endpoint is advertised (e.g. X.table.cosmosdb.azure.com). So we'll focus on that.
When using anything but direct mode with the .NET SDK, our existing SDKs talking to the X.table.cosmosdb.azure.com endpoint use the SharedKey authentication scheme. There is also a SharedKeyLite scheme which should also work. Both are documented in https://learn.microsoft.com/en-us/rest/api/storageservices/authentication-for-the-azure-storage-services. Make sure you read the sections specifically on the Table service. The thing to notice is that a SharedKey header is directly tied to the request it is associated with, so basically every request needs a unique header. This is useful for security because it means that a leaked header can only be used for a limited time to replay a specific request; it can't be used to authorize other requests. But of course that is exactly what you are trying to do.
An alternative is the SharedKeyLite header, which is a bit easier to implement as it just requires a date and the URL.
But we don't have externalized code libraries to really help with either.
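To make the difference concrete, here is a rough Python sketch of SharedKeyLite signing for the Table service, following the authentication documentation linked above (the account name, key, and table name are placeholders):

import base64, hashlib, hmac
from email.utils import formatdate

ACCOUNT = "tableaccount"   # placeholder
ACCOUNT_KEY = "X"          # placeholder: base64 key from the connection string

def shared_key_lite_headers(table):
    # For the Table service, SharedKeyLite signs only the date and the
    # canonicalized resource -- no verb, Content-MD5, or Content-Type --
    # which is why it is easier to build by hand than full SharedKey.
    date = formatdate(usegmt=True)
    string_to_sign = f"{date}\n/{ACCOUNT}/{table}"
    key = base64.b64decode(ACCOUNT_KEY)
    sig = base64.b64encode(hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()).decode()
    return {"x-ms-date": date, "Authorization": f"SharedKeyLite {ACCOUNT}:{sig}"}

print(shared_key_lite_headers("ATable"))

Note that each request still needs a fresh signature, because the date is part of the string-to-sign.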
But there is another solution that is much friendlier to things like Fiddler or Postman, which is to use a SAS URL as defined in https://blogs.msdn.microsoft.com/windowsazurestorage/2012/06/12/introducing-table-sas-shared-access-signature-queue-sas-and-update-to-blob-sas/.
There are at least two ways to get a SAS token. One way is to generate one yourself. Here is some sample code to do that:
var connectionString = "DefaultEndpointsProtocol=https;AccountName=tableaccount;AccountKey=X;TableEndpoint=https://tableaccount.table.cosmosdb.azure.com:443/;";
var tableName = "ATable";

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference(tableName);
await table.CreateIfNotExistsAsync();

// The policy controls what the SAS token may do and for how long.
SharedAccessTablePolicy policy = new SharedAccessTablePolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1000),
    Permissions = SharedAccessTablePermissions.Add
        | SharedAccessTablePermissions.Query
        | SharedAccessTablePermissions.Update
        | SharedAccessTablePermissions.Delete
};

// The null arguments (access policy identifier and partition/row key range)
// leave the token scoped to the whole table.
string sasToken = table.GetSharedAccessSignature(
    policy, null, null, null, null, null);
This returns the query portion of the URL you will need to create a SAS URL.
Another, code-free way to get a SAS URL is to go to https://azure.microsoft.com/en-us/features/storage-explorer/ and download the Azure Storage Explorer. When you start it up, it will show the "Connect to Azure Storage" dialog. In that case:
Select "Use a connection string or a shared access signature URI" and click next
Select "Use a connection string" and paste in your connection string from the Azure Portal for your Azure Cosmos DB Table API account and click Next and then click Connect in the next dialog
In the Explorer pane on the left look for your account under "Storage Accounts" (NOT Cosmos DB Accounts (Preview)) and then click on Tables and then right click on the specific table you want to explore. In the right click dialog you will see an entry for "Get Shared Access Signature", click on that.
A new dialog titled "Generate Shared Access Signature" will show up. Unfortunately, so will an error dialog complaining about "NotImplemented"; you can ignore that. Just click OK on the error dialog.
Now you can choose how to configure your SAS, I usually just take the defaults since that gives the widest access permission. Now click Create.
The result will be a dialog with both a complete URL and a query string.
So now we can take that URL (or create it ourselves using the query output from the code) and create a Fiddler request:
GET https://tableaccount.table.cosmosdb.azure.com/ATable?se=2018-01-12T05%3A22%3A00Z&sp=raud&sv=2017-04-17&tn=atable&sig=X&$filter=PartitionKey%20eq%20'Foo'%20and%20RowKey%20eq%20'bar' HTTP/1.1
User-Agent: Fiddler
Host: tableaccount.table.cosmosdb.azure.com
Accept: application/json;odata=nometadata
DataServiceVersion: 3.0
To make the request more interesting I added a $filter operation. This is an OData filter that lets us explore the content. Note, by the way, that to make the filter work both the Accept and DataServiceVersion headers are needed. But you can use the base URL (i.e. without the filter parameter) to make any of the REST API calls on a specific table.
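The same request is easy to reproduce programmatically; a short sketch with Python's requests library (the SAS query string is a placeholder copied from Storage Explorer):

import requests

# Placeholder SAS query string -- paste the one Storage Explorer generated.
sas = "se=2018-01-12T05%3A22%3A00Z&sp=raud&sv=2017-04-17&tn=atable&sig=X"
url = ("https://tableaccount.table.cosmosdb.azure.com/ATable"
       f"?{sas}"
       "&$filter=PartitionKey%20eq%20'Foo'%20and%20RowKey%20eq%20'bar'")

# Accept and DataServiceVersion are required for $filter to work.
resp = requests.get(url, headers={
    "Accept": "application/json;odata=nometadata",
    "DataServiceVersion": "3.0",
})
print(resp.status_code, resp.text)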
Do be aware that the SAS token is scoped to an individual table. So higher level operations won't work with this SAS token.

Wrong S3 Authentication with Paw App

End device: EMC ECS,
Protocol: AWS S3
I'm trying to authenticate with my Python script and construct the same request using Paw.
Python with boto works just fine.
The primitive code:
from boto.s3.connection import S3Connection

accessKeyId = 'objuser'
secretKey = 'spl4vDHl11H7uW/683WZCoYrle03Bn1hd42gy8bd'
host = '10.10.10.10'
port = 9020

conn = S3Connection(aws_access_key_id=accessKeyId,
                    aws_secret_access_key=secretKey,
                    host=host,
                    port=port,
                    calling_format='boto.s3.connection.ProtocolIndependentOrdinaryCallingFormat',
                    is_secure=False)
print conn.get_all_buckets()
The correct headers are accepted by the S3 server:
Date: Fri, 08 Apr 2016 07:38:34 GMT
Authorization: AWS obtuser:Gi/qcdbyYcVMdI9EkdORPMx2wbo=
Next I re-create the same request with Paw but get wrong headers:
Date: Fri, 08 Apr 2016 07:38:34 GMT
Authorization: AWS obtuser:/znFNFviqD5fw3t1oWUwBQ8B5M4=
Of course it is rejected by the S3 server.
In Paw I use the Authorization header with the standard "Amazon S3 Authorisation Header" dynamic value. The AWS Access Key ID and Secret Access Key are the same as in the script (triple-checked).
According to the ECS documentation, S3 authentication follows Signing and Authenticating REST Requests, so the signature is based on standard HMAC-SHA1.
I expect that the same method is used by Paw.
Could you please advise on the potential reason why Paw doesn't create the correct Authorization header, and how to fix it?
Many thanks in advance!
Sorry for the very late answer! I've just tested this again myself, using our AWS account, and I've been able to list all our S3 buckets easily.
It seems like it's probably an issue related to the way the Authorization header has been configured, or an invalid character being inserted in the URL field. (A screenshot would help to see what can be wrong)
Using Paw v3.0.16 and the "Amazon S3 Authorisation Header" dynamic value, a SignatureDoesNotMatch error occurred with this path-style URL:
GET https://s3-ap-northeast-1.amazonaws.com/bucket123/sub456/some789.json
And it works well with this virtual-hosted-style URL:
GET https://bucket123.s3.amazonaws.com/sub456/some789.json
http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
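For reference, a rough sketch of the Signature Version 2 scheme those requests use (per "Signing and Authenticating REST Requests"): the bucket is part of the CanonicalizedResource, so a client that canonicalizes a path-style URL differently from a virtual-hosted-style one will produce a different signature for what is logically the same request. The credentials below are the throwaway ones from the question:

import base64, hashlib, hmac
from email.utils import formatdate

def sign_v2(access_key, secret_key, verb, canonicalized_resource,
            content_md5="", content_type=""):
    # AWS Signature Version 2: HMAC-SHA1 over the string-to-sign.
    # For GET https://bucket123.s3.amazonaws.com/sub456/some789.json the
    # canonicalized resource is '/bucket123/sub456/some789.json' -- the same
    # string a path-style request must canonicalize to.
    date = formatdate(usegmt=True)
    string_to_sign = f"{verb}\n{content_md5}\n{content_type}\n{date}\n{canonicalized_resource}"
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode("utf-8"), hashlib.sha1).digest()
    ).decode()
    return {"Date": date, "Authorization": f"AWS {access_key}:{sig}"}

print(sign_v2("objuser", "spl4vDHl11H7uW/683WZCoYrle03Bn1hd42gy8bd",
              "GET", "/bucket123/sub456/some789.json"))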

Connect to data/service wsdl URL introspect error

Hi, I am trying to add a web service in Flex 4. The web service is deployed in SharePoint 2010 on the intranet. I can see the WSDL file through the browser, but trying to introspect the service gives an authentication error.
I am getting the following error:
There was an error during service introspection.
WSDLException: faultCode=OTHER_ERROR: Unable to resolve imported document at 'http://sql2008:47672/_vti_bin/StoryboardingDatabaseConnect.asmx?WSDL'.: java.io.IOException: Authentication failure
Edit:
I have added a video showing the error at http://www.youtube.com/watch?v=moXfxmiHAqQ
The Data Services Wizard does not support (as of now, AFAIK) connecting to HTTPS services, nor to ones that need authentication.
So you should add your credentials manually to your SOAP request's header using name-value pairs:
[{name: "userName", value: "yourUserName"},
{name: "password", value: "yourPassword"}].
You can read more about it in the Working with SOAP Headers section of this article (Using WebService components).
You might also find this post from the Adobe forums useful, elaborating this issue.
