CosmosDB with Qt

Hello, I am new to developing with Qt for cross-platform development. My current goal is to cache data from a CosmosDB in my app, which I've developed in Qt. I've had no problem setting up the CosmosDB, but I am confused about the best way to communicate with the database.
Do I need to create my own API to talk to the database? Are there libraries out there that already do what I want?

You can use either standard HTTP requests or the SQL API.
Azure Cosmos DB exposes resources through REST APIs that can be called via HTTP/HTTPS requests:
How can I develop apps with the SQL API
Azure Cosmos DB REST API reference
Azure Cosmos DB also offers a query language as an interface to query JSON documents. The language supports a subset of ANSI SQL grammar and adds deep integration of JavaScript objects, arrays, object construction, and function invocation. Microsoft shows examples here.
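Since these REST APIs are plain HTTPS, nothing Cosmos-specific is needed on the Qt side: QNetworkAccessManager can send the same requests once you implement the request-signing scheme. Below is a minimal sketch of that scheme in Python (account, database, and container names are placeholders, and master-key auth is assumed); the same steps can be reproduced in Qt with QMessageAuthenticationCode and QNetworkAccessManager.

    # Minimal sketch of calling the Cosmos DB REST API directly (SQL API account).
    # "myaccount", "mydb", and "mycol" are placeholder names.
    import base64, hashlib, hmac, urllib.parse
    from email.utils import formatdate

    import requests  # stand-in for QNetworkAccessManager

    MASTER_KEY = "<primary key from the Azure portal>"  # assumption: master-key auth
    ACCOUNT_URI = "https://myaccount.documents.azure.com"

    def auth_header(verb, resource_type, resource_link, date_http):
        # String-to-sign format per the Cosmos DB REST API docs:
        # verb\nresourceType\nresourceLink\ndate\n\n (verb, type, date lowercased)
        payload = f"{verb.lower()}\n{resource_type.lower()}\n{resource_link}\n{date_http.lower()}\n\n"
        key = base64.b64decode(MASTER_KEY)
        sig = base64.b64encode(hmac.new(key, payload.encode(), hashlib.sha256).digest()).decode()
        return urllib.parse.quote(f"type=master&ver=1.0&sig={sig}", safe="")

    # List the documents in a container.
    resource_link = "dbs/mydb/colls/mycol"
    date_http = formatdate(usegmt=True)  # RFC 1123 date, also sent as x-ms-date
    headers = {
        "Authorization": auth_header("GET", "docs", resource_link, date_http),
        "x-ms-date": date_http,
        "x-ms-version": "2018-12-31",
    }
    resp = requests.get(f"{ACCOUNT_URI}/{resource_link}/docs", headers=headers)
    print(resp.status_code, resp.json())

There is no first-party Qt/C++ SDK that I'm aware of, which is why the REST route (or wrapping one of the official SDKs behind your own API) is the usual answer for Qt apps.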

Related

Azure ASP.NET REST API and Database deployment

We started our planning phase on a new project and settled on an ASP.NET REST API, which should be hosted on Azure. Since none of us has any experience with deployment on Azure (or any other cloud service), I have two questions.
Do you need separate Azure Services for the Database and the API, or might there be a combined "package" for the prototype, which later can be changed easily?
Is there any documentation, or are there any examples, of the entire deployment process for a simple dummy API and DB? I have spent the last few hours reading the official documentation and searching around, but I would really love to see some sort of reference, just to ensure I don't miss anything.
For now, the best I have found is this and this. These seem rather shallow, so I really hope that there is more.
If you're looking for in-depth design and implementation details, then I would suggest the Azure Architecture Center as an excellent place to start; for hands-on experience there are hundreds of free courses available on Microsoft Learn.
Specifically, there are sections on API design and API implementation. From the Serverless web application page:
If you don't need all of the functionality provided by API Management, another option is to use Functions Proxies. This feature of Azure Functions lets you define a single API surface for multiple function apps, by creating routes to back-end functions. Function proxies can also perform limited transformations on the HTTP request and response. However, they don't provide the same rich policy-based capabilities of API Management.
Function Proxies
I would suggest starting with Azure Functions for your API: on the Consumption plan you pay only per call plus a combination of CPU, memory, and runtime, and the first 1,000,000 calls per month are free, rather than paying for an Azure App Service that hosts your API and runs all the time while only being utilized some of the time.
Some links that might help:
Build Serverless APIs with Azure Functions
Customize an HTTP endpoint in Azure Functions
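To give a sense of how little code a Functions-based API involves, here is a minimal sketch of an HTTP-triggered function in Python (the function name, route, and JSON shape are illustrative, not prescribed):

    # Minimal sketch of an HTTP-triggered Azure Function (Python v1 programming model).
    # The route, e.g. "items/{id}", is configured in the accompanying function.json.
    import json
    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        item_id = req.route_params.get("id")  # bound from the route template
        if item_id is None:
            return func.HttpResponse("Missing id", status_code=400)
        # A real implementation would query Azure SQL / Cosmos DB here.
        body = json.dumps({"id": item_id, "status": "ok"})
        return func.HttpResponse(body, mimetype="application/json", status_code=200)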
There is an excellent summary in this article that states:
For heavy workloads:
Private (enterprise) API - API Management with a Premium plan.
Public API - Functions Proxy with the Premium plan.
For light/moderate workloads:
Private API - Functions Proxy with the Premium plan.
Public API - Functions Proxy with a Consumption plan and a custom warm-up solution.
From there you can use a connection string to an Azure SQL DB inside your functions to write to the DB, or use something like Azure Managed Identity (yes, the link is for Azure PostgreSQL, but the process is much the same for Azure SQL).
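For the connection-string route, a minimal sketch with pyodbc (server, database, table, and credential names are all placeholders; in production the secret would come from app settings or Key Vault rather than being hard-coded):

    # Sketch: writing to Azure SQL from inside a function via a connection string.
    # "myserver", "mydb", "myuser", and the Items table are placeholders.
    import os
    import pyodbc

    conn_str = (
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:myserver.database.windows.net,1433;"
        "Database=mydb;"
        f"Uid=myuser;Pwd={os.environ['SQL_PASSWORD']};"  # secret from app settings
        "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
    )
    with pyodbc.connect(conn_str) as conn:
        cur = conn.cursor()
        cur.execute("INSERT INTO Items (Id, Payload) VALUES (?, ?)",
                    ("42", '{"hello": "world"}'))
        conn.commit()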
In terms of deployment you should be looking at using Azure DevOps (or GitHub Actions):
Setting up a CI/CD pipeline for Azure functions (old way - GUI pipelines)
Deploy an Azure Functions app to Azure (new way - YAML pipelines)
Continuous Delivery for Azure SQL DB using Azure DevOps
Another helpful tool to get a gauge of costs is the Azure Pricing Calculator.

Cosmos Graph DB auto-failover when using region-specific Gremlin API endpoints?

Following the advice in this article, I have configured my applications to use region-specific Gremlin endpoints so that reads and writes are always directed to the master replica in the same data centre (the Cosmos DB account is multi-master and the applications are deployed to every region on AKS). My question is this: in the event of a regional Cosmos DB outage, what will the behaviour be when using region-specific Gremlin connection strings? Will applications that reference a regional endpoint that is affected by an outage be automatically redirected to a region where the Cosmos replica is healthy?
This depends on the client SDK the application uses to connect and on the connection string logic. If the application connection string points to the .NET SDK URI, then you will want to implement either the .NET SDK v2 or .NET SDK v3 multi-master functionality. If you are using the Gremlin endpoint, please follow the specific guidance: Regional endpoints for Cosmos DB for Graph Accounts
Once that is configured correctly, in the event of an outage, the routing will automatically be redirected to an available write region.
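For illustration, here is a minimal sketch of what a region-specific connection looks like with the gremlin_python client (account, region, database, and graph names are placeholders):

    # Sketch: pointing gremlin_python at a region-specific Cosmos DB Gremlin endpoint.
    # "myaccount", "westeurope", "mydb", and "mygraph" are placeholders.
    from gremlin_python.driver import client, serializer

    REGIONAL_ENDPOINT = "wss://myaccount-westeurope.gremlin.cosmos.azure.com:443/"

    gremlin = client.Client(
        REGIONAL_ENDPOINT,
        "g",
        username="/dbs/mydb/colls/mygraph",
        password="<account key>",
        # Cosmos DB's Gremlin API speaks GraphSON 2.0
        message_serializer=serializer.GraphSONSerializersV2d0(),
    )
    print(gremlin.submit("g.V().limit(1)").all().result())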

Post to Azure Cosmos Db from NiFi

I created an Azure CosmosDb database and a container for my documents.
I use NiFi as my main data ingestion tool and want to feed my container with documents from NiFi flow files.
Can anybody please share a way to post flow file content to Azure Cosmos Db from NiFi?
Thanks in advance
UPDATE (2019.05.26):
In the end I used a Python script and called it from NiFi to post messages, passing each message as a parameter. The reason I chose Python is that the official Microsoft site has examples with all the required connection settings and libraries, so it was easy to connect to Cosmos.
I tried the Mongo components, but couldn't connect to Azure (the security config didn't work); I didn't get very far with them because the Python script worked just fine.
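For reference, a minimal sketch of such a script using the current azure-cosmos (v4) SDK; the account URL, key, and database/container names are placeholders, and the flow file content is assumed to arrive on stdin (e.g. when invoked via ExecuteStreamCommand):

    # Sketch: posting one JSON document (the flow file content) to Cosmos DB.
    # "myaccount", "mydb", and "mycol" are placeholders.
    import json
    import sys
    import uuid

    from azure.cosmos import CosmosClient

    client = CosmosClient("https://myaccount.documents.azure.com:443/",
                          credential="<account key>")
    container = client.get_database_client("mydb").get_container_client("mycol")

    doc = json.loads(sys.stdin.read())        # flow file content on stdin
    doc.setdefault("id", str(uuid.uuid4()))   # Cosmos requires an "id" field
    container.create_item(body=doc)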
Azure CosmosDB exposes a MongoDB API, so you can use the following MongoDB processors, which are available in NiFi, to read/query/write to and from Azure CosmosDB:
DeleteMongo
GetMongo
PutMongo
PutMongoRecord
RunMongoAggregation
Useful Links
https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction
https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb-feature-support
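Since these processors only need a standard MongoDB connection string, it can be worth validating the Cosmos DB Mongo-API URI with a few lines of Python first (account name and key are placeholders; PutMongo would be configured with the same URI):

    # Sketch: smoke-testing the Cosmos DB Mongo-API connection string.
    # "myaccount", "mydb", and "mycol" are placeholders.
    from pymongo import MongoClient

    uri = (
        "mongodb://myaccount:<account key>@myaccount.documents.azure.com:10255/"
        "?ssl=true&replicaSet=globaldb"
    )
    collection = MongoClient(uri)["mydb"]["mycol"]
    collection.insert_one({"source": "nifi-smoke-test"})  # what PutMongo will do
    print(collection.find_one({"source": "nifi-smoke-test"}))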
Valeria, according to the list of Azure-related components supported by Apache NiFi, only Azure Blob Storage, Queue Storage, Event Hubs, etc. are covered, not Cosmos DB.
So I suggest using PutAzureBlobStorage to feed an Azure Blob container with documents from NiFi flow files, then creating a copy activity pipeline in Azure Data Factory to transfer the data from Azure Blob Storage into Azure Cosmos DB.

MongoDB in Azure Cosmos DB

I was wondering if MongoDB is fully supported in Azure Cosmos DB through the MongoDB API: https://learn.microsoft.com/es-es/azure/cosmos-db/mongodb-introduction
I have read that the aggregation pipeline, map-reduce, and full-text indexes are not fully supported. Does anyone have further information about it? Would you use MongoDB in Azure Cosmos DB considering its current status?
Cosmos DB implements the MongoDB wire protocol, and many customers already use the MongoDB API in production. The aggregation pipeline is in private preview, and you can enable it by emailing askcosmosmongoapi@microsoft.com. Map-reduce functionality is mostly covered by the aggregation pipeline. Full-text search is partially available through Azure Search, which can index MongoDB collections, and the $regex operator within the MongoDB API covers less complex text searches. You can find other feature requests and their status at https://feedback.azure.com/forums/263030-azure-cosmos-db/category/321994-mongodb-api
Cosmos DB's MongoDB layer implements a large subset of native MongoDB functionality. Specifics of supported features are published here.
You mentioned aggregation pipeline: As of November 2017, this is now supported.
Regarding "current status" of the Cosmos DB MongoDB API: It's a production database with SLA. You'll need to make your own decision on whether to use it, based on feature set and your app's needs.
You can activate the aggregation pipeline through the Azure portal by going to the Preview Features menu.
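Once the feature is enabled, standard MongoDB aggregations run unchanged through the wire protocol; a minimal sketch with pymongo (account, database, collection, and field names are placeholders):

    # Sketch: a standard MongoDB aggregation against a Cosmos DB Mongo-API account.
    # "myaccount", "mydb", "orders", and the field names are placeholders.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://myaccount:<account key>@myaccount.documents.azure.com:10255/"
        "?ssl=true&replicaSet=globaldb"
    )
    pipeline = [
        {"$match": {"status": "shipped"}},
        {"$group": {"_id": "$customerId", "total": {"$sum": "$amount"}}},
    ]
    for row in client["mydb"]["orders"].aggregate(pipeline):
        print(row)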

OData Service for SQL Azure read only?

According to here: http://watwp.codeplex.com/wikipage?title=Architecture%20Diagrams
The SQL Azure OData Service is a sample WCF Data Service built on top of a SQL Azure (or SQL Server) database using Entity Framework 4.1 Code First.
The current version of this service only supports Read operations and, in addition to exposing the SQL Azure database as an OData feed, it adds a security layer to manage authentication / authorization.
So does it mean that my Windows Phone app will only be able to read from SQL Azure and not write to it? Or can I do it by creating a data service on the ASP.NET server?
I'm a little confused.
What this is saying is that the sample OData service that they provided only implements read operations. If you want read and write, you're going to have to roll that yourself.
