We have several App Service APIs which will read from the same Cosmos DB. We want each API to use a different connection string for security reasons. But from the Azure portal, it seems we can only have two master connection strings and two read-only connection strings.
Can I generate more read-only connection strings?
That is not directly supported, but there are other ways to achieve what you're looking for.
Essentially, you are currently using the 'Master Keys' approach to define your connection string: you build it by specifying the account and one of the two master keys to access your data.
The only other approach available is the 'Resource Token' approach. In this architecture you'd have a middleware tier that is configured with a secure connection to your Cosmos account using one of the master keys, and all the applications that need to be individually secured call into this layer to obtain an access token.
In this approach you will have to manage authentication of your clients to your middleware, so it's definitely more involved, but ultimately more secure, especially if your APIs are deployed outside of your environment.
See the official docs page for all the details about Resource Tokens.
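For illustration only, here is a minimal sketch of that flow using the Python azure-cosmos (v4) package; the same pattern applies to the other SDKs. The database, container, user and permission names are placeholders, and token caching/renewal (resource tokens expire) and error handling are omitted.

```python
# Minimal sketch of the Resource Token flow (azure-cosmos v4, Python).
# Names are hypothetical; in practice the middleware would expose an endpoint
# that authenticates the caller and returns the scoped resource token.
from azure.cosmos import CosmosClient

ACCOUNT_URL = "https://<your-account>.documents.azure.com:443/"
MASTER_KEY = "<master key - only the middleware tier ever sees this>"

# --- Middleware tier: holds the master key and hands out scoped tokens ---
admin_client = CosmosClient(ACCOUNT_URL, credential=MASTER_KEY)
database = admin_client.get_database_client("mydb")
container_link = "dbs/mydb/colls/orders"

user = database.create_user(body={"id": "api-reader"})  # e.g. one user per API
permission = user.create_permission(body={
    "id": "orders-read-only",
    "permissionMode": "Read",      # read-only access to a single container
    "resource": container_link,
})
resource_token = permission.properties["_token"]  # token returned by the service

# --- Individual API: uses only the resource token, never the master key ---
scoped_client = CosmosClient(ACCOUNT_URL, credential={container_link: resource_token})
orders = scoped_client.get_database_client("mydb").get_container_client("orders")
for item in orders.query_items("SELECT TOP 5 * FROM c", enable_cross_partition_query=True):
    print(item["id"])
```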
I am going to migrate our service to use the Azure.Search.Documents (v11) SDK instead of raw HttpClient GET/POST requests to query documents from Azure Search.
Per the SDK documentation, we need to initialize a SearchClient with a service endpoint and an index name. Our service is a multi-tenant service that shares multiple customers' indexes in the same search service (it could be 3,000 indexes on an S3 HD service), so in theory we need up to 3,000 SearchClient instances.
My question is: is it worth implementing a search client pool to reuse the client for the same index for better performance? Or can I just create a new search client each time I send a request to Azure Search? I'm not sure whether the SDK handles client pooling internally.
Please take a look at the best practices guide for working with client objects in the Azure SDK. Reusing client instances (wherever possible) will result in better-performing applications.
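As a rough sketch of what per-index reuse can look like, here is a small cache keyed by index name using the Python azure-search-documents package (the .NET SearchClient follows the same idea). The endpoint, key and index names are placeholders, and locking for multi-threaded use is omitted.

```python
# Cache one SearchClient per index instead of creating a new client per request.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

ENDPOINT = "https://<your-service>.search.windows.net"
CREDENTIAL = AzureKeyCredential("<query-key>")

_clients: dict = {}  # index name -> SearchClient

def get_search_client(index_name: str) -> SearchClient:
    """Return a cached SearchClient for the index, creating it on first use."""
    client = _clients.get(index_name)
    if client is None:
        client = SearchClient(endpoint=ENDPOINT, index_name=index_name, credential=CREDENTIAL)
        _clients[index_name] = client
    return client

# Every request for the same tenant/index reuses the same client (and its
# underlying connection pool) instead of paying connection setup again.
results = get_search_client("tenant-042-index").search(search_text="coffee")
for doc in results:
    print(doc)
```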
API keys in a client-side web context are never secure, which can be a big problem if the API services are paid for. How do you solve this problem in the Azure Cognitive Services context?
I assume you are referring to not exposing the keys to the end user who is browsing to your website, as opposed to securing the keys on the webserver itself from another admin on the server.
This isn't really a Cognitive Services question, but a question generic to any secrets you want to keep when hosting a website (or creating a mobile app, or really any app that uses some sort of key or password).
The short answer is, don't give the key to the client, which means that the client can't directly make the call to Cognitive Services, and you have to have code running on your web server that makes the call.
Generally you would do one of two things:
Code running on your web server would make the call to Cognitive Services and then process and display the relevant results to the user via the webpage.
Your web server would expose an API, and you would have client-side script call your API. Your API internally would call the Cognitive Services API and return the data, which the client-side script would then process and display (a minimal sketch of this option follows below).
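As a rough illustration of that second option, here is a minimal Flask sketch in which the browser calls your API and only the server, which reads the key from an environment variable, talks to Cognitive Services. Flask is an assumption here, and the Text Analytics sentiment endpoint is used purely as an example; the exact URL and payload depend on which Cognitive Service you call.

```python
# The browser only ever sees /api/sentiment; the Cognitive Services key stays
# on the server in an environment variable.
import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
COG_ENDPOINT = os.environ["COG_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
COG_KEY = os.environ["COG_KEY"]            # never sent to the browser

@app.post("/api/sentiment")
def sentiment():
    text = request.get_json().get("text", "")
    resp = requests.post(
        f"{COG_ENDPOINT}/text/analytics/v3.1/sentiment",
        headers={"Ocp-Apim-Subscription-Key": COG_KEY},
        json={"documents": [{"id": "1", "language": "en", "text": text}]},
        timeout=10,
    )
    resp.raise_for_status()
    # Return only what the page needs; the key never leaves the server.
    return jsonify(resp.json())

if __name__ == "__main__":
    app.run()
```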
You can also find similar info at How to protect an API Key when using JavaScript?, or a web search for something like 'web development protecting api keys javascript'.
I am using SignalR hosted on multiple servers behind a load balancer. I am storing the connection id and the user id in a custom database table in SQL Server. Each time, I need to send notifications to selected users. It works fine in a single-server environment. How do I scale the SignalR implementation with a custom database table, without using the existing backplane options?
I am not sure what your current implementation looks like, because your explanation is a bit mixed. If you have multiple servers behind a load balancer, you have presumably already applied some of these techniques. You say it works fine in a single-server environment but not with multiple servers, so let's review what is mandatory for multiple servers (scale out):
Communication between instances: any message raised on one instance must be made available on all the other instances. The classic implementation is some kind of queue or pub/sub; SignalR supports Redis, and you can use SQL Server, although the limitations of any SQL-based solution are well known. Azure offers Redis Cache as a PaaS. (A minimal sketch of this idea follows at the end of this answer.)
In-memory storage: on a single server you can keep state in process memory, but across multiple servers you must implement shared storage. Again, Redis provides a shared-memory solution if you have a server available; there is really no way to implement this without something like Redis.
Again, a lower-performance alternative would be storing that state in SQL.
Authentication: the out-of-the-box security implementation uses a cookie that stores an encrypted key, but once you have multiple servers, every server has its own unique key. If this is the method you use, you have to implement your own DataProtector to solve the problem.
Full examples are well beyond the scope of this explanation; most of the code (even skeleton templates without the actual method bodies) would take several pages. I suggest you look at the three items above, all of which are mandatory to scale out your application.
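That said, here is a minimal, illustrative sketch of the first item only, using Redis pub/sub (shown with Python's redis package). SignalR itself is a .NET technology, so this is not SignalR code; it only shows the fan-out pattern a backplane implements: every instance subscribes to a shared channel, and a notification published by any instance reaches the users connected to every other instance.

```python
# Illustration of cross-instance messaging via Redis pub/sub.
import json
import redis

r = redis.Redis(host="localhost", port=6379)
CHANNEL = "notifications"

def publish_to_user(user_id: str, message: str) -> None:
    """Called by whichever instance originates the notification."""
    r.publish(CHANNEL, json.dumps({"user_id": user_id, "message": message}))

def listen_and_forward() -> None:
    """Runs on every instance; delivers messages to its local connections."""
    pubsub = r.pubsub()
    pubsub.subscribe(CHANNEL)
    for event in pubsub.listen():
        if event["type"] != "message":
            continue
        payload = json.loads(event["data"])
        # Look up the user's connection ids in your custom table and push the
        # message over the connections that live on *this* server.
        print(f"deliver {payload['message']!r} to user {payload['user_id']}")
```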
I have moved my SQL Server 2008 database, with stored procedures, onto SQL Azure. Next I want to define an API endpoint against that server URL and, with the right authorizations and inputs, have the stored procedure return JSON (or the like).
However, research seems to indicate I need to route through another application/web server/calling mechanism. Frankly, I'm hoping to concentrate on learning only database code (i.e. outsource the middle tier and front end), so to some extent all I want to do at present is test. Can anyone help me get a better understanding of the process?
SQL Database (essentially SQL Server-as-a-service) doesn't provide a built-in data-access API. The only APIs available for the SQL Database service are system-level ones, for provisioning and scaling. Side note: years ago there was an experimental OData service offered for SQL Azure (the former name of the SQL Database service), but that was terminated.
You'd need to run a separate service to handle your API. How you do that is quite broad and subjective, so I'll avoid specific recommendations, but assuming you'll run your API in Azure you have many places to run it, such as App Service (the most straightforward, since you don't have to deal with any infrastructure), Cloud Services, and Virtual Machines. And then there's the Azure API Management service (since your question mentions authorization, this might be a good thing for you to look into). This is all documented at azure.com.
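Just to make the idea concrete, here is a rough sketch of such a service using Flask and pyodbc (both are assumptions on my part; App Service could host something like this). The connection string, stored procedure name and parameter are placeholders.

```python
# Tiny HTTP endpoint that runs a stored procedure in Azure SQL Database and
# returns the result set as JSON.
import os
import pyodbc
from flask import Flask, jsonify

app = Flask(__name__)
CONN_STR = os.environ["SQLAZURE_CONN_STR"]  # ODBC connection string to your Azure SQL DB

@app.get("/api/orders/<int:customer_id>")
def orders(customer_id: int):
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        # Hypothetical stored procedure; '?' is the pyodbc parameter marker.
        cursor.execute("EXEC dbo.GetOrdersForCustomer @CustomerId = ?", customer_id)
        columns = [col[0] for col in cursor.description]
        rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
    finally:
        conn.close()
    return jsonify(rows)

if __name__ == "__main__":
    app.run()
```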
Is it efficient to use a web service to access database objects?
I'm developing a Windows Phone app and a web app. Both of them will use the same database. Should I create one web service for both apps?
A shared web service is definitely the right way to go. That's really the point of a service: to be able to access the same business and data logic from multiple places (assuming both places are doing the same thing, of course). It also acts as a natural security buffer between your apps and the database, so your database only needs to accept connections from the service rather than from multiple client applications.
As far as technology goes, since both of your clients are Microsoft-based, you can use WCF for your service as opposed to a traditional SOAP service, or go with something more universally accepted, like Web API with JSON. Lots of options there.