Most examples of Cognitive Services in Azure require an endpoint and a key. However, the Language service in Azure needs a Location instead of an Endpoint. Why?
https://microsoftlearning.github.io/AI-900-AIFundamentals/instructions/04b-translate-text-and-speech.html
It is because the first part of the endpoint is the same for every resource; the service appends the region/location to that general endpoint URL to build the full endpoint, for example: api.cognitive.microsofttranslator.com/[region/location]
I think this is just a difference in how the endpoint is generated between products: either you supply the whole endpoint URL directly, or the service combines a base URL with the region.
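For illustration, here is a rough sketch of how the key and location/region pair from the lab is typically supplied when calling the Translator REST API directly (this assumes the v3.0 translate endpoint; note that in this particular call the region travels as a header rather than being appended to the URL):

```typescript
// Sketch only: KEY and LOCATION are placeholders for the values from your resource.
const endpoint = "https://api.cognitive.microsofttranslator.com";

async function translate(text: string): Promise<void> {
  const response = await fetch(`${endpoint}/translate?api-version=3.0&from=en&to=fr`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": "KEY",         // the resource key
      "Ocp-Apim-Subscription-Region": "LOCATION", // the location/region of the resource
      "Content-Type": "application/json",
    },
    body: JSON.stringify([{ Text: text }]),
  });
  console.log(await response.json());
}

translate("Hello, world").catch(console.error);
```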
Hope this helps.
We are planning to create a custom web service solution on Apigee.
Our requirement is to create a single API proxy that should serve more than two WSDLs.
Each WSDL contains different operations and bindings.
The data types defined in the schemas are also entirely different.
Our main objective is that the API proxy should not need to be modified for any customer in the future.
We also want to utilize the OAuth service provided by Apigee for authentication.
Can you please tell me how feasible this is in Apigee?
I've looked in a few places, including this post and the Firebase panel.
Is there no way to use these APIs to secure these endpoints using an API key you create per client who uses your Cloud Functions?
I'm able to block everyone by putting a restriction on the browser key, but I would like to create a new API key and use that as a way to authenticate my endpoint for various clients.
Creating a new API key and using it as a parameter in my query doesn't work (I don't know if I'm doing something wrong).
Is there a way to do this?
Option 1: handle authentication within the function
https://github.com/firebase/functions-samples/tree/master/authorized-https-endpoint
Adapt the above to use clients/keys stored in Firestore
Option 2: Use an API Gateway
Google Cloud Endpoints (no direct support for functions yet, need to implement a proxy)
Apigee (higher cost, perhaps more than you need)
Azure API Management (lower entry cost + easy to implement as a facade for services hosted outside Azure)
there are more...
The above gateways are probably best for your use case, in that the first two would let you keep everything within Google, albeit with more complexity/cost -- hopefully Endpoints will get support for functions soon. Azure would mean having part of your architecture outside Google, but it looks like an easy way to achieve what you're after (an API key per client for your Google Cloud / Firebase functions).
Here's a good walkthrough of implementing Azure API Management:
https://koukia.ca/a-microservices-implementation-journey-part-4-9c19a16385e9
There is no built-in way to achieve what you are after; as far as Firebase and GCP are concerned, your clients are your specific business problem.
One way you could tackle this (with the little information that is provided):
You need somewhere to store a list of clients and their API keys (I would use Firestore).
For the endpoints you want to secure with a client-specific API key, include a check that the key header exists and that it matches a client record in Firestore (see the sketch below).
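A minimal sketch of that check, assuming an HTTPS function, a hypothetical "clients" collection keyed by API key, and a hypothetical "x-api-key" header (all names here are illustrative):

```typescript
import * as admin from "firebase-admin";
import * as functions from "firebase-functions";

admin.initializeApp();
const db = admin.firestore();

export const securedEndpoint = functions.https.onRequest(async (req, res) => {
  // Assumed convention: clients send their key in an "x-api-key" header.
  const apiKey = req.header("x-api-key");
  if (!apiKey) {
    res.status(401).send("Missing API key");
    return;
  }

  // Assumed Firestore layout: collection "clients", one document per API key.
  const client = await db.collection("clients").doc(apiKey).get();
  if (!client.exists) {
    res.status(403).send("Unknown API key");
    return;
  }

  // Key is valid -- handle the actual request here.
  res.json({ message: `Hello, ${client.get("name")}` });
});
```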
Considerations:
Depending on your expected traffic loads and the number of Firestore reads you'll be adding, you might want to double-check that this kind of solution will work for your budget.
Is the API-key type of solution the only option you can go for? You could probably get pretty far using https://github.com/firebase/firebaseui-web and doing user checks in your function, with no extra DB read required (see the sketch after the link below). If you go down this path, most of the user sign-up / email / account-creation logic is ready to go.
https://firebase.google.com/docs/auth/web/password-auth#before_you_begin
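If you went that route, the check inside the function would look roughly like this (a sketch assuming the client sends a Firebase ID token in an "Authorization: Bearer" header, as in the authorized-https-endpoint sample linked above):

```typescript
import * as admin from "firebase-admin";
import * as functions from "firebase-functions";

admin.initializeApp();

export const userOnlyEndpoint = functions.https.onRequest(async (req, res) => {
  const authHeader = req.header("Authorization") || "";
  if (!authHeader.startsWith("Bearer ")) {
    res.status(401).send("Missing ID token");
    return;
  }

  try {
    // Verifies the Firebase Auth ID token; no Firestore read needed.
    const decoded = await admin.auth().verifyIdToken(authHeader.split("Bearer ")[1]);
    res.json({ message: `Hello, ${decoded.uid}` });
  } catch (err) {
    res.status(403).send("Invalid ID token");
  }
});
```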
Curious to see what some other firebase users suggest.
Scenario: Auth0 Single Page application client. .NET Web API and Angular SPA both configured to use this client. Works great.
I'd like to add Azure API Management as a layer in front of the API. Have set up the API in the Management Portal, updated SPA to call API, tested calls from SPA, works great.
Now, I'd like to configure API Management Portal with the right security settings such that people can invoke API calls from the Developer Portal. I've used this [https://auth0.com/docs/integrations/azure-api-management/configure-azure] as a guide.
Where I'm at:
From the Developer portal, I can choose Authorization Code as an Auth type, go through a successful sign-in process with Auth0 and get back a Bearer token. However, calls made to the API always return 401. I think this is because I'm confused about how to set it up right. As I understand it:
Either I follow the instructions and set up a new API client in Auth0, but if that's the case then surely it's not going to work, because tokens generated from one client aren't going to work against my SPA client? (Or is there something I need to change to make it work?)
Or, how should I configure Azure API Management to work with an SPA application? (This would be my preferred method; having two clients in Auth0 seems 'messy'.) But don't I need an 'audience' value in my authorization endpoint URL? How do I get that?
If anyone has done this, would very much appreciate some guidance here.
Well, I didn't think I'd be back to answer my own question quite so soon. The reason is mostly rooted in my general ignorance of this stuff, combined with trying to take examples and fuse them together for my needs. Posting this to help out anyone else who finds themselves here.
Rather than take the Single Page Application client in Auth0 and make it work with Azure API Management, I decided to go the other way, and make the non-interactive client work with my SPA. This eventually 'felt' more right: the API is what I'm securing, and I should get the API Management portal working first, then change my SPA to work with it.
Once I remembered/realised that I needed to update the audience in my API to match the audience set in the client in Auth0, the Management Portal started working. Getting the SPA to work with the API then became a challenge: I was trying to find out how to change the Auth0 Angular code to pass an audience matching the one the API expected, but it kept sending the ClientID instead. (By the way, finding all that out was made easier by using https://jwt.io/ to decode the Bearer tokens and work out what was happening - look at the 'aud' value for the audience.)
In the end, I changed my API: on the JwtBearerAuthenticationOptions object, the TokenValidationParameters property (of type TokenValidationParameters) has a ValidAudiences property (yes, there is also a singular ValidAudience property, which is confusing) that can take multiple audiences. So I added my ClientID to that.
The only other thing I then changed (which might be specific to me, not sure) is that I had to change the JsonWebToken Signature Algorithm value in Auth0 for my non-interactive client (advanced settings, OAuth tab) from HS256 to RS256.
With all that done, now requests from both the API Management Portal, and my SPA work.
Curious to know if this is the "right" way of doing it, or if I've done anything considered dangerous here.
Since you're able to make validation of the JWTs work with the .NET API, only a few changes are actually necessary to get this working with Azure API Management.
In API management,
Create a validate-jwt inbound policy on an Operation (or all operations)
Set the audiences and issuers to the same values you've used with your .NET Web API (you can check the values in the Auth0 portal if you don't know them yet).
The important field that is still missing at this point is the OpenID configuration URL, since Auth0 uses RS256 by default. The URL can be found in your Auth0 portal at: Applications -> your single page application -> Settings -> scroll down, Show Advanced Settings -> Endpoints, then copy the OpenID Configuration URL. A policy sketch is shown below.
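Putting those pieces together, the inbound policy ends up looking roughly like this (a sketch; the tenant domain and audience are placeholders, not values from the question):

```xml
<inbound>
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401"
                  failed-validation-error-message="Unauthorized">
        <!-- Lets API Management fetch Auth0's RS256 signing keys -->
        <openid-config url="https://YOUR_TENANT.auth0.com/.well-known/openid-configuration" />
        <audiences>
            <audience>YOUR_API_AUDIENCE</audience>
        </audiences>
        <issuers>
            <issuer>https://YOUR_TENANT.auth0.com/</issuer>
        </issuers>
    </validate-jwt>
    <base />
</inbound>
```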
Here's the reference for API management's requirement for JWT tokens
optional reading
I have reviewed every topic that seems relevant and I believe I am having a problem because the configuration in which I am attempting to use this service is different from any of the other postings.
I can get acceptable reverse geocode results, but only without a key.
But acceptable is not optimal. The guide documents filtering which would be applied on the server side to reduce the number of results I would have to check to determine which result is 'best'.
I do not believe that the ability to get server-side filtering is a Premier Service; I do not have a Premier License.
No matter whether I use a current browser key or server key, every request results in a REQUEST_DENIED status.
At console.cloud.google.com/apis I have enabled "Google Maps JavaScript" and, just from reading all the other postings, I have added, probably unnecessarily and with no change in the result, "Google Places API Web Service".
My only remaining guess is that my request is being denied in relation to the terms of the service agreement requiring that this service include the display of a Google Map. My application DOES display a Google Map, but I do not see how to let the Google Maps server know that. My API stack is using the JavaScript API with XML results requested via this URL: "http://maps.googleapis.com/maps/api/js?language=en&libraries=places", and the geocoding requests [forward and reverse] work fine via this URL:
http://maps.googleapis.com/maps/api/geocode/xml?, but adding a key="" in order to take advantage of server-side filtering is always denied.
What am I missing that needs to be passed in the request in order to have my api key honored and for me to get a better result set consuming less network bandwidth?
Since you are using the Geocoding API, you have to enable it in your project. You have to generate a server API key and use it with your request (an example request is shown at the end of this answer).
The official documentation covers this subject:
https://developers.google.com/maps/documentation/geocoding/get-api-key
For Maps JavaScript API you have to use a Browser API key:
https://developers.google.com/maps/documentation/javascript/get-api-key
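As an illustration (a sketch; the coordinates, result-type filter, and key are placeholders), a reverse-geocoding request with server-side filtering and a key looks something like this:

```
https://maps.googleapis.com/maps/api/geocode/xml?latlng=40.714224,-73.961452&result_type=street_address&key=YOUR_SERVER_API_KEY
```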
I want to use the Webex API [www.webex.com] to create meetings from my site.
For that, in the case of the URL API, I need my own domain, in this form:
"https://yourWebExHostedName.webex.com/yourWebExHostedName/".
And in the case of the XML API, I need a WebExID, SiteID, and PartnerID.
Those are mentioned in this Webex official document.
https://developer.cisco.com/documents/4733862/4736679/URL+API+WBS+27+Ref+Guide.pdf
These parameters are available in the testing environment.
But I don't have my own domain to use this API in a production environment.
So I want to know whether it is possible to use this API in a production environment without owning a domain.
Do you have any idea? Have you faced such a problem? I need an urgent solution for this.
For the XML API, you can obtain those parameters from this page (you need to log in or register first to be able to see the form):
https://developer.cisco.com/site/webex-developer/develop-test/try-webex-apis/
To test the API, all the requests would be made to the sandbox site https://apidemoeu.webex.com
No.
You can't go to production without a WebEx domain. Recordings of videos, host users, and attendee users all take storage space on the server, and to store all of that data you need your own WebEx hosting site.