I installed and configured a distributed setup of WSO2 API Manager with multitenancy enabled. I have two distributed gateways, and I followed this guide: https://docs.wso2.com/display/AM260/Distributed+Deployment+of+the+Gateway . I created one tenant (let's call it tenantA) and deployed some APIs under it.
The problem is that with multitenancy, the Synapse API artifacts (for APIs created in tenants, not in the super-tenant) are stored on the gateway under APIM-HOME/repository/tenants/tenantA/synapse-configs/default/api and not under APIM-HOME/repository/deployment/server/.
The question is: should I share both paths (via NFS/GlusterFS) between the gateways? If not, which one should I share?
How about the registry? I shared both the config and governance registry partitions; is it supposed to be like this?
Many thanks
In the multitenancy case, those API artifacts are created under the repository/tenants location. You can find those locations in https://docs.wso2.com/display/AM260/Common+Runtime+and+Configuration+Artifacts
Yes, you have to share both paths, as the token, revoke, etc. APIs exist in the super-tenant location.
You also have to share the user DB and registry DB between the gateways in the multitenancy case. https://docs.wso2.com/display/AM210/Understanding+the+Distributed+Deployment+of+WSO2+API-M
I have a backend system built in AWS, and I'm using CloudWatch in all of the services for logging and monitoring. I really like the ability to send structured JSON logs into CloudWatch that are consistent and provide a lot of context around the log message. Querying the logs to get to the root of an issue, or just exploring the health of the environment, is simple, which makes CloudWatch a must-have for my backend.
Now I'm working on the frontend side of things: mobile applications using Xamarin.Forms. I know AWS has Amplify, but I really wanted to stick with Xamarin.Forms, as that's a skill set I've already got and am comfortable with. Since Amplify doesn't support Xamarin.Forms, I've been stuck looking at other options for logging, one of them being Microsoft's AppCenter.
If I go the AppCenter route, I'll end up having to build out a mapping between the AppCenter installation identifier and my users across the AWS and AppCenter environments. Before I start down that path, I wanted to ask a couple of questions about the best practices and security of an alternative approach.
I'm considering using the AWS SDK for .NET: creating an IAM role with a policy that allows X-Ray and CloudWatch PUT operations on a specific log group, and then assigning it to an IAM user. I can issue access keys for that user and embed them in my app's config files. This would let me send log data right into CloudWatch from the mobile apps using something like NLog.
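To make that concrete, here's a minimal sketch of pushing a log event with the AWS SDK for .NET. The log group, stream name, and key placeholders are illustrative, not from my actual setup, and the log stream has to exist already (or be created with CreateLogStream first):

// Minimal sketch, assuming an IAM user whose policy only allows
// logs:CreateLogStream / logs:PutLogEvents on this one log group.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

public static class MobileLogShipper
{
    // Keys embedded from app config; "AKIA..." is a placeholder, not a real key.
    private static readonly AmazonCloudWatchLogsClient Client =
        new AmazonCloudWatchLogsClient("AKIA...", "...", RegionEndpoint.USEast1);

    public static Task SendAsync(string structuredJson) =>
        Client.PutLogEventsAsync(new PutLogEventsRequest
        {
            LogGroupName = "/mobile/app-logs",   // hypothetical log group
            LogStreamName = "install-1234",      // e.g. one stream per install
            LogEvents = new List<InputLogEvent>
            {
                new InputLogEvent { Timestamp = DateTime.UtcNow, Message = structuredJson }
            }
        });
}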
I noticed that with AppCenter I have to provide a client secret to the app, which wouldn't be any different from providing an IAM user access key to my app for pushing into CloudWatch. I'm typically a little shy about issuing AWS access keys, but as long as the policy is tight, I can't think of any negative side effects... other than someone flooding me with log data should they pull the key out of the app data.
An alternative route I'm exploring: instead of embedding the access keys in my config files, I could request them from my API services and hold them in memory. The only downside is that logging might be a pain when the user doesn't have internet connectivity (I'll need to look at how NLog handles targets that aren't currently available: queueing and flushing).
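For that variant, the SDK accepts short-lived credentials directly. A sketch, where FetchLogCredentialsAsync is a hypothetical call to my own API (e.g. one backed by STS):

// Hypothetical: your API returns a temporary key/secret/session token.
var creds = await FetchLogCredentialsAsync();
var client = new AmazonCloudWatchLogsClient(
    new Amazon.Runtime.SessionAWSCredentials(
        creds.AccessKey, creds.SecretKey, creds.SessionToken),
    RegionEndpoint.USEast1);
// Hold the client in memory; refresh the credentials when they expire.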
Is there anything else I'm not considering or is this approach a feasible solution with minimal risk?
We started the planning phase of a new project and settled on an ASP.NET REST API, which should be hosted on Azure. Since none of us has any experience with deployment to Azure (or any other cloud service), I have two questions.
Do you need separate Azure services for the database and the API, or is there a combined "package" for the prototype that can easily be changed later?
Is there any documentation, or are there any examples, covering the entire deployment process of a simple dummy API and the DB? I have spent the last few hours reading the official documentation and searching around, but I would really love to see some sort of reference, just to ensure I don't miss something.
For now, the best I have found is this and this. This seems rather shallow, so I really hope that there is more.
If you're looking for in-depth design and implementation details, I would suggest the Azure Architecture Center as an excellent place to start; for hands-on experience, there are hundreds of free courses available on Microsoft Learn.
Specifically, there are sections on API design and API implementation. From the Serverless web application page:
If you don't need all of the functionality provided by API Management, another option is to use Functions Proxies. This feature of Azure Functions lets you define a single API surface for multiple function apps, by creating routes to back-end functions. Function proxies can also perform limited transformations on the HTTP request and response. However, they don't provide the same rich policy-based capabilities of API Management.
Function Proxies
I would suggest starting with Azure Functions for your API: you only pay per execution (a combination of the number of calls, CPU, memory, and runtime), and the first 1,000,000 calls per month are free on the Consumption plan, rather than paying for an Azure App Service that hosts your API and runs all the time while only being utilized some of the time.
Some links that might help:
Build Serverless APIs with Azure Functions
Customize an HTTP endpoint in Azure Functions
There is an excellent summary in this article that states:
For heavy workloads:
Private (enterprise) API: API Management with a Premium plan.
Public API: Functions Proxy with the Premium plan.
For light/moderate workloads:
Private API: Functions Proxy with the Premium plan.
Public API: Functions Proxy with a Consumption plan and a custom warm-up solution.
Then, from here, you can use a connection string to an Azure SQL DB inside your functions to write to the DB, or use something like Azure Managed Identity (yes, the link is for Azure PostgreSQL, but the process is much the same for Azure SQL).
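As a rough sketch of the connection-string route (the "SqlConnectionString" app setting, the Items table, and the column are illustrative, not a prescribed schema):

// Minimal HTTP-triggered function that writes one row to Azure SQL.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Data.SqlClient;

public static class CreateItem
{
    [FunctionName("CreateItem")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // The connection string lives in the function app's settings, not in source.
        var connStr = Environment.GetEnvironmentVariable("SqlConnectionString");
        using var conn = new SqlConnection(connStr);
        await conn.OpenAsync();

        using var cmd = new SqlCommand("INSERT INTO Items (Name) VALUES (@name)", conn);
        cmd.Parameters.AddWithValue("@name", (string)req.Query["name"] ?? "unnamed");
        await cmd.ExecuteNonQueryAsync();

        return new OkObjectResult("created");
    }
}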
In terms of deployment, you should look at Azure DevOps (or GitHub Actions):
Setting up a CI/CD pipeline for Azure functions (old way - GUI pipelines)
Deploy an Azure Functions app to Azure (new way - YAML pipelines)
Continuous Delivery for Azure SQL DB using Azure DevOps
Another helpful tool to get a gauge of costs is the Azure Pricing Calculator.
I'm using two Firebase projects: one for development and staging, and another for production. The Firebase CLI allows me to switch projects with firebase use _____.
For the client I'm using create-react-app and implicitly configuring Firebase by using the reserved Hosting URLs.
The trouble comes with configuring each project's connection to third-party services. For most services I have separate accounts, so I need different keys (and secrets on the server) for development and production.
For Firebase Functions, I can use functions config vars for each project. Pretty easy.
But what's the best way to do this on the client?
create-react-app has great support for various .env files, but can I link a .env file to a Firebase project rather than relying on their prioritization?
Or is there a way to expose the Firebase functions config vars to create-react-app's start, build, and test processes as environment variables? (Preferably without building all the variables into the public JS :-P)
What's the best way to do this?
The best way to do this seems to be to use GCP Secret Manager:
Secret Manager stores API keys, passwords, certificates, and other sensitive data. It provides convenience while improving security.
https://cloud.google.com/secret-manager/docs/quickstart
Beware: it's a standalone GCP service, so Google charges you to store your API keys. The pricing calculation example they detail (so I'm guessing it's a typical use case) gives a monthly cost of $15.15.
That's not cheap for storing a few API keys.
The other way is to use Cloud Functions, as you did.
The benefits of using GCP Secret Manager are that the service can be combined with audit logs, that it has version management, and that you can set permission levels.
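If it helps, reading a secret at runtime looks roughly like this with the Secret Manager client library (shown in C# here as one example; the client libraries for other languages mirror it, and the project/secret names are illustrative). For a create-react-app client you'd fetch the value server-side or at build time, never from the shipped bundle:

// Minimal sketch using the Google.Cloud.SecretManager.V1 client library.
// "my-project" and "stripe-key" are illustrative names.
using Google.Cloud.SecretManager.V1;

var client = SecretManagerServiceClient.Create();
var version = new SecretVersionName("my-project", "stripe-key", "latest");
string secret = client.AccessSecretVersion(version).Payload.Data.ToStringUtf8();
// e.g. write it into a .env file during the build, rather than into the bundle.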
I am now hosting the Pre-Packaged Identity Server 5.2.0 with API Manager 2.0.0 [https://docs.wso2.com/display/CLUSTER44x/Configuring+the+Pre-Packaged+Identity+Server+5.2.0+with+API+Manager+2.0.0] on my own AWS instance.
I am planning to move to WSO2's managed cloud solution, but I can only see independent installations of Identity Server and API Manager. Is there a cloud alternative for the Identity Server and API Manager combo?
I am using WSO2 Identity Server for user management only, keeping the users in it. Can that be done in API Manager as well?
What is the cloud alternative for this?
WSO2 Cloud uses Identity Server to provide single sign-on. The Cloud's deployment architecture is set up so that API Manager can also do user management (that comes with the power of the WSO2 platform). You don't need to worry about the Cloud having API Manager and Identity Server separately.
If you are managing your subscribers and publishers, then it's an out-of-the-box scenario in the Cloud. If you want to store the end users of the APIs (i.e., if you are using the password grant type), you can add a secondary userstore and keep the end users in it.
I recommend raising these questions via the "Contact Support" option available in the Cloud UI.
I am trying to test-run a basic .NET web application on Pivotal Cloud Foundry. This web application uses, as its database, a MongoDB server hosted on my local machine. At the moment I am limited to using the cloud infrastructure through the Apps Manager only.
I have read the Pivotal Cloud Foundry docs about user-provided services, but cannot figure out how the connection is actually made. I have already come across other approaches, like MongoDB as a service (beta version), but at the moment I am not allowed access to the Operations Manager. I'm looking for an explanation of user-provided services, or specifically of how to implement the Service Broker API.
I am new to Mongo as well, so any suggestion about making the connection work by tweaking Mongo would help too. Thanks.
The use case you describe (a web app in PCF connecting to a resource on your local machine) is not recommended.
You can create a MongoDB instance for development purposes in PCF.
$ cf marketplace
...
mlab sandbox Fully managed MongoDB-as-a-Service
...
You can create an mlab service instance and bind it to your application. You will then have a MongoDB instance in PCF that you can use for development purposes.
Edit:
In that case, a user-provided service might help: you pass in your remote MongoDB instance's configuration and read it in your application, e.g.:
cf.exe cups my-mongodb -p '{"key1":"value1","key2":"value2"}'
You can add your local MongoDB as a CUPS service to your PCF Dev.
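Once the service is bound, its credentials show up in the app's VCAP_SERVICES environment variable. A minimal C# sketch for reading them (the service name "my-mongodb" and the "uri" key are illustrative and must match whatever you passed to cups):

// Read the bound user-provided service's credentials from VCAP_SERVICES.
// Assumes the service was created with, e.g.:
//   cf cups my-mongodb -p '{"uri":"mongodb://user:pass@host:27017/db"}'
using System;
using Newtonsoft.Json.Linq;

var vcap = JObject.Parse(
    Environment.GetEnvironmentVariable("VCAP_SERVICES") ?? "{}");

// User-provided services are listed under the "user-provided" key.
string uri = (string)vcap["user-provided"]?[0]?["credentials"]?["uri"];
// Hand the URI to the MongoDB .NET driver, e.g. new MongoClient(uri).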
Check out the following post.
How to create a CUPS service for mongoDB?