How WSO2 API Manager distributed setup works - wso2-api-manager

How does the deployment of an API to the Gateway node happen after publishing the API from the Publisher node in a WSO2 APIM distributed setup?

There is a section called <Environments> under <APIGateway> in the api-manager.xml configuration file. This is where the Gateway Environment section of an API in the Publisher webapp is populated from. When you select an environment there and publish from the Publisher webapp, it creates the Synapse artifact for the API and pushes it to the Gateway through an admin service call. The <ServerURL> is used for that call, so you need to define <ServerURL> correctly in the Publisher node so that it points to the Gateway node.
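For reference, here is a minimal sketch of that section of api-manager.xml. The host names and credentials are placeholders; adjust them to your own Gateway node and check the exact element set against the APIM version you run.

  <APIGateway>
      <!-- Gateway environments the Publisher can deploy APIs to -->
      <Environments>
          <Environment type="hybrid" api-console="true">
              <Name>Production and Sandbox</Name>
              <!-- Admin services of the Gateway node; the Publisher calls these
                   to push the Synapse artifact when an API is published -->
              <ServerURL>https://gw.example.com:9443/services/</ServerURL>
              <Username>admin</Username>
              <Password>admin</Password>
              <!-- Endpoints that API consumers invoke -->
              <GatewayEndpoint>http://gw.example.com:8280,https://gw.example.com:8243</GatewayEndpoint>
          </Environment>
      </Environments>
  </APIGateway>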

Related

Failed to publish API using WSO2 API manager

I am unable to publish API in WSO2 API manager 2.6.0 environments.
Error message is:
Failed to publish Environments
Production and Sandbox
Error while publishing prototype API to the Gateway. Error while publishing API to the Gateway. No API exists by the name
I checked the API in the DB and it exists.
I searched over the net but did not find any solution to this issue.
It looks like even after deleting the API, the context URI still persists somewhere, so I think I need to delete the context URI. Please suggest.
I found the solution:
Even after deleting the API from the WSO2 Publisher, it seems the API is not completely removed from WSO2. I checked whether the API still exists and found it in the location below, then deleted it.
API Manager\2.6.0\repository\deployment\server\synapse-configs\default\api
Now I am able to create a new API with that context URI.
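For anyone cleaning this up on the gateway node, a sketch of the steps is below; the artifact file name used here (admin--MyAPI_v1.0.xml) is purely illustrative, since each API gets its own <provider>--<name>_v<version>.xml file.

  # List the deployed Synapse API artifacts on the gateway node
  ls repository/deployment/server/synapse-configs/default/api/

  # Remove the stale artifact left behind by the deleted API (name is illustrative)
  rm repository/deployment/server/synapse-configs/default/api/admin--MyAPI_v1.0.xml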

Azure CosmosDB Firewall for Azure Web Apps

I have an Azure Web App hosting an API (ASP.NET MVC project) that interacts with a CosmosDB database and collections to get subscriptions and other information.
The CosmosDB database is accessed read/write by the Web App middleware through the NuGet package "Microsoft.Azure.DocumentDB" SDK v1.19.1.
I am trying to set up the CosmosDB IP Firewall through the Azure Portal. I allowed the Azure Portal to have access to the db and then I needed to also allow the web app (also hosted on Azure) to have access. To do this, I copied the Virtual IP Address of the Web App from the Properties tab in the Azure Portal.
But this was not enough. I waited more than 10 minutes and kept trying my web app, but all the calls to CosmosDB were rejected with error 404, which, as the documentation states, is the expected behavior for SDK calls (for security reasons).
Then I added all the Outbound IP Addresses listed on the same Properties tab of the Web App. I waited for more than 20 minutes and still got the 404 error.
What are the correct steps to achieve the requested task?
For example, in SQL on Azure, the IP filtering has an option to allow access from any Azure App/VM/Service. How can we achieve the equivalent in CosmosDB?
Thanks in advance
Since Azure App Service is PaaS, and following this article, please try adding the IP 0.0.0.0.
On the Azure Portal, this can also be set by switching on Allow access to Azure Services.
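If you prefer to script it, a rough sketch with the Azure CLI is below; the account and resource group names are placeholders, and the parameter name may differ between CLI versions, so treat the exact invocation as an assumption to verify.

  # Adding 0.0.0.0 to the IP filter allows connections originating inside Azure,
  # equivalent to switching on "Allow access to Azure Services" in the portal
  az cosmosdb update \
    --resource-group my-resource-group \
    --name my-cosmos-account \
    --ip-range-filter "0.0.0.0"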

Cloud alternative for WSO2 Pre-Packaged Identity Server and API Manager

I am now hosting the Pre-Packaged Identity Server 5.2.0 with API Manager 2.0.0 [https://docs.wso2.com/display/CLUSTER44x/Configuring+the+Pre-Packaged+Identity+Server+5.2.0+with+API+Manager+2.0.0] on my own AWS instance.
I am planning to move to the managed cloud solution by WSO2, but I can only see independent installations of Identity Server and WSO2 API Manager. Is there a cloud alternative for the Identity Server + API Manager combo?
I am using WSO2 Identity Server for user management only, keeping users in it. Can that be done in API Manager as well?
What is the cloud alternative for this?
WSO2 Cloud uses Identity Server for providing Single Sign-On. The Cloud deployment architecture is designed so that API Manager can also do the user management (that comes with the power of the WSO2 platform). You don't need to worry about the Cloud having API Manager and Identity Server separately.
If you are managing your subscribers and publishers, then it's an out-of-the-box scenario in the Cloud. If you want to store the end users of the APIs (i.e. if you are using the password grant type), you can add a secondary user store and keep the end users in it.
I recommend raising these questions via the "Contact Support" option available in the Cloud UI.

ServiceDataPublisherAdmin not set in wso2 api manager gateway

I am setting up WSO2 API Manager 1.10.x with DAS 3.0.1 for publishing API statistics using MySQL. My API Manager setup is clustered, with the gateway worker node on a separate VM. I followed the documents to enable analytics for API Manager via the UI. I also followed this document to manually enable analytics for the gateway worker node: http://blog.rukspot.com/2016/05/configure-wso2-apim-analytics-using-xml.html After setup I restarted all servers and everything seemed fine. But when I make a request to a published API, the gateway does not publish any statistics to the DAS receiver. There is no data in the DAS summary tables either.
By debugging the WSO2 Gateway, I was able to narrow it down to the fact that
private static ServiceDataPublisherAdmin dataPublisherAdminService; inside org.wso2.carbon.apimgt.impl.internal.APIManagerComponent never gets set. Therefore, APIMgtUsageHandler does not do anything.
Any idea on what could cause this to happen?
Thanks.
Figured it out myself.
The bundle org.wso2.carbon.statistics_4.4.8 and two other statistics bundles are necessary for the gateway worker to publish statistics data to DAS, but the worker profile provided in the WSO2 API Manager 1.10.0 package excludes them.
To work around it, start WSO2 on the worker node with -Dprofile=default.
You can use the OSGi console to confirm that these bundles are activated. Once the bundles are active, the classes inside them are instantiated, and the gateway will start publishing statistics to DAS when you invoke a published API. A sketch of the commands is below.
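This is a sketch of the workaround on the worker node; the -DosgiConsole property and the ss filter reflect the standard Carbon/Equinox console, so verify them against your distribution.

  # Start the gateway worker with the default profile so the statistics bundles are loaded
  sh bin/wso2server.sh -Dprofile=default

  # Optionally expose the OSGi console on a port to verify the bundles
  sh bin/wso2server.sh -Dprofile=default -DosgiConsole=9999
  # then, in the OSGi console (telnet localhost 9999):
  #   ss org.wso2.carbon.statistics    <- the bundle should be listed as ACTIVE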

Is it possible to use "Google Cloud Endpoints" for backend APIs that are not hosted on "Google Platform"?

I wonder, is it possible to use features of "Google Cloud Endpoints" such as authentication (integration with "Auth0" or "Firebase"), logging and others with backend APIs that are hosted on third-party servers?
I learned that "Google Cloud Endpoints" is based on the "Extensible Service Proxy", which in turn is based on NGINX. Does that mean I can somehow edit the nginx config and set it up as a reverse proxy in order to reach backend APIs that are outside of Google Cloud Platform?
The announcement from https://cloudplatform.googleblog.com/2016/09/manage-your-APIs-with-Google-Cloud-Endpoints.html says that: "Google Cloud Endpoints, a distributed API management suite that lets you deploy, protect, monitor and manage APIs written in any language and running on Google Cloud Platform (GCP)"
But article from https://cloud.google.com/endpoints/docs/about-cloud-endpoints says that: "you can host your API anywhere Docker is supported so long as it has internet access to Google Cloud Platform."
There aren't any examples in the docs of how to customize the "Extensible Service Proxy" nginx config file.
I'm a little bit confused here. Is it possible to use "Google Cloud Endpoints" in the way I described above, and if so, how should I do it properly?
The Extensible Service Proxy is a simple nginx web server, but it uses template files. So if you make any changes to the nginx.conf file and then restart the nginx web server, your changes will be overwritten. You need to edit the nginx template configuration file, which is in the folder /etc/nginx.
I've found that it is possible to run the Extensible Service Proxy inside a Docker container that accepts additional command-line parameters. They let you specify the application server address to which nginx will proxy the requests, and even the path to a custom nginx.conf file that will be used (see the sketch after the link below). That's great!
See discussion in google group for details at https://groups.google.com/forum/#!topic/google-cloud-endpoints/b0QtQoPwHzA
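Something along these lines, for example; the flag names match the ESP Docker image documentation, while the service name, backend address and mounted config path are placeholders, so treat the exact invocation as an assumption to verify against your ESP version.

  # Run ESP in Docker and proxy to a backend that is NOT hosted on GCP
  docker run -d -p 8080:8080 \
    -v $(pwd)/custom-nginx.conf:/etc/nginx/custom/nginx.conf \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --service=my-api.endpoints.my-project.cloud.goog \
    --rollout_strategy=managed \
    --backend=http://203.0.113.10:8081 \
    --nginx_config=/etc/nginx/custom/nginx.conf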
Yes, the ESP is designed to run anywhere, including in GCP, in another Cloud, or on your own server.
