How to configure the load balancer/reverse proxy settings in WSO2 API Manager - wso2-api-manager

I am trying to configure an Active-Active deployment of WSO2 API-M.
I have three nodes: two run active all-in-one wso2am instances, and the third runs nginx.
I configured deployment.toml as instructed by https://apim.docs.wso2.com/en/latest/install-and-setup/setup/setting-up-proxy-server-and-the-load-balancer/configuring-the-proxy-server-and-the-load-balancer/
but the requests fail.
How do I configure nginx and the product so that the product works behind the reverse proxy?

Related

Why, after adding proxyPort in WSO2 API Manager 3.2.0, can I not access https://api.am.wso2.com:443/publisher directly?

I want to use an Nginx reverse proxy as the load balancer, but after adding proxyPort to the WSO2 API Manager 3.2.0 deployment.toml:
[transport.https.properties]
proxyPort = 443
I can no longer access https://api.am.wso2.com:443/publisher directly.
My hostname is also set to hostname = "api.am.wso2.com".
Could you please guide me?
Once the proxyPort is enabled, you need at least an Nginx instance running with the relevant configuration to access the API Manager. You can find the default single-node Nginx configuration here.
Therefore, this is expected behavior. As an alternative, you can disable the proxy port configuration, configure only the hostname in the TOML, and try accessing the portals.
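If you do keep the proxy port, a minimal sketch of the two pieces together, assuming two all-in-one nodes behind a single Nginx (node IPs and certificate paths are assumptions; the hostname is the one from the question). deployment.toml on both nodes:

[server]
hostname = "api.am.wso2.com"

[transport.https.properties]
proxyPort = 443

And the Nginx site:

upstream wso2am {
    ip_hash;                     # session affinity for the Publisher/Dev Portal UIs
    server 10.0.0.1:9443;
    server 10.0.0.2:9443;
}

server {
    listen 443 ssl;
    server_name api.am.wso2.com;
    ssl_certificate     /etc/nginx/ssl/am.crt;
    ssl_certificate_key /etc/nginx/ssl/am.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://wso2am;
    }
}

With something like this in place, the portals are reached through https://api.am.wso2.com/publisher rather than a node's own 9443 port, which is the behavior the answer describes.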

wso2 active-active all-in-one: configure load balancer with IIS not NGINX

I have 3 servers. One server hosts the wso2 database. I am using wso2 pattern 2 (active-active all-in-one), so I have wso2-am 2.6.0 and wso2-am-analytics 2.6.0 on each of the two remaining servers. I want these two servers to work active-active.
I configured my servers according to this. In part 2, configuring the load balancer, I don't want to use NGINX; I want to use IIS. I tried to configure IIS locally according to this. So I defined the two servers (by their IPs) in the server farm of IIS. I am not sure that I configured it correctly and I don't know how to test it. How should the wso2 host names be set in IIS?
Is it possible to use IIS instead of NGINX for load-balancing support in wso2?
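For reference, the usual IIS counterpart of an nginx upstream is an Application Request Routing (ARR) server farm plus a URL Rewrite rule that forwards traffic to the farm. A rough sketch of the relevant applicationHost.config fragments (farm name, node IPs, and ports are assumptions):

<webFarms>
  <webFarm name="wso2am" enabled="true">
    <server address="192.168.1.10" enabled="true">
      <applicationRequestRouting httpsPort="9443" />
    </server>
    <server address="192.168.1.11" enabled="true">
      <applicationRequestRouting httpsPort="9443" />
    </server>
    <applicationRequestRouting>
      <affinity useCookie="true" />
      <loadBalancing algorithm="WeightedRoundRobin" />
    </applicationRequestRouting>
  </webFarm>
</webFarms>
<rewrite>
  <globalRules>
    <rule name="wso2am-farm" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*" />
      <action type="Rewrite" url="https://wso2am/{R:0}" />
    </rule>
  </globalRules>
</rewrite>

The public WSO2 host name would then typically be the host name on the IIS site binding that clients connect to, matching the hostname configured on the WSO2 nodes, rather than anything set inside the farm itself.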

How to serve Kubernetes backend and Firebase hosting frontend from the same domain name?

I want to set up a web app using three components that I already have:
Domain name registered on domains.google.com
Frontend web app hosted on Firebase Hosting and served from example.com
Backend on Kubernetes cluster behind Load Balancer with external static IP 1.2.3.4
I want to serve the backend from example.com/api or api.example.com
My best guess is to use Cloud DNS to connect the IP address to a subdomain (or URL):
1.2.3.4 -> api.example.com
1.2.3.4 -> example.com/api
The problem is that Cloud DNS uses custom name servers, like this:
ns-cloud-d1.googledomains.com
So if I set Google's default name servers I can only reach the Firebase hosting, and if I use the custom name servers I can only reach the Kubernetes backend.
What is a proper way to be able to reach both api.example.com and example.com?
Edit:
As a temporary workaround I'm combining two default name servers and two custom name servers from Cloud DNS, like this:
ns-cloud-d1.googledomains.com (custom)
ns-cloud-d2.googledomains.com (custom)
ns-cloud-b1.googledomains.com (default)
ns-cloud-b2.googledomains.com (default)
But if someone knows the proper way to do it - please post the answer.
Approach 1:
example.com --> Firebase Hosting (A record)
api.example.com --> Kubernetes backend
Pro: Super-simple
Con: A CORS preflight request is needed by the browser before API calls can be made.
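For illustration, the DNS records this approach boils down to (the Firebase Hosting IP is a placeholder shown in the Firebase console; 1.2.3.4 is the load balancer IP from the question):

example.com.      A    <Firebase Hosting IP from the console>
api.example.com.  A    1.2.3.4

For the CORS con, the backend would then also have to answer preflight requests with a header such as Access-Control-Allow-Origin: https://example.com.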
Approach 2:
example.com --> Firebase Hosting via k8s ExternalName service
example.com/api --> Kubernetes backend
Unfortunately, from my own efforts to make this work with a Service of type ExternalName, all I could manage was to get infinitely redirected, something I am still unable to debug.
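For reference, the kind of Service this approach relies on looks roughly like this (the Service name and the Firebase Hosting hostname are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: firebase-frontend
spec:
  type: ExternalName
  # Firebase Hosting hostname of the site (assumed); an Ingress rule for "/"
  # would point at this Service, with "/api" pointing at the backend Service.
  externalName: myapp.web.app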
Approach 3:
example.com --> Google Cloud Storage via NGINX proxy to redirect paths to index.html
example.com/api --> Kubernetes backend
You will need to deploy the static files to Cloud Storage, with an NGINX proxy in front if you want SPA-like redirection to index.html for all routes. This approach does not use Firebase Hosting at all.
The complication lies in the /api redirect which depends on which Ingress you are using.
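For example, with the NGINX Ingress Controller the /api prefix can be stripped with a rewrite annotation; a sketch under that assumption (the Service name and port are assumptions as well):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-api
  annotations:
    # ingress-nginx specific: rewrite /api/foo -> /foo before it reaches the backend
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: backend
            port:
              number: 80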
Hope that helps.
I would suggest creating two host paths. The first would route to "example.com" using a NodePort-type service. You can then use an ExternalName service for "api.example.com".

WSO2 ESB 4.8.1 Clustering

Is it possible to set up one ESB node with a dual role as both worker and manager?
I'm using WSO2 ESB 4.8.1 and nginx as the load balancer.
This is pretty easy. This is what you have to do.
Forget about nginx and set up the ESB cluster first. Let's say a cluster with one manager and one worker. I think you will be able to get it done by following the instructions here. Instead of the WSO2 ELB mentioned in the doc, you are going to use nginx. Instead of the ELB, you can set the manager and worker nodes as the well-known members, i.e. in both nodes, you set both nodes as the well-known members.
Once you have the cluster working, you should be able to send requests to an artifact deployed to both nodes separately. The difference between the manager node and the worker node is that the manager node is the only one that commits to the SVN repo. So, when you deploy new artifacts, you should deploy them using the manager node.
Now you have to configure two sites in nginx. Let's assume you decided to use esbmgt.mydomain.com for the management node and esb.mydomain.com for the worker. In esbmgt's upstream, you only mention the manager node, and you route the requests to port 9443 of that node. In esb's upstream, you mention both nodes, and the requests are routed to 8280 (http) and 8243 (https). That's because the ESB serves requests on those ports, while the UI is exposed via 9443 (https). A sketch of the two sites is given below.
I hope the above information will help you.
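For illustration, a minimal sketch of the two nginx sites described above (node IPs and certificate paths are assumptions):

upstream esbmgt {
    # management UI: manager node only
    server 192.168.1.10:9443;
}

upstream esb {
    # service traffic: both nodes
    server 192.168.1.10:8243;
    server 192.168.1.11:8243;
}

server {
    listen 443 ssl;
    server_name esbmgt.mydomain.com;
    ssl_certificate     /etc/nginx/ssl/esb.crt;
    ssl_certificate_key /etc/nginx/ssl/esb.key;

    location / {
        proxy_set_header Host $host;
        proxy_pass https://esbmgt;
    }
}

server {
    listen 443 ssl;
    server_name esb.mydomain.com;
    ssl_certificate     /etc/nginx/ssl/esb.crt;
    ssl_certificate_key /etc/nginx/ssl/esb.key;

    location / {
        proxy_set_header Host $host;
        proxy_pass https://esb;
    }
}

A plain-HTTP server block listening on 80 and proxying to port 8280 of both nodes can be added in the same way.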

Cannot access WSO2 API Manager Publisher after adding API Manager Feature in ESB

I'm new to WSO2 products and the company I worked for asked me to evaluate WSO2 Enterprise Service Bus (ESB). Aside from this they also wanted to evaluate WSO2 Identity Server (IS) and WSO2 API Manager (APIM). So we created a test system, installing ESB as the base product. After researching, most references on the web state that you can install other WSO2 products inside an existing one by installing their features. So we took that approach and, after a few issues, successfully installed APIM and IS inside a running ESB. However, while accessing the APIM Publisher using the URL https://:9443/publisher we got an error
HTTP Status 405 - HTTP method GET is not supported by this URL
type Status report
message HTTP method GET is not supported by this URL
description The specified HTTP method is not allowed for the requested resource.
Apache Tomcat/7.0.34
Any idea what happened, as we have not seen any errors in the logs? Is it possible to just install WSO2 APIM on a separate instance, assigning it a different port so as to avoid a conflict with the ESB?
Thanks for the help.
Is it possible to just install WSO2 APIM on a separate instance, assigning it a different port so as to avoid a conflict with the ESB?
Port offset allows you to run multiple WSO2 products, multiple instances of a WSO2 product, or multiple WSO2 product clusters on the same server or virtual machine (VM). Port offset defines the number by which all ports defined in the runtime such as the HTTP/S ports will be offset.
For example, if the default HTTP port is 9763 and the portOffset is 1, the effective HTTP port will be 9764. Therefore, for each additional WSO2 product instance, set the port offset to a unique value (the default is 0) so that they can all run on the same server without any port conflicts.
Port offset can be passed to the server during startup. The following command starts the server with the default port incremented by 3.
./wso2server.sh -DportOffset=3
Alternatively, you can set it in the Ports section of <PRODUCT_HOME>/repository/conf/carbon.xml as follows:
<Offset>3</Offset>
Hi, you don't need to install API Manager features in the ESB. You can take an API Manager instance instead, which has a lightweight ESB running inside. You can access this from the management console of the API Manager.
