How to configure WSO2 API Manager with more than two nodes? - wso2-api-manager

I have read about the WSO2 API Manager active-active setup, but I want to set up API Manager with more than two nodes, for example three nodes. I am confused by some of the configurations, for example, how should I configure the Throttling part for 3 nodes in deployment.toml?
[apim.throttling]
event_duplicate_url = ["tcp://<node2-hostname>:<node2-port>"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<node1-hostname>:<node1-port>"]
traffic_manager_auth_urls = ["ssl://<node1-hostname>:<node1-port>"]
type = "loadbalance"
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<node2-hostname>:<node2-port>"]
traffic_manager_auth_urls = ["ssl://<node2-hostname>:<node2-port>"]
type = "loadbalance"
and
[apim.throttling]
event_duplicate_url = ["tcp://<node1-hostname>:<node1-port>"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<node1-hostname>:<node1-port>"]
traffic_manager_auth_urls = ["ssl://<node1-hostname>:<node1-port>"]
type = "loadbalance"
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<node2-hostname>:<node2-port>"]
traffic_manager_auth_urls = ["ssl://<node2-hostname>:<node2-port>"]
type = "loadbalance"
I also want to have two Analytics instances and configure them in Nginx. Should I add a new upstream for Analytics in the Nginx config?
In the WSO2 API Manager documentation, the reverse proxy has no direct relation to Analytics. Why, and should we configure Analytics in the reverse proxy (Nginx)?

It is good to have a distributed setup and deploy as many Gateway nodes as you want instead of deploying 3 All-In-One API Manager nodes. You can learn more about the API Manager profiles and components here. Further, you can refer to this doc to set up and deploy a distributed setup.
If your requirement is to spin up 3 All-In-One API Manager nodes, then you have to configure the event_duplicate_url and throttling.url_group configurations with all three nodes.
Given below is a sample configuration:
# node1 configurations
[apim.throttling]
event_duplicate_url = ["tcp://<node2-hostname>:<node2-port>", "tcp://<node3-hostname>:<node3-port>"]
...
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<node3-hostname>:<node3-port>"]
traffic_manager_auth_urls = ["ssl://<node3-hostname>:<node3-port>"]
type = "loadbalance"
# node2 configurations
[apim.throttling]
event_duplicate_url = ["tcp://<node1-hostname>:<node2-port>", "tcp://<node3-hostname>:<node3-port>"]
...
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<node3-hostname>:<node3-port>"]
traffic_manager_auth_urls = ["ssl://<node3-hostname>:<node3-port>"]
type = "loadbalance"
# node3 configurations
[apim.throttling]
event_duplicate_url = ["tcp://<node1-hostname>:<node1-port>", "tcp://<node2-hostname>:<node2-port>"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<node1-hostname>:<node1-port>"]
traffic_manager_auth_urls = ["ssl://<node1-hostname>:<node1-port>"]
type = "loadbalance"
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<node2-hostname>:<node2-port>"]
traffic_manager_auth_urls = ["ssl://<node2-hostname>:<node2-port>"]
type = "loadbalance"
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<node3-hostname>:<node3-port>"]
traffic_manager_auth_urls = ["ssl://<node3-hostname>:<node3-port>"]
type = "loadbalance"
Further, you can follow the Configuration Catalog documentation to configure the Analytics section in the API Manager nodes.
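As a rough sketch only (assuming the API-M 3.2.x Analytics distribution; the hostnames, ports, and exact key names below are placeholders that should be verified against the Configuration Catalog for your version, since API-M 4.x ships a different analytics stack), the Analytics-related part of deployment.toml on each node could look something like this:
# Hypothetical sketch - verify the key names against the Configuration Catalog
[apim.analytics]
enable = true
# Store API of the Analytics deployment; with two Analytics instances this
# would typically point to a load-balanced address rather than a single node.
store_api_url = "https://analytics-lb.example.com:7444"
[[apim.analytics.url_group]]
# Event receiver endpoints of the two Analytics worker nodes (placeholder hosts/ports)
analytics_url = ["tcp://analytics1.example.com:7612", "tcp://analytics2.example.com:7612"]
analytics_auth_url = ["ssl://analytics1.example.com:7712", "ssl://analytics2.example.com:7712"]
type = "loadbalance"
Regarding Nginx: exposing the Analytics Dashboard through the reverse proxy would generally mean adding a separate upstream for the dashboard nodes, but that is a routing choice on your side rather than something API-M itself requires, which is likely why the reverse proxy guide does not mention Analytics.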

Related

WSO2 API Manager (wso2am-4.0.0) - APIs not getting deployed to the 2nd node in a clustered setup

I'm using the latest API Manager (wso2am-4.0.0) and I am trying to implement clustering of 2 nodes on 2 separate servers. I am trying to sync the APIs according to the documentation below, setting up the deployment.toml files in both nodes accordingly.
https://apim.docs.wso2.com/en/3.2.0/install-and-setup/setup/distributed-deployment/synchronizing-artifacts-in-a-gateway-cluster/#inbuilt-artifact-synchronization
Currently, the APIs appear in both nodes once deployed from the 1st node. But when I try to access the API on the 2nd node (by sending a request with Postman), it results in a 404 resource not found. Interestingly, if I restart the 2nd node, the API starts working on the 2nd node as well.
Any solution for this is most welcome.
Thanks in advance.
The configuration below got the APIs to sync between the nodes.
Node 1
[apim.throttling]
event_duplicate_url = ["tcp://127.0.0.1:5673"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://127.0.0.1:9611"]
traffic_manager_auth_urls = ["ssl://127.0.0.1:9711"]
type = "loadbalance"
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<Node2_IP>:9611"]
traffic_manager_auth_urls = ["ssl://<Node2_IP>:9711"]
type = "loadbalance"
Node 2
[apim.throttling]
event_duplicate_url = ["tcp://127.0.0.1:5672"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://127.0.0.1:9611"]
traffic_manager_auth_urls = ["ssl://127.0.0.1:9711"]
type = "loadbalance"
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<Node1_IP>:9611"]
traffic_manager_auth_urls = ["ssl://<Node1_IP>:9711"]
type = "loadbalance"
PS: In the documentation, the 2nd configuration's IP is given as the localhost IP, which needs to be corrected as above.
https://apim.docs.wso2.com/en/latest/install-and-setup/setup/single-node/configuring-an-active-active-deployment/
Step number 8.

APIs not getting deployed in one gateway in latest API Manager (wso2am-4.0.0)

I'm using the latest API Manager (wso2am-4.0.0) and I am trying to implement one control plane node (which also acts as a separate gateway node) and a gateway worker node as per the documentation.
https://apim.docs.wso2.com/en/latest/deploy-and-publish/deploy-on-gateway/deploy-api/deploy-through-multiple-api-gateways/
Currently, the APIs are getting deployed to the gateway worker node as expected (when accessed via port 8243) but not getting deployed to the control plane node, which results in a 404 not found when invoking the API via the control plane node (when accessed via port 8244).
The relevant deployment.toml configurations for both nodes are given below.
Control Plane
[[apim.gateway.environment]]
name = "Production Gateway"
type = "production"
display_in_api_console = true
description = "Production Gateway Environment"
show_as_token_endpoint_url = true
service_url = "https://localhost:9443/services/"
username= "${admin.username}"
password= "${admin.password}"
ws_endpoint = "ws://localhost:9099"
wss_endpoint = "wss://localhost:8099"
http_endpoint = "http://localhost:8280"
https_endpoint = "https://localhost:8243"
[[apim.gateway.environment]]
name = "Default"
type = "hybrid"
display_in_api_console = true
description = "External Gateway Environment"
show_as_token_endpoint_url = true
service_url = "https://localhost:9444/services/"
username= "${admin.username}"
password= "${admin.password}"
ws_endpoint = "ws://localhost:9100"
wss_endpoint = "wss://localhost:8100"
http_endpoint = "http://localhost:8281"
https_endpoint = "https://localhost:8244"
Gateway Worker
[apim.key_manager]
service_url = "https://<hostname>:9443/services/"
username = "$ref{super_admin.username}"
password = "$ref{super_admin.password}"
[apim.throttling]
service_url = "https://<hostname>:9443/services/"
throttle_decision_endpoints = ["tcp://<hostname>:5672"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<hostname>:9611"]
traffic_manager_auth_urls = ["ssl://<hostname>:9711"]
[apim.sync_runtime_artifacts.gateway]
gateway_labels =["Default","Production Gateway"]
Note: I started these servers using the switches -Dprofile=control-plane and -Dprofile=gateway-worker respectively.
Am I missing any configuration here?
Thanks in advance.

The "localhost" in the combobox in Devportal doesn't take the IP address of my VMware, but the URL of the page does

I've installed WSO2 API Manager 3.2 on Docker/VMware/Linux. When I want to test the API created and published in the "Publisher" from the "Devportal", the servers combobox for selecting the URL address always shows https://localhost:8243/test/1.0, but I want https://x.x.27.197:8243/test/1.0.
When I tested the same URL (https://x.x.27.197:8243/test/1.0) in Postman, it succeeded!
Many thanks.
The attached image describes the problem in more detail.
Did you try changing the gateway environment configurations in the deployment.toml? You can change the gateway URL values by updating the http_endpoint and https_endpoint values in the config.
For example,
[[apim.gateway.environment]]
name = "Production and Sandbox"
type = "hybrid"
display_in_api_console = true
description = "This is a hybrid gateway that handles both production and sandbox token traffic."
show_as_token_endpoint_url = true
service_url = "https://localhost:${mgt.transport.https.port}/services/"
username= "${admin.username}"
password= "${admin.password}"
ws_endpoint = "ws://localhost:9099"
wss_endpoint = "wss://localhost:8099"
http_endpoint = "http://<your_ip>:${http.nio.port}"
https_endpoint = "https://<your_ip>:${https.nio.port}"
For more information, refer to this.

WSO2 apim.gateway.environment

https://github.com/wso2/k8s-wso2am-operator
I followed the above in setting up the server. The instance is up, but routing to the API gateway for the sandbox/production environment shows an error: page not found. Below is the configuration I have done; in Kubernetes I applied an ingress controller to route the domain traffic with SSL, but the gateway page still shows "page not found".
[[apim.gateway.environment]]
name = "Production and Sandbox"
type = "hybrid"
display_in_api_console = true
description = "This is a hybrid gateway that handles both production and sandbox token traffic."
show_as_token_endpoint_url = true
service_url = "https://domainname:443/services/"
ws_endpoint = "ws://wso2apim:9099"
wss_endpoint = "wss://wso2apim:8099"
http_endpoint = "http://wso2apim:32004"
https_endpoint = "https://wso2apim:32003"

scollector - tagging metrics from vsphere

just a question about scollector tagging. I have a config file that looks like this:
Host = "bosun01:80"
BatchSize = 5000
[Tags]
customer = "Admin"
environment = "bosun"
datacenter = "SITE1"
[[Vsphere]]
Host = "CUST2SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST3SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST4SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST4SITE2VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
CollectorExpr = "Vsphere"
[TagOverride.MatchedTags]
Host = '^(?P<customer>.{5})(?P<datacenter>.{5})(?P<environment>)\.[.]+'
with the idea being that we can retrieve and tag data from different vsphere servers.
My understanding of the docs is that this will give us a number of different tag values based on what is regexed out of the Vsphere hostname. The initial tags are for the local host, and then we use overrides for the data coming from Vsphere.
However, when I implement this, I notice that these metrics are coming in with the original environment tag of "bosun" rather than the override being applied.
I have tried an alternate config:
Host = "bosun01:80"
BatchSize = 5000
[Tags]
customer = "Admin"
environment = "bosun"
datacenter = "SITE1"
[[Vsphere]]
Host = "CUST2SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env01"
[[Vsphere]]
Host = "CUST3SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env02"
[[Vsphere]]
Host = "CUST4SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env03"
[[Vsphere]]
Host = "CUST4SITE2VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env04"
But I am seeing similar behavior (the last environment tag is applied to all vsphere data), so I'm not quite sure where I am going wrong.
Can someone help me understand where I am going wrong here?
Update
As per Greg's answer below, my problem was that I didn't have the CollectorExpr quite right.
Using scollector -l I was able to come up with the correct CollectorExpr.
# ./scollector-linux-amd64 -l | grep vsphere
vsphere-CUST1-SITE1-MGMTVC01
vsphere-CUST1-SITE2-MGMTVC01
vsphere-CUST1-SITE1-CLIVC01
vsphere-CUST1-SITE2-CLIVC01
#
Our config (for those looking for examples) ended up something like this:
Host = "hwbosun01:80"
BatchSize = 5000
[Tags]
customer = "Customer1"
environment = "bosun"
datacenter = "eq"
[[Vsphere]]
Host = "CUST1-SITE1-MGMTVC01"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST1-SITE2-MGMTVC01"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST1-SITE1-CLIVC01"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST1-SITE2-CLIVC01"
User = "user"
Password = "pass"
[[TagOverride]]
CollectorExpr = "CUST1-SITE1-MGMTVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site1'
[[TagOverride]]
CollectorExpr = "CUST1-SITE2-MGMTVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site2'
[[TagOverride]]
CollectorExpr = "CUST1-SITE1-CLIVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site1'
[[TagOverride]]
CollectorExpr = "CUST1-SITE2-CLIVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site2'
I believe CollectorExpr is a regular expression that must match against the output of scollector -l or the collector tag values used in the scollector.collector.duration metric. Our vsphere instances get the tag values of vsphere-ny-vsphere02 for ny-vsphere02 and vsphere-nyhq-vsphere01 for nyhq-vsphere01. The following settings should match against those collector names:
[[TagOverride]]
CollectorExpr = "vsphere-ny-"
[TagOverride.Tags]
datacenter = 'ny'
[[TagOverride]]
CollectorExpr = "vsphere-nyhq-"
[TagOverride.Tags]
datacenter = 'nyhq'
Using [TagOverride.MatchedTags] instead of [TagOverride.Tags] should work to extract the value out of the hostname, but keep in mind that all the hostnames are truncated to their shortname (no FQDN) unless you set FullHost = true in the scollector.toml file. My guess is your settings are failing because the CollectorExpr is incorrect. Try something like:
[[TagOverride]]
CollectorExpr = "vsphere-"
[TagOverride.MatchedTags]
Host = '^(?P<customer>.{5})(?P<datacenter>.{5})(?P<environment>[^.]+)'
If that doesn't work, try using '[TagOverride.Tags]' in a dev environment to see if you can add test tags/values to those metrics.
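For instance, a throwaway override along these lines (the override_test tag name and value are arbitrary test placeholders, not anything scollector requires) can confirm whether a given CollectorExpr actually matches before you switch back to [TagOverride.MatchedTags]:
[[TagOverride]]
CollectorExpr = "vsphere-"
[TagOverride.Tags]
# arbitrary test tag; if it shows up on the vsphere metrics, the CollectorExpr matched
override_test = "hit"
If the test tag never appears, the CollectorExpr is not matching the collector names reported by scollector -l.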
