APIs not getting deployed to one gateway in the latest API Manager (wso2am-4.0.0)

I'm using the latest API Manager (wso2am-4.0.0) and I am trying to set up one control plane node (which also acts as a gateway node) and a separate gateway worker node, as per the documentation:
https://apim.docs.wso2.com/en/latest/deploy-and-publish/deploy-on-gateway/deploy-api/deploy-through-multiple-api-gateways/
Currently, the APIs are getting deployed to the gateway worker node as expected (when accessed via port 8243), but not to the control plane node, which results in a 404 Not Found when invoking the API via the control plane node (via port 8244).
The relevant deployment.toml configurations for both nodes are given below.
Control Plane
[[apim.gateway.environment]]
name = "Production Gateway"
type = "production"
display_in_api_console = true
description = "Production Gateway Environment"
show_as_token_endpoint_url = true
service_url = "https://localhost:9443/services/"
username= "${admin.username}"
password= "${admin.password}"
ws_endpoint = "ws://localhost:9099"
wss_endpoint = "wss://localhost:8099"
http_endpoint = "http://localhost:8280"
https_endpoint = "https://localhost:8243"
[[apim.gateway.environment]]
name = "Default"
type = "hybrid"
display_in_api_console = true
description = "External Gateway Environment"
show_as_token_endpoint_url = true
service_url = "https://localhost:9444/services/"
username= "${admin.username}"
password= "${admin.password}"
ws_endpoint = "ws://localhost:9100"
wss_endpoint = "wss://localhost:8100"
http_endpoint = "http://localhost:8281"
https_endpoint = "https://localhost:8244"
Gateway Worker
[apim.key_manager]
service_url = "https://<hostname>:9443/services/"
username = "$ref{super_admin.username}"
password = "$ref{super_admin.password}"
[apim.throttling]
service_url = "https://<hostname>:9443/services/"
throttle_decision_endpoints = ["tcp://<hostname>:5672"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<hostname>:9611"]
traffic_manager_auth_urls = ["ssl://<hostname>:9711"]
[apim.sync_runtime_artifacts.gateway]
gateway_labels =["Default","Production Gateway"]
Note: I started these servers using the switches -Dprofile=control-plane and -Dprofile=gateway-worker, respectively.
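For reference, the start-up commands were along these lines (assuming the stock api-manager.sh script in each node's <APIM_HOME>/bin directory; api-manager.bat on Windows):
# on the control plane node
sh api-manager.sh -Dprofile=control-plane
# on the gateway worker node
sh api-manager.sh -Dprofile=gateway-worker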
Am I missing any configuration here?
Thanks in advance.

Related

WSO2 API Manager (wso2am-4.0.0) - APIs not getting deployed to the 2nd node in a clustered setup

I'm using the latest API Manager (wso2am-4.0.0) and I am trying to cluster 2 nodes on 2 separate servers. I am trying to sync the APIs according to the documentation below, setting up the deployment.toml files in both nodes accordingly.
https://apim.docs.wso2.com/en/3.2.0/install-and-setup/setup/distributed-deployment/synchronizing-artifacts-in-a-gateway-cluster/#inbuilt-artifact-synchronization
Currently, the APIs appear in both nodes once deployed from the 1st node. But when I try to access the API on the 2nd node (by sending a request with Postman), it results in a 404 resource not found. Interestingly, if I restart the 2nd node, the API starts working on the 2nd node as well.
Any solution for this is most welcome.
Thanks in advance.
The configuration below got the APIs to sync between the nodes.
Node 1
[apim.throttling]
event_duplicate_url = ["tcp://127.0.0.1:5673"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://127.0.0.1:9611"]
traffic_manager_auth_urls = ["ssl://127.0.0.1:9711"]
type = "loadbalance"
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<Node2_IP>:9611"]
traffic_manager_auth_urls = ["ssl://<Node2_IP>:9711"]
type = "loadbalance"
Node 2
[apim.throttling]
event_duplicate_url = ["tcp://127.0.0.1:5672"]
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://127.0.0.1:9611"]
traffic_manager_auth_urls = ["ssl://127.0.0.1:9711"]
type = "loadbalance"
[[apim.throttling.url_group]]
traffic_manager_urls = ["tcp://<Node1_IP>:9611"]
traffic_manager_auth_urls = ["ssl://<Node1_IP>:9711"]
type = "loadbalance"
PS: In the documentation, the 2nd url_group's traffic manager URLs are given with the localhost IP; they need to point to the other node's IP, as shown above.
https://apim.docs.wso2.com/en/latest/install-and-setup/setup/single-node/configuring-an-active-active-deployment/
Step number 8.

The "localhost" in the combobox in Devportal doesn't take the IP adresse of my VMware, but the URL of the page does

I've installed WSO2 API Manager 3.2 on Docker/VMware/Linux. When I want to test an API created and published in the "Publisher" from the "Devportal", the servers combobox for selecting the URL always shows https://localhost:8243/test/1.0, but I want https://x.x.27.197:8243/test/1.0.
When I tested the same URL (https://x.x.27.197:8243/test/1.0) in Postman, it succeeded!
Many thanks.
The attached image describes the problem in more detail.
Did you try changing the gateway environment configurations in the deployment.toml? You can change the gateway URL values by updating the http_endpoint and https_endpoint values in the config.
For example,
[[apim.gateway.environment]]
name = "Production and Sandbox"
type = "hybrid"
display_in_api_console = true
description = "This is a hybrid gateway that handles both production and sandbox token traffic."
show_as_token_endpoint_url = true
service_url = "https://localhost:${mgt.transport.https.port}/services/"
username= "${admin.username}"
password= "${admin.password}"
ws_endpoint = "ws://localhost:9099"
wss_endpoint = "wss://localhost:8099"
http_endpoint = "http://<your_ip>:${http.nio.port}"
https_endpoint = "https://<your_ip>:${https.nio.port}"
For more information, refer to this.

RabbitMQ SSL Configuration: DotNet Client

I am trying to connect to RabbitMQ with the .NET client. I have enabled the peer verification option in the RabbitMQ config file.
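For context, "peer verification" here means settings along these lines in rabbitmq.conf (a sketch assuming the new-style config format; the certificate paths are illustrative):
listeners.ssl.default = 5671
ssl_options.cacertfile = /path/to/ca_certificate.pem
ssl_options.certfile = /path/to/server_certificate.pem
ssl_options.keyfile = /path/to/server_key.pem
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true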
_factory = new ConnectionFactory
{
HostName = Endpoint,
UserName = Username,
Password = Password,
Port = 5671,
VirtualHost = "/",
AutomaticRecoveryEnabled = true
};
sslOption = new SslOption
{
Version = SslProtocols.Tls12,
Enabled = true,
AcceptablePolicyErrors = System.Net.Security.SslPolicyErrors.RemoteCertificateChainErrors
| System.Net.Security.SslPolicyErrors.RemoteCertificateNameMismatch,
ServerName = "", // ?
Certs = X509CertCollection
};
Below are the details of the client certificate that I am passing in via "X509CertCollection".
CertSubject: CN=myhostname, O=MyOrganizationName, C=US // myhostname is the name of my client host.
So, if I pass "myhostname" value into sslOption.ServerName, it works. If I pass some garbage value, it still works.
As per documentation of RabbitMQ, these two value should be match i.e. certCN value and serverName. What will be the value of sslOption.ServerName here and why?
My bad, I found the reason. Posting it as it might help someone.
Reason: I had included "System.Net.Security.SslPolicyErrors.RemoteCertificateNameMismatch" in AcceptablePolicyErrors, which tells the client to ignore any mismatch between ServerName and the certificate's CN, so any ServerName value passes validation.
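In other words, with that flag set the name check is skipped entirely. A sketch of the stricter setup, reusing the Endpoint and X509CertCollection variables from the question (illustrative only, not a verified fix):
// Drop RemoteCertificateNameMismatch so the handshake fails when ServerName
// does not match the broker certificate's CN/SAN.
var sslOption = new SslOption
{
    Enabled = true,
    Version = SslProtocols.Tls12,
    ServerName = Endpoint, // the broker hostname as it appears in the broker's certificate
    Certs = X509CertCollection, // client certificate(s) used for peer verification
    AcceptablePolicyErrors = System.Net.Security.SslPolicyErrors.RemoteCertificateChainErrors
};
_factory.Ssl = sslOption; // attach to the ConnectionFactory shown in the question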

WSO2 apim.gateway.environment

https://github.com/wso2/k8s-wso2am-operator
I followed the above in setting up the server. The instance is up, but routing to the API gateway for the sandbox/production environment shows an error: page not found. Below is the configuration I have done in Kubernetes; I applied an ingress controller to route the domain traffic with SSL, but the gateway page still shows "page not found".
[[apim.gateway.environment]]
name = "Production and Sandbox",
type = "hybrid"
display_in_api_console = true
description = "This is a hybrid gateway that handles both production and sandbox token traffic."
show_as_token_endpoint_url = true
service_url = "https://domainname:443/services/
ws_endpoint = "ws://wso2apim:9099"
wss_endpoint = "wss://wso2apim:8099"
http_endpoint = "http://wso2apim:32004"
https_endpoint = "https://wso2apim:32003"

scollector - tagging metrics from vsphere

Just a question about scollector tagging. I have a config file that looks like this:
Host = "bosun01:80"
BatchSize = 5000
[Tags]
customer = "Admin"
environment = "bosun"
datacenter = "SITE1"
[[Vsphere]]
Host = "CUST2SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST3SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST4SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST4SITE2VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
CollectorExpr = "Vsphere"
[TagOverride.MatchedTags]
Host = '^(?P<customer>.{5})(?P<datacenter>.{5})(?P<environment>)\.[.]+'
with the idea being that we can retrieve and tag data from different vSphere servers.
My understanding of the docs is that this will give us a number of different tag values based on what is regexed out of the vSphere hostname. The initial tags are for the local host, and then we use overrides for the data coming from vSphere.
However, when I implement this, I notice that these metrics are coming in with the original environment tag of "bosun" rather than the override being applied.
I have tried an alternate config:
Host = "bosun01:80"
BatchSize = 5000
[Tags]
customer = "Admin"
environment = "bosun"
datacenter = "SITE1"
[[Vsphere]]
Host = "CUST2SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env01"
[[Vsphere]]
Host = "CUST3SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env02"
[[Vsphere]]
Host = "CUST4SITE1VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env03"
[[Vsphere]]
Host = "CUST4SITE2VC01.F.Q.D.N"
User = "user"
Password = "pass"
[[TagOverride]]
[TagOverride.Tags]
environment = "Env04"
But I am seeing similar behavior (the last environment tag is applied to all vSphere data), so I'm not quite sure where I am going wrong.
Can someone help me understand where I am going wrong here?
Update
As per Greg's answer below, my problem was that I didn't have the CollectorExpr quite right.
Using scollector -l, I was able to come up with the correct CollectorExpr.
# ./scollector-linux-amd64 -l | grep vsphere
vsphere-CUST1-SITE1-MGMTVC01
vsphere-CUST1-SITE2-MGMTVC01
vsphere-CUST1-SITE1-CLIVC01
vsphere-CUST1-SITE2-CLIVC01
#
Our config (for those looking for examples) ended up something like this:
Host = "hwbosun01:80"
BatchSize = 5000
[Tags]
customer = "Customer1"
environment = "bosun"
datacenter = "eq"
[[Vsphere]]
Host = "CUST1-SITE1-MGMTVC01"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST1-SITE2-MGMTVC01"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST1-SITE1-CLIVVC01"
User = "user"
Password = "pass"
[[Vsphere]]
Host = "CUST-SITE1-CLIVVC01"
User = "user"
Password = "pass"
[[TagOverride]]
CollectorExpr = "CUST-SITE1-MGMTVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site1'
[[TagOverride]]
CollectorExpr = "CUST-SITE1-MGMTVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site2'
[[TagOverride]]
CollectorExpr = "CUST-SITE1-CLIVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site1'
[[TagOverride]]
CollectorExpr = "CUST-SITE1-CLIVC01"
[TagOverride.Tags]
environment = "vsphere.mgmt"
datacenter = 'site2'
I believe CollectorExpr is a regular expression that must match against the output of scollector -l or the collector tag values used in the scollector.collector.duration metric. Our vsphere instances get the tag values of vsphere-ny-vsphere02 for ny-vsphere02 and vsphere-nyhq-vsphere01 for nyhq-vsphere01. The following settings should match against those collector names:
[[TagOverride]]
CollectorExpr = "vsphere-ny-"
[TagOverride.Tags]
datacenter = 'ny'
[[TagOverride]]
CollectorExpr = "vsphere-nyhq-"
[TagOverride.Tags]
datacenter = 'nyhq'
Using [TagOverride.MatchedTags] instead of [TagOverride.Tags] should work to extract the value out of the hostname, but keep in mind that all the hostnames are truncated to their shortname (no FQDN) unless you set FullHost = true in the scollector.toml file. My guess is your settings are failing because the CollectorExpr is incorrect. Try something like:
[[TagOverride]]
CollectorExpr = "vsphere-"
[TagOverride.MatchedTags]
Host = '^(?P<customer>.{5})(?P<datacenter>.{5})(?P<environment>[^.]+)'
If that doesn't work, try using [TagOverride.Tags] in a dev environment to see if you can add test tags/values to those metrics.
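For completeness, a minimal sketch of the FullHost option mentioned above (assuming it sits at the top level of scollector.toml, alongside the Host and BatchSize values from the question's config):
FullHost = true
Host = "bosun01:80"
BatchSize = 5000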
