Simultaneous endpoint load balancing and failover configuration in WSO2 API Manager

I have 3 endpoints in my WSO2 APIM Publisher. I want a load-balance configuration between 1 and 2, and a failover configuration for 1 and 3 (1 is the main endpoint). I am able to add either failover or load balance in the endpoint configuration, but not both at the same time. I just wanted to know if this is possible to achieve in WSO2 APIM 3.2.0.
If I add the failover configuration, the load balance configuration gets reset.
If I add the load balance configuration, the failover configuration gets reset.

When you define the load-balanced endpoints you can enable failover and specify endpoints as failover endpoints, so the failover endpoint is only triggered when the primary endpoint fails.
Update: It seems this is not available in version 3.2.0. One workaround would be to enable the failover configs manually. When you create an endpoint, the relevant configurations are added under wso2am-3.2.0/repository/deployment/server/synapse-configs/default/endpoints
You may be able to add the failover configs directly to this file, because the underlying runtime does support failover. You can refer to [1] for more info.
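As a rough illustration of what such a hand-edited endpoint could look like (a sketch only; the endpoint name and URLs are placeholders, and the exact element nesting should be verified against the load-balance group documentation in [1]):
<endpoint name="SampleLBWithFailover" xmlns="http://ws.apache.org/ns/synapse">
  <loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
    <!-- load is balanced between endpoint 1 and endpoint 2 -->
    <endpoint>
      <failover>
        <!-- endpoint 1 is the main endpoint; endpoint 3 is used only when it fails -->
        <endpoint>
          <address uri="http://endpoint1.example.com/service"/>
        </endpoint>
        <endpoint>
          <address uri="http://endpoint3.example.com/service"/>
        </endpoint>
      </failover>
    </endpoint>
    <endpoint>
      <address uri="http://endpoint2.example.com/service"/>
    </endpoint>
  </loadbalance>
</endpoint>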
The downside is that if you republish the API through the Publisher, the endpoint configurations will be reset. You can get around this by changing the default templates located at wso2am-3.2.0/repository/resources/api_templates. You can refer to [2] for more details on this.
[1] - https://docs.wso2.com/display/EI660/Load-balance+Group+
[2] - https://apim.docs.wso2.com/en/3.2.0/develop/extending-api-manager/extending-gateway/customizing-api-template-for-gateway/#engaging-a-custom-handler-based-on-api-properties

Related

Euca 5.0 Enable SSL with Combined CLC and Cluster Controller?

I have completed an automated ansible install and have most of the wrinkles worked out.
All of my services except the Nodes are running on a single box over non-secure HTTP. Although I specified 443 in my inventory, I see now that this does not imply an HTTPS configuration, so I have non-secure API endpoints listening on 443.
Is there any way around the requirement of operating the CLC and Cluster Controller on different hardware, as described in the SSL how-to: https://docs.eucalyptus.cloud/eucalyptus/5/admin_guide/managing_system/bps/configuring_ssl/
I've read that how-to and can only guess that installing certs on the CLC messes up the Cluster Controller keys, but I don't fully grasp it. Am I wasting my time trying to find a workaround, or can I keep these services on the same box and still achieve SSL?
When you deploy Eucalyptus using the Ansible playbook, a script will be available:
# /usr/local/bin/eucalyptus-cloud-https-import --help
Usage:
eucalyptus-cloud-https-import [--alias ALIAS] [--key FILE] [--certs FILE]
which can be used to import a key and certificate chain from PEM files.
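For example, an invocation might look like the following (the alias and PEM file paths are placeholders for your own key and certificate chain):
# /usr/local/bin/eucalyptus-cloud-https-import --alias cloud-https --key /root/cloud.example.com.key --certs /root/cloud.example.com-chain.pem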
Alternatively you can follow the manual steps from the documentation that you referenced.
It is fine to use HTTPS with all components on a single host; the documentation is out of date.
Eucalyptus will detect if an HTTP(S) connection is using TLS (SSL) and use the configured certificate when appropriate.
It is recommended to use the Ansible playbook's certbot / Let's Encrypt integration for the HTTPS certificate when possible.
When manually provisioning certificates, wildcards can be used (*.DOMAIN, *.s3.DOMAIN) so that all services and S3 buckets are covered. If a wildcard certificate is not possible, the certificate should include the service endpoint names (autoscaling, bootstrap, cloudformation, ec2, elasticloadbalancing, iam, monitoring, properties, route53, s3, sqs, sts, swf).

How to add a server dynamically to an HAProxy backend?

I am using HAProxy version 1.6.6 for load balancing RabbitMQ servers, and it works fine, but I want to add servers to the HAProxy backend dynamically on Ubuntu using a script. Can anyone please tell me how I can do this?
HAProxy OSS v1.8 does not include add/remove server commands in the Runtime API, but you can achieve similar functionality by using the ready/disabled state commands.
Add the server slots to haproxy.cfg in a disabled state; the line below defines 100 servers (websrv1 ... websrv100) that all start out disabled:
server-template websrv 1-100 192.168.122.1:8080 check disabled
Enable a server (similar to adding one):
set server be_template/websrv1 state ready
Disable a server (similar to removing one):
set server be_template/websrv1 state maint
The address and port can be changed through the Runtime API as usual:
set server be_template/websrv1 addr 192.168.50.112 port 8000
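These Runtime API commands are sent over the stats socket, so the socket has to be enabled at admin level in haproxy.cfg. A minimal sketch, assuming socat is installed and using a placeholder socket path:
# in the global section of haproxy.cfg, enable an admin-level stats socket
stats socket /var/run/haproxy.sock mode 600 level admin
# then, from your script, pipe runtime commands into the socket
echo "set server be_template/websrv1 state ready" | socat stdio /var/run/haproxy.sock
echo "set server be_template/websrv1 addr 192.168.50.112 port 8000" | socat stdio /var/run/haproxy.sock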
Reference(s):
https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/
https://www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/
As far as I know, the HAProxy API (stats socket) does not support dynamically adding/removing servers in a backend.
One solution is to use Consul; the cost is that the service has to be reloaded after each change.
https://www.hashicorp.com/blog/haproxy-with-consul.html
I don't think this kind of feature exists in HAProxy open source.
If you use their ALOHA Load Balancer, there is an API for these actions:
https://www.haproxy.com/resources/documentation/

Is it possible to set up multiple SSL certificates on one Jelastic app?

I want to ask whether it is possible to have multiple SSL certificates on one IP in Jelastic with the Nginx Load Balancer.
The use case is a proxy server that will receive requests from multiple custom domains.
For example:
example-proxy.com points to a public IP address assigned to a Jelastic Jetty application.
Custom domains then point to the Jetty application:
custom-domain-example.com has a www CNAME pointing to example-proxy.com, etc.
custom-domain-example-N.org has a www CNAME pointing to example-proxy.com, etc.
Is it possible to have this kind of configuration with Jelastic?
Is this possible to do using the existing Jelastic API? Right now what I see in the API docs is BindSSL, but it seems it can only bind one certificate; is this correct?
Yes, it's possible, but you need to configure it manually (directly in the nginx configs, as sketched below) instead of using the Jelastic dashboard/API SSL feature.
The other point to remember is that because there is one IP per container, multiple SSL certificates can only be served via SNI. That may have implications for you depending on which browsers your users use; in most cases it's fine now (old mobile OSes and Windows XP are the primary exceptions).
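A minimal sketch of that manual nginx configuration, assuming the certificate/key files have already been uploaded to the load balancer node (server names, file paths, and the upstream name are placeholders; add one server block per custom domain and nginx will pick the certificate via SNI):
# e.g. /etc/nginx/conf.d/ssl.conf (sketch)
server {
    listen 443 ssl;
    server_name custom-domain-example.com www.custom-domain-example.com;
    ssl_certificate     /var/lib/nginx/ssl/custom-domain-example.com.chain.pem;
    ssl_certificate_key /var/lib/nginx/ssl/custom-domain-example.com.key;
    location / {
        proxy_pass http://default_upstream;   # reuse the upstream already defined in the Jelastic-generated config
    }
}
server {
    listen 443 ssl;
    server_name custom-domain-example-N.org www.custom-domain-example-N.org;
    ssl_certificate     /var/lib/nginx/ssl/custom-domain-example-N.org.chain.pem;
    ssl_certificate_key /var/lib/nginx/ssl/custom-domain-example-N.org.key;
    location / {
        proxy_pass http://default_upstream;
    }
}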
The BindSSL API method allows you to automatically configure one SSL certificate on the externally facing node of your environment (Nginx Load Balancer in your case). If you attempt to BindSSL multiple times you just replace the existing certificate (not add multiple certificates).
Basically this functionality was built before SNI was widely supported, so it was assumed to be one SSL cert per environment. You can read more about SNI to make an informed decision about whether it will suit your needs here: http://blog.layershift.com/sni-ssl-production-ready/
An alternative for your needs would be to purchase a multi-domain SSL certificate (SAN cert). This lets you contain multiple hostnames within 1 certificate. Since you mentioned that you're our customer, you can contact our SSL team for details/pricing for this option.
If you still want to use multiple SSL certs + serve them via SNI, you will probably need to use the Read and Write API methods to save the SSL certificate parts and config. file(s) on your Nginx node.
Don't forget to restart the nginx service (you can use RestartNodeById for that) after any config. changes.
EDIT: Since you mentioned that your end users will have control over this process, you will probably prefer to use reload instead of restart (see http://nginx.org/en/docs/beginners_guide.html#control).
You can invoke that via the Jelastic API using ExecCmdById, with commandList=[{"command": "sudo service nginx reload"}]
But take care if you're allowing end users to upload their own certificates via your application; you need to ensure that what they upload really is a certificate and nothing malicious.

WSO2 ESB 4.8.1 Clustering

Is it possible to configure one ESB node in a dual role as both worker and manager?
I'm using WSO2 ESB 4.8.1 and nginx as the load balancer.
This is pretty easy. Here is what you have to do.
Forget about nginx for the moment and set up the ESB cluster, say a cluster with one manager and one worker. You should be able to get it done by following the instructions here. Instead of the WSO2 ELB mentioned in the doc, you are going to use nginx. Instead of the ELB, you can set the management and worker nodes as the well-known members; i.e. in both nodes, you set both nodes as the well-known members.
Once you have the cluster working, you should be able to send requests to an artifact deployed on both nodes separately. The difference between the manager node and the worker node is that the manager node is the only one that commits to the SVN repo. So, when you deploy new artifacts, you should deploy them through the manager node.
Now you have to configure two sites in nginx, as sketched below. Let's assume you decided to use esbmgt.mydomain.com for the management node and esb.mydomain.com for the worker. In esbmgt's upstream, you only mention the manager node, and you route the requests to port 9443 of that node. In esb's upstream, you mention both nodes, and the requests are routed to ports 8280 (HTTP) and 8243 (HTTPS). That's because the ESB serves requests on those ports, while the UI is exposed via 9443 (HTTPS).
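A minimal sketch of those two nginx sites (host names, IPs, and certificate paths are placeholders; only the plain-HTTP worker traffic on 8280 is shown, and 8243 would be handled the same way with SSL):
# management site - only the manager node, routed to the 9443 management port
upstream esbmgt_upstream {
    server 192.168.1.11:9443;
}
server {
    listen 443 ssl;
    server_name esbmgt.mydomain.com;
    ssl_certificate     /etc/nginx/ssl/esbmgt.mydomain.com.pem;   # placeholder certificate paths
    ssl_certificate_key /etc/nginx/ssl/esbmgt.mydomain.com.key;
    location / {
        proxy_pass https://esbmgt_upstream;
    }
}
# worker site - both nodes, service traffic routed to 8280
upstream esb_upstream {
    server 192.168.1.11:8280;
    server 192.168.1.12:8280;
}
server {
    listen 80;
    server_name esb.mydomain.com;
    location / {
        proxy_pass http://esb_upstream;
    }
}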
I hope the above information will help you.

Cannot access WSO2 API Manager Publisher after adding the API Manager feature to ESB

I'm new to WSO2 products and the company I work for asked me to evaluate WSO2 Enterprise Service Bus (ESB). Aside from this, they also wanted to evaluate WSO2 Identity Server (IS) and WSO2 API Manager (APIM). So we created a test system installing ESB as the base product. Most of the references on the web state that you can install other WSO2 products inside an existing one by installing their features, so we decided on that approach, and after a few issues we successfully installed APIM and IS inside a running ESB. However, while accessing the APIM Publisher using the URL https://:9443/publisher we got an error:
HTTP Status 405 - HTTP method GET is not supported by this URL
type Status report
message HTTP method GET is not supported by this URL
description The specified HTTP method is not allowed for the requested resource.
Apache Tomcat/7.0.34
Any idea what happened? We have not seen any errors in the logs. Is it possible to just install WSO2 APIM on a separate instance but assign it a different port so as to avoid conflict with the ESB?
Thanks for the help.
Is it possible if I just install on a separate instance the WSO2 APIM but assigning it to a different port so as to avoid conflict with the ESB?
Port offset allows you to run multiple WSO2 products, multiple instances of a WSO2 product, or multiple WSO2 product clusters on the same server or virtual machine (VM). The port offset defines the number by which all ports defined in the runtime, such as the HTTP/S ports, are offset.
For example, if the default HTTP port is 9763 and the portOffset is 1, the effective HTTP port will be 9764. Therefore, for each additional WSO2 product instance, set the port offset to a unique value (the default is 0) so that they can all run on the same server without any port conflicts.
Port offset can be passed to the server during startup. The following command starts the server with the default port incremented by 3.
./wso2server.sh -DportOffset=3
Alternatively, you can set it in the Ports section of <PRODUCT_HOME>/repository/conf/carbon.xml as follows:
<Offset>3</Offset>
Hi, you don't need to install the API Manager features in the ESB. You can use an API Manager instance instead, which has a lightweight ESB running inside. You can access it from the management console of the API Manager.
