Why, after adding proxyPort in WSO2 API Manager 3.2.0, can I no longer access https://api.am.wso2.com:443/publisher directly? - wso2-api-manager

I want to use an Nginx reverse proxy as a load balancer, but after adding proxyPort to the WSO2 API Manager 3.2.0 deployment.toml:
[transport.https.properties]
proxyPort = 443
I can no longer access https://api.am.wso2.com:443/publisher directly.
I have also set hostname = "api.am.wso2.com".
Could you please guide me?

Once the proxyPort is enabled, you need at least an Nginx instance running with the relevant configuration in order to access the API Manager. You can find the default single-node Nginx configuration here.
This is therefore expected behavior. As an alternative, you can disable the proxy port configuration, configure only the hostname in the TOML, and then try accessing the portals.
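A minimal sketch of what that Nginx front-end could look like, assuming the hostname api.am.wso2.com from the question, the API Manager's default back-end port 9443, and placeholder certificate paths:

```
server {
    listen 443 ssl;
    server_name api.am.wso2.com;

    # Placeholder certificate paths
    ssl_certificate     /etc/nginx/ssl/apim.crt;
    ssl_certificate_key /etc/nginx/ssl/apim.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # Forward to the API-M node listening on its default HTTPS port
        proxy_pass https://localhost:9443/;
        proxy_redirect https://localhost:9443/ https://api.am.wso2.com/;
    }
}
```

With proxyPort = 443 set, the portals expect to be reached through a front-end on port 443 like this one, not on the Carbon port directly.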

Related

How to configure the load balancer/reverse proxy settings in the WSO2 API Manager

I am trying to configure an Active-Active deployment of WSO2 API-M.
I have three nodes: two active nodes running the WSO2 API-M all-in-one instance, and another one running Nginx.
The deployment.toml configuration follows the instructions at: https://apim.docs.wso2.com/en/latest/install-and-setup/setup/setting-up-proxy-server-and-the-load-balancer/configuring-the-proxy-server-and-the-load-balancer/
However, requests fail.
How do I configure Nginx and the product so that the product works behind a reverse proxy?

Is it possible to define multiple IP addresses as callbacks in the WSO2 API Manager 3.2.0 carbon config?

Is it possible to define multiple IP addresses as callbacks in the WSO2 API Manager 3.2.0 carbon config?
I use a WAF as the load balancer, and I want to set three node IP addresses as callback URLs in the carbon config.
You can define it as follows:
regexp=(https://myapp.com/callback|https://testapp:8000/callback)
Please refer - https://is.docs.wso2.com/en/latest/learn/configuring-oauth2-openid-connect-single-sign-on/
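The same alternation pattern extends to the three-node case from the question; the IP addresses and port below are placeholders, not values from the original post:

```
regexp=(https://10.0.0.1:9443/callback|https://10.0.0.2:9443/callback|https://10.0.0.3:9443/callback)
```

Each pipe-separated alternative is matched as a complete callback URL, so every node's address must be listed in full.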

When I run my daemon, the service is an http-proxy instead of http

I am currently running a service with systemctl, and it is detected as an http-proxy, not plain HTTP. Is this something that Google does? My daemon uses port 8080, and I can't connect to it via HTTP; nmap reports the service type as http-proxy (I see this with the command nmap -sV -sC -p 8080 35.208.25.61 -vvvv -Pn). Instead, I want the daemon I'm running (wings.service) to use HTTP, so it can use that type of connection to connect to my panel.
The panel is part of a piece of software, called Pterodactyl, that ships together with the daemon. I have tried everything I could find, and I think the problem I am describing here is what is breaking my panel. I might just have to move to a different service to host my Discord bots.
Let me know if there's anything I can do to fix this.
As far as I understand, you are unable to access the panel via its web URL.
The Pterodactyl web server can be installed using NGINX or Apache, and, per the Pterodactyl web server installation guide, both listen on port 80 by default, so you must allow HTTP traffic on port 80 to your Compute Engine VM instance.
The default firewall rules on GCP do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that allows them, following these steps:
1. Go to the VM instances page.
2. Click the name of the desired instance.
3. Click the Edit button at the top of the page.
4. Scroll down to the Firewalls section.
5. Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
6. Click Save.
Note: The Pterodactyl panel and daemon installations differ per operating system. If, after checking the VPC firewall rules in the VM settings and the status of the web server on the instance (NGINX or Apache), you still cannot access your panel, please provide a step-by-step list of all commands you ran to complete the installation, including the OS version you used.
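For reference, checking Allow HTTP / Allow HTTPS in the console is equivalent to creating firewall rules with the gcloud CLI; the rule names, network, and tags below are the common defaults, not values from the original question:

```
# Allow inbound HTTP/HTTPS on the default network to instances
# carrying the http-server / https-server network tags
gcloud compute firewall-rules create default-allow-http \
    --network=default --allow=tcp:80 --target-tags=http-server
gcloud compute firewall-rules create default-allow-https \
    --network=default --allow=tcp:443 --target-tags=https-server
```

The instance must carry the matching network tags for these rules to apply to it.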

Redirect http://domain/artifactory to http://localhost:8081/artifactory

We recently moved our web server from one machine to another. The server runs an Artifactory 2.6.1 repository, which is accessible on port 8081. I would like to redirect requests made to http://domain/artifactory to http://localhost:8081/artifactory. I tried to achieve this by creating a reverse proxy with apache2, but failed. If you could point me in the right direction, that would be appreciated.
Did you try following https://www.jfrog.com/confluence/display/RTF/Apache+HTTP+Server ?
You'll have to configure the AJP connector in your Tomcat's server.xml and add a VirtualHost with mod_proxy_ajp to your Apache configuration.
EDIT:
Since you're using Jetty instead of Tomcat, Jetty recommends using an HTTP proxy instead of AJP.
Follow this to configure Jetty and Apache: http://wiki.eclipse.org/Jetty/Howto/Configure_mod_proxy
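For the HTTP-proxy route, a minimal Apache VirtualHost along these lines (with mod_proxy and mod_proxy_http enabled; the server name is a placeholder) forwards /artifactory to the Jetty instance on port 8081:

```
<VirtualHost *:80>
    ServerName domain.example.com

    # Pass the original Host header through to Artifactory
    ProxyPreserveHost On

    # Forward /artifactory requests to the local Jetty instance
    ProxyPass        /artifactory http://localhost:8081/artifactory
    ProxyPassReverse /artifactory http://localhost:8081/artifactory
</VirtualHost>
```

ProxyPassReverse rewrites Location headers in redirects coming back from Jetty, so clients are not bounced to localhost:8081.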

docker registry on localhost with nginx proxy_pass

I'm trying to set up a private docker registry to upload my stuff, but I'm stuck. The docker-registry instance is running on port 5000, and I've set up nginx in front of it with a proxy_pass directive to forward requests on port 80 back to localhost:5000.
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I change localhost to my server's IP address in the nginx configuration file, I can push fine. Why would my local docker push command complain about localhost when localhost is what nginx is proxying to?
The server is on EC2, if that helps.
I'm not sure of the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect the dataflows for Docker. The Docker registry is actually split into two parts, the index and the registry. The client contacts the index to handle metadata, and is then forwarded on to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered down index server. As a consequence, you might want to figure out what registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set up the registry_endpoints config setting in order to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override the X-Docker-Endpoints header set by the registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
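Those two directives belong inside the nginx server or location block that fronts the registry; a minimal sketch, assuming the registry listens on localhost:5000 as in the question (the server_name is a placeholder):

```
server {
    listen 80;
    server_name registry.example.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;

        # Replace the endpoint the registry advertises so the Docker
        # client keeps talking to this proxy instead of localhost:5000
        proxy_hide_header X-Docker-Endpoints;
        add_header X-Docker-Endpoints $http_host;
    }
}
```

proxy_hide_header suppresses the header the registry emits, and add_header re-adds it with the hostname the client originally used.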
I think the problem you are facing is that the docker-registry advertises so-called endpoints through an X-Docker-Endpoints header early in the dialog between itself and the Docker client, and the Docker client then uses those endpoints for subsequent requests.
In your setup, the Docker client first talks to Nginx on the (public) port 80, then switches to the advertised endpoints, which are probably localhost:5000 (that is, the client's own local machine).
You should check whether the Docker registry you run has an option to advertise endpoints as your remote host, even though it listens on localhost:5000.
