I installed Kong (the Kong proxy + the Kong ingress controller) on a Kubernetes/KubeSphere cluster with an Istio mesh inside, and I added the annotations and Ingress types needed. I am able to access the Kong proxy at the node's exposed IP and port, but I can neither add rules nor access the Admin GUI, nor do any other kind of configuration. Every request I make to my Kong endpoint, like
curl -i -X GET http://10.233.124.79:8000/rules
or any other kind of request to the proxy, gets the same response:
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 0
Server: kong/2.2.0
{"message":"no Route matched with those values"}
I am not able to invoke the Admin API; its container is only listening on 127.0.0.1. My environment variables for the kong-proxy pod:
KONG_PROXY_LISTEN=0.0.0.0:8000, 0.0.0.0:8443 ssl http2
KONG_PORT_MAPS=80:8000, 443:8443
KONG_ADMIN_LISTEN=127.0.0.1:8444 ssl
KONG_STATUS_LISTEN=0.0.0.0:8100
KONG_DATABASE=off
KONG_NGINX_WORKER_PROCESSES=2
KONG_ADMIN_ACCESS_LOG=/dev/stdout
KONG_ADMIN_ERROR_LOG=/dev/stderr
KONG_PROXY_ERROR_LOG=/dev/stderr
And the environment variables for the ingress controller:
CONTROLLER_KONG_ADMIN_URL=https://127.0.0.1:8444
CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY=true
CONTROLLER_PUBLISH_SERVICE=kong/kong-proxy
So how can I expose the Admin GUI over the mesh via the NodePort, and how can I invoke the Admin API to add rules, etc.?
Yes, first you should add rules.
You can directly add Routes (Ingresses) in KubeSphere; see the documentation for more info.
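Note that with KONG_DATABASE=off, Kong runs DB-less, and in that mode the Admin API is read-only; configuration is meant to flow through the ingress controller rather than through Admin API calls. So even if you expose the admin port (by changing KONG_ADMIN_LISTEN to 0.0.0.0:8444 and adding a Service for it), you won't be able to POST rules to it. A minimal sketch of a route rule expressed as a plain Ingress instead (the name and backend service are illustrative; the kubernetes.io/ingress.class: kong annotation makes the Kong controller pick it up):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rules-route                      # illustrative name
  annotations:
    kubernetes.io/ingress.class: kong    # handled by the Kong ingress controller
spec:
  rules:
  - http:
      paths:
      - path: /rules
        pathType: Prefix
        backend:
          service:
            name: rules-service          # illustrative backend service
            port:
              number: 80

After applying something like this, curl http://10.233.124.79:8000/rules should be routed to the backend service instead of returning "no Route matched with those values".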
I want to test a proxy server. In order to make an HTTPS request, the browser sends a CONNECT method beforehand (e.g. like Firefox does when a proxy is specified).
I cannot achieve/send the same request with curl.
The following has a root slash, /www.example.com:443:
curl -X CONNECT http://proxy_host:proxy_port/www.example.com:443
The following will not work (without the slash):
curl -X CONNECT http://proxy_host:proxy_portwww.example.com:443
The following is not what I want:
curl -X CONNECT http://proxy_host:proxy_port/some_path
So the first line of the HTTP data should be CONNECT www.example.com:443 HTTP/1.1, not CONNECT /www.example.com:443 HTTP/1.1 like curl sends in this case.
Maybe this question is also somehow related, if I knew how not to send a path.
NOTE! I do not want to use curl -x http://proxy_host:proxy_port https://www.example.com, because this option/flag -x does not work with custom SSL certificates (--cacert ..., --key ..., --cert ...).
Any ideas how to send plain header data, or not specify a path, or specify the host and port as the path?
(-X simply replaces the method string in the request, so of course setting it to CONNECT will not issue a proper CONNECT request and will certainly not make curl handle it correctly.)
curl will do a CONNECT by itself when connecting to a TLS server through an HTTP proxy, and even though you claim -x breaks the certificate options, that is an incorrect statement. The --cacert and other options work the same even when the connection is done through an HTTP proxy.
You can also make curl do a CONNECT through an HTTP(S) proxy for other protocols by using -p, --proxytunnel, also in combination with -x.
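For example, a tunnelled request through the proxy with custom certificate files (proxy_host, proxy_port and the .pem file names are placeholders):

curl --proxytunnel -x http://proxy_host:proxy_port \
     --cacert ca.pem --cert client.pem --key client.key \
     https://www.example.com/

For an https:// URL, -x alone already makes curl issue the CONNECT; --proxytunnel only matters when forcing a tunnel for other protocols.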
I am trying to use SQLMap over HTTPS, but when I try
"C:\Python27\sqlmap>sqlmap.py -u https://localhost:8774/App/console/index.jsp --force-ssl"
it returns
"Can't establish SSL connection".
So is there any way that I can pass an SSL certificate to SQLMap?
Environment Details:
OS: Windows 10
Python: 2.7
SQLMap: 1.4.2.42
Remove https:// from the -u parameter; just put:
-u localhost:8774/App/console/index.jsp
A simple solution for that is to set up a proxy listener like Burp Suite, browse to the site with the bad SSL certificate, and trust it.
After that, you can include the following option in your SQLMap command:
--proxy="http://PROXY-IP:PROXY-PORT"
where the proxy IP is generally 127.0.0.1 and the proxy port 8080.
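Putting it together with the original command (assuming Burp's default listener on 127.0.0.1:8080):

sqlmap.py -u "https://localhost:8774/App/console/index.jsp" --force-ssl --proxy="http://127.0.0.1:8080"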
Using a rewrite in HAProxy 1.8, I need to redirect a URI to another domain (host) but keep the original Host header in the request.
Example:
www.mysite.com/api -> 104.4.4.4/api (rewrite) -> result www.mysite.com/api (response)
I ran a lot of tests with various HAProxy parameters and managed to obtain some success, but with one problem.
This is my current scenario:
backend site1
    acl path_to_rw url_beg /api
    acl mysite hdr(host) -i www.mymainsite.com
    http-request set-header Host www.mymainsite.com if mysite path_to_rw
    reqirep ^Host Host:\ host_to_forward/api if mysite path_to_rw
    cookie SERVERID insert indirect nocache maxlife 1h
    server site1 myhost:80 check cookie site1
My backend is an IIS server, and my rewrite works. But I get the error below:
"HTTP Error 400. The request hostname is invalid"
It seems that my backend does not accept the Host header that I send. Has anybody had this problem before?
I managed to fix this problem with a simple combination of ACLs and the use_backend directive.
e.g.:
Host header: www.mysite.com
Path to the application in the other origin: /api
acl myhost hdr(host) -i www.myhost.com
acl path_api url_reg -i /API(.*)
use_backend be_origin_servers if myhost path_api

backend be_origin_servers
    server myserver1 10.10.10.10 check cookie myserver1
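A minimal sketch of how those pieces fit together in a full config (the frontend section, its bind port and the server port are assumptions, not part of the original answer):

frontend fe_main
    bind *:80
    acl myhost hdr(host) -i www.myhost.com
    acl path_api url_reg -i /API(.*)
    use_backend be_origin_servers if myhost path_api
    default_backend site1

backend be_origin_servers
    # no Host rewrite here: the client's original Host header passes through
    cookie SERVERID insert indirect nocache maxlife 1h
    server myserver1 10.10.10.10:80 check cookie myserver1

Since nothing rewrites the Host header anymore, IIS receives the hostname the client actually sent, which avoids the "request hostname is invalid" 400 error.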
I use nginx to proxy and hold persistent connections to far away servers for me.
I have configured about 15 blocks similar to this example:
upstream rinu-test {
    server test.rinu.test:443;
    keepalive 20;
}

server {
    listen 80;
    server_name test.rinu.test;
    location / {
        proxy_pass https://rinu-test;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
    }
}
The problem is that if the hostname cannot be resolved in one or more of the upstream blocks, nginx will not (re)start. I can't use static IPs either; some of these hosts explicitly said not to do that, because the IPs will change. Every other solution I've seen for this error message says to get rid of upstream and do everything in the location block. That is not possible here, because keepalive is only available under upstream.
I can temporarily afford to lose one server but not all 15.
Edit:
Turns out nginx is not suitable for this use case. An alternative backend (upstream) keepalive proxy should be used. A custom Node.js alternative is in my answer. So far I haven't found any other alternatives that actually work.
Earlier versions of nginx (before 1.1.4), which already powered a huge number of the most visited websites worldwide (and some still do even nowadays, if the server headers are to be believed), didn't even support keepalive on the upstream side, because there is very little benefit in doing so in the datacentre setting, unless you have non-trivial latency between your various hosts; see https://serverfault.com/a/883019/110020 for some explanation.
Basically, unless you know you specifically need keepalive between your upstream and front-end, chances are it's only making your architecture less resilient and worse off.
(Note that your current solution is also wrong, because a change in the IP address will likewise go undetected: you're doing hostname resolution at config reload only, so even if nginx does start, it'll basically stop working once the IP addresses of the upstream servers change.)
Potential solutions, pick one:
The best solution would seem to be to just get rid of upstream keepalive, as it is likely unnecessary in a datacentre environment, and use variables with proxy_pass for up-to-date DNS resolution for each request (nginx is still smart enough to cache such resolutions).
Another option would be to get a paid version of nginx through a commercial subscription, which has a resolve parameter for the server directive within the upstream context; see the sketch after this list.
Finally, another thing to try might be to use a set variable and/or a map to specify the servers within upstream; this is neither confirmed nor denied to have been implemented, i.e., it may or may not work.
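For reference, a sketch of that second option under NGINX Plus (the resolver address is illustrative; note that the resolve parameter also requires a shared-memory zone in the upstream):

resolver 10.0.0.2 valid=30s;

upstream rinu-test {
    zone rinu-test 64k;                  # shared memory zone, required for resolve
    server test.rinu.test:443 resolve;   # re-resolved at run time, honouring the TTL
    keepalive 20;
}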
Your scenario is very similar to using AWS ELBs as upstreams, where it is critical to resolve the proper IP of the defined domain.
The first thing you need to do is ensure that the DNS servers you are using can resolve your domains; then you could create your config like this:
resolver 10.0.0.2 valid=300s;
resolver_timeout 10s;

location /foo {
    set $foo_backend_servers foo_backends.example.com;
    proxy_pass http://$foo_backend_servers;
}

location /bar {
    set $bar_backend_servers bar_backends.example.com;
    proxy_pass http://$bar_backend_servers;
}
Notice the resolver 10.0.0.2; it should be the IP of a DNS server that works and answers your queries. Depending on your setup, this could be a local caching service like Unbound, in which case you would just use resolver 127.0.0.1.
Now, it is very important to use a variable to specify the domain name; from the docs:
When you use a variable to specify the domain name in the proxy_pass directive, NGINX re‑resolves the domain name when its TTL expires.
You could check your resolver by using tools like dig, for example:
$ dig +short stackoverflow.com
If using keepalive in the upstreams is a must, and NGINX Plus is not an option, then you could give the OpenResty balancer a try; you will need to use/implement lua-resty-dns.
One possible solution is to involve a local DNS cache. It can be a local DNS server like BIND or Dnsmasq (with some crafty configuration; note that nginx can also use a specified DNS server in place of the system default), or it can be a matter of maintaining the cache in the hosts file.
It seems that using the hosts file with some scripting is a quite straightforward approach. The hosts file should be split into static and dynamic parts (i.e. cat hosts.static hosts.dynamic > hosts), and the dynamic part should be generated (and updated) automatically by a script.
Perhaps it makes sense to check the hostnames for changed IPs from time to time, then update the hosts file and reload the nginx configuration on changes. If some hostname cannot be resolved, the old IP or some default IP (like 127.0.1.9) should be used.
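A minimal sketch of such an update script (assuming the upstream hostnames are listed one per line in /etc/nginx/upstream-hosts.txt; all paths and the fallback IP are illustrative):

#!/bin/sh
# Re-resolve each upstream hostname into hosts.dynamic, falling back
# to a default IP when resolution fails.
FALLBACK_IP=127.0.1.9
: > /etc/hosts.dynamic.new
while read -r host; do
    ip=$(getent hosts "$host" | awk '{print $1; exit}')
    echo "${ip:-$FALLBACK_IP} $host" >> /etc/hosts.dynamic.new
done < /etc/nginx/upstream-hosts.txt

# Rebuild /etc/hosts and reload nginx only when something actually changed.
if ! cmp -s /etc/hosts.dynamic.new /etc/hosts.dynamic; then
    mv /etc/hosts.dynamic.new /etc/hosts.dynamic
    cat /etc/hosts.static /etc/hosts.dynamic > /etc/hosts
    nginx -s reload
fi

Run it periodically from cron, every minute or so.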
If you don't need the hostnames in the nginx config file (i.e., IPs are enough), the upstream section with IPs (resolved hostnames) can be generated by a script and included into the nginx config, and there is no need to touch the hosts file in that case.
I put the resolve parameter on the server directive, and you need to set the nginx resolver in nginx.conf, as below:
/etc/nginx/nginx.conf:
http {
    resolver 192.168.0.2 ipv6=off valid=40s;   # the IP of the DNS server
}
Site.conf:
upstream rinu-test {
    server test.rinu.test:443 resolve;
    keepalive 20;
}
My problem was container-related. I'm using Docker Compose to create the nginx container plus the app container. When setting network_mode: host in the app container's config in docker-compose.yml, nginx was unable to find the upstream app container. Removing this fixed the problem.
We can resolve it temporarily:
cd /etc
sudo vim resolv.conf
Press i to insert, add the line
nameserver 8.8.8.8
and save with :wq.
Then run sudo nginx -t and restart nginx; it will work for the moment.
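Equivalently, without opening an editor (8.8.8.8 is Google's public DNS; the restart command assumes systemd):

echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
sudo nginx -t && sudo systemctl restart nginx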
An alternative is to write a new service that does only what I want. The following replaces nginx for proxying HTTPS connections, using Node.js:
const http = require('http');
const https = require('https');

const httpsKeepAliveAgent = new https.Agent({ keepAlive: true });

http.createServer(onRequest).listen(3000);

function onRequest(client_req, client_res) {
    // Forward the incoming request to the target host over HTTPS,
    // reusing upstream connections via the keep-alive agent.
    client_req.pipe(
        https.request({
            host: client_req.headers.host,
            port: 443,
            path: client_req.url,
            method: client_req.method,
            headers: client_req.headers,
            agent: httpsKeepAliveAgent
        }, (res) => {
            // Relay the upstream status, headers and body back to the client.
            client_res.writeHead(res.statusCode, res.headers);
            res.pipe(client_res);
        }).on('error', (e) => {
            client_res.end();
        })
    );
}
Example usage:
curl http://localhost:3000/request_uri -H "Host: test.rinu.test"
which is equivalent to:
curl https://test.rinu.test/request_uri