Squid config: different proxy rules per URL - squid

I want to create a proxy server such that when I access certain URLs, the server connects to another server, while other URLs keep the regular proxy service. Does anyone know how to set this up in Squid?

Specify the domains that should be routed to the other server in an ACL.
For example, if you want to connect to the other server for Google requests, define the ACL as:
acl alloweddomains dstdomain google.com google.co.uk
You can also simply list all the domains in a text file and load it into the ACL:
acl alloweddomains dstdomain "<path>/domains.txt"
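For example, domains.txt would contain one domain per line (a leading dot also matches subdomains):
.google.com
.google.co.uk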
Then use the cache_peer option:
cache_peer next_server_ip parent 80 0 no-query originserver name=allowserver
cache_peer_access allowserver allow alloweddomains
cache_peer_access allowserver deny all
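Put together, a minimal squid.conf sketch might look like the following (the peer IP and file path are placeholders; never_direct is added so matched domains cannot bypass the peer):

# route listed domains through the upstream server, everything else as usual
acl alloweddomains dstdomain "/etc/squid/domains.txt"
cache_peer 10.0.0.2 parent 80 0 no-query originserver name=allowserver
cache_peer_access allowserver allow alloweddomains
cache_peer_access allowserver deny all
never_direct allow alloweddomains   # force matched domains through the peer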

With your explanation, I realized that you would need two or more proxies configured on the source client. If that is the case, it is not a valid configuration.
A better solution for your idea is to set a single proxy on the client side and do the routing on the proxy itself (for example, on the Squid proxy server), where you can configure tunneling to the other proxy servers.
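As a sketch of that idea (the peer address and port are placeholders, assuming the other proxy is a plain HTTP proxy and reusing the alloweddomains ACL from the first answer):

# forward traffic for selected domains to a second proxy; the rest goes direct
cache_peer 10.0.0.3 parent 3128 0 no-query name=otherproxy
cache_peer_access otherproxy allow alloweddomains
cache_peer_access otherproxy deny all
never_direct allow alloweddomains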

Related

Reverse proxy with http inbound, https outbound, and parent proxy

I have an application that needs to use a proxy (call it proxy1) to access some https endpoints outside of its network. The application doesn't support proxy settings, so I'd like to provide it a reverse proxy url, and I would prefer not to provide tls certs for proxy1, so I would use http for application -> proxy1.
I don't have access to the application host or forward proxy mentioned below, so I cannot configure networking there.
The endpoints the application needs are https, so proxy1 must make its outbound connections via https.
Finally, this whole setup is within a corporate network that requires a forward proxy (call it proxy2) for outbound internet, so my proxy1 needs to chain to proxy2 / use it as a parent.
I tried squid and it worked well for http only, but I couldn't get it to accept http inbound while using https outbound. Squid easily supported the parent proxy2.
I tried haproxy, but had the same result as with squid.
I tried nginx and it did what I wanted with http -> proxy -> https, but doesn't support a parent proxy. I considered setting up socat as in this answer, or using proxy_pass and proxy_set_header as in this answer, but I can't shake the feeling there's a cleaner way to achieve the requirements.
This doesn't seem like an outlandish setup, is it? Or is there a preferred approach for it? Ideally one using squid or nginx.
You can achieve this without the complexity by using a port forwarder like socat. Just install it on a host to do the forwarding (or locally on the app server if you wish) and create a listener that forwards connections through the proxy server. Then, on your application host, use a local name resolution override to map the FQDN to the forwarder.
So the final config should be the app server using a URI that points to the forwarding server (using its address if no name resolution exists), which has a socat listener that points to the corporate proxy. No reverse proxy required.
socat TCP4-LISTEN:443,reuseaddr,fork \
PROXY:{proxy_address}:{endpoint_fqdn}:443,proxyport={proxy_port}
Just update with your parameters.
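For the name resolution override, a hosts-file entry on the application host is usually enough (the forwarder address and FQDN below are placeholders):

# /etc/hosts: point the endpoint FQDN at the socat forwarder
10.0.0.5    api.example.com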

Enable squid proxy blocking https

I'm trying to block some sites like gmail and outlook from my squid proxy server.
My squid.conf is:
acl blacklist dstdomain "/etc/squid/blacklist.acl"
http_access deny blacklist
And blacklist.acl is:
.atlassian.net
.accounts.google.com
.mail.google.com
.gmail.com
.gmail.google.com
This only seems to work for sites using http (i.e. they successfully get blocked); https sites are still able to get through.
I'm running squid 4.10 on Ubuntu 20.04.
Does anyone know how to achieve this?
Thanks in advance!
This is probably because you haven't enabled SSL bumping, i.e. your http_port directive is set to the default http_port 3128.
I've written about both Squid's SSL setup and blocking websites
configure squid with ICAP & SSL
block and allow websites with squid
When the site is encrypted, Squid can only validate the domain, not the full URL path or keywords in the URL. To block https sites using urlpath_regex you need to set up Squid with SSL bump. It is a tricky and long process, and you need to carefully configure the SSL bump settings and generate certificates, but it is possible. I have succeeded in blocking websites using urlpath_regex over https this way.
For more detailed explanation:
Your squid.conf should contain something like the following to block websites using keywords or paths (i.e. urlpath_regex):
http_port 3128 ssl-bump cert=/usr/local/squid/certificate.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
acl BlockedKeywords url_regex -i "/etc/squid/.."
acl BlockedURLpath urlpath_regex -i "/etc/squid/..."
acl BlockedFiles urlpath_regex -i "/etc/squid3/...."
http_access deny BlockedKeywords
http_access deny BlockedFiles
http_access deny BlockedURLpath
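The http_port line only enables the bumping machinery; you also need ssl_bump rules and a certificate generator, and clients must trust your CA certificate. A minimal sketch (the helper and database paths are placeholders and vary by build):

# generate per-host certificates on the fly, signed by your local CA
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
acl step1 at_step SslBump1
ssl_bump peek step1   # read the TLS client hello / SNI first
ssl_bump bump all     # then decrypt, so url_regex and urlpath_regex can match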

Enable reverse proxy and block access to the original port

I am hosting an app (Kibana) on port 5601. I want to restrict access to it by whitelisting IPs, so I am trying to host it behind Nginx. Below is my Nginx conf.
server {
    listen *:5700;
    server_name _;

    allow 10.20.30.40; # My IP
    deny all;

    location / {
        proxy_pass http://localhost:5601;
    }
}
It works: only I can access the app on port 5700 and everyone else gets a 403. However, others can directly go to localhost:5601 and bypass the whole security. How do I stop direct access to port 5601?
localhost:5601 is a connection only accessible to users/processes running on the same host that is running Nginx & Kibana. It needs to be there so that Nginx can proxy_pass traffic to Kibana.
However, I think you are talking about external users also connecting to port 5601 from remote systems.
Kibana does not need to listen to traffic from external systems on port 5601. Note that by default at least some Kibana installs do not listen to external systems and you may not need to make any changes.
However to be sure:
Edit your kibana.yml file (possibly /etc/kibana/kibana.yml)
Ensure that server.host: "localhost" is the only server.host line and is not commented out
Restart Kibana
To further manage your system using best practices, I would strongly recommend running some form of firewall and only opening access to the ports and protocols which you expect external users to need.
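For example, with ufw (assuming Ubuntu; loopback traffic is allowed by default, so Nginx can still reach Kibana locally):

# allow the proxied port, block direct external access to Kibana
sudo ufw allow 5700/tcp
sudo ufw deny 5601/tcp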

How to access control.unit.sock with Nginx for securely proxying Unit API

I'm trying to use Nginx as a proxy to access control.unit.sock (Nginx Unit), as recommended here: Securely Proxying Unit API. But Nginx is not able to access the socket.
I use the default configuration for Unit. unix:control.unit.sock is created as root with 600 permissions. Nginx runs as the user www-data by default.
How can I give Nginx access to this socket securely, while avoiding opening sockets on public interfaces in production?
(For sure, Nginx has access if I set the permissions to 777.)
server {
    location / {
        proxy_pass http://unix:/var/run/control.unit.sock;
    }
}
You can consider running Unit with the --control option and specifying the address you want to use (e.g. --control 127.0.0.1:8080).
From the documentation (https://unit.nginx.org/installation/#installation-startup):
--control socket
Address of the control API socket. IPv4, IPv6, and Unix domain sockets
are supported.
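A sketch of that approach: bind the control API to loopback and let Nginx enforce access in front of it (the listen port and allowed IP below are placeholders):

# start Unit with a TCP control socket on loopback instead of the unix socket
unitd --control 127.0.0.1:8080

# nginx: proxy the control API, restricted to a trusted address
server {
    listen *:8443;
    allow 10.20.30.40;
    deny all;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}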

How to configure cascade squid proxy with squid parent digest authentication

How can I configure a squid proxy behind another proxy that requires digest authentication?
I have this line in my squid conf, but the parent proxy keeps asking me for a username and password.
cache_peer $PARENTIP parent $PARENTPORT 0 default no-query proxy-only login=$user:$pass
It doesn't have to be a squid if there is another solution.
For those who come upon this question on search, forwarding requests to a parent proxy works using basic proxy authentication (without failover) via the following configuration. This allows squid to manage the forwarding and authentication to the parent proxy without the additional client credential configuration.
cache_peer $PARENT_PROXY_HOST parent $PARENT_PROXY_PORT 0 default no-query login=$PARENT_PROXY_USERNAME:$PARENT_PROXY_PASSWORD
never_direct allow localhost
However, I couldn't get this to work with proxy digest authentication. This, apparently, isn't supported by squid via a cache_peer configuration declaration [squid mailing list citation].
One can manage this by storing or passing the configuration credentials (username/password) at the client and then passing them through to the squid proxy. This works for basic and digest authentication. The client passes the credentials. squid, in this case, does not require authentication, but passes through the client-provided credentials to the parent proxy which does require them.
cache_peer $PARENT_PROXY_HOST parent $PARENT_PROXY_PORT 0 default no-query login=PASSTHRU
never_direct allow localhost
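A hypothetical client invocation against such a setup (host name and credentials are placeholders); the client performs the digest authentication and the child Squid relays the credentials to the parent:

# the client supplies digest credentials; squid passes them through to the parent
curl --proxy http://child-squid:3128 --proxy-digest --proxy-user alice:secret https://example.com/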
Add never_direct:
acl all src 0.0.0.0/0.0.0.0
http_access allow localhost
http_access deny all
cache_peer $PARENTID parent 8080 0 default no-query proxy-only login=a:b
never_direct allow localhost
http_port 8080
Alternatively, installing redsocks on the same machine where the child Squid is running might also work.
This is just a supplement to the first answer, in case it is useful to latecomers.
If your Squid can access http but not https, the following may help.
Setting security considerations aside, add the following value to make https accessible:
never_direct allow all
and comment out the following line:
#http_access deny CONNECT !SSL_ports
