I'm trying to block some sites like Gmail and Outlook on my Squid proxy server.
My squid.conf is:
acl blacklist dstdomain "/etc/squid/blacklist.acl"
http_access deny blacklist
And blacklist.acl is:
.atlassian.net
.accounts.google.com
.mail.google.com
.gmail.com
.gmail.google.com
This only seems to work for sites using HTTP (i.e. they successfully get blocked);
HTTPS sites are still able to get through.
I'm running Squid 4.10 on Ubuntu 20.04.
Does anyone know how to achieve this?
Thanks in advance!
This is probably because you haven't enabled SSL bumping, i.e. your http_port directive is still the default http_port 3128.
I've written about both Squid's SSL setup and blocking websites:
configure squid with ICAP & SSL
block and allow websites with squid
When the site is encrypted, Squid can validate only the domain, not the full URL path or keywords in the URL. To block HTTPS sites using urlpath_regex, we need to set up the Squid proxy with SSL bump. It is a tricky and long process, and you need to configure the SSL bump settings carefully by generating certificates, but it is possible. I have succeeded in blocking websites with urlpath_regex over HTTPS this way.
For a more detailed explanation:
squid.conf should contain the following to block websites by keyword or path (i.e. urlpath_regex):
http_port 3128 ssl-bump cert=/usr/local/squid/certificate.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
acl BlockedKeywords url_regex -i "/etc/squid/.."
acl BlockedURLpath urlpath_regex -i "/etc/squid/..."
acl BlockedFiles urlpath_regex -i "/etc/squid3/...."
http_access deny BlockedKeywords
http_access deny BlockedFiles
http_access deny BlockedURLpath
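Note that the http_port line above is not sufficient on its own: SSL bump also needs a CA certificate that clients trust, the certificate-generation helper, and ssl_bump rules. A minimal sketch of the supporting pieces, assuming Ubuntu/Squid 4 paths (the helper and database locations are assumptions, so check your build; also note that Squid 4 spells the certificate option tls-cert= rather than the older cert=):

# Generate a self-signed CA that Squid uses to mint per-site certificates
# (clients must import and trust this CA or they will see warnings):
openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes -x509 \
  -keyout /usr/local/squid/certificate.pem -out /usr/local/squid/certificate.pem

# Initialize the certificate database used by the helper:
/usr/lib/squid/security_file_certgen -c -s /var/lib/squid/ssl_db -M 4MB

Then in squid.conf:

# Helper that generates per-host certificates on the fly:
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
# Peek at the TLS handshake, then bump (decrypt) everything else:
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all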
I have a webserver (installed via snap) which runs at http://example.com:3001 (it doesn't support HTTPS), so I'm using nginx proxy_pass together with Certbot to generate a certificate. It works great, but I have two questions:
1. Is this https://example.com <--> nginx <--> http://example.com:3001 setup the correct way to do it? I'm not sure, since it's still possible to go directly to the insecure port 3001.
2. Is it possible to do this on the host so I can avoid opening port 3001 to the internet?
Thanks in advance.
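For reference, a minimal sketch of the nginx side being described, assuming Certbot-managed certificate paths (the domain, the paths, and the ability to bind the backend to localhost are assumptions):

server {
    listen 443 ssl;
    server_name example.com;

    # Certificate paths as typically written by Certbot:
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Proxy to the HTTP-only backend. If the backend can be bound to
        # 127.0.0.1 instead of all interfaces, port 3001 is never exposed
        # to the internet in the first place.
        proxy_pass http://127.0.0.1:3001;
    }
}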
I have a website and a game server.
I have a domain which I have connected to Cloudflare.
I want to route non-HTTP/HTTPS traffic to my server's IP, because when I try to connect to the game server using the domain, I can't, due to the Cloudflare proxy.
Maybe it can be done differently?
I use Nginx.
Cloudflare has its own SSL configuration.
There are 4 options for you:
Off disables HTTPS completely.
Flexible: Cloudflare will automatically switch client requests from HTTP to HTTPS, but it still points to port 80 on your nginx server, so you should not configure SSL on nginx in this case.
So the only options for you are Full or Full (Strict) (the latter is more restrictive about the certificate configured on nginx, which must be a valid certificate).
With Full, you can configure nginx with a self-signed SSL certificate and leave it at that. Cloudflare will handle the leg between the client and its proxy servers.
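As a sketch of that self-signed setup (the paths and domain are placeholders):

# Generate a self-signed certificate for nginx:
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /etc/nginx/ssl/selfsigned.key -out /etc/nginx/ssl/selfsigned.crt

# And reference it in the nginx server block:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/selfsigned.key;
}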
I am hosting an app (Kibana) on port 5601. I want to restrict access to it by whitelisting IPs, so I am trying to host it behind Nginx. Below is my Nginx conf.
server {
    listen *:5700;
    server_name _;

    allow 10.20.30.40;  # My IP
    deny all;

    location / {
        proxy_pass http://localhost:5601;
    }
}
It works: only I can access the app on port 5700, and everyone else gets a 403. However, others can go directly to port 5601 and bypass the whole security. How do I stop direct access to port 5601?
localhost:5601 is a connection only accessible to users/processes running on the same host that is running Nginx and Kibana. It needs to stay there so that Nginx can proxy_pass traffic to Kibana.
However, I think you are talking about external users also connecting to port 5601 from remote systems.
Kibana does not need to listen for traffic from external systems on port 5601. Note that, by default, at least some Kibana installs do not listen to external systems, so you may not need to make any changes.
However to be sure:
Edit your kibana.yml file (possibly /etc/kibana/kibana.yml)
Ensure that server.host: "localhost" is the only server.host line and that it is not commented out (see the snippet below)
Restart Kibana
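For reference, the relevant snippet (the file path and service name vary by install, so treat them as assumptions):

# /etc/kibana/kibana.yml
server.host: "localhost"

# then, on systemd-based installs:
sudo systemctl restart kibana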
To further manage your system using best practices, I would strongly recommend operating some form of firewall and only opening access to the ports and protocols which you expect external users to need.
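As one possible sketch, assuming Ubuntu's ufw (adjust for your firewall of choice; once enabled, ufw denies incoming traffic by default, which also covers port 5601):

sudo ufw allow ssh        # keep your own SSH access
sudo ufw allow 5700/tcp   # the Nginx listener you expose
sudo ufw enable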
I want to create a proxy server where, when I access certain URLs, the server connects to another server, while other URLs keep using the regular proxy service. Does anyone know how to set this up in Squid?
Specify the domains that should go to the other server in an ACL.
If you want to connect to the other server for Google requests, define the ACL as:
acl alloweddomains dstdomain google.com google.co.uk
You can also simply list all the domains in a text file and load it in the ACL:
acl alloweddomains dstdomain "<path>/domains.txt"
Then use the cache_peer option:
cache_peer next_server_ip parent 80 0 no-query originserver name=allowserver
cache_peer_access allowserver allow alloweddomains
cache_peer_access allowserver deny all
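Depending on your setup, you may also want to stop Squid from bypassing the peer and fetching those domains directly. A possible addition (using the same ACL name as above):

# Force requests for the listed domains through the peer:
never_direct allow alloweddomains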
From your explanation, I realized that you would need two or more proxies configured on the source client. If that's true, this is not a valid configuration.
The better solution for your idea: set a single proxy on your client side; on the other side, on your proxy (for example a Squid proxy server), you could configure tunneling to the other proxy servers (see the sketch below).
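A minimal sketch of what that chaining could look like in squid.conf (the upstream IP and port are placeholders):

# Forward all requests to the upstream proxy instead of going direct:
cache_peer 10.0.0.2 parent 3128 0 no-query default
never_direct allow all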
How can I configure a Squid proxy behind another proxy that requires digest authentication?
I have this line in my squid.conf, but the parent proxy keeps asking me for a username and password.
cache_peer $PARENTIP parent $PARENTPORT 0 default no-query proxy-only login=$user:$pass
It doesn't have to be Squid if there is another solution.
For those who come upon this question in a search: forwarding requests to a parent proxy works with basic proxy authentication (without failover) via the following configuration. This lets Squid manage the forwarding and authentication to the parent proxy without any additional client credential configuration.
cache_peer $PARENT_PROXY_HOST parent $PARENT_PROXY_PORT 0 default no-query login=$PARENT_PROXY_USERNAME:$PARENT_PROXY_PASSWORD
never_direct allow localhost
However, I couldn't get this to work with proxy digest authentication. This, apparently, isn't supported by Squid via a cache_peer configuration declaration [squid mailing list citation].
One can manage this by storing or passing the credentials (username/password) at the client and then passing them through the Squid proxy. This works for both basic and digest authentication. The client passes the credentials; Squid, in this case, does not require authentication itself, but passes the client-provided credentials through to the parent proxy, which does require them.
cache_peer $PARENT_PROXY_HOST parent $PARENT_PROXY_PORT 0 default no-query login=PASSTHRU
never_direct allow localhost
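As a usage sketch from the client side (the host name and credentials are placeholders): the client supplies the proxy credentials, and Squid relays the 407 challenge and the Proxy-Authorization response between the client and the parent:

curl --proxy http://child-squid:3128 \
     --proxy-user "$PARENT_PROXY_USERNAME:$PARENT_PROXY_PASSWORD" \
     --proxy-digest https://example.com/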
Add never_direct:
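# Note: recent Squid versions define the "all" ACL built in, so the next line is only needed on very old versions.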
acl all src 0.0.0.0/0.0.0.0
http_access allow localhost
http_access deny all
cache_peer $PARENTID parent 8080 0 default no-query proxy-only login=a:b
never_direct allow localhost
http_port 8080
Maybe if you install redsocks on the same machine where the child Squid is running, you can do that.
This is just a supplement to the first answer, in case it is useful to latecomers.
If your Squid can access HTTP but not HTTPS, the following information may be useful to you.
Setting security considerations aside, consider adding the following value to make HTTPS accessible:
never_direct allow all
And comment out the following line:
#http_access deny CONNECT !SSL_ports