I have configured Squid on RHEL and it is working properly. The following are the directives I have configured to populate access.log:
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
But when I check, access.log is not being populated and shows no data. Any idea what the problem may be?
access_log daemon:/usr/local/squid/var/logs/access.log squid
Specify this in your squid config to generate access log information.
(The /usr/local/squid/var/logs/ path depends on your custom squid folder structure.)
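One likely cause worth cross-checking: in Squid 2.6 and later the directive is access_log, while cache_access_log is the older name and may be ignored or rejected depending on the version. A minimal squid.conf sketch, assuming the RHEL default log paths from the question:

```
# squid.conf sketch (RHEL default log paths assumed)
# Newer Squid versions use access_log, not the older cache_access_log
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log none
```

After editing, `squid -k parse` checks the configuration for errors and `squid -k reconfigure` applies it without restarting the daemon.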
I am only allowing certain IPs through that site's particular config. I have multiple configs for multiple sites, as you do.
For example, in sites-available I have website123.com. I edit this and see a "server" block, and inside that a "location" block. In the "location" block I am allowing or denying IPs, with multiple
allow 123.123.123.124;
allow 123.123.123.125;
allow 123.123.123.126;
then a deny all;
Now I am using Cloudflare, which obviously proxies the IP, but it does include the True-Client-IP value in the request header. This is what I now need to check for/consider as the source in my nginx sites-available site config.
This seems like a simple change in my sites-available site config to tell it to read a different header value for the IP. Is there a solution I should look at?
I have explored the solution below, but it expects a maintained list of real IPs (Cloudflare IPs). This seems unnecessary; I simply want to change which header value nginx looks at. https://danielmiessler.com/blog/getting-real-ip-addresses-using-cloudflare-nginx-and-varnish/
I thought maybe this doc would help, but we need to adjust this per site config, not via a global nginx.conf change. https://nginx.org/en/docs/http/ngx_http_realip_module.html
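For what it's worth, the realip module's directives are also valid inside a server block, so this can be done per site rather than globally. A minimal sketch, assuming the module is compiled in; the set_real_ip_from range is a placeholder, and some list of Cloudflare ranges is still needed because nginx only trusts the header when the connection arrives from a listed address (otherwise any client could forge it):

```nginx
server {
    server_name website123.com;

    # Trust True-Client-IP only when the connection comes from Cloudflare
    # (example range; substitute Cloudflare's published IP ranges)
    set_real_ip_from 173.245.48.0/20;
    real_ip_header True-Client-IP;

    location / {
        # allow/deny now match the restored client address
        allow 123.123.123.124;
        allow 123.123.123.125;
        allow 123.123.123.126;
        deny all;
    }
}
```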
I am doing a PoC on an nginx server. It listens on ports and routes requests to different domains. The servers I am adding are dynamic in nature.
The server config blocks look like the below:
(attached image)
I have to fetch the server name and port address from an API and create servers based on it. The number of servers may increase or decrease; it is dynamic in nature.
What I tried was creating a new-config.conf, which is already included in nginx.conf. I write the server config into new-config.conf dynamically and restart nginx afterwards.
I need a way to apply the new server config without restarting nginx.
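Worth noting: a full restart is not required to pick up a regenerated include file. nginx reloads its configuration gracefully on `nginx -s reload` (or `systemctl reload nginx`), keeping existing connections alive. A sketch of the update step, where generate-config stands in for a hypothetical script that renders server blocks from the API:

```shell
# generate-config is hypothetical; it renders the server blocks
# fetched from the API into the already-included file
generate-config > /etc/nginx/conf.d/new-config.conf

# Validate first, then signal the master process to reload;
# workers finish in-flight requests, so nothing is dropped
nginx -t && nginx -s reload
```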
We have installed the NGINX Ingress Controller through Helm. For our app we maintain settings separately in ConfigMaps. The application is working fine.
As far as security concerns go, we want to hide the nginx server and version details from the response headers.
We explored a lot and found the solutions below:
Set server_tokens off; in the nginx.conf file
Set server-tokens = false in the ConfigMap on the AKS portal
Neither solution is currently working.
Any Ideas?
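For reference, this is roughly the shape the ingress-nginx controller expects, sketched with assumed names: the ConfigMap name and namespace must match whatever the Helm chart actually created, and the value has to be the string "false", not a bare boolean. Note also that server-tokens only hides the version number; the Server: nginx header itself is a separate concern.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Assumed names: check what Helm deployed, e.g.
  #   kubectl get configmap -n <release-namespace>
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  server-tokens: "false"   # must be a quoted string
```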
Okay, so I know there are a lot of questions about the Nginx reverse proxy, but I don't understand any of the answers. I read the documentation on Nginx's website and I kind of get it, but I need some help.
So here is what I want.
Visitor ---> Nginx Reverse Proxy ---> Nginx Server (Website)
I know that you can get the reverse proxy to listen for a remote server, but I can't find the configuration I want. I also want to show a static HTML page when passing through the Nginx reverse proxy, like a page that says "Hosted by this company", the way Cloudflare does. I asked someone and they told me that if I put the HTML file on the reverse proxy server then it will show up when the visitor goes through the server, but I don't understand that concept: how do I get that static HTML to show up, and what would the correct configuration for this be? I imagine it will be different from a normal remote server configuration.
Thanks in advance!
In an Nginx reverse proxy configuration it is possible to have some URLs served from the reverse proxy itself without going to the origin server.
Take a look at the following article: https://www.cevapsepeti.com/how-to-setup-nginx-as-reverse-proxy-for-wordpress/
In this article you can see in virtual.conf, lines 51-55, that error pages are served directly from the reverse proxy, while other assets are fetched from the origin server first. Cloudflare employs the same concept. I hope this helps.
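To make the concept concrete, here is a minimal sketch of a proxy server block that serves one static page from the proxy host itself and forwards everything else; the backend address, paths, and page name are placeholders, not taken from the article:

```nginx
server {
    listen 80;
    server_name example.com;

    # Served directly from the proxy host's filesystem
    location = /hosted.html {
        root /var/www/proxy-static;
    }

    # Show the local page when the backend is unreachable
    error_page 502 503 /hosted.html;

    # Everything else goes to the backend (placeholder address)
    location / {
        proxy_pass http://192.0.2.10;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```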
I have 2 servers: one reverse proxy on the web and one on a private link serving WebDAV.
Both servers are Apache httpd v2.
On the proxy I have:
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass /repo/ http://share.local/repo/
ProxyPassReverse /repo/ http://share.local/repo/
On the dav server I have:
<Location /repo/>
DAV on
Order allow,deny
allow from all
</Location>
The reverse proxy is accessed via https and the private server is accessed via http.
And there lies the problem!
Read-only commands work fine, but when I want to move something I get 502 Bad Gateway.
The reason for this is that the reverse proxy is not rewriting the URLs inside the extended DAV request.
The source URL is inside the header and is correctly transformed to http://share.local/file1.
The destination URL is inside some xml fragment I do not understand and stays https://example.com/file1 :(
Is there a standard way to let the apache correctly transform the request?
Thanks for your effort.
Hmm, found the answer. Always the same :)
I added the following lines to my 'private server' config file:
LoadModule headers_module /usr/lib/apache2/modules/mod_headers.so
RequestHeader edit Destination ^https http early
(e.g. of config location '/etc/httpd/conf.d/DefaultRequestHeader.conf')
and it worked. I don't know if this has drawbacks. I'll see.
The destination URL shouldn't be in XML but in the "Destination" header, as you already noticed. Maybe you were looking at the error response...
In general, this problem would go away when clients and servers implement WebDAV level 3 (as defined in RFC4918), which allows the Destination header to be just a relative path.