url_rewrite_program by squid-cache really redirects the URL. In other words, the end user gets a response back that says "redirected page from foo to bar", and then the user makes another request to the redirected address. I don't want this.
What I want to achieve is similar to Apache's mod_rewrite: an absolutely transparent rewriting mechanism, so that the user requests specific content and gets it as a response (regardless of the URL he initially requested) without any HTTP redirection.
The reason I want to avoid redirection via HTTP is that I don't want the end user to see internal application structures. For example, he requests "application1.foo.com" and gets the content of a URL that is much lengthier. If the end user bookmarks it, he bookmarks my clean little URL (application1.foo.com). This keeps users away from such details and gives them a uniform URL for the service even if I change it in the future. For example, I might map application1.foo.com to badprovider.com/path/to/file.php initially and later change it to goodprovider.com/file.php, and the user won't notice. The advantage is that end-user bookmarks remain correct, and their behaviour is steered in a more guided manner.
Did you try setting Squid up as a reverse proxy in 'accel' mode? It worked for me:
# Match any client source address
acl all src all
# Listen in accelerator (reverse proxy) mode and select the site by the Host header
http_port 3128 accel vhost
# The origin web server being accelerated (here on the same machine, port 80)
cache_peer 127.0.0.1 parent 80 0 no-query originserver name=myAccel
# Only accept requests for our own site(s)
acl our_sites dstdomain your_domain.net
http_access allow our_sites
http_access deny all
# Route requests for our sites to the origin server, and nothing else
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
# Do not cache negative (error) responses
negative_ttl 0
access_log /var/log/squid/access.log squid
hosts_file /etc/hosts
coredump_dir /var/spool/squid
your_domain.net is the domain you want to serve through the reverse proxy.
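For the example in the question (a clean application1.foo.com fronting a longer backend URL), the same pattern can point at the remote backend instead of 127.0.0.1. A minimal sketch, with the hostnames taken from the question as placeholders; rewriting the path component itself would additionally need a url_rewrite_program helper (in the classic helper protocol, a reply without a 301:/302: prefix is treated as an internal rewrite rather than a client-visible redirect):

http_port 80 accel vhost
# Hypothetical backend host taken from the question's example
cache_peer badprovider.com parent 80 0 no-query originserver name=app1
acl app1_site dstdomain application1.foo.com
http_access allow app1_site
cache_peer_access app1 allow app1_site
cache_peer_access app1 deny all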
I have a route in a web server which needs to fetch a file from a remote server and then process the content.
I want nginx to proxy this fetch so that I can take advantage of its caching and performance.
At first I thought I could use X-Accel-Redirect, but since I need to process the content, I don't think I can.
Second, I think I can just create a proxy_pass route for this purpose, but I also need to restrict this route so it can be accessed only from my web server.
What is the best practice? Adding allow 127.0.0.1 to this route?
The internal directive will restrict the route to internal redirects; allow 127.0.0.1; deny all; restricts it to requests from the local machine, which has the same effect here.
If you are intending to process the content within Nginx, e.g. with the subs filter module, then don't forget to disable gzip for this location.
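A minimal sketch of such a location, where the path prefix, upstream host, and cache zone name are placeholders of my own rather than anything from the question:

location /remote-fetch/ {
    # Only the co-located web server may call this route
    allow 127.0.0.1;
    deny all;
    # Keep the response body uncompressed so a filter (e.g. subs) can process it
    gzip off;
    proxy_pass http://remote.example.com/;
    # Assumes a proxy_cache_path ... keys_zone=remote_cache:10m defined elsewhere
    proxy_cache remote_cache;
}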
I want to create a proxy server such that when I access some URLs, the server connects to another server, while other URLs get the regular proxy service. Does anyone know how to set this up in Squid?
Specify the domains that should be sent to the other server in an acl.
For example, if you want to connect to the other server for Google requests, define the acl as:
acl alloweddomains dstdomain google.com google.co.uk
You can also simply list all the domains in a text file and load it into the acl:
acl alloweddomains dstdomain "<path>/domains.txt"
Then use the cache_peer option:
cache_peer next_server_ip parent 80 0 no-query originserver name=allowserver
cache_peer_access allowserver allow alloweddomains
cache_peer_access allowserver deny all
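Put together, a minimal sketch might look like this (the peer address and the domain list path are placeholders); never_direct is added here so that requests for the listed domains are not sent directly if peer selection is bypassed:

acl alloweddomains dstdomain "/etc/squid/domains.txt"
cache_peer next_server_ip parent 80 0 no-query originserver name=allowserver
cache_peer_access allowserver allow alloweddomains
cache_peer_access allowserver deny all
# Force the listed domains through the peer instead of going direct
never_direct allow alloweddomains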
With your explanation, I realized that you would need two or more proxies configured on the source client. If that is the case, this is not a valid configuration.
The better solution for your idea is to set a single proxy on the client side and, on the other side, on your proxy (for example on the Squid proxy server), configure tunneling to the other proxy servers.
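A minimal sketch of that chaining on the client-facing Squid, assuming two hypothetical upstream proxies:

# Forward everything through the upstream proxies instead of going direct
cache_peer upstream1.example.net parent 3128 0 no-query
cache_peer upstream2.example.net parent 3128 0 no-query
never_direct allow all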
How can I configure a squid proxy behind another proxy that requires digest authentication?
I have this line in my squid conf, but the parent proxy keeps asking me for a username and password.
cache_peer $PARENTIP parent $PARENTPORT 0 default no-query proxy-only login=$user:$pass
It doesn't have to be a squid if there is another solution.
For those who come across this question via search: forwarding requests to a parent proxy works with basic proxy authentication (without failover) via the following configuration. This lets Squid manage the forwarding and authentication to the parent proxy without additional client-side credential configuration.
cache_peer $PARENT_PROXY_HOST parent $PARENT_PROXY_PORT 0 default no-query login=$PARENT_PROXY_USERNAME:$PARENT_PROXY_PASSWORD
never_direct allow localhost
However, I couldn't get this to work with proxy digest authentication. This, apparently, isn't supported by squid via a cache_peer configuration declaration [squid mailing list citation].
One can manage this by configuring the credentials (username/password) on the client and having it send them through the Squid proxy. This works for both basic and digest authentication: the client supplies the credentials; Squid, in this case, does not require authentication itself, but passes the client-provided credentials through to the parent proxy, which does require them.
cache_peer $PARENT_PROXY_HOST parent $PARENT_PROXY_PORT 0 default no-query login=PASSTHRU
never_direct allow localhost
Add never_direct:
acl all src 0.0.0.0/0.0.0.0
http_access allow localhost
http_access deny all
cache_peer $PARENTID parent 8080 0 default no-query proxy-only login=a:b
never_direct allow localhost
http_port 8080
Maybe if you install redsocks on the same machine where the child Squid is running, you can do that.
This is just a supplement to the first answer, in case it is useful to latecomers.
If your Squid can access http but not https, the following information may be useful to you.
Leaving security aside, consider adding the following value to make https accessible:
never_direct allow all
and comment out the following line:
#http_access deny CONNECT !SSL_ports
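For orientation, that line sits among the stock access rules in the default squid.conf; the surrounding ACLs shown here (abridged) come from the standard distribution file:

acl SSL_ports port 443
acl Safe_ports port 80
acl CONNECT method CONNECT
http_access deny !Safe_ports
# Disabled so CONNECT tunnels to non-SSL ports are no longer rejected
#http_access deny CONNECT !SSL_ports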
I am trying to call a web service from an ASP.NET 3.5 application. I have a URL that has the DNS name in it, and when it is used I get the following error.
(the xxxxxx is there for privacy concerns)
The request failed with the error message: 301 Moved Permanently -- The document has moved here.
When I use the URL with the physical IP it works just fine. Are there any settings that I am missing? I currently have the URL behavior set to dynamic so that it uses the URL from the web.config.
Hm - maybe I do not understand your question correctly, but it sounds like the web service URL simply has changed from the one you use to the one returned by the 301 response (the xxxxxxxx one).
Are you sure you call the web service with exactly the xxxxxxx URL?
PS:
I have a URL that has the DNS in it
This is probably not what you wanted to say - DNS stands for Domain Name System, which is the system that translates host names to IP addresses. I assume you meant FQDN, a Fully Qualified Domain Name.
It's possible for request rewriting to be happening on the server side, based on the incoming request, including what you pass for the hostname in the URL. A request rewrite may result in a 301 response.
In other words, requests with a hostname of www.domain.com may be rewritten, while requests using a particular ip address, even if the IP address is the address that www.domain.com resolves to, may not be rewritten.
The solution is to either use the IP address, or use the new location that you get from the 301 response.
If you are using a Web Reference, then you can set the AllowAutoRedirect property of the proxy instance to true. In this case, the redirection will happen behind the scenes.
I have 2 servers: one reverse proxy on the web and one on a private link serving WebDAV.
Both servers are Apache httpd v2.
On the proxy I have:
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass /repo/ http://share.local/repo/
ProxyPassReverse /repo/ http://share.local/repo/
On the dav server I have:
<Location /repo/>
DAV on
Order allow,deny
allow from all
</Location>
The reverse proxy is accessed via https and the private server is accessed via http.
And there lies the problem!
Read-only commands work fine, but when I want to move something I get 502 Bad Gateway.
The reason for this is that the reverse proxy does not rewrite the URLs inside the extended DAV request.
The source URL is inside the header and is correctly transformed to http://share.local/file1.
The destination URL is inside some xml fragment I do not understand and stays https://example.com/file1 :(
Is there a standard way to let the apache correctly transform the request?
Thanks for your effort.
Hmm, found the answer. Always the same :)
I added the next line to my 'private server' config file:
LoadModule headers_module /usr/lib/apache2/modules/mod_headers.so
# Rewrite the scheme in the incoming Destination header from https to http
RequestHeader edit Destination ^https http early
(for example, in a config file such as '/etc/httpd/conf.d/DefaultRequestHeader.conf')
and it worked. I don't know if this has drawbacks. I'll see.
The destination URL shouldn't be in XML but in the "Destination" header, as you already noticed. Maybe you were looking at the error response...
In general, this problem would go away when clients and servers implement WebDAV level 3 (as defined in RFC4918), which allows the Destination header to be just a relative path.
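For illustration, a hypothetical MOVE request under RFC 4918 could carry the destination as a path only, which sidesteps the scheme and host mismatch entirely:

MOVE /repo/file1 HTTP/1.1
Host: example.com
Destination: /repo/file2
Overwrite: F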