I set up a proxy machine (CentOS) running Squid. I have a PHP external ACL program that handles proxy authentication.
In my external program (PHP CLI) I have included a small piece of logic that remembers a user's log-in. My goal is for the information collected in the external ACL to be usable in my external redirector program, so I can redirect the user to a page I set up (a message-of-the-day page); after redirecting to that page, I simply reset the flag.
My problem is this: the external ACL is not always triggered by Squid during the authentication process when the same username/password is entered, unless I restart Squid. Having the external ACL called on every proxy authentication matters to me, because I want the logic inside it to run each time a user authenticates.
Is there any setting in squid.conf that can be configured so that it behaves the way I want?
Am I understanding correctly that you expect every incoming HTTP request to trigger a call to your ACL helper, since you keep the credentials in the helper instead of relying on Squid's internal auth cache? If so, add ttl=0 to the external_acl_type arguments.
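For example, a minimal squid.conf sketch (the helper path, ACL names, and the %LOGIN format code are placeholders for your setup); ttl=0 and negative_ttl=0 disable caching of both positive and negative results, so Squid calls the helper on every lookup:

external_acl_type php_auth ttl=0 negative_ttl=0 %LOGIN /usr/local/bin/auth_helper.php
acl authenticated external php_auth
http_access allow authenticated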
I have the following setup:
{Client App} --[HTTP]--> {NGINX} --[HTTPS]--> {API Gateway}
I need such a setup since API Gateway does not support HTTP (it only supports HTTPS) and my client apps are old/obsolete and do not talk HTTPS. So far it works perfectly fine. Now I need to make sure that not just any request is accepted by my API Gateways. I know how to protect API Gateways using Cognito (if I leave the NGINX out of the equation). The process plays out as follows:
1. {Consumer Server} --[Credentials]--> {Cognito}
                     <--[JWT Token]--
2. {Consumer Server} --[JWT Token]--> {API Gateway}
   ...
To make sure there's no misunderstanding, the credential sent in step 1 is NOT an email. This is not for authenticating human users, but rather a machine-to-machine communication. And in my case, the client machine is an NGINX instance.
Having set up the scene, this is what I'm trying to achieve. I want my client app to communicate with my API Gateway over HTTP. For that, I have to introduce an NGINX in between. But now I want to make sure that only my designated NGINX instances can do so. So I need to authenticate the requests coming in from NGINX. That means NGINX needs to follow the second diagram and ask for JWT tokens from Cognito. And this is not a one-time process: tokens expire, and once they do, NGINX has to refresh them by sending another request to Cognito.
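For illustration, the call NGINX (or a small sidecar script next to it) would have to repeat whenever the token nears expiry is the standard OAuth2 client-credentials request against the Cognito token endpoint; the domain, region, and client ID/secret below are placeholders:

curl -X POST "https://<your-domain>.auth.<region>.amazoncognito.com/oauth2/token" \
     -H "Content-Type: application/x-www-form-urlencoded" \
     -u "<client_id>:<client_secret>" \
     -d "grant_type=client_credentials"

The JSON response carries access_token and expires_in; the token then has to be attached to proxied requests (e.g. via proxy_set_header Authorization "Bearer ...") and re-fetched before it expires. Stock NGINX has no OAuth client built in, which is why this usually ends up in OpenResty/Lua, njs, or a sidecar process.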
Does anyone know if there's a ready-made solution for this? Or an easy way to implement it?
We have a fleet of IoT devices and want to proxy a port on a device to an end user (for remote diagnostics sessions) without exposing the devices' IPs. These IPs can be dynamic, and the proxy will of course need to be authenticated.
So the flow will be like this:
The user requests a remote diagnostic session;
Backend sends request to IoT device to check if the diagnostic service is running, and otherwise starts it;
IoT device starts the diagnostic service and replies with the status;
Backend creates a new secure proxy which proxies the IoT device to the end user with authentication;
Backend replies to the user with the IP and authorization tokens to connect to the proxy;
User connects to the diagnostic session through the proxy;
Now, I have found only one solution thus far, which is Ceryx; however, it has no authentication. NGINX Plus doesn't seem like an option, due to the significant license costs, but also because it doesn't seem to be able to handle this.
Are there any solutions besides adjusting Ceryx to support authentication?
With OpenResty you can set up your proxy using:
the access_by_lua request phase to authenticate your request
balancer_by_lua to handle the dynamic proxying
This can be easily achieved, but will require you to write some code.
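A rough sketch of how those two phases fit together; the diag_sessions module (which validates the token and resolves the dynamic device address for a session) is a hypothetical stand-in for your own session store:

upstream iot_device {
    server 0.0.0.1;  # placeholder; the real peer is set in balancer_by_lua
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- use the target chosen during the access phase below
        local ok, err = balancer.set_current_peer(ngx.ctx.target_host, ngx.ctx.target_port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
            return ngx.exit(500)
        end
    }
}

server {
    listen 443 ssl;  # ssl_certificate/ssl_certificate_key omitted for brevity
    location / {
        access_by_lua_block {
            -- hypothetical helper: validates the session token and
            -- resolves the device's current IP/port
            local session = require("diag_sessions").authenticate(ngx.var.http_authorization)
            if not session then
                return ngx.exit(ngx.HTTP_UNAUTHORIZED)
            end
            ngx.ctx.target_host = session.host
            ngx.ctx.target_port = session.port
        }
        proxy_pass http://iot_device;
    }
}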
I'm completely new to Google Cloud and Google Compute Engine. I have a VM instance set up in GCE, and would like to make requests to it.
Inside the instance, I have a basic Nginx running (of which, admittedly, I also have a very limited understanding), with the following configuration:
http {
    server {
        listen 80 default_server;
        return 200 hello;
    }
}
If I access it from inside the instance through the Google Cloud console, for instance with curl, it works, but I don't know how to access it from outside.
In the list of Compute Engine VM instances, the instance has an external IP associated (let's say for example 35.204.94.110), but requests to http://35.204.94.110:80 don't get a response.
How can I access the instance from the outside?
I would make sure that HTTP access is enabled on the VM instance. When creating a VM instance, there are two checkboxes:
Allow HTTP traffic
Allow HTTPS traffic
If the “Allow HTTP traffic” box is unchecked, that would explain the behavior. Go into your console, click on the affected VM instance, and scroll down to see whether the “Allow HTTP traffic” box is checked. If not, click Edit, check the box to allow HTTP traffic, and save the changes. You should now be able to load the page externally.
I tested this myself by just installing and enabling nginx on a VM instance. If I disable “Allow HTTP traffic” the page does not load. When it is enabled, I am able to load the default web page of nginx successfully.
Looks like you don't have HTTP access enabled. Check the firewall rules and make sure your GCE instance carries the http-server tag (the tag the default-allow-http firewall rule targets, and the one the “Allow HTTP traffic” checkbox sets).
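If you prefer the CLI, the equivalent is roughly the following (instance name and zone are placeholders):

# Create the rule if it doesn't exist yet, then tag the instance
gcloud compute firewall-rules create default-allow-http \
    --allow=tcp:80 --target-tags=http-server
gcloud compute instances add-tags my-instance \
    --tags=http-server --zone=us-central1-a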
We are using a Squid proxy along with the GreasySpoon ICAP server to modify responses for development purposes. We need different developers to get different modifications to the responses, because they are working on different things. Initially, when we installed this setup inside our LAN, we were able to accommodate this by using the user_id script parameter inside the GreasySpoon response scripts. This parameter is populated with the local IP of the developer, so we could key things off the different IPs.
When we moved the setup to the cloud, everyone had our shared WAN IP for the user_id parameter, and so our scheme broke.
The comments in the default GreasySpoon script indicate that the user_id can be a user login:
// user_id : (String)user id (login or user ip address)
I configured authentication with the Squid server, but the user_id is still set to our shared WAN IP. Is it possible to populate this script parameter in GreasySpoon with the proxy user's username using Squid?
The GreasySpoon (1.0.10) configuration file service.properties contains a setting: SpoonScript.icapuserheader=x-authenticated-user. This is the header that GreasySpoon inspects to find the user_id. If no such header is found, GreasySpoon falls back to the IP address. So you must configure Squid to send the authenticated user's username in the same header that is configured in the properties file, and to send it at all (it is off by default):
# in squid.conf:
icap_send_client_username on
icap_client_username_header x-authenticated-user
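For completeness, those lines sit alongside the usual ICAP wiring in squid.conf; the service name and URI below are placeholders for wherever your GreasySpoon instance listens:

icap_enable on
icap_service gs_resp respmod_precache icap://127.0.0.1:1344/response
adaptation_access gs_resp allow all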
In an application we are writing, we need to have a page that is essentially public but should only be available to certain people... yep, I know, very paradoxical!
It's basically a "Submit a support ticket" style page that is hosted outside of a customer's intranet but should only be available to users on that intranet.
Naturally, making the user sign up for an account would be the usual course of action, but in this case it isn't really an option.
Is there any way of doing a "secure redirect" to that page?
My initial thought would be to use an internal page which redirects, appending a unique one-time hash to the URL which then expires; although that's not 100% airtight, the link is only valid for about a minute.
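As a rough sketch of that idea in PHP (the URL, parameter names, and shared secret are all illustrative):

<?php
// Internal page: build a link that is only valid for ~60 seconds.
function make_link(string $secret): string {
    $expires = time() + 60;
    $sig = hash_hmac('sha256', (string)$expires, $secret);
    return "https://support.example.com/ticket?expires={$expires}&sig={$sig}";
}

// Public page: accept the request only if the signature matches
// and the link has not yet expired.
function verify_link(int $expires, string $sig, string $secret): bool {
    if (time() > $expires) {
        return false;
    }
    $expected = hash_hmac('sha256', (string)$expires, $secret);
    return hash_equals($expected, $sig);  // constant-time comparison
}

To make the link genuinely one-time rather than merely short-lived, the public page would also have to record signatures it has already accepted.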
Two ways come to mind.
1) Deploy IP restrictions on the web server for the off-network resource. If the request is coming from one of the exit points of your network (proxy server, other public-facing egress points, etc.), allow the connection; otherwise, do not (see the sketch after this list).
2) Deploy mutually authenticated SSL on both the web server and a reverse proxy on your internal network. Clients connect to the internal reverse proxy, which proxies them back to the external resource over a mutually authenticated SSL connection: the external web server will only talk SSL, and only to a client (the reverse proxy in this case) that presents a recognized/accepted client certificate.
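For option 1, a minimal nginx sketch, with 203.0.113.10 standing in for your network's public egress IP and ticket_app for a hypothetical upstream:

location /support-ticket {
    allow 203.0.113.10;  # the customer's egress point
    deny all;            # everyone else gets 403
    proxy_pass http://ticket_app;
}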
"Secure Redirect" is meaningless. What you want to do is make sure your ticket submit system will only accept clients connecting from your users' network. This would be a web site configuration thing.