We have a fleet of IoT devices and want to expose a port on each device to an end user for remote diagnostics sessions. To avoid exposing the IP addresses of our IoT devices, we want to proxy the connection. However, these IPs can be dynamic, and the proxy will of course need to be authenticated.
So the flow will be like this:
The user requests a remote diagnostic session;
Backend sends a request to the IoT device to check whether the diagnostic service is running, and to start it otherwise;
IoT device starts the diagnostic service and replies with the status;
Backend creates a new secure proxy which proxies the IoT device to the end user with authentication;
Backend replies to the user with the IP and authorization tokens to connect to the proxy;
User connects to the diagnostic session through the proxy;
Now, the only solution I have found so far is Ceryx; however, it has no authentication. NGINX Plus doesn't seem like an option, due to the significant license costs, but also because it doesn't seem able to handle this.
Are there any solutions besides adjusting Ceryx to support authentication?
With OpenResty you can set up your proxy using:
access_by_lua in the access request phase to authenticate your request
balancer_by_lua to handle dynamic proxying (picking the upstream IoT device per request)
This can be easily achieved, but will require you to write some code.
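A minimal sketch of what that could look like (the token check and the device lookup are hypothetical helpers standing in for your own backend logic; the rest uses the standard OpenResty APIs):

    # OpenResty fragment inside the http {} block -- a sketch, not production config
    upstream iot_device {
        server 0.0.0.1;   # placeholder, the real peer is chosen at runtime below

        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- device address resolved during the access phase (see below)
            local peer = ngx.ctx.device
            local ok, err = balancer.set_current_peer(peer.host, peer.port)
            if not ok then
                ngx.log(ngx.ERR, "failed to set peer: ", err)
                return ngx.exit(500)
            end
        }
    }

    server {
        listen 443 ssl;
        # ssl_certificate / ssl_certificate_key omitted

        location / {
            access_by_lua_block {
                -- 1. authenticate using the token the backend handed to the user
                local token = ngx.var.http_authorization
                if not token or not is_valid_session_token(token) then  -- hypothetical helper
                    return ngx.exit(ngx.HTTP_UNAUTHORIZED)
                end
                -- 2. resolve the (possibly dynamic) IoT device address for this session
                ngx.ctx.device = lookup_device_for(token)               -- hypothetical helper
            }
            proxy_pass http://iot_device;
        }
    }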
Related
I have the following setup:
{Client App} --[HTTP]--> {NGINX} --[HTTPS]--> {API Gateway}
I need such a setup since API Gateway does not support HTTP (it only supports HTTPS) and my client apps are old/obsolete and do not talk HTTPS. So far it works perfectly fine. Now I need to make sure that not just any request is accepted by my API Gateways. I know how to protect API Gateways using Cognito (if I leave the NGINX out of the equation). The process works as follows:
1.
{Consumer Server} --[Credentials]--> {Cognito}
<--[JWT Token]--
2.
{Consumer Server} --[JWT Token]--> {API Gateway}
...
To make sure there's no misunderstanding, the credential sent in step 1 is NOT an email. This is not for authenticating human users, but rather a machine-to-machine communication. And in my case, the client machine is an NGINX instance.
Having set the scene, this is what I'm trying to achieve. I want my client app to communicate with my API Gateway over HTTP. For that, I have to introduce an NGINX in between. But now I want to make sure that only my designated NGINX instances can do so, so I need to authenticate the requests coming in from the NGINX. That means that NGINX needs to follow the second diagram and ask Cognito for JWT tokens. And this is not a one-time process: tokens expire, and once they do, NGINX has to refresh them by sending another request to Cognito.
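To make the requirement concrete, this is roughly what the NGINX side would need to do. I sketched it with OpenResty/Lua (lua-resty-http plus a lua_shared_dict for caching), since stock nginx has no built-in way to fetch and refresh an OAuth token; the Cognito domain, client id and secret below are placeholders:

    # OpenResty fragment -- a rough sketch, not a ready-made solution
    lua_shared_dict token_cache 1m;

    server {
        listen 80;

        location / {
            access_by_lua_block {
                local cache = ngx.shared.token_cache
                local token = cache:get("jwt")
                if not token then
                    -- client_credentials grant against the Cognito token endpoint
                    local httpc = require("resty.http").new()
                    local res, err = httpc:request_uri(
                        "https://YOUR_DOMAIN.auth.YOUR_REGION.amazoncognito.com/oauth2/token", {
                            method = "POST",
                            body = "grant_type=client_credentials",
                            headers = {
                                ["Content-Type"]  = "application/x-www-form-urlencoded",
                                ["Authorization"] = "Basic " .. ngx.encode_base64("CLIENT_ID:CLIENT_SECRET"),
                            },
                        })
                    if not res or res.status ~= 200 then
                        ngx.log(ngx.ERR, "token fetch failed: ", err or res.status)
                        return ngx.exit(ngx.HTTP_BAD_GATEWAY)
                    end
                    local body = require("cjson").decode(res.body)
                    token = body.access_token
                    -- cache a bit shorter than the real expiry so it refreshes in time
                    cache:set("jwt", token, body.expires_in - 60)
                end
                -- attach the token to every request proxied to API Gateway
                ngx.req.set_header("Authorization", "Bearer " .. token)
            }
            proxy_ssl_server_name on;
            proxy_pass https://YOUR_API_GATEWAY_HOST;
        }
    }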
Does anyone know if there's a ready-made solution for this? Or an easy way to implement it?
I'm looking to build a similar application to https://www.proxysite.com/ but am not sure about the best architecture.
Looking to have a data flow like this.
User Web Browser -> myproxysite.com -> Nginx Proxy Server (somehow rotating IP for each client session) -> Targetsite.com
Then the user would need to maintain a full session on Targetsite.com as a logged in user.
In this example, targetsite.com is always the same site and is pre-determined. The challenge we are facing is that targetsite.com is blocking our users based on IP, many of whom are accessing it from the same office network.
So my questions are:
Does this seem correct?
Is there any way for me to configure nginx with a rotating proxy service like Luminati? Or do I need to add an API software layer to handle the actual IP changes?
Any guidance on this one would be greatly appreciated!
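For what it's worth, if the machine running nginx can be assigned several public IPs of its own, plain nginx can already spread user sessions across them with split_clients and proxy_bind, without an extra software layer; a rotating proxy service would instead have to be configured as the upstream. A rough fragment with made-up addresses, assuming myproxysite.com sets a per-user session cookie (called psid here) so each user keeps one egress IP for the whole session:

    # Fragment -- assumes the proxy host owns several public egress IPs
    split_clients "${cookie_psid}" $egress_ip {
        25%  203.0.113.10;
        25%  203.0.113.11;
        25%  203.0.113.12;
        *    203.0.113.13;
    }

    server {
        listen 443 ssl;
        server_name myproxysite.com;
        # ssl_certificate / ssl_certificate_key omitted

        location / {
            proxy_bind $egress_ip;                 # outgoing connections use the chosen IP
            proxy_set_header Host targetsite.com;
            proxy_ssl_server_name on;              # send SNI on the upstream TLS handshake
            proxy_pass https://targetsite.com;
        }
    }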
While I can't help you with your application, I do want to suggest an alternative. You mentioned an office, so it sounds like the people who will use the proxy are employees.
Luminati (now BrightData) has a proxy manager which you can host on any server. The proxy manager allows you to create ports (e.g. port 24000) and configure each one with whatever proxy you want (it doesn't have to be BrightData's proxy). It has a ton of different parameters you can set for each proxy (including IP rotation), and each port can have a unique setup.
Then you simply go to the user's PC, open the browser proxy settings, enter the IP address of the server the proxy manager is running on and the specific port you configured, and voila: you have central control of the proxies, and the user's browser is proxied.
A big benefit of this is that the logs in the proxy manager show all activity on each port you set up, so you can monitor traffic and success rates right there.
Proxy manager: https://prnt.sc/13uyjgj
short question: How can I host an MQTT server on my remote Ubuntu 16 server while at the same time hosting an HTTP server that will be using the MQTT data?
true question: I want to build an IoT system that will be MONITORED and CONTROLLED through an ESP32, which will SEND FEEDBACK to and ACCEPT COMMANDS from a remote server (maybe LAMP?). I also want the user to log in to a website hosted on this remote server, where they can monitor sensor values or send commands (e.g. turning an LED on or off).
So what's the way to go here?
I was advised to go with MQTT, but then the above problem arose.
what I've found: I've found that using Mosquitto MQTT, I may be able to serve a website using WebSockets. But I prefer a more scalable HTTPS approach; that is, I intend to have a database linked with my site and to run my PHP scripts.
I'm not that experienced, so please don't take anything for granted :)
MQTT uses a TCP connection and follows a publish/subscribe API model, whereas the web (HTTP) follows a RESTful API model (create, read, update, delete). If you want to stick with MQTT, then you could use a SaaS offering such as enterprise MQTT from HIVE, which provides this integration; they charge a fee and in return give you an account and a dashboard for all your devices. Otherwise, you can try to build your own middleware that integrates MQTT with web services.
Another thing I would recommend is CoAP, which is also an M2M protocol but follows a RESTful API model over a UDP connection. It has a direct forward proxy to convert CoAP packets to HTTP(S) packets and vice versa.
In MQTT you have a central server (the broker) to which the nodes send their data and from which they fetch the data they need through topic filters.
In CoAP, each device that has data to share becomes a server, and any device interested in that data becomes a client and sends a GET request to the respective server. Similarly, a PUT request with a payload from a client would update the value at the server.
You really should not be looking to combine the MQTT broker with an HTTP server, especially if you intend the HTTP server to actually be an application server (running back-end logic, e.g. PHP). These are two totally separate systems. There is nothing to stop your application logic from connecting to the broker as a client.
If you intend to use MQTT over WebSockets, you can use something like nginx to proxy the WebSockets connection to the broker so it can sit behind the same logical HTTP/HTTPS address.
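For example, with Mosquitto configured with a WebSockets listener (commonly port 9001), a location block along these lines puts the broker behind the same HTTPS host as the PHP site; the path and port here are assumptions:

    # Fragment -- assumes mosquitto.conf contains "listener 9001" and "protocol websockets"
    location /mqtt {
        proxy_pass http://127.0.0.1:9001;
        proxy_http_version 1.1;                    # required for the WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 300s;                   # keep idle MQTT connections open
    }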
Our client has a requirement that a web server can only have ports 80 and 443 open, both public and internal facing, but our application would benefit from using queuing on the inside.
Is it possible to run RabbitMQ over port 80?
Update
The setup is as follows.
We have a public facing API server which calls various back end systems.
In between the API server and the back end servers there is another layer which in most cases just works like a proxy.
Some of the back end systems, as well as the proxy layer, go up and down intermittently.
What I would like to do is have a queue on the API server, a queue in the proxy layer and a queue in the back end layer.
These queues would be federated so that a message placed on the queue on the API server would be forwarded all the way down to the back-end servers (queuing is needed for inserts and updates only).
One way is to use the Web-STOMP plugin and SockJS, with nginx as a proxy (sketched below).
Another way is a Node.js layer that sends messages, handles events, and creates messages with Node.js.
The server side talks to RabbitMQ over a localhost connection on the default port.
A third way is to use a subdomain with another IP address.
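For the first way, a server block like the following exposes RabbitMQ's Web-STOMP listener on port 80. It assumes the rabbitmq_web_stomp plugin is enabled on its default port 15674; the endpoint path (/ws here, /stomp for the older SockJS transport) depends on your RabbitMQ version:

    # Fragment -- RabbitMQ Web-STOMP behind nginx on port 80 (sketch)
    server {
        listen 80;

        location /ws {
            proxy_pass http://127.0.0.1:15674;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 300s;
        }
    }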
My question is: what can stop an HTTP POST from desktop applications?
For example:
I have a desktop application which, before it starts, asks users for some information,
like a username and an email. It then takes this information and posts it to a PHP web page, and PHP inserts it into a MySQL server. Anyway, the problem now is that, let's say,
6 of 16 download(s) are registered and the others are not, so what can make the HTTP POST not run correctly?
Note:
The software was tested on every Windows OS and runs OK.
The software runs with all antivirus programs OK.
The software adds a port through the Windows Firewall OK.
So what can make the HTTP POST not run correctly?
Regards
There are many things that could stop communication between your application and your database.
If the client has a firewall that requires authorisation for outbound requests.
If the client has to connect via a proxy server, and your application is not proxy aware
If your website fails to process your request (perhaps the MySQL server is too busy to allow connections, etc.)
So, consider an end user behind a WebSense proxy that additionally allows administrators to filter out unwanted traffic. If your application is not proxy aware, it will fail to connect; if your application is proxy aware, and whatever WebSense category you fall into is filtered for that client, it will also fail to connect.