Our client requires that a web server have only ports 80 and 443 open, both public and internal facing, but our application would benefit from using queuing internally.
Is it possible to run RabbitMQ over port 80?
Update
The setup is as follows.
We have a public facing API server which calls various back end systems.
In between the API server and the back end servers there is another layer which in most cases just works like a proxy.
Some of the back end systems, as well as the proxy layer, go up and down intermittently.
What I would like to do is have a queue on the API server, a queue in the proxy layer and a queue in the back end layer.
These queues would be federated, so that a message placed on the queue on the API server would be forwarded all the way down to the back end servers (queuing is needed for inserts and updates only).
One way is to use the Web-STOMP plugin with SockJS, using nginx as the proxy.
Another way is a Node.js layer: Node.js callbacks send messages, handle events, and create messages. The server side talks to RabbitMQ over a localhost connection on the default port (see the sketch after these options).
A third way is to use a subdomain with another IP address.
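As a minimal sketch of that server-side connection, here is how the application could publish locally over the default AMQP port (5672) while only nginx listens on 80/443 for outside traffic; the queue name api.events is illustrative:

import pika

# Connect to the broker on localhost over the default AMQP port (5672);
# nothing here requires opening additional ports to the outside world.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="api.events", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="api.events",
    body=b"insert/update payload",
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()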
Related
I have a frontend app in Fargate (ECS) in a private subnet, exposed to the internet through an Application Load Balancer. My frontend makes API calls to my backend apps, also in Fargate, in the same VPC.
Calls from users to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). Because of this, the user's browser shows the error "blocked: mixed content", since half of the communication is made via HTTPS and the other half via HTTP.
[infrastructure diagram]
As far as I know and have been able to find, it is not possible to use an SSL/TLS certificate with Service Discovery.
I've done a lot of research and couldn't find anything really useful. I also tried creating an internal load balancer for each backend service, but the communication times out; it only works when I have a VPN connected.
What am I missing here? Do I need an internal load balancer in front of each backend service to attach a certificate between frontend and backend? What is the best approach to solve this?
The user's browser wouldn't know anything about this if the communication was happening between the front-end server and the back-end server. Apparently you have front-end client JavaScript code running in the user's web browser trying to access the backend server directly.
If you want to access the backend server directly from the user's web browser, then service discovery won't work, because service discovery only handles traffic inside the VPC. And by trying to use service discovery this way, you are also creating the security issue that the browser is correctly blocking. You will need to add another load balancer, or another listener on your current load balancer, that exposes the backend API to the Internet.
Alternatively you could use a reverse proxy like Nginx on your front-end server to send backend API requests to the backend service, and then have your client-side JavaScript code send all requests to the front-end server.
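As a rough illustration of that alternative (in practice nginx itself would do this job), here is a minimal Python sketch; the name backend.local and port 8080 are assumptions standing in for your Cloud Map service name:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://backend.local:8080"  # assumed internal Service Discovery name

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/api/"):
            # Forward API calls to the backend over the VPC-internal network,
            # so the browser only ever talks HTTPS to the frontend.
            with urlopen(BACKEND + self.path) as resp:
                body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type", resp.headers.get("Content-Type", "application/json"))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

HTTPServer(("", 8080), ProxyHandler).serve_forever()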
I have a Service Fabric cluster with two applications. One application gets invoked from outside the cluster and then issues HTTP GET requests to the other application inside it.
My first attempt was to address the second application with the ServiceFabric's reverse proxy IP, the same as the first application is addressed with:
http://10.0.0.1:19081/App2/App2.Service/
This led to unreliable communication inside the cluster: the first request always failed, while the second mostly succeeded.
Then I read about internal Service Fabric communication at https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reverseproxy. Now I address my second application with localhost, and it seems to work as expected:
http://localhost:19081/App2/App2.Service/
The only open question is: Does addressing applications inside the ServiceFabric with localhost only work because the application is also running on the same node? Or does it work because there is real reverse proxy behavior and even if the application does not run on the same node, the request gets to it regardless?
The reverse proxy runs on all nodes, so it can be reached on localhost at all times. It forwards your call to the second service, which is resolved automatically.
You could also use the built-in DNS service to resolve internal services. This way, you save some of the overhead of the reverse proxy.
As opposed to using the IP address, you don't need to know whether the service runs on localhost or on a different node. You also don't run into trouble if your service is moved at run-time.
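To make the two addressing modes concrete, here is a hedged Python sketch; port 19081 is the reverse proxy's default, while the DNS name app2.service and port 8080 are assumptions that depend on the ServiceDnsName and endpoint you configured:

from urllib.request import urlopen

# 1. Via the reverse proxy, which runs on every node, so it is always on localhost:
print(urlopen("http://localhost:19081/App2/App2.Service/api/values").read())

# 2. Via the built-in DNS service, skipping the reverse-proxy hop:
print(urlopen("http://app2.service:8080/api/values").read())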
short question: How can I host an MQTT server on my remote Ubuntu 16 server while at the same time hosting an HTTP server that will be using the MQTT data?
true question: I want to build an IoT system that will be MONITORED and CONTROLLED by an ESP32, which will SEND FEEDBACK to and ACCEPT COMMANDS from a remote server (maybe LAMP?). I also want the user to log in to a website hosted on this remote server, where s/he can monitor any sensor values or send commands (e.g. turning an LED on or off).
So what's the way to go here?
I was advised to go with MQTT, but then the above problem arose.
what I've found: I've found that using Mosquitto MQTT, I may be able to serve a website using WebSockets. But I would prefer a more scalable HTTPS approach; that is, I intend to have a database linked to my site and to run my PHP scripts.
I'm not that experienced, so please don't take anything for granted :)
MQTT uses a TCP connection and follows a publish/subscribe model, whereas the web (HTTP) follows a RESTful model (create, read, update, delete). If you want to stick with MQTT, you could use a SaaS offering such as HiveMQ's enterprise MQTT, which provides this integration; it charges a fee, and in return you get an account and a dashboard for all your devices. Otherwise, you can try to build your own middleware to integrate MQTT with web services, as sketched below.
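A minimal sketch of such middleware, assuming the paho-mqtt package; the HTTP port, topic, and broker host are illustrative:

from http.server import BaseHTTPRequestHandler, HTTPServer
import paho.mqtt.publish as publish

class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the web request's body and republish it to the devices over MQTT.
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        publish.single("devices/commands", payload, hostname="localhost")
        self.send_response(204)
        self.end_headers()

HTTPServer(("", 8000), BridgeHandler).serve_forever()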
Another thing I would recommend is CoAP, which is also an M2M protocol but follows the RESTful model over a UDP connection. It has a direct forward proxy to convert CoAP packets to HTTP(S) packets and vice versa.
In MQTT you have a central server (the broker) to which the nodes send their data, and from which they fetch the data they need through topic filters.
In CoAP, each device that has data to share becomes a server, and any device interested in that data becomes a client and sends a GET request to the respective server. Similarly, a PUT request with a payload from a client updates the value at the server, as in the sketch below.
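A hedged sketch of that CoAP exchange, assuming the third-party aiocoap library; the device URI and resource paths are illustrative:

import asyncio
from aiocoap import Context, Message, GET, PUT

async def main():
    ctx = await Context.create_client_context()

    # The device holding the data acts as the server; a client GETs its resource.
    request = Message(code=GET, uri="coap://device.local/sensors/temp")
    response = await ctx.request(request).response
    print("temperature:", response.payload)

    # A PUT with a payload updates the value held by that same device/server.
    update = Message(code=PUT, uri="coap://device.local/actuators/led", payload=b"on")
    await ctx.request(update).response

asyncio.run(main())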
You really should not be looking to combine the MQTT broker with an HTTP server, especially if you intend the HTTP server to actually be an application server (running back end logic, e.g. PHP). These are two totally separate systems. There is nothing to stop your application logic connecting to the broker as a client.
If you intend to use MQTT over WebSockets, you can use something like nginx to proxy the WebSockets connection to the broker so it can sit behind the same logical HTTP/HTTPS address, as in the client sketch below.
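A hedged client-side sketch, assuming the paho-mqtt 1.x API; the host, port 443, and the /mqtt path all depend on how you configure the nginx proxy:

import paho.mqtt.client as mqtt

client = mqtt.Client(transport="websockets")  # paho-mqtt 1.x constructor assumed
client.ws_set_options(path="/mqtt")  # nginx proxies this path to the broker
client.tls_set()                     # WSS, since we ride on the HTTPS port
client.connect("example.com", 443)
client.loop_start()
client.publish("sensors/temp", "21.5")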
Using Spring 4, I need to configure WebSocket to use a different port than HTTP.
In other words, by default the user accesses HTTP and WebSocket as follows:
http://server:9090/
ws://server:9090/
But I need the following:
http://server:9090/
ws://server:9999/
In code I have only the following:
@Configuration
@EnableWebSocket
public class WebSocketConfig implements WebSocketConfigurer {
I also have a handler:
public class Handler extends TextWebSocketHandler {
Is there such an ability in Spring?
AFAIK all current implementations of websockets depend on a handshake via HTTP. After the handshake the existing connection is upgraded. You don't get a new one and the port stays the same. Basically all websocket connections start as HTTP connections.
As a side note the ports, IP addresses etc. are subject of the server, not the application itself.
It might be possible to configure your server so that two ports can be used for an application, but they would both be used for HTTP and websocket alike. On the other hand this might be useful in your situation.
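To illustrate the point, here is a hedged Python sketch of the opening handshake, using the example host and port from the question; note the request is ordinary HTTP until the server replies with 101 Switching Protocols:

import socket

HOST, PORT = "server", 9090  # the same port the HTTP application listens on

# A WebSocket connection begins as a plain HTTP GET carrying Upgrade headers
# (the Sec-WebSocket-Key below is the example value from RFC 6455).
handshake = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}:{PORT}\r\n"
    "Upgrade: websocket\r\n"
    "Connection: Upgrade\r\n"
    "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n"
    "Sec-WebSocket-Version: 13\r\n\r\n"
)
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(handshake.encode())
    print(sock.recv(1024).decode())  # expect: HTTP/1.1 101 Switching Protocols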
Spring WebSocket different port for ws:// protocol
Due to a limitation, and in order to use websockets on the App Engine flexible environment, the app needs to connect directly to the application instance using the instance's public external IP. This IP can be obtained from the metadata server.
All MVC/REST (http://) calls should still be served from 8080, while in the App Engine flexible environment ws:// is served from ws://external_ip:65080.
Working code:
https://github.com/kevendra/springmvc-websocket-sample
http://localhost:8080/
ws://localhost:8080/
To work with App Engine, the following is needed:
http://localhost:8080/
ws://localhost:65080/ - locally
ws://external_ip:65080/ - on App Engine
Ref:
This extends org.eclipse.jetty.websocket.server.WebSocketHandler and starts a server context on 65080, but I was looking for a server managed by Spring:
How do I create an embedded WebSocket server Jetty 9?
Spring 4 WebSocket Remote Broker configuration
I've been searching and trying for weeks now to find a solution to my issue that I can understand and easily implement, but I've had no joy. So I would be very grateful if someone could put me out of my misery.
I'm building an iPhone app similar in functionality to apps like "Air Video" and "Air Playit". The app should communicate with a server running on a remote host. This server should be able to execute a command sent by the iPhone to encode a video and stream it over HTTP.
In my case, my iPhone app sends commands to be executed on a remote host. The remote host is running a Python socket server listening, for example, on port 3333.
On the iPhone, I'm simply using
"CFStreamCreatePairWithSocketToHost", "CFWriteStreamOpen" and
"CFReadStreamOpen"
to connect, write and read data.
My remote host successfully intercepts the commands and starts the encoding.
To serve the contents, I'm having to run a separate HTTP server (I'm using Python's SimpleHTTPServer), which is listening on another port.
What I would like to do is use the same port for both system commands and HTTP requests.
The apps I've mentioned above seem to do it that way, and I've noticed they have their own built-in web server.
I'm sure I'm missing something but please bear with me this is my first attempt at building an app.
Encode your system commands as special HTTP requests. Decide which thing to do (execute a command or serve the contents) based on the HTTP request, not on the incoming port. If you need to use separate HTTP servers (as you mentioned), consider having a layer that receives everything from the devices and dispatches to the other servers (or ports) based on the request.
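A minimal sketch of that idea, building on the stdlib server the question already uses; the /cmd/encode path and the ffmpeg invocation are purely illustrative:

from http.server import SimpleHTTPRequestHandler, HTTPServer
import subprocess

class CommandAwareHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/cmd/encode"):
            # A "system command" arrives as an ordinary HTTP request...
            subprocess.Popen(["ffmpeg", "-i", "in.mov", "out.mp4"])  # example command
            self.send_response(202)
            self.end_headers()
            self.wfile.write(b"encoding started")
        else:
            # ...while every other path is served as static content,
            # exactly like SimpleHTTPServer did on the second port.
            super().do_GET()

HTTPServer(("", 3333), CommandAwareHandler).serve_forever()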