Intra ServiceFabric communication with internal reverse proxy on localhost - http

I have a Service Fabric cluster with two applications. One application gets invoked from outside the cluster and then issues HTTP GET requests to the other application inside the cluster.
My first attempt was to address the second application with the cluster's reverse proxy IP, the same way the first application is addressed from outside:
http://10.0.0.1:19081/App2/App2.Service/
This led to unreliable communication inside the cluster: the first request always failed, while subsequent requests mostly succeeded.
Then I read about internal Service Fabric communication at https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reverseproxy. Now I address my second application via localhost and it seems to work as expected:
http://localhost:19081/App2/App2.Service/
The only open question is: does addressing applications inside the cluster via localhost only work because the second application happens to run on the same node? Or is there real reverse proxy behavior, so that the request reaches the application even when it runs on a different node?

The reverse proxy runs on all nodes, so it can be reached on localhost at all times. It forwards your call to the second service, which is resolved automatically.
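A minimal sketch of that call from inside the first service, assuming App2.Service exposes an HTTP GET endpoint (the /api/values path is hypothetical):

    import requests

    # The reverse proxy listens on port 19081 on every node, so localhost always works,
    # regardless of which node App2.Service is currently placed on.
    # "/api/values" is a hypothetical endpoint of App2.Service.
    response = requests.get("http://localhost:19081/App2/App2.Service/api/values", timeout=10)
    response.raise_for_status()
    print(response.text)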
You could also use the built-in DNS service to resolve internal services. This way, you save some of the overhead of the reverse proxy.
As opposed to using the IP address, you don't need to know whether the service runs on localhost or on a different node. You also don't get into trouble if your service is moved at run-time.
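A minimal sketch of the DNS approach, assuming the service manifest defines a ServiceDnsName such as app2.service and the service listens on port 8080 (both are assumptions; use whatever you configured):

    import requests

    # The Service Fabric DNS service resolves the name to the node the service currently
    # runs on; the call goes to the service's own port, without the reverse proxy.
    # "app2.service", port 8080 and "/api/values" are placeholders.
    response = requests.get("http://app2.service:8080/api/values", timeout=10)
    response.raise_for_status()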

Related

Routing - Web browser logs out of first instance when second instance is logged in, can I stop this behaviour?

This is easier to explain with a diagram.
Instance One -> port 5000 -> forwarded by the router as external port 5000
Instance Two -> port 5000 -> forwarded by the router as external port 5001
I connect to instance One using WANIP:5000; this works fine until I connect to instance Two (WANIP:5001).
Web browser logs me out of instance one when I log in to instance two.
How can I stop the web browser from logging me out of instance One when I connect to instance Two?
I was expecting instance One and instance Two to be usable simultaneously.
What have I tried?
Checked that instance One and instance Two are not on the same IP address. They currently use the same internal port (5000); the forwarded port is different for each instance.
I changed the port one of the instances runs on to a different one and forwarded that port.
I switched UDP off on the router.
I unticked NAT on the router.
These actions did not resolve my issue.
I can connect to both instances if I use a separate web browser for each instance, for example Firefox (instance One) and Edge (instance Two).
This does not happen when the instances are on a local LAN; the behaviour only manifests when the instances are forwarded through the router. If it helps, the instances are running .NET Core MVC.
Answering my own question in case anyone else has issues with this.
This seems to be a well-known problem involving cookies and the way the browser scopes them. Cookies are scoped by host name rather than by port, so the two instances do not get independent cookies and no new cookie is created when only the port number changes.
You can use an alias (a different host name) for the IP you are connecting to, which then allows the web browser to keep a separate cookie for each instance.
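For example (the IP address and host names here are made up), two hosts-file entries that point to the same WAN IP give the browser two distinct hosts and therefore two separate cookies:

    203.0.113.10  instance-one.example
    203.0.113.10  instance-two.example

You would then browse to http://instance-one.example:5000 and http://instance-two.example:5001 instead of WANIP:5000 and WANIP:5001.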

Correct way to get a gRPC client to communicate with one of many ECS instances of the gRPC service?

I have a gRPC client, which is not dockerised, and a server application, which I want to dockerise.
What I don't understand is this: gRPC first creates a connection with a server, which involves a handshake. So, if I deploy the dockerised server on ECS with multiple instances, how will the client switch from one to the other (e.g., if one gRPC server falls over)?
I know the AWS load balancer now works with HTTP/2, but I can't find information on how to handle the fact that the server might change after the client has already opened a connection to a different one.
What is involved?
You don't necessarily need an in-line load balancer for this. By using a Round Robin client-side load balancing policy along with a DNS record that points to multiple backend instances, you should be able to get some level of redundancy.
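A minimal sketch of that idea with the Python gRPC client; the DNS name and port are assumptions (on ECS the multi-address record could come from service discovery / Cloud Map):

    import grpc

    # "dns:///" makes the client resolve the name and pick up all A records;
    # the round_robin policy spreads calls across the resolved backends and takes a
    # backend out of rotation when its connection fails, reconnecting to the rest.
    channel = grpc.insecure_channel(
        "dns:///my-grpc-service.internal:50051",  # hypothetical service discovery name
        options=[("grpc.lb_policy_name", "round_robin")],
    )

    # Wait until at least one backend is reachable, then use the channel with your generated stub.
    grpc.channel_ready_future(channel).result(timeout=10)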

R-Shiny script timeout behind a load-balancer

I am testing a Shiny script on an instance in GCP. The instance resides behind a load-balancer that serves as a front end with a static IP address and an SSL certificate to secure connections. I configured the GCP instance as part of a backend service to which the load-balancer forwards requests. The connection between the load-balancer and the instance is not secured!
The issue:
Accessing the Shiny script via the load-balancer works, but the browser screen gets grayed out (times out) on the client side shortly after the connection is initiated. When the browser screen grays out, I have to start over.
If I try to access the Shiny script on the GCP instance directly (not through the load-balancer), the script works fine. I suppose that the problem is in the load-balancer, not the script.
I appreciate any help with this issue.
Context: Shiny uses a WebSocket (RFC 6455) for its constant client-server communication. If, for whatever reason, this WebSocket connection gets disconnected, the user experience is the described "greying out". Fortunately, GCP supports WebSockets.
However, it seems that your load balancer has an unexpectedly low HTTP timeout configured.
Depending on what type of load balancer you are using (TCP, HTTPS), this can be configured differently. For the HTTPS offering:
The default value for the backend service timeout is 30 seconds. The full range of timeout values allowed is 1-2,147,483,647 seconds.
Consider increasing this timeout under any of these circumstances:
[...]
The connection is upgraded to a WebSocket.
Answer:
You should be able to increase the timeout for your backend service with the help of this support document.
Mind you, depending on your configuration there could be more proxies involved which might complicate things.
Alternatively you can try to prevent any timeout by adding a heartbeat mechanism to the Shiny application. Some ways of doing this have been discussed in this issue on GitHub.
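For reference, with the external HTTP(S) load balancer the timeout change boils down to something like this (the backend service name and timeout value are examples):

    gcloud compute backend-services update shiny-backend \
        --global \
        --timeout=3600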

Fixing kubernetes service redeploy errors with keep-alive enabled

We have a Kubernetes service running on three machines. Clients both inside and outside of our cluster talk to this service over HTTP with the keep-alive option enabled. During a deploy of the service, the exiting pods have a readiness check that starts to fail when shutdown begins, and they are removed from the service's endpoints list appropriately; however, they still receive traffic, and some requests fail because the container exits abruptly. We believe this is because of keep-alive, which allows the client to re-use connections that were established while the host was Ready. Is there a series of steps one should follow to make sure we don't run into these issues? We'd like to allow keep-alive connections if at all possible.
The issue happens if the proxying/load balancing happens at layer 4 instead of layer 7. For internal services (a Kubernetes Service of type ClusterIP), kube-proxy does layer 4 proxying, so clients keep the connection open even after the pod isn't ready to serve anymore. Similarly, for Services of type LoadBalancer, if the backend protocol is set to TCP (which is the default with AWS ELB), the same issue happens. Please see this issue for more details.
The solution to this problem as of now is:
If you are using a cloud LoadBalancer, go ahead and set the backend to HTTP. For example, you can add the service.beta.kubernetes.io/aws-load-balancer-backend-protocol annotation to the Kubernetes Service and set it to http so that the ELB uses HTTP proxying instead of TCP (see the sketch after this list).
Use a layer 7 proxy/ingress controller within the cluster to route the traffic instead of sending it via kube-proxy.
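A minimal sketch of the first option (service name, selector and ports are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        # Make the AWS ELB proxy at layer 7 (HTTP) instead of layer 4 (TCP)
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080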
We're running into the same issue, so I'm just wondering if you figured out a way around it. According to this link, it should be possible by putting a load balancer in front of the service that makes direct requests to the pods and handles keep-alive connections on its own.
We will continue to investigate this issue and see if we can find a way of doing zero downtime deployments with keep-alive connections.

Running a federated RabbitMQ on port 80

Our client has a requirement that a web server can only have ports 80 and 443 open, both public and internal facing, but our application would benefit from using queuing on the inside.
Is it possible to run RabbitMQ over port 80?
Update
The setup is as follows.
We have a public facing API server which calls various back end systems.
In between the API server and the back end servers there is another layer which in most cases just works like a proxy.
Some of the back end systems, as well as the proxy layer, go up and down intermittently.
What I would like to do is have a queue on the API server, a queue in the proxy layer and a queue in the back end layer.
These queues would be federated so that messages placed on the queue on the API server would be forwarded all the way down to the back end servers (queuing is needed for inserts and updates only).
One way is to use the Web-STOMP plugin and SockJS, with nginx as a proxy (a sketch of the nginx part follows below).
Another way is a Node.js layer: Node.js callbacks for sending messages, handling events and creating messages.
On the server side, the application connects to RabbitMQ on localhost with the default port.
A third way is to use a subdomain with another IP address.
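For the first option, the nginx part could look roughly like this (the port and the /ws endpoint are the Web-STOMP defaults in recent RabbitMQ versions; adjust to your setup):

    # nginx listening on port 80, proxying the SockJS/WebSocket traffic
    # to RabbitMQ's Web-STOMP plugin (default port 15674, endpoint /ws).
    server {
        listen 80;

        location /ws {
            proxy_pass http://127.0.0.1:15674/ws;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 3600s;  # keep long-lived websocket connections open
        }
    }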
