SignalR: WebSocket security and networking behind a load balancer (ASP.NET)

We are evaluating SignalR for chat in our web application. The web application runs on an IIS server farm behind a NetScaler load balancer. The backplane will be Redis.
Which ports need to be opened on the load balancer to enable WebSocket connections from the web browser to the web server?
Other than enabling sticky sessions for clients, do I need to do anything else on the load balancer?
SSL is terminated at the load balancer. Do we need to do anything additional to enable SSL for WebSockets?
Is there anything I need to do on the load balancer for long polling?
We are a HIPAA site. Are there any known vulnerabilities in using WebSockets?
We create a session cookie after initial authentication, which gets validated on every server request using a global filter in ASP.NET MVC 5. Will this global filter get invoked on every server-side method of SignalR?
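Note on the last point: an ASP.NET MVC global filter runs only in the MVC pipeline, so it will not fire for SignalR hub method calls; SignalR has its own pipeline and its own AuthorizeAttribute for the equivalent check. A minimal sketch for SignalR 2.x, where the "AppSession" cookie name and the SessionStore validator are hypothetical stand-ins for your existing session validation:

```csharp
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

// Hypothetical stand-in for however the site validates its session cookie.
public static class SessionStore
{
    public static bool IsValid(string token) { return token != null; }
}

// SignalR-side counterpart of the MVC global filter: re-validates the
// session cookie on connect and on every hub method invocation.
public class ValidateSessionCookieAttribute : AuthorizeAttribute
{
    public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
    {
        Cookie cookie; // "AppSession" is a hypothetical cookie name
        return request.Cookies.TryGetValue("AppSession", out cookie)
            && SessionStore.IsValid(cookie.Value);
    }

    public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext context, bool appliesToMethod)
    {
        Cookie cookie;
        return context.Hub.Context.Request.Cookies.TryGetValue("AppSession", out cookie)
            && SessionStore.IsValid(cookie.Value);
    }
}

[ValidateSessionCookie]
public class ChatHub : Hub
{
    // Every public method on this hub now passes through the attribute.
    public void Send(string message) { Clients.All.addMessage(message); }
}
```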

Related

Fargate errors when frontend service tries to communicate with backend service via Service Discovery

I have a frontend app in Fargate (ECS) in a private subnet, exposed to the internet through an Application Load Balancer. My frontend makes API calls to my backend apps, also in Fargate, in the same VPC.
User calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery / AWS Cloud Map). As a result, the user's browser shows a "blocked: mixed content" error, since half of the communication happens over HTTPS and the other half over HTTP.
As far as I know and from what I've been able to find, it is not possible to use an SSL/TLS certificate with Service Discovery.
I've done a lot of research and couldn't find anything really useful. I also tried creating an internal load balancer for each backend service, but the communication times out; it only works when I'm connected to a VPN.
What am I missing here? Do I need an internal load balancer in front of each backend service so I can attach a certificate between frontend and backend? What is the best approach to solve this?
The user's browser wouldn't know anything about the backend protocol if the communication were happening between the front-end server and the back-end server. Apparently you have front-end client JavaScript code running in the user's web browser trying to access the backend server directly.
If you want to access the backend server directly from the user's web browser, then Service Discovery won't work, because Service Discovery is only for traffic inside the VPC. And of course, by trying to use Service Discovery in this way you are also creating a security issue, which the browser is correctly blocking. You will need to add another load balancer, or another listener on your current load balancer, that exposes the backend API to the internet.
Alternatively you could use a reverse proxy like Nginx on your front-end server to send backend API requests to the backend service, and then have your client-side JavaScript code send all requests to the front-end server.
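Nginx is the usual tool for that reverse-proxy approach. Purely to illustrate the pattern in code, here is a sketch assuming (hypothetically) an ASP.NET Core front end: it exposes /api/* itself and relays those calls over plain HTTP to the backend's private Cloud Map name, so the browser only ever speaks HTTPS to the front end. The backend.internal.local host name stands in for your Cloud Map service name, and only GET is handled for brevity:

```csharp
// Minimal reverse-proxy sketch (GET-only; response headers omitted).
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient("backend", client =>
    client.BaseAddress = new Uri("http://backend.internal.local:8080/"));
var app = builder.Build();

app.MapGet("/api/{**path}", async (string path, HttpContext ctx, IHttpClientFactory factory) =>
{
    // Relay the browser's HTTPS call over plain HTTP inside the VPC.
    var backend = factory.CreateClient("backend");
    using var upstream = await backend.GetAsync(path + ctx.Request.QueryString);
    ctx.Response.StatusCode = (int)upstream.StatusCode;
    await upstream.Content.CopyToAsync(ctx.Response.Body);
});

app.Run();
```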

Why do we need a web server alongside an app server in an orchestrated containerised architecture?

Assuming I am using a framework like Flask to serve requests, I understand that a web server handles static file requests and directs any program-execution requests to the app server (example: nginx), whereas an app server can handle both static files and program execution (example: gunicorn).
It makes sense to have a web server to handle static files, caching, request redirection, and load balancing. The request first comes to the web server, which knows how to handle it and redirects any program execution to the app server.
However, consider an architecture that uses orchestration and containerization: there is a cluster of nodes, each node running a container, and the container holds only the app server (example: gunicorn). The request arrives at the API management/gateway (which has the same features as a web server, other than serving static files), gets redirected to the cluster of nodes (which does the load balancing), and eventually reaches a node whose app server (example: gunicorn) serves the request.
Is there any benefit to having a web server running alongside the app server in such a configuration?
In Azure, does the API gateway play the role of a web-server equivalent?
It depends. It's common to have some proxy/routing logic (e.g. URL rewriting) in the API gateway, which is probably why you might still run both the app server and a web server inside a container.
In Azure, API Management is a fully managed API gateway which allows you to implement caching, routing, security, API versioning, and more.
More info:
https://microservices.io/patterns/apigateway.html
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern

How to handle a SignalR implementation in a load-balanced environment

I am fairly new to SignalR concepts. I have a scenario where load balancing is in place with two servers: the client request is taken by the load balancer, which redirects it to one of the servers based on load. After the redirection, the connection from the client to the server is lost. An important detail is that client requests are made for different purposes, i.e. they call different methods on the hub. The server continues processing the request, and if it detects any status change during this time it has to push a notification back to the clients. However, at that point the server won't know which client it has to respond to, because the load balancer doesn't store any information about this once the client-to-server connection is lost. How do I handle this kind of scenario? Should I be manually storing the session ID and other details in a table?
I have gone through the scaleout options the SignalR team suggests for load balancing using a backplane (Azure Service Bus, Redis, and SQL Server). However, my scenario is a little different. Any help will be appreciated.
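For what it's worth, the usual pattern here is to track which connection belongs to which user yourself, and let the backplane deliver the message to whichever server holds the connection. A minimal sketch, assuming ASP.NET SignalR 2.x with a backplane registered at startup; the in-memory dictionary is a stand-in for whatever shared store (a Redis hash, a SQL table) you would use so all servers see the same map:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class StatusHub : Hub
{
    // Hypothetical map of user name -> connection id. In production this
    // must live in a shared store (Redis, SQL), not in process memory,
    // because each server only sees its own static fields.
    public static readonly ConcurrentDictionary<string, string> Connections =
        new ConcurrentDictionary<string, string>();

    public override Task OnConnected()
    {
        // Assumes the connection is authenticated so Identity.Name is set.
        Connections[Context.User.Identity.Name] = Context.ConnectionId;
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        string removed;
        Connections.TryRemove(Context.User.Identity.Name, out removed);
        return base.OnDisconnected(stopCalled);
    }
}

// Elsewhere on the server, when a status change is detected:
public static class StatusNotifier
{
    public static void NotifyUser(string userName, string newStatus)
    {
        string connectionId;
        if (StatusHub.Connections.TryGetValue(userName, out connectionId))
        {
            // With a backplane (e.g. Redis) configured, this reaches the
            // client even if its connection lives on the other server.
            var hub = GlobalHost.ConnectionManager.GetHubContext<StatusHub>();
            hub.Clients.Client(connectionId).statusChanged(newStatus);
        }
    }
}
```

SignalR 2.x also has an IUserIdProvider extension point; with it you can call Clients.User(userName) directly and skip the hand-rolled bookkeeping.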

Load balancing and sessions

What is the best approach for load balancing web servers? My services run on .NET and Mono, so they could be hosted on IIS or Apache2, and they will have to provide SSL connections.
I've read about two main approaches: storing the state on a common server, and sticky sessions. Are there any others?
I've read three different things about sticky sessions:
1) the load-balancing device knows which server you started the connection with, and all further connections from that host are routed to the same server.
2) the load-balancing device reads a cookie named JSESSIONID.
3) the load-balancing device reads a cookie named ASPSESSIONID.
I'm a little bit confused. What will happen exactly? As the connections will use SSL, the load-balancing device has no chance to read the cookies, so then what?
As for storing the state on a common server, what solutions do you know of? I've read that memcached is a good solution, but are there any others?
Cheers.
When using SSL with a load balancer, it is common to put the SSL certificate on the load balancing server, and not on the back end servers. In this way you only need 1 certificate on 1 server. The load balancer then talks to the back end servers using plain HTTP. This obviously requires that your back end servers are not directly accessible from the internet.
So, if the load balancer is responsible for decrypting the request, it will also be able to inspect the request for a jsessionid.
Sticky sessions work well with Apache as load balancer. You should check out the Apache modules mod_proxy and mod_proxy_balancer.
Generally SSL load balancing means that the client is talking to the load balancer over HTTPS, and the load balancer is talking to the web server via HTTP.
Some load balancers are smart enough to establish an SSL session with the web server (so it can read cookies) and maintain a separate SSL session with the client.
And, some load balancers can maintain stickiness without using web server cookies. My load balancers are able to send their own cookies to the client (they have a bunch of other stickiness settings as well).
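To make the shared-state option concrete: once session data lives in a common store, any node can serve any request and stickiness becomes optional. A minimal sketch using the Enyim memcached client for .NET; the key scheme is made up, and the client reads its server list from the enyim section of the application's config file:

```csharp
using Enyim.Caching;
using Enyim.Caching.Memcached;

public class SharedSessionExample
{
    public static void Main()
    {
        // Server list comes from the enyim.com/memcached section in app.config.
        using (var cache = new MemcachedClient())
        {
            // Any web server in the pool can write the session state...
            cache.Store(StoreMode.Set, "session:abc123", "user=42;role=admin");

            // ...and any other server can read it back on the next request,
            // so the load balancer is free to route requests anywhere.
            var state = cache.Get("session:abc123");
        }
    }
}
```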

Call ASP.NET Web Service on the Same Farm as Web Application

I am getting the following error when I try to call an ASP.NET Web Service from an ASP.NET Web Application. I believe it is because the Web Service and Web Application are on the same Farm/behind the same Load Balancer.
A connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because
connected host has failed to respond [IP Address removed]:80
This error does not occur when I call the Web Service on the Farm from the Web App on my local machine, or when I call the Web Service on my local machine from the Web App on the Farm.
Any idea why this error is occurring?
The solution to my problem was to turn on NATing on the load balancer.
The request was being made from a server in the farm to the Load Balancer, then the Load Balancer would send that request to one of the servers in the farm (possibly even the same server that requested it). The problem was, the server that was handling the request would try to send the response directly back to the "requesting client" instead of back to the Load Balancer, so the server that made the request would just ignore the response because it was not being sent by the Load Balancer. By turning NATing on, all responses are sent back to the Load Balancer, and then the Load Balancer sends the response on to the original client.
This is just a guess, but can the web server actually see the IP address being used? If it's on a farm behind a load balancer then that IP might be being blocked by the load balancer itself or a firewall or proxy server.
Can you access the web server via remote desktop and ping the IP address?
The TCP/IP stack on your farm node is not going to route the call to the IP address of the load balancer, but will automatically translate this into a local call on 127.0.0.1:80 on the specific farm node that is making the call. Make sure your web servers are set up to handle this case.
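If enabling NAT (or handling the loopback case) isn't an option, a common workaround is to point same-farm calls at the local machine instead of the load-balanced address. A sketch assuming a classic ASMX proxy generated with "Add Web Reference"; the FarmServices.MyService class and GetStatus method are hypothetical:

```csharp
// FarmServices.MyService is a hypothetical proxy class generated from the
// service's WSDL (derives from SoapHttpClientProtocol).
public static class SameFarmCaller
{
    public static string CallLocally()
    {
        var service = new FarmServices.MyService();

        // SoapHttpClientProtocol exposes a Url property, so the endpoint can
        // be redirected at run time to skip the hairpin through the load
        // balancer when caller and service share the same farm.
        service.Url = "http://localhost/MyService/MyService.asmx";

        return service.GetStatus(); // hypothetical method
    }
}
```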
