Network Load Balancer Not Disconnecting From On-Prem Application - TCP

I am currently working on a setup where I need to connect an on-prem application to an application running in AWS EKS via a Network Load Balancer. The flow is: POD ----> Ingress Controller ---> NLB ---> On-Prem application.
I have a stream timeout of 10 minutes (600 seconds) set in the nginx.conf file. As expected, after 10 minutes of inactivity the pod's connection is closed; however, the on-prem application still shows its connections to the Network Load Balancer as active, as seen with the netstat command.
I am currently analysing the issue using tcpdump, but I wanted to know whether there is any configuration on the Ingress/NLB end that can disconnect the on-prem application as soon as the pod is disconnected.
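For reference, here is a minimal sketch of the stream timeout described above, assuming the ingress is an NGINX stream (TCP) proxy; the upstream name and ports are hypothetical:

    stream {
        upstream onprem_backend {
            # Hypothetical NLB endpoint that fronts the on-prem application.
            server nlb.example.com:9000;
        }
        server {
            listen 9000;
            proxy_pass onprem_backend;
            # Close the proxied connection after 10 minutes of inactivity.
            proxy_timeout 600s;
        }
    }

When proxy_timeout fires, NGINX closes its side of both connections; whether that close actually reaches the on-prem host depends on the hops in between, which is what the netstat output here suggests is going wrong.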

Related

Problems with k8s service after a few minutes

We started a Redis chart (Bitnami's Redis Helm chart, cluster mode disabled) and connected our Java app (using Redisson) to the service.
After 5 minutes the connection to Redis is closed ("Reading from client: Connection reset by peer" in the Redis debug log). The networking still seems fine and we can establish new connections, but the old ones are closed, even though the Redis timeout is 0.
When configuring the Java app to access the Redis pod directly (without the k8s service in the middle), the problem doesn't happen.
We didn't find any similar problem on the web, which is weird (pretty out-of-the-box settings, nothing special).
We have no idea what could cause such problems; any ideas?
Kubernetes version 1.11.7, installed on AWS via kops. We tried a kubenet cluster and a new Calico/Flannel (canal) cluster (without policies), switched Redis versions (4 and 5), and accessed the service by IP and by name, but it didn't help.
Redis timeout is 0 (disabled).
Bitnami's Helm chart: https://github.com/helm/charts/tree/master/stable/redis
Using usePassword: false and loglevel: debug.
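If the resets come from an idle-connection timeout somewhere between the client and the pod (conntrack, NAT, or a cloud load balancer are the usual suspects on AWS), one common mitigation is enabling server-side TCP keepalives so idle connections still generate traffic. A sketch of the redis.conf setting (the 60-second interval is an arbitrary choice; with the Bitnami chart it can be passed in through the chart's Redis configuration values):

    # Send TCP keepalive probes on idle client connections every 60 seconds,
    # so NAT/conntrack entries along the path are refreshed.
    tcp-keepalive 60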

Kafka Receives Messages But Fails To Add Them To The Topic - Local Kafka VM and Minikube Kubernetes Cluster Setup

Setup
Laptop with:
- Kafka in a VirtualBox VM (Vagrant), with port 9092 forwarded from the laptop's localhost
- Kubernetes cluster in a VirtualBox VM (Minikube)
Desired Outcome
Microservices on my Minikube cluster can fire messages to the Kafka VM.
Note that this works in Google Container Engine (GKE)
Actual Outcome
From the laptop I can use a console producer to send messages to the Kafka VM, and it happily obliges, adding them to the topic. But when a microservice from the Kubernetes cluster sends a message, the message is received but not added to the topic.
Instead I get this error on the microservice:
Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for generated-test-script-0
If I tail kafka-request.log I see ...
[2017-02-08 21:57:05,891] TRACE Completed request:{api_key=3,api_version=1,correlation_id=0,client_id=producer-5} -- {topics=[generated-test-script]} from connection 10.0.2.15:9092-10.0.2.2:50124;totalTime:0,requestQueueTime:0,localTime:0,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (kafka.request.logger)
While in the "success" case, when I simply use a console producer on the laptop, I see two lines: the first the same as above, but then, I guess, another one for the ACK ...
[2017-02-08 22:08:12,764] TRACE Completed request:{api_key=3,api_version=2,correlation_id=0,client_id=console-producer} -- {topics=[test]} from connection 10.0.2.15:9092-10.0.2.2:50748;totalTime:6,requestQueueTime:0,localTime:6,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (kafka.request.logger)
[2017-02-08 22:08:13,799] TRACE Completed request:{api_key=0,api_version=2,correlation_id=1,client_id=console-producer} -- {acks=1,timeout=1500,topic_data=[{topic=test,data=[{partition=0,record_set=java.nio.HeapByteBuffer[pos=0 lim=39 cap=39]}]}]} from connection 10.0.2.15:9092-10.0.2.2:53696;totalTime:22,requestQueueTime:1,localTime:21,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (kafka.request.logger)
Conclusion And Thoughts
So there is no ERROR as such on the Kafka server side, just on the client. My guess is that this is a network setup issue (NAT?) whereby the microservice in the virtual Kubernetes cluster can talk to my Kafka VM, but the reply route is dropped?
Kafka has to return metadata for the first message sent, so setting the batch size to 0 or "acks" to 0 doesn't really help as a hack, because the initial metadata still has to be sent back.
Any thoughts or pointers would be great as I really want to run this cluster and Kafka VM locally for dev work.
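For what it's worth, a common culprit in exactly this kind of setup is the broker advertising its NAT-internal address (the 10.0.2.15 visible in the request log) in the metadata response; the producer then tries to connect to that unreachable address and the batch expires. A sketch of a server.properties fix, where 192.168.99.1 is an assumed address reachable from the Minikube VM (substitute whatever address actually routes from the cluster to the Kafka VM):

    # Listen on all interfaces, but advertise an address the Minikube VM
    # can route back to, instead of the NAT-internal 10.0.2.15.
    listeners=PLAINTEXT://0.0.0.0:9092
    advertised.listeners=PLAINTEXT://192.168.99.1:9092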

SocketException: No connection could be made because the target machine actively refused it XX.XXX.XX.XXX:443

I have two servers with two identical WCF services hosted on them, and one client application server. From my local machine I can successfully connect to the endpoints and send requests to both services using a test WCF client app (.NET Web Service Studio). But when I try to connect from the client application server using the same test WCF client app, I can successfully connect to only one of the WCF service servers; I get an error when connecting to the other one:
System.Net.WebException: There was an error downloading 'https://XXX/XXX?wsdl'. ---> System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it XX.XXX.XX.XXX:443
I ran the netstat -an | find "443" command in a command prompt on the client server and on my local machine to find out the difference. Here is what I got:
1. On my local machine: (output omitted)
2. On the client app server: (output omitted)
What I have already tried on the client application server:
- turned off the firewall;
- stopped the Windows Firewall service;
- uninstalled the McAfee VirusScan Enterprise application.
(I tried to set the "prevent mass mailing worms from sending mail" rule first, but McAfee was in a foreign language that I don't understand, so I just uninstalled it.)
After running netstat -aon | findstr "443" on the client application server I got this result (output omitted), but I still got the error.
Does anybody know how to solve this issue?
Could the problem be on the WCF service server side?
The solution was a predictably simple one: a firewall was blocking the port.
It is important to note, though, that the issue was caused by the firewall on the WCF service server side, not on the client application server that makes requests to that service.
I asked the technical support for that server, and they made the firewall changes.
After that, the error disappeared.
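For reference, if you control the service server, an inbound Windows Firewall rule for port 443 can be added from an elevated command prompt (the rule name is arbitrary):

    netsh advfirewall firewall add rule name="Allow HTTPS" dir=in action=allow protocol=TCP localport=443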
I faced the same issue and tried different ways to fix it; nothing worked. Later I found the cause: the application I was trying to run is HTTPS, and in my IIS no HTTPS binding had been created. I created an HTTPS binding for the website and it works.
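If the missing binding is the cause, the HTTPS binding can also be created from the command line with appcmd; a sketch assuming the default site name (a certificate still has to be assigned to the binding in IIS):

    %windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /+bindings.[protocol='https',bindingInformation='*:443:']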

Kubernetes Service Deployment

I have recently started exploring Kubernetes and have done practical implementations of pods, services, and replication controllers on Google Cloud. I have some doubts about services and network access.
First, where is the service deployed that will work as a load balancer for a group of pods?
Second, does a request to an application running in a pod via a service load balancer go through the master, or directly to the minion nodes?
A service proxy runs on each node in the cluster. From inside the cluster, when you make a request to a service IP, it is intercepted by the service proxy and routed to a pod matching the label selector for the service. If you have specified an external load balancer for your service, the load balancer will pick a node to send the request to, at which point it will be captured by the proxy and directed to an appropriate pod. If you are using public IPs, then your router will send the request to the node with the public IP, where it will be captured by the proxy and directed to an appropriate pod.
As you can see from that description, service requests do not go through the master; they bounce through a proxy running on the nodes.
As an aside, there is also a proxy running on the master, which you can use to reach nodes, services, pods, but this proxy isn't in the packet path for services that you create within the cluster.
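To make that concrete, here is a minimal sketch of a service with an external load balancer (the name, label, and ports are illustrative). Traffic hitting the load balancer lands on some node, where the service proxy captures it and forwards it to a pod matching the selector:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer      # provision an external load balancer for this service
      selector:
        app: web              # requests are routed to pods carrying this label
      ports:
      - port: 80              # port exposed by the service / load balancer
        targetPort: 8080      # container port on the matching pods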

Issue while deploying WCF service in a load-balanced environment

I configured a load-balancing service for port 80 on two Windows Server 2003 machines. I deployed the same WCF service on IIS on both and shared the virtual IP with the client to consume the service.
If both servers are up and running, clients are able to consume the service, but if either server goes down it throws the exception below:
There was no endpoint listening at server port that could accept the
message. This is often caused by an incorrect address or SOAP action.
See InnerException, if present, for more details.
Any ideas?
