Are there rules for gRPC port number? - asp.net-core-webapi

I am currently developing an application which will expose both REST and gRPC endpoints.
I need to set a port for the gRPC server.
Are there any rules for the port number? Any special ranges other than the standard for REST services?

To my knowledge, no rules.
You'll see 50051 used as a gRPC default.
If you're multiplexing HTTP 1.x traffic on the same port as gRPC, you'll likely want to default to 80 (insecure) and 443 (secure) ports for the front-end service (often proxy) and 8080 and 8443 respectively for backend (proxied) services.
NOTE: Google defaults to 8080 (for proxied containers) on Google Cloud Platform (e.g. App Engine, Cloud Run), with an often-ignored (but important) requirement that the deployed service bind to the value of the PORT environment variable exported to the container environment (which defaults to 8080 but may not always be). Suffice it to say, check your deployment platform's requirements and adhere to them.
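For illustration, a minimal ASP.NET Core sketch (not platform-official code) that honors PORT on such platforms might look like this; the h2c choice assumes the platform proxy terminates TLS, as Cloud Run does:
var builder = WebApplication.CreateBuilder(args);

// Bind to the platform-provided PORT, falling back to 8080 when it is unset.
var port = int.Parse(Environment.GetEnvironmentVariable("PORT") ?? "8080");

builder.WebHost.ConfigureKestrel(options =>
{
    // TLS is terminated at the platform proxy, so listen for HTTP/2 without TLS (h2c) for gRPC.
    options.ListenAnyIP(port, listen =>
        listen.Protocols = Microsoft.AspNetCore.Server.Kestrel.Core.HttpProtocols.Http2);
});

var app = builder.Build();
app.Run();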

No, there are no rules for the port number. Just be careful to assign the HTTP/2 protocol and SSL/TLS for gRPC (not mandatory, but highly recommended). All you need to do is configure the Kestrel endpoint parameters in your appsettings.json. Define endpoints for the Web API and gRPC under custom names as follows:
"Kestrel": {
"Endpoints": {
"Grpc": {
"Protocols": "Http2",
"Url": "https://localhost:5104"
},
"webApi": {
"Protocols": "Http1",
"Url": "https://localhost:5105"
}
}
}
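For completeness, a minimal Program.cs sketch wiring both kinds of endpoints against this configuration (GreeterService is a hypothetical gRPC service name; yours will come from your .proto):
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGrpc();        // requires the Grpc.AspNetCore package
builder.Services.AddControllers();

var app = builder.Build();
app.MapGrpcService<GreeterService>(); // served over the HTTP/2 "Grpc" endpoint
app.MapControllers();                 // served over the HTTP/1.1 "webApi" endpoint
app.Run();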

Related

Nginx ingress : Host based routing on TCP port

I want to use the same TCP port (5672) for RabbitMQ and transfer requests to a different namespace/rabbitmq_service based on host-based routing.
What works:
chart: nginx-git/ingress-nginx
version: 3.32.0
values:
  - tcp:
      5672: "cust1namespace/rabbitmq:5672"
Block reflected in nginx.conf:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name="tcp-cust1namespace-services-rabbitmq-5672";
    }
    listen :5672;
    proxy_pass upstream_balancer;
}
Note: this will forward all requests arriving on port 5672 to cust1namespace/rabbitmq:5672, irrespective of the client domain name, whereas we want host-based routing by domain name.
What is expected:
chart: nginx-git/ingress-nginx
version: 3.32.0
values:
  - tcp:
      cust1domainname:5672: "cust1namespace/rabbitmq:5672"
      cust2domainname:5672: "cust2namespace/rabbitmq:5672"
Error:
Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Service.spec.ports[3].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer", ValidationError(Service.spec.ports[4].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer"]
The final nginx.conf should look like:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name="tcp-cust1namespace-services-rabbitmq-5672";
    }
    listen cust1domainname:5672;
    proxy_pass upstream_balancer;
}
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name="tcp-cust2namespace-services-rabbitmq-5672";
    }
    listen cust2domainname:5672;
    proxy_pass upstream_balancer;
}
A bit of theory
The approach you're trying to implement is not possible, due to how these network protocols are implemented and the differences between them.
TCP works at the transport layer: it has source and destination IPs and ports, but carries no host information. HTTP, in turn, works at the application layer, which sits on top of TCP, and does carry information about the host the request is intended for.
Please get familiar with the OSI model and the protocols that work at each of these layers; this will help avoid any confusion about why things work this way and not another.
There's also a good answer on Quora about the difference between the HTTP and TCP protocols.
Answer
At this point you have two options:
Use ingress at the application layer and let it direct traffic to services based on the host presented in the request. All traffic should go through the ingress endpoint (usually a load balancer exposed outside of the cluster).
Please find examples with:
two paths and services behind them
two different hosts and services behind them (a minimal host-based sketch is included after the TCP example below)
Use ingress at the transport layer and expose a separate TCP port for each service/customer. In this case traffic will be passed through ingress directly to the services.
Based on your example it will look like:
chart: nginx-git/ingress-nginx
version: 3.32.0
values:
  - tcp:
      5672: "cust1namespace/rabbitmq:5672" # port 5672 for customer 1
      5673: "cust2namespace/rabbitmq:5672" # port 5673 for customer 2
...
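And for option 1, a minimal sketch of host-based routing at the application layer could look like the following. Hosts and service names are hypothetical, and this applies to HTTP(S) traffic only (e.g. the RabbitMQ management UI on 15672), not raw AMQP:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: per-customer-routing
spec:
  rules:
  - host: cust1.example.com        # hypothetical host for customer 1
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rabbitmq-cust1   # hypothetical service in this namespace
            port:
              number: 15672        # RabbitMQ management UI port
  - host: cust2.example.com        # hypothetical host for customer 2
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rabbitmq-cust2
            port:
              number: 15672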

How to make Kubernetes service load balance based on client IP instead of NGINX reverse proxy IP

I have configured NGINX as a reverse proxy with web sockets enabled for a backend web application with multiple replicas. The request from NGINX does a proxy_pass to a Kubernetes service which in turn load balances the request to the endpoints mapped to the service. I need to ensure that the request from a particular client is proxied to the same Kubernetes back end pod for the life cycle of that access, basically maintaining session persistence.
I tried setting sessionAffinity: ClientIP in the Kubernetes service; however, this routes based on the client IP as seen by the service, which is the NGINX proxy's IP. Is there a way to make the Kubernetes service do the affinity based on the actual client IP from which the request originated, and not the NGINX internal pod IP?
This is not an option with Nginx, or rather it's not an option with anything running in userspace like this, without a lot of very fancy network manipulation. You'll need to find another option, usually app-specific proxy rules (such as sticky sessions) in the outermost HTTP proxy layer.
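If that outermost layer is the community kubernetes/ingress-nginx controller, one common app-layer workaround is cookie-based session affinity. A hedged sketch, with a hypothetical host and service name (the annotations are specific to that controller):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-sticky
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"           # pin a client to one pod
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: app.example.com          # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend          # hypothetical service name
            port:
              number: 80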

Allow access to kafka via nginx

Good day,
I want to connect to my Kafka server from the internet. Kafka is installed on a virtual server, and all servers are hidden behind nginx.
I updated the Kafka settings (server.properties).
Added: listeners=PLAINTEXT://:9092
I can connect to the Kafka server from the local network via the IP address 10.0.0.1:9092, but I am unable to connect from the internet by domain name.
Response from kafka: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic test-topic not present in metadata after 60000 ms.
Nginx: [26/Nov/2019:12:38:25 +0100] "\x00\x00\x00\x14\x00\x12\x00\x02\x00\x00\x00\x00\x00" 400 166 "-" "-" "request_time=1.535" "upstream_response_time=-" "upstream_connect_time=-" "upstream_header_time=-"
nginx conf:
server {
    listen 9092;
    server_name site.name;

    # Max request size
    client_max_body_size 20m;

    location / {
        proxy_pass http://10.0.0.1:9092;
    }
}
Does anyone know what the problem is?
Kafka doesn't use the HTTP protocol for communication, so it can't be fronted by an HTTP reverse proxy.
You'll have to use nginx stream definition blocks for TCP proxying
(I've not tried this personally):
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
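A minimal, untested sketch of such a stream block (it lives at the top level of nginx.conf, outside any http block; the upstream address is taken from the question):
stream {
    upstream kafka {
        server 10.0.0.1:9092;
    }
    server {
        listen 9092;
        proxy_pass kafka;   # raw TCP pass-through, no HTTP processing
    }
}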
"unable to connect from the internet by domain name"
Sounds like an issue with your advertised.listeners configuration. Note that there is no clean way to "hide" Kafka behind a proxy, since your clients are required to communicate directly with each broker individually. That defeats the purpose of nginx (unless you want to run one nginx server, or open a new port, per broker), and it requires Kafka to "advertise" the proxy address rather than its own.
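As a sketch, assuming site.name resolves to the broker (or to a TCP proxy in front of it), the relevant server.properties lines would look like:
# Bind on all interfaces, but advertise the public name clients must use.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://site.name:9092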
If you really want to expose Kafka to the public web, you should be using SSL/SASL listeners, not PLAINTEXT.
If you want to use HTTP, you can install the Kafka REST Proxy and put nginx in front of that; your clients would then use HTTP rather than the standard Kafka libraries.

How to access control.unit.sock with Nginx for securely proxying Unit API

I'm trying to use Nginx as a proxy to access control.unit.sock (Nginx Unit), as recommended here: Securely Proxying the Unit API. But Nginx is not able to access the socket.
I use the default configuration for Unit. unix:control.unit.sock is created as root with 600 permissions. Nginx runs as the www-data user by default.
How can I give Nginx access to this socket securely, while avoiding opening sockets on public interfaces in production?
(For sure, Nginx has access if I set the permissions to 777.)
server {
    location / {
        proxy_pass http://unix:/var/run/control.unit.sock;
    }
}
You can consider running Unit with the --control option and specifying the address that you want to use (e.g. --control 127.0.0.1:8080).
Documentation (https://unit.nginx.org/installation/#installation-startup):
--control socket
    Address of the control API socket. IPv4, IPv6, and Unix domain sockets are supported.
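Building on that, a hedged nginx sketch that proxies the TCP control address while keeping it off public interfaces (the listen address and htpasswd path are assumptions, not part of the original answer):
server {
    listen 127.0.0.1:8443;   # local-only; add TLS before exposing any further
    location / {
        auth_basic "Unit control API";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:8080;   # the --control address from above
    }
}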

Kubernetes: Using same service(Rabbitmq) for both HTTP and WS traffic

I am trying to set up a RabbitMQ cluster within Kubernetes. Clients can connect to RabbitMQ using the AMQP protocol, which rides on TCP, and web clients using WebSockets. As mentioned in WebSocket support for Kubernetes, I need to add WebSocket services to the "nginx.org/websocket-services" annotation in the ingress configuration. In this case RabbitMQ acts as both a WebSocket and a non-WebSocket service.
Will the configuration work for AMQP client load balancing if I give the RabbitMQ service name in "nginx.org/websocket-services"?
In short, can a service be both non-WS and WS at the same time?
EDIT [05-02-2018]
It seems there is a different flow for TCP load balancing. I implemented that. At least the routing part is happening, but I am not sure about the load balancing of TCP; I need to debug that part further.
There is also one more reference for WebSocket load balancing, which seems to say "No special configuration required".
Kiran
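For reference, if the cluster runs the community kubernetes/ingress-nginx controller (not the nginx.org controller referenced above; this is an assumption about the setup), the TCP flow mentioned in the edit is configured via a ConfigMap. A hedged sketch with a hypothetical namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5672": "mynamespace/rabbitmq:5672"   # expose AMQP through the ingress controller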
