Does Cloud Load Balancer support QUIC when connecting to an upstream server?

Introducing QUIC support for HTTPS load balancing explains how QUIC is supported for the client <-> load balancer connection. My upstream server uses aioquic and accepts only connections made directly over HTTP/3, with no support for upgrading from HTTP/1.1 or HTTP/2. Can I configure Cloud Load Balancer to always communicate with my upstream servers in Google Kubernetes Engine via HTTP/3?

As of now, you cannot use a QUIC/HTTP/3 backend. When using HTTP(S) Load Balancing, the following backend protocols are supported:
HTTP, HTTPS, HTTP/2
Please see this for more info on the supported backend protocols.

Related

TLS in golang http and grpc server

I've seen that some apps developed in Go run without TLS enabled in the app itself; instead, TLS is enabled in the proxy server in front of it (nginx). Requests coming to the app are encrypted only on the nginx side, so the Go HTTP server is served using plain http.ListenAndServe.
While using gRPC, I've seen the gRPC server run without TLS enabled, and the client dial with insecure mode enabled.
I assumed all of this is because you only need to enable TLS if you serve requests coming from outside (external networks). If you use HTTP and gRPC for internal service communication within an internal network in a microservices architecture, you don't need to enable TLS at all, since it only adds overhead. Is this true?
How is TLS properly applied in Go development for HTTP and gRPC servers?
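In case it helps to see the pattern the question describes in code, here is a minimal Go sketch: a plain-HTTP Go server that relies on a fronting proxy (e.g. nginx) for TLS termination, plus an internal gRPC client dialled without TLS. The addresses, port, and handler path are placeholders, not anything from the original question.

    // Sketch only: plain HTTP and insecure gRPC inside a private network,
    // with TLS assumed to be terminated at a proxy in front of the service.
    package main

    import (
    	"log"
    	"net/http"

    	"google.golang.org/grpc"
    )

    func main() {
    	// Plain HTTP server; TLS is terminated at the proxy (nginx) in front of it.
    	go func() {
    		http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
    			w.Write([]byte("ok"))
    		})
    		log.Fatal(http.ListenAndServe(":8080", nil)) // no TLS here
    	}()

    	// Internal gRPC client dialling another internal service without TLS,
    	// as mentioned in the question (grpc.WithInsecure()).
    	conn, err := grpc.Dial("orders.internal:9090", grpc.WithInsecure())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	// ... create service stubs from conn and make calls ...

    	select {} // keep the process alive for the sketch
    }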

HTTP over WireGuard vs pure HTTPS

I have an HTTP service running on a server that is to be used by my Android application. I am thinking about various ways to let clients send data to the server in a secure manner. One common way is to use the HTTPS protocol and have a load balancer or a proxy do the SSL termination.
Instead, I am thinking of using WireGuard as the secure medium for communication. I would first bundle a WireGuard client as part of my Android application and send all the traffic through this WireGuard tunnel to the server, which serves a plain HTTP endpoint.
Which of the two approaches is better in terms of security and speed?

WebSockets not working with HTTP/2 Load Balancer backend in GCP

I have an application running behind a Load Balancer in Google Cloud Platform.
When I use the HTTPS protocol in the backend, I'm able to connect with WebSockets and all WebSocket connections work fine. However, when I change the backend protocol to HTTP/2, I'm unable to connect from the application, and it returns a response of 502 Bad Gateway.
Can I use WebSockets with HTTP/2, or do I need to perform some configuration in order to use WebSockets with an HTTP/2 backend?
As others have commented, WebSockets are not supported over an HTTP/2 backend, and this is the reason why you receive the 5XX error.
That said, the WebSocket functionality is achievable (and arguably improved) with HTTP/2 (ref).
If you have existing code working with WebSockets, it might not be worth rewriting both the backend and the frontend.
However, if you are developing a new asynchronous service, it is a good idea to take a look at the HTTP/2 + Server-Sent Events (SSE) scheme, sketched below.
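For illustration, here is a minimal Go sketch of an SSE endpoint of the kind suggested above. The /events path, port, and tick payload are placeholders; behind the load balancer the backend would typically also be served over TLS.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // sseHandler streams Server-Sent Events to the client. Over an HTTP/2
    // connection each event arrives on the same multiplexed stream, which is
    // the scheme suggested above as a WebSocket alternative.
    func sseHandler(w http.ResponseWriter, r *http.Request) {
    	w.Header().Set("Content-Type", "text/event-stream")
    	w.Header().Set("Cache-Control", "no-cache")

    	flusher, ok := w.(http.Flusher)
    	if !ok {
    		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
    		return
    	}

    	for i := 0; i < 5; i++ {
    		// SSE wire format: "data: <payload>" followed by a blank line.
    		fmt.Fprintf(w, "data: tick %d\n\n", i)
    		flusher.Flush()
    		time.Sleep(time.Second)
    	}
    }

    func main() {
    	http.HandleFunc("/events", sseHandler)
    	// Plain ListenAndServe keeps the sketch self-contained.
    	http.ListenAndServe(":8080", nil)
    }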

grpc - is TLS necessary if https enabled?

I'm a newbie to gRPC and have played with simple gRPC clients in Java, Go, and Python. I know basic HTTP and HTTPS but am not familiar with the protocol details, so this question may seem ridiculous to you, but I didn't find any explanation online.
I know gRPC has an insecure mode (Go: grpc.WithInsecure(), Python: grpc.insecure_channel, Java: usePlaintext()) and a secure mode (TLS), and that gRPC is based on HTTP/2, which has a secure mode (HTTPS).
So what if I use insecure gRPC with HTTPS? Is the overall data transfer safe?
And what if I use TLS gRPC with HTTPS? Is there a performance overhead (because I think the messages would be encrypted twice)?
Thank you for any answer; any existing web pages explaining this topic would be best!
Insecure implies HTTP, and TLS implies HTTPS. So there's no way to "use insecure gRPC with HTTPS", because at that point it is just HTTP.
There is no double encryption. The gRPC security mode is the same as the HTTP security mode.
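To make that mapping concrete, here is a minimal Go client sketch contrasting the two channel types. The addresses and the CA file name are placeholders, not part of the original question or answer.

    package main

    import (
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials"
    )

    func main() {
    	// Insecure channel: plaintext HTTP/2 (h2c), i.e. "http".
    	insecureConn, err := grpc.Dial("api.internal:50051", grpc.WithInsecure())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer insecureConn.Close()

    	// TLS channel: HTTP/2 over TLS, i.e. "https". There is only one layer
    	// of encryption; gRPC's TLS mode *is* the HTTPS transport.
    	creds, err := credentials.NewClientTLSFromFile("ca.crt", "")
    	if err != nil {
    		log.Fatal(err)
    	}
    	secureConn, err := grpc.Dial("api.example.com:443", grpc.WithTransportCredentials(creds))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer secureConn.Close()
    }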
Using gRPC over TLS is highly recommended if your gRPC server is serving requests coming from outside (an external network). For example, say you're creating a front-end app in JavaScript that serves user requests. Your JavaScript app makes calls to your gRPC server for the APIs your server provides, communicating through a stub created on the JavaScript side. On the gRPC server end, you need to set up TLS to secure the communication between your JavaScript app and your gRPC server (because the requests come from outside).
That said, gRPC is mostly used for internal service-to-service communication inside an internal network in a microservice architecture. You don't need to set up TLS for internal network usage, since the requests come from your own environment, within your control.
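As a rough illustration of the external-facing case, here is a minimal Go sketch of a gRPC server with TLS enabled. The certificate/key file names and port are placeholders.

    package main

    import (
    	"log"
    	"net"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials"
    )

    func main() {
    	// Load the server certificate and key; the file names are placeholders.
    	creds, err := credentials.NewServerTLSFromFile("server.crt", "server.key")
    	if err != nil {
    		log.Fatalf("loading TLS credentials: %v", err)
    	}

    	lis, err := net.Listen("tcp", ":50051")
    	if err != nil {
    		log.Fatalf("listen: %v", err)
    	}

    	// All gRPC traffic on this server is now encrypted with TLS.
    	s := grpc.NewServer(grpc.Creds(creds))
    	// ... register your generated service implementations on s ...
    	log.Fatal(s.Serve(lis))
    }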
If you want to apply something like "gRPC over HTTPS", then you need something like a gateway to map your HTTP calls to your gRPC server. Check this out.
You need to compile your proto file into gateway service definitions as well, using the provided tools. Then you can create a normal HTTP server with TLS enabled through something like http.ListenAndServeTLS(...). Don't forget to register your gRPC server with the HTTP server using the service definitions compiled from the proto file. With this, all requests to your HTTP server are encrypted with TLS, like normal REST APIs, but get proxied to the gRPC server you defined. There's no need to enable TLS on your gRPC server, since it has already been enabled on your HTTP server.
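A minimal Go sketch of that gateway setup might look like the following, assuming a service compiled with the grpc-gateway plugin. The import path, the generated RegisterYourServiceHandlerFromEndpoint name, the gRPC endpoint, and the certificate paths are all placeholders that depend on your own proto file and environment.

    package main

    import (
    	"context"
    	"log"
    	"net/http"

    	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
    	"google.golang.org/grpc"

    	// pb is the package generated from your proto file with the
    	// grpc-gateway plugin; the import path is a placeholder.
    	pb "example.com/yourapp/gen"
    )

    func main() {
    	ctx := context.Background()
    	mux := runtime.NewServeMux()

    	// The gateway proxies HTTP/JSON calls to the gRPC server over an
    	// internal hop, so plaintext is fine on that leg.
    	opts := []grpc.DialOption{grpc.WithInsecure()}

    	// RegisterYourServiceHandlerFromEndpoint is generated from the proto;
    	// the actual name depends on your service definition.
    	if err := pb.RegisterYourServiceHandlerFromEndpoint(ctx, mux, "localhost:50051", opts); err != nil {
    		log.Fatalf("register gateway: %v", err)
    	}

    	// TLS is terminated here, at the HTTP server; the gRPC server behind
    	// it can stay plaintext. Certificate paths are placeholders.
    	log.Fatal(http.ListenAndServeTLS(":443", "server.crt", "server.key", mux))
    }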

NGINX - Websocket client support

A quick question: does nginx support acting as a WebSocket client?
I have a web server that uses nginx, and I use a WebSocket server for which nginx acts as a proxy. On the same port, can I use a WebSocket client to initiate a connection with the external WebSocket server?
Yes, it does (since 1.3.13).
Have a look at the docs here and an example setup here.
