How to enable HTTP2 in Cloud Foundry using nginx-buildpack? - nginx

Is it possible to enable HTTP/2 in Cloud Foundry using the NGINX buildpack (or any other)? I understand that GoRouter will not support HTTP/2, but is there any workaround for this?
My original requirement is to serve a large JS file from Cloud Foundry, so I'm looking to enable HTTP/2 to improve performance.
Thanks,

Not exactly the same question, but the solution here applies: https://stackoverflow.com/a/55552398/1585136.
If you need public clients (i.e. clients outside CF) to connect to your app, you need to use TCP routing. If your provider doesn't enable this by default, find another provider (see this list of public providers; hint: Pivotal Web Services will provide TCP routes upon request) or self-host.
If you only need HTTP/2 and/or gRPC between apps running on CF, you can use the container-to-container network. When you talk app to app, there are no restrictions (so long as you properly open the required ports). You can use TCP, UDP, and any protocol built on top of those. There are some details about how this works here.
You'll also need Nginx's http_v2_module. This is a very recent addition and isn't yet in a release of the Nginx or Staticfile buildpack as I write this. It should be, if everything goes right, in the next release though. That should be Nginx buildpack 1.1.10+ and Staticfile buildpack 1.5.8+.
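Once a buildpack release ships with that module, turning it on is a one-line change to the nginx.conf you give the buildpack. A minimal sketch, assuming the stock buildpack templating (the buildpack substitutes {{port}} at staging; the root directory is whatever your app uses):

```nginx
# nginx.conf fragment for the CF nginx buildpack (sketch, not the full file)
http {
    server {
        # http2 on the listen directive requires nginx built with http_v2_module;
        # without TLS this is h2c, so public clients still need a TCP route
        listen {{port}} http2;
        root public;
    }
}
```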
My original requirement is to serve a large JS file from Cloud Foundry, so I'm looking to enable HTTP/2 to improve performance.
It might, it might not. Your mileage may vary. HTTP/2 isn't a silver bullet; this article explains why well:
https://www.nginx.com/blog/http2-module-nginx/

Related

How to set in a Dockerfile an nginx in front of the same container (google cloud run)? I tried making a python server (odoo) handle http/2

Thanks in advance. My question is: how do I set up, in a Dockerfile, an nginx in front of a container? I saw other questions [5], and it seems that the only way to allow http/2 for odoo on Cloud Run is to create an nginx container, as sidecars are not allowed in Cloud Run. But I also read that it can be done with supervisord. Has anyone been able to do that, so as to handle http/2 and increase the Cloud Run max request quota?
I wanted to try this: in the entrypoint.sh, write a command to install nginx, and then set its configuration to be used as a proxy that allows http2. But I'm asking here because I'm not sure it'll work, as I read in [2] that nginx won't work with a python server.
The whole story: I'm deploying an odoo CE on Google Cloud Run + Cloud SQL. I configured one odoo as a demo with 5 blog posts, and when I tried to download a backup, it said the request was too large. I imagined this was because of Cloud Run's 32 MB request quota [1], as the backup size was 52 MB. Then I saw that the quota for http/2 connections was unlimited, so I activated the http/2 toggle in Cloud Run. But when I next accessed the service, a "connection failure" error appeared.
To do so I thought of two ways. One was upgrading the odoo http server to one that can handle http/2, like Quart. This first way seems impossible to me, because it would likely force me to rewrite many pieces of odoo. The second option was running an nginx in front of the odoo container (which runs a Python web server on Werkzeug). I read on the web that nginx can upgrade connections to http/2. But I also read that Cloud Run runs its own internal load balancer [2]. Hence my question: would it be possible to run, in the same odoo container, an nginx that exposes this service on Cloud Run?
References:
[1] https://cloud.google.com/run/quotas
[2] Cloud Run needs NGINX or not?
[3] https://linuxize.com/post/configure-odoo-with-nginx-as-a-reverse-proxy/
[4] https://github.com/odoo/docker/tree/master/14.0
[5] How to setup nginx in front of node in docker for Cloud Run?
Has anyone been able to do that, so as to handle http/2 and increase the Cloud Run max request quota?
Handling HTTP/2 does not help you increase your maximum requests per container limit on Cloud Run.
HTTP/2 only helps you reuse TCP connections by sending concurrent requests over a single connection, but Cloud Run does not count connections, so you are not on the right track here. HTTP/2 won't help you.
Cloud Run today already supports 1000 container instances (with 250 concurrent requests in preview) so that's 250,000 simultaneous requests for your service. If you need more, contact support.
But, I ask you here as I'm not sure if it'll work, as I read in [2] that nginx won't work with a python server.
Sounds incorrect.
If you configure a multi-process container you can run Python behind nginx on Cloud Run. But as you said, Cloud Run does not need nginx.
Overall you don't need HTTP/2 in this scenario.
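If you do go the multi-process route anyway, the usual pattern is a process supervisor as PID 1. A minimal supervisord sketch (the program names, ports, and odoo command line are placeholders, not a tested Cloud Run setup):

```ini
; supervisord.conf — run nginx and odoo in one container (sketch)
[supervisord]
nodaemon=true

[program:nginx]
; nginx listens on the Cloud Run port and proxies to odoo on 8069
command=nginx -g "daemon off;"
autorestart=true

[program:odoo]
command=odoo --http-port=8069
autorestart=true
```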

Is gRPC not suited for small projects?

For the past 2 weeks I have been struggling to set up a simple backend that utilises gRPC and communicates with a mobile client.
Reading online about this technology it feels like it is the proper easy going solution for my needs.
Bridging client/server communication written in multiple languages Java/Kotlin/Swift/Go.
Backwards compatibility checks for the API realized with buf
Efficient communication by transferring binary data and utilising HTTP2
Support for both RPC and REST thanks to grpc-gateway
However, when I decided to go down the gRPC path, I faced a ton of issues (highlights of the issues, not actual questions):
How to share protobuf message definitions across clients and server?
How to manage third party protobuf message dependencies?
How to manage stub generation for projects using different build tools?
How to secure the communication using SSL certificates? Also keep in mind that here I am talking about a mobile client <--> server communication and not server <--> server communication.
How to buy a domain, because SSL certificates are issued against public domains in order to be trusted by Certificate Authorities?
How to deploy a gRPC server, as it turns out there aren't any easy-to-use PaaS that support gRPC and HTTP2? Instead you either need to configure the infrastructure yourself (load balancers and the machines hosting the server, with the appropriate certificates installed), or just host everything on your own bare metal.
How to manage all of the above in a cost effective way?
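Several of the items above (sharing definitions, third-party proto dependencies, stub generation) are exactly what buf's two config files address. A minimal sketch; the module name, dependency, and plugin outputs are placeholders, not taken from the original post:

```yaml
# buf.yaml — names your proto module and declares third-party deps
version: v1
name: buf.build/yourorg/yourapi      # placeholder module name
deps:
  - buf.build/googleapis/googleapis  # example third-party dependency

# buf.gen.yaml (separate file) — one plugin entry per language you need stubs for
# version: v1
# plugins:
#   - plugin: go
#     out: gen/go
#     opt: paths=source_relative
#   - plugin: kotlin
#     out: gen/kotlin
```

With both in place, `buf generate` produces the stubs for every client and server from one shared source of truth, regardless of each project's build tool.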
This is more of a frustration question.
Am I doing something wrong and misunderstanding how to use gRPC, or is it simply too hard to set up for a small project that should run in production mode?
I feel like I wasted a ton of time without having made any progress.

Unable to figure out why firebase hosting is not using http 2

I've set up a fresh hosting project (not using any custom domain at the moment) and split up some of my js files, expecting them to be served via http2 (as described in Firebase blog posts, it should be enabled by default?). However, the protocol still shows up as http/1.1. Am I missing something? Do I need to add an entry in my config files to force http2?
DEMO: https://asimetriq-com.firebaseapp.com/
Works for me when I test the demo URL.
It may mean that there is a transparent proxy that does not support HTTP/2 somewhere in the network hops between your client and the server.
Also, from time to time, browsers may downgrade the protocol they use in order to collect statistics on protocol performance and compare them.
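One way to take the browser (and any downgrade heuristics) out of the equation is to ask curl which protocol it negotiated. A quick check, assuming a curl built with HTTP/2 support; the URL is the demo site from the question:

```shell
# print only the HTTP version curl negotiated with the server
curl -s -o /dev/null -w '%{http_version}\n' --http2 https://asimetriq-com.firebaseapp.com/
# "2" means HTTP/2 was negotiated; "1.1" means the connection fell back
```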

How can I use nginx 1.9.5 as a reverse proxy with gRPC?

I want to
write my backend code with Java,
use HTTP/2 (NGINX has supported HTTP/2 since 1.9.5),
write a bidirectional stream to send data between client and server at any time.
gRPC seems to be the best choice, and I want to use NGINX as my reverse proxy and load balancer. I could not find any documentation on how to use NGINX with gRPC Java; does anyone know?
I saw that gRPC PHP already supports NGINX: https://github.com/grpc/grpc/tree/master/src/php#use-the-grpc-php-extension-with-nginxphp-fpm
But I also saw an issue saying that a third-party NGINX module for gRPC support was in the process of being submitted, a ticket on NGINX suggesting that an HTTP/2 NGINX proxy module for gRPC can't be written, and a claim that nginx does not support the full HTTP/2 spec, so gRPC does not work through it.
I'm confused: why do some posts say gRPC PHP works with NGINX while others say it can't?
Not on nginx, but I just published a grpc-proxy written in Go. It's lightweight and configurable, and there is a docker image available.
Yeah, nowadays gRPC/HTTP2, with or without TLS, is indeed fully supported on NGINX, as long as you have version 1.13.10 or later (if you just install the docker image with either the alpine or latest tag, it'll be the right version).
As of (at least) late 2020 there is full support for it. Here's a link to the official announcement:
https://www.nginx.com/blog/nginx-1-13-10-grpc/
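The pattern that shipped in 1.13.10 uses the grpc_pass directive. A minimal sketch; the certificate paths and backend address are placeholders for your own setup:

```nginx
server {
    # gRPC clients require HTTP/2; over TLS that means listen ... ssl http2
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/cert.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/key.pem;

    location / {
        # grpc:// for a plaintext (h2c) backend, grpcs:// for a TLS backend
        grpc_pass grpc://127.0.0.1:50051;
    }
}
```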

Value of proxying HTTP requests with node.js

I have very recently started development on a multiplayer browser game that will use nowjs to synchronize player states with the server state. I am new to server-side development (so much of what I'm saying is probably phrased incorrectly), and while I understand how node.js works on its own, I have seen discussions about proxying HTTP requests through another server technology (à la NGINX or Apache) for efficiency.
I don't understand why it would be beneficial to do so, even though I've seen plenty of explanations of how. My current plan is to have the game's website and info on the same server as the game itself, so if there is any gain from proxying node I'd love to know why.
In the context of your question it seems you are looking for an answer on the benefits of implementing a reverse proxy in front of your node.js webserver. In summary, a reverse proxy (depending on implementation) can provide the following features out of the box:
Load balancing
Caching of static content
Failover
Compression of responses (e.g. gzip)
SSL support
All these features are cross-cutting concerns that you should not need to accommodate in your application tier/code. Implementing them within the proxy lets you focus on developing your application code and leaves the web server to do what it's good at: serving the HTTP requests for your application.
nginx appears to be a common choice in a reverse proxy/node configuration and if you take a look at the modules reference you should get a feel for what features the proxy can provide.
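As a concrete illustration of those cross-cutting concerns, here is a minimal nginx sketch in front of a node process (port numbers and paths are assumptions; the Upgrade headers matter for the websocket transport nowjs relies on):

```nginx
upstream game {
    server 127.0.0.1:3000;   # the node.js process
}

server {
    listen 80;

    gzip on;                 # compress responses at the proxy, not in node

    # serve static assets directly, bypassing node entirely
    location /static/ {
        root /var/www/game;
        expires 7d;
    }

    location / {
        proxy_pass http://game;
        # pass websocket upgrade requests through to node
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```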
When you say "through another technology" I assume you mean through a dedicated web server such as NGinx or Apache.
The reason you do that is b/c in a production environment there are a number of considerations you don't want your application to have to do on its own. Caching, domain (or sub-domain) mapping, perhaps security, SSL, load balancing, and serving static files to name a few.
Web servers are already built to do all those things for you; they can handle them and then pass on to your app only the requests that your app actually needs to handle. They're also optimized for those tasks and will probably do them as well as or better than the average developer can.
Hope that helps.
Another point that hasn't been added here: with a front-end proxy, when you need to take your service down for maintenance (or even just restart it), nginx can serve up a pretty "YourCompanyName is currently under maintenance" page, making for a much more pleasant user experience.
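A sketch of that maintenance-page pattern (the backend port and filesystem paths are placeholders): nginx maps upstream errors to a local static file, so the page appears automatically whenever the node process is down:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # while node is down or restarting, show the maintenance
        # page instead of a raw 502/503
        error_page 502 503 504 /maintenance.html;
    }

    location = /maintenance.html {
        root /var/www/maintenance;   # placeholder path to the static page
    }
}
```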
