Is gRPC not suited for small projects?

For the past 2 weeks I have been struggling to set up a simple backend that utilises gRPC and communicates with a mobile client.
Reading about this technology online, it feels like the proper easy-going solution for my needs:
Bridging communication between clients and servers written in multiple languages (Java/Kotlin/Swift/Go)
Backwards compatibility checks for the API realized with buf
Efficient communication by transferring binary data and utilising HTTP2
Support for both RPC and REST thanks to grpc-gateway
However, when I decided to go down the gRPC path, I faced a ton of issues (highlights of the issues, not actual questions):
How to share protobuf message definitions across clients and server?
How to manage third party protobuf message dependencies?
How to manage stub generation for projects using different build tools? (See the buf sketch after this list.)
How to secure the communication using SSL certificates? Also keep in mind that here I am talking about mobile client <--> server communication, not server <--> server communication.
How to buy a domain, because SSL certificates are issued against public domains in order to be trusted by Certificate Authorities?
How to deploy a gRPC server, since it turns out there aren't any easy-to-use PaaS offerings that support gRPC and HTTP/2? Instead you either need to configure the infrastructure yourself (load balancers and the machines hosting the server, installing the appropriate certificates) or host everything on your own bare metal.
How to manage all of the above in a cost effective way?
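For what it's worth, buf can cover the first three pain points from one pair of config files. A minimal sketch, assuming buf's v1 config format (the module name and output paths are hypothetical):

```yaml
# buf.yaml -- declares the proto module and its third-party dependencies
version: v1
name: buf.build/acme/mobileapi       # hypothetical BSR module name
deps:
  - buf.build/googleapis/googleapis  # third-party protos managed by buf

# buf.gen.yaml -- one place to configure stub generation, independent of
# each project's build tool
version: v1
plugins:
  - plugin: buf.build/protocolbuffers/java  # message classes
    out: gen/java
  - plugin: buf.build/grpc/java             # service stubs
    out: gen/java
```

`buf generate` then produces the stubs, and `buf breaking --against '.git#branch=main'` performs the backwards-compatibility checks mentioned above; clients in other languages would add their own plugin entries.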
This is more of a frustration question.
Am I doing something wrong and misunderstanding how to use gRPC, or is it simply too hard to set up for a small project that should run in production mode?
I feel like I wasted a ton of time without having made any progress.

Related

Micro Service with API Gateway Ocelot vs Nginx

I have a .NET Core based microservice architecture.
I chose Ocelot as the API gateway. My frontend application is Vue.js based and hosted in an nginx container. During a discussion today, I learned that nginx can already be used as a gateway. It was suggested that "you should use nginx for the gateway because you already use it for serving the frontend; nginx could be deployed as a gateway too". I searched the internet to compare the two gateways (I know the main purpose of nginx is not to be a gateway) but couldn't find any information about their pros and cons, like performance, scalability, availability, etc.
Can someone who uses the 2 technologies share information with me about which one I should choose?
Ocelot is a .NET API gateway, but cloud agnostic. It has the following features, as mentioned in the article here. It is a free, simple NuGet package for simple installations without advanced feature or performance requirements, but it does the job beautifully in a .NET environment and provides some extras as well. It is lightweight, fast, and scalable, and provides routing and authentication on top of the usual gateway features. On Azure, Azure API Management provides these features plus many more advanced gateway features.
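To give a feel for how simple it is, here is a minimal ocelot.json sketch (service names, ports, and paths are made up; older Ocelot versions call the top-level section ReRoutes rather than Routes):

```json
{
  "Routes": [
    {
      "UpstreamPathTemplate": "/orders/{everything}",
      "UpstreamHttpMethod": [ "GET", "POST" ],
      "DownstreamPathTemplate": "/api/orders/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "orders-service", "Port": 5001 } ]
    }
  ],
  "GlobalConfiguration": { "BaseUrl": "https://gateway.example.com" }
}
```

The gateway itself is just an ASP.NET Core app that loads this file and calls UseOcelot().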
NGINX has an open-source version and a Plus version.
NGINX is a high‑performance, scalable, secure, and reliable web server
and a reverse proxy. NGINX enables all the main web acceleration
techniques for managing HTTP connections and traffic. For many years,
NGINX capabilities such as load balancing, SSL termination, connection
and request policing, static content offload, and content caching have
helped NGINX users to build reliable and fast websites quickly and
efficiently.
NGINX can also act as a secure application gateway, offering a number of specialized built‑in interfaces to pass traffic from users to applications. So, as you can see, NGINX is much more than just an API gateway. With one server you can integrate many other services: traffic distribution, policies, monitoring, alerts, notifications, custom configurations, etc. Ocelot may have limited performance or configurability for an enterprise-grade application.
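For comparison with the Ocelot config above, a minimal sketch of nginx playing the same gateway role (hostnames, ports, and certificate paths are illustrative):

```nginx
# nginx as a simple API gateway in front of two backend services
upstream orders   { server orders-service:5001; }
upstream payments { server payments-service:5002; }

server {
    listen 443 ssl;
    server_name api.example.com;
    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;

    location /orders/   { proxy_pass http://orders/; }    # route by path
    location /payments/ { proxy_pass http://payments/; }
}
```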
NGINX is different software from Ocelot. Kong, which is built on NGINX, is more popular software for an API management gateway and is not .NET-specific.
If your company is already using it, and already has it deployed, you should continue with NGINX.
Ocelot vs Kong vs Azure API management vs Nginx
In normal circumstances, Ocelot may appear to be better software than NGINX. But NGINX is fully fledged software. A few reasons:
a. Your company already has a license for NGINX Plus; why use another piece of software for API management?
b. NGINX is configurable for high performance; Ocelot is not. If you use thread pools in NGINX, performance can be tuned (see "Ten NGINX performance tuning tips"). You can do little to none of that in Ocelot, which has its own pile of open issues as a gateway.
c. NGINX Plus users will already have training as well as team support available.
d. NGINX will be one API gateway for all of your applications (technology independent); Ocelot is technology dependent.
Given the pros and cons, your company might already have an NGINX Plus license and a common API for multiple applications, and that's why they might be pushing for it.
First and foremost, NGINX and Ocelot are very different software.
Using NGINX will be good for your infra in the long run, as it can be used as an API gateway and it's open-source, secure, and offers many other benefits.
Using Ocelot will have disadvantages in the long run: you'll have one more piece of software to look after, and developers will need to understand both. Why waste time on that?
I suggest you go with NGINX, as it's already in place, and get on with your work.
Yes, you can use nginx as a gateway. But Apache APISIX is an even better option compared to nginx and Kong. I found this article helpful while searching for a better API gateway option:
https://api7.ai/blog/why-choose-apisix-instead-of-nginx-or-kong/

How to enable HTTP2 in Cloud Foundry using nginx-buildpack?

Is it possible to enable HTTP/2 in Cloud Foundry using the NGINX buildpack or any other? I understand that GoRouter will not support HTTP/2, but I'm not sure if there is any workaround for this.
My original requirement is to serve a large JS file from Cloud Foundry, so to improve performance I'm looking at enabling HTTP/2.
Thanks,
Not exactly the same question, but the solution here applies: https://stackoverflow.com/a/55552398/1585136.
If you have the need for public clients (i.e. clients outside CF) to connect to your app, you need to use TCP routing. If your provider doesn't enable this by default, find another provider (see this list of public providers; hint: Pivotal Web Services will provide TCP routes upon request) or self-host.
If you only need to use HTTP/2 and/or gRPC between apps running on CF, you can use the container-to-container network. When you talk app to app, there are no restrictions (so long as you properly open the required ports). You can use TCP, UDP, and any protocol built on top of those. There are some details about how this works here.
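For example, opening that app-to-app path is a one-line network policy (app names and port are made up, and the exact flag syntax varies between cf CLI versions):

```sh
# Allow "frontend" to reach "backend" directly over the container network
cf add-network-policy frontend backend --protocol tcp --port 8080
```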
You'll also need the NGINX http_v2_module. This is a very recent addition and isn't yet in a build of the NGINX or Staticfile buildpack as I write this. It should be, if everything goes right, in the next release though. That should be NGINX buildpack 1.1.10+ and Staticfile buildpack 1.5.8+.
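Once a buildpack with that module ships, enabling HTTP/2 should be a single directive on the TLS listener. A sketch with made-up certificate paths (newer nginx releases move this to a separate `http2 on;` directive instead):

```nginx
server {
    listen 443 ssl http2;   # http2 here requires the http_v2_module
    server_name app.example.com;
    ssl_certificate     /etc/ssl/app.crt;
    ssl_certificate_key /etc/ssl/app.key;
    root /home/vcap/app/public;   # static assets, e.g. that large JS file
}
```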
My original requirement is to serve a large JS file from Cloud Foundry, so to improve performance I'm looking at enabling HTTP/2.
It might, it might not. Your mileage may vary. HTTP/2 isn't a silver bullet. This article explains it well:
https://www.nginx.com/blog/http2-module-nginx/

Is it a good practice to have embedded jetty and GRPC server running in the same JVM?

Our organization is looking into implementing new internal APIs using gRPC.
Currently, we have a microservice that serves internal/external requests using embedded Jetty. We want internal communication between services to be done over gRPC.
So, we'll have 2 servers running in the same JVM: Jetty and gRPC. Is that good practice, or are there any red flags with that approach?
We do not want to split that microservice into 2, to save costs. We should be able to run the app on the same number of VMs.
There's nothing inherently special or wrong about having Jetty and gRPC in the same JVM. The main point of potential trouble is just that you will have two ports exposed instead of one; that might matter for service discovery or firewalls.
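A minimal sketch of what that dual-server setup can look like, assuming grpc-java and embedded Jetty on the classpath (the handler and service classes are placeholders for your own implementations):

```java
import io.grpc.ServerBuilder;

public class DualServerMain {
    public static void main(String[] args) throws Exception {
        // Jetty keeps serving internal/external HTTP requests on 8080
        org.eclipse.jetty.server.Server jetty =
                new org.eclipse.jetty.server.Server(8080);
        jetty.setHandler(new MyHttpHandler());   // placeholder handler
        jetty.start();

        // gRPC serves internal service-to-service calls on 9090, same JVM
        io.grpc.Server grpc = ServerBuilder.forPort(9090)
                .addService(new MyGrpcService()) // placeholder service impl
                .build()
                .start();

        grpc.awaitTermination(); // block so the JVM stays up
    }
}
```

The two servers share the heap and thread scheduling, so beyond the extra exposed port, the main operational consideration is sizing the VM for both workloads.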

How to provide secure communication between client & server?

I'm creating a web server using Jetty (v9) and I need any traffic between browsers and the server to be encrypted. I'll be uploading files to the server, plus the client/server will maintain a session carrying sensitive access tokens.
I don't have much experience with web servers, but it seems like the solution is to have the web server serve on port 443 so that communication will use the HTTPS protocol.
I was going to start running through this tutorial for configuring Jetty with SSL, but before I start messing around with certificates and signing etc. I just wanted to ask if this is the right approach or if there is something else more suitable that I don't know about.
In answer to your question, using https is indeed the right approach.
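For reference, the core of what such a tutorial sets up in embedded Jetty 9 looks roughly like this (keystore path and password are placeholders; SslContextFactory.Server is the later 9.4.x name, plain SslContextFactory in earlier 9.x releases):

```java
import org.eclipse.jetty.server.*;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class HttpsServerMain {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        // Certificate and private key live in a Java keystore
        SslContextFactory.Server ssl = new SslContextFactory.Server();
        ssl.setKeyStorePath("keystore.jks");   // placeholder path
        ssl.setKeyStorePassword("changeit");   // placeholder password

        HttpConfiguration httpsConfig = new HttpConfiguration();
        httpsConfig.addCustomizer(new SecureRequestCustomizer());

        // Connector that speaks TLS, then HTTP/1.1, on port 443
        ServerConnector https = new ServerConnector(server,
                new SslConnectionFactory(ssl, "http/1.1"),
                new HttpConnectionFactory(httpsConfig));
        https.setPort(443);

        server.addConnector(https);
        server.start();
        server.join();
    }
}
```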

Value of proxying HTTP requests with node.js

I have very recently started development on a multiplayer browser game that will use nowjs to synchronize player states from the server state. I am new to server-side development (so many of the things I'm saying are probably being said incorrectly), and while I understand how node.js works on its own I have seen discussions about proxying HTTP requests through another server technology (a la NGinx or Apache) for efficiency.
I don't understand why it would be beneficial to do so, even though I've seen plenty of explanations of how to do so. My current plan is to have the game's website and info on the same server as the game itself, so if there is any gain from proxying node I'd love to know why.
In the context of your question, it seems you are looking for an answer on the benefits of implementing a reverse proxy in front of your node.js web server. In summary, a reverse proxy (depending on its implementation) can provide the following features out of the box:
Load balancing
Caching of static content
Failover
Compression of responses (e.g. gzip)
SSL support
All these features are cross-cutting concerns that you should not need to accommodate in your application tier/code. By implementing these features within the proxy it allows you to focus on developing the code for your application and leaves the web server to do what it's good at, serving the HTTP requests for your application.
nginx appears to be a common choice in a reverse proxy/node configuration and if you take a look at the modules reference you should get a feel for what features the proxy can provide.
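As a concrete illustration of several of the features above in one place (domain, paths, and the node port are made up):

```nginx
# nginx terminates SSL, serves static files, and gzips responses,
# passing only dynamic requests through to the node.js app
server {
    listen 443 ssl;
    server_name game.example.com;
    ssl_certificate     /etc/nginx/certs/game.crt;
    ssl_certificate_key /etc/nginx/certs/game.key;

    gzip on;                       # compression of responses

    location /static/ {            # static content offload
        root /var/www/game;
    }

    location / {                   # everything else goes to node on 3000
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # keep WebSocket upgrades
        proxy_set_header Connection "upgrade";    # working (nowjs/socket.io)
    }
}
```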
When you say "through another technology" I assume you mean through a dedicated web server such as NGinx or Apache.
The reason you do that is because in a production environment there are a number of concerns you don't want your application to have to handle on its own: caching, domain (or sub-domain) mapping, perhaps security, SSL, load balancing, and serving static files, to name a few.
The web servers are already built to do all those things for you, and so they can handle them and then pass only the requests on to your app that actually need to be handled by your app. They're also optimized for doing those things and will probably do them as well or better than the average developer can.
Hope that helps.
Another point that people haven't mentioned here is that with a front-end proxy, when you need to take your service down for maintenance (or even just restart it), nginx can serve up a pretty "YourCompanyName is currently under maintenance" page, making for a much more pleasant user experience.
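That trick is a couple of lines of nginx config; a sketch with made-up paths:

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;
    error_page 502 503 @maintenance;   # backend down or restarting
}

location @maintenance {
    root /var/www/maintenance;         # holds the static maintenance page
    rewrite ^ /index.html break;
}
```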
