Suppose I have a service which, rather than listening for HTTP requests or gRPC calls, only consumes messages from a broker (Kafka, RabbitMQ, Google Pub/Sub, what have you). How should I go about health-checking the service (e.g. k8s liveness and readiness probes)?
Should the service also listen on HTTP solely for the purpose of health checking, or is there some other technique that can be used?
Having the service listen on HTTP solely to expose a liveness/readiness check isn't really a problem, and it also opens up the potential to expose diagnostic and control endpoints. (Although, for services that pull input from a message broker, readiness isn't necessarily something a container scheduler like k8s would be concerned with.)
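As a minimal sketch of that idea (Go, with a placeholder consumer loop standing in for a real Kafka/RabbitMQ/Pub/Sub client; the port, path, and staleness threshold are made up), the consumer can record when it last processed a message and serve a tiny /healthz endpoint for the kubelet's httpGet probe:

```go
package main

import (
	"net/http"
	"sync/atomic"
	"time"
)

// lastProcessed records when the consumer loop last handled a message.
var lastProcessed atomic.Int64

func main() {
	lastProcessed.Store(time.Now().Unix())

	// Side-channel HTTP listener used only by the kubelet's httpGet probe.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if time.Since(time.Unix(lastProcessed.Load(), 0)) > 5*time.Minute {
			http.Error(w, "consumer appears stalled", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	go http.ListenAndServe(":8080", nil)

	// Placeholder consumer loop; a real service would poll the broker here.
	for {
		// msg := consume(); process(msg)
		lastProcessed.Store(time.Now().Unix())
		time.Sleep(time.Second)
	}
}
```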
Kubernetes supports three different types of probes (see also the Kubernetes docs):
Running a command
Making an HTTP request
Checking a TCP socket
So, in your case you can run a command that fails when your service is unhealthy.
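A hedged sketch of that option (Go), following the marker-file pattern from the Kubernetes docs: the service keeps a file around only while it considers itself healthy, and the exec probe is simply `cat /tmp/healthy` (the path and the health check are illustrative):

```go
package main

import (
	"os"
	"time"
)

// healthy is a stand-in for a real check, e.g. broker connectivity.
func healthy() bool { return true }

func main() {
	// Create the marker the exec probe (`cat /tmp/healthy`) looks for.
	_ = os.WriteFile("/tmp/healthy", []byte("ok"), 0o644)

	for {
		// msg := consume(); process(msg)   // placeholder consumer loop
		if !healthy() {
			_ = os.Remove("/tmp/healthy") // probe starts failing; kubelet restarts the pod
		}
		time.Sleep(time.Second)
	}
}
```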
Also be aware that liveness probes may be dangerous to use.
I'd like to create a reverse proxy to expose several gRPC backend services on one host. But I'd also like to whitelist certain gRPC status/exception categories and drop all others. I think I've read somewhere that gRPC errors go into HTTP/2 trailers, so that might be an option.
I'm trying to find info on the gRPC wire protocol for passing errors, but can't find anything amid all the info on protobuf itself.
Any hints/links?
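For reference, gRPC does carry the status code and message in the grpc-status and grpc-message HTTP/2 trailers. Below is a rough Go sketch of how that status surfaces on the client side (which is essentially what a filtering proxy would inspect); the ThingService names and package path are hypothetical:

```go
package thingcall

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	thingpb "example.com/gen/thingpb" // hypothetical generated bindings
)

// callAndClassify shows how the status carried in the grpc-status/grpc-message
// trailers surfaces as a *status.Status on the caller's side.
func callAndClassify(ctx context.Context, client thingpb.ThingServiceClient) error {
	_, err := client.GetThing(ctx, &thingpb.GetThingRequest{Id: "42"})
	if err == nil {
		return nil
	}
	st, _ := status.FromError(err) // decoded from the trailers by the gRPC runtime
	switch st.Code() {
	case codes.NotFound, codes.InvalidArgument:
		return err // "whitelisted" categories passed through unchanged
	default:
		return status.Error(codes.Internal, "upstream error") // everything else masked
	}
}
```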
I understand that gRPC is designed for a client-server architecture. A server provides remote services and clients obtain those services by calling the defined RPCs. But is it possible for a client to also define a service, so that other clients can request services from that client too?
For example: a server knows every client's location and can inform other clients about that information. A client, upon receiving the other clients' locations from the server, can then directly call the services provided by those other clients.
Can gRPC do that? Thank you!
Yes, this is possible.
The terms "client" and "server" are overloaded in this context and would be better thought of as (stub) caller and (implementation) receiver. It's possible for the client and server to be the same process but then you don't need the complexity of gRPC.
There's no prohibition on some entity functioning as both a caller ("client") and a receiver ("server"). This situation arises commonly in peer-to-peer networks and in microservices, where some original client calls a service which (acting as a client) then calls various other services.
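A minimal sketch of such a dual-role process in Go, assuming a hypothetical LocationService with generated bindings in a package locationpb (the addresses and field names are also made up):

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	locationpb "example.com/gen/locationpb" // hypothetical generated bindings
)

// peer implements LocationService so that other clients can call this process.
type peer struct {
	locationpb.UnimplementedLocationServiceServer
}

func (p *peer) GetLocation(ctx context.Context, _ *locationpb.GetLocationRequest) (*locationpb.GetLocationReply, error) {
	return &locationpb.GetLocationReply{Lat: 52.52, Lng: 13.40}, nil
}

func main() {
	// "Server" role: serve the LocationService on a local port.
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	locationpb.RegisterLocationServiceServer(srv, &peer{})
	go func() { log.Fatal(srv.Serve(lis)) }()

	// "Client" role: dial another peer (address learned from the central server)
	// and call the same service that this process also exposes.
	conn, err := grpc.Dial("other-peer:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	other := locationpb.NewLocationServiceClient(conn)
	loc, err := other.GetLocation(context.Background(), &locationpb.GetLocationRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("peer is at %v,%v", loc.Lat, loc.Lng)
}
```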
I have a situation where messages are being generated by an internal application, but the consumers for the messages are outside our enterprise network. Will either HTTP(S) transport or REST connectivity work in this scenario, with an HTTP reverse proxy in the DMZ? If not, is it safe to have a broker in the DMZ which can act as a gateway to outside consumers?
Well, the REST/HTTP approach to connecting to ActiveMQ is very limited, as it does not support true messaging semantics.
Exposing an ActiveMQ broker is no less secure than any other communication software, provided precautions are taken (TLS, default passwords changed, high-entropy passwords and/or mutual authentication used, recent patches applied, the web console/Jolokia not exposed externally without precautions, etc.).
In fact, you can buy hosted ActiveMQ instances from Amazon, which indicates that at least they think it's not such a bad idea to put them on the Internet.
A microservice architecture is often used alongside a reverse proxy (such as nginx or Apache httpd), and the API gateway pattern is used to implement cross-cutting concerns. Sometimes the reverse proxy does the work of the API gateway.
It would be good to see the clear differences between these two approaches.
It looks like the potential benefit of an API gateway is invoking multiple microservices and aggregating the results. All other responsibilities of an API gateway can be implemented using a reverse proxy, such as:
Authentication (can be done using nginx Lua scripts);
Transport security (a reverse proxy task in itself);
Load balancing
...
So, based on this, there are several questions:
Does it make sense to use an API gateway and a reverse proxy simultaneously (for example: request -> API gateway -> reverse proxy (nginx) -> concrete microservice)? In what cases?
What other features can be implemented with an API gateway that can't be implemented with a reverse proxy, and vice versa?
It is easier to think about them if you realize they aren't mutually exclusive. Think of an API gateway as a specific type of reverse proxy implementation.
Regarding your questions, it is not uncommon to see both used in conjunction, where the API gateway is treated as an application tier that sits behind a reverse proxy for load balancing and health checking. An example would be a WAF-sandwich architecture, in which your Web Application Firewall/API gateway is sandwiched between reverse proxy tiers: one for the WAF itself and the other for the individual microservices it talks to.
Regarding the differences, they are very similar. It's just nomenclature. As you take a basic reverse proxy setup and start bolting on more pieces like authentication, rate limiting, dynamic config updates, and service discovery, people are more likely to call that an API gateway.
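As a rough illustration of that progression, here's a sketch using the Go standard library plus golang.org/x/time/rate: a plain single-host reverse proxy with authentication and rate limiting bolted on in front (the API key, upstream address, and limits are placeholders):

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"

	"golang.org/x/time/rate"
)

func main() {
	// Start from a plain reverse proxy...
	backend, _ := url.Parse("http://orders:8080") // hypothetical upstream
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// ...then bolt on gateway-style concerns in front of it.
	limiter := rate.NewLimiter(100, 200) // 100 req/s with a burst of 200

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("X-Api-Key") != "expected-key" { // authentication
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		if !limiter.Allow() { // rate limiting
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	http.ListenAndServe(":8443", handler)
}
```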
I believe an API gateway is a reverse proxy that can be configured dynamically via an API (and potentially via a UI), while a traditional reverse proxy (like Nginx, HAProxy or Apache) is configured via a config file and has to be restarted when the configuration changes. Thus, an API gateway should be used when routing rules or other configuration change often. To your questions:
It makes sense as long as every component in this sequence serves its purpose.
The differences are not in the feature list but in the way configuration changes are applied.
Additionally, API gateways are often provided as SaaS, like Apigee or Tyk, for example.
Also, here's my tutorial on how to create a simple API Gateway with Node.js https://memz.co/api-gateway-microservices-docker-node-js/
Hope it helps.
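To illustrate the dynamic-configuration point above, here's a minimal sketch (Go standard library; the admin endpoint, route prefixes, and upstream addresses are made up) of a proxy whose routing table can be changed at runtime through an HTTP API rather than a config file and a restart:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
	"sync"
)

var (
	mu     sync.RWMutex
	routes = map[string]string{"/orders/": "http://orders:8080"} // illustrative defaults
)

func main() {
	// Admin API: POST /admin/route?prefix=/users/&upstream=http://users:8080
	// rewrites the routing table at runtime, with no restart.
	http.HandleFunc("/admin/route", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		routes[r.URL.Query().Get("prefix")] = r.URL.Query().Get("upstream")
		mu.Unlock()
	})

	// Proxy: resolve the upstream for each request at call time.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		mu.RLock()
		var target string
		for prefix, upstream := range routes {
			if strings.HasPrefix(r.URL.Path, prefix) {
				target = upstream
				break
			}
		}
		mu.RUnlock()
		if target == "" {
			http.NotFound(w, r)
			return
		}
		u, _ := url.Parse(target)
		httputil.NewSingleHostReverseProxy(u).ServeHTTP(w, r)
	})

	http.ListenAndServe(":8080", nil)
}
```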
An API gateway acts as a reverse proxy to accept all application programming interface (API) calls, aggregate the various services required to fulfill them, and return the appropriate result.
An API gateway has a more robust set of features, especially around security and monitoring, than an API proxy. I would say the API gateway pattern, also called Backend for Frontend (BFF), is widely used in microservices development. Check out the article for the benefits and features of the API gateway pattern in the microservices world.
On the other hand, an API proxy is basically a lightweight API gateway. It includes some basic security and monitoring capabilities. So, if you already have an API and your needs are simple, an API proxy will work fine.
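A toy sketch of the aggregation idea (Go standard library; the /dashboard route and upstream URLs are invented): one gateway/BFF endpoint calls two backend services and returns a single combined response:

```go
package main

import (
	"encoding/json"
	"io"
	"net/http"
)

// fetch grabs a backend response body, or null on failure (kept simple for the sketch).
func fetch(url string) json.RawMessage {
	resp, err := http.Get(url)
	if err != nil {
		return json.RawMessage("null")
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil || len(b) == 0 {
		return json.RawMessage("null")
	}
	return b
}

func main() {
	// One gateway endpoint fans out to two backend services and
	// returns a single combined document to the caller.
	http.HandleFunc("/dashboard", func(w http.ResponseWriter, r *http.Request) {
		out := map[string]json.RawMessage{
			"user":   fetch("http://users:8080/me"),      // hypothetical upstream
			"orders": fetch("http://orders:8080/recent"), // hypothetical upstream
		}
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(out)
	})
	http.ListenAndServe(":8080", nil)
}
```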
API gateways usually operate as an L7 construct.
API gateways provide additional functionality compared to a plain reverse proxy. If you consider some of the offerings out there, they can provide:
full API lifecycle management, including documentation
a portal which can be used as the source of truth for various client applications, and where you can provide client governance, rate limiting, etc.
routing to different versions of the API, including canary/beta versions
detecting usage patterns, registering apps, retrieving client credentials, etc.
However, with the advent of service meshes like Istio and Consul, a lot of the functionality of API gateways will be subsumed by meshes.
From HTTP: The Definitive Guide:
Strictly speaking, proxies connect two or more applications that speak the same protocol, while gateways hook up two or more parties that speak different protocols. A gateway acts as a "protocol converter," allowing a client to complete a transaction with a server, even when the client and server speak different protocols.
In practice, the difference between proxies and gateways is blurry. Because browsers and servers implement different versions of HTTP, proxies often do some amount of protocol conversion. And commercial proxy servers implement gateway functionality to support SSL security protocols, SOCKS firewalls, FTP access, and web-based applications.
Reverse proxies, such as Nginx and Apache, do not deal with observability, authentication, authorization, service orchestration, etc.; they only do load balancing and forward traffic upstream.
An API gateway is closer to the user's business scenario and helps users solve the security and observability issues of their various APIs and microservices.
This different positioning leads to different technical designs for reverse proxies and API gateways. API gateways such as Apache APISIX have nearly 100 plugins and support multiple programming languages for plugin development.
If you already have a good API gateway, there is no need to use a reverse proxy.
Regarding Andrey Chausenko's answer that
I believe an API gateway is a reverse proxy that can be configured dynamically via an API (and potentially via a UI), while a traditional reverse proxy (like Nginx, HAProxy or Apache) is configured via a config file and has to be restarted when the configuration changes.
I think this is no longer true, as modern reverse proxies like Envoy can be dynamically configured by a control plane via xDS.