I have a gRPC service that I want to add to service discovery so that a gRPC client can discover it, but I could not find anything online to help me with that.
I am using Steeltoe with Eureka.
Thanks.
Steeltoe doesn't have explicit support for gRPC, but at this point that doesn't appear to be necessary, because it does work.
See this issue for more info and/or to join the conversation.
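For anyone landing here later, here is roughly what the wiring can look like. This is only a sketch, assuming Steeltoe's Eureka discovery client plus Grpc.Net.Client; the app name my-grpc-service, the port, and the exact Steeltoe namespaces (which moved between Steeltoe versions) are assumptions for illustration, not something taken from the linked issue.

```csharp
// Sketch only. Server side: add the Steeltoe discovery client at startup,
//   services.AddDiscoveryClient(Configuration);
// and register the gRPC endpoint with Eureka in appsettings.json, e.g.
//   "eureka": {
//     "client": { "serviceUrl": "http://localhost:8761/eureka/", "shouldRegisterWithEureka": true },
//     "instance": { "appName": "my-grpc-service", "port": 5000 }
//   }
//
// Client side: resolve an instance through IDiscoveryClient and point a gRPC channel at it.
using System.Linq;
using Grpc.Net.Client;
using Steeltoe.Common.Discovery; // IDiscoveryClient/IServiceInstance (namespace differs in newer Steeltoe versions)

public class GrpcChannelFactory
{
    private readonly IDiscoveryClient _discovery;

    public GrpcChannelFactory(IDiscoveryClient discovery) => _discovery = discovery;

    public GrpcChannel CreateChannel()
    {
        // Look up the registered instances under the Eureka app name and pick one.
        IServiceInstance instance = _discovery.GetInstances("my-grpc-service").First();

        // Build the channel from the discovered host/port; switch to https
        // if the instance registers a secure port.
        return GrpcChannel.ForAddress($"http://{instance.Host}:{instance.Port}");
    }
}
```

The only gRPC-specific part is the last line: Eureka just hands back a host and port, and Grpc.Net.Client will dial whatever address you build from them.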
We currently have a requirement to pass data from a third-party API into Kafka topics. First we tried the HTTP Bridge, but it does not appear to support the HTTPS protocol. We have now been advised to use (or create) a Kafka connector instead, and we are confused about whether Kafka Connect can solve this and, if so, how. I am looking for suggestions on how to approach this problem.
Kafka Connect can send records to an HTTP(S) endpoint, or read from one into Kafka. It will not allow external users to read through it to get Kafka records.
The Strimzi Bridge and the Confluent REST Proxy should both support HTTPS, even if that means you need to place either behind an SSL-terminating reverse proxy such as Nginx, Caddy, etc.
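To make the bridge/proxy option concrete, here is a minimal sketch of the third party pushing a record over HTTPS. The host kafka-bridge.example.com, the topic my-topic, and the record contents are placeholders; the application/vnd.kafka.json.v2+json record envelope is the JSON format exposed by the Confluent REST Proxy v2 API and the Strimzi Bridge.

```csharp
// Sketch: POSTing a record to a Kafka HTTP bridge / REST proxy over HTTPS.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class BridgeProducer
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task ProduceAsync()
    {
        // v2 envelope: a list of records, each with an optional key and a value.
        var payload = @"{""records"":[{""key"":""order-42"",""value"":{""status"":""created""}}]}";

        using var content = new StringContent(
            payload, Encoding.UTF8, "application/vnd.kafka.json.v2+json");

        // TLS is the client's side of the story here; the bridge/proxy itself (or a
        // reverse proxy in front of it) still has to be configured with a certificate.
        HttpResponseMessage response = await Http.PostAsync(
            "https://kafka-bridge.example.com/topics/my-topic", content);

        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```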
I'm looking for documentation on the overall networking for WKC on Cloud in order to feel confident in its viability and security. I want to know all connectivity and networking options for WKC.
The recommended way is to use the IBM Cloud Secure Gateway service. WKC has direct support for it.
Here are the docs for creating a connection.
Here are the docs for configuring a Secure Gateway service.
Here are the docs for all connections available out of the box in WKC.
We have an Azure Function that supports a SignalR hub and leverages Azure SignalR Service to publish messages to connected SignalR clients. We'd like to enable the MessagePack protocol in the function, but we could not find any documentation or guidelines on how to do it. Any ideas on how to approach this problem?
Interestingly, the MessagePack protocol is initiated and used when the client app negotiates with the function. Further information can be found here.
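In case it helps the next person, here is a minimal sketch of that client-side negotiation for a .NET client, assuming the Microsoft.AspNetCore.SignalR.Client and Microsoft.AspNetCore.SignalR.Protocols.MessagePack packages; the function-app URL and the newMessage target name are placeholders (the JavaScript client has the equivalent @microsoft/signalr-protocol-msgpack package).

```csharp
// Sketch: a client opting in to MessagePack when connecting through an
// Azure Functions negotiate endpoint.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;
using Microsoft.Extensions.DependencyInjection; // home of the AddMessagePackProtocol() extension

public static class MessagePackClient
{
    public static async Task RunAsync()
    {
        HubConnection connection = new HubConnectionBuilder()
            // The client calls <this URL>/negotiate, i.e. the Function's negotiate endpoint.
            .WithUrl("https://my-function-app.azurewebsites.net/api")
            // Swaps the hub protocol from JSON to MessagePack during negotiation.
            .AddMessagePackProtocol()
            .Build();

        connection.On<string>("newMessage", msg => Console.WriteLine(msg));

        await connection.StartAsync();
        Console.WriteLine("Connected.");
    }
}
```

As the answer above says, the protocol choice is settled on the client side during negotiation rather than in the function code.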
I have a Kubernetes cluster with running pods. In order to monitor and troubleshoot the infrastructure, I want to implement a centralized logging solution so that all incoming and outgoing HTTP requests are logged in one place.
For incoming requests this is not a problem at all: I can use the nginx logs from the ingress controller and present them.
I also understand that I can log outgoing requests inside the applications I run in the pods, but the problem is that applications from outside developers are also used, and they may not include a logging implementation.
As for outgoing requests, there is no solution provided by default, if I understand it right. I have explored Kubernetes logging and Kubernetes audit, but they do not provide such a feature.
I probably need some kind of network sniffer, but that is quite a low-level solution for a problem like this, as far as I can see. So the question is: is there any out-of-the-box implementation for this requirement?
Thanks!
Take a look at a service mesh solution like Istio or Linkerd, as well as tracing solutions like Jaeger or Zipkin. With these you can get full observability of how information flows in, out of, and through your Kubernetes cluster.
I am reading several articles about microservices.
In the part about server-side service discovery, I am drawn to the Kubernetes and Marathon style of running a proxy on each host/Docker container that functions as a server-side discovery router.
By doing this, we can move all the code coupled to service registration/discovery and circuit breaking into that proxy.
By configuring the router on each host/container, the proxy can be transparent to the service and the network, and we can implement a gossip strategy so the proxies sync their knowledge of the network. It seems like an excellent solution.
Can anyone explain the drawbacks of doing this and recommend some open-source solutions that implement this kind of thing?