Configuring a retry policy for gRPC requests

I was trying to configure a retry policy from the client side for some gRPC services, but it's not behaving the way I expect, so either I'm misunderstanding how retry policies work in gRPC or there's a mistake in the policy. Here's the policy:
var retryPolicy = `{
  "methodConfig": [{
    "name": [{"service": "serviceA"}, {"service": "serviceB"}],
    "timeout": "30.0s",
    "waitForReady": true,
    "retryPolicy": {
      "MaxAttempts": 10,
      "InitialBackoff": ".5s",
      "MaxBackoff": "10s",
      "BackoffMultiplier": 1.5,
      "RetryableStatusCodes": [ "UNAVAILABLE", "UNKNOWN" ]
    }
  }]
}`
What I expected was that if the client's gRPC request to a method defined in one of the services (serviceA or serviceB) failed, the call would be retried, and since waitForReady is true the client would block the call until a connection is available (or the call is canceled or times out) and retry it if it fails due to a transient error. But when I purposely take down the server this request is going to, the client gets an Unavailable gRPC status code with the error: Error while dialing dial tcp xx.xx.xx.xx:xxxx: i/o timeout. The client didn't get this error message 30 seconds later; it received it right away. Could the reason be how I'm giving the service names? Does it need the path of the file where the service is defined? For a bit more context, the gRPC service is defined in another package which the client imports. Any help would be greatly appreciated.

Looking through the documentation, I came across this link: https://github.com/grpc/grpc-proto/blob/master/grpc/service_config/service_config.proto, and on line 72 it mentions:
message Name {
  string service = 1;   // Required. Includes proto package name.
  string method = 2;
}
I wasn't adding the proto package name when listing the services. So the retry policy should be:
var retryPolicy = `{
  "methodConfig": [{
    "name": [{"service": "pkgA.serviceA"}, {"service": "pkgB.serviceB"}],
    "timeout": "30.0s",
    "waitForReady": true,
    "retryPolicy": {
      "MaxAttempts": 10,
      "InitialBackoff": ".5s",
      "MaxBackoff": "10s",
      "BackoffMultiplier": 1.5,
      "RetryableStatusCodes": [ "UNAVAILABLE", "UNKNOWN" ]
    }
  }]
}`
where pkgA and pkgB are the proto package names.
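For completeness, here's a minimal sketch of how such a service config can be attached on the client side with grpc-go, using grpc.WithDefaultServiceConfig. The target address and the plaintext credentials are placeholders for illustration only:
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// Corrected policy from above: service names are fully qualified (proto package + service).
var retryPolicy = `{
  "methodConfig": [{
    "name": [{"service": "pkgA.serviceA"}, {"service": "pkgB.serviceB"}],
    "timeout": "30.0s",
    "waitForReady": true,
    "retryPolicy": {
      "MaxAttempts": 10,
      "InitialBackoff": ".5s",
      "MaxBackoff": "10s",
      "BackoffMultiplier": 1.5,
      "RetryableStatusCodes": [ "UNAVAILABLE", "UNKNOWN" ]
    }
  }]
}`

func main() {
	// "xx.xx.xx.xx:xxxx" is a placeholder target; insecure credentials keep the sketch short.
	conn, err := grpc.Dial("xx.xx.xx.xx:xxxx",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(retryPolicy), // client-side default service config
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Build the generated client stubs for pkgA.serviceA / pkgB.serviceB from conn here.
}
The default service config set this way applies when the name resolver doesn't supply one of its own.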

Related

Firebase service account private key exposed to admin.firestore

I configured firebase admin in my node.js backend into a variable called admin, and I call admin.firestore(). When I do console.log(admin.firestore()), I see the private key of my service account being displayed in the back-end terminal. Here is the console log I see:
Firestore {
  _settings: {
    credentials: {
      private_key: 'my actual private key',
      client_email: 'xxxxx'
    },
    projectId: 'pxxxx3',
    firebaseVersion: '8.13.0',
    libName: 'gccl',
    libVersion: '3.8.6 fire/8.13.0'
  },
  _settingsFrozen: false,
  _serializer: Serializer { createReference: [Function], allowUndefined: false },
  _projectId: 'xxxxx',
  registeredListenersCount: 0,
  _lastSuccessfulRequest: 0,
  _backoffSettings: { initialDelayMs: 100, maxDelayMs: 60000, backoffFactor: 1.3 },
  _preferTransactions: false,
  _clientPool: ClientPool {
    concurrentOperationLimit: 100,
    maxIdleClients: 1,
    clientFactory: [Function],
    clientDestructor: [Function],
    activeClients: Map {},
    terminated: false,
    terminateDeferred: Deferred {
      resolve: [Function],
      reject: [Function],
      promise: [Promise]
    }
  }
}
I am a bit concerned that it might be a security risk. Although it is only within the code of my backend, should I be concerned?
If data is only ever available on your backend, then it is "secure" in that only people who have permission to access your backend can see it. The problem is not that the data is in the log; the problem is who you allow to see that log.
If the data never escapes to a client app, then you don't have to worry about random people on the internet seeing your credentials.
IMHO, if an external entity can log into your system, you have a different kind of problem.
If you think about it, most environment variables have to be placed somewhere at runtime. They should not be hardcoded in your code, but at runtime you need a mechanism to ensure the values are copied into your system. After that, it's all about authorization: only users with the right permissions should be allowed to get into your system.

WireMock To Use As Proxy For SOAP Service

Here is the scenario I'm trying to work on:
I'm writing Contract Driven Tests using Spring Cloud Contract. The tests for inter-communication between the microservices work fine.
Some microservices are calling SOAP-based services. As part of integration tests, I'm trying to use WireMock as a proxy for the SOAP-based services. Basically, WireMock should intercept the call, then call the target live environment with the same request, and return the same response to the test as a stub.
Unfortunately, I couldn't find any examples of how to proceed with that. These services use the HTTP protocol. Any examples of how to do this, or any pointers, would be great. Thanks!
Firstly you need to point your SOAP client to the WireMock base URL, so e.g. if you're using a Spring properties file you might have something like this:
soap.api.host=wiremock-host.internal
soap.api.port=8888
Then you need to configure the WireMock server with a low-priority, broad matching proxy stub. Here's an example of how that would look in JSON form:
{
  "priority": 8,
  "response": {
    "proxyBaseUrl": "http://target.soap.endpoint"
  }
}
Then finally, you would create additional stubs (at the default priority) for each request you want to intercept e.g.
{
  "request": {
    "method": "POST",
    "urlPath": "/v1/some/thing",
    "headers": {
      "SOAPAction": {
        "contains": "MyAction"
      }
    }
  },
  "response": {
    "status": 200,
    "body": "<soap:Envelope ..."
  }
}
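If you're running WireMock standalone, one way to register these JSON stubs is through its admin API. Here's a minimal sketch of that HTTP call in Go, purely as an illustration; the host and port come from the soap.api.* properties above, /__admin/mappings is WireMock's stub-registration endpoint, and in a Spring test you'd more likely use WireMock's Java DSL or mappings files instead:
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

// The broad, low-priority proxy stub from above; additional stubs can be registered the same way.
const proxyStub = `{
  "priority": 8,
  "response": {
    "proxyBaseUrl": "http://target.soap.endpoint"
  }
}`

func main() {
	// POSTing a stub mapping to /__admin/mappings creates it on the running WireMock server.
	resp, err := http.Post("http://wiremock-host.internal:8888/__admin/mappings",
		"application/json", bytes.NewBufferString(proxyStub))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("created stub, status:", resp.Status)
}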

Resilience4j and Spring Actuator - Open circuit killing service

I have added the following dependency to my Spring Boot project
implementation 'io.github.resilience4j:resilience4j-spring-boot2:0.14.1'
When a circuit breaker opens, I get the following response on my actuator/health endpoint, with status code 503 Service Unavailable:
{
  "status": "DOWN",
  "details": {
    "diskSpace": {
      "status": "UP",
      "details": {
        "total": 499963174912,
        "free": 432263229440,
        "threshold": 10485760
      }
    },
    "refreshScope": {
      "status": "UP"
    },
    "getFlightInfoCircuitBreaker": {
      "status": "DOWN",
      "details": {
        "failureRate": "100.0%",
        "failureRateThreshold": "2.0%",
        "maxBufferedCalls": 1,
        "bufferedCalls": 1,
        "failedCalls": 1,
        "notPermittedCalls": 1,
        "state": "OPEN"
      }
    }
  }
}
My AWS ECS container health check uses this endpoint to determine its health, and restarts the container on a non-200 response.
As I do not want my service to be restarted when a circuit breaker opens, is there a way to have an open circuit breaker without causing the status of the service to be DOWN?
I am aware of the registerHealthIndicator: false property to get around this issue, but this removes the circuit breaker stats from actuator, which I would still like to see.
Since Resilience4j 1.2.0 you can set allowHealthIndicatorToFail to false for this; the health indicator then still reports the circuit breaker details without pulling the overall health status down.
I can think of two possibilities.
1) Create a custom HealthIndicator based on resilience4j's code:
https://github.com/resilience4j/resilience4j/blob/master/resilience4j-spring-boot2/src/main/java/io/github/resilience4j/circuitbreaker/monitoring/health/CircuitBreakerHealthIndicator.java
You would need to return Health.up() or Health.unknown() to avoid the 503 from the /health endpoint.
2) Disable resilience4j's health indicator and get the same information from the metrics actuator endpoint.

Using nginx to redirect dynamic request

I have a druid service which runs on my local machine on port 8082 as follows:
Method POST: http://localhost:8082/druid/v2/?pretty
Body:
{
  "queryType": "topN",
  "dataSource": "some_source",
  "intervals": ["2015-09-12/2015-09-13"],
  "granularity": "all",
  "dimension": "page",
  "metric": "edits",
  "threshold": 25,
  "filter": {
    "type": "and",
    "fields": [
      {
        "type": "selector",
        "dimension": "pix_id",
        "value": "1234"
      }
    ]
  }
}
Hitting this query gives me a list of records based on the value of the dimension 'pix_id'.
Now, I want to set up nginx such that the external application has no clue about my druid service. I just want the external application to hit the URL:
http://localhost:80/pix_id/98765
This URL should dynamically generate a JSON body with the above-mentioned pix_id, send the request to Druid, and return the response to the user.
Is it possible to do this in nginx?
Yes, you can do this, but I would rather suggest having a PHP or Python script in between to produce the results.
So the setup would be:
Have the PHP page receive the request.
Make a curl call from PHP to Druid, locally.
Get the result and pass the response back (see the sketch below).
There are multiple benefits of doing this, e.g.:
You completely mask Druid, and you're not necessarily limited to Druid.
You can do more calculations in PHP before sending the request to Druid.
Caching at the PHP end.
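As a concrete illustration of that in-between service, here's a minimal sketch, written in Go purely for illustration (the same idea works in PHP or Python); the Druid address, dataSource, and query fields are copied from the question, and /pix_id/ is the path the external application would call:
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
	"strings"
)

const druidURL = "http://localhost:8082/druid/v2/?pretty" // local Druid endpoint from the question

func pixHandler(w http.ResponseWriter, r *http.Request) {
	// Expect paths like /pix_id/98765 and pull out the id.
	pixID := strings.TrimPrefix(r.URL.Path, "/pix_id/")
	if pixID == "" || strings.Contains(pixID, "/") {
		http.Error(w, "expected /pix_id/<value>", http.StatusBadRequest)
		return
	}

	// Build the topN query from the question, substituting the requested pix_id.
	query := map[string]interface{}{
		"queryType":   "topN",
		"dataSource":  "some_source",
		"intervals":   []string{"2015-09-12/2015-09-13"},
		"granularity": "all",
		"dimension":   "page",
		"metric":      "edits",
		"threshold":   25,
		"filter": map[string]interface{}{
			"type": "and",
			"fields": []interface{}{
				map[string]interface{}{
					"type":      "selector",
					"dimension": "pix_id",
					"value":     pixID,
				},
			},
		},
	}

	body, err := json.Marshal(query)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Forward the query to Druid and relay its response to the caller.
	resp, err := http.Post(druidURL, "application/json", bytes.NewReader(body))
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	http.HandleFunc("/pix_id/", pixHandler)
	// nginx listening on port 80 can proxy_pass /pix_id/ requests to this port.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
nginx then only has to proxy_pass /pix_id/ requests to this service, so Druid stays completely hidden from the external application.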

Openstack API Authentication

Openstack noob here. I have setup an Ubuntu VM with DevStack, and am trying to authenticate with Keystone to obtain a token to be used for subsequent Openstack API calls. The identity endpoint shown on the “API Access” page in Horizon is: http://<DEVSTACK_IP>/identity.
When I post the below JSON payload to this endpoint, I get the error get_version_v3() got an unexpected keyword argument 'auth'.
{
  "auth": {
    "identity": {
      "methods": [
        "password"
      ],
      "password": {
        "user": {
          "name": "admin",
          "domain": {
            "name": "Default"
          },
          "password": "AdminPassword"
        }
      }
    }
  }
}
Based on the Openstack docs, I should be hitting http://<DEVSTACK_IP>/v3/auth/tokens to obtain a token, but when I hit that endpoint, I get 404 Not Found.
I'm currently using Postman for testing this, but will eventually be doing this programmatically.
Does anybody have any experience with authenticating against the Openstack API that can help?
Not sure whether you want to do it in a python way, but if you do, here is a way to do it:
from keystoneauth1.identity import v3
from keystoneauth1 import session

v3_auth = v3.Password(auth_url=V3_AUTH_URL,
                      username=USERNAME,
                      password=PASSWORD,
                      project_name=PROJECT_NAME,
                      project_domain_name="default",
                      user_domain_name="default")
v3_ses = session.Session(auth=v3_auth)
auth_token = v3_ses.get_token()
And your V3_AUTH_URL should be http://<DEVSTACK_IP>:5000/v3, since Keystone uses port 5000 by default.
If you have a multi-domain DevStack you can change the domains; otherwise they should be default.
Just in case you don't have the client library installed: pip install python-keystoneclient
Here is a good doc for you to read about it:
https://docs.openstack.org/keystoneauth/latest/using-sessions.html
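If you'd rather keep calling the raw API (as you were doing with Postman), here's a minimal sketch of the same token request in Go; <DEVSTACK_IP> is a placeholder to replace with your host, the payload mirrors the one in the question, and Keystone returns the issued token in the X-Subject-Token response header:
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Same unscoped password authentication payload as in the question.
	payload := `{
	  "auth": {
	    "identity": {
	      "methods": ["password"],
	      "password": {
	        "user": {
	          "name": "admin",
	          "domain": {"name": "Default"},
	          "password": "AdminPassword"
	        }
	      }
	    }
	  }
	}`

	// Keystone listens on port 5000 by default; replace <DEVSTACK_IP> with your DevStack host.
	resp, err := http.Post("http://<DEVSTACK_IP>:5000/v3/auth/tokens",
		"application/json", strings.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The token itself comes back in the X-Subject-Token response header.
	fmt.Println("status:", resp.Status)
	fmt.Println("token:", resp.Header.Get("X-Subject-Token"))
}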
HTH
