I have tried hard to use the retryPolicy from the gRPC documentation (https://github.com/grpc/proposal/blob/master/A6-client-retries.md#retry-policy), but I fail to understand where I should set up the config in my code.
Ideally I would like the Python client to specify its own retry policy, but I am also interested in understanding how to manage it from the server side.
After some digging, I came up with this snippet, but it does not work:
import json
from grpc import insecure_channel

service_default_config = {
    # see https://github.com/grpc/proposal/blob/master/A6-client-retries.md#retry-policy-capabilities
    "retryPolicy": {
        "maxAttempts": 5,
        "initialBackoff": "1s",
        "maxBackoff": "10s",
        "backoffMultiplier": 2,
        "retryableStatusCodes": [
            "RESOURCE_EXHAUSTED",
            "UNAVAILABLE"
        ]
    }
}
service_default_config = json.dumps(service_default_config)
options = [
    ('grpc.service_config', service_default_config)
]
insecure_channel(hostname, options=options)
Can anyone point me to the relevant documentation to understand how this works, or explain what I am misunderstanding?
I had the same problem with the syntax of the config and eventually found the answer.
In short, the retryPolicy has to be specified as part of a methodConfig, which describes how to handle calls to a specific service:
import json
import grpc

json_config = json.dumps(
    {
        "methodConfig": [
            {
                "name": [{"service": "<package>.<service>"}],
                "retryPolicy": {
                    "maxAttempts": 5,
                    "initialBackoff": "0.1s",
                    "maxBackoff": "10s",
                    "backoffMultiplier": 2,
                    "retryableStatusCodes": ["UNAVAILABLE"],
                },
            }
        ]
    }
)

address = 'localhost:50051'
channel = grpc.insecure_channel(address, options=[("grpc.service_config", json_config)])
where <package> and <service> are defined in your proto file.
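As a quick sanity check, you can exercise the policy through a generated stub on that channel; once the retry budget is exhausted, the last status surfaces as a grpc.RpcError. A minimal sketch, assuming hypothetical generated classes EchoStub and EchoRequest (substitute your own generated code); note that on older grpc releases you may also need to add ("grpc.enable_retries", 1) to the channel options:

# Hypothetical generated modules; replace with your own *_pb2 / *_pb2_grpc files.
from echo_pb2 import EchoRequest
from echo_pb2_grpc import EchoStub

stub = EchoStub(channel)
try:
    # Transparent retries happen inside this call, per the policy above.
    reply = stub.Echo(EchoRequest(message="ping"))
except grpc.RpcError as err:
    # Raised only after all maxAttempts have failed with a retryable code.
    print(err.code(), err.details())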
I am trying to send fluentbit metrics to an external source for processing. My understanding from the documentation is that the fluentbit_metrics input is intended to be used with output plugins that are for specific telemetry solutions like Prometheus, OpenTelemetry, etc. However, for my purposes, I cannot actually use any of those solutions and instead have to use a different bespoke metrics solution. For this to work, I would like to just send lines of text to a port that my metrics solution is listening on.
I am trying to use the fluentbit forward output to send data to this endpoint, but I am getting an error in response from my metrics solution because it is receiving a big JSON object which it can't parse. However, when I output the same fluentbit_metrics input to a file or to stdout, the contents of the file are more like what I would expect, with each metric being just a line of text. If these text lines were what was being sent to my metrics endpoint, I wouldn't have any issue ingesting them.
I know that I could take on the work to change my metrics solution to parse and process this JSON map, but before I do that, I wanted to check whether this is the only way forward for me. So, my question is: is there a way to get fluentbit to send fluentbit_metrics to a forward output without converting the metrics into a big JSON object? Is the schema for that JSON object specific to Prometheus? Is there a reason why the outputs differ so substantially?
Here is a copy of an example config I am using with fluentbit:
[SERVICE]
    # This is a commented line
    Daemon        off
    log_level     info
    log_file      C:\MyFolder\fluentlog.txt
    flush         1
    parsers_file  .\parsers.conf

[INPUT]
    name             fluentbit_metrics
    tag              internal_metrics
    scrape_interval  2

[OUTPUT]
    Name             forward
    Match            internal_metrics
    Host             127.0.0.1
    Port             28232
    tag              internal_metrics
    Time_as_Integer  true

[OUTPUT]
    name   stdout
    match  *
And here is the output from the forward output plugin:
{
  "meta": {
    "cmetrics": {},
    "external": {},
    "processing": {
      "static_labels": []
    }
  },
  "metrics": [
    {
      "meta": {
        "ver": 2,
        "type": 0,
        "opts": {
          "ns": "fluentbit",
          "ss": "",
          "name": "uptime",
          "desc": "Number of seconds that Fluent Bit has been running."
        },
        "labels": [
          "hostname"
        ],
        "aggregation_type": 2
      },
      "values": [
        {
          "ts": 1670884364820306500,
          "value": 22,
          "labels": [
            "myHostName"
          ],
          "hash": 16603984480778988994
        }
      ]
    }, etc.
And here is the output of the same metrics from stdout:
2022-12-12T22:02:13.444100300Z fluentbit_uptime{hostname="myHostName"} = 2
2022-12-12T22:02:11.721859000Z fluentbit_input_bytes_total{name="tail.0"} = 1138
2022-12-12T22:02:11.721859000Z fluentbit_input_records_total{name="tail.0"} = 12
2022-12-12T22:02:11.444943400Z fluentbit_input_files_opened_total{name="tail.0"} = 1
I was trying to configure a retry policy from the client side for some gRPC services, but it's not behaving the way I expect, so I might be misunderstanding how retry policies work in gRPC, or there may be a mistake in the policy. Here's the policy:
var retryPolicy = `{
    "methodConfig": [{
        "name": [{"service": "serviceA"}, {"service": "serviceB"}],
        "timeout": "30.0s",
        "waitForReady": true,
        "retryPolicy": {
            "MaxAttempts": 10,
            "InitialBackoff": ".5s",
            "MaxBackoff": "10s",
            "BackoffMultiplier": 1.5,
            "RetryableStatusCodes": [ "UNAVAILABLE", "UNKNOWN" ]
        }
    }]
}`
What I expected was that if the client's gRPC request to a method defined in one of the services (serviceA or serviceB) failed, it would be retried, and since waitForReady is true, the client would block the call until a connection became available (or the call was canceled or timed out) and retry the call if it failed due to a transient error. But when I purposefully take down the server this request is going to, the client gets an Unavailable gRPC status code with the error: Error while dialing dial tcp xx.xx.xx.xx:xxxx: i/o timeout. The client didn't get this error 30 seconds later; it received it right away. Could the reason be how I'm giving the service names? Does it need the path of the file where the service is defined? For a bit more context, the gRPC service is defined in another package which the client imports. Any help would be greatly appreciated.
Looking through the documentation, I came across this link: https://github.com/grpc/grpc-proto/blob/master/grpc/service_config/service_config.proto, and on line 72 it mentions:
message Name {
  string service = 1;   // Required. Includes proto package name.
  string method = 2;
}
I wasn't adding the proto package name when listing the services. So the retry policy should be:
var retryPolicy = `{
    "methodConfig": [{
        "name": [{"service": "pkgA.serviceA"}, {"service": "pkgB.serviceB"}],
        "timeout": "30.0s",
        "waitForReady": true,
        "retryPolicy": {
            "MaxAttempts": 10,
            "InitialBackoff": ".5s",
            "MaxBackoff": "10s",
            "BackoffMultiplier": 1.5,
            "RetryableStatusCodes": [ "UNAVAILABLE", "UNKNOWN" ]
        }
    }]
}`
where pkgA and pkgB are the proto package names.
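For reference, here is a minimal sketch of how a config string like this is attached on the client side, assuming a reasonably recent grpc-go (the target address is a placeholder):

import (
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func dial() (*grpc.ClientConn, error) {
    // retryPolicy is the JSON string above; it becomes the default service
    // config for this connection (a config from the name resolver still wins).
    return grpc.Dial(
        "localhost:50051", // placeholder target
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        grpc.WithDefaultServiceConfig(retryPolicy),
    )
}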
I'm creating a blog using Next.js + MDX-Bundler and trying to use remark-disable-tokenizers to disable indented code blocks, but I'm not able to make it work. I found a reference here which says that we can use remark-disable-tokenizers for this purpose.
Here are my xdmOptions for reference:
import disableTokens from 'remark-disable-tokenizers';

xdmOptions(options) {
  options.rehypePlugins = [
    ...(options.rehypePlugins ?? []),
    rehypeSlug,
    rehypeCodeTitles,
    rehypePrism,
    [disableTokens,
      {
        block: [
          ['indentedCode', 'indented code is not supported by MDX-Bundler']
        ]
      }],
    [
      rehypeAutolinkHeadings,
      {...}
    ]
  ];
  return options;
},
I'm just getting started with next-offline and found the section regarding workbox integration and its recipes.
According to the docs:
If you're new to workbox, I'd recommend reading this quick guide -- anything inside of workboxOpts will be passed to workbox-webpack-plugin.
Define a workboxOpts object in your next.config.js and it will get passed to workbox-webpack-plugin. Workbox is what next-offline uses under the hood to generate the service worker; you can learn more about it here.
After digging around, I found this great section.
Essentially it suggests using one of two options: GenerateSW or InjectManifest.
I would like to use InjectManifest; however, when I try to implement that in my next.config.js file, I get this error:
"runtimeCaching" is not a supported parameter.
This is my next.config.js:
const withCSS = require('@zeit/next-css');
const withSass = require('@zeit/next-sass');
const withImages = require('next-images');
const optimizedImages = require('next-optimized-images');
const withOffline = require('next-offline');

module.exports = withOffline(
  withImages(
    optimizedImages(
      withCSS(
        withSass({
          // useFileSystemPublicRoutes: false,
          // generateSw: false, // this allows all your workboxOpts to be passed in injectManifest
          generateInDevMode: true,
          workboxOpts: {
            swDest: './service-worker.js', // this is the important part
            exclude: [/.+error\.js$/, /\.map$/, /\.(?:png|jpg|jpeg|svg)$/],
            runtimeCaching: [
              {
                urlPattern: /\.(?:png|jpg|jpeg|svg)$/,
                handler: 'CacheFirst',
                options: {
                  cacheName: 'hillfinder-images'
                }
              },
              {
                urlPattern: /^https?.*/,
                handler: 'NetworkFirst',
                options: {
                  cacheName: 'hillfinder-https-calls',
                  networkTimeoutSeconds: 15,
                  expiration: {
                    maxEntries: 150,
                    maxAgeSeconds: 30 * 24 * 60 * 60 // 1 month
                  },
                  cacheableResponse: {
                    statuses: [0, 200]
                  }
                }
              }
            ]
          },
          dontAutoRegisterSw: false,
          env: {
            MAPBOX_ACCESS_TOKEN: process.env.MAPBOX_ACCESS_TOKEN,
            useFileSystemPublicRoutes: false
          },
          webpack(config, options) {
            config.module.rules.push({
              test: /\.(png|jpg|gif|svg|eot|ttf|woff|woff2)$/,
              use: {
                loader: 'url-loader',
                options: {
                  limit: 100000,
                  target: 'serverless'
                }
              }
            });
            return config;
          }
        })
      )
    )
  )
);
Also, when I check the Application pane in DevTools, I see this:
You'll notice what appears to be a duplication of cache names, i.e. https-calls vs. hillfinder-https-calls, and images vs. hillfinder-images.
I thought the cacheName field in the options: {} of each entry was supposed to let you set a custom name?
Just wondering if anyone has had experience setting this up?
Thank you in advance!
(These comments apply to the basic Workbox build tools, not specifically to the next-offline wrapper, but I think they're still accurate.)
If you're using InjectManifest mode, the idea is that you write all of your service worker logic, using the underlying pieces of Workbox that you need, following a model that's similar to what's described in the Getting Started guide. You should include a call to precacheAndRoute(self.__WB_MANIFEST) somewhere in your service worker, and then the InjectManifest build tool is responsible for swapping out self.__WB_MANIFEST with an array containing the list of URLs to precache, along with revision information for each URL.
The runtimeCaching parameter is not compatible with InjectManifest. It's a parameter that can be used in GenerateSW mode, in which the Workbox build tool creates an entire service worker for you (including runtime caching routes). The GenerateSW mode takes in a declarative configuration and spits out the code for a service worker based on that configuration. If that sounds good (you'd just like to configure some build options and get a complete service worker as a result), then using GenerateSW is the right choice.
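To make the difference concrete, a hand-written service worker for InjectManifest mode might look roughly like this. This is a sketch of the plain Workbox model, not next-offline-specific; the route predicate and cache name are simply carried over from the config in the question:

// service-worker.js -- InjectManifest replaces self.__WB_MANIFEST at build time.
import { precacheAndRoute } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { NetworkFirst } from 'workbox-strategies';

// Precache the build assets injected by the plugin.
precacheAndRoute(self.__WB_MANIFEST);

// Runtime caching is written as code here instead of a runtimeCaching option.
registerRoute(
  ({ url }) => url.protocol === 'https:' || url.protocol === 'http:',
  new NetworkFirst({
    cacheName: 'hillfinder-https-calls',
    networkTimeoutSeconds: 15
  })
);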
OpenStack noob here. I have set up an Ubuntu VM with DevStack and am trying to authenticate with Keystone to obtain a token to be used for subsequent OpenStack API calls. The identity endpoint shown on the "API Access" page in Horizon is: http://<DEVSTACK_IP>/identity.
When I post the JSON payload below to this endpoint, I get the error get_version_v3() got an unexpected keyword argument 'auth'.
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {
                        "name": "Default"
                    },
                    "password": "AdminPassword"
                }
            }
        }
    }
}
Based on the OpenStack docs, I should be hitting http://<DEVSTACK_IP>/v3/auth/tokens to obtain a token, but when I hit that endpoint, I get 404 Not Found.
I'm currently using Postman for testing this, but will eventually be doing it programmatically.
Does anybody have any experience with authenticating against the Openstack API that can help?
Not sure whether you want to do it the Python way, but if you do, here is one:
from keystoneauth1.identity import v3
from keystoneauth1 import session

v3_auth = v3.Password(auth_url=V3_AUTH_URL,
                      username=USERNAME,
                      password=PASSWORD,
                      project_name=PROJECT_NAME,
                      project_domain_name="default",
                      user_domain_name="default")

v3_ses = session.Session(auth=v3_auth)
auth_token = v3_ses.get_token()
And your V3_AUTH_URL should be http://<DEVSTACK_IP>:5000/v3, since Keystone uses port 5000 as the default.
If you have a multi-domain DevStack you can change the domains; otherwise they should be default.
Just in case you don't have the client library installed: pip install python-keystoneclient
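If you'd rather keep testing the raw REST call first (from Postman or code), the same port fix applies: POST your payload to http://<DEVSTACK_IP>:5000/v3/auth/tokens, and read the issued token from the X-Subject-Token response header. A minimal sketch with requests, reusing the payload from the question:

import requests

payload = {"auth": {"identity": {"methods": ["password"], "password": {"user": {
    "name": "admin", "domain": {"name": "Default"}, "password": "AdminPassword"}}}}}

resp = requests.post("http://<DEVSTACK_IP>:5000/v3/auth/tokens", json=payload)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]  # the token is in this header, not the body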
Here is a good doc for you to read about it:
https://docs.openstack.org/keystoneauth/latest/using-sessions.html
HTH