How do I grab the desired replica sets of a helm release using jsonpath?

I have gotten this far:
$kubectl get replicaset --namespace default -l "app=myapp,release=myapp" -o jsonpath="{.items[0].metadata.annotations}"
Which gives me:
map[deployment.kubernetes.io/revision:1 deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3]
I want to extract '2'
I tried various versions of
$kubectl get replicaset --namespace default -l "app=myapp,release=myapp" -o jsonpath="{.items[0].metadata.annotations.'deployment.kubernetes.io\/desired-replicas'}"
but I am getting a blank response.
Any help is appreciated

Try -o jsonpath="{.items[0].metadata.annotations.deployment\.kubernetes\.io/desired-replicas}"
In other words, escape any dots inside the key itself with \ so jsonpath doesn't treat them as path separators.
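If the backslash escaping is awkward to quote in your shell, a go-template output gives the same result without any escaping, because the annotation key is passed as a plain string. A sketch using the same label selector as above (untested against your cluster):

```shell
kubectl get replicaset --namespace default -l "app=myapp,release=myapp" \
  -o go-template='{{index (index .items 0).metadata.annotations "deployment.kubernetes.io/desired-replicas"}}'
```

The index function looks the key up in the annotations map, so dots in the key need no special treatment.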

Related

How to disable "private mount namespace" (sandboxing) with the Nix package manager?

I'm trying to use nix on repl.it. I'm using static-nix from https://matthewbauer.us/blog/static-nix.html. If I run the following code:
mkdir -p "$HOME/.cache/nix/"
curl https://matthewbauer.us/nix > "$HOME/.cache/nix/nix.exe"
cat "$HOME/.cache/nix/nix.exe" | bash -s run --no-sandbox --store "$HOME/.cache/nix/store" -f channel:nixpkgs-unstable bash graphviz -c sh -c 'dot --help'
I get this error:
error: setting up a private mount namespace: Operation not permitted
I tried --no-sandbox, --option sandbox false and --option build-use-sandbox false, none of these have any effect on the error.
This is executed as non-root on a machine for which it is not possible for me to change kernel settings.
Here's a REPL reproducing the issue (it runs for a short while before displaying the error): https://repl.it/#suzannesoy/AgonizingWittyCoding#main.sh

Install Helm v3 in Kubernetes (GKE)

I am trying to install nginx ingress using Helm version 3 on Google Cloud Terminal as follows:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
and
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install my-nginx stable/nginx-ingress --set rbac.create=true
I keep getting the error:
Error: This command needs 1 argument: chart name
Can you please help me?
From the Helm v3 docs:
https://helm.sh/docs/intro/install/
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
Can you also run helm version after the above steps? In the printed semantic version you should see something like 3.x.x.
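For example, a quick sanity check (the exact patch version will differ on your machine):

```shell
which helm            # confirm the binary on your PATH is the one get_helm.sh installed
helm version --short  # Helm 3 prints a version string beginning with v3.
```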
The command is just fine. Are you sure it's Helm v3 and not Helm v2? It shouldn't give that error, because you are already providing the name for the release.
Also, can you try running the following command? It's just a test to see whether the chart gets installed or throws an error. It will generate a random release name instead of my-nginx as you specified.
helm install --debug stable/nginx-ingress --set rbac.create=true --generate-name

Nagios Alert returns "NRPE: Unable to read output" Command: check_service!httpd

I have installed Nagios on Redhat with the following configurations:
/usr/local/nagios/etc/static/commands.cfg
define command {
command_name check_service
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_service -a $ARG1$
}
When I try to run it manually with the following syntax, I get an error:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_service -a check_http
NRPE: Unable to read output
Running the check directly, without NRPE:
/usr/local/nagios/libexec/check_http -H 10.111.55.92
HTTP OK: HTTP/1.1 200 OK - 4298 bytes in 0.024 second response time |time=0.024462s;;;0.000000 size=4298B;;;0
I am consistently getting Nagios Email notifications:
HOST: Proxy (Dev) i-01aa24242424d7
IP: 10.111.55.92
Service: Apache Running
Service State: UNKNOWN
Attempts: 3/3
Duration: 0d 9h 28m 49s
Command: check_service!httpd
More Details:
NRPE: Unable to read output
Not sure how I can use NRPE with check_service to check http.
Just running check_nrpe with check_http as an argument (without -c) displays the version of the installed NRPE:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -a check_http
NRPE v3.2.1
/usr/local/nagios/etc/nrpe.cfg
command[check_users]=/usr/local/nagios/libexec/check_users -w 10 -c 15
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_root_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 10 -c 15 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 500 -c 750
command[check_ping]=/usr/local/nagios/libexec/check_ping $ARG1$
command[check_http]=/usr/local/nagios/libexec/check_http
# LINUX DEFAULT
command[check_service]=/bin/sudo -n /bin/systemctl status -l $ARG1$
# GLUSTER CHECKS
command[check_glusterdata]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /gluster
# GITLAB CHECKS
command[gitlab_ctl]=/bin/sudo -n /bin/gitlab-ctl status $ARG1$
command[gitlab_rake]=/bin/sudo -n /bin/gitlab-rake gitlab:check
command[check_gitlabdata]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /var/opt/gitlab
# OPENSHIFT CHECKS
command[check_openshift_pods]=/usr/local/nagios/libexec/check_pods
File: /usr/local/nagios/etc/nagios.cfg
cfg_dir=/usr/local/nagios/etc/static
You seem to be confusing two plugins. check_service will just check that a service is running locally. Try calling it like this:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_service -a httpd
I'd hesitate to use the check_service command you have in there though. Giving nrpe access to run systemctl with sudo seems dangerous to me.
check_http is an http client. It will actually connect to an http server and check a given URI. It can check status codes and do all sorts of things.
It looks like in your nrpe.cfg you didn't include any arguments to check_http. It will just print its help message if you call it like that, I don't think it will check the local machine.
Note that when you call check_http above manually, you supply -H. That -H is not passed through automatically, you need to provide arguments to your check_http command in nrpe.cfg.
Change the line:
command[check_http]=/usr/local/nagios/libexec/check_http
To something like:
command[check_http]=/usr/local/nagios/libexec/check_http -H 127.0.0.1
And it should work better assuming your http is listening on localhost.
You probably don't want to call check_http via nrpe like this though. Let your nagios server call check_http out to the remote machine.
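For instance, on the Nagios server side you could define a command like the following in commands.cfg and attach it to the remote host's service, instead of tunnelling check_http through NRPE (the name check_http_remote is just illustrative):

```
define command {
    command_name check_http_remote
    command_line $USER1$/check_http -H $HOSTADDRESS$
}
```

That way check_http runs from the Nagios server against the remote web server directly, and NRPE stays limited to local checks like check_service.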

How to import and export All algolia settings using scripts

I have an issue with Algolia settings: I cannot import or export settings from Algolia. There are no built-in settings or tools to do this.
I want to do it using my own script. How is this possible? Is there any alternative, or do I have to create a script for that?
Check out the Algolia CLI tool!
Installation: npm install -g @algolia/cli
Docs: https://github.com/algolia/algolia-cli
While you can still certainly write your own scripts to import/export settings or records, with the Algolia CLI tool you can also do it at the command line like so:
$ algolia getsettings -a <algoliaAppId> -k <algoliaApiKey> -n <algoliaIndexName>
and
$ algolia setsettings -a <algoliaAppId> -k <algoliaApiKey> -n <algoliaIndexName> -s <sourceFilepath> -p <setSettingsParams>
The best way to export/import index settings is to use Algolia's REST API clients and the {get,set}_settings methods.
Building a small script wrapping those 2 commands is pretty straight forward.
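A minimal sketch of such a wrapper, using curl against Algolia's REST settings endpoint (APP_ID, API_KEY, and INDEX are placeholders you must supply; a write requires an admin API key):

```shell
#!/usr/bin/env bash
# Sketch: export/import index settings via Algolia's REST API.
set -u

APP_ID="${ALGOLIA_APP_ID:-YourApplicationID}"
API_KEY="${ALGOLIA_API_KEY:-YourAdminAPIKey}"
INDEX="${ALGOLIA_INDEX:-your_index}"

export_settings() {
  # GET /1/indexes/{index}/settings returns the settings as JSON
  curl -s \
    -H "X-Algolia-Application-Id: ${APP_ID}" \
    -H "X-Algolia-API-Key: ${API_KEY}" \
    "https://${APP_ID}-dsn.algolia.net/1/indexes/${INDEX}/settings"
}

import_settings() {
  # PUT the previously exported JSON back to the settings endpoint
  curl -s -X PUT \
    -H "X-Algolia-Application-Id: ${APP_ID}" \
    -H "X-Algolia-API-Key: ${API_KEY}" \
    --data-binary @"${1:-settings.json}" \
    "https://${APP_ID}.algolia.net/1/indexes/${INDEX}/settings"
}
```

Usage would be export_settings > settings.json on the source app, then import_settings settings.json after pointing the variables at the target app/index.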
Sepehr's answer is really helpful in pointing out how to achieve this with the Algolia CLI. A time saver!
Here is the exact command you need to execute in your command line in order to:
Export index:
algolia export -a <algoliaAppId> -k <algoliaApiKey> -n <algoliaIndexName> -o <outputPath> -p <algoliaParams>
Example: algolia export -a EXAMPLE_APP_ID -k EXAMPLE_API_KEY -n EXAMPLE_INDEX_NAME -o ~/Desktop/example_output_folder/ -p '{"filters":["category:book"]}'
The -p params argument is optional and you can skip it.
Import index:
algolia import -s <sourceFilepath> -a <algoliaAppId> -k <algoliaApiKey> -n <algoliaIndexName> -b <batchSize> -t <transformationFilepath> -m <maxconcurrency> -p <csvToJsonParams>
Example: algolia import -s ~/Desktop/example_source_directory/ -a EXAMPLE_APP_ID -k EXAMPLE_API_KEY -n EXAMPLE_INDEX_NAME -b 5000 -t ~/Desktop/example_transformations.js -m 4 -p '{"delimiter":[":"]}'
More at https://github.com/algolia/algolia-cli#examples

5125cc8e register_matching_rule: could not locate associated matching rule generalizedTimeMatch

After compiling openldap-2.4.33 on CentOS 6.3 with the following options, I am unable to understand what this error is telling me:
Server was installed as a 'minimal' install, with the following addons:
yum install ntp autofs gcc make perl strace nmap tree rpm-build rpm-devel rpmdevtools rpm-libs rpm-python \
openssl openssl-devel perl-CPAN libtool libtool-ltdl-devel.x86_64 libtool-ltdl.x86_64 \
db4.x86_64 nss_db.x86_64 compat-db.x86_64 db4-devel.x86_64 \
tcp_wrappers.x86_64 tcp_wrappers-devel.x86_64 tcp_wrappers-libs.x86_64 \
unixODBC unixODBC-devel mysql-devel cyrus-sasl-devel.x86_64 perl-ExtUtils-Embed.x86_64 \
-y
After basic installation of the server as a VM on ESX, I ran the following ./configure to compile and install:
export CPPFLAGS="-I /usr/lib64/perl5/CORE"
export LDFLAGS="-L/usr/lib64 -L/usr/lib64/perl5/CORE"
export PERL_CPPFLAGS="`perl -MExtUtils::Embed -e ccopts -I/usr/lib64/perl5/CORE`"
ldconfig
./configure \
--prefix=/ \
--enable-shared --enable-debug --enable-dynamic --enable-syslog --enable-proctitle --enable-ipv6 \
--enable-local --enable-slapd --enable-cleartext --enable-crypt --enable-lmpasswd --enable-spasswd \
--enable-modules --enable-rewrite --enable-rlookups --enable-slapi --enable-slp --enable-wrappers \
--enable-backends --enable-bdb --enable-dnssrv --enable-hdb --enable-ldap --enable-mdb \
--enable-meta --enable-monitor --enable-null --enable-passwd --enable-perl --enable-relay \
--enable-shell --enable-sock --enable-sql --enable-overlays --enable-accesslog --enable-auditlog \
--enable-collect --enable-constraint --enable-dds --enable-deref --enable-dyngroup --enable-dynlist \
--enable-memberof --enable-ppolicy --enable-proxycache --enable-refint --enable-retcode --enable-rwm \
--enable-seqmod --enable-sssvlv --enable-syncprov --enable-translucent --enable-unique --enable-valsort \
--enable-perl --disable-ndb --with-cyrus-sasl --with-threads --with-tls --with-yielding-select \
--with-mp
I've taken the basic slapd.conf and only added my own dn.
When I run slaptest this is what I get:
slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/
5125cefd register_matching_rule: could not locate associated matching rule generalizedTimeMatch for ( 2.5.13.28 NAME 'generalizedTimeOrderingMatch' SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 )
slap_schema_init: Error registering matching rule ( 2.5.13.28 NAME 'generalizedTimeOrderingMatch' SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 )
5125cefd slaptest: slap_schema_init failed
The only schema with some kind of clue is ppolicy.schema, but I'm at a loss as to what to do.
Those matching rules are internal, both are defined in OpenLDAP's servers/slapd/schema_init.c with no conditionals.
generalizedTimeMatch is defined first, and then generalizedTimeOrderingMatch, which refers to generalizedTimeMatch in its "associated" matching rule. The error originates in servers/slapd/mr.c as the matching rules are added.
The matching rules are built in an array of struct slap_mrule_defs_rec, and iterated over in order. There's no obvious way for that to fail.
Your list of options and overlays is quite, um, complete.
There is a chance that there's some incompatibility or dependency problem with the overlays, but I don't see it (several overlays add to the schema and use those matching rules as a side-effect of some attributes: dds, ppolicy, accesslog; as does the monitor backend).
My best guess is that there is some compile problem, possibly arising from compiler options, either optimisation/alignment and/or some stale .o file, but I'm guessing here. You don't include your actual make and install steps, there's a similarly slim chance that you have some conflict arising from an incomplete installation, or previous installation (old binaries or schema files).
I'd suggest:
make clean
make depend && make && make test
and see what happens (make test will take quite some time). If that works, then you might consider installing to /usr/local to avoid conflicting files. If that doesn't work, then try a simple configure with minimal options:
./configure --with-threads --with-tls
and then add in just the modules and backend(s) you need.
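A second pass might look like this, re-adding a couple of flags taken from your original list (pick only the ones you actually need):

```shell
make clean
./configure --prefix=/usr/local --with-threads --with-tls \
  --enable-hdb --enable-ppolicy --enable-memberof
make depend && make && make test
```

If the minimal build passes make test and a fuller one doesn't, the last flags you added are the place to look.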
