Spring Boot Actuator Kubernetes Probes returning 404

I'm trying to use Kubernetes Probes from Spring Boot Actuator, but it isn't working.
I have set the following in application.properties:
management.endpoints.web.path-mapping.health=probes
management.endpoint.health.group.ping.include=ping
management.endpoint.health.group.liveness.include=livenessState
management.endpoint.health.group.readiness.include=readinessState
The groups are listed as expected:
$ curl http://localhost:8080/actuator/probes
{"status":"UP","groups":["liveness","ping","readiness"]}
And ping works as expected:
$ curl http://localhost:8080/actuator/probes/ping
{"status":"UP"}
However both liveness and readiness return Status Code: 404 and Content-Length: 0.
I'm using spring-boot-starter-parent version 2.3.1.RELEASE.
The probes I want are documented in the list of Auto-configured HealthIndicators.
The feature is also described at: https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot.
I've tried several spellings of livenessState, including livenessProbe (which is in the blog post), with no effect.
Here's a related answer, but it doesn't directly address my problem: Kubernetes - Liveness and Readiness probe implementation
What bit of configuration am I missing?
Update
There is some verbiage in the linked sites that indicates a potential clue...
If deployed in a Kubernetes environment, actuator will gather the "Liveness" and "Readiness" information...
Maybe this indicates that the probes only work if deployed in a Kubernetes environment -- although I don't know how that would be detected or why that would be the case.

Adding the following configuration worked for me (found by trial and error):
management.health.livenessstate.enabled=true
management.health.readinessstate.enabled=true
If you are running locally, you will also need to add
management.endpoint.health.probes.enabled=true
I am using spring-boot-starter-parent version 2.3.2.RELEASE.
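Putting this together with the group definitions from the question, a minimal application.properties sketch for a local run (only properties that already appear above):
management.endpoints.web.path-mapping.health=probes
management.endpoint.health.group.liveness.include=livenessState
management.endpoint.health.group.readiness.include=readinessState
management.health.livenessstate.enabled=true
management.health.readinessstate.enabled=true
management.endpoint.health.probes.enabled=true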

It is correct to use include=livenessState or include=readinessState. The example in the blog post linked above (which shows include=livenessProbe) is wrong.
I identified the correct name by trial-and-error -- checking the result while running in a Kubernetes environment.
management.endpoint.health.group.exploratory.include=livenessState,readinessState,ping
management.endpoint.health.group.exploratory.show-details=always
By some magic, and for no obvious reason, these indicators aren't available when you do a standard local run -- and you get 404 for health groups that have no available includes.
For reference:
Auto-configured HealthIndicators (list)
ApplicationAvailabilityBean which is the source of truth for liveness/readiness
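For completeness, a sketch of how a Kubernetes container spec might point its probes at these groups, assuming the /actuator/probes path mapping and port 8080 from the question:
livenessProbe:
  httpGet:
    path: /actuator/probes/liveness
    port: 8080
readinessProbe:
  httpGet:
    path: /actuator/probes/readiness
    port: 8080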

Related

How do I use JARs stored in Artifactory in spark submits?

I am trying to configure spark-submit to use JARs that are stored in Artifactory.
I've tried a few ways to do this
Attempt 1: Changing the --jars parameter to point to the https endpoint
Result 1: 401 Error. Credentials are being passed like so: https://username:password@jfrog-endpoint. The link was tested using wget, and it authenticates and downloads the JAR fine.
Attempt 2: Using a combination of --packages --repositories
Result 2: URL doesn't resolve to the right location of the jar
Attempt 3: Using a combination of --packages and a modified ivysettings.xml (containing the repo and artifact pattern)
Result 3: URL resolves correctly but still results in "Not Found"
After some research, it seems that even though the error says "Not Found" and it looks like it has "tried" the repo, it could still very well be a 401 error.
Any ideas would be helpful! Links I've explored:
Can i do spark-submit application jar directly from maven/jfrog artifactory
spark resolve external packages behind corporate artifactory
How to pass jar file (from Artifactory) in dcos spark run?
https://godatadriven.com/blog/spark-packages-from-a-password-protected-repository/
https://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management
You can use https://username:password@jfrog.com/repo; however, you need to specify the port:
https://username:password@jfrog.com:443/repo
I discovered this using Artifactory's "set me up" tool for the Ivy package type. If you look at the resolver URL, it specifies the port.
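For illustration, a spark-submit sketch along those lines; the package coordinates, repository path, and credentials are placeholders, not values from the question:
spark-submit \
  --packages com.example:my-lib:1.0.0 \
  --repositories https://username:password@jfrog.example.com:443/artifactory/libs-release \
  my_application.py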

Docker container failed to start when deploying to Google Cloud Run

I am new to GCP, and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I am following along with this great tutorial: https://arbenkqiku.github.io/create-docker-image-with-r-and-deploy-as-cron-job-on-google-cloud
So far, I have:
1.- Made the R script
library(bigrquery)
library(tidyverse)
bq_auth("/home/rstudio/xxxx-xxxx.json", email="xxxx@xxxx.com")
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
job = insert_upload_job(project=project, data=dataset, table=table, write_disposition="WRITE_APPEND",
values=Sys.time() %>% as_tibble(), billing=project)
2.- Made the Dockerfile
FROM rocker/tidyverse:latest
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial_forQuestion.R /home/rstudio
EXPOSE 8080
CMD ["Rscript", "/home/rstudio/big-query-tutorial.R", "--host", "0.0.0.0"]
3.- Successfully run the container locally on my machine, with the system time being written to BigQuery
4.- Pushed the container to my container registry in Google Cloud
When I try to deploy the container to Cloud Run (fully managed) using the Google Cloud Console I get this error: 
"Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information. Logs URL:https://console.cloud.google.com/logs/xxxxxx"
When I review the log, I find these noteworthy entries:
1.- A warning that says "Container called exit(0)"
2.- Two bugs that say "Container Sandbox: Unsupported syscall setsockopt(0x8,0x0,0xb,0x3e78729eedc4,0x4,0x8). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information."
When I check BigQuery, I find that the system time was written to the table, even though the container failed to deploy.
When I use the port specified in the tutorial (8787) Cloud Run throws an error about an "Invalid ENTRYPOINT".
What does this error mean? How can it be fixed? I'd greatly appreciate input as I'm totally stuck!
Thank you!
H.
John's comment identifies the right source of the errors: you need to expose a webserver that listens on $PORT and answers HTTP/1 or HTTP/2 requests.
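If you want to keep the job on Cloud Run, here is a minimal sketch of such a server in R, assuming the plumber package is installed in the image (the handler body is a placeholder for the BigQuery upload):
library(plumber)

# Placeholder handler: run the BigQuery upload here and return a small status payload.
handler <- function() {
  list(status = "ok", time = as.character(Sys.time()))
}

# Listen on the port Cloud Run provides via the PORT environment variable (default 8080).
pr <- pr()
pr <- pr_get(pr, "/", handler)
pr_run(pr, host = "0.0.0.0", port = as.integer(Sys.getenv("PORT", "8080")))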
However, I have a workaround: you can use Cloud Build for this. Simply define a build step with your container name and the args if needed, as sketched below.
Let me know if you need more guidance on this (strange) workaround.
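A minimal cloudbuild.yaml sketch of that idea; the image name is a placeholder for whatever you pushed to your registry:
steps:
- name: 'gcr.io/xxxx-project/r-bigquery-job'   # placeholder image name
  entrypoint: 'Rscript'
  args: ['/home/rstudio/big-query-tutorial.R']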
Log information "Container Sandbox: Unsupported syscall setsockopt" from Google Cloud Run is documented as an issue for gVisor.

How to use Sense (or any other suggested tool) to help beginners with ELK

I have tried a few approaches:
1)
- I downloaded Sense from https://github.com/bleskes/sense, unzipped it and pasted it into kibana-5.3.0-amd64/usr/share/kibana/plugins
- I started Kibana and then got "[warning] Plugin "Sense" was disabled because it expected Kibana version "2.0.0-snapshot", and found "5.3.0""
demetrio@nodejs ~/Servers/DBs/elasticsearch-5.3.0/bin $ cd /home/demetrio/Servers/DBs/kibana-5.3.0-amd64/usr/share/kibana/bin
demetrio@nodejs ~/Servers/DBs/kibana-5.3.0-amd64/usr/share/kibana/bin $ ./kibana
undefined accessed the autoload lists which are no longer available via the Plugin API.Use the `ui/autoload/*` modules instead.
undefined accessed the autoload lists which are no longer available via the Plugin API.Use the `ui/autoload/*` modules instead.
log [20:25:48.403] [warning] Plugin "Sense" was disabled because it expected Kibana version "2.0.0-snapshot", and found "5.3.0".
log [20:25:49.123] [info][status][plugin:kibana#5.3.0] Status changed from uninitialized to green - Ready
log [20:25:49.279] [info][status][plugin:elasticsearch#5.3.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [20:25:49.326] [info][status][plugin:console#5.3.0] Status changed from uninitialized to green - Ready
log [20:25:49.842] [info][status][plugin:timelion#5.3.0] Status changed from uninitialized to green - Ready
log [20:25:49.873] [info][listening] Server running at http://localhost:5601
log [20:25:49.875] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow
log [20:25:50.041] [info][status][plugin:elasticsearch#5.3.0] Status changed from yellow to green - Kibana index ready
log [20:25:50.043] [info][status][ui settings] Status changed from yellow to green - Ready
2)
I tried to install it, as suggested at https://github.com/bleskes/sense, by running:
demetrio@nodejs ~/Servers/DBs/kibana-5.3.0-amd64/usr/share/kibana/bin $ ./kibana plugin --install elastic/sense
ERROR unknown command plugin
3)
I tried to install it with:
demetrio@nodejs ~/Servers/DBs/kibana-5.3.0-amd64/usr/share/kibana/bin $ ./kibana-plugin install elastic/sense
Attempting to transfer from elastic/sense
Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/elastic/sense/elastic/sense-5.3.0.zip
Error: Client request error: connect ETIMEDOUT 107.21.249.70:443
Plugin installation was unsuccessful due to error "Client request error: connect ETIMEDOUT 107.21.249.70:443"
P.S. I checked my Debian proxy configuration and it is correct. Should I add my proxy configs in some specific Kibana file?
4)
I tried to install it from the Google Chrome Web Store, but it seems the plugin is no longer maintained after version 2, and ELK is currently at 5.3.
5)
I installed X-Pack (https://www.elastic.co/downloads/x-pack) but, as far as I have noticed so far, the only significant difference was user creation for each ELK component. I was looking for something like what is described at https://github.com/bleskes/sense:
Handy API suggestions
Format validation
Scope collapsing
Auto formatting
Submit multiple requests at once
Copy and Paste cURL commands
To sum up, I am looking for some tool that could make it a bit easier for a beginner like me to write queries and interact with ELK. I saw a few screenshots that led me to conclude that Sense is a good tool for that, but I couldn't make it work.
Sense is now included in Kibana. Start Kibana (default port 5601) and look at Dev Tools. X-Pack is actually an extension for monitoring, alerting and security purposes. See the Kibana Console documentation.
So for you it would be best to start exploring Elasticsearch using the Kibana Dev Tools Console, for example with requests like the ones sketched below.
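For example, in the Dev Tools Console you can type requests like the following (the index name is a placeholder) and run each one:
GET _cat/indices?v

GET my-index/_search
{
  "query": { "match_all": {} }
}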

Rmpi, OpenCPU, and Apparmor: DENIED request for "/"

I have an R package that sends out a job to the OpenMPI cluster I have running, by means of the Rmpi package. All works as expected within an R session run from the console. However, when I try to execute the relevant function from my OpenCPU server like this (details changed to protect the innocent):
curl -XPOST http://99.999.999.99/ocpu/library/MyPackage/R/my_cluster_function
I get this error:
R call failed: process died.
(Other, non-cluster calling functions within the package work as expected via OpenCPU). I noticed in /var/log/kern.log a variety of requests being DENIED by apparmor, and I have been able to resolve most of them by adding entries into /etc/apparmor.d/opencpu.d/custom to allow OpenMPI to access the files it needs. However, I cannot resolve these two issues (again, IP address changed) related to "open" requests for location "/":
Oct 26 03:49:58 99.999.999.99 kernel: [142952.551234] type=1400 audit(1414295398.849:957): apparmor="DENIED" operation="open" profile="opencpu-main" name="/" pid=22486 comm="orted" requested_mask="r" denied_mask="r" fsuid=33 ouid=0
Oct 26 03:49:58 99.999.999.99 kernel: [142952.556422] type=1400 audit(1414295398.857:958): apparmor="DENIED" operation="open" profile="opencpu-main" name="/" pid=22485 comm="apache2" requested_mask="r" denied_mask="r" fsuid=33 ouid=0
Adding this to my apparmor rules did not help:
/* r,
Two questions:
Why is opencpu trying to read from my root level directory (or does this mean something else)?
More urgently, how can I resolve this apparmor issue?
Thanks.
You might need to add both AppArmor rules:
/ r,
/* r,
The first rule allows directory listing of / and the second rule allows read access to any file under /.
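A sketch of applying them, assuming the custom profile path from the question and that a plain AppArmor restart picks the change up:
# Append the two rules to the OpenCPU custom profile (path taken from the question).
echo '/ r,'  | sudo tee -a /etc/apparmor.d/opencpu.d/custom
echo '/* r,' | sudo tee -a /etc/apparmor.d/opencpu.d/custom
# Reload AppArmor so the updated profile takes effect (assumes the stock init script).
sudo service apparmor restart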
I don't understand why Rmpi wants to read /, or why you were getting a "process died" error instead of "access denied". Are you sure the problem is completely resolved?

How to install R packages via proxy [user + password]

I need authentication to use the internet; say these are my settings:
Proxy: 1ncproxy1
Port: 80
Login: MyLoGiN
Password: MyPaSs
How can I install a package in R along with its add-on packages, such that the following would work:
install.packages("TSA", dependencies=TRUE)
Without getting internet connection failures?
I tried this:
Sys.setenv("ftp_proxy" = "1ncproxy1","ftp_proxy_user"="MyLoGiN","ftp_proxy_password"="MyPaSs")#Port = 80
But I get:
Warning: unable to access index for repository http://cran.ma.imperial.ac.uk/src/contrib
# or
cannot open: HTTP status was '407 Proxy Authentication Required'
Many thanks,
You are probably on Windows, so I would advise you to check the 'R on Windows FAQ' that came with your installation, particularly Question 2.19: "The Internet download functions fail." You may need to restart R with the --internet2 option (IIRC) for the proxy settings to come into effect.
I always found this very cumbersome. An alternative is to install a proxy-aware downloader such as wget (as a Windows binary), where you set the proxy options in a file in your home directory. This is all from memory; I think the last time I was faced with such a proxy was in 2005, so YMMV.
As @juba states, I think you want to set the http_proxy. From ?download.file:
Usernames and passwords can be set for HTTP proxy transfers via
environment variable http_proxy_user in the form user:passwd.
Alternatively, http_proxy can be of the form
"http://user:pass#proxy.dom.com:8080/"
So, try: Sys.setenv(http_proxy="http://MyLoGiN:MyPaSs@1ncproxy1:80")
Be aware though:
These environment variables must be set before the download code is
first used: they cannot be altered later by calling Sys.setenv.
So you are best off calling it in your .Rprofile
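For example, a ~/.Rprofile sketch using the placeholder credentials from the question:
# Set the proxy before any download code runs (placeholder credentials).
Sys.setenv(http_proxy = "http://MyLoGiN:MyPaSs@1ncproxy1:80")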
+1 for Juba, above. This worked for me:
$ export http_proxy=http://username:password@the-proxy.mycompany.com:80
$ R
> install.packages("quantmod")
I tried to install the swirl package and had the same problem: a proxy with authorization.
After some experimenting I found a solution.
Maybe my answer will help somebody.
On Windows 7:
Set one or more environment variables as needed: http_proxy (plus https_proxy and ftp_proxy if you need them). If you don't know how, see http://www.computerhope.com/issues/ch000549.htm.
It looks like this:
http_proxy="http://Proxyusername:ProxyUserPassw@proxyServName:ProxyPort"
(Use '@' rather than %40 between the password and the proxy host.)
In RStudio, go to Tools -> Global Options -> Packages and uncheck the box "Use Internet Explorer library/proxy for HTTP".
As Jeff Taylor wrote, R can indirectly make use of a proxy server. You need to specify the proxy server for both the http and https protocols, as follows:
$ export http_proxy=http://user:pass@proxy_server:port
$ export https_proxy=http://user:pass@proxy_server:port
$ R
> install.packages("<package_name>")
I just tested this solution and it works like a charm. The answer from Jeff was correct but, unfortunately, incomplete for most cases, as most servers are nowadays accessible over https.
