How to use the nginx module on Filebeat in k8s

I am trying to use Filebeat with the nginx module to collect logs from nginx-ingress-controller and send them directly to Elasticsearch, but I keep getting an error:
Provided Grok expressions do not match field value: [172.17.0.1 - - [03/Dec/2022:00:05:01 +0000] \"GET /healthz HTTP/1.1\" 200 0 \"-\" \"kube-probe/1.24\" \"-\"]
This appears in Kibana in the error.message field.
Note that I am running the latest Filebeat Helm chart (8.5), and the nginx controller is nginx-ingress-controller 9.2.15 (app version 1.2.1).
My Filebeat settings:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: false
      templates:
        - condition:
            contains:
              kubernetes.pod.name: redis
          config:
            - module: redis
              log:
                input:
                  type: container
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}.log
        - condition:
            contains:
              kubernetes.pod.name: nginx
          config:
            - module: nginx
              access:
                enabled: true
                input:
                  type: container
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  paths:
                    - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
output.elasticsearch:
  host: '${NODE_NAME}'
  hosts: '["https://${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}"]'
  username: '${ELASTICSEARCH_USERNAME}'
  password: '${ELASTICSEARCH_PASSWORD}'
  protocol: https
  ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]
setup.ilm:
  enabled: true
  overwrite: true
  policy_file: /usr/share/filebeat/ilm.json
setup.dashboards.enabled: true
setup.kibana.host: "http://kibana:5601"
ilm.json: |
  {
    "policy": {
      "phases": {
        "hot": {
          "actions": {
            "rollover": {
              "max_age": "1d"
            }
          }
        },
        "delete": {
          "min_age": "7d",
          "actions": {
            "delete": {}
          }
        }
      }
    }
  }
And the logs from the controller are:
172.17.0.1 - - [02/Dec/2022:23:43:49 +0000] "GET /healthz HTTP/1.1" 200 0 "-" "kube-probe/1.24" "-"
Can someone help me understand what I am doing wrong?
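One direction worth checking, since the controller's log format is not the stock nginx one: Filebeat's nginx module also ships an ingress_controller fileset meant for ingress-nginx logs, and the access fileset's Grok patterns may simply not match the controller's output. Below is a minimal sketch of the nginx template using that fileset; whether it applies to this controller build is an assumption, and the path is carried over from the question:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: false
      templates:
        - condition:
            contains:
              kubernetes.pod.name: nginx
          config:
            - module: nginx
              # assumption: the ingress-nginx log line is parsed by the
              # ingress_controller fileset rather than the plain access fileset
              ingress_controller:
                enabled: true
                input:
                  type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}.log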

Related

What is the correct way of mounting appsettings file for .Net Core Worker Service?

I have a .NET Worker Service that runs as a K8s CronJob, but when it starts up it fails to mount the appsettings file. The pod remains stuck in the CrashLoopBackOff error state and the logs show the following:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/axxxxxx-1xxxx-4xxx-8xxx-4xxxxxxxxxx/volume-subpaths/secrets/ftp-client/1"
to rootfs at "/app/appsettings.ftp.json" caused: mount through procfd: not a directory: unknown
In the deployment I have mounted the appsettings file as follows:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: ftp-client
spec:
  schedule: "*/6 * * * *" # every 6 minutes
  # startingDeadlineSeconds: 60
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          affinity:
            podAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app.kubernetes.io/name: taat
                  topologyKey: "kubernetes.io/hostname"
          initContainers:
            - name: ftp-backup
              image: registry.xxx.com/xxx/xxx:latest-ftp
              imagePullPolicy: "Always"
              env:
                - name: WEB_URL
                  valueFrom:
                    secretKeyRef:
                      key: url
                      name: web-url
              volumeMounts:
                - mountPath: /tmp
                  name: datadir
              command: ['sh', '-c', "./myscript.sh"]
          containers:
            - name: ftp-client
              image: registry.xxx.com/xxx/xxx:latest-ftp
              imagePullPolicy: "Always"
              resources:
                limits:
                  memory: 500Mi
                requests:
                  cpu: 100m
                  memory: 128Mi
              volumeMounts:
                - mountPath: /tmp
                  name: datadir
                - mountPath: /app/appsettings.ftp.json
                  subPath: appsettings.ftp.json
                  name: secrets
              env:
                - name: DOTNET_ENVIRONMENT
                  value: "Development"
                - name: DOTNET_HOSTBUILDER__RELOADCONFIGONCHANGE
                  value: "false"
          restartPolicy: OnFailure
          imagePullSecrets:
            - name: mycredentials
          volumes:
            - name: datadir
              persistentVolumeClaim:
                claimName: labs
            - name: secrets
              secret:
                secretName: ftp-secret
And Program.cs for the Worker Service
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using FtpClientCron;

IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddHostedService<Worker>();
    })
    .Build();

await host.RunAsync();
appsettings.ftp.json
{
  "ApplicationSettings": {
    "UserOptions": {
      "Username": "xxxxxx",
      "Password": "xxxxxxxxx",
      "Url": "xxx.xxx.com",
      "Port": "xxxx"
    }
  }
}
appsettings.json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}
Dockerfile
FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app
COPY ./publishh .
ENTRYPOINT ["dotnet", "SftpClientCron.dll"]
What am I missing?
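A hedged sketch of one common workaround for this class of subPath mount failure: mount the whole secret as a directory instead of projecting a single file over /app/appsettings.ftp.json, then load the file from the new location. The /app/config path is an assumption, not something from the question:

containers:
  - name: ftp-client
    volumeMounts:
      - mountPath: /tmp
        name: datadir
      # assumption: a directory mount avoids the file-level subPath
      # projection that fails with "not a directory"
      - mountPath: /app/config
        name: secrets
        readOnly: true
volumes:
  - name: secrets
    secret:
      secretName: ftp-secret

The worker would then need to load /app/config/appsettings.ftp.json explicitly (for example via ConfigureAppConfiguration on the host builder), since the file no longer sits next to the binary.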

Envoy: REST gateway + multiple GRPC clusters

I'm trying to configure Envoy as a REST API gateway with multiple gRPC servers, and I have a problem with routing. The only way I've found to match an endpoint to a gRPC cluster is to match on a request header (an HTTP request to /first must be resolved by the first cluster, /second by the second):
...
routes:
  - match:
      prefix: "/"
      headers:
        - name: x-service
          exact_match: "first"
    route:
      cluster: first
  - match:
      prefix: "/"
      headers:
        - name: x-service
          exact_match: "second"
    route:
      cluster: second
...
But in this case I need to set the custom 'x-service' header on the client (frontend). This looks like a bad idea, because the frontend shouldn't know anything about the backend infrastructure.
Is there any other way to match an HTTP route to a gRPC service? Or can I set such headers somewhere in Envoy?
The Envoy configuration pasted below registers an HTTP listener on port 51051 that proxies to the helloworld.Greeter service in cluster grpc1 on port 50051 and to the bookstore.Bookstore service in cluster grpc2 on port 50052, using the gRPC route as the match prefix.
This ensures a clean segregation of responsibilities and isolation, since the client does not need to inject custom HTTP headers to make multi-gRPC-cluster routing work.
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: listener1
      address:
        socket_address: { address: 0.0.0.0, port_value: 51051 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                access_log:
                  - name: envoy.access_loggers.file
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                      path: /dev/stdout
                stat_prefix: grpc_json
                codec_type: AUTO
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        # NOTE: by default, matching happens based on the gRPC route, and not on the incoming request path.
                        # Reference: https://www.envoyproxy.io/docs/envoy/latest/configuration/http_filters/grpc_json_transcoder_filter#route-configs-for-transcoded-requests
                        - match: { prefix: "/helloworld.Greeter" }
                          route: { cluster: grpc1, timeout: 60s }
                        - match: { prefix: "/bookstore.Bookstore" }
                          route: { cluster: grpc2, timeout: 60s }
  clusters:
    - name: grpc1
      connect_timeout: 1.25s
      type: LOGICAL_DNS
      lb_policy: ROUND_ROBIN
      dns_lookup_family: V4_ONLY
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
      load_assignment:
        cluster_name: grpc1
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 50051
    - name: grpc2
      connect_timeout: 1.25s
      type: LOGICAL_DNS
      lb_policy: ROUND_ROBIN
      dns_lookup_family: V4_ONLY
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
      load_assignment:
        cluster_name: grpc2
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 50052
https://github.com/envoyproxy/envoy/blob/main/test/proto/helloworld.proto
syntax = "proto3";
package helloworld;
import "google/api/annotations.proto";
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello(HelloRequest) returns (HelloReply) {
option (google.api.http) = {
get: "/say"
};
}
}
https://github.com/envoyproxy/envoy/blob/main/test/proto/bookstore.proto
syntax = "proto3";
package bookstore;
import "google/api/annotations.proto";
import "google/api/httpbody.proto";
import "google/protobuf/empty.proto";
import "google/protobuf/struct.proto";
// A simple Bookstore API.
//
// The API manages shelves and books resources. Shelves contain books.
service Bookstore {
// Returns a list of all shelves in the bookstore.
rpc ListShelves(google.protobuf.Empty) returns (ListShelvesResponse) {
option (google.api.http) = {
get: "/shelves"
};
}
...
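The NOTE in the route config above refers to the grpc_json_transcoder HTTP filter, which is what turns REST calls such as GET /say or GET /shelves into the gRPC routes that the prefix matcher sees. For completeness, a rough sketch of how that filter could be wired into the listener; the /data/proto.pb descriptor path is an assumption (a descriptor set compiled with protoc --include_imports --descriptor_set_out):

http_filters:
  - name: envoy.filters.http.grpc_json_transcoder
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
      proto_descriptor: /data/proto.pb  # assumed location of the compiled descriptor set
      services:
        - helloworld.Greeter
        - bookstore.Bookstore
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router

With this in place, a GET /say request would be transcoded to helloworld.Greeter/SayHello and land on cluster grpc1 via the /helloworld.Greeter prefix match.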

How to use paths with ingress on Pulumi?

Following this Pulumi walkthrough, I need to expose 2 services, kuard and rstudiogp, through an nginx ingress controller. The kuard app is just there to demonstrate that Kubernetes is up; the rstudio app is something I'd like to add to the cluster.
I would like to access the kuard service at apps.example.com and the rstudio service at apps.example.com/rstudio. However, I was only able to have both online at the same time by changing one of the hosts to, for example, apps.example.rstudio.com, so I have 2 hosts rather than one.
Isn't it possible to use ingress paths to expose two services with the same ingress rule? How can I use the same hostname to access both services on different paths, e.g. apps.example.com/kuard and apps.example.com/rstudio?
I am testing with: curl -Lv -H 'Host: apps.example.com' <PUBLIC-IP>
Current Kuard ingress:
// Create the kuard Ingress
const ingress = new k8s.extensions.v1beta1.Ingress(namekuard,
    {
        metadata: {
            labels: labels,
            namespace: namespaceName,
            annotations: {"kubernetes.io/ingress.class": "nginx"},
        },
        spec: {
            rules: [
                {
                    host: "apps.example.com",
                    http: {
                        paths: [
                            {
                                path: "/",
                                backend: {
                                    serviceName: serviceName,
                                    servicePort: "http",
                                }
                            },
                        ],
                    },
                }
            ]
        }
    },
    {provider: clusterProvider}
);
Current RStudio ingress (see the commented-out lines that I tried without success):
const ingress_rs = new k8s.extensions.v1beta1.Ingress(rsname,
    {
        metadata: {
            labels: labels_rs,
            namespace: namespaceName,
            annotations: {"kubernetes.io/ingress.class": "nginx"},
        },
        spec: {
            rules: [
                {
                    host: "apps.example.rstudio.com",
                    http: {
                        paths: [
                            {
                                path: "/",
                                backend: {
                                    serviceName: serviceName_rs,
                                    servicePort: "http",
                                }
                            },
                            // {
                            //     path: "/rstudio",
                            //     pathType: "Prefix",
                            //     backend: {
                            //         serviceName: serviceName_rs,
                            //         servicePort: "http",
                            //     }
                            // },
                        ],
                    },
                }
            ]
        }
    },
    {provider: clusterProvider}
);
It turned out that the problem was my application expecting to be served from the root URL.
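For reference, a hedged sketch of what a single-host, two-path ingress could look like in plain YAML; the service names and the port are assumptions, and the Pulumi translation is mechanical since the TypeScript args mirror the spec. The rewrite-target annotation is what lets an app that expects to be served from the root live under a sub-path:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # strip the matched prefix so each backend sees requests from "/"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: apps.example.com
      http:
        paths:
          - path: /kuard(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: kuard      # assumed service name
                port:
                  name: http
          - path: /rstudio(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: rstudio    # assumed service name
                port:
                  name: http

Apps that generate absolute URLs may still need their base path configured, which is consistent with the root-URL issue noted above.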

GRPC-Web connectivity issue in TLS

I have a gRPC-Web client and a gRPC server, and I am using the Envoy proxy for the conversion from HTTP/1.1 to HTTP/2.
My server creation logic uses TLS. The code is as follows:
var opts []grpc.ServerOption
creds, err := credentials.NewServerTLSFromFile("cert/server.crt", "cert/server.key")
if err != nil {
    log.Fatalf("Failed to generate credentials %v", err)
}
opts = []grpc.ServerOption{grpc.Creds(creds)}
server := grpc.NewServer(opts...)
I am calling it from my React client as follows:
const client = new LiveClient('http://localhost:8080')
const request = new GetLiveRequest()
request.setApi(1)
request.setTrackkey(trackKey)
// on success response
const stream = client.getLive(request, {})
stream.on('data', response => {
    console.log(response);
})
The envoy.yaml is as follows:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin:
                          - "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.grpc_web
                  - name: envoy.cors
                  - name: envoy.router
          tls_context:
            common_tls_context:
              alpn_protocols: "h2"
              tls_certificates:
                - certificate_chain:
                    filename: "/etc/server.crt"
                  private_key:
                    filename: "/etc/server.key"
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      hosts: [{ socket_address: { address: app, port_value: 3000 }}]
The Dockerfile for envoy is as follows:
FROM envoyproxy/envoy:36f39c746eb7d03b762099b206403935b11972d8
COPY ./envoy.yaml /etc/envoy/envoy.yaml
ADD ./cert/server.crt /etc/server.crt
ADD ./cert/server.key /etc/server.key
ADD ./cert/server.csr /etc/server.csr
WORKDIR /etc/envoy
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml
I am getting the following error:
{code: 2, message: "Http response at 400 or 500 level"}
But when I remove the SSL authentication from the backend server, it works fine. I have also created a gRPC client, and TLS works fine with it.
I am unable to find what is going wrong in my Envoy configuration for TLS.
On further investigation I am getting the following in the Envoy logs:
TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
The TLS certificates work fine with Envoy if I use a gRPC client directly.
Your backend is already talking HTTPS, so you don't need to configure tls_context in Envoy's config; to pass TLS through to the backend you would use tcp_proxy instead of http_connection_manager. You also need to configure a transport_socket so Envoy can proxy to your TLS backend.
static_resources:
  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
  listeners:
    - name: listener_0
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8080
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                access_log:
                  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
                  #
                  # You can also configure this extension with the qualified
                  # name envoy.access_loggers.http_grpc
                  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
                  - name: envoy.access_loggers.file
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                      # Console output
                      path: /dev/stdout
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: /
                            grpc: {}
                          route:
                            cluster: greeter_service
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        # custom-header-1 is just an example. the grpc-web
                        # repository was missing grpc-status-details-bin header
                        # which used in a richer error model.
                        # https://grpc.io/docs/guides/error/#richer-error-model
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,grpc-status-details-bin,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout,authorization
                        expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
                        max_age: "1728000"
                http_filters:
                  - name: envoy.filters.http.grpc_web
                    # This line is optional, but adds clarity to the configuration.
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                  - name: envoy.filters.http.cors
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
                      "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
                  - name: envoy.filters.http.router
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          transport_socket:
            name: envoy.transport_sockets.tls
            typed_config:
              # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/transport_sockets/tls/v3/tls.proto#extensions-transport-sockets-tls-v3-downstreamtlscontext
              "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
              common_tls_context:
                tls_certificates:
                  - certificate_chain:
                      # Certificate must be PEM-encoded
                      filename: /etc/fullchain.pem
                    private_key:
                      filename: /etc/privkey.pem
  clusters:
    # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
    - name: greeter_service
      type: LOGICAL_DNS
      connect_timeout: 0.25s
      lb_policy: round_robin
      load_assignment:
        cluster_name: greeter_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: app
                      port_value: 3000
      http2_protocol_options: {} # Force HTTP/2
      # Your grpc server communicates over TLS. You must configure the transport
      # socket. If you care about the overhead, you should configure the grpc
      # server to listen without TLS. If you need to listen to grpc-web and grpc
      # over HTTP/2 both you can also proxy your TCP traffic with the envoy.
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
Have you tried calling HTTPS from the client?
const client = new LiveClient('https://localhost:8080')
Without this I'm getting Http response at 400 or 500 level as well.
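For completeness, a rough sketch of the tcp_proxy pass-through variant the answer mentions, under the assumption that TLS should terminate at the gRPC backend rather than in Envoy (the 8443 listener port is an assumption). Note this forwards opaque TCP, so it only serves native gRPC clients; grpc-web translation still needs the HTTP listener above:

static_resources:
  listeners:
    - name: listener_tls_passthrough
      address:
        socket_address: { address: 0.0.0.0, port_value: 8443 }
      filter_chains:
        - filters:
            # raw TCP forwarding: the TLS session terminates at the backend
            - name: envoy.filters.network.tcp_proxy
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                stat_prefix: tls_passthrough
                cluster: greeter_service
  clusters:
    - name: greeter_service
      type: LOGICAL_DNS
      connect_timeout: 0.25s
      load_assignment:
        cluster_name: greeter_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: app
                      port_value: 3000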

Ghost config.js file

I'm actually trying to create a simple blog using Ghost, and I'm facing a problem when starting in the production environment.
I'm on v0.7.1 and here's my config file (production part):
production: {
    url: 'http://<my-public-ip>',
    mail: {},
    database: {
        client: 'sqlite3',
        connection: {
            filename: path.join(__dirname, '/content/data/ghost.db')
        },
        debug: false
    },
    server: {
        host: '127.0.0.1',
        port: '2368'
    }
}
The fact is that when I try to access my public IP in a browser, I can't get anything at all on the screen (404 Not Found), even if I try on port 2368.
My firewall rules are well set.
What am I doing wrong?
In the server object the host should be 0.0.0.0:
server: {
    host: '0.0.0.0',
    port: '2368'
}
In the server object, change the host:
host: '127.0.0.1' --> host: '0.0.0.0'
Now start the Ghost server with:
npm start --production
