Traefik: access backend service through IP - gRPC

Because I don't have a domain name, I want to use an IP address to directly access the /ping endpoint of the backend service running in Docker. How do I configure this?
Here is the code:
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"message": "pong",
		})
	})
	r.Run(":3002")
}
Here is the docker-compose configuration:
version: '3.3'
services:
  main:
    container_name: main
    image: hello:latest
    working_dir: /hello
    ports:
      - "3002:3002"
    restart: always
    labels:
      - "traefik.frontend.main.rule=PathPrefixStrip: /hello"
    network_mode: traefik
The correct format:
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.hello.service=hello"
  - "traefik.http.routers.hello.rule=Host(`192.168.000.001`)"
  - "traefik.http.services.hello.loadbalancer.server.port=3002"

Related

What is the correct way of mounting an appsettings file for a .NET Core Worker Service?

I have a .NET Worker Service that runs as a Kubernetes CronJob, but when it starts up it fails to mount the appsettings file. The pod remains stuck in the CrashLoopBackOff state and the logs show the following:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/axxxxxx-1xxxx-4xxx-8xxx-4xxxxxxxxxx/volume-subpaths/secrets/ftp-client/1"
to rootfs at "/app/appsettings.ftp.json" caused: mount through procfd: not a directory: unknown
In the deployment I have mounted the appsettings file as follows:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: ftp-client
spec:
  schedule: "*/6 * * * *" # runs every 6 minutes
  # startingDeadlineSeconds: 60
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          affinity:
            podAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app.kubernetes.io/name: taat
                  topologyKey: "kubernetes.io/hostname"
          initContainers:
            - name: ftp-backup
              image: registry.xxx.com/xxx/xxx:latest-ftp
              imagePullPolicy: "Always"
              env:
                - name: WEB_URL
                  valueFrom:
                    secretKeyRef:
                      key: url
                      name: web-url
              volumeMounts:
                - mountPath: /tmp
                  name: datadir
              command: ['sh', '-c', "./myscript.sh"]
          containers:
            - name: ftp-client
              image: registry.xxx.com/xxx/xxx:latest-ftp
              imagePullPolicy: "Always"
              resources:
                limits:
                  memory: 500Mi
                requests:
                  cpu: 100m
                  memory: 128Mi
              volumeMounts:
                - mountPath: /tmp
                  name: datadir
                - mountPath: /app/appsettings.ftp.json
                  subPath: appsettings.ftp.json
                  name: secrets
              env:
                - name: DOTNET_ENVIRONMENT
                  value: "Development"
                - name: DOTNET_HOSTBUILDER__RELOADCONFIGONCHANGE
                  value: "false"
          restartPolicy: OnFailure
          imagePullSecrets:
            - name: mycredentials
          volumes:
            - name: datadir
              persistentVolumeClaim:
                claimName: labs
            - name: secrets
              secret:
                secretName: ftp-secret
And Program.cs for the Worker Service
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using FtpClientCron;

IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddHostedService<Worker>();
    })
    .Build();

await host.RunAsync();
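One thing worth noting, independent of the mount error: Host.CreateDefaultBuilder only loads appsettings.json and appsettings.{DOTNET_ENVIRONMENT}.json by default, so appsettings.ftp.json would not be picked up even once the mount succeeds. A hedged sketch of registering it explicitly (the optional flag is an assumption):

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using FtpClientCron;

IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration(config =>
    {
        // assumption: the file sits in the content root (/app in the container)
        config.AddJsonFile("appsettings.ftp.json", optional: false, reloadOnChange: false);
    })
    .ConfigureServices(services =>
    {
        services.AddHostedService<Worker>();
    })
    .Build();

await host.RunAsync();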
appsettings.ftp.json
{
  "ApplicationSettings": {
    "UserOptions": {
      "Username": "xxxxxx",
      "Password": "xxxxxxxxx",
      "Url": "xxx.xxx.com",
      "Port": "xxxx"
    }
  }
}
appsettings.json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}
Dockerfile
FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app
COPY ./publishh .
ENTRYPOINT ["dotnet", "SftpClientCron.dll"]
What am I missing?
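One hedged avenue to try, assuming the goal is just to get appsettings.ftp.json into the container: mount the secret as a directory instead of a single file, which sidesteps the file-over-file subPath mount that the error points at. The /app/config path and the items list below are assumptions:

volumeMounts:
  - mountPath: /app/config  # assumption: mount a directory, not a single file
    name: secrets
    readOnly: true
volumes:
  - name: secrets
    secret:
      secretName: ftp-secret
      items:
        - key: appsettings.ftp.json  # assumption: the secret actually contains this key
          path: appsettings.ftp.json

The app would then need to load /app/config/appsettings.ftp.json explicitly, for example via the ConfigureAppConfiguration sketch above.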

How to use the nginx module with Filebeat in k8s

I am trying to use Filebeat with the nginx module to collect logs from nginx-ingress-controller and send them directly to Elasticsearch, but I keep getting an error:
Provided Grok expressions do not match field value: [172.17.0.1 - - [03/Dec/2022:00:05:01 +0000] \"GET /healthz HTTP/1.1\" 200 0 \"-\" \"kube-probe/1.24\" \"-\"]
This appears in Kibana in the error message field.
Note that I am running the latest Filebeat Helm chart (8.5), and the nginx controller is nginx-ingress-controller-9.2.15 (app version 1.2.1).
My Filebeat settings:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: false
      templates:
        - condition:
            contains:
              kubernetes.pod.name: redis
          config:
            - module: redis
              log:
                input:
                  type: container
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}.log
        - condition:
            contains:
              kubernetes.pod.name: nginx
          config:
            - module: nginx
              access:
                enabled: true
                input:
                  type: container
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  paths:
                    - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log

output.elasticsearch:
  host: '${NODE_NAME}'
  hosts: '["https://${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}"]'
  username: '${ELASTICSEARCH_USERNAME}'
  password: '${ELASTICSEARCH_PASSWORD}'
  protocol: https
  ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]

setup.ilm:
  enabled: true
  overwrite: true
  policy_file: /usr/share/filebeat/ilm.json

setup.dashboards.enabled: true
setup.kibana.host: "http://kibana:5601"

ilm.json: |
  {
    "policy": {
      "phases": {
        "hot": {
          "actions": {
            "rollover": {
              "max_age": "1d"
            }
          }
        },
        "delete": {
          "min_age": "7d",
          "actions": {
            "delete": {}
          }
        }
      }
    }
  }
And the logs from the controller are:
172.17.0.1 - - [02/Dec/2022:23:43:49 +0000] "GET /healthz HTTP/1.1" 200 0 "-" "kube-probe/1.24" "-"
Can someone help me understand what I am doing wrong?
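One hedged thing to check, based only on the config above: the nginx template reads from /var/lib/docker/containers/..., while the redis template uses /var/log/containers/..., which is the path a typical Filebeat DaemonSet actually mounts from the host. A sketch of the nginx block aligned with the redis one (the path is an assumption about your DaemonSet's volume mounts):

- condition:
    contains:
      kubernetes.pod.name: nginx
  config:
    - module: nginx
      access:
        enabled: true
        input:
          type: container
          containers.ids:
            - "${data.kubernetes.container.id}"
          paths:
            - /var/log/containers/*${data.kubernetes.container.id}.log

If the Grok error persists after that, the ingress-controller log line may simply not match the module's default access-log pattern, in which case the ingress-nginx log format or the module's ingest pipeline would need adjusting.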

NGINX not accepting json-ld mime type (Bad inMimeType)

I'm new to nginx and I'm trying to send a payload through nginx to an API called NGSI-LD, and nginx is sending me an error message: Bad inMimeType. I have no idea what the cause of this problem is. To send the payload to the API I'm using this code:
import requests
import json

url = "http://localhost/api/ngsi-ld/v1/entities/"

payload = json.dumps({
    "id": "urn:ngsi-ld:Project:MC_MUEBLETV_A",
    "type": "Project",
    "name": {
        "type": "Property",
        "value": "O nome"
    },
    "category": {
        "type": "Property",
        "value": "Movel de sala"
    },
    "orderBy": {
        "type": "Relationship",
        "object": "urn:ngsi-ld:Owner:Joao_Luis"
    },
    "@context": [
        "https://raw.githubusercontent.com/More-Collaborative-Laboratory/ww4zero/main/ww4zero.context.normalized.jsonld",
        "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
    ]
})

headers = {
    'Content-Type': 'application/json',
    'Fiware-Service': 'woodwork40'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
And the docker-compose file that creates all the services looks like this:
version: "3.9"
services:
nginx:
build: settings/nginx
ports:
- 80:80
- 434:434
tty: true
volumes:
- ./data-models/context:/srv/ww4/context
- ./:/srv/ww4
networks:
- default
mongo-db:
image: mongo:4.4
hostname: mongo-db
container_name: db-mongo
restart: always
expose:
- "27017"
ports:
- "27017:27017"
networks:
- default
volumes:
- mongo_db:/data/db
- mongo_config:/data/configdb
orion:
image: fiware/orion-ld:1.0.1
hostname: orion
container_name: fiware-orion
restart: always
depends_on:
- mongo-db
- timescale-db
networks:
- default
expose:
- 1026
environment:
- ORIONLD_TROE=TRUE
- ORIONLD_TROE_USER=orion
- ORIONLD_TROE_PWD=orion
- ORIONLD_TROE_HOST=timescale
- ORIONLD_MONGO_HOST=mongo-db
- ORION_LD_PORT=1026
- ORION_LD_VERSION=1.0.0
- ORIONLD_MULTI_SERVICE=TRUE
- ORIONLD_DISABLE_FILE_LOG=TRUE
- Access-Control-Allow-Origin=62.28.95.49
command: -dbhost mongo-db -logLevel ERROR -troePoolSize 10 -forwarding
mintaka:
image: fiware/mintaka:0.5.9
hostname: mintaka
restart: always
container_name: mintaka
environment:
- DATASOURCES_DEFAULT_HOST=timescale
- DATASOURCES_DEFAULT_USERNAME=orion
- DATASOURCES_DEFAULT_PASSWORD=orion
- DATASOURCES_DEFAULT_DATABSE=orion
expose:
- "8080"
networks:
- default
timescale-db:
image: timescale/timescaledb-postgis:1.7.5-pg12
hostname: timescale
container_name: timescale
healthcheck:
test: [ "CMD-SHELL", "pg_isready -U orion" ]
interval: 15s
timeout: 15s
retries: 5
start_period: 60s
environment:
- POSTGRES_USER=orion
- POSTGRES_PASSWORD=orion
- POSTGRES_HOST_AUTH_METHOD=trust
expose:
- "5432"
ports:
- "5432:5432"
networks:
- default
volumes:
- timescale_db:/var/lib/postgresql/data
volumes:
mongo_db:
driver: local
mongo_config:
timescale_db:
The Dockerfile looks like:
FROM nginx:latest
RUN mkdir -pv /srv/ww4/context/ && mkdir -pv /srv/ww4/projects
COPY nginx.conf /etc/nginx/nginx.conf
COPY mime.types /etc/nginx/mime.types
And the mime.types file contains the standard nginx MIME types plus this entry:
application/ld+json jsonld;
....
My proxy server conf is basically:
worker_processes auto;

events {
    worker_connections 512;
}

http {
    include /etc/nginx/mime.types;
    server_tokens off;

    upstream orion {
        server orion:1026;
    }

    server {
        listen 80;
        root /srv/ww4;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }

        location /api/ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://orion;
        }
    }
}
What could be the cause of this error? It seems that nginx is either not passing the Content-Type header correctly, or it doesn't recognize the json-ld type. How could I get around this problem?
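One hedged observation, not a confirmed fix: NGSI-LD brokers such as Orion-LD generally expect Content-Type: application/ld+json whenever the @context is embedded in the request body, so the error may come from the broker rather than from nginx. A minimal tweak to the question's own snippet to test that assumption:

headers = {
    'Content-Type': 'application/ld+json',  # assumption: required because @context is inline in the payload
    'Fiware-Service': 'woodwork40'
}

If that changes the error, the proxy was passing the header through fine all along and the MIME type in the request was the problem.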

Envoy: REST gateway + multiple gRPC clusters

I'm trying to configure Envoy as a REST API gateway in front of multiple gRPC servers, and I have a problem with routing. The only way I've found to match an endpoint to a gRPC cluster is to match on a request header (an HTTP request to /first must be resolved by the first cluster, /second by the second):
...
routes:
  - match:
      prefix: "/"
      headers:
        - name: x-service
          exact_match: "first"
    route:
      cluster: first
  - match:
      prefix: "/"
      headers:
        - name: x-service
          exact_match: "second"
    route:
      cluster: second
...
But in this case I need to set the custom 'x-service' header on the client (frontend). This looks like a bad idea, because the frontend shouldn't know anything about the backend infrastructure.
Is there any other way to match an HTTP route to a gRPC service? Or can I set such headers somewhere in Envoy?
The Envoy configuration below registers an HTTP listener on port 51051 that proxies to the helloworld.Greeter service in cluster grpc1 on port 50051 and the bookstore.Bookstore service in cluster grpc2 on port 50052, using the gRPC route as the match prefix.
This ensures clean segregation of responsibilities and isolation, since the client does not need to inject custom HTTP headers to make multi-cluster gRPC routing work.
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
    - name: listener1
      address:
        socket_address: { address: 0.0.0.0, port_value: 51051 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                access_log:
                  - name: envoy.access_loggers.file
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                      path: /dev/stdout
                stat_prefix: grpc_json
                codec_type: AUTO
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        # NOTE: by default, matching happens based on the gRPC route, and not on the incoming request path.
                        # Reference: https://www.envoyproxy.io/docs/envoy/latest/configuration/http_filters/grpc_json_transcoder_filter#route-configs-for-transcoded-requests
                        - match: { prefix: "/helloworld.Greeter" }
                          route: { cluster: grpc1, timeout: 60s }
                        - match: { prefix: "/bookstore.Bookstore" }
                          route: { cluster: grpc2, timeout: 60s }

  clusters:
    - name: grpc1
      connect_timeout: 1.25s
      type: LOGICAL_DNS
      lb_policy: ROUND_ROBIN
      dns_lookup_family: V4_ONLY
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
      load_assignment:
        cluster_name: grpc1
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 50051
    - name: grpc2
      connect_timeout: 1.25s
      type: LOGICAL_DNS
      lb_policy: ROUND_ROBIN
      dns_lookup_family: V4_ONLY
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
      load_assignment:
        cluster_name: grpc2
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 50052
https://github.com/envoyproxy/envoy/blob/main/test/proto/helloworld.proto
syntax = "proto3";
package helloworld;
import "google/api/annotations.proto";
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello(HelloRequest) returns (HelloReply) {
option (google.api.http) = {
get: "/say"
};
}
}
https://github.com/envoyproxy/envoy/blob/main/test/proto/bookstore.proto
syntax = "proto3";
package bookstore;
import "google/api/annotations.proto";
import "google/api/httpbody.proto";
import "google/protobuf/empty.proto";
import "google/protobuf/struct.proto";
// A simple Bookstore API.
//
// The API manages shelves and books resources. Shelves contain books.
service Bookstore {
// Returns a list of all shelves in the bookstore.
rpc ListShelves(google.protobuf.Empty) returns (ListShelvesResponse) {
option (google.api.http) = {
get: "/shelves"
};
}
...
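Since the answer's reference points at the grpc_json_transcoder filter, here is a hedged sketch of the http_filters block that would go inside the HttpConnectionManager above to make this a true REST (JSON) gateway. The /data/proto.pb path is an assumption: a descriptor set built with protoc --descriptor_set_out over the two .proto files:

http_filters:
  - name: envoy.filters.http.grpc_json_transcoder
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
      proto_descriptor: "/data/proto.pb"  # assumption: descriptor set compiled from both protos
      services: ["helloworld.Greeter", "bookstore.Bookstore"]
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router

With transcoding enabled, a plain GET /say or GET /shelves from the frontend is mapped onto the matching gRPC route above, so no custom header is needed.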

How to configure the Envoy proxy for my Go app?

I am trying to call from a Vue app via gRPC to a Go app, as follows:
Between them sits an Envoy proxy, which is necessary as described here: https://github.com/grpc/grpc-web#2-run-the-server-and-proxy. The Envoy proxy is running in a Docker container; the Vue and Go apps run on localhost. The problem is that the Vue app cannot reach the Go app, because, I think, Envoy is running on a different network than the Go app.
Envoy is configured as follows:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 9000 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: identity_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.grpc_web
                  - name: envoy.cors
                  - name: envoy.router

  clusters:
    - name: identity_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      hosts: [{ socket_address: { address: 0.0.0.0, port_value: 9090 }}]
What is wrong with the envoy configuration?
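A hedged reading of the setup: inside the container, the cluster address 0.0.0.0:9090 points at the Envoy container itself, not at the Go app listening on the host. A sketch of a fix, assuming Docker Desktop (on Linux, host.docker.internal needs an extra_hosts entry like "host.docker.internal:host-gateway", or the container can be run with host networking instead):

clusters:
  - name: identity_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    # host.docker.internal resolves to the host machine from inside the container
    hosts: [{ socket_address: { address: host.docker.internal, port_value: 9090 }}]

With that change, Envoy on port 9000 forwards gRPC-Web calls from the Vue app to the Go server on the host's port 9090.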
