disabling discovery for k8s api client - spring-cloud-kubernetes

Right now I am using the approach from the first answer to:
Cannot read configmap with name: [xx] in namespace ['default'] Ignoring
But in the application logs:
2022-04-19 14:14:57.660 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] INFO i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Start listing and watching...
2022-04-19 14:14:57.662 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] ERROR i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Reflector loop failed unexpectedly
io.kubernetes.client.openapi.ApiException:
at io.kubernetes.client.openapi.ApiClient.handleResponse(ApiClient.java:974)
at io.kubernetes.client.openapi.ApiClient.execute(ApiClient.java:886)
at io.kubernetes.client.informer.SharedInformerFactory$1.list(SharedInformerFactory.java:207)
at io.kubernetes.client.informer.cache.ReflectorRunnable.run(ReflectorRunnable.java:88)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
It works properly and reads the ConfigMaps mounted in the deployment.
How can I disable all the other features?
Using implementation("org.springframework.cloud:spring-cloud-starter-kubernetes-client-config:2.1.1")
In bootstrap.yml:
spring:
  application:
    name: toast
  cloud:
    vault:
      enabled: false
    kubernetes:
      reload:
        enabled: true
        mode: event
        strategy: refresh
      config:
        sources:
          - name: ${spring.application.name}-common
          - name: ${spring.application.name}
        enabled: true
        paths:
          #- { { .Values.application } }-common-config/data.yml
          #- { { .Values.application } }-config/application.yml
          - /etc/${spring.application.name}-common/config/application.yml
          - /etc/${spring.application.name}/config/data.yml
      enabled: true
I need to be able to use it without RBAC resources in Kubernetes.
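For reference, a minimal sketch of a bootstrap.yml that keeps only support for files mounted into the pod and switches off everything API-based (property names taken from the Spring Cloud Kubernetes documentation; verify them against your 2.1.x version):

```yaml
spring:
  cloud:
    kubernetes:
      discovery:
        enabled: false   # no DiscoveryClient, no calls to the services/endpoints API
      secrets:
        enabled: false   # don't read Secrets through the API
      config:
        enabled: true    # keep property-source support for the mounted paths
      reload:
        enabled: false   # the informer/watcher in the log above comes from reload support
    discovery:
      enabled: false     # also disables the composite discovery client
```

Note that `reload` with `mode: event` is exactly what starts the ConfigMap informer shown in the stack trace, and watching ConfigMaps requires RBAC; if you only read files mounted into the pod, disabling reload should remove the need for a Role/RoleBinding.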


CSRF Token with the “SameSite” attribute value “Lax” or “Strict” was omitted because of a cross-site redirect

I am using Kratos in my local environment just fine; it works great and does what I want it to do. The main issue I am facing started when I moved my Ory Kratos instance to a droplet and my frontend to Vercel. On the droplet I have set up Kratos with nginx as shown here. After setting it up, I tested my login flow and got these errors when I requested /self-service/registration/flows?id=${flowId}.
Thinking I had set up my config incorrectly, I rewrote it as seen here.
version: v0.10.1

dsn: postgres://kratos:MYPASSWORD/kratos?sslmode=disable&max_conns=20&max_idle_conns=4

serve:
  public:
    base_url: http://127.0.0.1:4433/
    host: 127.0.0.1
    cors:
      allow_credentials: true
      allowed_origins:
        - https://frontend.example.com
      allowed_methods:
        - POST
        - GET
        - PUT
        - PATCH
        - DELETE
      allowed_headers:
        - Authorization
        - Cookie
        - Content-Type
      exposed_headers:
        - Content-Type
        - Set-Cookie
      enabled: true
  admin:
    base_url: http://kratos:4434/
    host: 127.0.0.1

selfservice:
  default_browser_return_url: https://frontend.example.com
  allowed_return_urls:
    - https://frontend.example.com
  methods:
    password:
      enabled: true
    oidc:
      enabled: false
  flows:
    error:
      ui_url: https://frontend.example.com/error
    settings:
      ui_url: https://frontend.example.com/settings
      privileged_session_max_age: 15m
    recovery:
      enabled: true
      ui_url: https://frontend.example.com/recovery
    verification:
      enabled: true
      ui_url: https://frontend.example.com/verification
      after:
        default_browser_return_url: https://frontend.example.com/
    logout:
      after:
        default_browser_return_url: https://frontend.example.com/
    login:
      ui_url: https://frontend.example.com/login
      lifespan: 10m
    registration:
      lifespan: 10m
      ui_url: https://frontend.example.com/registration
      after:
        password:
          hooks:
            - hook: session
        oidc:
          hooks:
            - hook: session

log:
  leak_sensitive_values: false

# set in SECRETS_COOKIE and SECRETS_DEFAULT env variables
secrets:
  default:
    - 795135465767325463454 #fake
  cookie:
    - 223108c7839f6324242342 #fake

cookies:
  domain: frontend.example.com
  path: /
  same_site: Lax

session:
  lifespan: 72h
  cookie:
    domain: frontend.example.com
    path: /
    same_site: Lax

hashers:
  argon2:
    parallelism: 1
    memory: 128MB
    iterations: 2
    salt_length: 16
    key_length: 16

identity:
  default_schema_id: default
  schemas:
    - id: default
      url: file:///root/kratos/config/identity.schema.json

courier:
  smtp:
    connection_uri: smtps://test:test@mailslurper:1025/?skip_ssl_verify=true
The issue still persisted, so I checked my code:
const req: any = await fetch(`https://frontend.example.com/self-service/${flow}/flows?id=${flowId}`, {
  method: 'GET',
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
  },
  credentials: 'include',
}).catch((err) => {
  throw err;
}); // error
Everything seems fine: I send credentials and my Content-Type is correct. Then, doing some research, I found https://github.com/ory/kratos/issues/662. Would this affect my situation (my frontend domain https://frontend.example.com and my Kratos droplet https://kratos.example.com are examples), and if so, how could I go about fixing it (could I use the Ory Proxy to get cookies to my application safely)? My idea was that I could simply set up Ory Hydra as the OAuth provider for my platform, but I am not sure.
Thanks in advance!
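One thing that stands out, assuming the real setup mirrors the example domains (frontend.example.com for the UI, kratos.example.com for Kratos): the cookies are scoped to frontend.example.com, so the browser will not send them to the Kratos host, and the public base_url still points at 127.0.0.1, which makes every self-service redirect cross-site. A sketch of the commonly suggested changes, using the example domains:

```yaml
serve:
  public:
    base_url: https://kratos.example.com/   # the publicly reachable URL, not 127.0.0.1

cookies:
  domain: example.com   # parent domain, shared by frontend.* and kratos.*
  path: /
  same_site: Lax

session:
  cookie:
    domain: example.com
```

With the cookies on the parent domain, both subdomains count as same-site, which is what the CSRF cookie's SameSite=Lax check needs.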

VirtualService routing only uses one host

I have the following VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: external-vs
  namespace: dev
spec:
  hosts:
    - "*"
  gateways:
    - http-gateway
  http:
    - name: "postauth"
      match:
        - uri:
            exact: /postauth
      route:
        - destination:
            port:
              number: 8080
            host: postauth
    - name: "frontend"
      match:
        - uri:
            exact: /app
      route:
        - destination:
            port:
              number: 8081
            host: sa-frontend
I would expect calls to the /postauth endpoint to be routed to the postauth service, and calls to the /app endpoint to be routed to the sa-frontend service. What happens instead is that all calls end up being routed to the first route in the file (in the above case, postauth); if I change the order, they all go to sa-frontend.
All services and deployments are in the same namespace (dev).
Is that somehow the expected behaviour? My interpretation is that the above should only allow calls to the /postauth and /app endpoints and nothing else, and route these to their respective services.
As per the documentation for Istio 1.3, in HTTPMatchRequest you can find:
Field: name, Type: string
I have compared those settings between versions 1.1 and 1.3:
In version 1.3.4 this parameter works properly and the routes are propagated with their names:
[
  {
    "name": "http.80",
    "virtualHosts": [
      {
        "name": "*:80",
        "domains": [
          "*",
          "*:80"
        ],
        "routes": [
          {
            "name": "ala1",
            "match": {
              "prefix": "/hello1",
              "caseSensitive": true
            },
            "route": {
              "cluster": "outbound|9020||hello1.default.svc.cluster.local",
...
          {
            "name": "ala2",
            "match": {
              "prefix": "/hello2",
              "caseSensitive": true
            },
            "route": {
              "cluster": "outbound|9030||hello2.default.svc.cluster.local",
While in version 1.1 it does not work properly. In such cases, please verify your settings against the appropriate release.
In addition, please refer to the Troubleshooting section.
You can verify your applied configuration (changes) inside the cluster, e.g.:
How the Envoy instance was configured:
istioctl proxy-config cluster -n istio-system your_istio-ingressgateway-name
Verify the route configuration and virtual hosts for services:
istioctl proxy-config routes -n istio-system your_istio-ingressgateway-name -o json
Hope this helps.
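As a side note (an assumption about the intent, not something stated in the question): with `exact: /app`, requests such as `/app/` or `/app/index.html` match nothing at all. If sub-paths should reach the same services, a prefix-based variant of the http block might look like:

```yaml
http:
  - name: "postauth"
    match:
      - uri:
          prefix: /postauth   # matches /postauth and everything under it
    route:
      - destination:
          host: postauth
          port:
            number: 8080
  - name: "frontend"
    match:
      - uri:
          prefix: /app        # matches /app, /app/, /app/index.html, ...
    route:
      - destination:
          host: sa-frontend
          port:
            number: 8081
```

With prefix matches, rule order matters when prefixes overlap, so put the more specific prefix first.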

FosElasticaBundle: how to dump the actual JSON passed to ElasticSearch?

I am using FosElasticaBundle in a Symfony project. I configured my mappings, but I get the exception "expected a simple value for field [_id] but found [START_OBJECT]]".
I'd like to see the actual JSON created by FosElasticaBundle so I can directly test it against my ElasticSearch server, and understand more about the exception.
According to the FosElastica documentation, everything should be logged when debug mode is enabled (i.e. in the DEV environment), but I can't see this happening; I only see Doctrine queries, but no JSON.
How can I dump the JSON created by FosElasticaBundle?
Update: mappings
# FOSElasticaBundle
fos_elastica:
    clients:
        default: { host: %elasticsearch_host%, port: %elasticsearch_port%, logger: false }
    indexes:
        app:
            types:
                user:
                    mappings:
                        name: ~
                        surname: ~
                    persistence:
                        driver: orm
                        model: AppBundle\Entity\User
                        provider: ~
                        listener: ~
                        finder: ~
I think you just need to set your logger to true instead of false:
fos_elastica:
    clients:
        default:
            host: %elasticsearch_host%
            port: %elasticsearch_port%
            logger: true    # <---- set true here
    ...

Connect Doctrine to memcached pool

Does anybody know how to connect Doctrine to a memcached pool, to use it as a cache driver?
I've checked the official bundle documentation and a lot of other sources, but didn't find any examples of such a connection.
Also, going through the source code, I could not find any option to use a pool, but perhaps I missed something.
I didn't test it, but the following should work:
In app/config/parameters.yml, set/add:
parameters:
    memcached.servers:
        - { host: 127.0.0.1, port: 11211 }
        - { host: 127.0.0.2, port: 11211 }
In app/config/config.yml, set/add:
services:
    memcache:
        # class 'Memcache' or 'Memcached', depending on which PHP module you use
        class: Memcache
        calls:
            - [ addServers, [ %memcached.servers% ]]
    doctrine.cache.memcached:
        class: Doctrine\Common\Cache\MemcachedCache
        calls:
            - [ setMemcached, [ "@memcache" ]]
In app/config/config_prod.yml, set:
doctrine:
    orm:
        metadata_cache_driver:
            type: service
            id: doctrine.cache.memcached
        query_cache_driver:
            type: service
            id: doctrine.cache.memcached
        result_cache_driver:
            type: service
            id: doctrine.cache.memcached
As I said, I can't test it, but this is the combination of several known-to-work techniques.
UPDATE: solution updated based on CrazySquirrel's findings.
Thanks lxg for your ideas. I've built the right configuration based on them. Please find the correct service definition below:
Application config:
result_cache_driver:
    type: service
    id: doctrine.cache.memcached
service.yml:
memcached:
    class: Memcached
    calls:
        - [ addServers, [ %memcached_servers% ]]
doctrine.cache.memcached:
    class: Doctrine\Common\Cache\MemcachedCache
    calls:
        - [ setMemcached, [ "@memcached" ]]

Redis with Symfony2 causes problems between sites on my server

I'm using the Symfony2 snc-redis bundle for caching.
On my server, Redis is installed and working correctly.
My problem is: when I try to clear or flush a DB with Redis, all sites on my server that use Redis crash, giving an internal server error because of the prod environment.
I've tried to change the Redis configuration ports in my config.yml for every single site on my server, but I don't think that worked.
My sample snc-redis configuration:
snc_redis:
    clients:
        default:
            type: predis
            alias: default
            dsn: redis://localhost
            logging: %kernel.debug%
        cache:
            type: predis
            alias: cache
            dsn: redis://localhost/1
            logging: true
        cluster:
            type: predis
            alias: cluster
            dsn:
                - redis://127.0.0.1/5
                - redis://127.0.0.2/6
                - redis://pw@/var/run/redis/redis-1.sock/7
                - redis://127.0.0.1:6379/8
            options:
                profile: 2.4
                connection_timeout: 10
                connection_persistent: true
                read_write_timeout: 30
                iterable_multibulk: false
                throw_errors: true
                cluster: Snc\RedisBundle\Client\Predis\Connection\PredisCluster
        monolog:
            type: predis
            alias: monolog
            dsn: redis://localhost/1
            logging: false
            options:
                connection_persistent: true
    session:
        client: default
        prefix: foo
        use_as_default: true
    doctrine:
        metadata_cache:
            client: cache
            entity_manager: default
            document_manager: default
        result_cache:
            client: cache
            entity_manager: [default, read]
            document_manager: [default, slave1, slave2]
            namespace: "dcrc:"
        query_cache:
            client: cache
            entity_manager: default
    monolog:
        client: monolog
        key: monolog
    swiftmailer:
        client: default
        key: swiftmailer
monolog:
    handlers:
        main:
            type: service
            id: monolog.handler.redis
            level: debug
What am I doing wrong? How can I get it to work correctly without causing crashes?
My Redis bundle for Symfony2:
Snc\RedisBundle\SncRedisBundle()
https://github.com/snc/SncRedisBundle
You can define a prefix for each site like this:
snc_redis:
    clients:
        default:
            dsn: "redis://localhost:6379"
            options:
                prefix: "site_name"
            type: phpredis
            alias: default
            logging: %kernel.debug%
Note: remember to set this prefix on all clients ;)
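An alternative (or complement) to prefixes, sketched here with made-up database indexes: give every site its own Redis logical database in the DSN, so a FLUSHDB issued for one site cannot touch the others:

```yaml
# site A, config.yml
snc_redis:
    clients:
        default:
            type: predis
            alias: default
            dsn: redis://localhost/2   # logical DB 2 for site A

# site B, config.yml
snc_redis:
    clients:
        default:
            type: predis
            alias: default
            dsn: redis://localhost/3   # logical DB 3 for site B
```

Keep in mind that FLUSHALL still wipes every logical database on the instance, so avoid it on shared servers.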
Did you try to change the client alias for every site?
