Keycloak on Kubernetes (GKE) with NGINX Ingress - session is lost after pod restart; page reload returns 502 Bad Gateway

I have set up a Keycloak cluster in GKE with NGINX as the Ingress controller, using the codecentric Helm chart: https://github.com/codecentric/helm-charts/tree/master/charts/keycloak
I am using JDBC_PING for JGroups and have the following CLI script and Ingress config, with replicas set to 2. When I kill a pod, the session is still usable and everything works fine; I can navigate the Keycloak admin interface and do everything. But when I hit F5 to reload the page, I receive a 502 Bad Gateway error. Sometimes it recovers and I can just reload and everything is fine, but sometimes I have to delete the cookies completely to make it work again.
I am not sure where the issue is coming from.
Cookies in the browser and the MySQL JGROUPSPING table (screenshots omitted).
Ingress Annotations:
annotations:
  kubernetes.io/ingress.class: "nginx"
  kubernetes.io/tls-acme: "true"
  nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
  nginx.ingress.kubernetes.io/limit-rate: "150"
  nginx.ingress.kubernetes.io/limit-rps: "150"
  nginx.ingress.kubernetes.io/session-cookie-change-on-failure: "true"
  nginx.ingress.kubernetes.io/affinity: "cookie"
  nginx.ingress.kubernetes.io/session-cookie-name: "route"
  nginx.ingress.kubernetes.io/session-cookie-expires: "21600"
  nginx.ingress.kubernetes.io/session-cookie-max-age: "21600"
  nginx.ingress.kubernetes.io/server-snippet: |
    location /auth/realms/master/metrics {
        return 403;
    }
Extra envs:
# Additional environment variables for Keycloak
extraEnv: |
  - name: KEYCLOAK_STATISTICS
    value: all
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: KEYCLOAK_USER
    value: '{{ .Values.ADMIN_USER }}'
  - name: KEYCLOAK_PASSWORD
    value: '{{ .Values.ADMIN_PASS }}'
  - name: JAVA_OPTS
    value: >-
      -XX:+UseContainerSupport
      -XX:MaxRAMPercentage=50.0
      -Djava.net.preferIPv4Stack=true
      -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS
      -Djava.awt.headless=true
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: JDBC_PING
  - name: CACHE_OWNERS_COUNT
    value: "2"
  - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
    value: "2"
  - name: DB_VENDOR
    value: mysql
  - name: DB_ADDR
    value: "127.0.0.1"
  - name: DB_PORT
    value: "3306"
  - name: DB_DATABASE
    value: keycloak_prod
  - name: DB_USER
    value: '{{ .Values.SQL_USER }}'
  - name: DB_PASSWORD
    value: '{{ .Values.SQL_PASS }}'
Keycloak CLI script:
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
echo Configuring node identifier
## Sets the node identifier to the node name (= pod name). Node identifiers have to be unique. They can have a
## maximum length of 23 characters. Thus, the chart's fullname template truncates its length accordingly.
/subsystem=transactions:write-attribute(name=node-identifier, value=${jboss.node.name})
echo NodeName: ${jboss.node.name}
echo Finished configuring node identifier
echo CUSTOM_CONFIG: executing CONFIG FOR K8S Failover Support
echo "------------------------------------------------------------------------------------------------------------"
echo "---------------------------------CUSTOM STARTUP CONFIG------------------------------------------------------"
echo "------------------------------------------------------------------------------------------------------------"
## JDBC PING
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=${env.CACHE_OWNERS_COUNT:2})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS_COUNT:2})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS_COUNT:2})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=owners, value=${env.CACHE_OWNERS_COUNT:2})
/subsystem=jgroups/stack=tcp:remove()
/subsystem=jgroups/stack=tcp:add()
/subsystem=jgroups/stack=tcp/transport=TCP:add(socket-binding="jgroups-tcp")
/subsystem=jgroups/stack=tcp/protocol=JDBC_PING:add()
/subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=datasource_jndi_name:add(value=java:jboss/datasources/KeycloakDS)
/subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=initialize_sql:add(value="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, ping_data varbinary(5000) DEFAULT NULL, PRIMARY KEY (own_addr, cluster_name)) ENGINE=InnoDB DEFAULT CHARSET=utf8")
/subsystem=jgroups/stack=tcp/protocol=MERGE3:add()
/subsystem=jgroups/stack=tcp/protocol=FD_SOCK:add(socket-binding="jgroups-tcp-fd")
/subsystem=jgroups/stack=tcp/protocol=FD:add()
/subsystem=jgroups/stack=tcp/protocol=VERIFY_SUSPECT:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.NAKACK2:add()
/subsystem=jgroups/stack=tcp/protocol=UNICAST3:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.STABLE:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.GMS:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.GMS/property=max_join_attempts:add(value=5)
/subsystem=jgroups/stack=tcp/protocol=MFC:add()
/subsystem=jgroups/stack=tcp/protocol=FRAG3:add()
/subsystem=jgroups/stack=udp:remove()
/subsystem=jgroups/channel=ee:write-attribute(name=stack, value=tcp)
/socket-binding-group=standard-sockets/socket-binding=jgroups-mping:remove()
## Cache Setup for Failover
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:remove()
/subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:remove()
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:remove()
/subsystem=infinispan/cache-container=keycloak/distributed-cache=clientSessions:remove()
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineClientSessions:remove()
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:remove()
/subsystem=infinispan/cache-container=keycloak/replicated-cache=sessions:add()
/subsystem=infinispan/cache-container=keycloak/replicated-cache=authenticationSessions:add()
/subsystem=infinispan/cache-container=keycloak/replicated-cache=offlineSessions:add()
/subsystem=infinispan/cache-container=keycloak/replicated-cache=clientSessions:add()
/subsystem=infinispan/cache-container=keycloak/replicated-cache=offlineClientSessions:add()
/subsystem=infinispan/cache-container=keycloak/replicated-cache=loginFailures:add()
echo "------------------------------------------------------------------------------------------------------------"
echo "---------------------------------CUSTOM STARTUP CONFIG DONE!------------------------------------------------"
echo "------------------------------------------------------------------------------------------------------------"
run-batch
try
:resolve-expression(expression=${env.JGROUPS_DISCOVERY_EXTERNAL_IP})
/subsystem=jgroups/stack=tcp/transport=TCP/property=external_addr:add(value=${env.JGROUPS_DISCOVERY_EXTERNAL_IP})
catch
echo "JGROUPS_DISCOVERY_EXTERNAL_IP maybe not set."
end-try
stop-embedded-server
Log of the restarted Pod:
log-restarted-pod.txt
Log of the still running pod:
log-still-running-pod.txt

I managed to figure out this issue: the following annotation needs to be added to the Ingress annotations.
nginx.ingress.kubernetes.io/proxy-buffer-size: "12k"
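The likely reason this helps: Keycloak's admin console responses carry large headers and cookies (e.g. the KEYCLOAK_IDENTITY token), which can exceed NGINX's default proxy buffer size (4k or 8k, one memory page); when a response header does not fit into the buffer, NGINX answers 502. A sketch of how the annotation fits into the existing block (the 12k value is simply what worked here, not an official recommendation):

annotations:
  nginx.ingress.kubernetes.io/affinity: "cookie"
  nginx.ingress.kubernetes.io/session-cookie-name: "route"
  # Larger buffer so Keycloak's big response headers fit:
  nginx.ingress.kubernetes.io/proxy-buffer-size: "12k"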

Related

How to add nested dictionary to dynamic host in Ansible

I have application details in per-application vars files. For example, myapp1 in the "QA" environment looks like this:
cat myapp1_QA.yml
---
APP_HOSTS:
  - myapphost7:
      - logs:
          - /tmp/web/apphost7_access
          - /tmp/web/apphost7_error
  - myapphost9:
      - logs:
          - /tmp/web/apphost9_access
          - /tmp/web/apphost9_error
          - /tmp/web/apphost9_logs
WEB_HOSTS:
  - mywebhost7:
      - logs:
          - /tmp/webserver/webhost7.pid
In this example I wish to create a dynamic inventory group containing the 3 hosts
myapphost7
myapphost9
mywebhost7
and each host has a variable logs that can be looped over to get the file paths.
Below is my Ansible play:
---
- hosts: localhost
  tasks:
    - include_vars:
        file: "{{ playbook_dir }}/{{ appname }}_{{ myenv }}.yml"
    - name: Dsiplay dictionary data
      debug:
        msg: "{{ item[logs] }}"
      loop: "{{ APP_HOSTS }}"
I get the below error:
ansible-playbook read.yml -e appname=myapp1 -e myenv=QA
TASK [Dsiplay dictionary data] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'logs' is undefined\n\nThe error appears to be in '/root/read.yml': line 8, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Dsiplay dictionary data\n ^ here\n"}
My requirement is to store "myapphost7", "myapphost9", "mywebhost7" in a group using add_host, with a variable logs holding the list of log files.
Note: if no hosts are defined under WEB_HOSTS: or APP_HOSTS:, then nothing should be added to the dynamic inventory.
Can you please suggest?
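One possible direction (an untested sketch that assumes the exact variable structure above, where each list entry is a one-key dict mapping the hostname to a list containing a logs dict). Note the original error comes from item[logs], which looks up a variable named logs instead of the string key 'logs':

- hosts: localhost
  gather_facts: false
  tasks:
    - include_vars:
        file: "{{ playbook_dir }}/{{ appname }}_{{ myenv }}.yml"

    # Concatenate both lists; default([]) means a missing or empty key adds no hosts.
    - name: Build the in-memory group with a per-host logs variable
      vars:
        entry: "{{ item | dict2items | first }}"
      add_host:
        name: "{{ entry.key }}"
        groups: dynamic_hosts
        logs: "{{ (entry.value | first).logs }}"
      loop: "{{ (APP_HOSTS | default([])) + (WEB_HOSTS | default([])) }}"

Any extra key passed to add_host (here logs) becomes a host variable that later plays on dynamic_hosts can loop over.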

Ansible Nested Loop for Cisco ACL

I'm creating a playbook for an ACL update. The existing ACL needs to be updated, but before adding the new set of IP addresses to it, I need to make sure that the ACL is present and that the IPs haven't already been configured.
Process:
1. Add the IP addresses below to these ACLs: 11, 13, DATA_TEST, dummy.
2. Check that the ACLs in the list are present, using commands: "show access-lists {{item}}".
   Q: I can't figure out how to access each item in the result of the first task to see whether the ACL has been configured. For example, we can see from the output that dummy produces no output; how can I exclude it and only process ACLs that exist? (Refer to the code below.)
3. Check whether the IP addresses have already been added.
   Q: What is the best approach here? I'm thinking of using when and comparing the ACL output from stdout against the given variables' content (e.g. parents/lines)?
4. Add the set of IP addresses to the target ACL.
   Q: What is the best approach here? I need to match the ACL name and configure it using the variable.
If somebody is knowledgeable about Ansible, perhaps you could assist me in creating this project? I'm still doing some research, so any assistance you can give would be greatly appreciated. Thanks
My Code:
---
- name: Switch SVU
  hosts: Switches
  gather_facts: False
  vars:
    my_acl_list:
      - 11
      - 13
      - DATA_TEST
      - dummy
    fail: "No such access-list {{item}}"
    UP_ACL11:
      parents:
        - access-list 11 permit 192.168.1.4
        - access-list 11 permit 192.168.1.5
    UP_ACL13:
      parents: access-list 13 permit 10.22.1.64 0.0.0.63
    UP_ACLDATA:
      lines:
        - permit 172.11.1.64 0.0.0.63
        - permit 172.12.2.64 0.0.0.63
      parents: ip access-list standard DATA_TEST
  tasks:
    - name: Check if the ACL Name already exists.
      ios_command:
        commands: "show access-lists {{item}}"
      register: acl_result
      loop: "{{my_acl_list}}"
    - debug: msg="{{acl_result}}"
    - name: Check if ACL Exist
      debug:
        msg: "{{item.stdout}}"
      when: item.stdout.exists
      with_items: "{{acl_result.results}}"
      loop_control:
        label: "{{item.item}}"
    # Pending - need to know how to match whether the ACL name exists in stdout.
    - name: Check if IP addresses already added
      set_fact:
      when:
        # pending - ansible lookup?
        # when var: UP_ACL11, UP_ACL13, UP_ACLDATA IPs are not in ACL then TRUE
    - name: Add the set of IP addresses on target ACL
      ios_config:
        # pending - if it doesn't exist on a particular ACL name, configure using the var: UP_ACL11, UP_ACL13, UP_ACLDATA
Given the simplified data for testing
acl_result:
  results:
    - item: DATA_TEST
      stdout:
        - "Standard ... 10 permit ... 20 permit ..."
      stdout_lines:
        - - "Standard ..."
          - "10 permit ..."
          - "20 permit ..."
    - item: dummy
      stdout:
        - ""
      stdout_lines:
        - - ""
Q: "Check if ACL Exists"
A: If an ACL doesn't exist, the attribute stdout is a list of empty strings. Test it:
- name: Check if ACL Exists
  debug:
    msg: "{{ item.item }} exists: {{ item.stdout|map('length')|select()|length > 0 }}"
  loop: "{{ acl_result.results }}"
  loop_control:
    label: "{{ item.item }}"
gives
TASK [Check if ACL Exists] ********************************************
ok: [localhost] => (item=DATA_TEST) =>
  msg: 'DATA_TEST exists: True'
ok: [localhost] => (item=dummy) =>
  msg: 'dummy exists: False'
Notes:
In the filter select, "If no test is specified, each object will be evaluated as a boolean". The number 0 evaluates to false, so zero-length lines are dropped.
Example of a complete playbook for testing
- hosts: localhost
  vars:
    acl_result:
      results:
        - item: DATA_TEST
          stdout:
            - "Standard ... 10 permit ... 20 permit ..."
          stdout_lines:
            - - "Standard ..."
              - "10 permit ..."
              - "20 permit ..."
        - item: dummy
          stdout:
            - ""
          stdout_lines:
            - - ""
  tasks:
    - name: Check if ACL Exists
      debug:
        msg: "{{ item.item }} exists: {{ item.stdout|map('length')|select()|length > 0 }}"
      loop: "{{ acl_result.results }}"
      loop_control:
        label: "{{ item.item }}"
The test can be simplified if you're sure stdout is a list with a single line only:
  msg: "{{ item.item }} exists: {{ item.stdout|first|length > 0 }}"

How do I combine two commands in one task in Ansible?

So, my problem is that I want to check if nginx is installed on two different OSes with different package managers.
- name: Verifying nginx installation # RedHat
  command: "rpm -q nginx"
  when: ansible_facts.pkg_mgr in ["yum", "dnf", "rpm"] # or (ansible_os_family == "RedHat")
- name: Verifying nginx installation # Debian
  command: "dpkg -l nginx"
  when: ansible_facts.pkg_mgr in ["dpkg", "apt"] # or (ansible_os_family == "Debian")
Can I combine these into one task, and if so, how? I need to register the output and use it later, and I can't figure it out.
An alternative solution is to use the package_facts module, like this:
- hosts: localhost
  tasks:
    - package_facts:
    - debug:
        msg: "Nginx is installed!"
      when: "'nginx' in packages"
But you could also register individual variables for your two tasks, and then combine the result:
- hosts: localhost
  tasks:
    - name: Verifying nginx installation # RedHat
      command: "rpm -q nginx"
      when: ansible_facts.pkg_mgr in ["yum", "dnf", "rpm"] # or (ansible_os_family == "RedHat")
      failed_when: false
      register: rpm_check
    - name: Verifying nginx installation # Debian
      command: "dpkg -l nginx"
      when: ansible_facts.pkg_mgr in ["dpkg", "apt"] # or (ansible_os_family == "Debian")
      failed_when: false
      register: dpkg_check
    - set_fact:
        nginx_result: >-
          {{
            (rpm_check is not skipped and rpm_check.rc == 0) or
            (dpkg_check is not skipped and dpkg_check.rc == 0)
          }}
    - debug:
        msg: "nginx is installed"
      when: nginx_result

How to check 777 permissions on multiple directories with Ansible

For a single directory my script runs fine, but how do I check the same for multiple directories?
Code for a single directory:
---
- name: checking directory permission
  hosts: test
  become: true
  tasks:
    - name: Getting permission to registered var 'p'
      stat:
        path: /var/SP/Shared/
      register: p
    - debug:
        msg: "permission is 777 for /var/SP/Shared/"
      when: p.stat.mode == "0777" or p.stat.mode == "2777" or p.stat.mode == "4777"
Reading the stat module documentation shows that there is no parameter for recursion. Testing with_fileglob: did not give the expected result.
So it seems you would need to loop over the directories, for example like this:
- name: Get directory permissions
  stat:
    path: "{{ item }}"
  register: result
  with_items:
    - "/tmp/example"
    - "/tmp/test"
  tags: CIS
- name: result
  debug:
    msg:
      - "{{ result }}"
  tags: CIS
but I am sure more advanced solutions can be found.
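To actually evaluate the mode per directory, the registered loop results can be filtered; a small untested sketch in the spirit of the single-directory check (stat returns mode as a string such as "0777", so comparing the last three characters covers the 0777/2777/4777 cases from the question):

- name: Report directories with 777 permissions
  debug:
    msg: "permission is 777 for {{ item.item }}"
  when: item.stat.exists and item.stat.mode[-3:] == "777"
  loop: "{{ result.results }}"
  loop_control:
    label: "{{ item.item }}"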

Can't generate a Let's Encrypt certificate using SaltStack

I am trying to generate a Let's Encrypt certificate, and here are the steps that I followed:
Under /srv/salt/pillars/minion I added the file init.sls:
letsencrypt:
  config: |
    email = email
    auth:
      method: standalone
      type: http-01
      port: 8080
    agree-tos = True
    renew-by-default = True
  domainsets:
    mydomain:
      - mydomain.com
After that I updated the Salt pillar and applied the state:
# . update_salt.sh
# salt 'minion' state.sls letsencrypt
I got this result:
ID: letsencrypt-crontab-mydomain.com
Function: cron.present
Name: /usr/local/bin/renew_letsencrypt_cert.sh mydomain.com
Result: False
Comment: One or more requisite failed: letsencrypt.domains.create-initial-cert-mydomain.com
Started:
Duration:
Changes:
------------
ID: create-fullchain-privkey-pem-for-mydomain.com
Function: cmd.run
Name: cat /etc/letsencrypt/live/mydomain.com/fullchain.pem \
/etc/letsencrypt/live/mydomain.com/privkey.pem \
> /etc/letsencrypt/live/mydomain.com/fullchain-privkey.pem && \
chmod 600 /etc/letsencrypt/live/mydomain.com/fullchain-privkey.pem
Result: False
Comment: One or more requisite failed: letsencrypt.domains.create-initial-cert-mydomain.com
Started:
Duration:
Changes:
What should I modify in my configuration to get the certificate?
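Both failing states only report a failed requisite, so the real error is in the letsencrypt.domains.create-initial-cert-mydomain.com state; re-running salt 'minion' state.sls letsencrypt -l debug should surface certbot's own message. One thing worth checking, purely as a guess: if this pillar feeds the SaltStack letsencrypt-formula, the config block becomes certbot's cli.ini, which expects flag = value lines rather than a nested auth: mapping, e.g. something along these lines (flag names are certbot's, not verified against this formula version):

letsencrypt:
  config: |
    email = email
    authenticator = standalone
    preferred-challenges = http
    http-01-port = 8080
    agree-tos = True
    renew-by-default = True
  domainsets:
    mydomain:
      - mydomain.com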
