cryptography.fernet.InvalidToken airflow error - airflow

Unfortunately, I deleted an Airflow deployment on EKS that was installed via the Helm chart. Connections and variables are encrypted with a Fernet key in the database. When I deployed a new Airflow via Helm again, everything seems to be up and running fine. The only issue is that Airflow cannot read the connections, and the Connections tab shows the error below:
cryptography.fernet.InvalidToken
I also compared the Fernet key in the Kubernetes secret with the one inside the webserver pod (via kubectl exec). Both are the same.
Can someone help me, please?
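For anyone hitting the same wall: InvalidToken means the Fernet key the running Airflow loads differs from the key that encrypted the rows in the database. As a quick check, compare the key the webserver actually resolved with the one in the chart-managed secret (the deployment and secret names below assume the official chart's defaults with release name airflow, and airflow config get-value needs Airflow 2.1+; adjust to your setup):
# key the running webserver resolved from its config
kubectl -n airflow exec deploy/airflow-webserver -- airflow config get-value core fernet_key
# key stored in the chart-managed secret
kubectl -n airflow get secret airflow-fernet-key -o jsonpath='{.data.fernet-key}' | base64 -d
If the values match but decryption still fails, the rows were most likely encrypted with the previous deployment's key. If that old key is still known, you can set AIRFLOW__CORE__FERNET_KEY to "new_key,old_key" and run airflow rotate-fernet-key; if it is lost, the connections and variables cannot be decrypted and must be re-created.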

Related

How can I add EFS to an Airflow deployment on Amazon EKS?

Kubernetes and EKS newbie here.
I've set up an Elastic Kubernetes Service (EKS) cluster and added an Airflow deployment on top of it using the official Helm chart for Apache Airflow. I configured git-sync and can successfully run my DAGs. For some of the DAGs, I need to save the data to an Amazon EFS, so I installed the Amazon EFS CSI driver on EKS following the instructions in the Amazon documentation.
Now I can create a new pod with access to the NFS, but the Airflow deployment broke and stays in a state of Back-off restarting failed container. I also got the events with kubectl -n airflow get events --sort-by='{.lastTimestamp}' and I see the following messages:
TYPE REASON OBJECT MESSAGE
Warning BackOff pod/airflow-scheduler-599fc856dc-c4pgz Back-off restarting failed container
Normal FailedBinding persistentvolumeclaim/redis-db-airflow-redis-0 no persistent volumes available for this claim and no storage class is set
Warning ProvisioningFailed persistentvolumeclaim/ebs-claim storageclass.storage.k8s.io "ebs-sc" not found
Normal FailedBinding persistentvolumeclaim/data-airflow-postgresql-0 no persistent volumes available for this claim and no storage class is set
I have tried this on EKS version 1.22.
I understand from this that Airflow expects an EBS volume for its pods, but installing the NFS driver changed the PV configuration.
The PVs before I installed the driver were these:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-###### 100Gi RWO Delete Bound airflow/logs-airflow-worker-0 gp2 1d
pvc-###### 8Gi RWO Delete Bound airflow/data-airflow-postgresql-0 gp2 1d
pvc-###### 1Gi RWO Delete Bound airflow/redis-db-airflow-redis-0 gp2 1d
After I installed the EFS CSI driver, I see the PVs have changed.
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
efs-pvc 5Gi RWX Retain Bound efs-storage-claim efs-sc 2d
I have tried deploying airflow before or after installing the EFS driver and in both cases I get the same error.
How can I get access to the NFS from within Airflow without breaking the Airflow deployment on EKS? Any help would be appreciated.
As stated in the errors above, "no persistent volumes available for this claim and no storage class is set" and "storageclass.storage.k8s.io "ebs-sc" not found", you have to deploy a storage class called efs-sc that uses the EFS CSI driver as its provisioner.
Further documentation can be found here.
An example of creating your missing storage class and persistent volume can be found here.
These steps are also described in the AWS EKS user guide
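For illustration, a minimal dynamically provisioning StorageClass for the EFS CSI driver could look like the sketch below; the file system ID is a placeholder you would replace with your own, and the parameters follow the driver's dynamic-provisioning (access point) mode:
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0   # placeholder: your EFS file system ID
  directoryPerms: "700"
EOF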

Airflow Remote logging doesn't work for both scheduler & dag_process_manager logs

Hi, based on the Airflow docs I was able to set up cloud/remote logging.
Remote logging works for DAG and task logs, but it does not back up or remotely store the following logs:
scheduler
dag_processing_manager
I am using the Airflow Docker image from Docker Hub.

airflow ssh connection, connection type not displaying

I am trying to add a connection to the Airflow server. I wanted to add SSH, but only the five connection types listed below are displayed. Can you point me to how to add the SSH connection type?
In Airflow < 2.0.0 all connection types are available by default.
In Airflow >= 2.0.0, Airflow automatically discovers providers' additional capabilities (connection types, extra links, etc.): once you install a provider package and restart Airflow, those become available automatically. More information about this can be found in the docs. Specifically, for the SSH connection type you need to install the SSH provider:
pip install apache-airflow-providers-ssh
In Airflow >= 2.0.0, if you installed the provider after Airflow was already started, going to the UI and refreshing the page will not make it show up.
You must restart Airflow for the newly installed provider to appear in the dropdown list.
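As a quick sanity check after the restart, you can confirm the provider was discovered from the CLI (output format varies slightly by Airflow version):
# should list apache-airflow-providers-ssh among the installed providers
airflow providers list | grep ssh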

How do I configure Octavia in OpenStack Kolla?

I'm trying to deploy Octavia in Kolla OpenStack; my globals.yml is:
config_strategy: "COPY_ALWAYS"
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
kolla_internal_vip_address: "169.254.1.11"
network_interface: "eth0"
neutron_external_interface: "eth1"
neutron_plugin_agent: "openvswitch"
enable_neutron_provider_networks: "yes"
enable_haproxy: "yes"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
keystone_token_provider: 'fernet'
cinder_volume_group: "openstack_cinder"
nova_compute_virt_type: "kvm"
enable_octavia: "yes"
octavia_network_interface: "eth2"
I use the default/automatic configuration; the keypair, network, and flavor are created in the service project. Then I create the amphora image for this project.
All this is indicated in the OpenStack guide, but it doesn't work.
When I create a load balancer, the amphora is deployed, but the load balancer stays in "Pending Create" status. I saw that the created network is VXLAN, a tenant network, and I think it should have external connectivity; I tried, but it didn't work.
I checked the Open vSwitch configuration and don't see any difference between deploying with or without Octavia.
Am I missing something? I don't know what to do at this point; I even tried the manual config, but I couldn't make it work.
I can't speak to the kolla part of this issue, but with the load balancer in PENDING_CREATE, the controller (worker) logs should show where it is retrying to take some action against your cloud and failing. It will retry for some time, then move to an ERROR status if the cloud issue is not resolved in time.
Without seeing the logs, my guess is kolla did not set up the lb-mgmt-net correctly.
I don't know how to get it working with an external network, but for a tenant network the solution appears to be:
Setting octavia_network_interface will make kolla create that interface, so any name will do. Other setups (e.g. the devstack plugin) name it o-hm0, so that is what I did.
Set octavia_network_type to tenant in globals.yml. Note this requires the host to have dhclient available; kolla didn't seem to install it for me.
I tested this on stable/zed and it appears to work for me.
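Putting those two steps together, the relevant globals.yml additions would be something like the following; o-hm0 is just the conventional interface name borrowed from the devstack plugin, not a requirement:
octavia_network_interface: "o-hm0"
octavia_network_type: "tenant"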

woocommerce webhooks not firing

WooCommerce webhooks aren't firing at all for me, even on a fresh install. I did the following:
Create a new MySQL database
Install WP from the zip file.
Set up WP.
Install Woocommerce.
Enable REST API and create a key.
Added "Coupon created" webhook, made sure it's set to active, and set it to a publicly accessible site.
When I create a coupon, the webhook does not fire, and no entry is created in the log. I tried this with orders as well and also doesn't work.
I think it's a machine configuration problem, but I'm not sure what to change. The machine is an EC2 instance and has all ports open in its security group policy.
Weirdest of all, the webhooks do work on a different EC2 instance, but that's a production machine, and I want a dev server working so I can test things out. The only config differences between the production and dev machines that I can think of are the subnets and the firewall, but I don't understand why the subnet should matter, and I opened all the firewall ports on the dev machine.
What Linux distributions are you running for prod and dev?
CentOS ships with SELinux enabled, which by default does not allow HTTPD scripts and modules to connect to the network. Enable the relevant boolean with:
setsebool -P httpd_can_network_connect on
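To confirm SELinux really is the blocker before flipping anything, check the boolean's current state and look for recent denials:
# current value; "off" means outbound connections from httpd are blocked
getsebool httpd_can_network_connect
# recent SELinux denials, if auditd is running
ausearch -m AVC -ts recent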
If the above does not apply, identify network problems by trying to connect to AWS RDS from the SSH CLI. If you can open a connection from the CLI, the problem is in your application; if you can't, it's a network problem. The first thing to check in that case is the AWS RDS security group. For testing, you can open port 3306 to the public.
Let me know how it goes.
