Where is the WordPress install located on a Google Cloud Kubernetes cluster?

I have WordPress running as an app container in a Google Cloud Kubernetes cluster.
I've broken my site a bit with some bad modifications to the theme's functions.php file, so now I would like to remove my bad code to get the site working again. However, I cannot find where WordPress is located.
Since all I need is to remove a couple of lines of PHP code, I thought it might be easier to do it right from the SSH command line rather than playing with SFTP and keys (sorry, I'm a newbie to WordPress/websites in general).
This is how it looks in the Google Cloud Console (screenshots: the WordPress install and my cluster).
I connect to the cluster over SSH by pressing the "Connect" button.
And... ta-da! I see NO "/var/www/html" in the "var" folder! The ".../www/html" folder does not exist and is not visible, even as root.
Can someone help me find the WordPress install, please? :)
Here is the output of the $ kubectl describe pod market-engine-wordpress-0 -n kalm-system command:
Name: market-engine-wordpress-0
Namespace: kalm-system
Priority: 0
Node: gke-cluster-1-default-pool-6c5a3d37-sx7g/10.164.0.2
Start Time: Thu, 25 Jun 2020 17:35:54 +0300
Labels: app.kubernetes.io/component=wordpress-webserver
app.kubernetes.io/name=market-engine
controller-revision-hash=market-engine-wordpress-b47df865b
statefulset.kubernetes.io/pod-name=market-engine-wordpress-0
Annotations: <none>
Status: Running
IP: 10.36.0.17
IPs:
IP: 10.36.0.17
Controlled By: StatefulSet/market-engine-wordpress
Containers:
wordpress:
Container ID: docker://32ee6d8662ff29ce32a5c56384ba9548bdb54ebd7556de98cd9c401a742344d6
Image: gcr.io/cloud-marketplace/google/wordpress:5.3.2-20200515-193202
Image ID: docker-pullable://gcr.io/cloud-marketplace/google/wordpress#sha256:cb4515c3f331e0c6bcca5ec7b12d2f3f039fc5cdae32f0869abf19238d580575
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 29 Jun 2020 15:37:38 +0300
Finished: Mon, 29 Jun 2020 15:40:08 +0300
Ready: False
Restart Count: 774
Environment:
POD_NAME: market-engine-wordpress-0 (v1:metadata.name)
POD_NAMESPACE: kalm-system (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4f6xq (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
market-engine-wordpress-pvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: market-engine-wordpress-pvc-market-engine-wordpress-0
ReadOnly: false
apache-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: market-engine-wordpress-config
Optional: false
config-map:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: market-engine-wordpress-config
Optional: false
default-token-4f6xq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4f6xq
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 8m33s (x9023 over 2d15h) kubelet, gke-cluster-1-default-pool-6c5a3d37-sx7g Readiness probe failed: HTTP probe failed with statuscode: 500
Warning BackOff 3m30s (x9287 over 2d15h) kubelet, gke-cluster-1-default-pool-6c5a3d37-sx7g Back-off restarting failed container

As you described, your application is crashing because of a change you made in the code. This makes your website fail, and your pod is configured to check whether the website is running fine; if it is not, the container is restarted. The configuration that makes this happen is the livenessProbe and the readinessProbe.
The problem here is that this restart loop prevents you from getting into the container to fix the code.
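For reference, such probes in a container spec typically look something like the sketch below. This is only an illustration of the mechanism, not your actual configuration; the real values live in the market-engine-wordpress StatefulSet manifest you will save in the first step.
readinessProbe:
  httpGet:
    path: /              # WordPress currently answers 500 here because of the broken functions.php
    port: 80
  periodSeconds: 10      # illustrative value
livenessProbe:
  httpGet:
    path: /
    port: 80
  failureThreshold: 3    # illustrative: after repeated failures the container is restarted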
The good news is that your data is saved under /var/www/html, and this directory is on external storage (a PersistentVolumeClaim).
So the easiest solution is to create a new pod and attach this storage to it. The problem is that this storage cannot be mounted by more than one pod at the same time, since the claim is almost certainly ReadWriteOnce (the default for GKE persistent disks).
Creating this new pod requires you to temporarily remove your WordPress pod. I know it may be scary, but we will recreate it afterwards.
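If you want to confirm that, you can check the claim's access modes, using the claim name from the describe output above (a quick sketch; on GKE standard persistent disks this normally prints ["ReadWriteOnce"], which is why only one pod can mount it at a time):
$ kubectl get pvc market-engine-wordpress-pvc-market-engine-wordpress-0 -n kalm-system -o jsonpath='{.spec.accessModes}'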
I reproduced your scenario and tested these steps, so let's start. (All steps are mandatory.)
Before we start, let's save your market-engine-wordpress manifest:
$ kubectl get statefulsets market-engine-wordpress -o yaml > market-engine-wordpress.yaml
Delete your wordpress statefulset:
$ kubectl delete statefulsets market-engine-wordpress
This command deletes the object that manages your WordPress pod; the PersistentVolumeClaim holding your data is not deleted.
Now, let's create a new pod using the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: fenix
  namespace: kalm-system
spec:
  volumes:
  - name: market-engine-wordpress-pvc
    persistentVolumeClaim:
      claimName: market-engine-wordpress-pvc-market-engine-wordpress-0
  containers:
  - name: ubuntu
    image: ubuntu
    command: ['sh', '-c', "sleep 36000"]
    volumeMounts:
    - mountPath: /var/www/html
      name: market-engine-wordpress-pvc
      subPath: wp
To create this pod, save this content to a file named fenix.yaml and run the following command:
$ kubectl apply -f fenix.yaml
Check if the pod is ready:
$ kubectl get pods fenix
NAME READY STATUS RESTARTS AGE
fenix 1/1 Running 0 5m
From this point, you can connect to this pod and fix your functions.php file:
$ kubectl exec -ti fenix -- bash
root@fenix:/# cd /var/www/html/wp-includes/
root@fenix:/var/www/html/wp-includes#
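If you are not sure where your theme keeps its functions.php, a quick search inside the mounted volume narrows it down (assuming the standard WordPress layout under wp-content/themes):
root@fenix:/# find /var/www/html/wp-content/themes -name functions.php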
When you are done fixing your code, we can delete this pod and re-create your WordPress pod.
$ kubectl delete pod fenix
pod "fenix" deleted
$ kubectl apply -f market-engine-wordpress.yaml
statefulset.apps/market-engine-wordpress created
Check if the pod is ready:
$ kubectl get pod market-engine-wordpress-0
NAME READY STATUS RESTARTS AGE
market-engine-wordpress-0 2/2 Running 0 97s
If you need to exec into the wordpress container: your application uses a multi-container pod, and connecting to the right container requires you to indicate which container you want to connect to.
To check how many containers there are and what they are named, you can run kubectl get pod mypod -o yaml or kubectl describe pod mypod.
To finally exec into it, use the following command:
$ kubectl exec -ti market-engine-wordpress-0 -c wordpress -- bash
root@market-engine-wordpress-0:/var/www/html#

Related

How to get root password in Bitnami Wordpress from kubernetes shell?

I have installed WordPress in Rancher (docker.io/bitnami/wordpress:5.3.2-debian-10-r43). I have to make wp-config writable, but I get stuck when I open a shell inside this pod and try to log in as root:
kubectl exec -t -i --namespace=annuaire-p-brqcw annuaire-p-brqcw-wordpress-7ff856cd9f-l9gf7 bash
I cannot log in as root; no password matches in the Bitnami WordPress installation.
wordpress@annuaire-p-brqcw-wordpress-7ff856cd9f-l9gf7:/$ su root
Password:
su: Authentication failure
What is the default password, or how can I change it?
I really need your help!
The WordPress container has been migrated to a "non-root" user
approach. Previously the container ran as the root user and the Apache
daemon was started as the daemon user. From now on, both the container
and the Apache daemon run as user 1001. You can revert this behavior
by changing USER 1001 to USER root in the Dockerfile.
No writing permissions will be granted on wp-config.php by default.
This means that the only way to run it as the root user is to create your own Dockerfile and change the user back to root.
However, running those containers as root is not recommended for security reasons.
The simplest and most Kubernetes-native way to change file content on a Pod's container file system is to create a ConfigMap object from a file using the following command:
$ kubectl create configmap myconfigmap --from-file=foo.txt
$ cat foo.txt
foo test
(Check the ConfigMaps documentation for details on how to update them.)
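For example, a common way to update the same ConfigMap later from the edited file is the following (a sketch for a recent kubectl; older versions use plain --dry-run; names as above):
$ kubectl create configmap myconfigmap --from-file=foo.txt --dry-run=client -o yaml | kubectl apply -f -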
Then mount the ConfigMap into your container to replace the existing file, as follows (the example requires some adjustments to work with the WordPress image):
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: nginx
    volumeMounts:
    - name: volname1
      mountPath: "/etc/wpconfig.conf"
      readOnly: true
      subPath: foo.txt
  volumes:
  - name: volname1
    configMap:
      name: myconfigmap
In the above example, the file from the ConfigMap's data: section replaces the original /etc/wpconfig.conf file (or creates it if it doesn't exist) in the running container, without the need to build a new image.
$ kubectl exec -ti mypod -- bash
root@mypod:/# ls -lah /etc/wpconfig.conf
-rw-r--r-- 1 root root 9 Jun 4 16:31 /etc/wpconfig.conf
root@mypod:/# cat /etc/wpconfig.conf
foo test
Note that the file permissions are 644, which is enough for the file to be readable by a non-root user.
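If you need different permissions on the mounted file, the configMap volume also accepts a defaultMode; a minimal sketch (the value is illustrative):
volumes:
- name: volname1
  configMap:
    name: myconfigmap
    defaultMode: 0644   # e.g. 0444 for world-readable, 0400 to restrict to the owner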
By the way, the Bitnami Helm chart also uses this approach, but it relies on an existing ConfigMap in your cluster for adding a custom .htaccess and on a PersistentVolumeClaim for mounting the WordPress data folder.

Automating my own job through Jenkins and publishing on Kubernetes through HTTPD or NGINX

I have some files in a React project. I build it with npm, and I do this build locally; I have the build path.
I want to deploy this build to Kubernetes pods.
How do I write the deployment.yaml?
How do I configure my nginx or httpd root folder so it can publish my code?
If I first have to make a Docker image of that project, then how?
First you have to create a Dockerfile.
E.g. Dockerfile (this example is for a Go app; adapt the base image and build steps to your project):
FROM golang
WORKDIR /go/src/github.com/habibridho/simple-go
ADD . ./
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix .
EXPOSE 8888
ENTRYPOINT ./simple-go
Build your image and try to run it.
$ docker build -t simple-go .
$ docker run -d -p 8888:8888 simple-go
The next step is transferring the image to the server. You can use Docker Hub: push the image to the repository and pull it from the server.
-- on local machine
$ docker tag simple-go habibridho/simple-go
$ docker push habibridho/simple-go
-- on server
$ docker pull habibridho/simple-go
Note that the default Docker Hub repository visibility is public, so if your project is private you need to change the repository visibility on the Docker Hub website.
You can find useful information about that process here:
docker-images
Once we have the image on the server, we can run the app just like we did on the local machine, by creating a Deployment.
The following is an example of a Deployment. It creates a ReplicaSet to bring up three Pods of your app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-app
        image: your-app:version
        ports:
        - containerPort: port
In this example:
A Deployment named your-deployment is created, indicated by the .metadata.name field.
The Deployment creates three replicated Pods, indicated by the replicas field.
The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app: your-app).
However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.
To create Deployment, run the following command:
$ kubectl create -f your_deployment_file_name.yaml
You can find more information here: kubernetes-deployment.
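The Deployment alone is only reachable inside the cluster. To actually publish the app, you would normally also create a Service in front of it; a minimal sketch, assuming the nginx/httpd container above serves your build on port 80 and keeps the app: your-app label:
apiVersion: v1
kind: Service
metadata:
  name: your-app
spec:
  type: NodePort            # or LoadBalancer if your cluster supports it
  selector:
    app: your-app
  ports:
  - port: 80                # port the Service exposes
    targetPort: 80          # containerPort nginx/httpd listens on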

kubectl port-forward connection refused [ socat ]

I am running PySpark on one of the ports of a Kubernetes cluster. I am trying to port-forward to my local machine. I am getting this error while executing my Python file.
Forwarding from 127.0.0.1:7077 -> 7077
Forwarding from [::1]:7077 -> 7077
Handling connection for 7077
E0401 01:08:11.964798 20399 portforward.go:400] an error occurred forwarding 7077 -> 7077: error forwarding port 7077 to pod 68ced395bd081247d1ee6b431776ac2bd3fbfda4d516da156959b6271c2ad90c, uid : exit status 1: 2019/03/31 19:38:11 socat[1748104] E connect(5, AF=2 127.0.0.1:7077, 16): Connection refused
These are a few lines of my Python file. I am getting the error on the line where conf is defined.
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
conf = SparkConf().setMaster("spark://localhost:7077").setAppName("Stand Alone Python Script")
I already tried installing socat on the Kubernetes nodes. I am using Spark version 2.4.0 locally. I even tried exposing port 7077 in the YAML file. It did not work out.
This is the YAML file used for deployment.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  creationTimestamp: 2018-10-07T15:23:35Z
  generation: 16
  labels:
    chart: spark-0.2.1
    component: m3-zeppelin
    heritage: Tiller
    release: m3
  name: m3-zeppelin
  namespace: default
  resourceVersion: "55461362"
  selfLink: /apis/apps/v1beta1/namespaces/default/statefulsets/m3-zeppelin
  uid: f56e86fa-ca44-11e8-af6c-42010a8a00f2
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      component: m3-zeppelin
  serviceName: m3-zeppelin
  template:
    metadata:
      creationTimestamp: null
      labels:
        chart: spark-0.2.1
        component: m3-zeppelin
        heritage: Tiller
        release: m3
    spec:
      containers:
      - args:
        - bash
        - -c
        - wget -qO- https://archive.apache.org/dist/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
          | tar xz; mv spark-2.2.2-bin-hadoop2.7 spark; curl -sSLO https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-latest-hadoop2.jar;
          mv gcs-connector-latest-hadoop2.jar lib; ./bin/zeppelin.sh
        env:
        - name: SPARK_MASTER
          value: spark://m3-master:7077
        image: apache/zeppelin:0.8.0
        imagePullPolicy: IfNotPresent
        name: m3-zeppelin
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources:
          requests:
            cpu: 100m
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /zeppelin/conf
          name: m3-zeppelin-config
        - mountPath: /zeppelin/notebook
          name: m3-zeppelin-notebook
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: m3-zeppelin-config
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10G
      storageClassName: standard
    status:
      phase: Pending
  - metadata:
      creationTimestamp: null
      name: m3-zeppelin-notebook
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10G
      storageClassName: standard
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: m3-zeppelin-5779b84d99
  observedGeneration: 16
  readyReplicas: 1
  replicas: 1
  updateRevision: m3-zeppelin-5779b84d99
  updatedReplicas: 1
Focusing specifically on the error from the Kubernetes perspective, it could be related to:
a mismatch between the port the request is sent to and the port the receiving end listens on (for example, sending a request to an NGINX instance on port 1234);
the Pod not listening on the desired port at all.
I've managed to reproduce this error with a Kubernetes cluster created with kubespray.
Assuming that you've run the following steps:
$ kubectl create deployment nginx --image=nginx
$ kubectl port-forward deployment/nginx 8080:80
Everything should be correct and the NGINX welcome page should appear when running: $ curl localhost:8080.
If we change the port-forward command like below (notice the port 1234):
$ kubectl port-forward deployment/nginx 8080:1234
You will get the following error:
Forwarding from 127.0.0.1:8080 -> 1234
Forwarding from [::1]:8080 -> 1234
Handling connection for 8080
E0303 22:37:30.698827 625081 portforward.go:400] an error occurred forwarding 8080 -> 1234: error forwarding port 1234 to pod e535674b2c8fbf66252692b083f89e40f22e48b7a29dbb98495d8a15326cd4c4, uid : exit status 1: 2021/03/23 11:44:38 socat[674028] E connect(5, AF=2 127.0.0.1:1234, 16): Connection refused
The same error also occurs with a Pod whose application hasn't bound to the port and/or isn't listening on it.
A side note!
You can simulate it by running an Ubuntu Pod and trying to curl its port 80. It will fail, as nothing listens on that port. Then exec into it, run $ apt update && apt install -y nginx, and curl again (with kubectl port-forward configured). It will work and won't produce the socat error mentioned above; see the sketch below.
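A rough sketch of that simulation (the pod name and exact commands are illustrative):
$ kubectl run ubuntu --image=ubuntu -- sleep infinity
$ kubectl port-forward pod/ubuntu 8080:80 &
$ curl localhost:8080      # fails with the socat "connection refused" error: nothing listens on port 80 yet
$ kubectl exec -ti ubuntu -- bash -c "apt update && apt install -y nginx && nginx"
$ curl localhost:8080      # now returns the NGINX welcome page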
Addressing this part of the question:
I even tried exposing port 7077 in YAML file. Did not work out.
If you mean that you've included - containerPort: 8080: this field is purely informational and does not apply any configuration on its own. You can read more about it here:
Stackoverflow.com: Answer: Why do we need a port/containerPort in a Kubernetes deployment/container definition?
(Besides that, I consider it incorrect here, as you are using port 7077.)
As for $ kubectl port-forward --address 0.0.0.0: it's a way to expose your port-forward so that it listens on all interfaces of the host machine. It could allow access to your port-forward from the LAN:
$ kubectl port-forward --help (part):
# Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod
kubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000
Additional resources:
Kubernetes.io: Docs: Tasks: Access application cluster: Port forward access to application cluster
Maybe you should use the command:
kubectl get pods
to check whether your pods are running.
In my case, I started minikube without a network connection, so I hit the same issue when using port-forward to forward a pod's port to a local machine's port.

Windows container deployed in ACS kubernetes cluster not able to be reached using the assigned Public IP?

I have deployed a Windows container which runs successfully on my local system using Docker. I moved the image to Azure Container Registry and deployed it from ACR to an Azure Container Service Kubernetes cluster. It says it has been deployed successfully, but we can't access it using the public IP assigned to it.
Dockerfile
# The `FROM` instruction specifies the base image. You are
# extending the `microsoft/aspnet` image.
FROM microsoft/aspnet
# The final instruction copies the site you published earlier into the container.
COPY . /inetpub/wwwroot
Manifest File YAML
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ewimscloudpoc-v1
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: ewimscloudpoc-v1
    spec:
      containers:
      - name: ewims
        image: acraramsam.azurecr.io/ewims:v1
        ports:
        - containerPort: 80
        args: ["-it"]
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: dev
          value: "ewimscloudpoc-v1"
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: ewimscloudpoc-v1
spec:
  loadBalancerIP: 104.40.9.103
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: ewimscloudpoc-v1
This is the YAML written for the deployment from ACR to ACS.
Command used to deploy: kubectl create -f filename.yaml
When I open the assigned IP in a browser, it says the site can't be reached.
D:\>kubectl describe po ewimscloudpoc-v1-2192714781-hg5z3
Name: ewimscloudpoc-v1-2192714781-hg5z3
Namespace: default
Node: 54d99acs9000/10.240.0.4
Start Time: Fri, 21 Dec 2018 18:42:38 +0530
Labels: app=ewimscloudpoc-v1
pod-template-hash=2192714781
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"ewimscloudpoc-v1-2192714781","uid":"170fbfeb-0522-11e9-9805-000d...
Status: Pending
IP:
Controlled By: ReplicaSet/ewimscloudpoc-v1-2192714781
Containers:
ewims:
Container ID:
Image: acraramsam.azurecr.io/ewims:v1
Image ID:
Port: 80/TCP
Host Port: 0/TCP
Args:
-it
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 500m
Requests:
cpu: 250m
Environment:
dev: ewimscloudpoc-v1
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8nmv0 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-8nmv0:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8nmv0
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=windows
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned ewimscloudpoc-v1-2192714781-hg5z3 to 54d99acs9000
Normal SuccessfulMountVolume 11m kubelet, 54d99acs9000 MountVolume.SetUp succeeded for volume "default-token-8nmv0"
Normal Pulling 1m (x7 over 11m) kubelet, 54d99acs9000 pulling image "acraramsam.azurecr.io/ewims:v1"
Warning FailedSync 7s (x56 over 11m) kubelet, 54d99acs9000 Error syncing pod
Normal BackOff 7s (x49 over 11m) kubelet, 54d99acs9000 Back-off pulling image "acraramsam.azurecr.io/ewims:v1"
Your pod fails to get created because you don't have an image pull secret for ACR:
kubectl create secret docker-registry <SECRET_NAME> --docker-server <REGISTRY_NAME>.azurecr.io --docker-email <YOUR_MAIL> --docker-username=<SERVICE_PRINCIPAL_ID> --docker-password <YOUR_PASSWORD>
https://thorsten-hans.com/how-to-use-a-private-azure-container-registry-with-kubernetes-9b86e67b93b6
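Once the secret exists, it also needs to be referenced from the pod spec (in the Deployment, under spec.template.spec), roughly like this (a sketch; the secret name is whatever you used for <SECRET_NAME> above):
spec:
  containers:
  - name: ewims
    image: acraramsam.azurecr.io/ewims:v1
  imagePullSecrets:
  - name: <SECRET_NAME>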
Adding the security rules for ACS to access the ACR repos as stated in this link - https://thorsten-hans.com/how-to-use-a-private-azure-container-registry-with-kubernetes-9b86e67b93b6 - and updating my Dockerfile as below fixed my issues:
FROM microsoft/iis:10.0.14393.206
SHELL ["powershell"]
RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
Install-WindowsFeature Web-Asp-Net45
COPY sampleapp sampleapp
RUN Remove-WebSite -Name 'Default Web Site'
RUN New-Website -Name 'sampleapp' -Port 80 \
-PhysicalPath 'c:\sampleapp' -ApplicationPool '.NET v4.5'
EXPOSE 80
CMD ["ping", "-t", "localhost"]

Kubernetes Minikube Secrets appear not mounted in Pod

I have a "Deployment" in Kubernetes which works fine in GKE, but fails in MiniKube.
I have a Pod with 2 containers:-
(1) Nginx as reverse proxy ( reads secrets and configMap volumes at /etc/tls & /etc/nginx respectively )
(2) A JVM based service listening on localhost
The problem in the minikube deployment is that the Nginx container fails to read the TLS certs which appear not to be there - i.e. the volume mount of the secrets to the Pod appears to have failed.
nginx: [emerg] BIO_new_file("/etc/tls/server.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/tls/server.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
But if I do "minikube logs" I get a large amount of seemingly "successful" tls volume mounts...
MountVolume.SetUp succeeded for volume "kubernetes.io/secret/61701667-eca7-11e6-ae16-080027187aca-scriptwriter-tls" (spec.Name: "scriptwriter-tls")
And the secrets themselves are in the cluster okay...
$ kubectl get secrets scriptwriter-tls
NAME TYPE DATA AGE
scriptwriter-tls Opaque 3 1h
So it would appear that, as far as Minikube is concerned, all is well from a secrets point of view. But on the other hand, the nginx container can't see them.
I can't log on to the container either, since it keeps terminating.
For completeness, here are the relevant sections from the Deployment YAML...
Firstly the nginx container...
- name: nginx
  image: nginx:1.7.9
  imagePullPolicy: Always
  ports:
  - containerPort: 443
  lifecycle:
    preStop:
      exec:
        command: ["/usr/sbin/nginx", "-s", "quit"]
  volumeMounts:
  - name: "nginx-scriptwriter-dev-proxf-conf"
    mountPath: "/etc/nginx/conf.d"
  - name: "scriptwriter-tls"
    mountPath: "/etc/tls"
And secondly the volumes themselves, at the Pod spec level...
volumes:
- name: "scriptwriter-tls"
  secret:
    secretName: "scriptwriter-tls"
- name: "nginx-scriptwriter-dev-proxf-conf"
  configMap:
    name: "nginx-scriptwriter-dev-proxf-conf"
    items:
    - key: "nginx-scriptwriter.conf"
      path: "nginx-scriptwriter.conf"
Any pointers or help would be greatly appreciated.
I am a first-class numpty! :-) Sometimes the error is just the error! The problem was that the secrets are created from local $HOME/.ssh/* certs... and if you generate them on different computers with different certs, then guess what?! All fixed now :-)
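For anyone hitting the same thing, recreating the secret from one consistent set of cert files on the machine you deploy from is enough. A rough sketch (the file names are illustrative; the real secret holds three items, including the server.crt that nginx complains about):
$ kubectl delete secret scriptwriter-tls
$ kubectl create secret generic scriptwriter-tls \
    --from-file=server.crt=$HOME/.ssh/server.crt \
    --from-file=server.key=$HOME/.ssh/server.key \
    --from-file=ca.crt=$HOME/.ssh/ca.crt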
