kubernetes application throws DatastoreException, Missing or insufficient permissions. Service key file provided - google-cloud-datastore

I am deploying a Java application on Google Kubernetes Engine. The application starts correctly but fails when it tries to request data. The exception is "DatastoreException: Missing or insufficient permissions". I created a service account with the "Owner" role and provided the service account key to Kubernetes. Here is how I apply the Kubernetes deployment:
# delete old secret
kubectl delete secret google-key --ignore-not-found
# file with key
kubectl create secret generic google-key --from-file=key.json
kubectl apply -f prod-kubernetes.yml
Here is the deployment config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: user
  labels:
    app: user
spec:
  type: NodePort
  ports:
  - port: 8000
    name: user
    targetPort: 8000
    nodePort: 32756
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: userdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: google-key
      containers:
      - name: usercontainer
        image: gcr.io/proj/user:v1
        imagePullPolicy: Always
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 8000
I wonder why it is not working. I have used this config in a previous deployment and had success.
UPD: I made sure that /var/secrets/google/key.json exists in the pod: I log Files.exists(Paths.get(System.getenv("GOOGLE_APPLICATION_CREDENTIALS"))). I also print the content of this file; it does not appear to be corrupted.

Solved: the cause was an incorrectly named env variable, GOOGLE_CLOUD_PROJECT.
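For anyone hitting the same symptom, a minimal sketch of wiring that variable in next to the credentials in the container spec above; the project ID proj is an assumption, read off the image path gcr.io/proj/user:v1:
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        # assumed project ID, inferred from the image path gcr.io/proj/user:v1
        - name: GOOGLE_CLOUD_PROJECT
          value: proj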

Related

How to add a VirtualServer on top of a service in Kubernetes?

I have the following deployment and service config files for deploying a service to Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: write-your-own-name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: write-your-own-name
  template:
    metadata:
      labels:
        run: write-your-own-name
    spec:
      containers:
      - name: write-your-own-name-container
        image: gcr.io/cool-adviser-345716/automl:manual__2022-10-10T07_54_04_00_00
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: write-your-own-name
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: write-your-own-name
  type: LoadBalancer
K8s exposes the service on the endpoint (let's say) http://12.239.21.88
I then created a namespace nginx-deploy and installed the NGINX ingress controller using the command:
helm install controller-free nginx-stable/nginx-ingress \
--set controller.ingressClass=nginx-deployment \
--set controller.service.type=NodePort \
--set controller.service.httpPort.nodePort=30040 \
--set controller.enablePreviewPolicies=true \
--namespace nginx-deploy
I then added a rate limit policy
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 1r/s
    key: ${binary_remote_addr}
    zoneSize: 10M
And then finally a VirtualServer
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: write-your-name-vs
spec:
  ingressClassName: nginx-deployment
  host: 12.239.21.88
  policies:
  - name: rate-limit-policy
  upstreams:
  - name: write-your-own-name
    service: write-your-own-name
    port: 80
  routes:
  - path: /
    action:
      pass: write-your-own-name
  - path: /hehe
    action:
      redirect:
        url: http://www.nginx.com
        code: 301
Before adding the VirtualServer, I could go to 12.239.21.88:80 and access my service, and I can still do that after adding it. But when I try accessing 12.239.21.88:80/hehe, I get a "detail not found" error.
I am guessing this is because the VirtualServer is not sitting in front of the service. How do I expose my service through a VirtualServer? Or alternatively: I want rate limiting on my service; how do I achieve this?
I used the following tutorial to get rate-limiting to work:
NGINX Tutorial: Protect Kubernetes APIs with Rate Limiting
I am sorry if the question is too long, but I have been trying to figure out rate limiting for a while and can't get it to work. Thanks in advance.
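No answer was recorded for this question, but for what it's worth, one likely culprit: VirtualServer resources are served by the NGINX Ingress Controller pods, not by the application's LoadBalancer, so requests sent to 12.239.21.88 never pass through NGINX (or its rate-limit policy) at all. Traffic has to enter through the controller's own service (the NodePort 30040 installed above), and host is matched against the request's Host header, so it normally carries a DNS name rather than an IP. A hedged sketch with a hypothetical hostname:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: write-your-name-vs
spec:
  ingressClassName: nginx-deployment
  # hypothetical DNS name resolving to a cluster node; an IP address
  # here will generally not match the incoming Host header
  host: automl.example.com
  policies:
  - name: rate-limit-policy
  upstreams:
  - name: write-your-own-name
    service: write-your-own-name
    port: 80
  routes:
  - path: /
    action:
      pass: write-your-own-name
Requests would then be sent to automl.example.com:30040, so they traverse the controller and its policy, rather than to the service's external IP.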

Custom nginx.conf can be applied through a ConfigMap to the Nginx pod, but can we achieve this through Secrets in Kubernetes for more security?

I was able to accomplish the setup of a ConfigMap for a custom nginx.conf, mounted into the Nginx pod, and this works well.
My requirement is to make the credentials inside nginx.conf more secure by using a Secret.
I tried base64-encoding the nginx.conf file and putting it into the Secret YAML, but applying the deployment file throws an error.
Kindly guide me with some insights on whether this can be achieved with a Secret, as the issue lies with the Secret's data portion.
Secret file:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: nginx-secret
data:
  nginx.conf: |
    *************************************************
Below is the error from running the nginx deployment file:
error validating data: ValidationError(Deployment.spec.template.spec.volumes[0].secret): unknown field "name" in io.k8s.api.core.v1.SecretVolumeSource; if you choose to ignore these errors, turn validation off with --validate=false
Secrets and ConfigMaps are essentially identical in how they are consumed. Both can be mounted as volumes in your pods; if you want to use a Secret instead of a ConfigMap, replace:
volumes:
- name: nginx-config
  configMap:
    name: nginx-config
With:
volumes:
- name: nginx-config
  secret:
    secretName: nginx-secret
But note that this does not get you any additional security! Values in a Secret are merely base64-encoded, not encrypted, and can be read by anyone with the necessary permissions to read the Secret, just like a ConfigMap.
A complete Deployment might look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: docker.io/nginx:mainline
        imagePullPolicy: Always
        name: nginx
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: nginx-config
      volumes:
      - name: nginx-config
        secret:
          secretName: nginx-secret
This will mount keys from the secret (such as default.conf) in /etc/nginx/conf.d. The contents will not be base64 encoded.
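As an aside, and to avoid the base64 step that tripped up the original attempt: a Secret also accepts a stringData field, which takes plain text and is encoded by the API server for you. A minimal sketch, with a placeholder server block standing in for the redacted config:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: nginx-secret
stringData:
  # plain text; the API server base64-encodes it into .data on admission
  default.conf: |
    # placeholder config; substitute the real (redacted) contents
    server {
      listen 80;
      location / {
        return 200 'ok';
      }
    }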

Kubernetes Mariadb service cannot be accessed

I wanted to make WordPress run on Kubernetes, but WordPress can't use the host from the mariadb service. This is my script:
---
apiVersion: v1
kind: Service
metadata:
  name: db-wordpress
  labels:
    app: mariadb-database
spec:
  selector:
    app: mariadb-database
  ports:
  - port: 3306
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb-database
spec:
  selector:
    matchLabels:
      app: mariadb-database
  template:
    metadata:
      labels:
        app: mariadb-database
    spec:
      containers:
      - name: mariadb-database
        image: darywinata/mariadb:1.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-secret
              key: password
        - name: MYSQL_USER
          value: blibli
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-secret
              key: password
        - name: MYSQL_DATABASE
          value: wpdb
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: servicetype
                operator: In
                values:
                - database-mariadb
I have already been fighting this error for a week; can somebody help me with this?
Note: inside the docker container, port 3306 is not listening. I don't know whether that is wrong or not.
Hi there and welcome to Stack Overflow.
There are two issues with your setup. First of all, I tried running your mariadb docker image locally and, compared to the official mysql image, it is not listening on any port. Without the mysql process listening on a port, you will not be able to connect to it.
Also, you might want to consider a standard internal Service instead of one with clusterIP: None, which is called a headless service and is usually used for StatefulSets rather than Deployments. More information can be found in the official documentation.
So, in order to connect from your application to your pod:
Fix the problem with your custom mysql image so it actually listens on port 3306 (or whatever you have configured in your image), and switch to a regular ClusterIP service, as sketched below.
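A minimal sketch of the non-headless variant of the asker's Service; the names are unchanged and only clusterIP: None is dropped, so kube-proxy assigns it a virtual IP:
apiVersion: v1
kind: Service
metadata:
  name: db-wordpress
  labels:
    app: mariadb-database
spec:
  selector:
    app: mariadb-database
  ports:
  # route service port 3306 to the container's 3306
  - port: 3306
    targetPort: 3306
WordPress would then reach the database at db-wordpress:3306, once the container actually listens on that port.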

kubernetes persistent volume for nginx not showing the default index.html file

I am testing out something with PVs and wanted to get some clarification. We have an 18-node cluster (using Docker EE) and have mounted an NFS share on each node to be used for k8s persistent storage. I created a PV (using hostPath) to bind to my nginx deployment (mounting /usr/share/nginx/html on the PV).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test-namespace-pv
  namespace: test-namespace
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/mynginx/demo"
And the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-namespace-pvc
  namespace: test-namespace
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Deployment File:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  selector:
    matchLabels:
      run: mynginx-apps
  replicas: 2
  template:
    metadata:
      labels:
        run: mynginx-apps
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: nfs-test-namespace-pvc
      containers:
      - name: mynginx
        image: dtr.midev.spglobal.com/spgmi/base:mynginx-v1
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
So I assume that when my pod starts, the default index.html from the nginx image should be available at /usr/share/nginx/html within my pod, and it should also be copied/available at /nfs_share/docker/mynginx/demo.
However, I am not seeing any file there, and when I expose this deployment and access the service it gives me a 403 error because the index file is not available. When I instead create an html file, either from inside the pod or from the node on the NFS share mounted as the PV, it works as expected.
Is my assumption that the default file gets copied to the hostPath correct? Or am I missing something?
Your /nfs_share/docker/mynginx/demo content will not appear in the pod by itself; an explanation is available in the documentation example below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The configuration file specifies that the volume is at /mnt/data on the cluster’s Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node. It defines the StorageClass name manual for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
You do not see the PV in your pod directly: the PV is bound by a PVC, and it is the PVC that gets mounted inside a pod.
You can read the whole article, Configure a Pod to Use a PersistentVolume for Storage, which should answer all the questions.
Also note that the "/mnt/data" directory must actually be created on the node where your pod is running.

Kubernetes deployment fails

I have Pod and Service YAML files on my system. I want to run these two using kubectl create -f <file> and connect from an outside browser to test connectivity. Here is what I have followed.
My Pod:
apiVersion: v1
kind: Pod
metadata:
  name: client-nginx
  labels:
    component: web
spec:
  containers:
  - name: client
    image: nginx
    ports:
    - containerPort: 3000
My Service file:
apiVersion: v1
kind: Service
metadata:
  name: client-nginx-port
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 3000
    nodePort: 31616
  selector:
    component: web
I used kubectl create -f my_pod.yaml, and kubectl get pods shows my pod client-nginx.
Then kubectl create -f my_service.yaml; no errors here, and it shows all the services.
When I try to curl the service, it gives:
curl: (7) Failed to connect to 192.168.0.10 port 31616: Connection refused.
kubectl get deployments doesn't show my pod. Do I have to deploy it? I am a bit confused. If I follow the instructions given here, I can deploy nginx successfully and access it from outside browsers.
I used the instructions given here to test this.
Try with this service:
apiVersion: v1
kind: Service
metadata:
  name: client-nginx-port
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 80
    nodePort: 31616
  selector:
    component: web
You missed giving the selector name in the pod YAML; it is picked up by the service, where you have specified the selector as component.
Use this in the pod YAML:
apiVersion: v1
kind: Pod
metadata:
  name: client-nginx
  labels:
    component: web
spec:
  selector:
    component: nginx
  containers:
  - name: client
    image: nginx
    ports:
    - containerPort: 3000
Useful links:
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/