I'm attempting to encrypt our existing PVs in EKS. I've been searching the net and haven't come up with a solid solution. Our K8s version is 1.11 on EKS, and our PVs are EBS volumes. We currently have account-level EBS encryption enabled, but it wasn't enabled when these resources were created.
I attempted to stop the ASG and the node, created a snapshot and a new encrypted volume, and then ran:
kubectl patch pv pvc-xxxxxxxx-xxxxx-xxxxxxxx -p '{"spec":{"awsElasticBlockStore":{"volumeID":"aws://us-east-1b/vol-xxxxx"}}}'
but was met with:
The PersistentVolume is invalid: spec.persistentvolumesource: Forbidden: is immutable after creation
I'm looking for a solution to this issue, or validation that it is in fact not possible. Potentially relevant GitHub issue:
https://github.com/kubernetes/kubernetes/issues/59642
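From what I can tell, because spec.persistentVolumeSource is immutable, the PV object itself would have to be replaced rather than patched, e.g. by creating a new, statically bound PV/PVC pair that points at the encrypted volume and switching the workload over to the new claim. An untested sketch (names, size, and storage class are placeholders):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: encrypted-pv-01                 # placeholder name
spec:
  capacity:
    storage: 20Gi                       # match the size of the original volume
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2                 # assumed; must match the PVC below
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://us-east-1b/vol-xxxxx   # the new encrypted volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-claim-01              # placeholder; point the workload at this claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  volumeName: encrypted-pv-01           # static binding to the PV above
  resources:
    requests:
      storage: 20Gi
EOF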
Thanks in advance
I'm learning Kubernetes and trying to set up a cluster that could handle a single WordPress site with high traffic. In the examples I've read online from both Google Cloud and Kubernetes.io, they all set the accessMode to ReadWriteOnce when creating the PVCs.
Does this mean that if I scale the WordPress Deployment to use multiple replicas, they all use the same single PVC to read and write persistent data (just like they use the single DB instance)?
The Google example here only uses a single replica and a single DB instance: https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk
My question is: how do you handle persistent storage with multiple replicas?
A ReadWriteOnce volume can only be mounted read-write by a single node, so all replicas that use it will be scheduled onto that one node. This can be suboptimal.
You can set up a ReadWriteMany storage class (NFS, GlusterFS, CephFS, and others) that allows multiple nodes to mount the same volume; a minimal static example follows.
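Rather than a full provisioner/StorageClass, here is a minimal static NFS-backed PV/PVC pair that just shows the ReadWriteMany access mode; the server address, export path, names, and size are placeholders, not anything from this thread:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-shared
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10            # placeholder: an NFS server reachable from every node
    path: /exports/wordpress
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind to the static PV above instead of a provisioner
  resources:
    requests:
      storage: 10Gi
EOF
Any replica, on any node, can then mount the wordpress-shared claim read-write.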
Alternatively, you can run your application as a StatefulSet with a volumeClaimTemplate, which ensures that each replica mounts its own ReadWriteOnce volume (see the sketch below).
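A rough sketch of that option, assuming a stock WordPress image and a matching headless Service (not shown); the names, image tag, and size are placeholders:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wordpress
spec:
  serviceName: wordpress              # requires a matching headless Service
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:5.2-apache   # placeholder image/tag
        volumeMounts:
        - name: www
          mountPath: /var/www/html
  volumeClaimTemplates:               # each replica gets its own ReadWriteOnce PVC
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
EOF
Keep in mind that with per-replica volumes the replicas do not share uploads or plugin files, so media typically still needs an object-storage plugin or a ReadWriteMany volume.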
If you're on AWS (and therefore limited by EBS volumes being mountable on only a single instance at a time), another option is setting up pod affinity so the replicas schedule on the same node. Not ideal from an HA standpoint, but it is an option.
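Roughly what that pod affinity could look like; the labels, image, and claim name are placeholders made up for illustration. The rule makes every replica require a node that already runs a pod labeled app: wordpress, which in effect pins them all to one node:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: wordpress
            topologyKey: kubernetes.io/hostname   # co-locate all replicas on one node
      containers:
      - name: wordpress
        image: wordpress:5.2-apache               # placeholder image/tag
        volumeMounts:
        - name: www
          mountPath: /var/www/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: wordpress-data               # the single ReadWriteOnce, EBS-backed PVC
EOF
Since ReadWriteOnce is enforced per node rather than per pod, all the replicas on that single node can share the same EBS-backed claim.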
If after setting that up you start running into any wonky issues (e.g. being unable to log in to the admin, redirect loops, disappearing media), I wrote a guide on some of the more common issues people run into when running WordPress on Kubernetes; it might be worth a look!
OK, so this might be a basic question, but I'm new to Kubernetes and tried to install WordPress on it using Helm with the stable/wordpress chart, but I keep getting the error "pod has unbound immediate PersistentVolumeClaims (repeated 2 times)". Is this because of the requirement listed here, https://github.com/helm/charts/tree/master/stable/wordpress, for "PV provisioner support in the underlying infrastructure"? How do I enable this in my infrastructure? I have set up my cluster across three nodes on DigitalOcean and have tried searching for tutorials on this with no luck so far. Please let me know what I'm missing, thanks.
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins:
GCEPersistentDisk
AWSElasticBlockStore
AzureFile
AzureDisk
FC (Fibre Channel)
Flexvolume
Flocker
NFS
iSCSI
RBD (Ceph Block Device)
CephFS
Cinder (OpenStack block storage)
Glusterfs
VsphereVolume
Quobyte Volumes
HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
Portworx Volumes
ScaleIO Volumes
StorageOS
You can enable support for PVs or dynamic PVs using those plugins; see the Kubernetes persistent volumes documentation for details.
On DigitalOcean you can use block storage for volumes; see the DigitalOcean block storage documentation for details.
Kubernetes can be set up for dynamic volume provisioning. This would allow the chart to run to completion using its default configuration, as the PVs would be provisioned on demand.
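On a self-managed DigitalOcean cluster this usually means installing the DigitalOcean CSI driver and giving the cluster a default StorageClass backed by block storage, then letting the chart claim volumes from it. The class name, provisioner string, and chart parameters below are assumptions that depend on the driver and chart versions you install:
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make it the cluster default
provisioner: dobs.csi.digitalocean.com                    # check your CSI driver docs for the exact string
EOF

helm install --name my-wordpress stable/wordpress \
  --set persistence.storageClass=do-block-storage \
  --set mariadb.persistence.storageClass=do-block-storage   # parameter names vary; see `helm inspect values stable/wordpress`
With a default StorageClass in place, the PVCs created by the chart are provisioned automatically and the "unbound immediate PersistentVolumeClaims" error should go away.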
I'm working on a project to run a Kubernetes cluster on GCE. My goal is to run a cluster containing a WordPress site in multiple zones. I've been reading a lot of documentation, but I can't seem to find anything direct and to the point about persistent volumes and StatefulSets in a multi-zone scenario. Is this not a supported configuration? I can get the cluster up and the StatefulSets deployed, but I'm not getting the state replicated throughout the cluster. Any suggestions?
Thanks,
Darryl
Reading the docs, I see that the recommended configuration would be to create a MySQL cluster with replication: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/. This way, you would have the data properly replicated between the instances of your cluster (if you are in a multi-zone deployment you may have to create an external endpoint).
Regarding the WordPress data, my advice would be to go for an immutable deployment: https://engineering.bitnami.com/articles/why-your-next-web-service-should-be-immutable.html . This way, if you need to add a plugin or perform upgrades, you would create a new container image and redeploy it. Regarding the media library assets and immutability, I think the best option would be to use an external storage service like S3: https://wordpress.org/plugins/amazon-s3-and-cloudfront/
So, to answer the original question: I think StatefulSet synchronization is not available in K8s at the moment. Maybe using a volume provider that allows the ReadWriteMany access mode could fit your needs (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), though I am quite unsure about its stability.
According to this page https://developer.swisscom.com/pricing it is possible to define an instance count for every plan. Does that mean that if I needed additional GBs for the system I would just need to add more instances and that's it? Nothing to change in code, and I could use the same connection parameters?
To add to Fyodor Glebov's answer:
There is an easy way towards one-click upgrades: Push2Cloud.
Using custom workflows you can automate every interaction with CloudFoundry. We provide two workflows/Docker Images that migrate Redis and MongoDB instances:
migrate-redis
migrate-mongodb
The same approach would also work for MariaDB. If you are interested in implementing the workflow, open an issue on the main Push2Cloud repo.
In that graph you see apps (not services for persistent data). With apps you can add instances and memory very dynamically; apps are stateless.
Please read the twelve-factor app methodology for more info about how to develop apps for CF:
In the modern era, software is commonly delivered as a service: called
web apps, or software-as-a-service. The twelve-factor app is a
methodology for building software-as-a-service apps.
For services (with persistent data) you have to choose a plan. For example, if you use the small plan and need more connections/storage (say, the large plan), you can't upgrade with a single command.
$ cf m -s mariadb
Getting service plan information for service mariadb as admin...
OK
service plan   description                                        free or paid
small          Maximum 10 concurrent connections, 1GB storage     paid
medium         Maximum 15 concurrent connections, 8GB storage     paid
large          Maximum 100 concurrent connections, 16GB storage   paid
You need to
dump the database (use service connector plugin and mysqldump on local device)
create a new service (cf cs mariadb large ...)
restore data to new service (service connector and mysql client)
delete old service (cf ds -f...)
There is no "one-click" upgrade at the moment; a rough command-level sketch of these steps is below.
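This sketch uses the same cf ssh tunnel trick as the MongoDB guide below instead of the Service Connector plugin; the service, key, and app names are placeholders:
# dump the old database through an SSH tunnel opened via any app in the same space
cf create-service-key old-mariadb migration
cf service-key old-mariadb migration            # note host, port, database, username, password
cf ssh some-app -L 13306:<mariadb-host>:<mariadb-port>   # keep this tunnel open in a separate terminal
mysqldump -h 127.0.0.1 -P 13306 -u <username> -p <database> > dump.sql

# create the larger service and restore into it through a new tunnel
cf create-service mariadb large new-mariadb
cf create-service-key new-mariadb migration
cf service-key new-mariadb migration
cf ssh some-app -L 13307:<new-mariadb-host>:<new-mariadb-port>
mysql -h 127.0.0.1 -P 13307 -u <new-username> -p <new-database> < dump.sql

# rebind your app to the new service, then remove the old one
cf unbind-service my-app old-mariadb
cf bind-service my-app new-mariadb
cf delete-service -f old-mariadb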
Here's a step by step guide for MongoDB:
Stop apps connected to old DB (to ensure data consistency)
Create service key for old mongodb (cf create-service-key <mongodb-name> migration)
Retrieve service key: cf service-key <mongodb-name> migration
cf ssh into any app in the same space as DB: cf ssh <app-name> -L 13000:<mongodb-host>:<mongodb-port> (host and port from service key)
The credentials for the following command can all be found in the service key you retrieved in step 3. Open up a new terminal window and run mongodump --host 127.0.0.1:13000 --authenticationDatabase <mongodb-database> --username <mongodb-username> --password <mongodb-password> --db <mongodb-database> --out=dbbackup/dump
Create new database with cf create-service (list available plans with cf m -s mongodb)
Create service key for new db and retrieve it
Close tunnel from above and create a new one with host and port from the new db
Run mongorestore --host 127.0.0.1:13000 --authenticationDatabase <new-mongodb-database> --username <new-mongodb-username> --password <new-mongodb-password> --db <new-mongodb-database> <path-to-dump-file>
In relation to my earlier, similar SO question, I tried using snow/snowfall on AWS for parallel computing.
What I did was:
In the sfInit() function, I provided the public DNS to the socketHosts parameter, like so:
sfInit(parallel=TRUE,socketHosts =list("ec2-00-00-00-000.compute-1.amazonaws.com"))
The error returned was Permission denied (publickey)
I then followed the instructions (I presume correctly!) on http://www.imbi.uni-freiburg.de/parallel/ in the 'Passwordless Secure Shell (SSH) login' section
I just cat'ed the contents of the .pem file that I created on AWS into ~/.ssh/authorized_keys on the AWS instance I want to connect to from my master AWS instance, and did the same for the master AWS instance as well.
Is there anything I am missing?
I would be very grateful if users can share their experiences in the use of snow on AWS.
Thank you very much for your suggestions.
UPDATE:
I just wanted to share the solution I found to my specific problem:
I used StarCluster to set up my AWS cluster.
Installed the snowfall package on all the nodes of the cluster.
From the master node, I issued the following commands:
hostslist <- list("ec2-xxx-xx-xxx-xxx.compute-1.amazonaws.com","ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com")  # public DNS names of the worker nodes
sfInit(parallel=TRUE, cpus=2, type="SOCK", socketHosts=hostslist)  # start a socket cluster on those hosts
l <- sfLapply(1:2, function(x) system("ifconfig", intern=TRUE))    # run ifconfig on each worker
lapply(l, function(x) x[2])                                        # inspect the returned IP lines to see which hosts did the work
sfStop()                                                           # shut the cluster down
The IP information confirmed that the AWS nodes were being utilized.
It doesn't look that bad, but the .pem handling is wrong. It is sometimes not that simple, and many people have to fight with these issues. You can find a lot of tips in this post:
https://forums.aws.amazon.com/message.jspa?messageID=241341
Or check Google for other posts.
In my experience, most people have problems with these steps:
Can you log onto the machines via SSH (ssh ec2-00-00-00-000.compute-1.amazonaws.com)? Use the public DNS, not the public IP, to connect; a key-setup sketch follows this list.
Check your security groups in AWS to make sure port 22 is open on all machines!
If you plan to start more than 10 worker machines, you should work on an MPI installation on your machines (much better performance!).
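For the SSH point above, a minimal key setup run from the master instance might look like this; it assumes an Ubuntu AMI (login user ubuntu) and that mykey.pem is your AWS key pair, so adjust both:
# on the master: generate a key pair for cluster-internal logins (if none exists yet)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# append the PUBLIC key (not the .pem private key) to each worker's authorized_keys
cat ~/.ssh/id_rsa.pub | ssh -i mykey.pem ubuntu@ec2-00-00-00-000.compute-1.amazonaws.com \
  'cat >> ~/.ssh/authorized_keys'

# verify that passwordless login works before calling sfInit()
ssh ubuntu@ec2-00-00-00-000.compute-1.amazonaws.com hostname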
Markus from cloudnumbers.com :-)
I believe @Anatoliy is correct: you're using an X.509 certificate. For the precise steps to take to add the SSH keys, look at the "Types of credentials" section of the EC2 Starters Guide.
To upload your own SSH keys, take a look at this page from Alestic.
It is a little confusing at first, but you'll want to keep clear which are your access keys, your certificates, and your key pairs, which may appear in text files with DSA or RSA headers.