According to this page https://developer.swisscom.com/pricing it is possible to define the instance count for every plan. Does this mean that if I needed additional GBs for the system, I would just need to add more instances and that's it? Nothing to change in the code, and I could use the same connection parameters?
To add to Fyodor Glebov's answer:
There is an easy way towards one-click upgrades: Push2Cloud.
Using custom workflows you can automate every interaction with CloudFoundry. We provide two workflows/Docker Images that migrate Redis and MongoDB instances:
migrate-redis
migrate-mongodb
The same approach would also work for MariaDB. If you are interested in implementing the workflow, open an issue on the main Push2Cloud repo.
In this graph you see apps (not services for persistent data). With apps you can add instances and memory very dynamically, because apps are stateless.
Please read the twelve-factor app methodology for more info about how to develop apps for CF.
In the modern era, software is commonly delivered as a service: called web apps, or software-as-a-service. The twelve-factor app is a methodology for building software-as-a-service apps.
For services (with persistent data) you have to choose a plan. For example, if you use the small plan and need more connections/storage (say, large), you can't upgrade with one command.
$ cf m -s mariadb
Getting service plan information for service mariadb as admin...
OK
service plan   description                                        free or paid
small          Maximum 10 concurrent connections, 1GB storage     paid
medium         Maximum 15 concurrent connections, 8GB storage     paid
large          Maximum 100 concurrent connections, 16GB storage   paid
You need to:
dump the database (use the service connector plugin and mysqldump on a local device)
create a new service (cf cs mariadb large ...)
restore the data to the new service (service connector and mysql client)
delete the old service (cf ds -f ...)
There is no "one-click" upgrade at the moment.
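As a minimal sketch, those four steps can also be done with a cf ssh tunnel (the same technique as the MongoDB guide below) instead of the service connector plugin. The service and app names, the local port 13306, and the credential placeholders are assumptions; the real values come from the service keys:

# create and read a service key for the old instance
$ cf create-service-key my-mariadb-small migration
$ cf service-key my-mariadb-small migration

# tunnel to the old instance through any app in the same space
$ cf ssh <app-name> -L 13306:<mariadb-host>:<mariadb-port>

# dump the old database (new terminal; credentials from the service key)
$ mysqldump -h 127.0.0.1 -P 13306 -u <user> -p<password> <database> > dump.sql

# create the larger instance, then tunnel to it using its own service key
$ cf cs mariadb large my-mariadb-large
$ cf ssh <app-name> -L 13306:<new-mariadb-host>:<new-mariadb-port>

# restore the dump into the new instance
$ mysql -h 127.0.0.1 -P 13306 -u <new-user> -p<new-password> <new-database> < dump.sql

# rebind your apps to the new service, then delete the old one
$ cf ds -f my-mariadb-small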
Here's a step by step guide for MongoDB:
1. Stop apps connected to the old DB (to ensure data consistency)
2. Create a service key for the old MongoDB (cf create-service-key <mongodb-name> migration)
3. Retrieve the service key: cf service-key <mongodb-name> migration
4. cf ssh into any app in the same space as the DB: cf ssh <app-name> -L 13000:<mongodb-host>:<mongodb-port> (host and port from the service key)
5. Open a new terminal window and run mongodump --host 127.0.0.1:13000 --authenticationDatabase <mongodb-database> --username <mongodb-username> --password <mongodb-password> --db <mongodb-database> --out=dbbackup/dump (the credentials for this command can all be found in the service key you retrieved in step 3)
6. Create the new database with cf create-service (list available plans with cf m -s mongodb)
7. Create a service key for the new DB and retrieve it
8. Close the tunnel from above and create a new one with the host and port from the new DB
9. Run mongorestore --host 127.0.0.1:13000 --authenticationDatabase <new-mongodb-database> --username <new-mongodb-username> --password <new-mongodb-password> --db <new-mongodb-database> <path-to-dump-file>
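Put together, the dump-and-restore pair from steps 4-9 looks like this (a sketch only; all placeholders come from the two service keys):

# tunnel to the old instance and dump it
$ cf ssh <app-name> -L 13000:<mongodb-host>:<mongodb-port>
$ mongodump --host 127.0.0.1:13000 --authenticationDatabase <mongodb-database> --username <mongodb-username> --password <mongodb-password> --db <mongodb-database> --out=dbbackup/dump

# re-point the tunnel at the new instance and restore
$ cf ssh <app-name> -L 13000:<new-mongodb-host>:<new-mongodb-port>
$ mongorestore --host 127.0.0.1:13000 --authenticationDatabase <new-mongodb-database> --username <new-mongodb-username> --password <new-mongodb-password> --db <new-mongodb-database> dbbackup/dump/<mongodb-database>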
Related
I am trying to enable SSE with a Customer-Managed CMK in my production Redshift cluster to follow certain security protocols.
For POC purposes, I spun up a 1 Node dc2.large Redshift cluster and following this doc, I was able to enable SSE.
However, my question is, does enabling SSE encrypt the existing data in the cluster? If not, what steps should be taken?
Overall what are the downsides, if any, of enabling encryption at rest in a production Redshift cluster and what are the best practices?
There is no need to change anything in your code or existing pipelines/processes. This is disk encryption; it has nothing to do with your database connections or code.
To learn more about the process, read these links:
https://aws.amazon.com/about-aws/whats-new/2018/10/encrypt-amazon-redshift-1-click/
https://docs.aws.amazon.com/redshift/latest/mgmt/changing-cluster-encryption.html
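For reference, a sketch of starting the encryption migration from the AWS CLI; the cluster identifier and key ARN are placeholders. Redshift migrates the existing data to a new encrypted cluster in the background, so the data already stored in the cluster does get encrypted:

# turn on encryption with a customer-managed KMS key
$ aws redshift modify-cluster --cluster-identifier my-redshift-cluster --encrypted --kms-key-id arn:aws:kms:us-east-1:123456789012:key/<key-id>

# watch the cluster status while the migration runs
$ aws redshift describe-clusters --cluster-identifier my-redshift-cluster --query 'Clusters[0].ClusterStatus'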
OK, so this might be a basic question, but I'm new to Kubernetes and tried to install WordPress onto it using Helm with the stable/wordpress chart. I keep getting the error "pod has unbound immediate PersistentVolumeClaims (repeated 2 times)". Is this because of the requirement listed here https://github.com/helm/charts/tree/master/stable/wordpress, "PV provisioner support in the underlying infrastructure"? How do I enable this in my infrastructure? I have set up my cluster across three nodes on DigitalOcean, and I've tried searching for tutorials on this with no luck until now. Please let me know what I'm missing, thanks.
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins:
GCEPersistentDisk
AWSElasticBlockStore
AzureFile
AzureDisk
FC (Fibre Channel)
Flexvolume
Flocker
NFS
iSCSI
RBD (Ceph Block Device)
CephFS
Cinder (OpenStack block storage)
Glusterfs
VsphereVolume
Quobyte Volumes
HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
Portworx Volumes
ScaleIO Volumes
StorageOS
You can enable support for PVs or dynamic PVs using those plugins. See the Kubernetes persistent volumes documentation for a detailed reference.
On DigitalOcean you can use block storage for volumes; see the DigitalOcean block storage documentation for details.
Kubernetes can be set up for Dynamic Volume Provisioning. This would allow the chart to run to completion with the default configuration, as the PVs would be provisioned on demand.
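A sketch of what that looks like on DigitalOcean, assuming the DigitalOcean CSI driver is installed (DigitalOcean managed Kubernetes ships it and exposes a do-block-storage storage class; verify the chart's persistence parameter names against its README):

# check that a storage class with a dynamic provisioner exists
$ kubectl get storageclass

# install the chart, pointing its PVCs at that storage class
$ helm install --name my-wordpress stable/wordpress --set persistence.storageClass=do-block-storage --set mariadb.persistence.storageClass=do-block-storage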
I am trying to understand the clustering concept of WSO2. My basic understanding of a cluster is that there are 2 or more servers with the same function, with a VIP or load balancer in front. So I would like to know which of the WSO2 components can be clustered. I am trying to achieve the configuration mentioned in this diagram.
Image of Config I am trying to achieve:
Is this configuration achievable or not?
Can we cluster 2 Publisher nodes and 2 Store nodes or not?
And how do we cluster the Key Manager? Do we use the same settings as for the Identity Manager?
Should we use a port offset when running 2 components on the same server? And if yes, how do we make sure that the components are using the ports specified by the port offset?
Should we create a separate external database for each Carbon DB datasource entry in the master-datasources.xml file, or can we keep using the local H2 database for this? I have created the following databases; let me know whether I am correct in doing this. wso2 databases I created:
I made several copies of the WSO2 binary files (as shown in the image) and copied them to the servers where I want to run 2 components on the same server. Is this the correct way of running 2 components on the same server?
For load balancing, which components should we load balance and which ports should be used?
That configuration is achievable, but the Analytics servers are best run on separate machines as they utilize a lot of resources.
Yes, you can.
Yes, you need a port offset. If you're on Linux, you can use the netstat -pln command and filter by the server's PID.
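For illustration, a minimal sketch of starting a second Carbon server on the same machine with an offset and verifying its ports (the -DportOffset system property is the standard Carbon mechanism; the path and PID are placeholders):

# start the second component with all ports shifted by 1 (9443 -> 9444, 8243 -> 8244, ...)
$ ./bin/wso2server.sh -DportOffset=1

# confirm which ports that JVM is actually listening on
$ netstat -pln | grep <server-pid>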
Every server needs a local database; the other databases are shared, as described in https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0
Having copies is one way of doing that. Another way is letting a single server act as multiple components; for example, you can run the Publisher and Store components together. You can see the recommended patterns in https://docs.wso2.com/display/AM210/Deployment+Patterns.
Except for the Traffic Manager, you can load balance every other component; for the Traffic Manager, use failover instead. Here are the ports you need to load balance:
Servlet port - 9443(https)/9763 (For admin console and admin services)
NIO port - 8243(https)/8280 (For API calls at gateway)
If I don't use a simulator or DevStack but a real production cluster, how many hosts (or nodes) would I need?
CloudStack: 2 (management servers and DBs) + 2 (hypervisors) + 1 storage (if you do not have a storage device, you may need a server for NFS or iSCSI).
Total: 5 servers for a minimal environment with load-balance and HA.
OpenStack: it depends on the components you have chosen. All components can be installed on a single server, but you need one more server for load balancing and HA.
Total: 2 servers for a minimal environment with load-balance and HA.
When planning a cloud platform, the total resource = ManagementServer*2 + Hypervisor*N + Storage(Server Or Storage Device)
The number of hypervisors (N) is determined by the total CPU and memory of the VMs you plan to run.
Storage is the total volume capacity you want to allocate for all VMs.
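As a hypothetical worked example of that formula (all numbers are made up, and overcommit ratios vary by workload):

# 100 VMs x 2 vCPUs = 200 vCPUs; one 32-core host with 4:1 CPU
# overcommit serves 128 vCPUs, so you need 2 hypervisors (rounded up)
$ echo $(( (100*2 + 32*4 - 1) / (32*4) ))
2

# storage: 100 VMs x 50 GB volumes = 5000 GB of volume capacity
$ echo $(( 100*50 ))GB
5000GB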
For CloudStack, unlike OpenStack, you can use just one physical machine for the installation of both the management server and the agent (for execution of VMs), and yes, the database and NFS shares can be set up on the same machine too (assuming you need it for testing purposes).
You can follow the quick installation guide of Cloudstack here: http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.11/qig.html
I have personally installed using the above documentation and can assure you the above works fine with CentOS 7.4 too. For more complex setup and architecture you can find more documentation here: http://docs.cloudstack.apache.org. Just be sure to have some free IPs available ;)
Can anyone help me with this ...
I have a 3-node SQL Server cluster, let's say N1, N2 and N3. The name of the three-node cluster is SQLCLUS.
The application connects to database using the name SQLCLUS in connections strings name.
The application uses SQL Server session management. So I remote desktopped to N1 (which is active, while N2 and N3 are passive) and from the location
C:\Windows\Microsoft.NET\Framework64\v2.0.50727
I executed the following command
aspnet_regsql.exe -S SQLCLUS -E -ssadd -sstype p
The command executed successfully. I could then log in to SQLCLUS and see the ASPState database created with 2 tables.
I then tested the application which uses the SQL Server session state, and it also works fine.
Now my question is ...
If there is a failover to node N2 or N3, will my application still work? I did not execute the above command (aspnet_regsql.exe) from N2.
Should I execute the command aspnet_regsql.exe -S SQLCLUS -E -ssadd -sstype p on N2 and N3 too?
What changes happen in SQL Server after executing the above command? I mean, is there any kind of service or settings change that can be seen?
Greatly appreciate any inputs regarding this....
Thanks in advance...
Sql Server failover clustering can be conceptually explained as a smoke-and-mirrors DNS hack. Thinking of clustering in familiar terms makes you realize how simple a technology it really is.
Simplified description of Sql Server Failover Clustering
Imagine you have two computers: SrvA and SrvB
You plug an external HD (F:) into SrvA, install Sql Server, and configure it to store its database files on F:\ (the executable is under C:\Program Files).
Unplug the HD, plug it into SrvB, install Sql Server, and configure it to store its database files on F:\ in the exact same location.
Now, you create a DNS alias "MyDbServer" that points to SrvA, plug the external HD back into SrvA, and start Sql Server.
All is good until one day when the power supply fails on SrvA and the machine goes down.
To recover from this disaster you do the following:
Plug the external drive into SrvB
Start sql server on SrvB
Tweak the DNS entry for "MyDbServer" to point to SrvB.
You're now up and going on SrvB, and your client applications are blissfully unaware that SrvA failed because they only ever connected using the name "MyDbServer".
Failover Clustering in the Reality
SrvA and SrvB are the cluster nodes.
The External HD is Shared SAN Storage.
The three step recovery process is what happens during a cluster failover and is managed automatically by the Windows Failover Clustering service.
What kinds of tasks need to be run on each Sql Node?
99.99% of the tasks that you perform in Sql Server will be stored in the database files on shared storage and therefore will move between nodes during a failover. This includes everything from creating logins, creating databases, INSERTS/UPDATES/DELETES on tables, Sql Agent jobs and just about everything else you can think of. This also includes all of the tasks that aspnet_regsql command performs (it does nothing special from a database perspective).
The remaining .01% of things that would have to be done on each individual node (because they aren't stored on shared storage) are things like applying service packs (remember that the executable is on c:), certain registry settings (some Sql Server registry settings are "checkpointed" and failover, some aren't), registering 3rd party COM dll's (no one really does this anymore) and changing the service account that Sql Server runs under.
Try it for yourself
If you want to verify that aspnet_regsql doesn't need to be run on each node, try failing over and verify that your app still works. Note that if you do run aspnet_regsql on each node and reference the clustered name (SQLCLUS), you will effectively be overwriting the same database; if it doesn't error out, it will just wipe out your existing data.
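A sketch of such a failover test using the Windows Failover Clustering PowerShell cmdlets; the role name "SQL Server (MSSQLSERVER)" is the default instance's and may differ in your cluster:

# see which node currently owns the SQL Server role
Get-ClusterGroup "SQL Server (MSSQLSERVER)"

# move the role to another node to simulate a failover
Move-ClusterGroup "SQL Server (MSSQLSERVER)" -Node N2

# then hit your app: session state in ASPState should still load, because
# the database files moved with the shared storage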