GoldenGate Microservices and yum update

I am in the process of using Ansible to run yum updates on my servers. One server I am in charge of has GG Microservices 19.1 installed on it. Is it best practice to just shut down the Service Manager, thereby stopping all replication, and run the yum update, or do I need to stop each replication process first and then stop the Service Manager before the yum update? I am just trying to see what everyone else is doing. Thanks in advance.

Yum is not supported for GoldenGate itself. You have to use OPatch to patch GG Microservices.
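For what it's worth, here is a rough, hedged sketch of the order of operations often used before OS maintenance; the deployment name, process names, port, and systemd unit name below are placeholders, so verify the exact commands against the Oracle documentation for 19.1:

    # Hedged sketch only -- process names, ports, and the systemd unit are placeholders.
    # 1. Gracefully stop replication first, from Admin Client, e.g.:
    #      CONNECT https://gg-host:9001 DEPLOYMENT mydeploy AS oggadmin PASSWORD <password>
    #      STOP REPLICAT REP1
    #      STOP EXTRACT EXT1

    # 2. Stop Service Manager (assumed here to be registered as a systemd service).
    sudo systemctl stop OracleGoldenGate

    # 3. Patch the operating system with yum...
    sudo yum update -y
    # ...but patch the GoldenGate home itself with OPatch, not yum, e.g.:
    # cd /path/to/unzipped/patch && $OGG_HOME/OPatch/opatch apply

    # 4. Bring everything back up.
    sudo systemctl start OracleGoldenGate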

Related

Install kubernetes cluster in devstack

I have installed devstack on my server as per these steps, and I was looking for some updated instructions to install a Kubernetes cluster on it. Even though my question is about Kubernetes, I would like to clarify a few points.
Is OpenStack open source, or is the open-source version the one called Devstack? I ask because I was trying to install a production-ready environment, but everywhere I see examples that install Devstack, or guides that are a few years old.
How do I install OpenStack, not Devstack?
And finally, can someone please help me with instructions to install Kubernetes on Devstack, as that's the one I could install for now? I guess the instructions would be almost similar.
I know there are existing posts, but almost all of them are a few years old, so any help would be greatly appreciated.
Hoping that it is allowed to reference my own work: I wrote a short series of articles about Kubernetes on Devstack, both Kubernetes from scratch and using OpenStack Magnum.
The document that you used to install OpenStack describes not Devstack, but Microstack.
OpenStack is 100% open-source, yes. See https://www.openstack.org/.
Devstack is one of the many ways to deploy an OpenStack cloud. Its original purpose is to set up a test environment for OpenStack developers, and not so much to be user-friendly, but it is often used for training or proof-of-concept.
There are many other deployment methods: Microstack (easy but not very flexible), Packstack (requires RHEL or CentOS), TripleO (also requires RHEL or CentOS and somewhat more powerful hardware), Kolla-Ansible, and the best method for learners in my opinion: manual setup. This list is far from complete.
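To give a concrete flavour of the Magnum route mentioned above, here is a rough sketch of enabling the Heat and Magnum Devstack plugins and then creating a Kubernetes cluster on top; the plugin URLs and cluster-template options are assumptions, so check the Magnum documentation for your release:

    # Sketch only -- plugin URLs and cluster-template options are placeholders.
    cat >> devstack/local.conf <<'EOF'
    [[local|localrc]]
    enable_plugin heat   https://opendev.org/openstack/heat
    enable_plugin magnum https://opendev.org/openstack/magnum
    EOF
    cd devstack && ./stack.sh

    # Once the cloud is up, a Kubernetes cluster is created along these lines
    # (flags for image, flavor, and network omitted here):
    # openstack coe cluster template create k8s-template --coe kubernetes ...
    # openstack coe cluster create my-k8s --cluster-template k8s-template --node-count 1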

What is the ideal environment to run Apache Airflow on?

I am currently running Airflow through Ubuntu WSL on my PC, which is working great. However, I am setting up a pipeline which will need to be running constantly (24/7), so I am looking for ideas and recommendations on what to run Airflow on. Obviously I do not want to have my own machine on all the time.
Surprisingly, I cannot find much information on this! It seems it is not discussed at length...
It depends on your workload.
If you have few tasks to run, you can just create a VM on any cloud provider (GCP, AWS, Azure, etc.) and install Airflow on it to run 24x7.
If your workload is high, you can use Kubernetes (GKE, EKS, etc.) and install Airflow on it.
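For illustration, a minimal sketch of both options; the Airflow version, constraints URL, and Helm release/namespace names below are assumptions, so adjust them per the official Airflow docs:

    # Option 1: a small always-on VM (pip install into a virtualenv).
    python3 -m venv airflow-env && . airflow-env/bin/activate
    pip install "apache-airflow==2.7.3" \
      --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.7.3/constraints-3.10.txt"
    airflow standalone   # quick all-in-one; for real 24/7 use, run the scheduler/webserver as services

    # Option 2: Kubernetes via the official Helm chart.
    helm repo add apache-airflow https://airflow.apache.org
    helm upgrade --install airflow apache-airflow/airflow --namespace airflow --create-namespace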

How to install OpenStack Mitaka + OVS Bridge + DVR

All:
I want to install OpenStack Mitaka + OVS bridge + DVR on CentOS 7, and I don't want to use any automated tools other than the RDO repositories. However, I can't find any installation guides for this purpose, which confuses me very much!
Can anyone give me some help here? Thanks in advance!
I know you don't want to use automated tools, but I have automated installers written in basic shell (sh), covering both CentOS 7 using the RDO repos and Ubuntu using the Ubuntu Cloud Archive.
The reason I'm using simple shell instead of Puppet or Ansible is that I also use them to teach people how to "manually install" OpenStack. You can see my installers on my site at GitHub, and specifically the one you need for Mitaka (I'll publish the Newton-based one in about two weeks):
https://github.com/tigerlinux/openstack-mitaka-installer-centos7
Note that because my installer is completely modular, with each module taking care of a single component, it can make it easier for you to teach yourself how to manually install OpenStack.
Also, I have some I.T. recipes at the following link, with a section of tips and tricks for OpenStack:
http://tigerlinux.github.io/
It's not much, but I hope it helps you!
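As a hedged illustration of the RDO-based starting point on CentOS 7 (package names follow RDO conventions but should be verified against the Mitaka install guide):

    # Enable the RDO Mitaka repository and the common prerequisites.
    sudo yum install -y centos-release-openstack-mitaka
    sudo yum update -y
    sudo yum install -y python-openstackclient openstack-selinux

    # From here the official Mitaka install guide takes over, service by service, e.g.:
    # sudo yum install -y openstack-keystone httpd mod_wsgi
    # sudo yum install -y openstack-nova-api openstack-nova-conductor ...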

How Can I run two versions of Sonatype Nexus on the same machine?

So I just started working on a project, and my task is to upgrade Sonatype Nexus 1.9.x running on CentOS 6 to 2.11.x. The old version is currently deployed via a WAR file. The goal is to get the new version deployed without breaking builds when devs try to build their projects.
My plan of attack is to download Nexus, make the current Nexus that is deployed via Tomcat run on a different port, make the new Nexus run on the current port, and then proxy the old Nexus.
I'm running into a couple of problems, though. The old Nexus uses Java 1.6. If I update Java to 1.8, would this break the currently running Nexus?
Would I be able to run two versions of Nexus on the same VM? If so, how would I do that and minimize the chance of messing something up?
Thanks, everyone. I'm just starting out and this is all very new to me.
Since your Nexus install is very old, you have to consider your options:
You could upgrade the existing instance. 1.9 is VERY old, so you have to upgrade in multiple steps: first to 2.0, then 2.7, and then 2.11. This is necessary due to data storage changes for configuration and removed upgrade steps.
You could just set up a new server from scratch with the same configuration in terms of repositories and other things and simply rsync the repositories over to the new storage. You really only have to do this for hosted repositories, since the proxy repositories will hopefully still be online and will just download whatever is requested anew.
If your setup is not too complex I would personally go with option 2. It gives you a chance to revisit things and clean up your setup.
For that setup, the steps are roughly:
Install Java 8 in parallel to Java 6
Install Nexus 2.11 from the bundle so it runs with Eclipse Jetty. Do NOT try to run on Tomcat.
Configure it to run on port 9081 or some other port that does not conflict with your original setup, and do all the other configuration, including creating the repositories as desired as well as the security setup (see the sketch after this list for the port change).
Now you should be able to have both servers running.
Create a script that rsyncs the repositories (located in sonatype-work/nexus/storage) and run it with the new server offline
Start the new Nexus in parallel and run a number of tests against it.
Once you have confirmed everything is working, plan a specific time for the cutover and do this:
Disable any deployment to Nexus (CI servers, tell people, switch hosted repositories to read only)
Run the rsync script one last time
Turn the old Nexus server off
Configure the new server to use the port of the old one
Start the new one up
You are done. Everything should be good now so the last step is to delete the old Nexus and Tomcat setup.
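As a hedged sketch of the port change mentioned in the steps above: in a Nexus 2.x bundle install the listen port lives in conf/nexus.properties, and the paths and port here are placeholders for your layout.

    # Point the new instance at a non-conflicting port, then start it with the bundled script.
    sed -i 's/^application-port=.*/application-port=9081/' /opt/nexus-2.11/conf/nexus.properties
    /opt/nexus-2.11/bin/nexus start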
There are various variations of this process, of course. Here are some tips for the rsync:
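A minimal sketch of such a script; the paths assume the default sonatype-work layout mentioned above and will need adjusting.

    #!/bin/sh
    # Mirror hosted repository storage from the old instance to the new one.
    OLD_STORAGE=/opt/old-nexus/sonatype-work/nexus/storage
    NEW_STORAGE=/opt/nexus-2.11/sonatype-work/nexus/storage

    # -a preserves permissions and timestamps, --delete mirrors removals;
    # add -n for a dry run on the first pass.
    rsync -av --delete "$OLD_STORAGE/" "$NEW_STORAGE/"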
Also feel free to ping us on the mailing list or chat for further help and check out the comprehensive documentation as well.

Openstack and devstack

Does Devstack completely install OpenStack? I read somewhere that Devstack is not, and has never been, intended to be a general OpenStack installer. So what does Devstack actually install? Is there any other scripted method available to completely install OpenStack (Grizzly release), or do I need to follow the manual installation steps given on the OpenStack website?
Devstack does completely install OpenStack from git.
For lesser values of "completely", anyway. Devstack is the version of OpenStack used in Jenkins gate testing by developers committing code to the OpenStack project.
Devstack, as the name suggests, is specifically for developing for OpenStack. As such, its existence is ephemeral. In short, after running stack.sh the resulting (probably) functioning OpenStack is set up... but upon reboot it will not come back up. There are no upstart, systemd, or init.d scripts for restarting services. There is no high availability, no backups, no configuration management. And following the latest git releases in the development branch of OpenStack can be a great way to discover just how unstable OpenStack is before a feature freeze.
There are several Vagrant recipes out there for deploying OpenStack, and openstack-puppet is a Puppet recipe for deploying OpenStack. Chef maintains an OpenStack recipe as well.
Grizzly is a bit old now. Havana is the current stable release.
https://github.com/stackforge/puppet-openstack
http://docs.opscode.com/openstack.html
http://cloudarchitectmusings.com/2013/12/01/deploy-openstack-havana-on-your-laptop-using-vagrant-and-chef/
And Ubuntu even maintains a system called MAAS, along with Juju, for deploying OpenStack super quickly on their OS.
https://help.ubuntu.com/community/UbuntuCloudInfrastructure
http://www.youtube.com/watch?v=mspwQfoYQks
So there are lots of ways to install OpenStack.
However, most folks pushing a production cloud use some form of configuration management system. That way they can deploy compute nodes automatically and recover systems quickly.
Also check out OpenStack on OpenStack (TripleO).
https://wiki.openstack.org/wiki/TripleO
I think the code should be the same, but at least the configuration is not: for example, Devstack will by default use nova-network, while in a manual installation you can choose Neutron. So:
If you are starting to learn OpenStack, Devstack is a good starting point. With it, you can quickly have a development environment.
If you are deploying an OpenStack environment, Devstack is not a choice; instead, you need to install it following the installation guide.
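For illustration, this is the kind of local.conf fragment that was used at the time to switch Devstack from nova-network to Neutron; the service names varied between releases, so treat this as a sketch rather than a recipe:

    # Append to devstack/local.conf: disable nova-network, enable the Neutron services.
    cat >> local.conf <<'EOF'
    [[local|localrc]]
    disable_service n-net
    enable_service q-svc q-agt q-dhcp q-l3 q-meta
    EOF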
If you would like another scripted option for deployment, you can try Packstack. This will work only on Fedora and RHEL.
https://wiki.openstack.org/wiki/Packstack
https://www.rdoproject.org/install/quickstart/
With it, you can choose which services you would like to install. For example, you may choose to install Neutron for networking instead of using nova-network.
Also, it lets you deploy multiple compute nodes just by providing their IPs!
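A rough sketch of a Packstack run; the flags are the commonly documented ones, so check packstack --help and the RDO quickstart for your release.

    # Install Packstack from the RDO repositories.
    sudo yum install -y openstack-packstack

    # Simplest case: everything on one node.
    sudo packstack --allinone

    # Or generate an answer file, choose services and compute hosts, then apply it:
    sudo packstack --gen-answer-file=answers.txt
    # edit answers.txt, e.g. CONFIG_NEUTRON_INSTALL=y, CONFIG_COMPUTE_HOSTS=10.0.0.11,10.0.0.12
    sudo packstack --answer-file=answers.txt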
Yes. Devstack is a tool which helps you build an all-in-one OpenStack environment quickly (just grab a cup of coffee and wait until it completes). Normally it is used by developers to develop new features and/or test code quickly. As an operator, you need to set things up manually, step by step, for each service.
To build via the devstack repo, pull the newest source code from http://git.openstack.org/openstack-dev/devstack, then create a new local.conf in the devstack folder, and run ./stack.sh.
For an example local.conf, see: https://github.com/pshchelo/stackdev/blob/master/conf/local.conf.sample
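Putting that together, a minimal end-to-end sketch; the git URL is the one given above and the local.conf values are placeholders.

    git clone http://git.openstack.org/openstack-dev/devstack
    cd devstack
    cat > local.conf <<'EOF'
    [[local|localrc]]
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    EOF
    ./stack.sh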
Yes, Devstack installs the components of OpenStack. When you use the basic configuration, it will install the core components of OpenStack, which are the base of the OpenStack cloud platform, enough to run some basic things.
For a more advanced configuration, you should edit your local.conf file to specify which services and components you want to install or use in your cloud.
https://github.com/openstack/tacker/blob/master/devstack/local.conf.example
