I have pgpool-II 3.1.3 installed with two PostgreSQL 9.1.3 backends configured as master/slave with streaming replication.
If the master fails everything is ok, the slave takes over and becomes the new master.
The problem is that when I try to rejoin the old master to the cluster, it is added as a master as well, instead of as a slave.
I use pgpoolAdmin version 3.1.1.
Any ideas?
I found the problem. When I first set up streaming replication, I configured PostgreSQL as described in the streaming replication tutorial, where the master and the slave have different settings activated in their config files; that was the problem.
After reading the pgpool-II multi-server tutorial one more time, I found that in the first stage both servers should have the same setup, and the slave should be started through pgpool.
After I corrected that, everything works.
If you have successfully attached the old master, then wouldn't detaching the new master trigger the old master to get promoted again?
http://www.pgpool.net/docs/latest/pgpool-en.html#pcp_detach_node
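For reference, nodes are attached and detached with the pcp tools. A minimal sketch, assuming pgpool's pcp port is 9898, a pcp admin user postgres with password secret, and the old master being node id 0 (pgpool-II 3.1 takes the arguments positionally: timeout, host, port, user, password, node id):

pcp_attach_node 10 localhost 9898 postgres secret 0    # re-attach the old master as node 0
pcp_detach_node 10 localhost 9898 postgres secret 1    # detach the current master (node 1)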
What am I trying to achieve?
1. Trying to build a fault-tolerant WordPress website.
2. Tried installing the web server in one AZ with a Multi-AZ RDS deployment. It was quite successful.
The setup is as follows:
AZ-1 public subnet - launched one EC2 instance; installed httpd, PHP, PHP-MySQL, and WordPress.
AZ-1 private subnet - launched a Multi-AZ RDS instance.
Problem Encountered:
When I wanted to expand to another availability zone for fault tolerance, I launched another EC2 instance in a different availability zone [AZ-2] and installed httpd, PHP, PHP-MySQL, and WordPress.
I did NOT launch another RDS instance. I wanted to connect to the RDS in [AZ-1] because it is already Multi-AZ, so the fault-tolerance setup was only needed for the web server. I was able to install WordPress in the AZ-2 public subnet, but I was unable to connect to the RDS (MySQL) endpoint in AZ-1.
I am getting this error message:
"Already installed.You appear to have already installed WordPress. To reinstall please clear your old database tables first".
"Already installed.You appear to have already installed WordPress. To
reinstall please clear your old database tables first".
This means your second web server can successfully connect to the RDS instance. Instead of trying to "install" WordPress, just copy all your WordPress files from the first web server and you'll be fine.
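For example, a minimal sketch of the copy, assuming the document root is /var/www/html and the first server is reachable as webserver1 (both are placeholders for your setup):

# on the new AZ-2 instance: copy the files instead of running the installer
rsync -a webserver1:/var/www/html/ /var/www/html/
# confirm both servers point at the same RDS endpoint
grep DB_HOST /var/www/html/wp-config.php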
I've created a VM with a VNET attached in OpenNebula. After a while I changed the parameters of the VNET, but those changes do not persist in the VM after my (physical) host is restarted.
I've changed the /var/lib/one/vms/{$VM_ID}/context.sh file, but still no luck persisting the changes.
Do you know what it could be?
I'm using OpenNebula with KVM on a Debian 8 host.
After a while I figured out how to do this myself.
It seems that when the VM is started, the file /var/lib/one/datastores/0/$VM_ID/disk.1 is attached as /dev/sr0.
During the boot process, /usr/sbin/one-contextd mounts this unit and uses the variables inside it; they usually look like this:
DISK_ID='1'
ETH0_IP='192.168.168.217'
ETH0_MAC='02:00:c0:a8:a8:d9'
ETH0_DNS='192.168.168.217'
ETH0_GATEWAY='192.168.168.254'
This info is used to export environment variables (the exported variables can be found in /tmp/one_env), which are used by the script /etc/one-context.d/00-network to set the network configuration.
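You can check this from inside the VM; a quick sketch:

# inside the VM: mount the context CD and inspect it
mount /dev/sr0 /mnt
cat /mnt/context.sh
umount /mnt
# the variables one-contextd actually exported
cat /tmp/one_env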
OpenNebula doesn't provide a simple way of replacing these configs after the VM is created, but you can do the following:
Edit /var/lib/one/datastores/0/$VM_ID/disk.1 and make the required changes (see the sketch after this list)
Restart opennebula service
Restart the VM
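Note that disk.1 is an ISO9660 context image, so "editing" it means extracting its contents, changing them, and rebuilding the image. A minimal sketch of the first step; paths and genisoimage options are assumptions to adjust for your datastore:

# on the physical host, with the VM powered off
mkdir /tmp/ctx
mount -o loop,ro /var/lib/one/datastores/0/$VM_ID/disk.1 /mnt
cp /mnt/* /tmp/ctx/
umount /mnt
vi /tmp/ctx/context.sh        # change ETH0_IP, ETH0_GATEWAY, etc.
genisoimage -o /var/lib/one/datastores/0/$VM_ID/disk.1 -V CONTEXT -J -R /tmp/ctx/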
Hope this is useful to someone :)
Yes, the issue is that this functionality is not supported in current versions of OpenNebula. This will be supported in the upcoming 5.0 version.
You can power off the VM and change most of the parameters (not network parameters, as they are linked to a VNET) in the conf tab of the VM.
For a network-specific change only, you can simply log in to the VM and move the file /etc/one-context.d/00-network somewhere else; your changes to the VM's network configuration then won't be overwritten by the network context script.
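For example (the destination path is arbitrary):

mv /etc/one-context.d/00-network /root/00-network.disabled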
I have created a Zone, Pod and Cluster in CloudStack. I have also added a host to the Cluster, and added Primary Storage and Secondary Storage. But under System VMs, nothing is listed. Also, the message "No running ssvm is found, so command will be sent to LocalHostEndPoint" appears in the logs.
From this I deduced that the template is not being added, and consequently Instances can't be created, since Instances use templates to provide the OS for VMs.
Can anybody please help point out and sort out what may be causing this?
You need to manually install the "system VM" templates. These are the images for worker VMs that CloudStack deploys to run system services. SSVM is an example of a SystemVM. It is responsible for copying templates to secondary storage.
See Prepare the System VM Template in the installation guide.
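On the management server this boils down to running the template installer script; a sketch assuming CloudStack 4.x with KVM and NFS secondary storage mounted at /mnt/secondary (take the exact template URL for your version from the installation guide):

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
  -m /mnt/secondary \
  -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2 \
  -h kvm -F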
Currently, we have a Nexus hosted repository at a remote site (in a different geographic location), and a proxy repository local to us that proxies the hosted repository.
Whenever new versions of files are added to the remote hosted repository, the first request for a newly added file from the build system downloads it to the local proxy repository.
The problem I have now is that some of the files being added are really huge (around 400 MB), so the first build takes a long time to finish.
Is there a way we can poll the remote hosted repository and auto-mirror it?
Nexus Professional 2.x supports this as part of the Smart Proxy feature set. It is an experimental feature that is off by default, but it should work just fine. Give it a go!
To turn it on, go to "Administration/Capabilities", check "Show Advanced", select the "Smart Proxy: Subscribe" capability, and enable preemptive fetching.
Update: as of Nexus 2.3 this is no longer deemed experimental, and you can configure it for each repository that you proxy.
I cannot comment on Manfred's answer, so here is a new variant:
If you are running Nexus Professional, you can use Smart Proxy to synchronize repositories.
You need to go through the general setup as described on http://www.sonatype.com/books/nexus-book/reference/smartproxy.html first (establish trust, set up the publishing hosted repo, set up the receiving proxy repo). Only then is the capability created and Manfred's answer applies:
Go to "Administration/Capabilities", check "Show Advanced" and select the
"Subscribe" capability for your proxy repo. There you can turn on preemptive
fetching, which will automatically download new artifacts in your hosted repository on the proxy.
Can you share some pointers on the best way, the best practices, to install a web application on Unix systems?
Like:
where to place the app, its databases, and so forth,
how to configure it to be secure and easy to back up,
etc.
For example, I know of one such suggestion: to create a unique user for each app.
The app in question is JIRA on FreeBSD, but more general suggestions are also welcome.
Here's what I did for my JIRA install on Fedora Linux:
Created a separate user to run JIRA
Installed JIRA under the JIRA user's home directory
Made a soft link "/home/jira/jira" pointing to the JIRA installation directory (the directory as installed contains the version number, something like /home/jira/atlassian-jira-enterprise-4.0-standalone)
Created an /etc/init.d script to run JIRA as a service, and added it to chkconfig so that it runs at system startup - see these instructions
Created a MySQL database for JIRA on a separate data volume
Set up scheduled XML backups via the JIRA admin interface
Set up a remote backup script to dump the MySQL database and copy the DB dump and XML backups to a separate backup server
In order to avoid having to open extra firewall ports, I set up an Apache virtual host "jira.myhost.com" and used mod_proxy to forward requests to the JIRA URL (see the sketch after this list)
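The mod_proxy part is a standard reverse proxy; a minimal sketch, assuming JIRA listens on its default port 8080 and mod_proxy/mod_proxy_http are enabled (the config file path is the usual Fedora location):

cat > /etc/httpd/conf.d/jira.conf <<'EOF'
<VirtualHost *:80>
    ServerName jira.myhost.com
    # forward all requests to the JIRA instance on localhost
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
EOF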
I set everything up on a virtual machine (an Amazon EC2 instance in my case) and cloned the machine image so that I can easily restart a new instance if the current one goes down.
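For the remote backup step in the list above, a minimal sketch of such a script run from cron; the database name, credentials, paths, and backup host are all placeholders:

#!/bin/sh
# nightly: dump the JIRA database, then ship the dump and the XML backups off-box
DATE=$(date +%F)
mysqldump -u jira -psecret jiradb > /home/jira/backups/jiradb-$DATE.sql
rsync -a /home/jira/backups/ /home/jira/jira/export/ backup.example.com:/backups/jira/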