Zenoss auto-starts upon system reboot - zenoss

I have installed Zenoss 5.2.4 on a machine running CentOS 7. I wanted to reboot the machine, so I stopped serviced for a graceful shutdown of all of Zenoss's internal services.
Upon rebooting, I see that serviced is already running, which means Zenoss.core is started at system boot. I want to start serviced and Zenoss.core manually after the system reboots. How can I turn this auto-start off?
I checked the /etc/default/serviced configuration file but couldn't find any such parameter.
Thanks.

I ran systemctl disable serviced before the restart, and serviced no longer started automatically. Afterwards I enabled and started serviced manually.
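The steps above can be sketched as follows (run as root; `serviced` is the systemd unit name used by Zenoss 5.x):

```shell
# Stop serviced from auto-starting at boot (this does not stop an
# already-running service):
systemctl disable serviced

# After a reboot, start it manually when you are ready:
systemctl start serviced

# To restore auto-start at boot later:
systemctl enable serviced
```

`systemctl is-enabled serviced` will report `disabled` once the first command has taken effect.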

Related

Apache2 server getting too many connection requests

I am trying to deploy a Node.js & WordPress website together on an Ubuntu server, and I am using Apache2 for my server config. Everything works fine initially, but after some time my website starts showing connection timeouts in the browser, and when I checked the Ubuntu machine I found the following:
I checked the error log and increased the worker limit, but then got this error again.
I also checked the tasks using the htop command and found this; as I increase the worker request number in the Apache2 config file, the number of tasks increases automatically.
I also checked which IP is making so many requests and found this:
Access log file:
I have no idea what is going on here and don't know how to fix it. If anyone knows the solution to this problem, please reply.
Thanks
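For the "which IP is making so many requests" step, a common way to rank clients by request count is to extract the first field of the access log (the client address in Apache's common/combined format). The log sample below is invented for illustration:

```shell
# Hypothetical sample of an Apache combined-format access log
# (IPs, paths, and timestamps are made up for this example).
cat > /tmp/access_sample.log <<'EOF'
10.0.0.5 - - [01/Jan/2024:00:00:01 +0000] "GET / HTTP/1.1" 200 512
10.0.0.5 - - [01/Jan/2024:00:00:02 +0000] "POST /xmlrpc.php HTTP/1.1" 200 403
10.0.0.5 - - [01/Jan/2024:00:00:03 +0000] "POST /xmlrpc.php HTTP/1.1" 200 403
192.0.2.9 - - [01/Jan/2024:00:00:04 +0000] "GET /index.html HTTP/1.1" 200 1024
EOF

# Count requests per client IP, busiest first; field 1 of each
# log line is the client address.
awk '{print $1}' /tmp/access_sample.log | sort | uniq -c | sort -rn
```

Run against the real log (typically /var/log/apache2/access.log), a single IP dominating the top of this list usually points at a scraper or an attack on endpoints like xmlrpc.php, which can be blocked or rate-limited.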

Weblogic 12c, task in progress forever

I have my domain configured in WebLogic 12c. When I try to start the servers, they come up (the state changes to Running) and the web services are active. However, the Status of Last Action in the WebLogic console is always "Task in Progress".
What are the possible reasons that this never changes to completed?
Also, it changes to "None" after I restart my Admin server.
I would check your AdminServer and managed servers for any errors. I suspect some communication issue between the Admin and the managed servers.

AWS CodeDeploy vs Windows 2016 in ASG

I use AWS CodeDeploy to deploy builds from GitHub to EC2 instances in an Auto Scaling group.
It works fine for Windows 2012 R2 with all deployment configurations.
But for Windows 2016 it fails completely on "OneAtATime" deployments;
during an "AllAtOnce" deployment only one or two instances deploy successfully, and all the others fail.
In the agent's log file, this suspicious message is present:
ERROR [codedeploy-agent(1104)]: CodeDeploy Instance Agent Service: CodeDeploy Instance Agent Service: error during start or run: Errno::ETIMEDOUT
- A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2)
All policies, roles, software, builds and other settings are the same; I even tested this on a brand-new AWS account.
Has anybody faced this behaviour?
I ran into the same problem. During my investigation I found that the server's route table had a wrong route for 169.254.169.254 (it pointed to the gateway of the network where my template was captured), so the instance couldn't read its metadata.
From the above error it looks like the agent isn't able to talk to the CodeDeploy endpoint after the instance starts up. Please check that the routing tables and other proxy-related settings are set up correctly. Also, if you have not already, you can turn on the debug log by setting :verbose to true in the agent config and restarting the agent. That will help debug the issue.
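The two suggestions above can be sketched for a Linux instance as follows (on Windows 2016 the equivalents are `route print` and editing the agent's conf.yml; the Linux paths shown are the documented defaults, but verify them on your agent version):

```shell
# 1. Verify the instance has a usable route to the metadata service
#    (a wrong route here was the root cause described above):
ip route get 169.254.169.254
curl -s --max-time 5 http://169.254.169.254/latest/meta-data/instance-id

# 2. Turn on the agent's debug log, then restart the agent so the
#    change takes effect:
sudo sed -i 's/^:verbose: false/:verbose: true/' \
    /etc/codedeploy-agent/conf/codedeployagent.yml
sudo service codedeploy-agent restart
```

If step 1 times out, the route table (or a proxy setting) is the problem, not CodeDeploy itself.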

How to stop service without restart

I have deployed a "helloworld" service on Cloudify 2.7 and an OpenStack cloud. I would like to stop the tomcat service without it being restarted.
So, in the Cloudify shell I executed:
cloudify#default> connect cloudify-manager-1_IP
Connected successfully
cloudify#default> use-application helloworld
Using application helloworld
cloudify#helloworld> invoke tomcat cloudify:start-maintenance-mode 60
Invocation results:
1: OK from instance #1#tomcat_IP, Result: agent failure detection disabled successfully for a period of 60 minutes
invocation completed successfully
At this point, I connected via ssh to the tomcat VM and ran:
CATALINA_HOME/bin/catalina.sh stop
In CATALINA_HOME/logs/catalina.out I can see that the app server is stopped and then immediately restarted!
So, what should I do to stop the app server and restart it only when I decide to?
Maintenance mode in Cloudify 2.7 is used to prevent the system from starting a new VM when a service VM has failed.
What you are looking for is to prevent Cloudify from auto-healing a process: Cloudify checks the liveness of the configured process, and if it dies, it executes the 'start' lifecycle again.
In your case the monitored process can change, since you will be restarting it manually, so you should not use the default process monitoring. There is a similar question here: cloudify 2.7 locator NO_PROCESS_LOCATORS
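As a sketch of what the linked question points at: in a Cloudify 2.7 Groovy recipe you can disable process-based liveness detection so a manual stop is not treated as a failure. The fragment below is illustrative, not a complete recipe, and the service name is taken from this question:

```groovy
service {
    name "tomcat"
    lifecycle {
        // Disable the default process locator so Cloudify does not
        // auto-heal (re-run 'start') when the tomcat process exits.
        locator {
            NO_PROCESS_LOCATORS
        }
    }
}
```

With no process locators configured, liveness must be tracked some other way (e.g. a custom monitor), otherwise a real crash will also go unnoticed.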

Unable to request a user password reset

I am on Plone 4.1.6. If you go to Site Setup > Users and Groups, tick the "Reset Password" checkbox for a user and click "Apply changes", the system hangs, and after 5 minutes I get this error from Apache:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request POST /@@usergroup-userprefs.
Reason: Error reading from remote server
Apache/2.2.22 (Ubuntu) Server at 192.168.1.4 Port 443
After the error, I have to restart Plone to make Plone respond again.
My Environment:
Plone 4.1.6 (4115)
CMF 2.2.6
Zope 2.13.15
Python 2.6.8 (unknown, Apr 27 2013, 22:01:31) [GCC 4.6.3]
Addons:
Diazo theme support 1.0b8
Installs a control panel to allow on-the-fly theming with Diazo
Plone Classic Theme 1.1.2
The old theme used in Plone 3 and earlier versions.
Static resource storage 1.0b5
A folder for storing and serving static resource files
I am running Plone behind Apache
Testing locally
Running a virtual machine with VirtualBox 4.2.12
Plone is installed on the virtual machine
Plone version is 4.1.6
Virtual machine is running Ubuntu 12.04 AMD64
ZEO cluster with 2 clients
Email is properly configured in the Plone instance
As far as I know, everything else is working fine with my Plone instance, including the other checkboxes available in Users and Groups.
I did a test with ssmtp, sending an email to myself from the node on the VM, and had no problem sending the email.
I tried running in foreground (fg) mode and everything seems OK.
I checked the Apache logs and everything seems OK there too.
If I create an ssh tunnel to bypass Apache and access Plone directly, I don't get a proxy error, but the system still hangs forever.
I don't know what to do to solve this problem. Any ideas?
Does the python process use a lot of CPU when it hangs? Check using top.
Install ZopeHealthWatcher; then, when it hangs again, use it to get a list of what each thread is doing. That will often tell you where the code is sitting: in a loop, in infinite recursion of some sort (this can happen in the ZODB, especially with acquisition and similarly named things), or merely blocking on something (e.g., MTU issues on the network link to the SMTP server, so small emails work but big ones hang).
You could also stop the SMTP server (or just change the port in the Plone control panel) and see if you at least get an exception out of that. Plone should, by default, raise an exception if it cannot connect to the SMTP server.
In really extreme cases, you can use gdb to attach to the hanging python process (I usually use top to find the one sitting at 100% CPU) and find where it is hanging. This is more complex than using ZopeHealthWatcher, but I recently traced a hang to a race condition in ReportLab's font code using precisely this method; it is very powerful. What is nice about gdb is that it stops the process and lets you step through the code and up and down the call stack, unlike ZopeHealthWatcher, which just gives you a snapshot (a bit like the Heisenberg uncertainty principle: you can only observe where it is now...).
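The gdb approach can be sketched like this, assuming the hung process's PID has been found with top (12345 is a placeholder) and that the python-gdb debug extensions are available:

```shell
# Attach to the hung interpreter; this pauses the process until you detach.
gdb -p 12345

# Inside gdb, with the python-gdb extensions loaded:
#   (gdb) py-bt                    # Python-level backtrace of the current thread
#   (gdb) bt                       # C-level backtrace
#   (gdb) thread apply all py-bt   # Python backtrace for every thread
#   (gdb) detach                   # resume the process when done
```

Unlike ZopeHealthWatcher, this works even when Zope's own threads are too wedged to answer a monitoring request.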
