I have a website with NGINX & PHP-FPM. A few hours ago I checked my website and NGINX threw an error. My website was down.
I checked the status pages of NGINX and the FPM pool, and the FPM status page did not load.
Then I checked the status of PHP-FPM with "service php5-fpm status" and it showed that it wasn't running. So I restarted it.
Right now everything is fine, and normally nothing goes wrong, but now I want to know: what could possibly shut down php-fpm? Could it be a script? A memory problem? Is it common practice to have a script checking that PHP-FPM is running?
Thank you!
Who knows; this is why we sysadmins install monitoring solutions with 24-hour text notifications, and use tools like Puppet with templates to ensure certain services are always running... You'll get a more detailed answer on www.serverfault.com - this is a programming site.
Probably a segmentation fault in a php child. Do your php-fpm logs show segmentation faults or other errors?
Yes, it's common practice to have a service running that checks to make sure php-fpm and other services are running.
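For example, a minimal cron-driven watchdog sketch (the service name matches the question's php5-fpm; the log path is a placeholder, and a dedicated monitor such as Monit or supervisord is the more robust choice):

#!/bin/sh
# Minimal php-fpm watchdog: restart the service if the master process is gone.
# Assumes a Debian-style init script (service php5-fpm ...), as in the question.
if ! pgrep -x php5-fpm > /dev/null; then
    echo "$(date): php5-fpm was down, restarting" >> /var/log/php-fpm-watchdog.log
    service php5-fpm start
fi

Run it from root's crontab every minute (* * * * * /usr/local/bin/php-fpm-watchdog.sh), but also check /var/log/php5-fpm.log for why it died in the first place.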
I'm seeing this error spewing in my log file for Glassfish 3.1.2.2:
[#|2019-01-18T18:52:34.603+0000|WARNING|glassfish3.1.2|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=36;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(1).|#]
I understand that I have a runaway thread, but it looks like GlassFish is not correctly killing that thread, because when I run jstack I still see the stuck thread. I saw in other Stack Overflow posts that people suggest disabling the request timeout, but I don't want to disable this check because I need to be able to recover from runaway threads.
I know GlassFish 3.1.2.2 is old but I need this application to run until we have the resources to port/convert it into another environment.
I'm hoping it's as easy as patching a few jar files and restarting the server. Any help would be greatly appreciated.
Thanks!
We tried this POC to deploy code via AWS CodeDeploy on 20 live servers behind a load balancer. We have nginx running in front of HipHop. We tried hot deployment, i.e. deploying while nginx was running.
As soon as the deployment process moves the new files into place on the production servers, we start getting the following error, which continues indefinitely on some servers, and the Jenkins job times out after polling for 50 minutes -
Fatal error: syntax error, unexpected $end in /path/to/file.php on line 19477
It appears that only part of the file gets loaded and read, even though the file in its entirety has no syntax errors.
Restarting nginx on such servers manually fixes the problem, but that does not seem to be a good solution.
We are trying to find out the reason behind this issue.
HHVM version being used - HipHop VM 3.12.0-dev (rel)
Nginx version - 1.8.0
Alternative approach
We are now trying cold deployment (shut down nginx, deploy, then start nginx again), but that too is throwing up its own issues. I will not post those details here, but the idea is to take advantage of the large number of servers we have and do the cold deployment in such a way that only a small percentage of the servers behind the LB have nginx off at any time, so that it does not put too much load on the running servers.
CodeDeploy will indeed replace files during a deployment. I recommend you try your approach of doing a cold deployment, in which you fully shut down before deploying and start up again after it's done.
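As a sketch of how that can be wired up, CodeDeploy's appspec.yml can point the ApplicationStop and ApplicationStart lifecycle hooks at small scripts like these (the file names and the plain service commands are assumptions about your setup):

#!/bin/bash
# stop_server.sh - ApplicationStop hook: runs before the new files are copied in,
# so no request ever sees a half-replaced PHP tree.
set -e
service nginx stop

#!/bin/bash
# start_server.sh - ApplicationStart hook: runs after the new files are in place.
set -e
service nginx start

Combined with a deployment configuration such as CodeDeployDefault.OneAtATime (or a custom percentage), CodeDeploy will walk through the fleet so only a small fraction of the instances behind the LB is stopped at any moment.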
I have been experiencing MySQL crashing recently and really need to figure out what I need to do to get this to stop.
I have a 2GB Digital Ocean server running the following:
Ubuntu 14.04
PHP v5.5.9
Apache v20120211
MySQL v5.5.43
Wordpress v4.2
I also have 2GB of swap.
The last time MySQL crashed, this was in my error log:
http://laravel.io/bin/E304E
The important part seems (to me) to be this:
InnoDB: Fatal error: cannot allocate memory for the buffer pool
I am getting about 2000 page views per day, so I thought 2GB should easily be enough memory to run the site.
Can anyone give me some ideas what I can do or what I definitely need to do to stop this happening?
Thanks
2000 page views per day is well within the range of what your server can handle. It's possible you're getting hit by bots and/or Apache isn't configured well for your server size.
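As a first check, it's worth confirming that memory pressure really is what killed MySQL; on a stock Ubuntu 14.04 box the kernel's OOM killer logs to syslog (the paths below are the Ubuntu defaults):

# Did the kernel OOM killer take out mysqld?
grep -iE 'out of memory|oom-killer|killed process' /var/log/syslog /var/log/kern.log

If mysqld shows up there, that suggests Apache workers ate the RAM and InnoDB then could not allocate its buffer pool on restart.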
Apache2Buddy is a quick diagnostic tool to help with your Apache configuration: $ curl -L http://apache2buddy.pl/ | perl. It will print out a report with suggested configuration adjustments given your available RAM and application size. My guess is that you'll need to lower MaxRequestWorkers (set in /etc/apache2/mods-available/mpm_prefork.conf).
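As a rough cross-check on Apache2Buddy's numbers (this assumes the worker processes are named apache2, the Ubuntu default), you can estimate a sane MaxRequestWorkers yourself:

# Average resident memory per Apache worker, in MB
ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d workers, avg %.1f MB each\n", n, sum/n/1024}'

Divide the RAM you can spare for Apache by that average: for instance, ~1 GB spare at ~50 MB per worker suggests a MaxRequestWorkers of around 20, well below the Ubuntu default of 150.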
I'm also guessing that you have bots hitting your site, which would cause the volume of traffic that is crashing Apache. Check your access logs: $ cat /var/log/apache2/access.log.
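A quick sketch for spotting that in the logs (default Ubuntu paths; the user-agent extraction assumes the combined log format):

# Top 10 client IPs by request count
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -10
# Top 10 user agents (the agent is the sixth double-quote-delimited field)
awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -10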
I wrote an article on this situation if you want a deeper explanation, a method to stress test, or ideas on how to block some of the bot traffic: http://brunzino.github.io/blog/2016/05/21/solution-how-to-debug-intermittent-error-establishing-database-connection/
I am struggling with Symfony 2 error reporting because I can't find out what is really happening when a 500 error is triggered.
I have XDebug correctly installed, but it seems like Symfony's error handling takes over everything.
The custom page just says:
Oops! An Error Occurred
The server returned a "500 Internal Server Error".
Something is broken. Please e-mail us at [email] and let us know what you were doing when this error occurred. We will fix it as soon as possible. Sorry for any inconvenience caused.
That's pretty funny! Something is broken and I need to fix my code, but I can't tell what or where the problem is! And sure, if I send an email to the Symfony team, will they be able to solve my problem when I don't even know what to tell them?
Some clue on what I am missing here?
For Symfony 2 and below,
First, look at the logs in app/logs depending on your environment — dev.log for development, prod.log for production, etc.
If the code crashes before Symfony has a chance to run, check the logs of your web server — e.g. nginx.
If you have a symfony flex project, it is also possible that you forgot to install monolog ;) Run:
composer req log
For Symfony versions 3.x, 4.x, 5.x, 6.x and above,
The logs are found depending on the environment:
For the dev env - var/logs/dev.log (var/log/dev.log from Symfony 4 onwards)
For the prod env - var/logs/prod.log (var/log/prod.log from Symfony 4 onwards)
Also, as the accepted answer suggests, you might have to check the server logs if your app crashes before even reaching Symfony.
The usual locations for server logs are listed below:
For apache server - ls /var/log/apache2
For Nginx server - ls /var/log/nginx
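For example, to watch both while reproducing the error (paths per the defaults above; adjust for your Symfony version and server):

# Follow the Symfony prod log and the nginx error log together
tail -f var/logs/prod.log /var/log/nginx/error.log
# Or pull just the most recent critical entries
grep CRITICAL var/logs/prod.log | tail -n 20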
Running
sudo chmod -R o+w var/cache/
from inside the project's folder solved the problem for me, at least this time :)
There were no errors in Symfony's or web server's logs.
What caused the problem
It turned out that the problem was caused by me previously removing the cache folder, and regenerating the cache, i.e.
rm -fr var/cache/
./bin/console cache:clear
After that, var/cache became drwxr-xr-x, so the web server user no longer had write access to the cache.
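For what it's worth, a less permissive fix than o+w is the ACL approach from the Symfony docs; a sketch (the web-server-user detection is an assumption about your environment, so verify $HTTPDUSER before applying):

# Give the web server user and your own user write access via ACLs
HTTPDUSER=$(ps axo user,comm | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d' ' -f1)
sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:"$(whoami)":rwX var/cache var/log
sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:"$(whoami)":rwX var/cache var/log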
An even better way to improve your Symfony debugging process is to have Monit monitor your Symfony and Apache logs for any errors and send you emails whenever something bad happens: http://intelligentbee.com/blog/2016/01/12/how-to-monitor-symfony-and-apache-logs-with-mmonit/
Today I deployed an app to our production application server, GlassFish v3, through Jenkins CI into the autodeploy folder. The app server went down, and I cannot bring it back up.
My goal is to have the server up and running the same as prior to deploying the application. This is what I have done:
First, find the PID of the process running on port 4848: netstat -nlept
Then kill that PID: kill -9 PID
Remove the war file that Jenkins just put in the autodeploy directory, just in case that is the problem.
Start the server again by doing ./asadmin start-domain domain1
The server takes FOREVER to start!!! In fact it never starts successfully, as I cannot access the admin console on 4848 or any of the other apps that were already running. However, it leaves a process running on 4848.
I looked at jvm.log and server.log and found a java.net.BindException: No free port within range...
So my questions are as follows:
Do you know what is going on?
Do you know how to fix it?
Do you know of a way to speed up the ./asadmin start-domain domain1 process?
Note: On our QA app server (same version, same OS, same Java, same Grails) this does not happen. Really frustrated with this issue.
Thanks a lot for your help. Any help would be very much appreciated as this is a production issue that has several applications down for a few hours already.
Dario
I can deploy my application now; basically it boiled down to increasing the MaxPermSize JVM option.
Under the config folder, edit domain.xml and change the default size to this:
-XX:MaxPermSize=256m
You can always increase it as necessary.
Also, if that is not enough, you can change the max heap size in that same file:
-Xmx512m. I have left it as is, but if required you can change it to 6g or more on a 64-bit OS. On a 32-bit OS it will only recognize up to 3.5g.
Hope this helps somebody else in the future, as this issue kept me at work until 9:00PM
UPDATE:
I had performance issues again and I found this other solution on Joshi's tech blog:
http://joshitech.blogspot.com/2009/09/glassfish-application-server.html
Basically, add the following JVM options in domain.xml. They should improve GlassFish boot-up and deployment performance:
<jvm-options>-server</jvm-options>
<jvm-options>-Xms3000m</jvm-options>
<jvm-options>-Xmx3000m</jvm-options>
<jvm-options>-XX:MaxPermSize=192m</jvm-options>
<jvm-options>-XX:NewRatio=2</jvm-options>
<jvm-options>-XX:+AggressiveHeap</jvm-options>
<jvm-options>-XX:+AggressiveOpts</jvm-options>
<jvm-options>-XX:+UseParallelGC</jvm-options>
<jvm-options>-XX:+UseParallelOldGC</jvm-options>
<jvm-options>-XX:ParallelGCThreads=5</jvm-options>
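If you'd rather not edit domain.xml by hand, the same options can be managed with asadmin while the domain is up; a sketch (asadmin needs the colons escaped, and the 192m value assumes the stock GlassFish v3 default):

# Swap MaxPermSize without touching domain.xml directly
./asadmin delete-jvm-options '-XX\:MaxPermSize=192m'
./asadmin create-jvm-options '-XX\:MaxPermSize=256m'
./asadmin restart-domain domain1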