Tomcat 8 -XX:OnOutOfMemoryError unable to restart Tomcat - out-of-memory

I'm unable to automatically restart a Tomcat instance when an OutOfMemoryError occurs.
I have tried several variations of the -XX:OnOutOfMemoryError value:
-XX:OnOutOfMemoryError='kill -9 %p;/application/tomcat/bin/start.sh'
-XX:OnOutOfMemoryError='kill -9 %p;./application/tomcat/bin/start.sh'
-XX:OnOutOfMemoryError="kill -9 %p;cd /application/tomcat8/bin/;./application/tomcat8/bin/start.sh"
But whatever I try, the start.sh script is never executed; catalina.out shows:
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p;/application/tomcat8/bin/start_commons.sh"
# Executing /bin/sh -c "kill -9 28005"...
The Tomcat instance is successfully killed, but then nothing else happens.
Any ideas?

The only thing that really works is setting -XX:OnOutOfMemoryError as follows:
export CATALINA_OPTS="-Xms512m -Xmx1024m -XX:OnOutOfMemoryError='kill -9 %p' "
and having a separate script that checks whether the process is still running and, if not, restarts the Tomcat instance.
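For reference, a minimal sketch of such a watchdog script, assuming the Tomcat JVM can be identified by its Bootstrap class and that the start script lives at /application/tomcat/bin/start.sh as in the question:

#!/bin/sh
# check_tomcat.sh (sketch): restart Tomcat if no matching JVM process is found.
# The pgrep pattern is an assumption; adjust it to match your instance.
if ! pgrep -f "org.apache.catalina.startup.Bootstrap" > /dev/null; then
    /application/tomcat/bin/start.sh
fi

A crontab entry such as * * * * * /application/tomcat/bin/check_tomcat.sh (path hypothetical) would run the check every minute.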

Related

Docker: Permission denied when running rocker/shiny-verse and rocker/shiny images

I'm trying to run Shiny Server on an EC2 instance running Ubuntu.
Following this page, I run this command: docker run --rm -p 3838:3838 rocker/shiny.
I get the following warning and error:
s6-supervise shiny-server: warning: unable to spawn ./run - waiting 10 seconds
s6-supervise (child): fatal: unable to exec run: Exec format error
After that, those two lines just repeat every 10 seconds. I can't even kill the process, so I have to close the terminal and open another one.
I'm new to Docker and have no real clue how to proceed from here. Any help would be appreciated.
Update:
At least I've tracked down what s6-supervise is: it is part of a set of utilities built around process supervision and management, logging, and system initialization.

net.corda.core.serialization.SerializationWhitelist: Error reading configuration file

I downloaded the CorDapp template (Java) from https://github.com/corda/cordapp-template-java.
Every time I make a change to the project, gradlew deployNodes fails with the error below. However, the problem resolves itself once I restart my system.
Is there anything I am missing?
> Configure project :
Gradle now uses separate output directories for each JVM language, but this build assumes a single directory for all classes from a source set. This behaviour has been deprecated and is scheduled to be removed in Gradle 5.0
at build_d668pifueefmtb65xfqnh374z$_run_closure5.doCall(C:\Users\amit.pamecha\Documents\workspace\abcdwork\capital-coin\cordapp-template-java\build.gradle:83)
The setTestClassesDir(File) method has been deprecated and is scheduled to be removed in Gradle 5.0. Please use the setTestClassesDirs(FileCollection) method instead.
at build_d668pifueefmtb65xfqnh374z$_run_closure5.doCall(C:\Users\amit.pamecha\Documents\workspace\abcdwork\capital-coin\cordapp-template-java\build.gradle:83)
> Task :deployNodes
Bootstrapping local network in C:\Users\amit.pamecha\Documents\workspace\abcdwork\capital-coin\cordapp-template-java\build\nodes
Node config files found in the root directory - generating node directories
Generating directory for Notary
Generating directory for PartyA
Generating directory for PartyB
Nodes found in the following sub-directories: [Notary, PartyA, PartyB]
Waiting for all nodes to generate their node-info files...
Distributing all node info-files to all nodes
Gathering notary identities
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':deployNodes'.
> net.corda.core.serialization.SerializationWhitelist: Error reading configuration file
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
* Get more help at https://help.gradle.org
BUILD FAILED in 26s
12 actionable tasks: 4 executed, 8 up-to-date
This is caused by a stale Gradle process. You need to kill this process.
You can use killall -9 java or pkill java on Unix, or wmic process where "name like '%java%'" delete on Windows, to kill all Java processes.
Or you can use something like:
lsof -nP +c 15 | grep LISTEN   (find listening processes and their ports)
ps ax | grep <pid>             (confirm the command line of the process)
kill -9 <pid>
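As a rough sketch combining the above (port 10000 is only an example; substitute whatever port the stale node or Gradle process is holding), and noting that ./gradlew --stop is also worth trying first, since it asks running Gradle daemons to shut down cleanly:

# Ask any running Gradle daemons to stop cleanly first
./gradlew --stop
# If a stale java process still holds a node port (10000 is an example), find and kill it
pid=$(lsof -nP -iTCP:10000 -sTCP:LISTEN -t)
ps -p "$pid" -o pid,cmd    # confirm it is the stale process before killing it
kill -9 "$pid"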

Convert command line to a service in Ubuntu 16.04 LTS

When I run the command bitcore-node start, it starts two services.
A screenshot of ps aux is attached.
I created a service in /etc/init.d:
description "Bitcoin Core for Bitcore"
author "BitPay, Inc."
limit nofile 20000 30000
start on runlevel [2345]
stop on runlevel [016]
kill timeout 300
kill signal SIGINT
# user/group for bitcore daemon to run as
setuid ubuntu
setgid ubuntu
# home dir of the bitcore daemon user
env HOME=/home/ubuntu
respawn
respawn limit 5 15
script
exec bitcore-node -conf=/home/ubuntu/love/data/bitcoin.conf -datadir=/home/ubuntu/love/data -testnet
end script
I am getting an error while running it (error screenshot attached).
Any ideas?
You have written the script as an Upstart init script, but it is being executed as a SysV init script by the systemd init system.
You could try placing the script in /etc/init/ instead of /etc/init.d/. That way, it might be processed as the Upstart init script that it actually is.
However, Upstart has been replaced by systemd on Ubuntu 16.04, so following a tutorial on translating your Upstart script into a systemd .service file is recommended.
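As a rough sketch of such a translation (not tested; the ExecStart binary path is an assumption, so check it with which bitcore-node, and the other values are copied from the Upstart script above), a unit saved as /etc/systemd/system/bitcore-node.service could look like:

# /etc/systemd/system/bitcore-node.service (sketch)
[Unit]
Description=Bitcoin Core for Bitcore
After=network.target

[Service]
User=ubuntu
Group=ubuntu
Environment=HOME=/home/ubuntu
LimitNOFILE=30000
# Adjust the binary path to the output of `which bitcore-node`
ExecStart=/usr/local/bin/bitcore-node -conf=/home/ubuntu/love/data/bitcoin.conf -datadir=/home/ubuntu/love/data -testnet
KillSignal=SIGINT
TimeoutStopSec=300
Restart=on-failure

[Install]
WantedBy=multi-user.target

After saving it, run sudo systemctl daemon-reload, then sudo systemctl enable bitcore-node and sudo systemctl start bitcore-node.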

Memory issue with meteor up (mup) on Digital Ocean

I couldn't find existing posts related to my issue. On a Digital Ocean Droplet, mup setup went fine, but when I try to deploy, I get the following error. Any ideas? Thanks!
root@ts:~/ts-deploy# mup deploy
Meteor Up: Production Quality Meteor Deployments
Building Started: /root/TS/
Bundling Error: code=137, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
bash: line 1: 31217 Killed meteor build --directory /tmp/dc37af3e-eca0-4a19-bf1a-d6d38bb8f517
Below are the logs. node -v indicates I am using 0.10.31. How do I check which script is exiting with the error? Any other ideas? Thanks!
error: Forever detected script exited with code: 1
error: Script restart attempt #106
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #107
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #108
stepping down to gid: meteoruser
stepping down to uid: meteoruser
After I went back to an old backup of the DO Droplet and re-ran mup setup and mup deploy, I now get this in the command-line output:
Building Started: /root/TS
Bundling Error: code=134, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
FATAL ERROR: JS Allocation failed - process out of memory
bash: line 1: 1724 Aborted (core dumped) meteor build --directory /tmp/bfdbcb45-9c61-435f-9875-3fb304358996
and this in the logs:
>> stepping down to gid: meteoruser
>> stepping down to uid: meteoruser
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
The memory issue stems from using DigitalOcean's $5 Droplet. To solve the problem, I added swap to the server, as explained in detail below.
Create the swap file using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
“of=/swapfile” designates the file’s name. In this case the name is swapfile.
Next, prepare the swap file by creating a Linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename Type Size Used Priority
/swapfile file 262140 0 -1
The swap will remain active only until the machine reboots. You can make it permanent by adding an entry to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
Swappiness should be set to 10. Skipping this step may result in poor performance, whereas setting it to 10 makes the swap act as an emergency buffer, preventing out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
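You can check that the new value took effect with:
sysctl vm.swappiness
which should print vm.swappiness = 10.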
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
This only worked for me after increasing the swap space to 1 GB:
Turn all swap off:
sudo swapoff -a
Resize the swap file:
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
Set up the swap area again:
sudo mkswap /swapfile
Turn swap back on:
sudo swapon /swapfile
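You can then confirm the new size:
swapon -s
free -m
The swap file should now show roughly 1 GB (reported in kilobytes or megabytes depending on the tool).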

How to daemonize a process?

On my hosting account I run a chat app in Node.js. All works fine; however, my host times out processes every 12 hours. Apparently a daemonized process will not be timed out, so I tried to daemonize it by:
using Forever.js, running forever start chat.js. Running forever list confirms it runs, and ps -ef shows ? in the TTY column
trying nohup node chat.js. In ps -ef the TTY column shows pts/0 and the PPID is 1
trying to disconnect stdin, stdout, and stderr and make it ignore the hangup signal (SIGHUP), so nohup ./myscript 0<&- &> my.admin.log.file &, with no luck. In ps -ef the TTY column is pts/0 and the PPID is anything but 1
trying (nohup ./myscript 0<&- &>my.admin.log.file &), with no luck again. In ps -ef the TTY column is pts/0 and the PPID is 1
After all this, the process always times out in about 12 hours.
Now I have tried (nohup ./myscript 0<&- &>my.admin.log.file &) & and am waiting, but I am not keeping my hopes up and need someone's help.
The hosting support claims that daemon processes do not time out, but how can I make sure my process is a daemon? Nothing I tried seems to work, even though, with my limited understanding, the ps -ef output seems to suggest the process is daemonized.
What should I do to daemonize the process without moving to a much more expensive hosting plan? Can I argue with the host that after all this the process is a daemon and they just got it wrong somewhere?
Upstart is a really easy way to daemonize processes:
http://upstart.ubuntu.com/
There's some info on using it with Node and monit, which will restart Node for you if it crashes:
http://howtonode.org/deploying-node-upstart-monit
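As a rough sketch, assuming you have root access on the host and that node lives at /usr/bin/node and your script at /home/youruser/chat.js (all three are placeholders), an Upstart job saved as /etc/init/chat.conf could look like:

description "Node.js chat daemon"
start on runlevel [2345]
stop on runlevel [016]
# run as an unprivileged user (placeholder name)
setuid youruser
respawn
respawn limit 5 15
exec /usr/bin/node /home/youruser/chat.js

You could then start it with sudo start chat, and Upstart would respawn it if it ever died. Note that this requires root on the machine, which a shared hosting account may not provide.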
