We are deploying a Meteor application on Swisscom's Cloud Foundry. The meteor build command, which is part of the buildpack we are using, is being killed. This does not happen every time, but it makes the deployment process unreliable.
The error message is:
/tmp/buildpacks/ccde798f181156726dc68059bc038932/bin/compile: line 64: 99 Killed meteor build --directory deploy --server http://localhost:3000 --architecture os.linux.x86_64
We are using a buildpack forked from cloudfoundry-community. It can be found here:
https://github.com/vl4d1m1r4/cf-meteor-buildpack. The command being killed can be found on line 70 inside bin/compile.
Any insight on why this is happening would be highly appreciated.
This could be due to insufficient resources assigned to the app. If it runs out of memory during compilation, CF will destroy the container and the meteor process within it. Run cf events <appname> to see whether the container is being deliberately destroyed; if it is due to insufficient memory, assign more!
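For example (a minimal sketch; the 1G figure is just an illustration, and this assumes the staging container inherits the app's memory limit):
cf events <appname>
cf scale <appname> -m 1G
In the cf events output, look for a crash event whose exit description mentions memory before scaling up.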
firebase-tools is an NPM package that uses the JVM. It includes an emulators:start command that fires up the JVM and has it listen on some ports (defined in settings) for the emulators' input/output.
I use MacOS. I use a VSCode terminal to start/end the emulators, and define the details of how to start using an NPM script. Like this:
npm run build && firebase emulators:start --only functions,auth,firestore,database --import ./db-temp
In other words, run the build script, then start some emulator services.
So when running the emulators, I:
Open a terminal
Navigate to the folder containing the script
Run the script (npm run serve)
This gives me a convenient place to get feedback from the emulators for things like errors, network activity, etc.
If I close VSCode using its X button, everything in VSCode is immediately shut down. But when the terminal is closed this way, the JVM doesn't detect it and quit: it keeps running. Then when I reopen VSCode and try running my script again, it tells me that the port normally used by the emulators is still in use, because the JVM didn't shut down properly. So I have to track down the JVM process, Force Quit it, and then everything works fine.
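Until there is a proper fix, a quick sketch for tracking down the orphaned JVM on macOS is to find whichever process still holds the emulator port (8080 here is just an example; use the port from your emulator settings):
lsof -i tcp:8080
kill <PID>
lsof prints the PID of the process listening on the port; kill -9 may be needed if a plain kill is ignored.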
To shut the emulators down properly, I have to go to the terminal process running the emulators, press Ctrl+C to close the process, and then close VSCode. This extra step is inconvenient and easily forgotten.
Can I convince the JVM to shut down when its originating process terminates, via a configuration option? I looked through the configuration options on my local machine via java and java -X, and didn't see anything promising, but I am sure there are people who know much more about Java than I do.
The Firebase emulator has an environment variable available for Java configuration.
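Assuming that variable is the standard JAVA_TOOL_OPTIONS, which any JVM picks up at startup (this is my assumption; check the Firebase docs for the exact name), it can be set inline when starting the emulators, e.g. to cap the heap:
JAVA_TOOL_OPTIONS="-Xmx512m" firebase emulators:start --only firestore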
I would be open to other options if available: telling VSCode to automatically shut down terminals properly before exiting, for example.
Or must this be handled by firebase-tools itself, perhaps via a ShutdownHook?
After a little conversation with Firebase and some troubleshooting, I learned a few things:
This issue only occurs when running firebase emulators:start --only firestore,database: when they run together, the Firestore process never closes. Start either one by itself and everything closes fine. This leads me to suspect that this is a bug.
This is reproducible outside VSCode, leading me to suspect that the issue has nothing to do with VSCode.
I filed https://github.com/firebase/firebase-tools/issues/4657. We'll see how it turns out.
I'm able to deploy a .NET Core console app on PCF which raises some internal events, runs for some time (with the help of Task.Delay()), and exits. I want to be able to start and stop this app remotely, using a batch file from a Windows machine.
When I push this app to PCF I explicitly put the --no-start flag in the push command. The app gets deployed without starting, and I can start it remotely with the cf start command. Once it exits successfully, PCF considers it crashed and tries to restart it, so in order to restart it myself I first need to run cf stop and then cf start.
I need help in understanding whether there is a better way to do this. Originally we were planning to use Tasks on PCF, but as per my understanding a Task is a command that runs in the context of an existing application (please correct me if I am wrong).
Any thoughts will help.
Thanks in advance.
I modified my app logic to achieve this. I did the following:
Deployed app with --no-start flag
In the app's entry method, I check the value of an argument passed from the command line:
if arg == required_key then run the job, else return
I do a cf start, which builds and stages the app; the app starts but, since the required key is not passed, it displays no results on the console
cf stop
cf run-task APP_NAME "dotnet appname.dll required_key"
The above task runs one time and destroys itself.
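Putting the steps together, the whole flow looks roughly like this (APP_NAME, appname.dll and required_key are the placeholders from above; cf tasks is just an optional status check):
cf push APP_NAME --no-start
cf start APP_NAME
cf stop APP_NAME
cf run-task APP_NAME "dotnet appname.dll required_key"
cf tasks APP_NAME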
We tried this POC to deploy code via AWS CodeDeploy on 20 live servers behind a load balancer. We run nginx in front of HipHop (HHVM). We tried hot deployment, i.e. deploying while nginx was running.
As soon as the deployment process moves the new files into place on the production servers, we start getting the following error, which continues indefinitely on some servers, and the Jenkins job times out after polling for 50 minutes:
Fatal error: syntax error, unexpected $end in /path/to/file.php on line 19477
It appears that only part of the file gets loaded and parsed, even though the file in its entirety has no syntax errors.
Restarting nginx on such servers manually fixes the problem, but that does not seem to be a good solution.
We are trying to find out the reason behind this issue.
HHVM version being used - HipHop VM 3.12.0-dev (rel)
Nginx version - 1.8.0
Alternative approach
We are now trying cold deployment (shut down nginx, deploy, then start nginx again), but that too is throwing up its own issues. I will not post those details here, but the idea is to take advantage of the large number of servers we have and do the cold deployment in such a way that only a small percentage of the servers behind the LB have nginx off at any time, so that it does not put too much load on the running servers.
CodeDeploy will indeed replace files during a deployment. I recommend you try your approach of doing a cold deployment, in which you fully shut down before deploying and start up again after it's done.
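As a sketch, the stop/start can live in two small scripts referenced from the deployment's appspec.yml lifecycle hooks (the hook names ApplicationStop and ApplicationStart are CodeDeploy's; the script bodies below are just one way to do it):
# stop_nginx.sh - wired to the ApplicationStop hook, runs before files are replaced
sudo service nginx stop
# start_nginx.sh - wired to the ApplicationStart hook, runs after files are in place
sudo service nginx start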
I am trying to generate Eclipse and IDEA projects for a Play project using activator. When I try this, it makes some progress but then hangs at:
Waiting for lock on C:\Users\James\.ivy2\.sbt.ivy.lock to be available...
At first I thought my running session of IntelliJ IDEA might be conflicting, but after I killed IDEA the issue persisted. I closed activator, deleted the lock file, and restarted activator, but it recreated the lock file and gave me the same issue. This is the full log I have been getting:
[info] Loading project definition from C:\Users\James\play-java\project
[info] Set current project to play-java (in build file:/C:/Users/James/play-java/)
[info] Applying State transformations com.typesafe.sbtrc.SetupSbtChild from C:/Users/James/.sbt/boot/scala-2.10.2/com.typesafe.sbtrc/sbt-rc-probe-0-13/1.0-1a8f7afd5ba98b45834ff53dd349130c3ade22f1/sbt-rc-probe-0-13-1.0-1a8f7afd5ba98b45834ff53dd349130c3ade22f1.jar;C:/Users/James/.sbt/boot/scala-2.10.2/com.typesafe.sbtrc/sbt-rc-probe-0-13/1.0-1a8f7afd5ba98b45834ff53dd349130c3ade22f1/sbt-rc-props-1.0-1a8f7afd5ba98b45834ff53dd349130c3ade22f1.jar
[info] Updating {file:/C:/Users/James/play-java/}root...
Waiting for lock on C:\Users\James\.ivy2\.sbt.ivy.lock to be available...
How can I fix the lockfile issue?
The .sbt.ivy.lock file is used to synchronize access to your local ivy2 repository between several processes, so that they cannot modify the directory simultaneously. The situation you describe usually happens when you have an IDE and an Activator/sbt terminal running at the same time.
Even though you killed the IDEA process, it could have spawned another process that holds the lock, so next time make sure you kill all Java processes. The best solution, however, is to avoid one process locking out another: run only one process that uses the ivy2 repository at a time.
If nothing else works, the last resort is to delete the .lock file.
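Since the paths above suggest Windows, a sketch for finding and killing the stray Java processes there (check the tasklist output before killing anything):
tasklist | findstr /i java
taskkill /F /PID <pid>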
I faced a similar issue and was getting the same error:
Waiting for lock on C:\Users\ajain9\.ivy2\.sbt.ivy.lock to be available...
It finally turned out that another sbt process was running on my system, and because of that the .sbt.ivy.lock file was not available. As Daniel explained, the lock file is used for synchronization.
Once the previous process ended, I did not face this error again.
Mac users: just run ps -ef | grep -i sbt and kill the process.
I am doing some testing to determine the resource usage of a Rails WAR. I have used Warbler to package the "15-minute blog" application using Rails 2.3.5 and JRuby 1.4.0. I am deploying into Tomcat 6.0.24 and creating multiple deployments by copying the blog.war file as blogN.war.
This worked great for the first 4 deployments, but I can't seem to deploy more than 4 instances of the WAR; the catalina.out log hangs at "Deploying web application archive blog5.war".
Any ideas on what the problem might be or how I might better trouble-shoot this?
Increasing PermGen space with "-XX:PermSize=64m -XX:MaxPermSize=128m" corrected this problem.
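One common place to put these flags is CATALINA_HOME/bin/setenv.sh, which Tomcat sources on startup (a sketch; create the file if it does not exist, and heap flags such as -Xmx can go in the same variable):
export CATALINA_OPTS="-XX:PermSize=64m -XX:MaxPermSize=128m"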
Check your log files; it may be the case that the Java process in which Tomcat runs is out of memory. See the Java parameters (-Xmx, -Xms) and http://wiki.apache.org/tomcat/FAQ/Memory. Increasing the available memory may allow you to run more instances of the application.