I have this same error as others when running php ~/composer.phar update:
The following exception is caused by a lack of memory and not having swap configured
Check https://getcomposer.org/doc/articles/troubleshooting.md#proc-open-fork-failed-errors for details
Fatal error: Uncaught exception 'ErrorException' with message 'proc_open(): fork failed - Cannot allocate memory' in phar:///home/tea/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php:974
Stack trace:
#0 [internal function]: Composer\Util\ErrorHandler::handle(2, 'proc_open(): fo...', 'phar:///home/te...', 974, Array)
#1 phar:///home/tea/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(974): proc_open('stty -a | grep ...', Array, NULL, NULL, NULL, Array)
#2 phar:///home/tea/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(784): Symfony\Component\Console\Application->getSttyColumns()
#3 phar:///home/tea/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(745): Symfony\Component\Console\Application->getTerminalDimensions()
#4 phar:///home/tea/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(675): Symfony\Component\Console\Application->getTerminalWidth()
#5 phar:///home/tea/composer in phar:///home/tea/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php on line 974
...but with a large instance: 4 GB RAM and 4 GB swap. The free RAM is never exhausted, let alone the available/cached RAM, and the swap isn't touched!
              total        used        free      shared  buff/cache   available
Mem:           3788         885        1908           9         993        2692
Swap:          3967           0        3967
It's the first time running composer update on this new machine, CentOS/CloudLinux 7.1 (with cPanel).
In desperation, I've tried
# php -dmemory_limit=1G ../composer.phar update --no-scripts --prefer-dist
and I've tried removing the composer.lock file and the vendor directory, and even tried adding a temporary swap file, but it really doesn't seem to be a memory problem - could the error message be misleading?
proc_open is not disabled, and I also tried with shell fork bomb protection disabled, but no joy.
Would love a heads up.
N.B. I'm aware of the advice to commit the composer.lock file and do a composer install but this instance is being used for dev (as was the previous CentOS/CloudLinux 6.x machine with smaller resource specs) so we need to use the same methods we were using previously.
OK, so it was CloudLinux limiting the user's memory to 1024 MB - the update works when the limit is doubled to 2048 MB.
That's the same setting as on our previous server (CentOS/CloudLinux 6.x), but it looks like each version of CentOS is more memory hungry than the last.
What's weird is that running composer with --profile shows the most it uses is 482 MB. Even if that doubles when forking (as has been suggested), it's still below the 1024 MB limit.
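One detail that can explain the gap: at fork() time the kernel accounts for a full copy of the parent's address space, so a PHP process peaking around 482 MB can briefly need nearly double that the moment proc_open() forks. The limits the process actually runs under can be read from /proc (this is generic Linux; CloudLinux enforces its LVE caps on top of these):

```shell
# Show the caps the current process is subject to; with strict overcommit,
# fork() is checked against the address-space figure, not current usage.
grep -E 'Max (address space|data size|processes)' /proc/self/limits
```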
I ran into the same problem. My system had 1.5 GB RAM free and it was not enough... Composer was eating through memory very fast.
My only solution was to clear the cache and update to the latest version (1.4.2):
composer clear-cache
sudo composer selfupdate
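If the limit rather than the cache is the culprit, Composer also honours its own COMPOSER_MEMORY_LIMIT environment variable, which overrides PHP's memory_limit for that one run (-1 means no limit). A sketch - the env | grep line just confirms the variable reaches a child process:

```shell
# One-off memory ceiling for Composer itself (-1 = unlimited):
#   COMPOSER_MEMORY_LIMIT=-1 php composer.phar update
# Confirm the variable is exported to a child process:
COMPOSER_MEMORY_LIMIT=-1 env | grep COMPOSER_MEMORY_LIMIT
```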
That happens when you have low memory resources and swap isn't enabled. I had the same problem and fixed it with the few commands below - or you can create a swap partition or file instead; just make sure you activate the swap.
$ /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
$ /sbin/mkswap /var/swap.1
$ /sbin/swapon /var/swap.1
I hope it works for you...
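After swapon, it's worth confirming the kernel actually registered the file - and, assuming you want the swap to be permanent, adding an /etc/fstab entry so it survives reboots:

```shell
# Confirm active swap devices (readable without root):
cat /proc/swaps
free -m
# Persist across reboots - append as root to /etc/fstab:
#   /var/swap.1 none swap sw 0 0
# Also restrict the swap file's permissions: chmod 600 /var/swap.1
```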
I get the following error when importing an SVN repository to git with svn2git:
fatal: EOF in data (285 bytes remaining)
Does anyone know what this error means?
This is caused by a segmentation fault: there is a branch/tag in your repository that is causing svn2git to dump core.
To get the core files you will need to enable core dumps:
Uncomment (or add) this line in /etc/security/limits.conf:
* soft core unlimited
Run svn2git again; it may take up to 2 hours to hit the segmentation fault. Install gdb:
yum install gdb
Analyse the core:
gdb svn2git/svn-all-fast-export core.NNNN
Get a back trace, type:
bt
You should see the branch/tag which caused problems in the back trace. Exclude the branch from processing by updating your ruleset:
match /branches/broken_branch_name
end match
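The steps above can be condensed into one shell session (the svn-all-fast-export path, rules file, and core file name are placeholders - adjust them to your checkout):

```shell
# Allow core dumps in this shell before reproducing the crash:
ulimit -c unlimited
ulimit -c    # report the core-size limit now in effect
# Reproduce and analyse (commented out - long-running and site-specific):
#   ./svn-all-fast-export --rules my.rules /path/to/svnrepo
#   gdb ./svn-all-fast-export core.NNNN -batch -ex bt   # non-interactive backtrace
```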
See issue opened with owner of svn2git here:
https://github.com/svn-all-fast-export/svn2git/issues/26
Or, even easier: run pstack <pid of svn2git> and you will see where it is stuck; then Ctrl+C, add the dud branch to your rule set, and start svn2git again.
I'm trying to clone a reasonably big svn repository with git-svn and at a certain point I get a error message:
Failure loading plugin: APR: Can't create a character converter from 'UTF-8' to native encoding: Cannot allocate memory at /usr/libexec/git-core/git-svn line 5061
And sometimes a
Cannot allocate memory: zlib (compress2): out of memory: Compression of svndiff data failed at /usr/libexec/git-core/git-svn line 5061
error message. I still have ~3GB RAM free. What should I do so git-svn can utilize it?
(I'm doing this on RedHat Enterprise Linux 6.5 if that makes any difference)
From:
This error message is about the memory git is trying to allocate --
it's more than what is free. This is most likely caused by a large
file having been checked into SVN. Unfortunately, there's no easy way
to fix it (apart from buying more memory) -- you would have to remove
the large file and the commit adding it from SVN.
However, try the following:
Increase swap memory
Increase ulimit
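A sketch of both suggestions (the swap commands mirror the pattern used elsewhere in this thread and need root; the ulimit calls only affect the current shell and whatever it launches):

```shell
# Raise the per-process limits git-svn will inherit from this shell:
ulimit -v unlimited 2>/dev/null || true   # virtual memory
ulimit -d unlimited 2>/dev/null || true   # data segment size
ulimit -a                                 # show the limits git-svn will see
# Add swap (as root):
#   dd if=/dev/zero of=/swapfile bs=1M count=2048
#   mkswap /swapfile && swapon /swapfile
```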
I'm trying to install Heroku / HHVM / WordPress on a Debian 6 64-bit VPS to test this kind of setup for my blog (Nginx + MySQL + PHP-FPM + Varnish + WordPress on another Debian 6 64-bit VPS), following the recent and promising guide done by Xiao Yu and available on GitHub.
I'm absolutely new to Heroku/Ruby and I'm afraid I'm quite lost when something unexpected happens. The installation guide seemed straightforward, but it isn't clear which packages I need to install first (PHP-FPM? Nginx? Or does this script install those by itself?) and I'm stuck on this step:
git push heroku production:master
When I execute that, I get this:
Initializing repository, done.
Counting objects: 344, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (162/162), done.
Writing objects: 100% (344/344), 72.73 KiB, done.
Total 344 (delta 139), reused 342 (delta 139)
-----> PHP app detected
! ERROR: Could not resolve composer.lock requirement for HHVM 3.1.0,
please adjust the version selector. The following runtimes are available:
hhvm-3.2.0 php-5.5.11 php-5.5.12 php-5.5.13 php-5.5.14 php-5.5.15
php-5.6.0RC4
! Push rejected, failed to compile PHP app
To git@heroku.com:xxxxxx-fortress-xxxx.git
! [remote rejected] production -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:xxxxxxx-fortress-xxxx.git'
I've tried to take a look at composer.json, edit it and include a
"php": "~5.5.11",
line in the require section, but that doesn't work... unless I have to do something else first (update composer.lock? How?), which I'm not sure about.
What am I doing wrong?
Thanks!
HHVM 3.1.0 is not available (anymore), as the error message points out. You would however have to update composer.lock too.
Your best bet is to just update from the template; it's been fixed there: https://github.com/xyu/heroku-wp/commit/2a0ea2097597f72c401a63c070a14ec5031ffc9d
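If you'd rather patch it by hand, the idea is to point the hhvm requirement at a runtime Heroku still offers and then regenerate composer.lock with composer update. A sketch using a stand-in composer.json (the real file has more entries; the version strings come from the error message above):

```shell
cd "$(mktemp -d)"
# Stand-in for the project's composer.json:
cat > composer.json <<'EOF'
{
    "require": {
        "hhvm": "3.1.0"
    }
}
EOF
# Move the requirement to an available runtime:
sed -i 's/"hhvm": "3.1.0"/"hhvm": "~3.2"/' composer.json
grep '"hhvm"' composer.json
# Then regenerate the lock file and push again:
#   php composer.phar update
#   git commit -am "bump hhvm requirement" && git push heroku production:master
```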
I am currently trying to use pkgmk on a Solaris10-x86 machine. When I run the command
pkgmk -o -b $(HOME)/solbuild/pkg_solaris
it returns this error:
## Building pkgmap from package prototype file.
pkgmk: ERROR: memory allocation failure
## Packaging was not successful.
My first thought was that this is an out of free memory error, however I am not sure that it could be. I have close to a gigabyte free in the / partition and 12 gigabytes free in the $(HOME) partition.
Any help would be greatly appreciated.
I saw this error when /var was full.
Deleting some files from /var resolved the problem.
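For the next person: pkgmk writes its output under /var/spool/pkg by default, so a full /var can surface as this "memory allocation failure" even when / and $HOME have space. Two quick checks (GNU and Solaris df flags differ slightly; -h is assumed to be available):

```shell
df -h /var                                      # is the filesystem holding /var full?
du -sh /var/* 2>/dev/null | sort -h | tail -5   # largest directories under /var
```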
I'll be answering my own question to provide breadcrumbs for the next person who hits this:
Problem
x86Linux Maven build fails during flexmojos-maven-plugin with
load-config+=...flex-config.xml -static-link-runtime-shared-libraries...
-metadata.language+=en_US
[INFO] Loading configuration file .../flex-config.xml
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 37064 bytes for Chunk::new
[ERROR] OutOfMemoryError -> [Help 1]
Solution
Increase system swap. flexmojos calls out to the Flex compiler, a native executable that requires memory outside of what is allocated to the JVM. If you run low on memory and can't swap out Maven's JVM, the Flex compiler fails.
I added additional swap and was able to complete the build successfully.
# create swap file
dd if=/dev/zero of=/opt/swapfile.1 bs=1M count=2048
# Set Permissions
chmod 600 /opt/swapfile.1
# Define as swap
mkswap /opt/swapfile.1
# Add to active swap
swapon /opt/swapfile.1
# Verify
free -m