Increasing the memory limit for Composer doesn't work - Symfony

I'm trying to require a bundle (FOSUserBundle) in my Symfony project, but it doesn't work; apparently I have to increase the memory_limit, and I can't get that to work either.
I've already looked for the php.ini file to change the memory_limit, but there is no php.ini file in my Symfony project. Why is it not there? Have I forgotten to install something? And if I want to add it manually, where should I put it? There are php.ini files in the MAMP folder; I've tried changing the memory_limit value there, but that doesn't help.
I've also tried running this in the terminal after navigating to the right project folder: php -d memory_limit=2G composer update.
The output I get is more or less this:
Nothing to install or update
Generating autoload files
Incenteev\ParameterHandler\ScriptHandler::buildParameters
Updating the "app/config/parameters.yml" file
Sensio\Bundle\DistributionBundle\Composer\ScriptHandler::buildBootstrap
Sensio\Bundle\DistributionBundle\Composer\ScriptHandler::clearCache
Here is the error I get after trying to require the bundle:
Fatal error: Allowed memory size of 1610612736 bytes exhausted (tried to allocate 4096 bytes) in...
As you might see, I'm new to Symfony and Composer. Can you please help me out?
Thanks in advance!

There are a few things that will help a lot.
Ensure you are running the latest version of Composer.
Ensure you are running a recent version of PHP - PHP 7 improved memory use substantially, sometimes halving the amount of memory used. As of now (summer 2019), that means 7.2 or, better, 7.3.
Actively limit the number of package versions that Composer has to consider when working out what can validly be installed.
Roave/SecurityAdvisories is a good start. It stops you from installing package versions that have known security issues, and it also limits the search space for valid packages, allowing Composer to ignore large swathes of possible packages, meaning it does not need to hold large amounts of data for the various potential combinations.
You can add other versions of packages to further narrow the search-space. For example, you may have a number of wildcard "*" versions (also known as 'The Death Star Version Constraint') - which are almost always a bad idea. Mostly, a version number of the form "^3.4" or "^4.3" would be better - allowing upgrades from bug-fix versions and features (the 3rd and 2nd numbers), but not major versions, which can often contain breaking changes.
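As a sketch, a require section following that advice might look something like this (the version numbers are purely illustrative):
"require": {
    "php": "^7.3",
    "roave/security-advisories": "dev-master",
    "friendsofsymfony/user-bundle": "^2.1",
    "doctrine/orm": "^2.6"
}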

I solved it by setting COMPOSER_MEMORY_LIMIT in the composer.json file, under the 'config' section:
"COMPOSER_MEMORY_LIMIT": "2G"

Related

Can I point pre-commit mypy hook to use a requirements.txt for the additional_dependencies?

I would like to use exactly the same version of flake8 in requirements.txt and in .pre-commit-config.yaml.
To avoid redundancy I would like to keep the version number of flake8 exactly once in my repo.
Can pre-commit.com read the version number of flake8 from requirements.txt?
it cannot
pre-commit intentionally does not read from the repository under test as this makes caching intractable
you can read more in this issue and the many duplicate issues linked there
for me, I no longer include flake8, etc. in my requirements files as pre-commit replaces the need to install linters / code formatters elsewhere
disclaimer: I created pre-commit
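In that setup the pin then lives only in .pre-commit-config.yaml, roughly like this (the repository URL and flake8 version are just an example):
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 6.1.0
    hooks:
      - id: flake8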

How can you make ASDF stop trying to load a nonexistent file?

On Debian, I had a bunch of cruft installed in /usr/lib/sbcl/site-systems that wouldn't load because the FASLs didn't match the version of SBCL that is actually installed.
For some reason, none of these files were associated with any Debian package (this is an old computer that has been running the same Debian install for over a decade; it's on Debian Sid).
I deleted the bad systems one at a time, and for most of them, Quicklisp did the right thing and downloaded the Quicklisp version. Sometimes, ASDF would insist that the system should exist at its previous path, but restarting SBCL got past that problem.
But for one system, ASDF has persistently cached the location of its .asd file as being in the /usr/lib/sbcl/site-systems/ directory. Loading this system is impossible because ASDF will not look anywhere else, even after restarting SBCL.
I tried looking in all the paths specified in various config files under /etc/common-lisp. None of those files contain a reference to the now-missing library.
I've resorted to doing a grep -rli across all the files under /usr. I don't expect that to complete in less than a day, and it might not find anything, in which case I'll be forced to grep the whole hard drive, which might take a whole week. Hopefully, the cache isn't compressed, because then I'll never find it.
Does anyone happen to know how ASDF persists the paths of files?
After a lot of excruciating debugging, I discovered that the files in /usr/lib/sbcl/site-systems/ actually do exist. They're broken symlinks.
The files I deleted were in the similar-looking path /usr/lib/sbcl/site/, to which the symlinks pointed.
Removing the symlinks fixed all the loading errors.
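For anyone hitting the same thing, a quick way to list broken symlinks in that directory (GNU find):
find /usr/lib/sbcl/site-systems -xtype l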
A couple of ideas about troubleshooting Quicklisp, particularly if you're getting bizarre behavior:
If you use Quicklisp for any length of time, you'll probably eventually use local packages, found by default in ~/quicklisp/local-projects. It's valid to symlink your projects into that directory (see the sketch after this list). If you ever rename one of your projects, of course, don't forget to create a new symlink and delete the old one.
Likewise, if you rename a local project, also delete the system index, which Quicklisp will recreate the next time it runs: ~/quicklisp/local-projects/system-index.txt. It doesn't hurt to delete it from time to time just to keep your system fresh.
Your *.fasl files can become stale too; deleting the system cache forces Quicklisp to recompile everything. On an Ubuntu system running SBCL, that would mean deleting the contents of:
rm -rf ~/.cache/common-lisp
Try updating the Quicklisp client:
(ql:update-client)
Deleting and reinstalling Quicklisp itself at ~/quicklisp can occasionally be necessary. (It's possible to inadvertently edit source files when you're debugging and using Swank's lookup-definition feature, breaking installed packages that used to work. Not that I would ever have done something as careless as that.)
Also, don't forget that ASDF descends into directories looking for *.asd files. If you have a stray one that's improperly structured, it can cause havoc in your build system. (See my comment above about registering local projects with Quicklisp.)
Finally, don't forget to check your Lisp init file, e.g. .sbclrc, for any debugging or quick-and-dirty hacks you might have left there and forgotten about.
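For reference, the local-projects bookkeeping from the first two points boils down to something like this (the project name and path are illustrative):
ln -s ~/projects/my-system ~/quicklisp/local-projects/my-system
rm ~/quicklisp/local-projects/system-index.txt   # Quicklisp rebuilds this on its next run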
These are all things that have worked for me at one time or another; hopefully I'm not perpetuating legend and cant about things that have long since been fixed!

meld - GLib-GIO-ERROR**: No GSettings schemas are installed on the system

I have finally installed meld 3.14.2 (on an NFS share on a Red Hat 6.3 server) after nearly 40 hours of effort, installing each and every dependency, and it at last seems to be successful. But one final error needs to be solved:
(meld:20703): GLib-GIO-ERROR **: No GSettings schemas are installed on the system
Trace/breakpoint trap (core dumped)
There was an answer here: GLib-GIO-ERROR**: No GSettings schemas are installed on the system
I'm not familiar with this jargon, so please explain in detail what to do.
Do I need to set the variable $XDG_DATA_DIR or not? If so, why, and what should its value be?
I can also see that the compiled file is already located in MyApp/share/glib-2.0/schemas.
However, I have also tried the following, even though the compiled schema is already there:
glib-compile-schemas <PATH_TO_SCHEMAS> --targetdir=MyApp/share/glib-2.0/schemas
But I am still getting the error. I have also tried the variable, setting it to MyApp and to MyApp/share/glib-2.0/schemas. That doesn't work either.
I have also tried reinstalling gsettings-desktop-config; still the same error. In my case it's version 3.12.
So, what's going on here? Please explain. I have been sleepless. :(
Thank you!
Also, for your information, I have installed all the dependencies (GTK+, ATK, CAIRO, PANGO, etc.) under the same installation directory, with prefix=<base>/meld/deps.
Example:
meld binaries reside in <base>/meld/bin/
cairo binaries are installed in <base>/meld/deps/bin/
atk binaries are installed in <base>/meld/deps/bin/
The other dependencies follow the same pattern.
Well, I am unsure why you are installing it to its own prefix... but just setting GSETTINGS_SCHEMA_DIR to the full path of the schema dir should work.
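Given the layout described in the question, that would be roughly the following (substitute the real <base>, and point the variable at the directory containing the compiled gschemas.compiled file):
export GSETTINGS_SCHEMA_DIR=<base>/meld/share/glib-2.0/schemas
<base>/meld/bin/meld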

Changed setting in php.ini for max_execution_time, Drupal still having problems

Ok, I am trying to do some work for a client. Currently we're just trying to do a kludgy fix for a custom function another developer created, which exports a CSV of all of their contacts.
As they've added more contacts, the function takes longer, and it requires more memory (we've had to up their memory limit from 128M to 256M already, and now up to 512M).
Anyway, I'm trying to run the CSV exporter, and I'm getting the following errors, on different runs:
Fatal error: Maximum execution time of 30 seconds exceeded in /home/readyby2/public_html/includes/module.inc on line 19
Fatal error: Maximum execution time of 30 seconds exceeded in /home/readyby2/public_html/sites/all/modules/contrib/views/modules/field/views_handler_field_field.inc on line 0
and every now and again I am getting an Internal Server Error 500
Note, this function HAS worked before - and I have seen it work.
Now, as far as I can tell, this means that I need to change the maximum execution time setting in php.ini.
There is a php.ini file outside of the /public_html folder (which is where the files for the Drupal install are kept) and I've modified that setting - but I still get all of the errors above. I've changed it to 60, 90, and 0; no change.
Is there anywhere else where timeout settings are set for a Drupal install? Am I finding the right php.ini file? I'm not seeing one in the /public_html directory, or any others.
How can I solve this issue? I've already upped the memory limit (could the server error imply that there isn't enough memory available on the server to perform this function?)
Any advice or help would be useful.
As an FYI, this is in Drupal 7. PHP version is 5.x (I don't remember the exact one; I think it is 5.1, and can find out if necessary).
Where the correct php.ini is will depend a lot on your hosting environment, but you can quickly get around it by using either ini_set() or the set_time_limit() function in the script.
You can do the ini_set() in settings.php, but then it will be global for the whole site.
Add ini_set('max_execution_time', 120); to settings.php,
or in your script call set_time_limit(120) - this doesn't work if PHP safe mode is on!
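A minimal sketch of those two options (120 seconds is just an example value):
// In sites/default/settings.php - applies to the whole site:
ini_set('max_execution_time', 120);
// Or inside the export script itself - no effect when PHP safe mode is on:
set_time_limit(120);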
Usually the php.ini file is somewhere like /etc/php5/apache/php.ini or similar. Maybe just /etc/php.ini on more simple setups.
You might want to consult your package management system: $> rpm -ql php5, or the apt alternative if you're on a Debian-based system. That lists the files contained in the package; the php.ini file should be amongst them. Depending on your distribution you might also have to look at the package apache-mod_php5 or similar. Search for php inside your package management, for example with $> rpm -q --whatprovides /usr/bin/php5 or $> rpm -qa | grep -i php.
Call phpinfo() within the Drupal environment. In the output, PHP specifies which php.ini file has been parsed.
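If you have shell access, the CLI can also tell you which ini files it loads - though the CLI and the web server may read different ones:
php --ini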
You can always try it in your site's settings.php (sites/default/settings.php); check that you have the rights to write to it.
ini_set('memory_limit', '256M');
It overrides php.ini.
But I don't think you can fix it just by increasing memory (you can't raise your limit every time you hit a memory problem).
You must find what causes the limit to be reached (an infinite loop, too many results...).
You can always use the Views module and its CSV export to make a good CSV with a good view :D Just remember to override the pager for the CSV.
Another option is to add the following to Drupal's .htaccess:
php_value max_execution_time 0

How to make Buildout leave temporary files around for debugging

When running bin/buildout I get
Develop: '/fast/vs/zinstance/src/plonetheme.x'
Develop: '/fast/vs/zinstance/src/x.content'
Develop: '/fast/vs/zinstance/src/x.puhelinluettelo'
Updating zope2.
Updating fake eggs
Updating productdistros.
Installing instance.
Getting distribution for 'simplejson==2.0.9'.
No eggs found in /var/folders/3Z/3Z3hsxKxGm8ULSBqCyTuBk+++TI/-Tmp-/easy_install-EOxukI/simplejson-2.0.9/egg-dist-tmp-TCeJDh (setup script problem?)
While:
Installing instance.
Getting distribution for 'simplejson==2.0.9'.
Error: Couldn't install: simplejson 2.0.9
The easy_install temp folder is not available after the buildout run. How do I tell buildout not to delete temporary files, so that I can inspect the problem? Don't focus on the egg - manually downloading this particular egg works. I want to work out why buildout is doing something I don't understand and failing there.
I am not yet aware of any switch to get it to do this.
Instead, I've manually debugged the process from the Python debugger, which is not something I can recommend you do unless you enjoy exploring the deep, dark, twisted passages of easy_install, all alike.
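If you do go that route, a low-tech starting point is to run the buildout script under pdb and set breakpoints interactively before the temporary directory gets cleaned up (this assumes bin/buildout is a plain Python script, which it normally is):
python -m pdb bin/buildout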

Resources