Local by Flywheel - Adminer - memory exhausted fatal error - out-of-memory

This is the error I get when trying to import a 30MB SQL file:
Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 264241184 bytes) in /Applications/Local.app/Contents/Resources/extraResources/adminer/adminer.php on line 95
I'm going to try and work my way through this, and will post the answer here if I sort it out. If anyone has a quick/simple fix, please share.
thanks,
Jason

Click on Open site shell.
Once you're in the terminal, move your SQL file to your app's public folder, and then run:
wp db import your-file-name.sql

Related

DBD::SQLite::db commit failed: disk I/O error

I have a system writing data to an SQLite file. I had everything operational under CentOS 8. After upgrading the system to Rocky Linux 9, I see this error when running a commit command: DBD::SQLite::db commit failed: disk I/O error
I have checked file permissions, disk space, SMART readings, everything disk related that I can think of but without success.
Has anyone encountered this error before? What could I try to fix it?
The problem turned out to be a missing Perl module (LWP::https) that was preventing DBD::SQLite from getting the data it wanted. Apparently, DBD::SQLite reports "disk I/O error" in that case.

reading a huge csv file using cudf

I am trying to read a huge CSV file with cuDF but get memory issues.
import cudf
cudf.set_allocator("managed")
cudf.__version__  # '0.17.0a+382.gbd321d1e93'

user_wine_rate_df = cudf.read_csv('myfile.csv',
                                  sep="\t",
                                  parse_dates=['created_at'])
terminate called after throwing an instance of 'thrust::system::system_error'
what(): parallel_for failed: cudaErrorIllegalAddress: an illegal memory access was encountered
Aborted (core dumped)
If I remove cudf.set_allocator("managed"), I get:
MemoryError: std::bad_alloc: CUDA error at: /opt/conda/envs/rapids/include/rmm/mr/device/cuda_memory_resource.hpp:69: cudaErrorMemoryAllocation out of memory
I am using cuDF through rapidsai/rapidsai:cuda11.0-runtime-ubuntu16.04-py3.8.
I wonder what could be the reason for running out of memory, while I can read this big file with pandas.
Update: I installed dask_cudf and used dask_cudf.read_csv('myfile.csv'), but I still get
parallel_for failed: cudaErrorIllegalAddress: an illegal memory access was encountered
If the file you are reading is larger than the available GPU memory, you will observe an OOM (out-of-memory) error, as cuDF runs on a single GPU.
To read very large files, I would recommend using dask_cudf.
Check out this blog post by Nick Becker on reading files larger than GPU memory. It should get you on your way.
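For reference, here is a minimal dask_cudf sketch, assuming the same tab-separated myfile.csv with a created_at column as in the question:

import dask_cudf

# Read the CSV lazily in partitions instead of loading it all into GPU memory at once.
ddf = dask_cudf.read_csv('myfile.csv',
                         sep='\t',
                         parse_dates=['created_at'])

# Work stays lazy until it is computed; keep computed results small enough for the GPU.
print(ddf.head())   # materializes only the first partition
print(len(ddf))     # triggers a full pass over the file

Depending on the dask_cudf version, a chunksize or blocksize argument to read_csv controls how much of the file each partition holds, which is the main knob if individual partitions still do not fit in GPU memory.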

csync/sqlite error when running ownCloud command

I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However, it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or SQLite, so I am a bit in the dark; although I can find talk of this issue through googling, I can't find a fix. The data in this case can be dumped to start over, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have read about changing PRAGMA settings to ignore the error (or the check), but I can't see how this is implemented either.
Is anyone able to show me how to clear out the corruption?
*The local path is a mounted AWS S3 bucket, but I think this is irrelevant because it works fine on other systems.
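For what it's worth, a quick way to see whether the journal database itself is readable is to run the same checks directly with Python's sqlite3 module. This is only a sketch: the journal path below is a placeholder (the client usually keeps it as a hidden .db file in the sync root, but the exact name varies by version):

import os
import sqlite3

# Placeholder path: point this at the journal file your csync/owncloudcmd client created.
journal = '/path/to/sync-root/.csync_journal.db'
assert os.path.exists(journal), journal

con = sqlite3.connect(journal)
try:
    # The same checks csync runs; a healthy database returns [('ok',)] for both.
    print(con.execute('PRAGMA quick_check;').fetchall())
    print(con.execute('PRAGMA integrity_check;').fetchall())
finally:
    con.close()

If both checks pass when the file is copied to a local disk but fail on the mounted path, the "disk I/O error" is more likely coming from the S3 mount than from the database file itself.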

JBoss 6 startup failed : HSQLDB - out of memory issue

Please explain why I am not able to start the JBoss server when I add an EAR file. While starting, I get an error like this:
Deployment "vfs:///D:/Servers/jboss-6.0.0.Final/server/all/deploy/hsqldb-ds.xml" is in error due to the following reason(s): java.sql.SQLException: Out of Memory
Please help me.
Thanks in advance.
Finally I was able to find the issue. The localDB.backup, localDB.data, localDB.lck, localDB.log, localDB.properties, and localDB.script files are saved in jboss6/server/all/data/hypersonic. Delete all of those files and restart the server, and it will start cleanly. The reason is that whenever the server starts, it checks this folder and tries to load the previously deployed info from these backup files, so any incomplete deployment can corrupt them and break the next startup.
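If you want to script that cleanup, a small sketch (using the path quoted in the answer; adjust it to your own JBoss install, and stop the server first) could look like this:

import glob
import os

# Path from the answer above; change it to match your JBoss installation.
data_dir = 'jboss6/server/all/data/hypersonic'

# Remove the HSQLDB localDB.* files so JBoss recreates a clean database on the next start.
for path in glob.glob(os.path.join(data_dir, 'localDB.*')):
    print('removing', path)
    os.remove(path)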

Memory size of 1073741824 exhausted - Twig Environment Symfony2

For some reason, I am suddenly getting the following Fatal error when clearing my cache in my Symfony2 project:
Fatal error: Allowed memory size of 1073741824 bytes exhausted (tried to allocate 130968 bytes) in /Applications/XAMPP/xamppfiles/htdocs/instorecrm/vendor/twig/twig/lib/Twig/Environment.php on line 286
When I look at line 286 in the Environment.php file, it is this function:
public function getCacheFilename($name)
{
    @trigger_error(sprintf('The %s method is deprecated and will be removed in Twig 2.0.', __METHOD__), E_USER_DEPRECATED);

    $key = $this->cache->generateKey($name, $this->getTemplateClass($name));

    return !$key ? false : $key;
}
I cannot think of anything I've done that would cause this; the only change I have made is to the parameters.yml file to stipulate Gmail as my mail host. It does not seem to affect the working of the site (at least I have not found that it does yet), but it worries me that something might be broken.
I am using localhost, if that helps any.
Any help appreciated,
Thank you
Michael
Try cleaning the cache manually by deleting the app/cache folder.
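If the console cache clear keeps exhausting memory, the manual wipe can be scripted as well. A rough sketch (assuming the default Symfony2 app/cache location relative to the project root; the usual shell equivalent is simply deleting everything under app/cache):

import os
import shutil

# Default Symfony2 cache directory; adjust the path if your project differs.
cache_dir = 'app/cache'

for entry in os.listdir(cache_dir):
    path = os.path.join(cache_dir, entry)
    # Remove each environment's cache (dev, prod, ...); Symfony rebuilds it on the next request.
    if os.path.isdir(path):
        shutil.rmtree(path)
    else:
        os.remove(path)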
