Exit status 223 (out of memory) when pushing to IBM Cloud

I am running into trouble deploying apps from my local dev environment. My cf push always fails with an "Exit status 223 (out of memory)" error, irrespective of the app.
I am certain both my IBM Cloud Org and my local environment have sufficient space to work with.
Here is the stack trace:
REQUEST: [2018-02-14T09:02:04-05:00]
GET /v2/apps/7426064e-0d6c-469e-8d6d-01e47728be01 HTTP/1.1
Host: api.ng.bluemix.net
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Connection: close
Content-Type: application/json
User-Agent: go-cli 6.32.0+0191c33d9.2017-09-26 / darwin
10% building modules 8/17 modules 9 active ...node_modules/fbjs/lib/containsNode.js
89% additional asset processing
Hash: 9d08b2614d7a87cb99ad
Version: webpack 2.7.0
Time: 73789ms
                                Asset       Size  Chunks             Chunk Names
    js/bundle.9d08b2614d7a87cb99ad.js     297 kB       0  [emitted]  [big]  main
js/bundle.9d08b2614d7a87cb99ad.js.map     466 kB       0  [emitted]         main
                           index.html  304 bytes          [emitted]
  [0] ./~/react/index.js 190 bytes {0} [built]
  [4] ./client/app/App.jsx 858 bytes {0} [built]
  [5] ./~/react-dom/index.js 1.36 kB {0} [built]
  [6] ./client/default.scss 1.03 kB {0} [built]
  [7] ./client/index.jsx 222 bytes {0} [built]
  [8] ./~/css-loader!./~/sass-loader/lib/loader.js!./client/default.scss 193 kB {0} [built]
  [9] ./~/css-loader/lib/css-base.js 2.26 kB {0} [built]
 [10] ./~/fbjs/lib/EventListener.js 2.25 kB {0} [built]
 [11] ./~/fbjs/lib/ExecutionEnvironment.js 935 bytes {0} [built]
 [12] ./~/fbjs/lib/containsNode.js 923 bytes {0} [built]
 [13] ./~/fbjs/lib/focusNode.js 578 bytes {0} [built]
 [14] ./~/fbjs/lib/getActiveElement.js 912 bytes {0} [built]
 [18] ./~/react-dom/cjs/react-dom.production.min.js 92.7 kB {0} [built]
 [19] ./~/react/cjs/react.production.min.js 5.41 kB {0} [built]
 [20] ./~/style-loader/addStyles.js 6.91 kB {0} [built]
    + 6 hidden modules
Child html-webpack-plugin for "index.html":
    [0] ./~/lodash/lodash.js 540 kB {0} [built]
    [1] ./~/html-webpack-plugin/lib/loader.js!./client/index.html 590 bytes {0} [built]
    [2] (webpack)/buildin/global.js 509 bytes {0} [built]
    [3] (webpack)/buildin/module.js 517 bytes {0} [built]
-----> Build failed
Failed to compile droplet: Failed to compile droplet: exit status 137
Exit status 223 (out of memory)
Staging failed: STG: Exited with status 223 (out of memory)
Stopping instance 0ee88ef2-8cd4-4096-9c3c-dee1870cf758
Destroying container
Successfully destroyed container
Has anyone run into this issue? Does anyone have any ideas on what might be wrong?

Here's what you could try:
Restarting the app
Re-installing npm packages (npm install)
Updating node and npm versions
Increasing the app's memory on IBM Cloud
Reducing the overall memory used by the app
Looking for possible memory leaks
Checking for possible issues with packages (webpack, etc.)
Here's what worked for me:
In my Node.js package.json, I added:
"engines": {
  "node": ">= 7.0.0",
  "npm": ">= 4.2.0"
}
I believe the issue was a mismatch between IBM Cloud's default npm version and the version I was using in my local environment. Once I specified the versions in my package.json, IBM Cloud was able to complete the build and deploy.
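For reference, you can read the versions to pin straight from your local toolchain (standard Node/npm commands; the sample outputs are illustrative):
node -v   # prints the local node version, e.g. v7.10.1
npm -v    # prints the local npm version, e.g. 4.2.0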
If people have a better understanding of what the error was and why this solution worked, please share.

Please check your application's available memory, and check whether your application produces any memory leaks.
The quickest thing to try is to increase the memory allocation for your app:
Log in to the IBM Cloud dashboard.
Select your app and increase its MEMORY QUOTA.
This will restart the app.
Try pushing again.
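If you prefer the command line, the same change can be made with the cf CLI (a sketch; "myapp" and the 1G value are placeholders):
cf scale myapp -m 1G -f   # raise the memory quota; -f skips the restart prompt
cf push                   # then try the push again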

The error is saying that staging has failed because the process used up too much memory. In short, whatever is running is exceeding the memory limit for the staging container.
Failed to compile droplet: Failed to compile droplet: exit status 137
Exit status 223 (out of memory)
Staging failed: STG: Exited with status 223 (out of memory)
You've got three options for working around this.
Cloud Foundry sets the memory limit for the staging container to either an operator-defined value or the memory limit you picked for your app, whichever is larger. I can't say what the operator-defined limit is for your platform, but you can probably work around this by simply setting a larger memory limit. Just try pushing again with larger values until it succeeds. After the push succeeds, you can cf scale -m to lower the memory limit back down to what you need at runtime.
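A sketch of that workaround with the cf CLI (the app name and sizes are placeholders):
cf push myapp -m 2G        # push again with a roomier limit until staging succeeds
cf scale myapp -m 512M -f  # then scale back down to the runtime size you actually need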
The other option is to look at your build scripts, or whatever runs to stage your application, and work to reduce the memory they require. Making staging consume less memory should also resolve this issue.
Lastly, you can stage your app locally: run your build scripts on your local machine and push the final product. You can't skip the staging process altogether, but if things are already built, staging usually becomes a no-op.
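For example (a sketch; the script name and output directory are assumptions based on the webpack output above, and which directory you push depends on your buildpack):
npm run build          # run the webpack build locally
cf push myapp -p dist  # push the prebuilt artifact; -p points at the build output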
Hope that helps!

Related

How to solve Error: EPERM: operation not permitted, symlink 'contact.func' -> '\.vercel\output\functions\api\getExperience.func'?

I'm deploying a Next.js project with a Sanity build, and I get the following error:
Error: EPERM: operation not permitted, symlink 'contact.func' -> 'C:\Users\Arotiana's\portfolio-react\.vercel\output\functions\api\getExperience.func'
I've tried changing the permissions on the folder, but the same error appears again.
Here is the whole process:
PS C:\Users\Arotiana's\portfolio-react> vercel build
Vercel CLI 28.10.1
WARNING: You should not upload the `.next` directory.
Installing dependencies...
up to date in 15s
105 packages are looking for funding
run `npm fund` for details
Detected Next.js version: 13.0.6
Detected `package-lock.json` generated by npm 7+...
Running "npm run build"
> portfolio-react@0.1.0 build
> next build
info - Linting and checking validity of types
info - Creating an optimized production build
info - Compiled successfully
info - Collecting page data
[ =] info - Generating static pages (2/3)(node:5472) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
info - Generating static pages (3/3)
info - Finalizing page optimization
Route (pages) Size First Load JS
┌ ● / (14855 ms) 112 kB 185 kB
├ └ css/53a9169c96cbb4d8.css 2.2 kB
├ /_app 0 B 73.2 kB
├ ○ /404 181 B 73.4 kB
├ λ /api/contact 0 B 73.2 kB
├ λ /api/getExperience 0 B 73.2 kB
├ λ /api/getPageinfo 0 B 73.2 kB
├ λ /api/getProject 0 B 73.2 kB
├ λ /api/getSkills 0 B 73.2 kB
└ λ /api/hello 0 B 73.2 kB
+ First Load JS shared by all 77.2 kB
├ chunks/framework-3b5a00d5d7e8d93b.js 45.4 kB
├ chunks/main-f2e125da23ccdc4a.js 26.7 kB
├ chunks/pages/_app-a96cacb95f41a3ef.js 286 B
├ chunks/webpack-59c5c889f52620d6.js 819 B
└ css/bfe58de89cfcdd1e.css 3.95 kB
λ (Server) server-side renders at runtime (uses getInitialProps or getServerSideProps)
○ (Static) automatically rendered as static HTML (uses no initial props)
● (SSG) automatically generated as static HTML + JSON (uses getStaticProps)
Traced Next.js server files in: 5.050s
Created all serverless functions in: 2.294s
Collected static files (public/, static/, .next/static): 518.68ms
Error: EPERM: operation not permitted, symlink 'contact.func' -> 'C:\Users\Arotiana's\portfolio-react\.vercel\output\functions\api\getExperience.func'
Help would be appreciated

ROS Crashing on macOS Sierra with JavaScript heap out of memory error

I'm running the developer edition of Realm Object Server v1.8.3 as a Mac app. I start it with the start-object-server.command. It had been running fine for a number of days and everything was working really well, but ROS is now crashing within seconds of starting.
Clearly the issue is with the JavaScript element, but I am not sure what led to this state, nor how best to recover from this error. I have not created any additional functions, so I am not adding any Node.js issues of my own: it's just ROS with half a dozen realms.
The stack dump I get from the terminal session is below. Any thoughts on recovery steps, and on how to prevent it happening again, would be appreciated.
Last few GCs
607335 ms: Mark-sweep 1352.1 (1404.9) -> 1351.7 (1402.9) MB, 17.4 / 0.0 ms [allocation failure] [GC in old space requested].
607361 ms: Mark-sweep 1351.7 (1402.9) -> 1351.7 (1367.9) MB, 25.3 / 0.0 ms [last resort gc].
607376 ms: Mark-sweep 1351.7 (1367.9) -> 1351.6 (1367.9) MB, 15.3 / 0.0 ms [last resort gc].
JS stacktrace
Security context: 0x3eb4332cfb39
1: DoJoin(aka DoJoin) [native array.js:~129] [pc=0x1160420f24ad] (this=0x3eb433204381 ,w=0x129875f3a8b1 ,x=3,N=0x3eb4332043c1 ,J=0x3828ea25c11 ,I=0x3eb4332b46c9 )
2: Join(aka Join) [native array.js:180] [pc=0x116042067e32] (this=0x3eb433204381 ,w=0x129875f3a8b1
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
1: node::Abort() [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
2: node::FatalException(v8::Isolate*, v8::Local<v8::Value>, v8::Local<v8::Message>) [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
3: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
4: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
5: v8::internal::Runtime_StringBuilderJoin(int, v8::internal::Object**, v8::internal::Isolate*) [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
6: 0x1160411092a7
/Applications/realm-mobile-platform/start-object-server.command: line 94: 39828 Abort trap: 6 node "$package/node_modules/.bin/realm-object-server" -c configuration.yml (wd: /Applications/realm-mobile-platform/realm-object-server/object-server)
Your ROS instance has run out of memory. To figure out why, it would be helpful to see the server's log file. Can you turn on debug-level logging?
If you want to send a log file to Realm, it is better to open an issue for this at https://github.com/realm/realm-mobile-platform/issues.
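As a general Node.js workaround (not part of the answer above; the 4096 value is an assumption), you can raise V8's old-space limit when launching the server. The ~1.35 GB ceiling visible in the GC trace is the default for 64-bit Node:
node --max-old-space-size=4096 "$package/node_modules/.bin/realm-object-server" -c configuration.yml
This mirrors the invocation in the crash log above, with a 4 GB heap limit.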

502 Gitlab is taking too much time to respond

Every day after the GitLab backup runs, GitLab throws a 502 error.
I checked the nginx logs but did not find much information.
After gitlab-ctl restart, it starts working again.
System Configurations:
OS : Ubuntu 16.04 LTS
4 GB Ram
200 GB Disk Space
Can anyone give a permanent solution for this?
There is a high possibility that it ran out of shared memory, since the 502 error appears each time after the backup.
Check the logs with gitlab-ctl tail.
It will show something like:
2019-04-12_12:37:17.27154 FATAL: could not map anonymous shared memory: Cannot allocate memory
2019-04-12_12:37:17.27157 HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 4345470976 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
2019-04-12_12:37:17.27171 LOG: database system is shut down
Then check with free -m, which shows there is no shared memory available:
             total       used       free     shared    buffers     cached
Mem:         16081      13715       2365          0        104        753
-/+ buffers/cache:      12857       3223
Then check whether some process is using too much shared memory, or whether there are too many zombie processes, and kill them with a command like ps -aef | grep ffmpeg | awk '{print $2}' | xargs kill -9
Check again with free -h; there is now about 112M of shared memory:
             total       used       free     shared    buffers     cached
Mem:           15G       4.4G        11G       112M        46M       416M
-/+ buffers/cache:       3.9G        11G
Swap:           0B         0B         0B
At last, restart GitLab with gitlab-ctl restart; after some time GitLab boots and the 502 is gone.
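If the PostgreSQL hint above is what you are seeing, a longer-term option (a sketch for Omnibus GitLab; the values are illustrative, not tuned) is to shrink PostgreSQL's shared-memory request in /etc/gitlab/gitlab.rb and reconfigure:
# /etc/gitlab/gitlab.rb -- illustrative values, tune for your host
postgresql['shared_buffers'] = "256MB"
postgresql['max_connections'] = 200
Then apply the change with sudo gitlab-ctl reconfigure.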
After a long search I found something. After the backup runs, my gitlab-workhorse goes idle and gitlab.socket refuses the connection. As a temporary solution, I installed a new cron job that restarts the GitLab service after the GitLab backup cron job completes.
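A sketch of that temporary workaround as crontab entries (the schedule and paths are assumptions; make sure the restart runs only after your backup job reliably finishes):
# m h dom mon dow  command
0 2 * * * /opt/gitlab/bin/gitlab-rake gitlab:backup:create CRON=1
0 4 * * * /opt/gitlab/bin/gitlab-ctl restart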
If GitLab is installed in VirtualBox (Ubuntu Server 18.04 or 20.04), increase the RAM to 4 GB and provide at least 3 processors.

Error establishing a database connection EC2 Amazon

I hope you can help me. I cannot stand having to keep restarting my EC2 instance on Amazon.
I have two WordPress sites hosted there. My sites had always worked well until two months ago, when one of them started having this problem. I tried everything to bring it back up, and the only solution was to reconfigure it.
Just when all was right with the two of them, the second site developed the same problem. I think Amazon is clowning me.
I am using a free micro instance. If anyone knows what the problem is, please help me!
Your issue will be the limited memory allocated to the T1 micro instances in EC2. I'm assuming you are using AMI Linux (Amazon Linux) in this case; if an alternate version of Linux is used then you may have different locations for your log and config files.
Make sure you are the root user.
Have a look at your MySQL logs in the following location:
/var/log/mysqld.log
If you see repeated instances of the following, it's pretty certain that the 0.6 GB of memory allocated to the micro instance is not cutting it:
150714 22:13:33 InnoDB: Initializing buffer pool, size = 12.0M
InnoDB: mmap(12877824 bytes) failed; errno 12
150714 22:13:33 InnoDB: Completed initialization of buffer pool
150714 22:13:33 InnoDB: Fatal error: cannot allocate memory for the buffer pool
150714 22:13:33 [ERROR] Plugin 'InnoDB' init function returned error.
150714 22:13:33 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
150714 22:13:33 [ERROR] Unknown/unsupported storage engine: InnoDB
150714 22:13:33 [ERROR] Aborting
You will notice in the log excerpt above that my buffer pool size is set to 12 MB. This can be configured by adding the line innodb_buffer_pool_size = 12M to your MySQL config file, /etc/my.cnf.
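A minimal sketch of where that line lives (the [mysqld] section of /etc/my.cnf; 12M matches the log above, tune to your workload):
# /etc/my.cnf
[mysqld]
innodb_buffer_pool_size = 12M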
A pretty good way to deal with InnoDB chewing up your memory is to create a swap file.
Start by checking the status of your memory:
free -m
You will most probably see that your swap is not doing much:
             total       used       free     shared    buffers     cached
Mem:           592        574         17          0         15        235
-/+ buffers/cache:        323        268
Swap:            0          0          0
To start, ensure you are logged in as the root user and run the following command:
dd if=/dev/zero of=/swapfile bs=1M count=1024
Wait for a bit, as the command is not verbose; you should see the following response after about 15 seconds when the process is complete:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 31.505 s, 34.1 MB/s
Next, set up the swap space with:
mkswap /swapfile
Now turn the swap on:
swapon /swapfile
If you get a permissions warning, you can ignore it or address it by changing the swap file's permissions to 600 with the chmod command:
chmod 600 /swapfile
Now add the following line to /etc/fstab so the swap space is created at server start:
/swapfile swap swap defaults 0 0
Restart your MySQL instance:
service mysqld restart
Finally check to see if your swap file is working correctly with the free -m command.
You should see something like:
             total       used       free     shared    buffers     cached
Mem:           592        575         16          0         16        235
-/+ buffers/cache:        323        269
Swap:         1023          0       1023
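As an optional extra check, swapon -s lists the active swap devices, so you can confirm the file registered with the kernel:
swapon -s   # shows /swapfile with its size and usage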
Hope this helps.

NodeJS + AppJS + Sqlite3

I am trying to build the SQLite3 module into my project.
If I run npm install sqlite3, it fails. Here is the relevant part of my npm-debug.log:
235 info install sqlite3@2.1.5
236 verbose unsafe-perm in lifecycle true
237 silly exec cmd "/c" "node-gyp rebuild"
238 silly cmd,/c,node-gyp rebuild,C:\NodeWorkbench\AppJS Workspace\template\data\node_modules\sqlite3 spawning
239 info sqlite3@2.1.5 Failed to exec install script
240 info C:\NodeWorkbench\AppJS Workspace\template\data\node_modules\sqlite3 unbuild
241 verbose from cache C:\NodeWorkbench\AppJS Workspace\template\data\node_modules\sqlite3\package.json
242 info preuninstall sqlite3@2.1.5
243 info uninstall sqlite3@2.1.5
244 verbose true,C:\NodeWorkbench\AppJS Workspace\template\data\node_modules,C:\NodeWorkbench\AppJS Workspace\template\data\node_modules unbuild sqlite3@2.1.5
245 info postuninstall sqlite3@2.1.5
246 error sqlite3@2.1.5 install: `node-gyp rebuild`
246 error `cmd "/c" "node-gyp rebuild"` failed with 1
247 error Failed at the sqlite3@2.1.5 install script.
247 error This is most likely a problem with the sqlite3 package,
247 error not with npm itself.
247 error Tell the author that this fails on your system:
247 error node-gyp rebuild
247 error You can get their info via:
247 error npm owner ls sqlite3
247 error There is likely additional logging output above.
248 error System Windows_NT 6.1.7600
249 error command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "sqlite3"
250 error cwd C:\NodeWorkbench\AppJS Workspace\template\data
251 error node -v v0.8.14
252 error npm -v 1.1.65
253 error code ELIFECYCLE
254 verbose exit [ 1, true ]
I have node-gyp installed, as well as Python (3.3, I believe). Thanks for the help. I really need this resolved ASAP, so if you could point me in a direction I would appreciate it greatly!
Ideally, I would like to use nano and CouchDB for my project; JSON from front to back would be great. But nano was throwing C++ exceptions at runtime, so I had to recompile the stack and start over (installing nano recompiled AppJS, which I assume put some faulty extensions in and messed up the whole works). My stack is as follows:
Database > AppJS (Node.js included in this) > Socket.IO > AngularJS
The point of this project is to assemble a stack that I can use as a replacement for server2go. My company has had severe stability issues regarding server2go, including data loss and DB corruption (MyISAM with MySQL).
Found a great solution: persist works perfectly with AppJS and has a great non-blocking sqlite3 driver. Just in case anyone else was wondering.
