Apache Ignite out of memory exception

I got an out-of-memory exception and the Ignite node crashed. Going through the Ignite logs, the last metrics snapshot (printed about 10 seconds before the exception) shows heap and off-heap usage of roughly 171 MB and 70 MB respectively, and the other flags in the metrics look fine as well.
Below is the log snippet:
[01:04:29,690][INFO][grid-timeout-worker-#22][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=8a034404, uptime=39 days, 15:50:23.086]
^-- Cluster [hosts=1, CPUs=4, servers=1, clients=1, topVer=22, minorTopVer=0]
^-- Network [addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.17.0.1, 172.28.230.222], discoPort=47500, commPort=47100]
^-- CPU [CPUs=4, curLoad=0.07%, avgLoad=0.15%, GC=0%]
^-- Heap [used=171MB, free=95.15%, comm=254MB]
^-- Off-heap memory [used=70MB, free=98.02%, allocated=3377MB]
^-- Page memory [pages=17878]
^-- sysMemPlc region [type=internal, persistence=true, lazyAlloc=false,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.98%, allocRam=100MB, allocTotal=0MB]
^-- default region [type=default, persistence=true, lazyAlloc=true,
... initCfg=256MB, maxCfg=3177MB, usedRam=70MB, freeRam=97.78%, allocRam=3177MB, allocTotal=69MB]
^-- metastoreMemPlc region [type=internal, persistence=true, lazyAlloc=false,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.95%, allocRam=0MB, allocTotal=0MB]
^-- TxLog region [type=internal, persistence=true, lazyAlloc=false,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%, allocRam=100MB, allocTotal=0MB]
^-- volatileDsMemPlc region [type=user, persistence=false, lazyAlloc=true,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%, allocRam=0MB]
^-- Ignite persistence [used=69MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=7, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
[01:04:38,584][INFO][db-checkpoint-thread-#104][Checkpointer] Checkpoint started [checkpointId=41e99f38-7359-4af1-945f-61c92d2a5fb7, startPtr=WALPointer [idx=147, fileOff=11684440, len=381549], checkpointBeforeLockTime=9ms, checkpointLockWait=0ms, checkpointListenersExecuteTime=17ms, checkpointLockHoldTime=19ms, walCpRecordFsyncDuration=2ms, writeCheckpointEntryDuration=2ms, splitAndSortCpPagesDuration=0ms, pages=9, reason='timeout']
[01:04:38,619][SEVERE][db-checkpoint-thread-#104][] Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.IgniteCheckedException: Compound exception for CountDownFuture.]]
class org.apache.ignite.IgniteCheckedException: Compound exception for CountDownFuture.
at org.apache.ignite.internal.util.future.CountDownFuture.addError(CountDownFuture.java:72)
at org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:46)
at org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:28)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:478)
at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.run(CheckpointPagesWriter.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Suppressed: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.addWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
at sun.nio.ch.SimpleAsynchronousFileChannelImpl.implWrite(Unknown Source)
at sun.nio.ch.AsynchronousFileChannelImpl.write(Unknown Source)
at org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIO.write(AsyncFileIO.java:177)
at org.apache.ignite.internal.processors.cache.persistence.file.AbstractFileIO$5.run(AbstractFileIO.java:117)
at org.apache.ignite.internal.processors.cache.persistence.file.AbstractFileIO.fully(AbstractFileIO.java:53)
at org.apache.ignite.internal.processors.cache.persistence.file.AbstractFileIO.writeFully(AbstractFileIO.java:115)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.write(FilePageStore.java:748)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageReadWriteManagerImpl.write(PageReadWriteManagerImpl.java:116)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.write(FilePageStoreManager.java:636)
at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointManager.lambda$new$0(CheckpointManager.java:175)
at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter$1.writePage(CheckpointPagesWriter.java:266)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.copyPageForCheckpoint(PageMemoryImpl.java:1343)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.checkpointWritePage(PageMemoryImpl.java:1250)
at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.writePages(CheckpointPagesWriter.java:207)
at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.run(CheckpointPagesWriter.java:151)
... 3 more
[01:04:38,620][SEVERE][db-checkpoint-thread-#104][FailureProcessor] No deadlocked threads detected.
[01:04:38,749][SEVERE][db-checkpoint-thread-#104][FailureProcessor] Thread dump at 2022/02/06 01:04:38 CST

unable to create new native thread
This is not really an Ignite exception; it most likely points to your operating system configuration.
Check your process limits (open file descriptors and maximum user processes) by running the ulimit -a command and increase them if required. The recommended value is 32768 or above. The adjustment can be made either by running ulimit -n 32768 -u 32768, or persistently by modifying /etc/security/limits.conf.
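For example, to make the change persistent, the corresponding entries in /etc/security/limits.conf could look roughly like the lines below (the user name "ignite" is only a placeholder for whatever account runs the Ignite process, and the values simply mirror the recommendation above):
# /etc/security/limits.conf -- example entries, user name is illustrative
ignite    soft    nofile    32768
ignite    hard    nofile    32768
ignite    soft    nproc     32768
ignite    hard    nproc     32768
A re-login (or restart of the Ignite service) is needed for the new limits to take effect.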

Related

dotnet-gcdump - ETL file shows the start of a heap dump but not its completion

I have a .NET Core v6 self-contained application running in a K8S pod; the base image is mcr.microsoft.com/dotnet/runtime-deps:6.0-bullseye-slim.
First I opened a shell in the pod and installed the .NET SDK and tools.
Then, when I execute dotnet-gcdump collect -v -p PID, it ends with the following error:
Writing gcdump to '/app/20221224_072905_8.gcdump'...
0.0s: Creating type table flushing task
0.3s: Flushing the type table
0.5s: Done flushing the type table
0.5s: Requesting a .NET Heap Dump
17.0s: gcdump EventPipe Session started
17.1s: Starting to process events
17.2s: .NET Dump Started...
Found a Gen2 Induced non-background GC Start at 3.562 msec GC Count 138
17.3s: Making GC Heap Progress...
30.1s: Timed out after 30 seconds
30.1s: Shutting down gcdump EventPipe session
30.7s: EventPipe Listener dying
31.1s: still reading...
32.1s: still reading...
33.1s: still reading...
34.1s: still reading...
35.1s: still reading...
36.1s: still reading...
37.1s: still reading...
38.1s: still reading...
39.1s: still reading...
40.1s: still reading...
41.1s: still reading...
42.1s: still reading...
43.1s: still reading...
43.3s: gcdump EventPipe session shut down
43.3s: gcdump EventPipe Session closed
43.4s: [Error] Exception during gcdump: System.ApplicationException: ETL file shows the start of a heap dump but not its completion.
at DotNetHeapDumpGraphReader.ConvertHeapDataToGraph() in /_/src/Tools/dotnet-gcdump/DotNetHeapDump/DotNetHeapDumpGraphReader.cs:line 512
at Microsoft.Diagnostics.Tools.GCDump.EventPipeDotNetHeapDumper.DumpFromEventPipe(CancellationToken ct, Int32 processID, MemoryGraph memoryGraph, TextWriter log, Int32 timeout, DotNetHeapInfo dotNetInfo) in /_/src/Tools/dotnet-gcdump/DotNetHeapDump/EventPipeDotNetHeapDumper.cs:line 205
[ 43.6s: Done Dumping .NET heap success=False]
According to this documentation:
dotnet-gcdump is unable to generate a .gcdump file due to missing information, for example, [Error] Exception during gcdump: System.ApplicationException: ETL file shows the start of a heap dump but not its completion. Or, the .gcdump file doesn't include the entire heap.
Can't I dump a self-contained dotnet process?

SolrCloud with Zookeeper - cancel_stream_error & TimeoutException: Idle timeout expired: 120000/120000 ms

I have a SolrCloud setup in Kubernetes with 2 Solr instances, 3 ZooKeeper instances, and 1 shard. Each Solr and ZooKeeper instance is configured with 8G of persistent storage. The memory allocated for Solr is 16G, with a 10G heap size. There are a maximum of 2.5 million records indexed. A scheduler client calls Solr with the URL /update/json?wt=json&commit=true to do add/update/delete operations. Occasionally a huge update/delete of about 1 million records happens, which calls the API (/update/json?wt=json&commit=true) with 500 documents at a time, from multiple threads. Everything worked fine for a week, but suddenly we saw errors in solr.log that put Solr into an error state, and I had to restart one of the Solr nodes. The errors are:
Node 1:
021-04-09 08:20:56.657 ERROR (updateExecutor-5-thread-169-processing-x:datacore_shard1_replica_n1 r:core_node3 null n:solr-1.solrcluster:8983_solr c:datacore s:shard1) [c:datacore s:shard1 r:core_node3 x:datacore_shard1_replica_n1] o.a.s.u.ErrorReportingConcurrentUpdateSolrClient Error when calling SolrCmdDistributor$Req: cmd=add{,id=S-170262-P-108028200-F-800001737-E-180905508}; node=ForwardNode: http://solr-0.solrcluster:8983/solr/datacore_shard1_replica_n2/ to http://solr-0.solrcluster:8983/solr/datacore_shard1_replica_n2/ => java.io.IOException: java.io.IOException: cancel_stream_error
at org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
java.io.IOException: java.io.IOException: cancel_stream_error
at org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193) ~[?:?]
Node2:
2021-04-09 08:22:56.661 INFO (qtp1632497828-35124) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory [datacore_shard1_replica_n2] webapp=/solr path=/update params={update.distrib=TOLEADER&distrib.from=http://solr-1.solrcluster:8983/solr/datacore_shard1_replica_n1/&wt=javabin&version=2}{} 0 119999
2021-04-09 08:22:56.661 ERROR (qtp1632497828-35124) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.h.RequestHandlerBase java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 120000/120000 ms
at org.eclipse.jetty.server.HttpInput$ErrorState.noContent(HttpInput.java:1085)
at org.eclipse.jetty.server.HttpInput.read(HttpInput.java:318)
And on both nodes we can see the below error as well -
2021-04-09 08:21:00.812 INFO (qtp1632497828-35036) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory [datacore_shard1_replica_n2] webapp=/solr path=/update params={update.distrib=TOLEADER&distrib.from=http://solr-1.solrcluster:8983/solr/datacore_shard1_replica_n1/&wt=javabin&version=2}{} 0 120770
2021-04-09 08:21:00.812 ERROR (qtp1632497828-35036) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.h.RequestHandlerBase java.io.IOException: Task queue processing has stalled for 90013 ms with 0 remaining elements to process.
at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.blockUntilFinished(ConcurrentUpdateHttp2SolrClient.java:501)
The stall time is set at 90000 ms.
Why are we getting these errors?
Why is it stalling for so long? Our average document size is 1 KB.
How can we resolve this problem?

How can I solve this problem loading Grakn schema and data at localhost:48555?

I'm using Grakn Core 1.8.4 on Windows 10. The Grakn server and Grakn storage start up normally, but when trying to load a schema, Grakn returns the following error message:
Unable to create connection to Grakn instance at localhost:48555
Cause: io.grpc.StatusRuntimeException
UNKNOWN: ID block allocation on partition(30)-namespace(0) timed out in 2.000 min. Please check server logs for the stack trace.
I already checked and there are no other processes listening on the same port. I also disabled the firewall, and that did not solve the problem. Does anyone have any indication of how I should proceed?
Below is part of the log:
2021-01-05 14:19:24,387 [JanusGraphID(30)(0)[0]] WARN g.c.g.d.i.ConsistentKeyIDAuthority - Temporary storage exception while acquiring id block - retrying in PT0.32S: {}
grakn.core.graph.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 10001) in PT0.016S => too slow, threshold is: PT0.01S
at grakn.core.graph.diskstorage.idmanagement.ConsistentKeyIDAuthority.getIDBlock(ConsistentKeyIDAuthority.java:320)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool$IDBlockGetter.call(StandardIDPool.java:262)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool$IDBlockGetter.call(StandardIDPool.java:232)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
2021-01-05 14:19:24,688 [grpc-request-handler-1] ERROR grakn.core.server.rpc.SessionService - An error has occurred
grakn.core.graph.core.JanusGraphException: ID block allocation on partition(30)-namespace(0) timed out in 2.000 min
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool.waitForIDBlockGetter(StandardIDPool.java:146)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool.nextBlock(StandardIDPool.java:165)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool.nextID(StandardIDPool.java:185)
at grakn.core.graph.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:334)
at grakn.core.graph.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:184)
at grakn.core.graph.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:154)
at grakn.core.graph.graphdb.database.StandardJanusGraph.assignID(StandardJanusGraph.java:416)
at grakn.core.graph.graphdb.transaction.StandardJanusGraphTx.addVertex(StandardJanusGraphTx.java:636)
at grakn.core.graph.graphdb.transaction.StandardJanusGraphTx.addVertex(StandardJanusGraphTx.java:653)
at grakn.core.graph.graphdb.transaction.StandardJanusGraphTx.addVertex(StandardJanusGraphTx.java:649)
at grakn.core.concept.structure.ElementFactory.addVertexElement(ElementFactory.java:104)
at grakn.core.concept.manager.ConceptManagerImpl.addTypeVertex(ConceptManagerImpl.java:188)
at grakn.core.server.session.TransactionImpl.createMetaConcepts(TransactionImpl.java:1297)
at grakn.core.server.session.SessionImpl.initialiseMetaConcepts(SessionImpl.java:123)
at grakn.core.server.session.SessionImpl.<init>(SessionImpl.java:85)
at grakn.core.server.session.SessionFactory.session(SessionFactory.java:115)
at grakn.core.server.rpc.ServerOpenRequest.open(ServerOpenRequest.java:40)
at grakn.core.server.rpc.SessionService.open(SessionService.java:122)
at grakn.protocol.session.SessionServiceGrpc$MethodHandlers.invoke(SessionServiceGrpc.java:339)
at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:172)
at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:331)
at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:814)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask.get(Unknown Source)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool.waitForIDBlockGetter(StandardIDPool.java:126)
... 26 common frames omitted
If you get this error, you can try to increase the wait time with the following two parameters in the grakn.properties file:
# The number of milliseconds that the JanusGraph id pool manager will wait before
# giving up on allocating a new block of ids
root.ids.renew-timeout=10
# The number of milliseconds the system waits for an ID block reservation to be acknowledged by the storage backend
ids.authority.wait-time=10
I solved this by increasing the parameters below from 10 to 100 in the .\grakn\server\conf\grakn.properties configuration file.
I am not sure whether this affects performance.
# The number of milliseconds that the JanusGraph id pool manager will wait before
# giving up on allocating a new block of ids
root.ids.renew-timeout=100
# The number of milliseconds the system waits for an ID block reservation to be acknowledged by the storage backend
ids.authority.wait-time=100

Karaf out of memory error

I am using Karaf 2.3.0 for deploying my OSGi activator bundles and exposing my remote services as REST-enabled. Things were working fine. Then once I got an out-of-memory error in the Karaf logs (attached below), after which I was no longer able to access my REST services. When I took heap and thread dumps an hour after the crash (the Karaf process is still running even after the OOM error), I could find nothing big in the dumps. I checked the process using jvisualvm (the Karaf process has Xmx=1024 MB) and the memory and CPU consumption are minimal. But I am not able to access any of my services: trying to access them keeps waiting for minutes without any error until a timeout, and I do not see the server log print any sign of my access either. Isn't the process supposed to stop once the out-of-memory error occurs? How can I identify what could have caused the problem? Attaching the logs below:
2014-07-16 12:19:51,461 | WARN | qtp1863802945-52 | ServletHandler | lipse.jetty.util.log.JavaUtilLog 70 | 52 - org.eclipse.jetty.util - 7.6.7.v20120910 | /services/alertThreshold/F5/VirtualServer
java.lang.reflect.UndeclaredThrowableException
at org.ops4j.pax.web.service.internal.$Proxy11.service(Unknown Source)[69:org.ops4j.pax.web.pax-web-runtime:1.1.3]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)[60:org.eclipse.jetty.servlet:7.6.7.v20120910]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:447)[60:org.eclipse.jetty.servlet:7.6.7.v20120910]
at org.ops4j.pax.web.service.jetty.internal.HttpServiceServletHandler.doHandle(HttpServiceServletHandler.java:70)[70:org.ops4j.pax.web.pax-web-jetty:1.1.3]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:559)[59:org.eclipse.jetty.security:7.6.7.v20120910]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1038)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.ops4j.pax.web.service.jetty.internal.HttpServiceContext.doHandle(HttpServiceContext.java:117)[70:org.ops4j.pax.web.pax-web-jetty:1.1.3]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:374)[60:org.eclipse.jetty.servlet:7.6.7.v20120910]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:972)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.ops4j.pax.web.service.jetty.internal.JettyServerHandlerCollection.handle(JettyServerHandlerCollection.java:74)[70:org.ops4j.pax.web.pax-web-jetty:1.1.3]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.Server.handle(Server.java:363)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:483)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:920)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:982)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)[54:org.eclipse.jetty.http:7.6.7.v20120910]
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)[54:org.eclipse.jetty.http:7.6.7.v20120910]
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:627)[53:org.eclipse.jetty.io:7.6.7.v20120910]
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:51)[53:org.eclipse.jetty.io:7.6.7.v20120910]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)[52:org.eclipse.jetty.util:7.6.7.v20120910]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)[52:org.eclipse.jetty.util:7.6.7.v20120910]
at java.lang.Thread.run(Thread.java:722)[:1.7.0_17]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.7.0_17]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)[:1.7.0_17]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.7.0_17]
at java.lang.reflect.Method.invoke(Method.java:601)[:1.7.0_17]
at org.ops4j.pax.web.service.internal.HttpServiceStarted$2.invoke(HttpServiceStarted.java:210)[69:org.ops4j.pax.web.pax-web-runtime:1.1.3]
... 27 more
Caused by: javax.servlet.ServletException: java.lang.OutOfMemoryError: Java heap space
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)[93:com.sun.jersey.servlet:1.15.0]
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)[93:com.sun.jersey.servlet:1.15.0]
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)[93:com.sun.jersey.servlet:1.15.0]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)[51:org.apache.geronimo.specs.geronimo-servlet_2.5_spec:1.1.2]
... 32 more
Caused by: java.lang.OutOfMemoryError: Java heap space
2014-07-16 12:19:51,460 | WARN | 2.168.31.36:5000 | cluster | verMonitor$ServerMonitorRunnable 117 | 115 - org.db.mongo - 1.0.0.201407140926 | Exception in monitor thread during notification of server state change
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:2367)[:1.7.0_17]
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)[:1.7.0_17]
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)[:1.7.0_17]
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)[:1.7.0_17]
at java.lang.StringBuilder.append(StringBuilder.java:132)[:1.7.0_17]
at com.mongodb.ServerDescription.getShortDescription(ServerDescription.java:467)[115:org.db.mongo:1.0.0.201407140926]
at com.mongodb.ClusterDescription.getShortDescription(ClusterDescription.java:191)[115:org.db.mongo:1.0.0.201407140926]
at com.mongodb.BaseCluster.updateDescription(BaseCluster.java:158)[115:org.db.mongo:1.0.0.201407140926]
at com.mongodb.MultiServerCluster.updateDescription(MultiServerCluster.java:240)[115:org.db.mongo:1.0.0.201407140926]
at com.mongodb.MultiServerCluster.onChange(MultiServerCluster.java:149)[115:org.db.mongo:1.0.0.201407140926]
at com.mongodb.MultiServerCluster.access$100(MultiServerCluster.java:40)[115:org.db.mongo:1.0.0.201407140926]
at com.mongodb.MultiServerCluster$DefaultServerStateListener.stateChanged(MultiServerCluster.java:111)[115:org.db.mongo:1.0.0.201407140926]
at com.mongodb.DefaultServer$DefaultServerStateListener.stateChanged(DefaultServer.java:104)[115:org.db.mongo:1.0.0.201407140926]
at com.mongodb.ServerMonitor$ServerMonitorRunnable.run(ServerMonitor.java:114)[115:org.db.mongo:1.0.0.201407140926]
at java.lang.Thread.run(Thread.java:722)[:1.7.0_17]
2014-07-16 12:19:51,458 | ERROR | qtp1863802945-55 | ContainerResponse | .spi.container.ContainerResponse 406 | 92 - com.sun.jersey.jersey-server - 1.15.0 | The exception contained within MappableContainerException could not be mapped to a response, re-throwing to the HTTP container
java.lang.OutOfMemoryError: Java heap space
at sun.nio.cs.UTF_8.newDecoder(UTF_8.java:68)[:1.7.0_17]
at java.lang.StringCoding$StringDecoder.<init>(StringCoding.java:131)[:1.7.0_17]
at java.lang.StringCoding$StringDecoder.<init>(StringCoding.java:122)[:1.7.0_17]
at java.lang.StringCoding.decode(StringCoding.java:187)[:1.7.0_17]
at java.lang.String.<init>(String.java:416)[:1.7.0_17]
at org.bson.BasicBSONDecoder$BSONInput.readUTF8String(BasicBSONDecoder.java:544)
at org.bson.BasicBSONDecoder.decodeElement(BasicBSONDecoder.java:230)
at org.bson.BasicBSONDecoder._decode(BasicBSONDecoder.java:154)
at org.bson.BasicBSONDecoder.decode(BasicBSONDecoder.java:132)
at com.mongodb.DefaultDBDecoder.decode(DefaultDBDecoder.java:62)
at com.mongodb.Response.<init>(Response.java:85)
at com.mongodb.DBPort$1.execute(DBPort.java:141)
at com.mongodb.DBPort$1.execute(DBPort.java:135)
at com.mongodb.DBPort.doOperation(DBPort.java:164)
at com.mongodb.DBPort.call(DBPort.java:135)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:292)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:271)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:237)
at com.mongodb.QueryResultIterator.getMore(QueryResultIterator.java:137)
at com.mongodb.QueryResultIterator.hasNext(QueryResultIterator.java:127)
at com.mongodb.DBCursor._hasNext(DBCursor.java:551)
at com.mongodb.DBCursor.hasNext(DBCursor.java:571)
at com.testing.apptest.search.dao.ObjectSearchDao.getObjects(ObjectSearchDao.java:1648)
at com.testing.apptest.core.service.ObjectDictionaryManagement.getObjectList(ObjectDictionaryManagement.java:572)
at com.testing.apptest.core.service.ObjectDictionaryManagement.getObjectDict(ObjectDictionaryManagement.java:358)
at com.testing.apptest.core.service.alertSettings.ThresholdSettingsManagement.getObjectList(ThresholdSettingsManagement.java:239)
at com.testing.apptest.rest.ThresholdSettingsRest.getDeviceList(ThresholdSettingsRest.java:315)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.7.0_17]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)[:1.7.0_17]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.7.0_17]
at java.lang.reflect.Method.invoke(Method.java:601)[:1.7.0_17]
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)[92:com.sun.jersey.jersey-server:1.15.0]
2014-07-16 12:19:51,457 | ERROR | qtp1863802945-57 | ContainerResponse | .spi.container.ContainerResponse 406 | 92 - com.sun.jersey.jersey-server - 1.15.0 | The exception contained within MappableContainerException could not be mapped to a response, re-throwing to the HTTP container
java.lang.OutOfMemoryError: GC overhead limit exceeded
2014-07-16 12:19:52,669 | WARN | qtp1863802945-57 | ServletHandler | lipse.jetty.util.log.JavaUtilLog 70 | 52 - org.eclipse.jetty.util - 7.6.7.v20120910 | /services/notificationCenter
java.lang.reflect.UndeclaredThrowableException
at org.ops4j.pax.web.service.internal.$Proxy11.service(Unknown Source)[69:org.ops4j.pax.web.pax-web-runtime:1.1.3]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)[60:org.eclipse.jetty.servlet:7.6.7.v20120910]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:447)[60:org.eclipse.jetty.servlet:7.6.7.v20120910]
at org.ops4j.pax.web.service.jetty.internal.HttpServiceServletHandler.doHandle(HttpServiceServletHandler.java:70)[70:org.ops4j.pax.web.pax-web-jetty:1.1.3]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:559)[59:org.eclipse.jetty.security:7.6.7.v20120910]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1038)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.ops4j.pax.web.service.jetty.internal.HttpServiceContext.doHandle(HttpServiceContext.java:117)[70:org.ops4j.pax.web.pax-web-jetty:1.1.3]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:374)[60:org.eclipse.jetty.servlet:7.6.7.v20120910]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:972)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.ops4j.pax.web.service.jetty.internal.JettyServerHandlerCollection.handle(JettyServerHandlerCollection.java:74)[70:org.ops4j.pax.web.pax-web-jetty:1.1.3]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.Server.handle(Server.java:363)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:483)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:920)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:982)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)[54:org.eclipse.jetty.http:7.6.7.v20120910]
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)[54:org.eclipse.jetty.http:7.6.7.v20120910]
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:627)[53:org.eclipse.jetty.io:7.6.7.v20120910]
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:51)[53:org.eclipse.jetty.io:7.6.7.v20120910]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)[52:org.eclipse.jetty.util:7.6.7.v20120910]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)[52:org.eclipse.jetty.util:7.6.7.v20120910]
at java.lang.Thread.run(Thread.java:722)[:1.7.0_17]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.7.0_17]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)[:1.7.0_17]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.7.0_17]
at java.lang.reflect.Method.invoke(Method.java:601)[:1.7.0_17]
at org.ops4j.pax.web.service.internal.HttpServiceStarted$2.invoke(HttpServiceStarted.java:210)[69:org.ops4j.pax.web.pax-web-runtime:1.1.3]
... 27 more
Caused by: javax.servlet.ServletException: java.lang.OutOfMemoryError: GC overhead limit exceeded
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)[93:com.sun.jersey.servlet:1.15.0]
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)[93:com.sun.jersey.servlet:1.15.0]
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)[93:com.sun.jersey.servlet:1.15.0]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)[51:org.apache.geronimo.specs.geronimo-servlet_2.5_spec:1.1.2]
... 32 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
2014-07-16 12:19:52,669 | WARN | qtp1863802945-55 | ServletHandler | lipse.jetty.util.log.JavaUtilLog 70 | 52 - org.eclipse.jetty.util - 7.6.7.v20120910 | /services/alertThreshold/F5/WideIp
java.lang.reflect.UndeclaredThrowableException
at org.ops4j.pax.web.service.internal.$Proxy11.service(Unknown Source)[69:org.ops4j.pax.web.pax-web-runtime:1.1.3]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)[60:org.eclipse.jetty.servlet:7.6.7.v20120910]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:447)[60:org.eclipse.jetty.servlet:7.6.7.v20120910]
at org.ops4j.pax.web.service.jetty.internal.HttpServiceServletHandler.doHandle(HttpServiceServletHandler.java:70)[70:org.ops4j.pax.web.pax-web-jetty:1.1.3]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:559)[59:org.eclipse.jetty.security:7.6.7.v20120910]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1038)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.ops4j.pax.web.service.jetty.internal.HttpServiceContext.doHandle(HttpServiceContext.java:117)[70:org.ops4j.pax.web.pax-web-jetty:1.1.3]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:374)[60:org.eclipse.jetty.servlet:7.6.7.v20120910]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:972)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.ops4j.pax.web.service.jetty.internal.JettyServerHandlerCollection.handle(JettyServerHandlerCollection.java:74)[70:org.ops4j.pax.web.pax-web-jetty:1.1.3]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.Server.handle(Server.java:363)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:483)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:920)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:982)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)[54:org.eclipse.jetty.http:7.6.7.v20120910]
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)[54:org.eclipse.jetty.http:7.6.7.v20120910]
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)[58:org.eclipse.jetty.server:7.6.7.v20120910]
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:627)[53:org.eclipse.jetty.io:7.6.7.v20120910]
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:51)[53:org.eclipse.jetty.io:7.6.7.v20120910]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)[52:org.eclipse.jetty.util:7.6.7.v20120910]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)[52:org.eclipse.jetty.util:7.6.7.v20120910]
at java.lang.Thread.run(Thread.java:722)[:1.7.0_17]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.7.0_17]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)[:1.7.0_17]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.7.0_17]
at java.lang.reflect.Method.invoke(Method.java:601)[:1.7.0_17]
at org.ops4j.pax.web.service.internal.HttpServiceStarted$2.invoke(HttpServiceStarted.java:210)[69:org.ops4j.pax.web.pax-web-runtime:1.1.3]
... 27 more
Caused by: javax.servlet.ServletException: java.lang.OutOfMemoryError: Java heap space
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)[93:com.sun.jersey.servlet:1.15.0]
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)[93:com.sun.jersey.servlet:1.15.0]
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)[93:com.sun.jersey.servlet:1.15.0]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)[51:org.apache.geronimo.specs.geronimo-servlet_2.5_spec:1.1.2]
... 32 more
Caused by: java.lang.OutOfMemoryError: Java heap space
at sun.nio.cs.UTF_8.newDecoder(UTF_8.java:68)[:1.7.0_17]
at java.lang.StringCoding$StringDecoder.<init>(StringCoding.java:131)[:1.7.0_17]
at java.lang.StringCoding$StringDecoder.<init>(StringCoding.java:122)[:1.7.0_17]
at java.lang.StringCoding.decode(StringCoding.java:187)[:1.7.0_17]
at java.lang.String.<init>(String.java:416)[:1.7.0_17]
at org.bson.BasicBSONDecoder$BSONInput.readUTF8String(BasicBSONDecoder.java:544)
at org.bson.BasicBSONDecoder.decodeElement(BasicBSONDecoder.java:230)
at org.bson.BasicBSONDecoder._decode(BasicBSONDecoder.java:154)
at org.bson.BasicBSONDecoder.decode(BasicBSONDecoder.java:132)
at com.mongodb.DefaultDBDecoder.decode(DefaultDBDecoder.java:62)
at com.mongodb.Response.<init>(Response.java:85)
at com.mongodb.DBPort$1.execute(DBPort.java:141)
at com.mongodb.DBPort$1.execute(DBPort.java:135)
at com.mongodb.DBPort.doOperation(DBPort.java:164)
at com.mongodb.DBPort.call(DBPort.java:135)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:292)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:271)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:237)
at com.mongodb.QueryResultIterator.getMore(QueryResultIterator.java:137)
at com.mongodb.QueryResultIterator.hasNext(QueryResultIterator.java:127)
at com.mongodb.DBCursor._hasNext(DBCursor.java:551)
at com.mongodb.DBCursor.hasNext(DBCursor.java:571)
at com.testing.apptest.search.dao.ObjectSearchDao.getObjects(ObjectSearchDao.java:1648)
at com.testing.apptest.core.service.ObjectDictionaryManagement.getObjectList(ObjectDictionaryManagement.java:572)
at com.testing.apptest.core.service.ObjectDictionaryManagement.getObjectDict(ObjectDictionaryManagement.java:358)
at com.testing.apptest.core.service.alertSettings.ThresholdSettingsManagement.getObjectList(ThresholdSettingsManagement.java:239)
at com.testing.apptest.rest.ThresholdSettingsRest.getDeviceList(ThresholdSettingsRest.java:315)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.7.0_17]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)[:1.7.0_17]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.7.0_17]
at java.lang.reflect.Method.invoke(Method.java:601)[:1.7.0_17]
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)[92:com.sun.jersey.jersey-server:1.15.0]
First of all, a process isn't necessarily supposed to stop after an OutOfMemoryError. An OutOfMemoryError is a Throwable and can therefore be handled by the virtual machine. In your case the bundle in question was apparently unloaded, which is why memory consumption looks fine again. You'll need to restart your application bundle, and maybe the Pax Web one too.
Regarding your exception, there is an array copy involved; you seem to have a lot of "big" objects in your application, so maybe just increasing the heap size is already enough.
You can set the Xmx value (JAVA_MAX_MEM, and JAVA_MAX_PERM_MEM for the permanent generation) in $KARAF_HOME/bin/setenv.
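For example, the memory settings in $KARAF_HOME/bin/setenv could look roughly like this (the variable names follow the commented examples shipped in Karaf's setenv script; the concrete values are only illustrative, not a recommendation for your workload):
# $KARAF_HOME/bin/setenv -- example memory settings, values are illustrative
export JAVA_MIN_MEM=512m        # initial heap size (-Xms)
export JAVA_MAX_MEM=2048m       # maximum heap size (-Xmx)
export JAVA_PERM_MEM=128m       # initial PermGen size (Java 7)
export JAVA_MAX_PERM_MEM=512m   # maximum PermGen size (Java 7)
Restart Karaf after editing the file so that the new JVM options are picked up.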

Aptana crashes out of memory

Aptana crashes with
java.lang.OutOfMemoryError: PermGen space
Error while logging event loop exception:
Java HotSpot(TM) Server VM warning: Exception java.lang.OutOfMemoryError occurred dispatching signal SIGINT to handler- the VM may need to be forcibly terminat
Increase the memory by adding/modifying the following lines in AptanaStudio3.ini:
-Xms1024m
-Xmx1024m
-XX:NewSize=256m
-XX:MaxNewSize=356m
-XX:PermSize=256m
