We delete old Docker images, keeping the last 10 of them. We ran the Compact blob store task to delete them physically, but in the Administration / Repository / Blob Stores view, the blob store still shows the same size after deleting the images.
This is the compact blob store log:
2018-06-28 14:18:40,709+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Task information:
2018-06-28 14:18:40,712+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - ID: 2bf9a574-f3e6-4f8e-8351-d98e4abc5103
2018-06-28 14:18:40,712+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Type: blobstore.compact
2018-06-28 14:18:40,712+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Name: cbs
2018-06-28 14:18:40,712+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Description: Compacting default blob store
2018-06-28 14:18:40,713+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.file.FileBlobStore - Deletions index file rebuild not required
2018-06-28 14:18:40,713+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.file.FileBlobStore - Begin deleted blobs processing
2018-06-28 14:18:41,551+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.file.FileBlobStore - Elapsed time: 837.6 ms, processed: 45/45
2018-06-28 14:18:41,551+0200 INFO [quartz-6-thread-20] *SYSTEM org.sonatype.nexus.blobstore.compact.internal.CompactBlobStoreTask - Task complete
Docker layers can be shared across many different images, so the layers associated with an image are not deleted automatically when you delete the image. First run a "Docker - Delete unused manifests and images" task, then run the Compact blob store task again.
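If you run these tasks on a schedule or from scripts, the same two-step order can be driven through the Nexus 3 tasks REST API. A minimal sketch, with assumptions: the base URL, the credentials, and the task names passed at the bottom (which must match the tasks you created in the UI) are all placeholders here.

```python
# Sketch: trigger Nexus 3 scheduled tasks by name via the REST API.
# Assumptions: Nexus at NEXUS_URL, basic-auth admin credentials.
import base64
import json
import urllib.request

NEXUS_URL = "http://localhost:8081"  # assumed base URL
AUTH_HEADER = "Basic " + base64.b64encode(b"admin:admin123").decode()  # assumed creds

def find_task_id(name, tasks):
    """Return the id of the first task whose name matches, else None."""
    return next((t["id"] for t in tasks if t["name"] == name), None)

def run_task_by_name(name):
    """Look up a scheduled task by name and trigger it once."""
    req = urllib.request.Request(f"{NEXUS_URL}/service/rest/v1/tasks",
                                 headers={"Authorization": AUTH_HEADER})
    with urllib.request.urlopen(req) as resp:
        tasks = json.load(resp)["items"]
    task_id = find_task_id(name, tasks)
    if task_id is None:
        raise ValueError(f"no task named {name!r}")
    run = urllib.request.Request(f"{NEXUS_URL}/service/rest/v1/tasks/{task_id}/run",
                                 headers={"Authorization": AUTH_HEADER}, method="POST")
    urllib.request.urlopen(run).close()

# Order matters: prune unused manifests first, then compact the blob store.
# run_task_by_name("docker-cleanup")  # your "Delete unused manifests and images" task
# run_task_by_name("cbs")             # the Compact blob store task from the log above
```

The actual calls are left commented because the task names are per-installation; substitute whatever names appear under Administration / System / Tasks.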
Nexus runs in Kubernetes; before this it was always upgraded without problems. At the moment I get the following error no matter which version I upgrade to, 3.41.1 or 3.42.0.
Log output from version 3.41.1:
-------------------------------------------------
Started Sonatype Nexus OSS 3.41.1-01
-------------------------------------------------
2022-10-19 11:34:30,682+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.AbstractConnector - Started ServerConnector#31451f5e{HTTP/1.1, (http/1.1)}{0.0.0.0:8086}
2022-10-19 11:34:30,686+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.AbstractConnector - Started ServerConnector#2105be54{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2022-10-19 11:34:30,689+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.AbstractConnector - Started ServerConnector#5f41ab0d{HTTP/1.1, (http/1.1)}{0.0.0.0:8085}
2022-10-19 11:34:31,834+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
2022-10-19 11:34:31,835+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-10-19 11:34:48,228+0000 INFO [qtp491323871-560] *UNKNOWN org.apache.shiro.session.mgt.AbstractValidatingSessionManager - Enabling session validation scheduler...
2022-10-19 11:34:48,238+0000 INFO [qtp491323871-556] *UNKNOWN org.sonatype.nexus.internal.security.anonymous.AnonymousManagerImpl - Loaded configuration: OrientAnonymousConfiguration{enabled=true, userId='anonymous', realmName='NexusAuthorizingRealm'}
2022-10-19 11:35:01,488+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
2022-10-19 11:35:01,489+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-10-19 11:35:31,489+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
2022-10-19 11:35:31,489+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-10-19 11:36:01,489+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
2022-10-19 11:36:01,490+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-10-19 11:36:18,170+0000 WARN [SIGTERM handler] *SYSTEM com.orientechnologies.orient.core.OSignalHandler - Received signal: SIGTERM
I get a similar error when updating to 3.42.0.
What could be the problem?
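A side note on the repeated java.util.prefs warnings in this log: they typically appear when the home directory of the user running Nexus is not writable. A commonly used mitigation (an assumption here, so verify it against your deployment) is to point the JDK preferences root at a writable location in nexus.vmoptions:

```
-Djava.util.prefs.userRoot=${karaf.data}/javaprefs
```

These warnings are noisy but by themselves don't stop the server. The SIGTERM at the end of the log comes from outside the process, so in Kubernetes it is also worth checking whether a liveness or startup probe is killing the pod before Nexus finishes starting.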
I was following the tutorials on the OpenStack (Stein) docs website to launch an instance on my provider network. I am using networking option 2. I run the following command to create the instance, replacing PROVIDER_NET_ID with the ID of my provider network.
openstack server create --flavor m1.nano --image cirros \
--nic net-id=PROVIDER_NET_ID --security-group default \
--key-name mykey provider-instance1
I run openstack server list to check the status of my instance. It shows a status of ERROR.
I checked the /var/log/nova/nova-compute.log on my compute node (I only have one compute node) and came across the following error.
ERROR nova.compute.manager [req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6
995cce48094442b4b29f3fb665219408 429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Instance failed to spawn: PermissionError:
[Errno 13] Permission denied: '/var/lib/nova/instances/19a6c859-5dde-4ed3-9010-4d93ebe9a942'
However, the log entries preceding this error suggest that everything was fine until this error occurred.
2022-05-23 12:40:21.011 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942]
Attempting claim on node compute1: memory 64 MB, disk 1 GB, vcpus 1 CPU
2022-05-23 12:40:21.019 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total memory: 3943 MB, used: 512.00 MB
2022-05-23 12:40:21.020 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] memory limit not specified, defaulting to unlimited
2022-05-23 12:40:21.020 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total disk: 28 GB, used: 0.00 GB
2022-05-23 12:40:21.021 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] disk limit not specified, defaulting to unlimited
2022-05-23 12:40:21.024 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Total vcpu: 4 VCPU, used: 0.00 VCPU
2022-05-23 12:40:21.025 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] vcpu limit not specified, defaulting to unlimited
2022-05-23 12:40:21.028 9627 INFO nova.compute.claims
[req-0a1a6ddf-edc7-4987-a018-32af7b6a29b6 995cce48094442b4b29f3fb665219408
429965cc45a743a5b00f3a1bd098e1ab - default default]
[instance: 19a6c859-5dde-4ed3-9010-4d93ebe9a942] Claim successful on node compute1
Does anyone have ideas on what I may be doing wrong?
I'd be thankful for any help.
I've set the 'execution_timeout': timedelta(seconds=300) parameter on many tasks. When the execution timeout is set on a task downloading data from Google Analytics, it works properly: after ~300 seconds the task is set to failed. That task downloads some data from the API (Python), does some transformations (Python), and loads the data into PostgreSQL.
Then I have a task that executes only one PostgreSQL function. Its execution sometimes takes more than 300 seconds, but I get the log below and the task is marked as finished successfully.
*** Reading local file: /home/airflow/airflow/logs/bulk_replication_p2p_realtime/t1/2020-07-20T00:05:00+00:00/1.log
[2020-07-20 05:05:35,040] {__init__.py:1139} INFO - Dependencies all met for <TaskInstance: bulk_replication_p2p_realtime.t1 2020-07-20T00:05:00+00:00 [queued]>
[2020-07-20 05:05:35,051] {__init__.py:1139} INFO - Dependencies all met for <TaskInstance: bulk_replication_p2p_realtime.t1 2020-07-20T00:05:00+00:00 [queued]>
[2020-07-20 05:05:35,051] {__init__.py:1353} INFO -
--------------------------------------------------------------------------------
[2020-07-20 05:05:35,051] {__init__.py:1354} INFO - Starting attempt 1 of 1
[2020-07-20 05:05:35,051] {__init__.py:1355} INFO -
--------------------------------------------------------------------------------
[2020-07-20 05:05:35,098] {__init__.py:1374} INFO - Executing <Task(PostgresOperator): t1> on 2020-07-20T00:05:00+00:00
[2020-07-20 05:05:35,099] {base_task_runner.py:119} INFO - Running: ['airflow', 'run', 'bulk_replication_p2p_realtime', 't1', '2020-07-20T00:05:00+00:00', '--job_id', '958216', '--raw', '-sd', 'DAGS_FOLDER/bulk_replication_p2p_realtime.py', '--cfg_path', '/tmp/tmph11tn6fe']
[2020-07-20 05:05:37,348] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:37,347] {settings.py:182} INFO - settings.configure_orm(): Using pool settings. pool_size=10, pool_recycle=1800, pid=26244
[2020-07-20 05:05:39,503] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:39,501] {__init__.py:51} INFO - Using executor LocalExecutor
[2020-07-20 05:05:39,857] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:39,856] {__init__.py:305} INFO - Filling up the DagBag from /home/airflow/airflow/dags/bulk_replication_p2p_realtime.py
[2020-07-20 05:05:39,894] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:39,894] {cli.py:517} INFO - Running <TaskInstance: bulk_replication_p2p_realtime.t1 2020-07-20T00:05:00+00:00 [running]> on host dwh2-airflow-dev
[2020-07-20 05:05:39,938] {postgres_operator.py:62} INFO - Executing: CALL dw_system.bulk_replicate(p_graph_name=>'replication_p2p_realtime',p_group_size=>4 , p_group=>1, p_dag_id=>'bulk_replication_p2p_realtime', p_task_id=>'t1')
[2020-07-20 05:05:39,960] {logging_mixin.py:95} INFO - [2020-07-20 05:05:39,953] {base_hook.py:83} INFO - Using connection to: id: postgres_warehouse. Host: XXX Port: 5432, Schema: XXXX Login: XXX Password: XXXXXXXX, extra: {}
[2020-07-20 05:05:39,973] {logging_mixin.py:95} INFO - [2020-07-20 05:05:39,972] {dbapi_hook.py:171} INFO - CALL dw_system.bulk_replicate(p_graph_name=>'replication_p2p_realtime',p_group_size=>4 , p_group=>1, p_dag_id=>'bulk_replication_p2p_realtime', p_task_id=>'t1')
[2020-07-20 05:23:21,450] {logging_mixin.py:95} INFO - [2020-07-20 05:23:21,449] {timeout.py:42} ERROR - Process timed out, PID: 26244
[2020-07-20 05:23:36,453] {logging_mixin.py:95} INFO - [2020-07-20 05:23:36,452] {jobs.py:2562} INFO - Task exited with return code 0
Does anyone know how to enforce the execution timeout for such long-running functions? It seems that the execution timeout is only evaluated once the PostgreSQL function finishes.
Airflow uses the signal module from the standard library to effect a timeout: it hooks into system signals and requests that the calling process be notified in N seconds. Should the process still be inside the timeout context (see the __enter__ and __exit__ methods on the class) when the signal arrives, it raises an AirflowTaskTimeout exception.
Unfortunately for this situation, there are certain classes of system operations that cannot be interrupted. This is actually called out in the signal documentation:
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
To which we say "But I'm not doing a long-running calculation in C!" -- well, in Airflow this is almost always due to uninterruptible I/O operations instead. That is why the timeout handler only fires after the task is (frustratingly!) allowed to finish, well beyond your requested timeout.
The problem
The app launches quickly when I use Appium Desktop to create an Inspector session, but loading the element tree in the Inspector takes too long. This happens only with some apps.
Environment
Appium version (or git revision) that exhibits the issue: Appium Desktop 1.6.1 (Appium Server 1.8.0)
Last Appium version that did not exhibit the issue (if applicable):
Desktop OS/version used to run Appium: macOS 10.13.5
Node.js version (unless using Appium.app|exe): 10.1.0
Mobile platform/version under test: iOS 10.3.3
Real device or emulator/simulator: iPhone 7 real device
Appium CLI or Appium.app|exe:
Details
It takes about 5 minutes to get a response when loading the element tree.
2018-06-28 09:40:05:339 - [debug] [XCUITest] Failed to create WDA session. Retrying...
2018-06-28 09:40:06:346 - [debug] [BaseDriver] Event 'wdaSessionAttempted' logged at 1530150006346 (09:40:06 GMT+0800 (CST))
2018-06-28 09:40:06:346 - [debug] [XCUITest] Sending createSession command to WDA
2018-06-28 09:40:06:347 - [debug] [JSONWP Proxy] Proxying [POST /session] to [POST http://localhost:8100/session] with body: {"desiredCapabilities":{"bundleId":"com.chinaums.ttf6","arguments":[],"environment":{},"shouldWaitForQuiescence":true,"shouldUseTestManagerForVisibilityDetection":true,"maxTypingFrequency":60,"shouldUseSingletonTestManager":true}}
2018-06-28 09:45:28:282 - [debug] [JSONWP Proxy] Got response with status 200: {"value":{"sessionId":"9A6749D7-5A1D-431F-AA57-E1126F189E95","capabilities":{"device":"iphone","browserName":null,"sdkVersion":"10.3.3","CFBundleIdentifier":null}},"sessionId":"9A6749D7-5A1D-431F-AA57-E1126F189E95","status":0}
2018-06-28 09:45:28:282 - [debug] [BaseDriver] Event 'wdaSessionStarted' logged at 1530150328282 (09:45:28 GMT+0800 (CST))
2018-06-28 09:45:28:472 - [debug] [XCUITest] Cannot find a match for DerivedData folder path from lsof. Trying to access logs
2018-06-28 09:45:28:482 - [debug] [BaseDriver] Event 'wdaStarted' logged at 1530150328481 (09:45:28 GMT+0800 (CST))
2018-06-28 09:45:28:482 - [XCUITest] Skipping setting of the initial display orientation. Set the "orientation" capability to either "LANDSCAPE" or "PORTRAIT", if this is an undesired behavior.
Link to Appium logs
appium_server_log
I solved it by following this thread: Creating session takes over 5 minutes now with Xcode 9.
Just add capability: waitForQuiescence = false
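For scripted runs, the same fix goes into the session capabilities. A sketch in plain Python: the dict is what matters; the commented Remote call assumes the Appium-Python-Client and a local Appium server, both assumptions here.

```python
# Desired capabilities with the quiescence wait disabled.
# Device fields come from the Environment section above; udid handling
# and the server URL are assumptions.
caps = {
    "platformName": "iOS",
    "platformVersion": "10.3.3",
    "deviceName": "iPhone 7",
    "automationName": "XCUITest",
    "bundleId": "com.chinaums.ttf6",   # the app from the log above
    "waitForQuiescence": False,        # skip the wait that stalls the WDA session
}

# from appium import webdriver  # Appium-Python-Client
# driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)
```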
We're running Sonatype Nexus 3 on an Odroid C1+, roughly equivalent to a Raspberry Pi 2. Yes, not recommended, but for our development team of 2 the performance is acceptable.
Except for start-up: the Nexus 3 OSS server takes an hour or more to become available.
Is that normal?
Any ideas why it is so slow?
Here are some of the log entries from a start-up.
2017-05-16 05:36:29,185+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.server.OServer - $ANSI{green:italic OrientDB Server is active} v2.2.13.
2017-05-16 05:36:29,189+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.internal.orient.DatabaseServerImpl - Activated
2017-05-16 05:36:29,242+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Start UPGRADE
2017-05-16 05:36:35,919+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=config}} Storage 'config' was not closed properly. Will try to recover from write ahead log
2017-05-16 05:36:35,931+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=config}} Looking for last checkpoint...
2017-05-16 05:36:36,515+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=config}} Found FUZZY checkpoint.
2017-05-16 05:36:36,530+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=config}} Data restore procedure from FUZZY checkpoint is started.
2017-05-16 05:36:36,562+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=config}} Record com.orientechnologies.orient.core.storage.impl.local.paginated.wal.OFuzzyCheckpointEndRecord{lsn=OLogSequenceNumber{segment=0, position=6856901}} will be skipped during data restore
2017-05-16 05:36:36,570+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=config}} 1 operations were processed, current LSN is OLogSequenceNumber{segment=0, position=6856901} last LSN is OLogSequenceNumber{segment=0, position=6856954}
2017-05-16 05:36:36,579+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=config}} Record OFuzzyCheckpointStartRecord{lsn=OLogSequenceNumber{segment=0, position=6856908}} com.orientechnologies.orient.core.storage.impl.local.paginated.wal.OFuzzyCheckpointStartRecord{lsn=null, previousCheckpoint=OLogSequenceNumber{segment=0, position=6856861}} will be skipped during data restore
2017-05-16 05:36:36,822+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=config}} Record com.orientechnologies.orient.core.storage.impl.local.paginated.wal.OFuzzyCheckpointEndRecord{lsn=OLogSequenceNumber{segment=0, position=6856948}} will be skipped during data restore
2017-05-16 05:36:36,829+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=config}} Storage data recover was completed
2017-05-16 05:42:54,800+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.extender.NexusLifecycleManager - Start SCHEMAS
2017-05-16 05:42:55,883+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=analytics}} Storage 'analytics' was not closed properly. Will try to recover from write ahead log
2017-05-16 05:42:55,894+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=analytics}} Looking for last checkpoint...
2017-05-16 05:42:55,904+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=analytics}} Found FUZZY checkpoint.
2017-05-16 05:42:55,913+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=analytics}} Data restore procedure from FUZZY checkpoint is started.
2017-05-16 05:42:55,921+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=analytics}} Record com.orientechnologies.orient.core.storage.impl.local.paginated.wal.OFuzzyCheckpointEndRecord{lsn=OLogSequenceNumber{segment=0, position=2246069}} will be skipped during data restore
2017-05-16 05:42:55,929+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=analytics}} 1 operations were processed, current LSN is OLogSequenceNumber{segment=0, position=2246069} last LSN is OLogSequenceNumber{segment=0, position=2246122}
2017-05-16 05:42:55,938+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=analytics}} Record OFuzzyCheckpointStartRecord{lsn=OLogSequenceNumber{segment=0, position=2246076}} com.orientechnologies.orient.core.storage.impl.local.paginated.wal.OFuzzyCheckpointStartRecord{lsn=null, previousCheckpoint=OLogSequenceNumber{segment=0, position=2246029}} will be skipped during data restore
2017-05-16 05:42:55,946+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=analytics}} Record com.orientechnologies.orient.core.storage.impl.local.paginated.wal.OFuzzyCheckpointEndRecord{lsn=OLogSequenceNumber{segment=0, position=2246116}} will be skipped during data restore
2017-05-16 05:42:55,952+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=analytics}} Storage data recover was completed
2017-05-16 05:49:05,078+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=audit}} Storage 'audit' was not closed properly. Will try to recover from write ahead log
2017-05-16 05:49:05,089+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=audit}} Looking for last checkpoint...
2017-05-16 05:49:05,097+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=audit}} Found FUZZY checkpoint.
2017-05-16 05:49:05,105+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=audit}} Data restore procedure from FUZZY checkpoint is started.
2017-05-16 05:49:05,113+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=audit}} Record com.orientechnologies.orient.core.storage.impl.local.paginated.wal.OFuzzyCheckpointEndRecord{lsn=OLogSequenceNumber{segment=0, position=2268394}} will be skipped during data restore
2017-05-16 05:49:05,121+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=audit}} 1 operations were processed, current LSN is OLogSequenceNumber{segment=0, position=2268394} last LSN is OLogSequenceNumber{segment=0, position=2268447}
2017-05-16 05:49:05,129+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=audit}} Record OFuzzyCheckpointStartRecord{lsn=OLogSequenceNumber{segment=0, position=2268401}} com.orientechnologies.orient.core.storage.impl.local.paginated.wal.OFuzzyCheckpointStartRecord{lsn=null, previousCheckpoint=OLogSequenceNumber{segment=0, position=2268354}} will be skipped during data restore
2017-05-16 05:49:05,138+0000 WARN [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=audit}} Record com.orientechnologies.orient.core.storage.impl.local.paginated.wal.OFuzzyCheckpointEndRecord{lsn=OLogSequenceNumber{segment=0, position=2268441}} will be skipped during data restore
2017-05-16 05:49:05,144+0000 INFO [FelixStartLevel] *SYSTEM com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage - $ANSI{green {db=audit}} Storage data recover was completed
A few notes:
This is cool! Someone on our internal team got it working on a Raspberry Pi 3 a while ago, so we love seeing stuff like this.
We don't support this configuration, so any help from us is kinda on a "wow this is cool" sort of level.
Our internal dude said he noticed that NXRM 3 runs out of memory while updating schemas during boot on the Raspberry Pi 3, and at first couldn't find a workaround.
He has since found one. He's got a great big beard, too. He edited nexus.vmoptions to have the following:
-Xms256M
-Xmx256M
-XX:MaxDirectMemorySize=512M
Orient uses system (direct) memory, distinct from Nexus Repo itself, which uses the Java heap, if that helps you at all. The log entries above about Orient storage not being closed properly are also probably highly relevant.
Here's some info we put together about Orient and memory, and tuning:
Optimizing OrientDB Database Memory
Also here is a more generic article on system requirements in regards to Nexus Repository Manager 3:
Nexus Repository Manager 3 System Requirements
What I would suggest for your team is something different: use our Docker image and spin Nexus Repo up on whatever more capable hardware you've got.
Docker Nexus3
Best of luck either way!