Storage sizing for index rebuild - nexus

I'm trying to rebuild the index for a Maven proxy repository (https://oss.sonatype.org/content/groups/staging/) whose downloaded index is 5.4 GiB.
Although the partition holding the Nexus temporary folder has about 30 GiB of free space, the reindexing fails with:
2016-07-21 12:58:30,718+0200 WARN [pxpool-1-thread-13] org.sonatype.nexus.index.tasks.RepairIndexTask - Scheduled task (RepairIndexTask) failed :: Repairing repository index "Java/Sonatype OSSRH Staging" from path / and below. (started 2016-07-21T11:51:54+02:00, runtime 1:06:36.533)
java.io.IOException: No space left on device
...
I am using Nexus OSS 2.11.1-01.
Can anyone tell me how much storage space to provide for indexing operations?

Nexus needs to uncompress that index in the temporary directory and process it. I'm not sure exactly how much space the uncompressed index plus the processing overhead requires, but it is evidently more than 30 GB. :-)
I'd suggest moving the temporary directory location to a partition that has more space. You can find instructions for doing that here:
https://support.sonatype.com/hc/en-us/articles/213464938-How-to-configure-the-Nexus-temporary-directory
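If it helps, a minimal sketch of that change for a Nexus 2.x install using the bundled JSW wrapper (the file path and property index below are assumptions; follow the article above for your exact layout). In $NEXUS_HOME/bin/jsw/conf/wrapper.conf add a line such as:
# point the JVM temporary directory at a partition with plenty of free space
wrapper.java.additional.4=-Djava.io.tmpdir=/data/nexus-tmp
then restart Nexus. The numeric suffix (.4 here) must not collide with an existing wrapper.java.additional entry.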

Related

How to extract the maps via AC Dashboard?

The compilation and the database setup both went well, but when I start the worldserver I get an error:
Loading world information...
> RealmID: 1
> Version DB world: ACDB 335.6-dev
Will clear `logs` table of entries older than 1209600 seconds every 10 minutes.
Using DataDir /azerothcore-wotlk/data/
WORLD: VMap support included. LineOfSight:true, getHeight:true, indoorCheck:true PetLOS:true
Map file '/azerothcore-wotlk/data/maps/0004331.map': does not exist!
exit code: 1
worldserver terminated, restarting...
worldserver Terminated after 1 seconds, termination count: : 6
worldserver Restarter exited. Infinite crash loop prevented. Please check your system
What could be the problem? I rechecked the permissions on the directory, including the owner, and everything is fine. I have tried different DataDir paths; it is currently set to DataDir = "/home/azcore/azerothcore-wotlk/data", but I still get the error. How can I fix this?
First of all, if you only need the latest maps compatible with AzerothCore, you can download them from here.
Otherwise, set CTOOLS_BUILD='all' in your config.sh file, then run the build again using
./acore.sh compiler build
This will generate the binaries to extract the data inside azerothcore-wotlk/env/dist/bin/.
Once you have the binaries, you can follow the guide here to extract the data manually: you only need to move the binaries into the WoW client directory and run them in the right order, roughly as sketched below.
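A rough sketch of that sequence (the paths and the extractor binary names are assumptions; check azerothcore-wotlk/env/dist/bin/ after the build for the actual names):
# in azerothcore-wotlk/: edit conf/config.sh, set CTOOLS_BUILD='all', then rebuild
./acore.sh compiler build
# copy the generated extractors into the WoW 3.3.5a client directory
cp env/dist/bin/map_extractor env/dist/bin/vmap4_extractor env/dist/bin/vmap4_assembler env/dist/bin/mmaps_generator /path/to/wow-client/
# run them from inside the client directory, in this order
cd /path/to/wow-client
./map_extractor                      # produces dbc/ and maps/
./vmap4_extractor                    # produces Buildings/
./vmap4_assembler Buildings vmaps    # produces vmaps/
./mmaps_generator                    # optional and slow; produces mmaps/
# finally, move dbc/, maps/, vmaps/ (and mmaps/) into the DataDir configured in worldserver.conf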

csync/sqlite error when running ownCloud command

I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However, it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or sqlite, so I am a bit in the dark, and although I can find mentions of this issue through googling, I can't find a fix. The data in this case can be dumped to start over, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have read about changing PRAGMA settings to ignore the error (or the check), but I can't see how that is done either.
Is anyone able to show me how to clear out the corruption?
*The local path is a mounted AWS S3 bucket, but I think this is irrelevant because it works fine on other systems.
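For what it's worth, the same PRAGMA checks the client runs can be executed by hand against the sync journal with the sqlite3 CLI (a sketch; the hidden ._sync_*.db name is the usual csync journal naming inside the local sync directory and may differ in your setup):
# the csync journal is a hidden SQLite file in the local sync directory
ls -a /path/to/local/sync/dir/ | grep '_sync_'
# run the same checks the client performs; a healthy journal answers "ok"
sqlite3 /path/to/local/sync/dir/._sync_XXXXXXXX.db "PRAGMA quick_check;"
sqlite3 /path/to/local/sync/dir/._sync_XXXXXXXX.db "PRAGMA integrity_check;"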

Nexus won't index Central: Prefix file size exceeds maximum allowed size

My freshly installed Nexus 2.13.0-01 instance won't index Central (https://repo1.maven.org/maven2/). The Routing tab shows the message:
Remote strategy prefix-file on M2Repository(id=central) detected invalid input, results discarded: Prefix file size exceeds maximum allowed size (100000), refusing to load it.
Why is this happening? How can I fix this?
It's a bug, and it was fixed a while ago:
https://issues.sonatype.org/browse/NEXUS-10233
Upgrading will resolve this.

Error while uploading DICOM image to dcm4chee

I am uploading a DICOM (.dcm) image with the dcmsnd utility. When I try to upload the image to the PACS, it generates the following error:
2016-07-21 12:24:46,017 WARN STORESCU->DCM4CHEE (TCPServer-1-1) [org.dcm4chex.archive.mbean.FileSystemMgt2Service] Failed to create directory /var/lib/bahmni/dcm4chee-2.18.1-psql/server/default/archive - try to switch to next configured storage directory
2016-07-21 12:24:46,026 ERROR STORESCU->DCM4CHEE (TCPServer-1-1) [org.dcm4chex.archive.mbean.FileSystemMgt2Service] High Water Mark reached on storage file system FileSystem[pk=1, archive, groupID=ONLINE_STORAGE, aet=DCM4CHEE, ONLINE, RW+, userinfo=null] - no alternative storage file system configured for file system group ONLINE_STORAGE
2016-07-21 12:24:46,027 WARN STORESCU->DCM4CHEE (TCPServer-1-1) [org.dcm4chex.archive.dcm.storescp.StoreScpService] org.dcm4che.net.DcmServiceException
That is the log shown by dcm4chee.
I tried granting permissions on the directory, but it still gives me the error.
I have not found a solution for this error yet; please share one if you have it.
Thanks.
It looks like your current storage file system has reached its high water mark, i.e. it is considered full.
You should try adding another file system. Here you can find additional instructions on doing this through the MBean-based management interface.
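A sketch of what adding a second online storage directory can look like from the command line (the MBean name and the addRWFileSystem operation are assumptions based on a default dcm4chee 2.x archive; verify both in your jmx-console before invoking, and adjust the paths to your installation):
cd /var/lib/bahmni/dcm4chee-2.18.1-psql/bin
# register a new writable archive directory with the ONLINE_STORAGE file system group
./twiddle.sh invoke "dcm4chee.archive:service=FileSystemMgt,group=ONLINE_STORAGE" addRWFileSystem "/path/with/free/space/archive2"
Alternatively, freeing up space on the existing archive volume will also let stores succeed again.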

Lucene index gets broken segments after every restart of liferay-tomcat

I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay on Tomcat. I fix the index on one server and restart it while the other keeps running. This is a production environment, so I cannot bring both down.
Any suggestions please?
Before fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
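For reference, this is how CheckIndex is typically invoked against that directory (a sketch; the lucene-core jar location depends on the Lucene version bundled with your Liferay, and -fix should only be run while nothing else has the index open):
# dry run: report broken segments without touching the index
java -cp /path/to/lucene-core.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0
# destructive: drop references to unreadable segments (the documents they contain are lost)
java -cp /path/to/lucene-core.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix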
If you access your search index from more than one application server, I would suggest integrating a Solr server, so that two app servers are not trying to write to the same files. As you have already found out, that is error-prone.
To get Solr up and running you have to follow these steps:
Install a Solr server on any machine you like; a dedicated machine running only Solr is preferable.
Install the Solr search portlet in Liferay.
Adjust the config files according to the setup document of the Solr Search portlet.
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr
