Lucene index gets broken segments after every restart of liferay-tomcat - unix

I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay on Tomcat. I fix the index on one server and restart it while the other keeps running; this is a production environment, so I cannot bring both down at once.
Any suggestions, please?
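For reference, the repair step is just the stock Lucene CheckIndex tool pointed at the index directory. Roughly (the jar path is an assumption from my install; adjust it to wherever your Liferay bundle keeps lucene-core):
# stop the local Tomcat first so nothing else holds the index open
java -cp /usr/local/tomcat/webapps/ROOT/WEB-INF/lib/lucene-core.jar \
  org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix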
Before the fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified

If you access your search index from more than one application server, I would suggest integrating a Solr server, so that you no longer have two app servers trying to write to the same index files. As you have already found out, that setup is error-prone.
To get Solr up and running, follow these steps:
Install a Solr server on any machine you like; a dedicated machine running only Solr is preferable.
Install the Solr search portlet in Liferay.
Adjust the config files according to the setup document of the Solr Search portlet.
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr
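As a rough illustration of those steps (host, port, and file locations are assumptions that vary with your Solr and plugin versions, so verify them against the setup document):
# check that the Solr instance answers from both Liferay nodes (placeholder host/port/core path)
curl http://solr-host:8983/solr/admin/ping
# then point the Liferay Solr plugin at that URL; in the solr-web plugin this is usually
# configured in its Spring config file (e.g. WEB-INF/classes/META-INF/solr-spring.xml)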

Related

How to extract the maps via AC Dashboard?

I ran into a problem: compilation and database setup went well, but when I start the worldserver I get an error:
Loading world information...
> RealmID: 1
> Version DB world: ACDB 335.6-dev
Will clear `logs` table of entries older than 1209600 seconds every 10 minutes.
Using DataDir /azerothcore-wotlk/data/
WORLD: VMap support included. LineOfSight:true, getHeight:true, indoorCheck:true PetLOS:true
Map file '/azerothcore-wotlk/data/maps/0004331.map': does not exist!
exit code: 1
worldserver terminated, restarting...
worldserver Terminated after 1 seconds, termination count: : 6
worldserver Restarter exited. Infinite crash loop prevented. Please check your system
What could be the problem? I rechecked the permissions on the directory, including the owner, and everything is fine. I have tried different DataDir paths; it is now set to DataDir = "/home/azcore/azerothcore-wotlk/data", but I still get the error. How can I fix this?
First of all, if you only need the latest maps compatible with AzerothCore, you can download them from here.
Otherwise, set CTOOLS_BUILD='all' in your config.sh file, then run the build again using
./acore.sh compiler build
This will generate the binaries for extracting the data inside azerothcore-wotlk/env/dist/bin/.
Once you have the binaries, you can follow the guide here to extract the data manually: you only need to move the binaries into the WoW client directory and run them in the right order, as sketched below.
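A rough sketch of that order, assuming the binary names from my env/dist/bin (double-check yours, and run everything from inside the 3.3.5a client directory):
./map_extractor                      # writes dbc/ and maps/
./vmap4_extractor                    # writes Buildings/
./vmap4_assembler Buildings vmaps    # writes vmaps/
./mmaps_generator                    # optional, needs maps/ and vmaps/ first
# then copy dbc/, maps/, vmaps/ and mmaps/ into the directory your DataDir points to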

Firebird 4.0 release candidate 1 in C can't open embedded database

I am using Firebird 4.0 release candidate 1 on Linux (attempting to use it either from a C application or through the ODBC driver). When I attempt to open an embedded database (/path/to/db/name.db), I get "Unable to complete network request to host XXX. Failed to establish a connection." I understand this means that libEngine13.so cannot be found, so the client falls back to a network connection to localhost. However, libEngine13.so has been built and is in the default install location for the git repo, /path/to/firebird/gen/Release/firebird/lib/plugin, so I'm not sure why it isn't being found. I have also tried adding the folder containing it (plugin) to LD_LIBRARY_PATH, copying libEngine13.so to the same directory as libfbclient.so (/path/to/firebird/gen/Release/firebird/lib), adding it to a plugin folder in the directory containing libfbclient.so, and so on. Any ideas?
To clarify, this issue was actually with Firebird 4.0 release candidate 1 (which I was not aware of at the time, but should have been). I used strace to see where it was looking for libEngine13.so: it was searching /path/to/firebird/install/plugins instead of /path/to/firebird/gen/Release/firebird/plugins. When I copied the library there, I then received an "unavailable database. SQLCODE:-904" error. When I moved the database from /path/to/proj/databases to /path/to/my/dir/on/parallel/file/system/databases, it worked.
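For anyone debugging a similar plugin-lookup problem, the strace check looked roughly like this (isql is just a convenient client here, and the database path is a placeholder):
# watch which plugin paths the client/engine actually tries to open
strace -f -e trace=openat isql /path/to/db/name.db 2>&1 | grep -i engine13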

Grakn Error; trying to load schema for "phone calls" example

I am trying to run the example grakn migration "phone_calls" (using python and JSON files).
Before reaching there, I need to load the schema, but I am having trouble with getting the schema loaded, as shown here: https://dev.grakn.ai/docs/examples/phone-calls-schema
System:
-Mac OS 10.15
-grakn-core 1.8.3
-python 3.7.3
The grakn server is started. I checked and the 48555 TCP port is open, so I don't think there is any firewall issue. The schema file is in the same folder (phone_calls) as the JSON data files for the next step. I am using a virtual environment. The error is below:
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn server start
Storage is already running
Grakn Core Server is already running
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn console --keyspace phone_calls --file phone_calls/schema.gql
Unable to create connection to Grakn instance at localhost:48555
Cause: io.grpc.StatusRuntimeException
UNKNOWN: Could not reach any contact point, make sure you've provided valid addresses (showing first 1, use getErrors() for more: Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5f59fd46): com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] init query OPTIONS: error writing ). Please check server logs for the stack trace.
I would appreciate any help! Thanks!
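For what it's worth, the port check mentioned above was just a quick connectivity test against the two ports involved (48555 from the docs, 9042 from the error message):
nc -z 127.0.0.1 48555 && echo "grakn gRPC port open"
nc -z 127.0.0.1 9042 && echo "storage (Cassandra/JanusGraph) port open"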
Never mind -- I found the solution, in case anyone else runs into a similar problem. The server configuration file needs to be edited: point the data directory to your project data files (here, the phone_calls data files) and change the server IP address to your own.
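A minimal sketch of those two edits, assuming the Grakn 1.8 grakn.properties layout (the file location and key names are from memory and may differ in your install):
# server/conf/grakn.properties -- assumed location and key names, check your install
server.host=127.0.0.1            # your own IP address
data-dir=/path/to/phone_calls    # the project data directory, per the fix above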

csync/sqlite error when running ownCloud command

I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or sqlite, so I am a bit in the dark; although I can find talk of this issue through googling, I can't find a fix. The data in this case can be dumped to start over, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have read about changing PRAGMA settings to ignore the error (or the check), but I can't see how to do that either.
Is anyone able to show me how to clear out the corruption?
*The local path is a mounted AWS S3 bucket, but I think this is irrelevant because it works fine on other systems.
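For context, what I tried so far looks roughly like this, run from the local sync root (the journal file name depends on the client version, so treat the names below as assumptions):
# reproduce the integrity check outside owncloudcmd
sqlite3 .csync_journal.db "PRAGMA quick_check;"
# start over with a fresh journal (older clients use .csync_journal.db, newer ones ._sync_<id>.db)
rm -f .csync_journal.db .csync_journal.db-wal .csync_journal.db-shm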

fast export unexplained failure

I have roughly 14 million records that I am attempting to export from a Teradata table to file using a fast export connection object.
There is no size limit for fast export files on our Linux system, and there is 1.2 TB of available space in the target directory.
The session fails, and gives the following errors:
READER_2_1_1 FEXP_87011 Process [16022] exited with status [12]
SDKS_38200 Partition-level [SOURCE_TABLE_NAME]: Plug-in #305400 failed in deinit()
I googled the error message, and found this post:
Here
I followed the recommendations in the post: delete the .out file in the temp directory, delete the partially written files in the target directory, drop the error table, and delete the log file. This did not fix the issue, and the session still fails with the same error messages.
Try using the TPT Export plug-in instead. You can also try running this FastExport from a script directly in your Unix environment.
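As a sketch of running the export standalone to isolate the problem (credentials, table name, and paths are placeholders; the script is fed to the fexp utility):
fexp <<'EOF'
.LOGTABLE mydb.fexp_restartlog;   /* hypothetical restart log table */
.LOGON tdpid/username,password;
.BEGIN EXPORT SESSIONS 4;
.EXPORT OUTFILE /target/dir/source_table.out MODE RECORD FORMAT TEXT;
SELECT * FROM mydb.source_table;
.END EXPORT;
.LOGOFF;
EOF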
