I am trying to index an Alfresco 4.0.d/5.0.d Community repository (Alfresco Solr):
About 500,000 documents
Repository size: about 80 GB
Metadata indexing only: no problems; the index is ready in about an hour.
With content indexing enabled as well, the Solr index seems to get stuck. After about four hours the Solr web interface shows that no more transactions are left, but the index still isn't marked as ready, and Solr keeps trying to create/update the index while the tracker runs. I stopped indexing after about 12 hours with no progress shown in the Solr web interface; the index size kept growing the whole time.
The "Troubleshooting Solr Index" tips from the Alfresco docs didn't make any difference.
I have enabled debugging in Solr, and I am getting no obvious errors there (no memory errors, no obvious errors at all). The only thing I see in the log files: Solr seems to try to index the same Alfresco transaction IDs over and over (see the log excerpt; these lines pop up again and again).
Any idea how I can track down the cause of this?
Is it possible to find the documents in the repository belonging to the transaction IDs?
Can specific transactions be excluded from indexing entirely?
Thanks, Max
Log excerpt
2016-03-10 00:52:15,145 INFO [org.alfresco.solr.tracker.AclTracker] Scanning Acl change sets ...
2016-03-10 00:52:15,145 INFO [org.alfresco.solr.tracker.AclTracker] .... none found after lastTxCommitTime 1457481600850
2016-03-10 00:52:15,145 INFO [org.alfresco.solr.tracker.AclTracker] total number of acls updated: 0
2016-03-10 00:52:15,145 INFO [org.alfresco.solr.tracker.AbstractTracker] ... Running ContentTracker for core [archive].
2016-03-10 00:52:15,146 INFO [org.alfresco.solr.SolrInformationServer] .... registered Searchers for archive = 1
2016-03-10 00:52:15,146 INFO [org.alfresco.solr.Cloud] Running query FTSSTATUS:Dirty OR FTSSTATUS:New
2016-03-10 00:52:15,146 INFO [org.alfresco.solr.tracker.ContentTracker] total number of docs with content updated: 0
2016-03-10 00:52:15,146 INFO [org.alfresco.solr.tracker.AbstractTracker] ... Running MetadataTracker for core [archive].
2016-03-10 00:52:15,147 INFO [org.alfresco.solr.SolrInformationServer] .... registered Searchers for archive = 1
2016-03-10 00:52:15,155 INFO [org.alfresco.solr.Cloud] Running query TXID:1 AND TXCOMMITTIME:1399544992347
2016-03-10 00:52:15,155 INFO [org.alfresco.solr.tracker.MetadataTracker] Verified first transaction and timestamp in index
2016-03-10 00:52:15,156 INFO [org.alfresco.solr.tracker.MetadataTracker] Verified last transaction timestamp in index less than or equal to that of repository.
2016-03-10 00:52:15,161 INFO [org.alfresco.solr.tracker.MetadataTracker] Scanning transactions ...
2016-03-10 00:52:15,161 INFO [org.alfresco.solr.tracker.MetadataTracker] .... from Transaction [id=947618, commitTimeMs=1457521663509, updates=2, deletes=2]
2016-03-10 00:52:15,161 INFO [org.alfresco.solr.tracker.MetadataTracker] .... to Transaction [id=947654, commitTimeMs=1457524857746, updates=1, deletes=0]
2016-03-10 00:52:15,164 INFO [org.alfresco.solr.tracker.MetadataTracker] Scanning transactions ...
2016-03-10 00:52:15,164 INFO [org.alfresco.solr.tracker.MetadataTracker] .... from Transaction [id=947654, commitTimeMs=1457524857746, updates=1, deletes=0]
2016-03-10 00:52:15,165 INFO [org.alfresco.solr.tracker.MetadataTracker] .... to Transaction [id=947655, commitTimeMs=1457524858267, updates=2, deletes=1]
2016-03-10 00:52:15,180 INFO [org.alfresco.solr.tracker.MetadataTracker] Scanning transactions ...
2016-03-10 00:52:15,180 INFO [org.alfresco.solr.tracker.MetadataTracker] .... none found after lastTxCommitTime 1457524858267
2016-03-10 00:52:15,180 INFO [org.alfresco.solr.tracker.MetadataTracker] total number of docs with metadata updated: 0
2016-03-10 00:52:17,513 DEBUG [org.alfresco.solr.content.SolrContentUrlBuilder] Appending SOLR metadata: tenant - _DEFAULT_
2016-03-10 00:52:17,513 DEBUG [org.alfresco.solr.content.SolrContentUrlBuilder] Appending SOLR metadata: tenant - _DEFAULT_
2016-03-10 00:52:17,513 DEBUG [org.alfresco.solr.content.SolrContentUrlBuilder] Appending SOLR metadata: tenant - _DEFAULT_
2016-03-10 00:52:17,513 DEBUG [org.alfresco.solr.content.SolrContentUrlBuilder] Appending SOLR metadata: dbId - 124123
2016-03-10 00:52:17,513 DEBUG [org.alfresco.solr.content.SolrContentUrlBuilder] Converted SOLR metadata to URL: solr://
Edit: Adding Screenshots:
Solr Webadmin
Solr Health Report for Workspace Spaces Store
How did you check whether Solr is marked as ready?
Are you aware that there is a separate index for the trash (archive) and the "real" repository (workspace)? The log is showing output for the archive tracker.
Additionally it may help to downsize the tracker config and allow only one thread per tracker, and/or to disable the trash indexing.
Index Reports
Have you checked the index reports? See https://wiki.alfresco.com/wiki/Alfresco_And_SOLR#Direct_URLs. You may need to import the repository certificates into your browser to be able to access the Solr user interface and the Alfresco Solr reports.
Could you please create and attach an Alfresco Solr general report:
http://<alfrescoserver>/solr/admin/cores?action=REPORT&wt=xml
and a summary report:
http://<alfrescoserver>/solr/admin/cores?action=SUMMARY&wt=xml
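If you prefer the command line, the reports can also be fetched with curl. A sketch, assuming the default Tomcat SSL port 8443 and Alfresco's shipped browser keystore (its password is usually alfresco; adjust paths to your install):

# -k skips server-certificate validation; -E presents the client certificate Solr expects
curl -k -E /path/to/browser.p12:alfresco --cert-type P12 \
  "https://localhost:8443/solr/admin/cores?action=REPORT&wt=xml"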
Transactions and nodes
You can check the transactions in the database. The log is giving you all the required information. In your snippet I can't find log entries re-indexing the same node as you described, but e.g. "Transaction id=947655" means the row in alf_transaction with id=947655. To find all nodes belonging to a given transaction_id you can simply run:
select * from alf_node where transaction_id=947655
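If you also want the node names, a join along these lines should work (a sketch; it assumes a MySQL-backed repository database named alfresco and the standard 4.x/5.x schema):

# list id, uuid and cm:name of every node touched by the transaction
mysql -u alfresco -p alfresco <<'SQL'
SELECT n.id, n.uuid, p.string_value AS name
FROM alf_node n
JOIN alf_node_properties p ON p.node_id = n.id
JOIN alf_qname q ON q.id = p.qname_id
WHERE n.transaction_id = 947655
  AND q.local_name = 'name';
SQL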
It is not possible to skip specific transactions, but you can attach the cm:indexControl aspect to nodes you don't want indexed. Please check http://docs.alfresco.com/4.0/concepts/admin-indexes.html
Related
What is the right way to backup and restore a MariaDB database that has sequence generation enabled (i.e. NOT autoincrement)? (This includes migrating to a new server.)
Is it possible to instruct the sequence generator to resume generating IDs at a specific value? How?
Steps I take to reproduce my issue
I wish to transfer an application to a new server:
Back up the data on the source server:
mysqldump --skip-opt --no-create-db --no-create-info --hex-blob [database-name] [...list of tables...] > data-backup.sql
On the target server, create a new empty database (same name)
Build/run JHipster Spring application on target server: java -jar myapp.jar (Running this application recreates/configures a new instance of the database on the target server.)
Restore data:
mysql [database-name] < data-backup.sql
All the above steps produce no errors (so far).
Problem
When I follow these steps, the database is restored (apparently perfectly). I can log in to the application and access all information. BUT when I attempt to create new entities (i.e. save something to the database), I get an ID 'Duplicate entry' error in the server logs:
2022-03-24 12:54:43.775 ERROR 11277 --- [ XNIO-1 task-1] o.h.e.jdbc.batch.internal.BatchingBatch : HHH000315: Exception executing batch [java.sql.BatchUpdateException: (conn=33) Duplicate entry '1001' for key 'PRIMARY'], SQL: insert into product (name, id) values (?, ?)
2022-03-24 12:54:43.776 WARN 11277 --- [ XNIO-1 task-1] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 1062, SQLState: 23000
2022-03-24 12:54:43.776 ERROR 11277 --- [ XNIO-1 task-1] o.h.engine.jdbc.spi.SqlExceptionHelper : (conn=33) Duplicate entry '1001' for key 'PRIMARY'
2022-03-24 12:54:43.779 ERROR 11277 --- [ XNIO-1 task-1] o.z.problem.spring.common.AdviceTraits : Internal Server Error
org.springframework.dao.DataIntegrityViolationException: could not execute batch; SQL [insert into product (name, id) values (?, ?)]; constraint [PRIMARY]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute batch
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:276)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:233)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:566)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:654)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:407)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698)
at com.mycompany.app.web.rest.ProductResource$$EnhancerBySpringCGLIB$$84c14d6d.createProduct(<generated>)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
...
Clearly my backup/restore process is not accounting properly for the sequence generator, which generates ID values that conflict with the existing data.
What am I doing wrong? What is the right process for backing up/restoring such a database?
Environment: JHipster 7.7.0 (Angular, monolithic), MariaDB 10.4, OpenJDK 16.0.2_7, OS Windows 10 Pro and openSUSE 15.2, Firefox 98.0.2 and Chrome 99.0.4844.84.
PS: I previously reported this issue here, aimed at the JHipster community, but got limited response. I think I need a MySQL/MariaDB expert opinion on this.
(Apologies in advance: I'm not a database expert. The technique I outline above has served me well for years, but previously I was dealing with AUTO_INCREMENT. This sequence generator has me baffled.)
Ok! I have solutions.
[For the sake of these notes, let's call the database mydata. Also, in JHipster, the MariaDB sequence generator is called sequence_generator.]
Let's consider two situations:
(1) Simple migration
If you are merely migrating the application to a new server, the process is straightforward:
Step 1: On the original server, back up and secure your database: mysqldump -u root -p mydata > mydata.sql
Step 2: Transfer the SQL file to the new server, along with the JHipster JAR file
Step 3: On the new server, create an empty database with the same name, and restore the data: mysql -u root -p mydata < mydata.sql
Step 4: Now launch your JHipster application, and everything should work
(2) Model modification
The assumption is that you have modified your model in some way (e.g. added properties to one or more entities). This solution is fiddly, but it works (for me).
Step 1: Back up your database, and secure it (in case something goes wrong): mysqldump -u root -p mydata > mydata.sql
Step 2: Backup and secure the original JHipster JAR that works with the original database
Step 3: Duplicate your database (schema and data) into a new database: mydata_bk
Step 4: Drop your original database, and create a new empty database
Step 5: Launch your new JHipster JAR, and give it time to create the new database schema, then stop the application
Step 6: Use a tool (DataGrip, sqlYog, etc) to compare the old (mydata_bk) and new schema (mydata), and modify the old schema to match the new schema
Step 7: Restore/copy all data from mydata_bk to mydata, EXCEPT for the tables DATABASECHANGELOG, DATABASECHANGELOGLOCK and the special sequence_generator table
Step 8: Open the mydata.sql SQL file, and at the top, after initial comments, one of the first instructions will read:
--
-- Sequence structure for `sequence_generator`
--
DROP SEQUENCE IF EXISTS `sequence_generator`;
CREATE SEQUENCE `sequence_generator` start with 2000 minvalue 1 maxvalue 9223372036854775806 increment by 50 cache 1000 nocycle ENGINE=InnoDB;
SELECT SETVAL(`sequence_generator`, 201050, 0);
The specific numbers may vary, but the broad details will be similar. In a MariaDB SQL console, type/execute each of those SQL statements: DROP SEQUENCE ...;, CREATE SEQUENCE ...;, and SELECT SETVAL(...); (a sanity check follows after step 9).
Step 9: Launch your JHipster application.
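Before launching, it is worth a non-destructive sanity check that the sequence is ahead of the highest ID already in use. A sketch: the product table name is taken from the error above (check every table the generator feeds), and MariaDB lets you read a sequence's state by selecting from it like a table:

# next_not_cached_value must be greater than the highest existing ID
mysql -u root -p mydata <<'SQL'
SELECT MAX(id) FROM product;
SELECT next_not_cached_value FROM sequence_generator;
SQL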
Hope this helps others that run into similar issues. Let me know if you have a better approach!
I just discovered the silly new issue of MariaDB's latest version having mysql.user as a view. All my imported WordPress blogs suddenly cannot connect to their databases. When I try to even list mysql.user it shows me this:
> select * from mysql.user;
ERROR 1356 (HY000): View 'mysql.user' references invalid table(s)
or column(s) or function(s) or definer/invoker of view lack
rights to use them
What can we do to solve this?
Edit: Found this question, but it does not have a solution, only a suggestion. The ALTER USER command: where do I use it, and with what settings? Do I have to somehow alter the rights for every blog database?
Update:
Further investigation revealed that the issue described in this question, and my initial response to it (below), may be related to an "Incorrect definition of table mysql.event" problem. In my case, I had 1) loaded a full dump (including the mysql database) from MySQL 5.7.33 into a fresh installation of MariaDB 10.5.9; 2) discovered that this was not a good idea; 3) edited my dump file to exclude the mysql database; and 4) repeated the load without deleting any databases or configurations.
This left the database functioning properly, but (in addition to the issue described in this question) a) /usr/sbin/mariadbd --verbose --help would try to run the database server rather than print help, and b) on startup the following error always occurred:
Apr 05 08:52:46 xxx mariadbd[22668]: 2021-04-05 8:52:46 0 [ERROR] Incorrect definition of table mysql.event: expected column 'sql_mode' at position 14 to have type set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','IGNORE_BAD_TABLE_OPTIONS','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH','EMPTY_STRING_IS_NULL','SIMULTANEOUS_ASSIGNMENT'), found type set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_B
Apr 05 08:52:46 xxx mariadbd[22668]: 2021-04-05 8:52:46 0 [ERROR] mariadbd: Event Scheduler: An error occurred when initializing system tables. Disabling the Event Scheduler
Today, I was able to correct these problems (under Amazon Linux 2) by:
Uninstalling MariaDB-server and MariaDB-client
Removing /etc/my.*
Removing /var/lib/mysql
Reinstalling MariaDB-server and MariaDB-client
Reloading the database dump, again omitting the dump of the mysql database
At this point, I not only have clean database startup and proper operation of /usr/sbin/mariadbd --verbose --help, I also find that select * from mysql.user works properly!
So the problem of not being able to select from mysql.user appears not to have resulted from the change of mysql.user from table to view as I had originally thought, but from some other issue related to my "improper" database migration.
My initial answer:
(included as a reference only)
After considerable research I have found at least part of the answer to this question:
tl;dr: select * from mysql.global_priv
then for each User,
show grants for 'XXX'@'localhost';
Longer version, from Authentication in MariaDB 10.4 — Understanding the Changes:
"The password storage has changed. All user accounts, passwords, and global privileges are now stored in a mysql.global_priv table. What happened to the mysql.user table? It still exists and has exactly the same set of columns as before, but it's now a view over mysql.global_priv...."
The aforementioned article provides not only the what but also the why. I do not agree with all of it. In particular, the claim is made that the old mysql.user table still exists and "you can select from it as before", but you cannot (hence this question). Nonetheless I am relieved to discover a relatively coherent explanation from MariaDB.
Finally, here is an example:
MariaDB [(none)]> select * from mysql.global_priv\G
*************************** 1. row ***************************
Host: localhost
User: mariadb.sys
Priv: {"access":0,"plugin":"mysql_native_password","authentication_string":"","account_locked":true,"password_last_changed":0}
*************************** 2. row ***************************
Host: localhost
User: root
Priv: {"access": 1844674407370915, "plugin": "mysql_native_password", "authentication_string": "*9A87226E872127C756290C504DB5D9076E", "auth_or": [{}, {"plugin": "unix_socket"}], "password_last_changed": 1617303275}
*************************** 3. row ***************************
Host: localhost
User: mysql
Priv: {"access":1844674407371615,"plugin":"mysql_native_password","authentication_string":"invalid","auth_or":[{},{"plugin":"unix_socket"}]}
*************************** 4. row ***************************
MariaDB [(none)]> show grants for 'root'@'localhost'\G
*************************** 1. row ***************************
Grants for root@localhost: GRANT ALL PRIVILEGES ON *.* TO `root`@`localhost` IDENTIFIED VIA mysql_native_password USING '*9A87226E872127C756290C5BF177504DB5D9076E' OR unix_socket WITH GRANT OPTION
*************************** 2. row ***************************
Grants for root@localhost: GRANT PROXY ON ''@'%' TO 'root'@'localhost' WITH GRANT OPTION
I scheduled a data extraction with an XQuery query on ML 8.0.6 using scheduled tasks.
My XQuery query (it works if I copy/paste it into the ML web console, and I get a file on AWS S3):
xdmp:save("s3://XX.csv",let $nl := "
"
return
document {
for $book in collection("books")/books
return (root($book)/bookId||","||
$optin/updatedDate||$nl
)
})
My scheduled task:
Task enabled: yes
Task path: /home/bob/extraction.xqy
Task root: /
Task type: hourly
Task period: 1
Task start time: 8 past the hour
Task database: mydatabase
Task modules: file system
Task user: admin
Task host: XX
Task priority: higher
Unfortunately, my script is not executed: no file is generated on AWS S3 (the storage used) and I do not have any logs.
Any idea how to:
1/ debug a job in the task scheduler?
2/ see the job running at the expected time?
Thanks,
Romain.
First, I would take a look at ErrorLog.txt, because it will probably show you where to look for the problem:
xdmp:filesystem-file(concat(xdmp:data-directory(),"/","Logs","/","ErrorLog.txt"))
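If you have shell access to the host, the same log lives on disk; a sketch assuming the default Linux data directory:

# follow the error log while waiting for the task to fire
tail -f /var/opt/MarkLogic/Logs/ErrorLog.txt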
Where is the script located logically: Has it been uploaded to the content database, modules database, or ./MarkLogic/Modules directory?
If this is a cluster, have you specified which host it runs on? If so, and you are using filesystem modules, ensure the script exists in the ./MarkLogic/Modules directory of that host. If not, and you are using filesystem modules, ensure the script exists in the ./MarkLogic/Modules directory of every host in the cluster.
As for seeing the job running, you can check http://servername:8002/dashboard/ and look at the Query Execution tab to see the running processes, or you can get a snapshot of the process by looking at the Status page of the task server (Configure > Groups > [group name] > Task Server > Status, then click the "show more" button).
I am a newbie to Sqoop 1.4.5. I have gone through the Sqoop documentation and have successfully imported/exported records with simple datatypes to and from HDFS.
Next I tried LOB data, for example CLOB.
I have a simple CLOB table whose create statement is as follows:
CREATE TABLE "SCOTT"."LARGEDATA" ("ID" VARCHAR2(20 BYTE), "IMG" CLOB) SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "USERS" LOB ("IMG") STORE AS BASICFILE (TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING);
I can successfully import the data to HDFS:
sqoop import --connect jdbc:oracle:thin:@:1522: --username --password --table 'LARGEDATA' -m 1 --target-dir /home/mydata/tej/LARGEDATA2 --fields-terminated-by , --escaped-by \\ --enclosed-by '\"'
But when I try to export this data back to Oracle using the following command:
sqoop export --connect jdbc:oracle:thin:@:1522: --username --password --table 'LARGEDATA' -m 1 --export-dir /home/mydata/tej/LARGEDATA2 --fields-terminated-by , --escaped-by \\ --enclosed-by '\"'
I get the following exceptions:
java.lang.CloneNotSupportedException: com.cloudera.sqoop.lib.ClobRef at java.lang.Object.clone(Native Method)
java.io.IOException: Could not buffer record at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:218)
and the error mentioned in this link: https://stackoverflow.com/questions/30778340/sqoop-export-4000-characters-column-data-into-oracle-clob
I googled about it and found the following links mentioning that Sqoop does not support export of BLOB and CLOB data. Some of the posts are from July 2015, and a JIRA issue shows the problem is still open. The links are as follows:
https://issues.apache.org/jira/browse/SQOOP-991
Can sqoop export blob type from HDFS to Mysql?
http://sofb.developer-works.com/article/19310921/Can+sqoop+export+blob+type+from+HDFS+to+Mysql%3F
http://grokbase.com/t/sqoop/user/148te4tghg/sqoop-import-export-clob-datatype
Exporting sequence file to Oracle by Sqoop
Can anyone please let me know whether Sqoop supports export of LOB data? If yes, please guide me on how I can do this.
Try creating a staging table in Oracle and use --staging-table and --clear-staging-table. Define the staging table column as VARCHAR2(10000).
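A sketch of the full export invocation (the connection string, credentials, and staging table name are placeholders; LARGEDATA_STG must already exist in Oracle with the CLOB column declared as VARCHAR2(10000)):

sqoop export \
  --connect jdbc:oracle:thin:@//dbhost:1522/ORCL \
  --username SCOTT --password '***' \
  --table LARGEDATA \
  --staging-table LARGEDATA_STG --clear-staging-table \
  --export-dir /home/mydata/tej/LARGEDATA2 \
  --fields-terminated-by ',' -m 1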
I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay-Tomcat. I am fixing the index on 1 server and restarting that whilst the other is running. This is a production environment so I cannot bring them both down.
Any suggestions please?
Before fix, CheckIndex says:
Opening index @ /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
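For reference, I run the fix along these lines (a sketch; the jar name/version is assumed to match the Lucene 2.2 format shown above, with Tomcat stopped and the index directory backed up first):

# CheckIndex rewrites the segments file, dropping unreadable segments
java -cp lucene-core-2.2.0.jar org.apache.lucene.index.CheckIndex \
  /usr/local/tomcat/liferay/lucene/0 -fix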
If you access your search index from more than one application server, I would suggest integrating a Solr server, so you don't have two app servers trying to write to the same files. That is error-prone, as you have already found out.
To get Solr up and running you have to follow those steps:
Install a Solr Server on any machine you like. A machine running only Solr would be quite preferable.
Install the Solr search portlet in Liferay
Adjust the config files according to the setup document of the Solr Search portlet.
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr