I updated Alfresco Search Services from 1.3 to 1.4, which forced me to also update OpenJDK from 8 to 11. Running Alfresco Search Services 1.3 on JDK 8 worked without any OutOfMemoryExceptions during (re)indexing, but with JDK 11 we see the heap growing repeatedly until the Solr OOM killer script kills the Solr process. During indexing the JVM performs continuous GC; I suspect JDK 11 changed garbage collection in a way that keeps objects in memory longer (the default collector changed from Parallel GC to G1 between JDK 8 and JDK 11). Continuous GC indicates inefficient object creation, but that is nothing I can influence. I tried UseConcMarkSweepGC and the G1 garbage collector, but saw the same behavior. Does anybody know how to configure GC in OpenJDK 11 to behave similarly to OpenJDK 8 with Alfresco Search Services / Solr 6?
My parameters in solr.in.sh:
SOLR_JAVA_MEM="-Xms16g -Xmx30g"
SOLR_OPTS="$SOLR_OPTS -Dsolr.jetty.request.header.size=1000000 -Dsolr.jetty.threads.stop.timeout=300000 -Ddisable.configEdit=true -Dsolr.allow.unsafe.resourceloading=true"
SOLR_OPTS="$SOLR_OPTS -XX:+UseConcMarkSweepGC -XX:-DisableExplicitGC -XX:-UseGCOverheadLimit"
SOLR_OPTS="$SOLR_OPTS -server -Djava.net.preferIPv4Stack=true -Duser.language=en -Duser.country=US -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.net.preferIPv6Addresses=false"
SOLR_OPTS="$SOLR_OPTS -Dsun.security.ssl.allowUnsafeRenegotiation=true -Dsolr.allow.unsafe.resourceloading=true"
The issue is caused by a bug in the Alfresco Solr Tracker not recognizing recursions correctly (e.g. groups as members of groups, or secondary child associations in Alfresco). We worked around it by replacing all secondary child associations with Alfresco links.
Alfresco Search Services 2.0 should contain fixes for that recursion issue, but it requires Alfresco Content Services 6.2.
In our company we recently upgraded from version 6 to artifactory-pro:7.38.10. To clean up old artifacts we use lavatory, which runs an AQL search to identify the artifacts to be removed, filtering them by date. This worked without issues on our previous installation based on Artifactory 6. After the upgrade, Artifactory frequently crashes with an OutOfMemoryError, and the instance seems to either require significantly more memory than before or have a memory leak. Further investigation showed that the problem is triggered by running the AQL search: memory usage jumps from 4 GB to over 10 GB. That's +6 GB for something that hasn't changed.
Searching for known issues, I found https://www.jfrog.com/jira/browse/RTFACT-26825, which is resolved and might solve our problem, but no fix version is specified. Since there is a workaround and the issue was fixed, I expect that there must be a release.
Is there already a release containing a fix?
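For reference, the kind of date-based query lavatory sends through the AQL API looks roughly like this (the repo name and cutoff are made up; the endpoint and the $before operator are standard AQL):
# Hypothetical cleanup search: find artifacts older than 52 weeks
curl -u admin:password \
  -H "Content-Type: text/plain" \
  -X POST "https://artifactory.example.com/artifactory/api/search/aql" \
  -d 'items.find({"repo":"generic-local","created":{"$before":"52w"}})'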
The JIRA issue that you are referring to is fixed in Artifactory version 7.38.0, so it is unlikely to be the cause, as you are on a higher version than 7.38.0.
In order to confirm, you may try the following. Add this system property to the $JFROG_HOME/artifactory/var/etc/artifactory/artifactory.system.properties file and restart Artifactory for the change to take effect:
artifactory.nuget.v2.search.page.size=1000
Alternatively, you may take all the NuGet DevExpress repositories offline and check whether you still encounter the memory issue. If you don't, there may be a regression. My assumption, however, is that your server simply needs more resources: a lot of microservices were introduced in Artifactory 7 compared to Artifactory 6.
Please check whether you satisfy the resource requirements mentioned in this page. If resources are the issue, you would need to tune your Artifactory as per this article.
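As a sketch of that kind of tuning (the sizes are examples, not recommendations, and this assumes your 7.x version exposes the shared.extraJavaOpts key), the JVM heap can be raised in $JFROG_HOME/artifactory/var/etc/system.yaml:
# Excerpt from system.yaml; example heap sizes only
shared:
  extraJavaOpts: "-Xms4g -Xmx12g"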
I am trying to perform a revision cleanup on an AEM repository to reduce its size via tar compaction. The repository size is 730 GB and the AEM version is 6.1, which is old. The estimated time for the activity was 7-8 hours, but we ran it for 24 hours straight and it was still running with no sign of progress. We have also tried all the commands recommended to speed up the process, but it still takes too long.
Kindly suggest an alternative to reduce the size of the repository.
Adobe does not provide support for older versions, so we cannot raise a ticket.
Try checking the memory assigned to the JVM on your machine (the RAM given to the heap). If you increase it, the compaction may take less time and actually finish.
The repository size is not big at all; mine is more than 1 TB and works fine.
In order to clean your repo you can also try running revision garbage collection directly from the AEM JMX console.
The only ways to reduce the data storage are to compact the repository (see the oak-run sketch below) or to delete content such as big assets or big packages. Create some queries to see which assets/packages are huge, and delete them.
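For reference, offline tar compaction on AEM 6.x is usually run with the oak-run tool while AEM is stopped. A hedged sketch (the paths and the jar version are examples; oak-run must match your Oak version, 1.2.x for AEM 6.1):
# Run with AEM stopped; paths and jar version are examples.
cd /opt/aem/crx-quickstart
# List, then remove unreferenced checkpoints that would otherwise pin old segments:
java -jar oak-run-1.2.18.jar checkpoints repository/segmentstore
java -jar oak-run-1.2.18.jar checkpoints repository/segmentstore rm-unreferenced
# Offline compaction of the tar segment store:
java -Xmx8g -jar oak-run-1.2.18.jar compact repository/segmentstore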
Hope you can fix your issue.
Regards,
We are trying to migrate an Alfresco CE system from 5.2 with Solr 4 to Alfresco 6.1 with Alfresco Search Services (we tried 1.3 and 1.4), but we are facing massive performance problems with Alfresco Search Services / Solr 6: searches on a comparable setup take 3-5x longer.
Some background:
Alfresco 5.2 / solr4 is running on Ubuntu 16 / Oracle JDK 8
Alfresco 6.1 / ASS 1.4 is running on Ubuntu 18 / AdoptOpenJDK 11
Repository and ASS are running on dedicated servers (no docker involved)
solr index is stored on a very fast SSD SAN ext4 device with no issues regarding random/sequential access or IOPS
all boxes have 8 cores, 16 GB RAM
all boxes have jvm with 12 GB heap space
both solr versions have the same configuration for caches
both solr versions have the same memory configuration
number of solr docs: ~ 7,000,000
What we could observe:
searching for simple words like alfresco, christmas, ... Alfresco 5.2/solr4 returns a not-yet-cached result in ~1-2 sec
searching for simple words like alfresco, christmas, ... Alfresco 6.1/solr6 returns a not-yet-cached result in ~7-15 sec
for Alfresco 5.2/solr4, the Solr admin UI shows ~9 of 12 GB heap space in use
for Alfresco 6.1/solr6, the Solr admin UI shows ~3 of 12 GB heap space in use
We already tried increasing RAM, heap space, and CPU without any change in search performance.
I wonder why Solr 6/ASS consumes so little heap space.
Does anybody have similar experience?
What should we do to get more acceptable response times?
I also tried to configure sharding in Solr 6 (without being convinced that this solves the real problem), but creating Solr shards in Alfresco 6.1 CE seems not to work either.
It turned out that the search performance issue was caused by a fix from the community to work around localization restrictions (by adding locale = '*' to the search query, as sketched below).
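To illustrate, with that workaround in place every query effectively ran with a wildcard locale, something like this hedged Search API sketch (host and credentials are examples; whether '*' is accepted as a locale value depends on the version):
curl -u admin:admin -H "Content-Type: application/json" \
  -d '{"query":{"language":"afts","query":"christmas"},"localization":{"locales":["*"]}}' \
  http://localhost:8080/alfresco/api/-default-/public/search/versions/1/search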
Instead, the index should always be created with cross-locale properties, which are not set by default, e.g. in shared.properties:
# Data types that support cross locale/word splitting/token patterns if tokenised
alfresco.cross.locale.datatype.0={http://www.alfresco.org/model/dictionary/1.0}text
alfresco.cross.locale.datatype.1={http://www.alfresco.org/model/dictionary/1.0}content
alfresco.cross.locale.datatype.2={http://www.alfresco.org/model/dictionary/1.0}mltext
Note that changing these cross-locale settings requires a full reindex. Please check https://github.com/Alfresco/SearchServices/issues/234 for more details.
I am using IBM JDK 1.7 (to support TLS ciphers) for a Struts-based application deployed with embedded Tomcat.
We are running into memory leaks (OOM) that generate almost 30 GB of dumps. This has become a routine event.
We have tried increasing the heap memory by including
wrapper.java.additional.1="-XX:MaxPermSize=256m -Xss2048k"
in wrapper.conf, but this didn't help much. (Note that -XX:MaxPermSize sizes the permanent generation and -Xss the thread stacks; neither raises the Java heap limit, which is set with -Xmx, and IBM J9 has no separate permanent generation anyway.)
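For what it's worth, a hedged sketch of what we are considering next in wrapper.conf (indices and sizes are illustrative; note that the Tanuki wrapper passes each wrapper.java.additional.N value to the JVM as a single argument, so combining several flags in one quoted value may not do what we expected):
wrapper.java.additional.1=-Xmx4g
wrapper.java.additional.2=-Xss2048k
# IBM J9: write a heap dump plus system core when an OutOfMemoryError is thrown
wrapper.java.additional.3=-Xdump:heap+system:events=systhrow,filter=java/lang/OutOfMemoryError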
Try using Memory Analyzer; you can follow the instructions here to download and install it:
https://www.ibm.com/developerworks/java/jdk/tools/memoryanalyzer/
It should provide an overview of your heap usage.
I'd recommend starting with the dominator tree view to see which objects are responsible for keeping data alive on the heap. You can also run various reports which analyse the heap for you.
You should have core files (.dmp) and heap dumps (.phd). The core files will be large but may be faster to access, and they also contain all the values of basic types in objects and strings; the .phd files contain only object sizes and the references between them. It may be easier to relate what you are seeing back to your code if you start with the core file.
After installing .NET 4.5 on a Windows Web Server 2008 R2, some performance counters show wrong values. I'm using the built-in Performance Monitor and looking at the group ASP.NET Apps v4.0.30319, at the counters Anonymous Requests/Sec and Requests Executing.
Before, those values reflected the current load and fluctuated, but now they increase with each request, so they behave more like Total Requests and Total Requests Executing.
We have seen the same behavior in a production environment and also on internal servers.
Have any of you seen the same behavior? I can't find anything about it when googling.
Regards
Patrik
A fix for this issue is available now: http://support.microsoft.com/kb/2805227 for Windows 8 and http://support.microsoft.com/kb/2805226 for the Windows 7 platform.
So the workaround below is no longer required.
The workaround was to execute the commands below from the .NET 4.5 installation directory (e.g. "%systemroot%\Microsoft.Net\Framework64\v4.0.30319"):
aspnet_regiis -u
aspnet_regiis -i (or aspnet_regiis -iru)
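To verify the counters behave correctly after applying the fix, a hedged PowerShell sketch that samples them (the __Total__ instance is standard; healthy values should fluctuate with load rather than grow monotonically):
# Sample the affected counter five times, two seconds apart
Get-Counter -Counter "\ASP.NET Apps v4.0.30319(__Total__)\Requests Executing" -SampleInterval 2 -MaxSamples 5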
One thing that comes to mind is permissions on this registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\version of ASP.NET\Names
I'd start off by checking those and ensuring they're correct. If you have multiple versions you can even cross-check/compare the settings (e.g. whether 2.0 still works fine).