I am trying to run a revision cleanup (tar compaction) on our AEM repository to reduce its size. The repository is 730 GB and the AEM version is 6.1, which is old. The estimated time for the activity was 7-8 hours, but we have run it for 24 hours straight and it is still running with no output. We have also tried running all the recommended commands to speed up the process, but it is still taking too long.
Kindly suggest an alternative to reduce the size of the repository.
Adobe no longer provides support for older versions, so we cannot raise a ticket.
Check the memory assigned to your machine, I mean the RAM allocated to the JVM. If you increase it, the compaction may take less time and actually finish.
The repository size is not big at all; mine is more than 1 TB and works fine.
To clean your repository, you can try running the garbage collector directly from AEM's JMX console.
The only ways to reduce the data storage are to compact the repository or to delete content such as large assets or large packages. Write some queries to find out which assets/packages are huge, then delete them.
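If the online revision cleanup never finishes, one alternative on AEM 6.1 (TarMK) is offline compaction with the oak-run tool. A minimal sketch, assuming the default segmentstore path and an oak-run version matching the Oak version shipped with your instance (verify both); AEM must be stopped while this runs:

java -jar oak-run-1.2.x.jar checkpoints crx-quickstart/repository/segmentstore
java -jar oak-run-1.2.x.jar checkpoints crx-quickstart/repository/segmentstore rm-unreferenced
java -jar oak-run-1.2.x.jar compact crx-quickstart/repository/segmentstore

Removing unreferenced checkpoints first matters, because compaction cannot reclaim segments that a stale checkpoint still pins.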
Hope you can fix your issue.
Regards,
After I save my JavaScript file, Meteor rebuilds the app, but it takes a very long time (I have to wait 15 minutes or more).
Are there any solutions for this?
Thanks!
Based on my personal experience:
In general, if you quit a build partway through, some build folders may be left behind in the project.
In that case, delete those folders first.
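A minimal sketch of clearing those leftovers from the project root (assuming a standard Meteor project layout; .meteor/local holds only generated build artifacts and is recreated on the next run):

cd /path/to/your-app
rm -rf .meteor/local/build .meteor/local/bundler-cache

Alternatively, meteor reset clears the whole build cache, but be aware that it also wipes the local development database.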
If you are using Windows, RAM can be an issue. If your machine has very little memory, building an ever-growing Meteor project becomes very annoying. I learned this when I upgraded from a 1st-generation Core i5 with 6 GB DDR2 RAM and a 500 GB HDD to a 2nd-generation Core i7 with 8 GB DDR3 RAM and a 248 GB SSD. Machine configuration plays a vital role in Meteor builds, for both development and production environments. The earlier configuration took 6 minutes to build; the new one takes less than a minute.
If you have antivirus software running in the background, it too can slow down the Meteor build process.
I made a little app and deployed it to an Ubuntu server using Meteor Up.
There are very few users each day (<10), but after a few days a lot of the server's memory is used.
So I think there is a memory leak somewhere in my code.
How can I find it?
Thanks a lot!
We actually had a memory leak over the last few days, and it was relatively easy to find using a package called heapdump; you can find it here: https://www.npmjs.com/package/heapdump
It is not made specifically for Meteor, but for Node.js. Read through the README carefully to install it. Afterwards, find a good moment to take the first heap dump by running kill -USR2 <pid_of_meteor_app> on the server. A good moment is when there is not much going on, but enough that the memory is still leaking.
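For context, a minimal sketch of wiring it into a Meteor app (the file path is illustrative; per the package README, simply loading the module registers the SIGUSR2 handler on Linux):

meteor npm install --save heapdump

// server/heapdump.js
import 'heapdump';
// a snapshot can also be triggered from code instead of the signal:
// import heapdump from 'heapdump';
// heapdump.writeSnapshot((err, filename) => console.log('heap dump written to', filename));

Each kill -USR2 then writes a .heapsnapshot file into the app's working directory.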
After a while, once you have seen a good amount of memory growth without a logical explanation, take another heap dump and download both.
Hit F12 to open the developer tools in your browser (Chrome, Firefox, Edge, ...) and go to the Memory tab there. Then import both heap dumps.
Now you need to find out what changed between those two heap dumps. What actually helped me understand how to do that was this article: https://www.useanvil.com/blog/engineering/isolating-memory-leak-in-node/
Remember that you are most probably looking for allocations of the same size (sometimes just a few kB each, as in our case, but hundreds of thousands of them), so sorting by size is a good idea.
In our case it was an outdated package called tslib which reserved all the memory after a day or so. We were on 2.3.1, so we went to https://github.com/microsoft/tslib/releases/tag/2.4.0 and read there:
This release includes the __classPrivateFieldIn helper as well as an update to __createBinding to reduce indirection between multiple re-exports.
We updated the package, which was a dependency of another package, and that fixed it.
Kadira, Monti APM, and the like are often absolutely useless in such cases; more often than not you cannot really track down the source with them.
There is a Kadira package to check your app. Have a look: https://kadira.io/
I am using IBM JDK 1.7 (to support TLS ciphers) for a Struts-based application deployed with embedded Tomcat.
We are running into memory leaks (OOMs) that have generated almost 30 GB of dumps. This has become a routine event.
We have tried increasing the memory by including
wrapper.java.additional.1="-XX:MaxPermSize=256m -Xss2048k" in wrapper.conf,
but this didn't help much.
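For reference, a minimal sketch of wrapper.conf entries that raise the actual heap, with each flag split into its own numbered property since the wrapper passes each property as a single JVM argument (the sizes are illustrative; note that -XX:MaxPermSize only sizes the permanent generation, and IBM's J9 JVM has no separate permanent generation anyway, so on that JDK the flag does little):

# each numbered property becomes one JVM argument
wrapper.java.additional.1=-Xms1024m
wrapper.java.additional.2=-Xmx4096m
wrapper.java.additional.3=-Xss2048k

With an IBM JDK, heap dumps and system cores on OOM are controlled via its -Xdump options rather than HotSpot flags.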
Try using Memory Analyzer; you can follow the instructions here to download and install it:
https://www.ibm.com/developerworks/java/jdk/tools/memoryanalyzer/
It should provide an overview of your heap usage.
I'd recommend starting with the dominator tree view to see which objects are responsible for keeping data alive on the heap. You can also run various reports which analyse the heap for you.
You should have core files (.dmp) and heap dumps (.phd). The core files will be large but may be faster to access, and they also contain all the values of basic types in objects and strings; the .phd files contain only object sizes and the connections between them. It may be easier to relate what you are seeing back to your code if you start with the core file.
I updated to Xcode 4 a few days ago; Xcode 4 is really nicer than Xcode 3. But I hit a memory issue while using Xcode 4: the total active memory kept growing while Xcode 4 was running, from 500 MB to 2.4 GB, even though the process memory stayed around 200 MB. It's strange.
After I closed Xcode, the total active memory did not go down right away; it stayed at 2.4 GB for about 10 minutes.
Has anyone else run into this issue? Thanks for any info!
== Updates ==
Upgraded to Xcode 4.0.2; it still has the memory issue.
I have the same problem. At times Xcode 4 starts to index your project (you can see the "Indexing" message in the status bar at the top of the window). During that it can use up to 2.8 GB (!) of memory.
As soon as it happens I stop using my laptop and start to make tea :)
If the swap exceeds 500 MB I restart my computer. I have 4 GB of memory installed in my MacBook 5,2 and there is no way to increase it :(
I don't know what that "indexing" actually does. I suppose it is connected with Code Sense in some way, but when I tried disabling code completion (Preferences -> Text Editing -> Editing), it didn't help.
I hope Apple fixes it in the next release. If not, the only option is to upgrade my computer, or use Xcode 3.2.
I'm having this same issue. Currently I'm using the following workaround:
I keep Activity Monitor open on a second screen, and whenever Xcode reaches 1 GB I restart it; it then works smoothly once again.
I know it's far from a perfect workaround, and I'm looking forward to a better one.
I have Xcode 4.0.1 and OS X 10.6.7.
I found a solution!!!
I wanted to clean my /Library/Caches. I accidentally deleted part of my /Library :-) so I decided to do a full system restore using the OS X DVD and my current (20-minute-old) Time Machine backup. I did the restore and ... it fixed the problem! A Time Machine restore clears all caches! (It should be enough to delete only the contents of /Library/Caches and {HomeDirectory}/Library/Caches.) Good luck!
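For anyone who wants the cache-only route, a minimal sketch (assuming the standard OS X cache locations; quit Xcode first, and the system-level directory needs admin rights):

rm -rf ~/Library/Caches/*
sudo rm -rf /Library/Caches/*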
After a lot of trial and error (mostly due to a lack of documentation and examples) I have managed to create MSI installers that install custom DLLs to WinSxS as side-by-side assemblies. There is only one problem: uninstalling leaves all the files (DLLs, manifests and catalogs) in the WinSxS directory. How can or should I best clean that up? I know for sure that nothing else references them.
I have read somewhere that WinSxS has a self-scavenging process that cleans up over time, but I could not find more information about it. Can you manually invoke this to clean things up?
The only other way I see is manually deleting those bits. First you have to change the owner of all files (assembly, catalog, manifest and their respective directory) from SYSTEM to an administrator account, adjust the permissions and delete them. There are also pieces left in the registry (I think HKLM\COMPONENTS\DerivedData\Components may be one place), but since WinSxS should be treated as opaque it is hard to find any information.
Scavenging isn't exposed anywhere that I know of. I'm not even sure when it is kicked off automatically. Maybe on uninstall of a service pack? Maybe some tool admins can run? I really forget.
Anyway, my suggestion is: don't fight it. There are so many twisty turns down there that it just isn't worth trying to get the disk space back. Once uninstalled, the bits still in the SxS cache will not be activated, so they are just wasting space.
It's a dumb design but blame Microsoft and don't try to overcompensate.
Here is an article; it's a fairly complete guide to WinSxS.
In short, you can only uninstall some components (all of their versions live in this folder), and you can run the Service Pack bridge-burning utility (on Vista it is named VSP1CLN.EXE and ships with SP1). Note that after running it, you will no longer be able to uninstall the SP or roll components back to any state prior to the SP release.
No one is convinced you can; short of a complete reinstall, your bloated WinSxS directory is there to stay.
There's been a long "discussion" of the problem on TechNet.
There is no documentation of the format, nor any instructions on how to remove files that are no longer needed; Microsoft seems to think that disk space is cheap. There is a self-scavenging feature, but no one is convinced it works, and if it does, it is very conservative (as you'd hope, since you don't want it breaking your OS).
You can tell if the scavenger is working by checking the C:\Windows\WinSxS\Temp\PendingDeletes folder, as this is where Windows Update or an installer moves files to; the scavenger just deletes the files in there.
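A quick way to look (assuming a default Windows install path):

dir C:\Windows\WinSxS\Temp\PendingDeletes

If files accumulate there and later vanish, the scavenger is doing its job.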
You'll notice that after you uninstall your assembly, the files are still there but can no longer be bound to; they are just "staged", or cached, not really installed.
Rob and gbjbaanb are correct: you cannot manually invoke a scavenge yourself. Don't try to delete the files by hand; there are multiple places in the registry where they are registered, DerivedData\Components being only one of the many references.
I think the rule on Vista is that scavenging is kicked off by the TrustedInstaller service after 10 minutes of machine inactivity following the last servicing operation (service pack, hotfix, etc.). But it's very fickle, so it doesn't run as often as it should. Just be patient, and the files will disappear on their own.
Well, I was having some issues: I have an 80 GB SSD for my Windows installation, and the WinSxS folder was about 12 GB.
I was searching the net and I found this command:
DISM.exe /online /Cleanup-Image /spsuperseded
And now my WinSxS is 7 GB, which is wonderful news.
There are a few updates to the cleanup method that apply to newer OS versions. Check http://www.karafilis.net/winsxs-cleanup
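For what it's worth, on Windows 8 / Server 2012 and later the cleanup is exposed through DISM's component-store switches (a sketch from an elevated command prompt; /ResetBase makes all superseded updates permanent, so they can no longer be uninstalled):

rem report how much space the component store could reclaim (Windows 8.1+)
Dism.exe /Online /Cleanup-Image /AnalyzeComponentStore
rem remove superseded components
Dism.exe /Online /Cleanup-Image /StartComponentCleanup
rem optionally reclaim more space at the cost of update rollback
Dism.exe /Online /Cleanup-Image /StartComponentCleanup /ResetBase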