How to delete indices older than 30 days using automated policies in Open Distro for Elasticsearch? - Kibana

I am using Open Distro for Elasticsearch and want to create a policy that deletes indices older than 30 days. I have seen this doc: https://opendistro.github.io/for-elasticsearch-docs/docs/im/index-rollups/ but the steps are not clear.
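For what it's worth, the page linked above covers index rollups, which is a different feature; age-based deletion is handled by the Index State Management (ISM) plugin. A minimal policy sketch follows (the policy name delete_old_indices is a placeholder; check the exact request format against the ISM docs for your Open Distro version):

    PUT _opendistro/_ism/policies/delete_old_indices
    {
      "policy": {
        "description": "Delete indices older than 30 days",
        "default_state": "active",
        "states": [
          {
            "name": "active",
            "actions": [],
            "transitions": [
              { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
            ]
          },
          {
            "name": "delete",
            "actions": [ { "delete": {} } ],
            "transitions": []
          }
        ]
      }
    }

The policy then needs to be attached to your indices; the mechanism varies by version (an opendistro.index_state_management.policy_id index setting in older releases, an ism_template block in the policy or the Kibana Index Management UI in newer ones).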

Related

How to set retention policy in jfrog Artifactory

I'm looking for a way to set a retention period in JFrog Artifactory that will remove SNAPSHOT versions older than 100 days.
If any team needs particular files kept in Artifactory forever, I need to exclude those paths in the repo from the retention policy, while the other directories are removed as per the policy.
According to this article you should be able to build an AQL query that will help you find the artifacts you would like to delete, and then use the code from the article to delete them.
Other than that, JFrog has a few built-in plugins on their GitHub page; you can see if one of them suits you.
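As a rough illustration of the AQL approach (the repo name libs-snapshot-local and the keep-forever path are made-up examples; $before accepts relative times like 100d):

    items.find({
      "repo": "libs-snapshot-local",
      "name": {"$match": "*-SNAPSHOT*"},
      "created": {"$before": "100d"},
      "path": {"$nmatch": "keep-forever/*"}
    })

You can run a query like this against the /api/search/aql REST endpoint and feed the resulting paths to a delete script; each team's excluded paths become further $nmatch clauses.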

Install R on the nodes for Azure batch services

I can create the Batch service resources using PowerShell as described here: https://learn.microsoft.com/en-us/azure/batch/batch-powershell-cmdlets-get-started
I want to run an R script on the nodes, and I need R installed on them, as none of the available VMs (Windows or Linux) come with R preinstalled. I have currently installed R by manually logging into the VM, but I want to create the Batch resources and then install R on the nodes, preferably through a script, before I run the R code. How can I go about this?
There are four main ways to load the necessary software onto VMs:
Create a start task, along with resource files if needed, to prep the compute node per your requirements (see the sketch below).
Create a custom image that already contains all of your software preconfigured.
Use containers instead of directly loading software onto the compute node.
Utilize application packages.
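For the first option, a sketch of the startTask portion of a pool definition (REST/JSON form) that installs R on an Ubuntu node; a Windows pool would need a different command line, e.g. a silent R installer, and the package name r-base is the Debian/Ubuntu one:

    "startTask": {
      "commandLine": "/bin/bash -c \"apt-get update && apt-get install -y r-base\"",
      "userIdentity": { "autoUser": { "elevationLevel": "admin" } },
      "waitForSuccess": true
    }

Setting waitForSuccess to true keeps Batch from scheduling work on a node until the start task has completed, so your R code never runs on a node without R.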

JupyterLab in a psychology experiment where participants use JupyterLab

I'm preparing a series of experiments investigating how people learn while interacting with JupyterLab.
Like any psychology experiment, this will require experimental control, meaning that I need to disable parts of the interface to prevent participants from going off task during the study.
It would also be helpful to put JupyterLab in a particular state for each user, e.g. extensions active and tabs/panes in a particular layout.
Any suggestions along the above would be appreciated. I have previously set up a JupyterHub (TLJH) and developed an extension, so I have some background knowledge on this topic.
Maybe you could use workspaces to set up JupyterLab when participants first come in? They're stored as JSON files (you can find the workspace directory using jupyter lab paths).
You can disable extensions with jupyter labextension disable <extension name>, even unlisted core ones such as the file browser.
I think you will also need to add your own extension to remove or reorganize some of the elements you don't want (such as some buttons in the toolbars, etc.).
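For example (the extension name below is the core file browser; the workspace file name is arbitrary):

    # see which extensions exist and can be disabled
    jupyter labextension list
    # disable the core file browser
    jupyter labextension disable @jupyterlab/filebrowser-extension
    # locate the workspaces directory
    jupyter lab paths
    # capture the current layout so it can be seeded for each participant
    jupyter lab workspaces export > experiment.jupyterlab-workspace

The exported JSON can then be loaded per user with jupyter lab workspaces import.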

How to migrate repository data from Alfresco 4 to 5?

I'm working on a migration from Alfresco 4 to 5, and applying add-ons to Alfresco 4 for this purpose is not an option. The databases used by the two versions are different from each other. I have tried ACP files, and it is very time consuming. Is there a size limitation on ACP files? What other methods can be used?
Use the Standard Upgrade Procedure
What is your main intention? "Just" doing an upgrade from 4 to 5?
In that case the robust, easy way would be to:
Install the required modules containing custom models on your target system (or, if you customized models in the extension path, copy that config).
Back up and restore the Alfresco repository database to your new (5.x) system. If your target system uses a different DB product (not just a different version), you need to manage the DB migration using DB-specific migration tools; using Alfresco export/import is not an alternative.
Sync alf_data/contentstore to your new system (make sure the DB dump is always older than the content store, or do an offline sync).
During startup, Alfresco recognizes that the repository needs to be upgraded and does everything itself. Check catalina.out for output during the migration (a rough sketch of these steps follows below).
If you only need a subset of your previous system, it is much easier to delete the unwanted content afterwards (don't forget to purge the trash, and configure the cleaner job not to wait 14 days).
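Assuming PostgreSQL and default install paths (both are assumptions; adjust to your setup), the sketch could look like:

    # on the 4.x system: dump the repo database and sync the content store
    pg_dump -U alfresco alfresco > alfresco_db.sql
    rsync -a alf_data/contentstore/ newhost:/opt/alfresco/alf_data/contentstore/

    # on the 5.x system: restore the dump, then start Alfresco and watch the upgrade
    psql -U alfresco alfresco < alfresco_db.sql
    tail -f tomcat/logs/catalina.out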
Some words concerning ACP
It is nice tooling for exporting single directories, but unfortunately it is limited:
no support across Alfresco versions (exactly your case)
no support for site metadata / no site export/import (it may work after the 4.x changes that put site metadata in nodes, but I suppose nobody has tested this)
must run in one transaction, so the hard limits depend on your hardware / JVM configuration, but I wouldn't recommend exporting/importing more than a few thousand nodes at once
If you really need to export/import a huge number of documents, you should run the export/import in a separate Java process, which means your Alfresco needs to be shut down. See https://wiki.alfresco.com/wiki/Export_and_Import#Export_Tool
ACP does have a file limit (I can't remember the actual number), but we've had problems with ACPs below that limit too. We've given up on this approach in favor of the Alfresco Bulk Import Tool.
One big advantage of this tool is that it can continue a failed import from the point of failure, so there is no need to delete the partially imported batch and start all over again. It can also update files as needed, something the ACP method can't do (it would fail with DuplicateChildNameNotAllowed).

In what order should I update to SVN revisions?

I recently started working with SVN; I am using it in combination with WordPress.
I just made a number of updates to WordPress and to some plugins, and I would like to know in what order I need to update, or whether it even matters.
Here is what I did:
Locally on my computer:
svn delete the folder xyz
commit the deletion
create a new folder with the same name
svn add the folder
svn commit
Now if I log in to our development server, do I just run "svn update"?
Or do I need to go through the various revisions by updating to specific revision numbers?
The reason I ask, is because I have had one or two tree conflicts in the past where I got:
Tree conflict (local dir unversioned, incoming dir add upon update) for location wp-content/plugins/ExamplePlugin/ExampleSubDir
Does my workflow lead to such errors? Am I overlooking something?
Your workflow is fine, and yes, you would just perform a single svn update on the other computer to get fully up-to-date.
The workflow you describe would produce a tree conflict if you happened to have an unversioned folder named "xyz" in the same location as the one you just committed (which is what the error says in the parenthetical remark). You should remove that unversioned folder and then let SVN add that folder itself (via the call to update).
If you haven't already, it might be worth reviewing some of the documentation to ensure you understand the fundamentals.
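If you do hit the tree conflict again, the cleanup on the development server could look like this (the path is taken from your error message):

    # move the unversioned folder out of the way, then update
    mv wp-content/plugins/ExamplePlugin/ExampleSubDir /tmp/ExampleSubDir.bak
    svn update

    # if the conflict was already recorded, mark it resolved after cleaning up
    svn resolve --accept working wp-content/plugins/ExamplePlugin/ExampleSubDir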
