Is Kibana Watcher part of AWS?

Is Kibana Watcher part of the AWS Elasticsearch Open Distro?
It should have been part of Stack Management in the AWS Kibana UI.
I do not see it and would like to know if there is a way to enable it.

Watcher is available neither on the AWS service nor in Open Distro. It's an Elastic product (see the documentation), and you can either get it on Elastic Cloud or by running Elasticsearch yourself (download).
PS: Watcher is a commercial product, so you will either need a license or use Elastic Cloud.

As mentioned, you can't use Watcher. But ODFE does provide Alerting of its own.
Plugin installation and configuration are documented here.
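For illustration, once the plugin is installed, a monitor can be created through the Alerting plugin's REST API. A minimal sketch (the index pattern, query, and trigger below are made up for the example):

curl -XPOST "https://localhost:9200/_opendistro/_alerting/monitors" \
  -H 'Content-Type: application/json' -d'
{
  "type": "monitor",
  "name": "errors-in-logs",
  "enabled": true,
  "schedule": { "period": { "interval": 1, "unit": "MINUTES" } },
  "inputs": [{
    "search": {
      "indices": ["logs-*"],
      "query": { "size": 0, "query": { "match": { "level": "ERROR" } } }
    }
  }],
  "triggers": [{
    "name": "any-errors",
    "severity": "1",
    "condition": {
      "script": {
        "source": "ctx.results[0].hits.total.value > 0",
        "lang": "painless"
      }
    },
    "actions": []
  }]
}'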

Related

firebase - how do I install software onto a server I have deployed to?

I am getting an error with Firebase functions. When I run the functions locally using emulators they work fine, but when I run firebase deploy and then execute a certain function, I get an error suggesting that the version of ffmpeg installed on the server I have deployed to is out of date.
How do I update ffmpeg (or indeed any software) on the server? Maybe I SSH into it and update the software? Maybe I should provide some config defining what software my code depends on prior to deployment? Please advise how an update can be done, thanks.
OPTIONAL READING:
My Node.js code uses execSync(myFfmpegCommand) which is why the dependency exists
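For context, the call is essentially this (the command and file names are illustrative):

const { execSync } = require('child_process');

// Shells out to whatever ffmpeg binary exists on the runtime's PATH.
const myFfmpegCommand = 'ffmpeg -i input.mp4 -vn -acodec copy output.aac';
execSync(myFfmpegCommand);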
Here are my findings.
You cannot install custom software onto a Node.js runtime environment, as the disk is read-only. If the software you want is not on the runtime environment already, then you are stuffed.
I further found that functions are not meant for heavyweight work but just for lightweight ops. If a function needs to do some heavy lifting, it can ask Google App Engine to do it on its behalf. The function communicates with App Engine over HTTP or Pub/Sub.
What is App Engine, I hear you ask! It basically allows you to deploy server code to the cloud. For example, think of an express.js server that you typically use for your backend; that's what App Engine hosts. Then you just deploy it!
App Engine can be configured through a yaml file to determine memory and other resources, and optionally a Dockerfile to determine what OS to use and what software to install (a sample Dockerfile follows the yaml below). The yaml looks like this:
runtime: custom # uses Dockerfile
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
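And the matching Dockerfile might look something like this. This is a minimal sketch, assuming a Debian-based Node image and an entry point of server.js (both are placeholders; adjust to your app):

# Debian-based Node image, so apt-get is available
FROM node:16-slim

# Install ffmpeg (and any other OS packages the app shells out to)
RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

# App Engine flex routes requests to port 8080
EXPOSE 8080
CMD ["node", "server.js"]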
Consider App Engine as the heavy lifter. When a firebase function wants to do some hard work, it just asks App Engine to do it for it! In this manner, firebase functions become simple event handlers to trigger other work.
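To make that concrete, here is a hedged sketch of a function handing work off to App Engine over HTTP (the service URL, route, and payload are made up for the example):

const functions = require('firebase-functions');
const fetch = require('node-fetch'); // or the global fetch on newer Node runtimes

exports.transcode = functions.https.onRequest(async (req, res) => {
  // Delegate the heavy ffmpeg work to the App Engine service.
  const response = await fetch('https://my-ffmpeg-service.appspot.com/transcode', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ videoPath: req.body.videoPath }), // illustrative payload
  });
  res.status(response.status).send(await response.text());
});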

How to configure Kubernetes to update from a git repository?

I just installed the bitnami/wordpress image using helm. Is it possible to sync it with git, so that when I change some files in the git repository the Kubernetes pods are updated?
I mean updating the WordPress source code, because I am modifying plugins in the wp-content/plugins dir.
You can use Argo CD or Flux to automate this type of GitOps workflow. Check their documentation; they are pretty powerful and popular tools for GitOps on Kubernetes.
A possible solution is to use git-sync in a sidecar container. It will periodically pull files down from a repository and copy them to a volume.
Here is a sample manifest which uses git-sync to update the content hosted on a simple nginx web server:
https://github.com/nigelpoulton/ps-vols-and-pods/blob/master/Multi-container-Pods/sidecar.yml
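Adapted to the WordPress case, the sidecar pattern looks roughly like this. A minimal sketch: the repo URL and mount paths are illustrative, and the env vars assume git-sync v3 (newer versions use different names):

apiVersion: v1
kind: Pod
metadata:
  name: wordpress-git-sync
spec:
  volumes:
    - name: content          # shared between the two containers
      emptyDir: {}
  containers:
    - name: wordpress
      image: bitnami/wordpress
      volumeMounts:
        - name: content
          # git-sync checks the repo out into a subdirectory of this volume
          mountPath: /bitnami/wordpress/wp-content/plugins
    - name: git-sync
      image: k8s.gcr.io/git-sync:v3.1.6
      env:
        - name: GIT_SYNC_REPO
          value: https://github.com/example/wp-plugins.git   # illustrative repo
        - name: GIT_SYNC_BRANCH
          value: master
        - name: GIT_SYNC_WAIT
          value: "60"        # re-pull every 60 seconds
      volumeMounts:
        - name: content
          mountPath: /tmp/git   # git-sync's default checkout root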
One way I managed it (although possibly a rookie way) was through GitHub Actions.
Here's an example of mine
And here are the official docs from Docker on configuring it with GitHub Actions
You basically want to tell GitHub Actions to rebuild and push your image, and then tell your cluster to refresh, like so:
If you're using kubectl to manage your cluster, check whether your version supports kubectl rollout restart. You can use it to force any deployment to restart and smoothly recreate its pods (it also re-pulls the supporting image).
e.g.: kubectl rollout restart deployment/my_deployment
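Putting those together, a workflow could look roughly like this. This is a hedged sketch: the image name, secrets, and deployment name are placeholders, and it assumes the runner already has kubectl configured against your cluster:

name: build-and-deploy
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and push image
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
          docker build -t myuser/my_image:latest .
          docker push myuser/my_image:latest
      - name: Restart deployment to pick up the new image
        run: kubectl rollout restart deployment/my_deployment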

Can you just store binaries?

We are using Artifactory Enterprise and, in addition to "normal" usage, we would like to just store some binaries in Artifactory. This is so we can limit egress and pull the binaries from Artifactory instead of the general Internet. Is this possible? Is there a documentation link that will help explain the process?
Yes, this can be done by creating a generic local repository and deploying the binaries through the UI or the REST API; you can then pull the binaries from that generic local repository. Refer to this blog as well.
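For example, once a generic local repository exists (here called generic-local, which is illustrative, as are the host and credentials), a binary can be deployed and fetched with plain HTTP PUT/GET:

# Deploy a binary (PUT) into the generic repo
curl -u myuser:mypassword -T ./tool.tar.gz \
  "https://artifactory.example.com/artifactory/generic-local/tools/tool.tar.gz"

# Later, pull it back down instead of going to the general Internet
curl -u myuser:mypassword -O \
  "https://artifactory.example.com/artifactory/generic-local/tools/tool.tar.gz"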

Upgrading Artifactory setup with Remote Repositories

I have an Artifactory server with a bunch of remote repositories.
We are planning to upgrade from 5.11.0 to 5.11.6 to take advantage of a security patch in that version.
Questions are:
do all repositories need to be on exactly the same version?
is there anything else I need to think about when upgrading multiple connected repositories? (there is nothing specific about this in the manual)
do I need to do a system-level export just on the primary server, or should I be doing it on all of the remote repository servers?
Lastly, our repositories are huge... a full System Export to back them up will take too long...
is it enough to just take the config files/dirs?
do I get just the config files/dirs by ticking "Exclude Content"?
If you have an Artifactory instance that points to other Artifactory instances via smart remote repositories, then you will not have to upgrade all of the instances as they will be able to communicate with each other even if they are not on the same version. With that said, it is always recommended to use the latest version of Artifactory (for all of your instances) in order to enjoy all the latest features and bug fixes and best compatibility between instances. You may find further information about the upgrade process in this wiki page.
In addition, it is always recommended to keep backups of your Artifactory instance, especially when attempting an upgrade. You may use the built-in backup mechanism, or you may manually back up your filestore (by default located in $ARTIFACTORY_HOME/data/filestore) and take database snapshots.
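If the UI route is too slow, a system export can also be triggered through the REST API. A sketch, assuming an admin user and assuming your version exposes the excludeContent option (which would mirror the UI's "Exclude Content" checkbox):

curl -u admin:password -X POST \
  "https://artifactory.example.com/artifactory/api/export/system" \
  -H "Content-Type: application/json" \
  -d '{
        "exportPath": "/backup/artifactory",
        "includeMetadata": true,
        "excludeContent": true,
        "createArchive": false
      }'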
What do you mean by
do all repositories need to be on exactly the same version?
Are you asking about Artifactory instances? Artifactory HA nodes?
Regarding the full system export:
https://www.jfrog.com/confluence/display/RTF/Managing+Backups
https://jfrog.com/knowledge-base/how-should-we-backup-our-data-when-we-have-1tb-of-files/
For more info, you might want to contact JFrog's support.

How to get list of installed features in Karaf using REST API?

I know it can be obtained from the command line by running feature:list -i, but is there any API/JSON available to fetch this?
You can use jolokia and hawtio to retrieve that information quite easily. I believe you can easily add the hawtio repo from the native karaf feature repos (repo-add hawtio). Then you need to install jolokia, hawtio, and the karaf web console. From the karaf web console alone you can see a full list of features, but I find the hawtio interface to be a godsend.
A REST API can be installed without the need for Hawtio; it uses jolokia under the hood for accessing the bundle list.
The jolokia project provides web applications called agents that serve a REST API. For quick experiments you can deploy the war jolokia-war-unsecured into the hot-deploy folder of a running karaf instance. This installs a REST web service at e.g. http://localhost/jolokia-war-unsecured/ which does not require any authentication.
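For example, with a jolokia agent running you can read the Karaf features MBean over HTTP. A sketch: the port, credentials, and MBean name below match a default Karaf 4 setup but may differ on your version:

# Returns the feature list (including installed state) as JSON
curl -u karaf:karaf \
  "http://localhost:8181/jolokia/read/org.apache.karaf:type=feature,name=root/Features"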
