How to index the data received from the OpenShift cluster log forwarder - Kibana

I am receiving data into my Elasticsearch directly from the OpenShift log forwarder (Fluentd), without any pipeline or integrations. I am using open-source Elasticsearch outside of the OpenShift platform, and the logs arrive under three different tags: Application, Infra & Audit.
All the application logs are currently segregated into one particular index, Application, but I would like to split them by application into indices with different names in order to create dashboards and visualizations.
How can I resolve this? Any samples to try would be helpful.
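One way to approach this is to leave Fluentd pointed at the single application index and let an Elasticsearch ingest pipeline rewrite the destination index per application. Below is a minimal sketch using Python's requests library; the pipeline name, the index name "app", and the kubernetes.container_name field are assumptions to adapt to the fields your forwarder actually sends.

    import requests

    ES = "http://localhost:9200"  # placeholder: your Elasticsearch URL

    # Ingest pipeline that rewrites the destination index per application,
    # based on the container name the forwarder attaches to each record.
    pipeline = {
        "description": "Route application logs to per-app indices",
        "processors": [
            {
                "set": {
                    "field": "_index",
                    "value": "app-{{kubernetes.container_name}}",
                    "ignore_empty_value": True,
                }
            }
        ],
    }
    requests.put(f"{ES}/_ingest/pipeline/route-by-app", json=pipeline).raise_for_status()

    # Attach the pipeline to the existing application index so that every
    # document Fluentd sends there gets re-routed at ingest time.
    requests.put(
        f"{ES}/app/_settings",  # 'app' = whatever your application index is called
        json={"index.default_pipeline": "route-by-app"},
    ).raise_for_status()

Each document then lands in an index such as app-myservice, which can be targeted individually for dashboards and visualizations.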

Related

AML Studio: Register multiple gateways on the same server

I am struggling to find a way to register multiple gateways. I have a local instance of my SQL Server and have created a gateway to access it from the AML Studio workspace. It works fine, but now I would like to access the same SQL Server instance from another workspace. So the question is: how do I register a new gateway without removing the previous one?
I followed this documentation.
Does the following explanation mean that there is no way to do that?
You can create and set up multiple gateways in Studio for each workspace. For example, you may have a gateway that you want to connect to your test data sources during development, and a different gateway for your production data sources. Azure Machine Learning gives you the flexibility to set up multiple gateways depending upon your corporate environment. Currently you can’t share a gateway between workspaces and only one gateway can be installed on a single computer.
It is quite limiting, as connecting to the same server from multiple workspaces may sometimes be crucial.
Well, I have finally found a way to bypass this limitation. In this documentation I found that:
The IR does not need to be on the same machine as the data source. But staying closer to the data source reduces the time for the gateway to connect to the data source. We recommend that you install the IR on a machine that's different from the one that hosts the on-premises data source so that the gateway and data source don't compete for resources.
So the logic is pretty simple: you give another machine on the VPN access to your local server and install your gateway there. Important: I set up the firewall rules on the server beforehand so that the connection could be established remotely.
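As a quick sanity check before registering the gateway on the second machine, you can verify that the SQL Server port is actually reachable over the VPN. A minimal sketch in Python; the host address and the default SQL Server port 1433 are placeholders for your environment:

    import socket

    # Reachability check from the gateway machine to the SQL Server box over
    # the VPN; host and port are placeholders for your environment.
    SQL_HOST, SQL_PORT = "10.8.0.5", 1433

    try:
        with socket.create_connection((SQL_HOST, SQL_PORT), timeout=5):
            print(f"{SQL_HOST}:{SQL_PORT} is reachable - VPN and firewall look OK")
    except OSError as err:
        print(f"Cannot reach {SQL_HOST}:{SQL_PORT}: {err}")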

Associate Elastic IP with scheduled AWS data pipeline

Does anybody know whether it is possible to associate an Elastic IP with a scheduled data pipeline? I have configured a data pipeline to run every day. During pipeline execution I need access to a Google DB, and to get that access I have to add an IP (CIDR) to the DB's authorization settings. But without knowing the public IP of the EC2 instance created by the data pipeline, I cannot configure it.
So I need a way to set up an Elastic IP once and have it used by the EC2 instance that the data pipeline creates automatically each time the scheduler runs it.
I am not aware of a way to associate an EIP, however, you can create a VPC with a NAT gateway. When your EC2 instance is created, put it in the subnet you've created, and if everything is set up properly your outbound public IP will always be the same: the NAT gateway's Elastic IP.
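A minimal sketch of that setup with boto3, assuming the VPC, a public subnet for the NAT gateway, and the private route table already exist (all IDs below are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate an Elastic IP for the NAT gateway; this is the stable public
    # IP that external services (e.g. the Google DB) will see.
    eip = ec2.allocate_address(Domain="vpc")

    # Create the NAT gateway in a public subnet of the VPC and wait for it
    # to become available.
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0123456789abcdef0",   # placeholder: public subnet
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Route all outbound traffic from the private subnet (where the
    # pipeline's EC2 instance runs) through the NAT gateway.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",  # placeholder: private route table
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )

    print("Whitelist this IP in the DB authorization settings:", eip["PublicIp"])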
A second option would be to run your pipeline on a Task Runner.

Not able to upload files to OpenStack Swift

We have created an OpenStack cluster with one proxy server and three storage nodes. The configuration consists of two regions and three zones.
We are able to create containers, but while trying to upload files we get a 503 Service Unavailable and see the logs below in swift.log:

How to send one server's collectd information to a hosted server?

I am new to the Graphite monitoring tool and have a question about this setup. I have two servers; one is treated as the hosted server (with Graphite, collectd, StatsD, and Grafana installed), and Grafana displays all the metrics there. On the second server I have installed Graphite and collectd. Now I need to send the second server's collectd information to the first (hosted) server, and those metrics need to be displayed on the web using Grafana.
Could you please suggest a plugin or another way to set up this configuration?
Thanks.
You don't actually need Graphite on the second host; you can just configure collectd on that host to write to Graphite (actually the Carbon ingest API) on the first host.
https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_write_graphite
If you do want to have Graphite on both servers for some reason, you can use multiple Node entries in your collectd config to have it send metrics to both Graphite instances, as in the sketch below.
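For example, the relevant section of collectd.conf on the second server might look like this (hostnames are placeholders, and the second Node is only needed if you really do keep Graphite on both servers; Carbon's plain-text listener defaults to port 2003):

    LoadPlugin write_graphite

    <Plugin write_graphite>
      # Primary target: Carbon on the hosted server
      <Node "hosted">
        Host "graphite-host-1"
        Port "2003"
        Protocol "tcp"
        Prefix "collectd."
      </Node>
      # Optional second target if Graphite stays on this server as well
      <Node "local">
        Host "localhost"
        Port "2003"
        Protocol "tcp"
        Prefix "collectd."
      </Node>
    </Plugin>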

Publishing Hystrix metrics to an API

I have a web service running as multiple Docker containers. The Docker hosts are not under my control. I use Hystrix to record metrics, and I thought of using Turbine to monitor them. But I do not have access to the real hostnames and ports of my web app instances to give to Turbine. So I am thinking of a push model where the individual instances of my web app publish the metrics to another API, on which I can run dashboard tools. I looked at Servo, but it also does not suit my needs, as it publishes to JMX. Can I use a custom publisher for this? Are there examples for this use case?
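Hystrix does let you register a custom metrics publisher via HystrixPlugins.getInstance().registerMetricsPublisher(...), but a simpler push model is to periodically poll the metrics Hystrix already keeps in memory and POST them to your own API. A rough Java sketch under that assumption; the collector endpoint and the JSON payload shape are invented placeholders:

    import com.netflix.hystrix.HystrixCommandMetrics;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Push-style reporter: instead of wiring a custom HystrixMetricsPublisher,
    // periodically read the metrics Hystrix keeps in memory and POST them to
    // the collection API.
    public class MetricsPusher {
        private static final HttpClient HTTP = HttpClient.newHttpClient();
        private static final String COLLECTOR = "http://metrics.example.com/ingest"; // hypothetical

        public static void start() {
            Executors.newSingleThreadScheduledExecutor()
                     .scheduleAtFixedRate(MetricsPusher::pushOnce, 10, 10, TimeUnit.SECONDS);
        }

        static void pushOnce() {
            for (HystrixCommandMetrics m : HystrixCommandMetrics.getInstances()) {
                HystrixCommandMetrics.HealthCounts health = m.getHealthCounts();
                String json = String.format(
                        "{\"command\":\"%s\",\"total\":%d,\"errorPct\":%d}",
                        m.getCommandKey().name(),
                        health.getTotalRequests(),
                        health.getErrorPercentage());
                HttpRequest req = HttpRequest.newBuilder()
                        .uri(URI.create(COLLECTOR))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(json))
                        .build();
                HTTP.sendAsync(req, HttpResponse.BodyHandlers.discarding());
            }
        }
    }

Calling MetricsPusher.start() once at application startup in each container would then push each instance's metrics without Turbine needing to know any hostnames or ports.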
