Kibana without Elasticsearch

As we know, Elasticsearch stores, searches, and analyzes data, which Kibana then visualizes. But my data is already stored in PostgreSQL, and it is huge (full traffic from a telecom company), so copying it into Elasticsearch just to see a graph in Kibana is unattractive: the same data would be duplicated in both Postgres and Elasticsearch. We want to build a reporting tool on top of it.
Kibana has all the features we want, but we don't want this duplication of data; in other words, we want to use only Kibana. Is that possible? What should I do to avoid the duplication, and what are the options?

My opinion: if you have all this data and it is not in a NoSQL document database, you are going about it the wrong way. Whether it's Elasticsearch or Mongo, you should use that kind of database.
As far as I know, there is no way of using Kibana to display information from something other than Elasticsearch.
You could check out Grafana (http://grafana.org/); it has that and more.
Good luck.

For connecting to SQL databases, Tableau is one of the best options. Having worked with both Tableau and Kibana, I can say that Tableau supports almost all the operations Kibana supports, and it can also produce graphs for complex calculations such as
sum(field1) / sum(field2) over the values of field3,
which cannot be generated in Kibana.
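To make that metric concrete, here is a small sketch in Python/pandas rather than Tableau's own calculation language; field1, field2, and field3 are the placeholder column names from above, and the data is invented purely for illustration.

```python
import pandas as pd

# Toy data standing in for the real table; the column names are the
# placeholders used in the answer above.
df = pd.DataFrame({
    "field1": [10, 20, 30, 50],
    "field2": [5, 10, 15, 20],
    "field3": ["a", "a", "b", "b"],
})

# sum(field1) / sum(field2) over the values of field3: sum first, then divide.
# Note this is a ratio of sums, not an average of per-row ratios.
sums = df.groupby("field3")[["field1", "field2"]].sum()
ratio = sums["field1"] / sums["field2"]
print(ratio)  # a -> 30/15 = 2.0, b -> 80/35 ≈ 2.29
```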

This is way late, but the way I would tackle this is to write an app that pulls data out of your database and publishes it to Elasticsearch. Sure, there is duplication of data, but you can focus on only the data you care about. You also wouldn't be querying a production database when displaying charts in Kibana, which can add its own complications.
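As a rough illustration of that approach, here is a minimal Python sketch that streams rows out of PostgreSQL and bulk-indexes them into Elasticsearch. The table, columns, index name, and credentials are placeholder assumptions, not something from the original answer.

```python
# A minimal extract-and-publish sketch, assuming a hypothetical `traffic`
# table in Postgres and a local Elasticsearch node.
import psycopg2
from elasticsearch import Elasticsearch, helpers

pg = psycopg2.connect(host="localhost", dbname="telecom", user="report", password="...")
es = Elasticsearch("http://localhost:9200")

def fetch_rows():
    # A named (server-side) cursor streams rows instead of loading them all.
    with pg.cursor(name="traffic_cursor") as cur:
        cur.execute("SELECT id, caller, callee, started_at, bytes FROM traffic")
        for rec_id, caller, callee, started_at, nbytes in cur:
            yield {
                "_index": "traffic",
                "_id": rec_id,  # a stable id makes re-runs idempotent
                "_source": {
                    "caller": caller,
                    "callee": callee,
                    "started_at": started_at.isoformat(),
                    "bytes": nbytes,
                },
            }

# Bulk-index only the columns the Kibana dashboards actually need.
helpers.bulk(es, fetch_rows())
```

A scheduled job could run this periodically, so the Elasticsearch index stays fresh enough for dashboards while Postgres remains the system of record.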

Related

Tableau Server Reports Impact Analysis

Is there a way to search across all of Tableau Server to determine whether a table and/or column is being used? I read that the XML for the reports is stored in a Postgres database. How do I access that database, or is there an easier way? We need to drop tables/columns but need to know which reports will break.

Effective way to query BMC Helix Remedy database other than Smart Reporting

Currently we are connecting to the AR System through the Oracle database for this purpose. I need to know: is there any alternative way to access or query the Remedy database effectively? Is there any built-in API we can use that would make the work more efficient?
One option is the REST API, with which you can query the forms directly.
Please check the following URL:
REST API Doc
Requests return a JSON object containing all the data.
To obtain access to all forms, you need to create a "service" user with a fixed license and permissions to the forms you would like to read through the API.
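As a hedged illustration of that flow in Python, the sketch below logs in with such a service user, queries a form, and logs out. The host, credentials, form name (HPD:Help Desk is just a common example), and field names are all assumptions; check the REST API documentation above for the authoritative details.

```python
import requests

BASE = "https://remedy.example.com"  # placeholder Remedy REST endpoint

# 1. Log in as the "service" user to obtain a JWT token (returned as plain text).
resp = requests.post(
    f"{BASE}/api/jwt/login",
    data={"username": "svc_report", "password": "..."},
)
resp.raise_for_status()
token = resp.text

# 2. Query entries on a form; q is a normal Remedy qualification string.
entries = requests.get(
    f"{BASE}/api/arsys/v1/entry/HPD:Help Desk",  # example form name
    headers={"Authorization": f"AR-JWT {token}"},
    params={"q": "'Status' = \"Assigned\""},
)
entries.raise_for_status()
print(entries.json())  # JSON object containing the matching records

# 3. Release the token when done.
requests.post(f"{BASE}/api/jwt/logout", headers={"Authorization": f"AR-JWT {token}"})
```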
You can query the Oracle back end directly, with a few caveats. It should only be used for reading data, not writing or modifying it; otherwise you could break data integrity and bypass workflow that should fire. Also, direct access does not enforce any permissions, nor does it translate any of the data: for example, selection fields come back as numbers instead of their string values, dates are in epoch format, and so on.
There is also a Remedy ODBC driver, which is no longer being updated and does not support joins. However, you can open multiple connections with it and join the results manually, and it does handle permissions and translations for you.
https://docs.bmc.com/docs/ars1911/odbc-database-access-introduction-896318914.html
If you know in advance which joins you will need, you should set up join forms within Remedy; that way the joins are done efficiently in the database. Otherwise, you are stuck with one of the solutions above, or with one of the APIs, which don't support ad hoc joins.

Monitoring table changes in DynamoDB

I am using AWS DynamoDB in order to store information.
I have two machines running separate code that accesses the information in the database.
One machine writes to the database and the second one reads from it.
Since the second machine does not know whether the information in the database has changed, I need some way to monitor the database for changes.
I know there is something called DynamoDB Streams that can provide information about the changes made to the database, and I already have that code implemented.
The question is as follows: if I am monitoring the database constantly, I need to query this stream all the time, let's say once every minute.
What is the difference between doing that and actually querying the database every minute?
Is it much more efficient?
Is it less costly (resources, moneywise)?
Is there any other, more efficient way of monitoring changes in the database in a specific table from the code?
Any help would be appreciated, thank you.
Most people I have seen do something like this with DynamoDB Streams + Lambda for best results. Definitely check out the DynamoDB docs and the Lambda docs on this topic.
There's also an example in the docs of monitoring DynamoDB where changes fire off a message to an SNS topic.
DynamoDB Streams is more efficient and near real time. Think of using Lambda this way as you would a trigger in a relational database. Why put in the extra effort when the patterns are this well defined and people use them all the time?
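To make the trigger analogy concrete, here is a minimal sketch of a Python Lambda handler wired to a table's stream through an event source mapping. With this in place nothing polls: the function runs whenever a change arrives. The reaction (a print) is a placeholder for whatever should notify the reading machine, e.g. publishing to an SNS topic.

```python
# Minimal DynamoDB Streams consumer for AWS Lambda. Assumes the stream's
# view type includes new images; table and attribute names are illustrative.
def lambda_handler(event, context):
    for record in event["Records"]:
        action = record["eventName"]      # "INSERT", "MODIFY", or "REMOVE"
        keys = record["dynamodb"]["Keys"]
        if action in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            # Placeholder reaction: replace with an SNS publish, cache
            # invalidation, or a direct notification to the reader.
            print(f"{action} on {keys}: {new_image}")
        else:
            print(f"REMOVE on {keys}")
    return {"processed": len(event["Records"])}
```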

What is Kibana used for / where is it useful?

I’ve read the Kibana website, so I know theoretically what Kibana can be used for, but I’m interested in real-world stories. Is Kibana used solely for log analysis? Can it be used as some kind of BI tool across your actual data set? I’m interested to hear what kinds of applications it’s useful in.
Kibana is very useful for visualizing mixed types of data: not just numbers and metrics, but also text and geo data. You can use Kibana to visualize:
real-time data about visitors of your webpage
number of sales per region
locations from sensor data
emails sent, server load, most frequent errors
... and many more. There is a plethora of use cases; you only need to feed your data into Elasticsearch (and find an appropriate visualization).
Kibana is basically an analytics and visualization platform that lets you easily visualize data from Elasticsearch and analyze it to make sense of it. You can think of Kibana as an Elasticsearch dashboard where you create visualizations such as pie charts, line charts, and many others.
There is an almost infinite number of use cases for Kibana. For example, you can plot your website’s visitors on a map and show traffic in real time. Kibana is also where you configure change detection and forecasting. You can aggregate website traffic by browser and find out which browsers are important to support based on your particular audience. Kibana also provides an interface to manage authentication and authorization for Elasticsearch. You can literally think of Kibana as a web interface to the data stored in Elasticsearch.

Synchronize Postgres server database to SQLite client database

I am trying to create an app that receives a SQLite database from a server for offline use, with cloud synchronization. The server has a Postgres database containing information from many clients.
1) Is it better to delete the SQLite database and create a new one from a query, or to synchronize and update the existing separate SQLite files (or is there another, better solution)? The refreshes will happen a few times a day per client.
2) If it is the latter, could you give me any leads to resources on how I could do this?
I am pretty new to database applications, so please excuse my ignorance, and let me know if there is any way I can clarify.
There is no one-size-fits-all approach here. You need to carefully consider exactly what needs to be done, what you are replicating, how much data is involved, and what your write models are, all before you build a solution. Along the way you have to decide how to handle write conflicts, and more.
In general, the one thing I would say is that such synchronization works best with append-only write models (i.e. inserts only: no deletes, no updates), and one way to do it is to log the changes that need to be made and replicate those changes, along the lines of the sketch below.
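For the sketch, assume a hypothetical append-only change_log table on the Postgres side and a per-client bookmark of the last applied sequence number; none of these names come from the original answer, and this is a starting point rather than a finished protocol.

```python
import json
import sqlite3
import psycopg2

def pull_changes(pg_conn, sqlite_path, last_seq):
    """Apply all Postgres change_log entries newer than last_seq to the
    client's SQLite file, and return the new bookmark to persist."""
    lite = sqlite3.connect(sqlite_path)
    with pg_conn.cursor() as cur:
        cur.execute(
            "SELECT seq, op, payload FROM change_log WHERE seq > %s ORDER BY seq",
            (last_seq,),
        )
        for seq, op, payload in cur:
            row = json.loads(payload)
            if op == "insert":  # append-only model: inserts only
                # `items` is an illustrative client-side table name.
                lite.execute(
                    "INSERT OR IGNORE INTO items (id, data) VALUES (?, ?)",
                    (row["id"], json.dumps(row)),
                )
            last_seq = seq
    lite.commit()
    lite.close()
    return last_seq  # persist this as the client's new bookmark
```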
However, master-master replication is difficult on the best of days and with the best of tools available. Jumping between databases with very different capabilities will introduce a number of additional problems. You are in for a big job.
Here's an open source product that claims to solve this for many database types including Postgres. I have no affiliation or commercial interest in this company.
https://github.com/sqlite-sync/SQLite-sync.com
http://sqlite-sync.com/
If you're able and willing to step outside relational databases and use an object store, you might want to have a look at CouchDB, and perhaps PouchDB, which use an MVCC-based replication protocol designed to support multi-master replication, including conflict resolution. Under the covers, PouchDB uses adapters for SQLite, IndexedDB, local storage, or a remote CouchDB instance to persist client-side data, and it auto-selects the best client-side storage option for the given desktop or mobile browser. The SQLite engine can be either WebSQL or a Cordova SQLite plugin.
http://couchdb.apache.org/
https://pouchdb.com/
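For a taste of what that looks like in practice, here is a small Python sketch that kicks off CouchDB's built-in replication over its HTTP API (the same protocol PouchDB speaks from the client). Host names, database names, and credentials are placeholders.

```python
import requests

# Ask the local CouchDB node to continuously pull the server's `clients`
# database into a local copy; create_target makes the local db if missing.
resp = requests.post(
    "http://localhost:5984/_replicate",
    json={
        "source": "http://server.example.com:5984/clients",
        "target": "clients_local",
        "continuous": True,   # keep syncing as new changes arrive
        "create_target": True,
    },
    auth=("admin", "..."),
)
resp.raise_for_status()
print(resp.json())
```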
