How to enrich logs with the cluster name in Fluent Bit

I have been using Fluent Bit across multiple Kubernetes clusters, sending data to a centralized Elasticsearch cluster. The logs carry a lot of metadata that helps us pinpoint the origin of a given log, but they are missing one crucial piece of metadata: the cluster_name.
Is there a way to enrich logs with a cluster name using fluentbit?
I can have a filter which adds a custom static value, like this:
custom-filter.conf: |
    [FILTER]
        Name   modify
        Match  *
        Add    cluster_name robust_cluster_1
but this approach seems error-prone.
I am looking for an approach where the cluster name is fetched dynamically by Fluent Bit.

You can interpolate environment variables, as shown in the Record Modifier example:
[FILTER]
    Name   record_modifier
    Match  *
    Record hostname ${HOSTNAME}
Usage with the modify filter and a cluster name is much the same:
[FILTER]
    Name  modify
    Match *
    Add   cluster_name ${CLUSTER_NAME}
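Fluent Bit resolves ${CLUSTER_NAME} from its process environment, so the remaining step is injecting that variable into the Fluent Bit pods. Kubernetes exposes no built-in field for the cluster name, so a common pattern is to set it once per cluster in the DaemonSet manifest (or template it via Helm/kustomize). A minimal sketch of the relevant pod-spec fragment; the variable name and value here are illustrative:

containers:
  - name: fluent-bit
    image: fluent/fluent-bit
    env:
      - name: CLUSTER_NAME
        value: robust_cluster_1   # set per cluster by your deploy tooling

This still defines the value once per cluster, but it moves the setting out of the Fluent Bit config and into the deployment pipeline, where it can be generated rather than hand-edited.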

Related

How to handle commands for aggregates with IDs assigned after command?

I know the subject line doesn't make sense given the way Axon works, but here is my problem:
I need to create a new instance of aggregate, "Quote", that is tied to a backend system of record. That is, the aggregate ID must eventually match the ID assigned in the backend system.
So, my uiServer app is calling commandGateway and sending it a CreateQuoteCmd, but I don't know what to pass as the target aggregate ID, since the ID will come from a backend system called by the command handler. The uiServer cannot assign the quoteId. The command handler for CreateQuoteCmd contacts our backend system to get the new quoteId. The backend system also supplies several default values which will be placed in the aggregate.
So, how do I make that quoteId the ID for the aggregate?
What do I pass as the target aggregate ID in the command object?
Is it true that I must pass a target aggregate ID in CreateQuoteCmd instead of allowing the object to set its own ID in the command handler after communication with the backend system?
Thanks for your help.
The Command which will create an Aggregate does not need a @TargetAggregateIdentifier annotated field. This holds because the field that would be the 'target aggregate identifier' cannot point to an existing aggregate: that command is the starting point of the aggregate.
The creation of the Aggregate Identifier can happen at several points in your system, and is really up to you.
The important part here, though, is that dispatching the command handled by an Aggregate's @CommandHandler annotated constructor yields a return value: the Aggregate Identifier you have assigned to that Aggregate.
You should thus handle the result given to you from the CommandGateway/CommandBus when dispatching your CreateQuoteCmd. This should contain the QuoteId you have assigned to your (I assume) Quote Aggregate.
You need to get the aggregate ID from the external system before sending the command (at the domain or application service layer).
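In code, that advice amounts to asking the backend for the identifier first and only then dispatching the creation command. A minimal Java sketch; BackendClient, QuoteDefaults and the CreateQuoteCmd constructor are hypothetical names, not Axon API:

import org.axonframework.commandhandling.gateway.CommandGateway;

public class QuoteApplicationService {
    private final CommandGateway commandGateway;
    private final BackendClient backendClient; // hypothetical system-of-record client

    public QuoteApplicationService(CommandGateway commandGateway, BackendClient backendClient) {
        this.commandGateway = commandGateway;
        this.backendClient = backendClient;
    }

    public String createQuote() {
        // 1. The backend system of record assigns the quote ID and default values.
        String quoteId = backendClient.newQuoteId();
        QuoteDefaults defaults = backendClient.defaultsFor(quoteId);
        // 2. Dispatch the creation command carrying the already-known identifier;
        //    the aggregate's @CommandHandler constructor simply records it.
        commandGateway.sendAndWait(new CreateQuoteCmd(quoteId, defaults));
        return quoteId;
    }
}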

How to get data from a table without using a state reference

I am trying to get a value from the database without using the ServiceHub and the vault, but I couldn't. My logic is: when I pass a country name, it should return the IDs (primary keys) of that country from one table; using those IDs, it should then return the related values from another table. This is possible in a flow class, but I am trying to do it in an API class, where the ServiceHub cannot be imported. Please help me out.
Only the node has access to the ServiceHub. The API runs outside of the node in a separate process, so it is limited to interacting with the node via the operations offered by CordaRPCOps.
Either you need to store the data you want to access in a separate database outside of the node, or you need to find some way to programmatically log into the node's database from the API, using JDBC as described here: https://docs.corda.net/node-database.html.
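A minimal Java sketch of the JDBC route, assuming the node's H2 database is exposed over TCP; the JDBC URL, credentials and table/column names below are assumptions based on a typical node setup, not values from your node:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CountryLookup {
    public static void main(String[] args) throws Exception {
        // URL and credentials come from your node's database configuration.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:tcp://localhost:12345/node", "sa", "")) {
            // Join the two tables: country name -> country IDs -> related values.
            String sql = "SELECT v.* FROM country c "
                    + "JOIN country_values v ON v.country_id = c.id "
                    + "WHERE c.name = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "France");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }
}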

How to store only node-specific off-ledger custom data in Corda?

I created a custom table in Corda using QueryableState, e.g. an IOUStates table, and I can see the custom information being stored in it. But I observed that if Party A and Party B do a transaction, this custom information gets stored in both places: the IOUStates table is created on node A's ledger as well as node B's, and the custom information is stored on both Party A's and Party B's ledgers.
My question is: if a transaction is processed from Party A's node, I want to store part of the transaction's data (i.e. the custom data) ONLY at Party A's ledger level, i.e. off-ledger for Party A only. It should not be shared with Party B.
Put simply: how do I store node-specific off-ledger custom data?
Awaiting a reply...
Thanks.
There are a number of ways to achieve this:
1. Don't use Corda at all! If the data is truly off-ledger, then why are you using Corda? Instead, store it in a separate database. Of course you can "JOIN" it with on-ledger data if required, as the on-ledger data is stored in a SQL database.
2. Similar to point one, except that you use the jdbcSession() functionality of the ServiceHub to create a custom table in the node's database. This table can easily be accessed from within your flows; see the sketch just after this list.
3. Create a ContractState object that only has one participant: the node that wants to store the data. I call this a "unilateral" state, i.e. a state that only one party ever stores; a sketch appears at the end of this answer.
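A minimal Java sketch of option 2, run from inside a flow; the table and column names are made up for illustration:

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RecordPrivateNoteFlow extends FlowLogic<Void> {
    private final String txId;
    private final String note;

    public RecordPrivateNoteFlow(String txId, String note) {
        this.txId = txId;
        this.note = note;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // jdbcSession() hands back a JDBC Connection into the node's own
        // database; rows written here never leave this node.
        try (PreparedStatement ps = getServiceHub().jdbcSession().prepareStatement(
                "INSERT INTO private_notes (tx_id, note) VALUES (?, ?)")) {
            ps.setString(1, txId);
            ps.setString(2, note);
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new FlowException("Could not record the private note", e);
        }
        return null;
    }
}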
Most importantly, if you don't want to share some data with a counter-party, then it should never be disclosed inside a Corda state object or attachment that another party might see. Instead:
- inside your flows, you can use the data encapsulated within the shared state object (e.g. the IOU) to derive the private data
- alternatively, if the data is supplied when the flow begins, then store the private data locally using one of the methods above
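And a sketch of option 3, a "unilateral" state whose only participant is the storing node; the state and field names are illustrative:

import net.corda.core.contracts.ContractState;
import net.corda.core.identity.AbstractParty;
import net.corda.core.identity.Party;
import java.util.Collections;
import java.util.List;

public class PrivateNoteState implements ContractState {
    private final Party owner;
    private final String note;

    public PrivateNoteState(Party owner, String note) {
        this.owner = owner;
        this.note = note;
    }

    @Override
    public List<AbstractParty> getParticipants() {
        // Only the owner is a participant, so only the owner's node
        // ever records this state in its vault.
        return Collections.<AbstractParty>singletonList(owner);
    }
}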

Internal Kinds Returned When Retrieving All Entities Belonging to a Particular Namespace

I am trying to retrieve all entities belonging to a particular namespace. The query is quite simple
query = datastore.Query(namespace=<namespace>)
Running this, however, returns keys belonging to internal kinds which aren't part of the data I am storing; for example, I get entities belonging to this kind:
__Stat_Ns_Kind_IsRootEntity__
Do you know how I can prevent this? Can I refine my query to exclude these?

Complex queries in Kibana, or querying for different values of a single field

I am new to Kibana. I have successfully installed Logstash, Elasticsearch and Kibana. All the links and documents I have read cover simple query syntax: searching by text, typing a phrase, or using logical operators. But all of this is very basic.
How can we query in more detail? For example, I have logs of my Magento store; the logs have a timestamp, a product ID, and an action, i.e. whether the product was purchased, viewed or removed.
I imported these logs into Kibana via Logstash.
Now I want to query the logs on values of the action field, not on different fields. When I query with "added" OR "removed", it returns the logs that have the added action and the logs that have the removed action. But "added" AND "removed" returns zero records, because both words belong to the same field and no single log can have two values in the action field, i.e. a product both added and removed. I need to find the products which are added and removed the most by people, and build a visualization of that.
Please suggest any tutorials for studying Kibana, e.g. how to configure it and how to learn to write complex queries.
You can try to parse your logs in Logstash into multiple fields.
Per your requirement, say you add a field "Action" and a field "Product".
In Kibana you can then add a table with terms set to the "Product" field.
So, when you search for "Added", the table will show all the products with the Added action.
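For instance, a hedged Logstash filter sketch, assuming a log line of the form "2017-01-01T10:00:00 12345 added"; the layout and field names are assumptions about your Magento logs, so adjust the grok pattern to the real format:

filter {
  grok {
    # timestamp, product ID, action
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{NUMBER:Product} %{WORD:Action}" }
  }
}

With Action and Product as separate fields, a Kibana terms visualization on Product, filtered by Action:added (or Action:removed), answers the "added/removed the most" question directly.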
I wanted to match two disparate search terms in the SAME field using logical operators. For example, a field called 'product_comments' has the value 'residential plumbing bathroom sink', and I want "residential" AND "sink" to match.
The documentation here: https://lucene.apache.org/core/2_9_4/queryparsersyntax.html#AND says this is possible, just as OP originally tried.
Using Kibana 5.1.1, I found that the logical operator is case-sensitive:
"residential" and "sink" matched documents containing the word 'and', but
"residential" AND "sink" worked as expected.
