New index in Kibana with +18000 fields causes problems - nginx

We have been running ElasticSearch v. 1.6.0 and Kibana v. 4.1.0 with NGINX as proxy in production for about two months now and the set-up handles logging of multiple applications.
We have 830,000 documents, totalling 450 MB, on a single node. All log statements get shipped to ElasticSearch using Serilog, which creates an index per day.
A week ago Kibana started being extremely slow. I think I've finally figured out the reason: when creating the index pattern in Kibana, we did not check the box "Use event times to create index names", so I guess that even though we only query log statements which occurred within the last 15 minutes, Kibana actually asks ElasticSearch to look in all indices.
Thus, I created a new index pattern in Kibana with the format [serilog-]YYYY.MM.DD, and Kibana acknowledged that the pattern matched all the existing indices.
However, it found more than 18,000 fields in the indices, and many of the fields look like
fields.modelState.Value.Value.Culture.Parent.Parent.Parent.Parent.Parent.Parent.Parent.Parent.Parent.Parent.Parent.Parent.Parent.Parent.Parent.Parent.IsReadOnly
(where IsReadOnly varies from field to field, as does the number of nested Parent segments).
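To confirm where those fields come from, a rough sketch along these lines (the script itself is illustrative and not from the question; it assumes the node is reachable at localhost:9200) pulls the mapping of the serilog indices and counts the leaf fields:

import requests

# Fetch the combined mapping of all serilog indices (Elasticsearch 1.x API).
mappings = requests.get('http://localhost:9200/serilog-*/_mapping').json()

def leaf_fields(properties, prefix=''):
    """Recursively collect the full dotted names of all leaf fields."""
    names = []
    for name, body in properties.items():
        path = prefix + name
        if 'properties' in body:        # object field: descend into its children
            names.extend(leaf_fields(body['properties'], path + '.'))
        else:                           # leaf field (string, date, boolean, ...)
            names.append(path)
    return names

all_fields = set()
for index, data in mappings.items():
    for doc_type, mapping in data.get('mappings', {}).items():
        all_fields.update(leaf_fields(mapping.get('properties', {})))

print(len(all_fields))                                       # roughly the count Kibana reports
print(sorted(f for f in all_fields if '.Parent.' in f)[:5])  # sample of the nested Culture fields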
It seems like this is causing problems, because when I go to the Discover tab, I get a JavaScript error in the console:
Notice how indexPattern.fields is undefined. After this error, I receive a 413:
And Kibana looks like this:
These fields weren't in our old index (which had the format serilog-*). I guess it's because the fields first appeared in log statements after our initial deployment to production.
Update
curl -XGET 'localhost:9200/.kibana/index-pattern/_search' yields
However, under the Settings tab I can see
I do not need all of those fields matching the pattern
fields.modelState.Value.Value.Culture.Parent.

Related

How to get fresh db data with wc_get_order

I have a script that runs as a daemon.
Every so often this script is supposed to retrieve the data for an order and then process it.
The problem: while the script is running, it retrieves the data of a given order whose status is, for example, "on-hold"; if I then change the status to "processing", the script still sees "on-hold" when it retrieves the data via wc_get_order, because WordPress uses an internal cache that is not refreshed.
So how do I retrieve the most current order data from the database?
I searched the source code for a parameter to force retrieval of the data from the database, but did not find one.
After hours of searching I got it working:
// Stop using any external object cache, then flush and re-initialise the
// WordPress object cache so the next wc_get_order() call reads fresh data.
wp_using_ext_object_cache( false );
wp_cache_flush();
wp_cache_init();
These three lines of code clear the cache; run them before calling wc_get_order() again.

Is there a limit for "max number of OUTPUT plugin to be used in fluentbit 1.6.10"

I have been using fluentbit 1.6.10 for quite some time now and have kept adding splunk OUTPUT plugins one after another.
Now it seems some limit on the number of OUTPUT plugins has been crossed, because adding any new splunk plugin causes inconsistent execution of fluent-bit.
Current:
Total number of OUTPUT plugins used: 37,
Name: kafka (4),
Name: null (1),
Name: splunk (32)
But now, whenever a new splunk plugin is added, fluentbit's execution behaviour becomes inconsistent:
Total number of OUTPUT plugins used: 38,
Name: kafka (4),
Name: null (1),
Name: splunk (33) <- this seems to be what creates the problem
After adding the 33rd splunk OUTPUT plugin, one of the existing splunk output plugins stops working.
But after manually editing the config at runtime (for example, adding a simple stdout output) and restarting the pod, all splunk plugins start working again (even though this time the stdout plugin doesn't work).
I am working on pinning down the exact behaviour and will post steps to reproduce with the config, but the same configuration works fine in version 1.7.1.
(I am aware that using this many plugins is not a good design and it will be addressed.)
Is there a known limit on the number of OUTPUT plugins that can be used in fluentbit 1.6.10?
There is no defined upper limit in fluent-bit on how many OUTPUT plugins can be added, but every plugin has its own resource-utilisation settings, so the practical limit depends on resource consumption.
Resources can be managed through proper tagging and by maintaining caches.
In short, you need to test your configuration against the resources available in your environment to find the limit that applies to your setup.
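As an illustration of the tag-based routing mentioned above, here is a minimal classic-format fluent-bit sketch; the paths, tags, Splunk host and token variables are placeholders, not taken from the question:

[INPUT]
    # Tag records per application so each output only matches its own stream
    Name          tail
    Path          /var/log/app-a/*.log
    Tag           app_a.*

[OUTPUT]
    Name          splunk
    Match         app_a.*
    Host          splunk-hec.example.com
    Port          8088
    Splunk_Token  ${SPLUNK_TOKEN_APP_A}
    TLS           On

[OUTPUT]
    Name          splunk
    Match         app_b.*
    Host          splunk-hec.example.com
    Port          8088
    Splunk_Token  ${SPLUNK_TOKEN_APP_B}
    TLS           On

Narrow Match patterns keep every output from inspecting every record, but they do not remove the need to test the full plugin count against the resources of your environment.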

How to resolve celery.backends.rpc.BacklogLimitExceeded error

I am using Celery with Flask. After working fine for a good long while, my Celery setup is now showing a celery.backends.rpc.BacklogLimitExceeded error.
My config values are below:
CELERY_BROKER_URL = 'amqp://'
CELERY_TRACK_STARTED = True
CELERY_RESULT_BACKEND = 'rpc'
CELERY_RESULT_PERSISTENT = False
Can anyone explain why the error is appearing and how to resolve it?
I have checked the docs here, which don't provide any resolution for the issue.
Possibly because your process consuming the results is not keeping up with the process that is producing the results? This can result in a large number of unprocessed results building up - this is the "backlog". When the size of the backlog exceeds an arbitrary limit, BacklogLimitExceeded is raised by celery.
You could try adding more consumers to process the results? Or set a shorter value for the result_expires setting?
The discussion on this closed celery issue may help:
Seems like the database backends would be a much better fit for this purpose.
The amqp/RPC result backends need to send one message per state update, while for the database-based backends (redis, sqla, django, mongodb, cache, etc.) every new state update overwrites the old one.
The "amqp" result backend is not recommended at all since it creates one queue per task, which is required to mimic the database based backends where multiple processes can retrieve the result.
The RPC result backend is preferred for RPC-style calls where only the process that initiated the task can retrieve the result.
But if you want persistent multi-consumer result you should store them in a database.
Using rabbitmq as a broker and redis for results is a great combination, but using an SQL database for results works well too.
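As a sketch of those suggestions (the Redis URL and expiry value are placeholders, and the lowercase setting names assume Celery 4+ naming):

from celery import Celery

# Keep RabbitMQ as the broker, but store results in a database-style backend
# (Redis here) so each state update overwrites the previous one instead of
# piling up as messages in a per-task queue.
app = Celery('tasks',
             broker='amqp://',
             backend='redis://localhost:6379/0')

app.conf.task_track_started = True   # same behaviour as CELERY_TRACK_STARTED
app.conf.result_expires = 3600       # drop stored results after an hour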

BizTalk 2006 Event Log Warnings - Cannot insert duplicate key row in object 'dta_MessageFieldValues' with unique index 'IX_MessageFieldValues'

We have been seeing the following 'warnings' in the event log of our BizTalk
machine since upgrading to BTS 2006. They seem to occur
randomly 6 or 8 times per day.
Does anyone know what this means and what needs to be done to clear it up?
We have only one BizTalk server, which is running on only one machine.
I am new to BizTalk, so I am unable to find out how many tracking host instances are running for the BizTalk server. Also, can you please let me know whether we can configure only one instance per server/machine?
Source: BAM EventBus Service
Event: 5
Warning Details:
Execute batch error. Exception information: TDDS failed to batch execution
of streams. SQLServer: bizprod, Database: BizTalkDTADb.Cannot insert
duplicate key row in object 'dta_MessageFieldValues' with unique index
'IX_MessageFieldValues'.
The statement has been terminated..
I see you got a partial answer in your MSDN Post
Go to the BizTalk Admin Console and check Platform Settings -> Hosts; in the list of hosts on the right, confirm that only a single Host has the Tracking column marked as Yes.
As to your other question: yes, you can run a single Host Instance on a single server, although when your server starts to come under a bit of load you may want to consider setting up more so you can balance the workload better.

Long query: Does anyone know what this error is all about?

Currently, I am running a query against a SQL Server database which has 6 million records.
A date range is specified in the query in order to filter the result. When the date range is short, i.e. 2 hours, the application displays the result with no problems.
But if the date range is a bit longer, i.e. a week, the application displays the following errors:
Finally, after I have accepted the two previous errors, and I click in any other section of the application I get the following error:
Strangely, this behaviour only happens on the live server (running on IIS 7), whereas on localhost (Cassini) the application displays the query results regardless of the date range.
Any thoughts on how to get around the problem will be greatly appreciated.
For your first problem, read the following article here:
When an error occurs on the server while the request is being processed, an error response is returned to the browser and a PageRequestManagerServerErrorException object is created by using the Error.create function. To customize error handling and to display more information about the server error, handle the AsyncPostBackError event and use the AsyncPostBackErrorMessage and AllowCustomErrorsRedirect properties. For an example of how to provide custom error handling during partial-page updates, see Customizing Error Handling for ASP.NET UpdatePanel Controls.
For the second problem, maybe you can find a solution here:
Solution: Our web server could not resolve the URL of the back-end website. We needed to add a hosts file entry on our server to resolve the issue.
