ELK: Kibana cannot create an index pattern

I have an ELK stack running, and it has been working fine. But today I got an exception when trying to create a new index pattern.
To solve this issue, I have deleted the .kibana index and the .monitoring-kibana-6-xxx indices.
I also tried creating the index pattern from the command line (see "Create index-patterns from console with Kibana 6.0"), but I could not set the default index pattern that way, so I still need to create or set the index pattern from the UI.
Error: 413 Response
at http://staging.alct56.club/bundles/kibana.bundle.js?v=16070:231:21272
at processQueue (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:39:9912)
at http://staging.alct56.club/bundles/commons.bundle.js?v=16070:39:10805
at Scope.$digest (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:39:21741)
at Scope.$apply (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:39:24520)
at done (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:38:9495)
at completeRequest (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:38:13952)
at XMLHttpRequest.xhr.onload (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:38:14690)

I finally found that this is blocked by our company's HTTP policy. But I still cannot understand why a "response body is too big" error stops the index pattern from being created.
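In case it helps anyone who also wants to go the command-line route mentioned above: a minimal sketch of creating an index pattern and marking it as the default through Kibana's REST endpoints might look like the following. This assumes a Kibana version that exposes the saved objects API (roughly 6.5 and later; older 6.x answers write to the .kibana index directly), and the host, object id, title, and time field are placeholders:
curl -X POST 'http://localhost:5601/api/saved_objects/index-pattern/my-logs-pattern' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"attributes": {"title": "logstash-*", "timeFieldName": "@timestamp"}}'
curl -X POST 'http://localhost:5601/api/kibana/settings' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"changes": {"defaultIndex": "my-logs-pattern"}}'
The second call sets the defaultIndex advanced setting, which is the part that could not be done from the console approach above.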

Related

How to write nginx logs to TDengine?

We want to write nginx logs to TDengine. We tried nginx -> vector -> kafka -> tdengine without success: we can only complete the basic database creation, but cannot write data to the data table. The error is as follows.
[2022-05-28 19:25:09,454] ERROR [nginx-tdengine|task-0] WorkerSinkTask{id=nginx-tdengine-0} RetriableException from SinkTask: (org.apache.kafka.connect.runtime.WorkerSinkTask:600)
RetriableException: java.sql.SQLException: Exception chain:
SQLException: TDengine ERROR (80000221): Invalid JSON format
The TDengine version is 2.4.0.14.
We followed the documentation at https://docs.taosdata.com/third-party/kafka.
I wonder what the problem is, and whether there is a more reasonable solution?
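One way to narrow this down might be to take vector out of the loop and hand-write a single record into the topic, to see whether the connector accepts it. The sketch below assumes the sink connector is configured for schemaless OpenTSDB-JSON ingestion as described in the linked TDengine docs; the broker address, topic name, metric, and tags are placeholders:
echo '{"metric": "nginx_requests", "timestamp": 1653736000, "value": 1, "tags": {"host": "web01", "status": "200"}}' \
  | kafka-console-producer.sh --broker-list localhost:9092 --topic nginx-logs
If a hand-written record in the expected format is accepted but the vector-produced messages still fail, the "Invalid JSON format" error most likely points at the shape of the JSON that vector emits rather than at TDengine itself.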

Jupyter notebooks load, but nothing happens when running them

I have been facing an error with Jupyter notebook to which I could not find a solution so far:
HTTPServerRequest(protocol='http', host='127.0.0.1:8888', method='GET', uri='/ipython/api/kernelspecs',
line 1703, in _execute result = await result
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
A reference to this is mentioned here as well [but it was not much help]; I could not add a comment there as it is closed.
In my case I was using a domain name to point to this Jupyter notebook (through nginx). I didn't realize it at first, but I had changed to a new domain name (since the old one had expired on me), and then this error started occurring. I even tried passing the new domain name as an argument while restarting the notebook server, but that did not help and threw the same error as above.
--GatewayClient.url='https://newdomain.com'
As a temporary fix, I tried putting a mapping of the old domain name to the same IP address in my local hosts file, and voilà, that fixed the problem. At least it works.
While this is not a real fix, it unblocked me locally so I can access and execute my Jupyter notebooks.
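For completeness, the temporary workaround is just a hosts-file entry that points the old domain back at the server; the IP address and domain below are placeholders:
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
203.0.113.10   old-domain.example.com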
I will post an update if I manage to fix it properly and move to the new domain name.
Thanks,

After upgrade attempting to get artifact results in "Could not process download request: Binary provider has no content for"

I recently upgraded our Artifactory repository from 2.6.5 to the current version, 5.4.6.
However, something seems to have gone wrong in the process. There are some artifacts that throw an HTTP 500 error when attempting to access them. Here is an example using wget:
wget http://xyz.server.com:8081/artifactory/gradle/org/jfrog/buildinfo/build-info-extractor-gradle/2.0.12/build-info-extractor-gradle-2.0.12.pom
--2017-09-12 12:17:13--
http://xyz.server.com:8081/artifactory/gradle/org/jfrog/buildinfo/build-info-extractor-gradle/2.0.12/build-info-extractor-gradle-2.0.12.pom
Resolving xyz.server.com (xyz.server.com)... 10.125.1.28
Connecting to xyz.server.com (xyz.server.com)|10.125.1.28|:8081... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2017-09-12 12:17:13 ERROR 500: Internal Server Error.
I verified this by going to the Artifactory site, browsing to the object in question, and trying to download it. The result was the following:
{
"errors" : [ {
"status" : 500,
"message" : "Could not process download request: Binary provider has no content for 'e52a9a9a58d6829b7b13dd841af4b027c88bb028'"
} ]
}
The problem seems to be in the final step of the upgrade process, upgrading from 3.9.5 to 5.4.6. The wget command above works on 3.9.5, but not on the 5.4.6 instance.
I found a reference to a "Zap Cache" function in older documentation and thought it might fix things, but I don't seem to be able to find that function in the current site.
Is anyone able to point me to a way to fix this issue, or to what I need to do or look for in the upgrade process to prevent it from occurring?
As a further data point, we're using an Oracle database for the full file store, if that matters in any way (we use <chain template="full-db"> in binarystore.xml).
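One thing that might help is to confirm whether the binary itself survived the migration, using Artifactory's checksum search REST API (available with a Pro license, as far as I know) and the SHA-1 from the error message; the credentials and host below are placeholders:
curl -u admin:password -H 'X-Result-Detail: info' \
  'http://xyz.server.com:8081/artifactory/api/search/checksum?sha1=e52a9a9a58d6829b7b13dd841af4b027c88bb028'
If nothing is found, the filestore entry for that checksum was probably not carried over, which would point at the filestore conversion step of the upgrade rather than at the repository metadata.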
Thanks in advance....

Lucene index gets broken segments after every restart of liferay-tomcat

I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay on Tomcat. I am fixing the index on one server and restarting it while the other keeps running. This is a production environment, so I cannot bring both of them down.
Any suggestions please?
Before fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
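For reference, a CheckIndex repair like the one described above is typically invoked with the Lucene core jar on the classpath, along the lines of the following sketch (the jar name is whichever lucene-core jar the Liferay bundle ships, and Tomcat must not have the index open while -fix runs):
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix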
If you access your search index from more than one application server, I would suggest integrating a Solr server, so that you don't have two app servers trying to write to the same files. That setup is error-prone, as you have already found out.
To get Solr up and running you have to follow these steps:
Install a Solr server on any machine you like. A machine running only Solr would be preferable.
Install the Solr search portlet in Liferay.
Adjust the config files according to the setup document of the Solr search portlet.
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr

Drupal 7 Search API Solr - 400 Bad Request on indexing or cleaning

Trying to clean the index:
Apache_Solr_HttpTransportException: '400' Status: Bad Request in Apache_Solr_Service->_sendRawPost() (line 364 of /home/mercado/public_html/sites/all/libraries/SolrPhpClient/Apache/Solr/Service.php)
Trying to index items:
Couldn't index items. Check the logs for details.
Log Says:
An error occurred while indexing: '400' Status: Bad Request.
I'm using the latest release of Apache Solr and SolrPhpClient. Everything seems OK; I'm using the schema.xml from the module, and the module reports the Solr server is OK...
There is no log file in my path/to/solr/example/logs
Thanks in advance
I was having the same problem and realized I had copied the wrong configuration files to the /Applications/apache-solr-3.6.1/example/solr/conf directory. Once I copied the correct versions from sites/all/modules/search_api_solr/solr-conf/3.x, it started to index properly.
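In other words, the fix was roughly the following copy plus a Solr restart; the paths are the ones from the answer above and will differ per install:
cp sites/all/modules/search_api_solr/solr-conf/3.x/* /Applications/apache-solr-3.6.1/example/solr/conf/
# then restart Solr (java -jar start.jar in the example directory) and re-run the indexing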
Check that your Apache Solr is running on your server.
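A quick way to verify that is to hit Solr's ping handler from the Drupal host; the host, port, and path below are placeholders for a default single-core example install:
curl 'http://localhost:8983/solr/admin/ping?wt=json'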
