I'm trying to migrate my old dashboards from Kibana 4.1.2 to Kibana 5.0.0, but haven't been able to do it successfully.
I exported my searches, visualisations and dashboards from the older version and imported them into the newer version, but the visualisations failed with an error saying they couldn't locate the index pattern:
Saved Objects: Could not locate that index-pattern (id: [logstash-]YYYY.MM.DD)
Error: Could not locate that index-pattern (id: [logstash-]YYYY.MM.DD)
at updateFromElasticSearch (http://development-log-server*****.com:5601/bundles/kibana.bundle.js?v=14438:25:9602)
at http://development-log-server*****.com:5601/bundles/kibana.bundle.js?v=14438:25:13725
at processQueue (http://development-log-server*****.com:5601/bundles/commons.bundle.js?v=14438:38:23621)
at http://development-log-server*****.com:5601/bundles/commons.bundle.js?v=14438:38:23888
at Scope.$eval (http://development-log-server*****.com:5601/bundles/commons.bundle.js?v=14438:39:4619)
at Scope.$digest (http://development-log-server*****.com:5601/bundles/commons.bundle.js?v=14438:39:2359)
at Scope.$apply (http://development-log-server*****.com:5601/bundles/commons.bundle.js?v=14438:39:5037)
at done (http://development-log-server*****.com:5601/bundles/commons.bundle.js?v=14438:37:25027)
at completeRequest (http://development-log-server*****.com:5601/bundles/commons.bundle.js?v=14438:37:28702)
at XMLHttpRequest.xhr.onload (http://development-log-server*****.com:5601/bundles/commons.bundle.js?v=14438:37:29634)
I would really appreciate it if someone could steer me in the right direction, and I'd be even more grateful if someone could point me to a tool that does the migration.
Please let me know if more information is needed.
Thanks,
Ashutosh Singh
I doubt you can do this in a simple manner without some downtime. You could give elasticdump a try. Moving the index data to another instance can be a pain, but you could also use the Python elasticsearch library (see SO).
Alternatively, you can export the saved objects as JSON and import them back into the new instance; JSON export of saved objects was added back in Kibana 4.1. Hope this helps.
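As a rough sketch of the elasticdump approach: the saved searches, visualisations and dashboards live in the `.kibana` index, so copying that index between clusters moves them. The host names below are placeholders for your own old and new Elasticsearch instances.

```shell
# Install elasticdump (requires Node.js).
npm install -g elasticdump

# Copy the .kibana index: mapping first, then the documents.
elasticdump \
  --input=http://old-es-host:9200/.kibana \
  --output=http://new-es-host:9200/.kibana \
  --type=mapping
elasticdump \
  --input=http://old-es-host:9200/.kibana \
  --output=http://new-es-host:9200/.kibana \
  --type=data
```

Note that objects exported from Kibana 4 may still need their index-pattern references fixed up in Kibana 5, so treat this as a starting point rather than a complete migration.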
A clear guide to managing Kibana visualizations and dashboards.
I am trying to train a TensorFlow/Keras/TFRuns model and tune hyperparameters on Google Cloud ML. I wish to do this starting from my laptop, and follow the example here:
https://blogs.rstudio.com/tensorflow/posts/2018-01-24-keras-fraud-autoencoder/
The issue is that, because I have installed some packages from sources outside of CRAN (e.g. SparkR, assertthat, aws.s3, et al.), I keep getting an error stating "Unable to retrieve package records for the following packages: ...<<some package goes here>>"
I only need a few packages to follow the example in the link above. Is there a way to ask Google Cloud ML to use only a specific subset of my installed packages? Would it be better for me to set up some sort of virtual environment for R? If so, is there a "How-To" guide I could follow? Should I try to do this in Docker? I'd love to be able to follow this example, and I'm hoping someone can point me in the right direction.
Thank you in advance for any help.
All the best,
Nate
You can stage these dependencies on GCS and provide their URIs in the job request. Check out this section of the public documentation.
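A minimal sketch of the staging step, assuming a hypothetical bucket name and package tarball (substitute your own):

```shell
# Create a staging bucket and upload the non-CRAN package sources
# that the training job will need.
gsutil mb gs://my-ml-staging-bucket
gsutil cp aws.s3_0.3.3.tar.gz gs://my-ml-staging-bucket/r-packages/

# Verify the upload; these gs:// URIs are what you then reference
# in the training job request (e.g. as package URIs).
gsutil ls gs://my-ml-staging-bucket/r-packages/
```

The exact field name for the URIs depends on how you submit the job, so check the job-request section of the docs linked above.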
I created a snapshot view using Rational ClearCase explorer.
After creating it, I set the config spec and environment variables, then tried compiling my code and got an MVFS error which says:
Unable to determine if the current working directory is in MVFS - no such device or address
When I searched the IBM website for a way to eliminate this error, I found out that a snapshot view does not use the MVFS!
Why am I getting this error when a snapshot view does not use the MVFS?
When this issue was triggered: in our project we were using ClearCase 8.0.0.7, and we never had problems building our code on that version. The build issue arose only after upgrading to 8.0.0.15. Both the old and new versions are base ClearCase.
Some more specifications regarding the issue:
The server we are using is Windows Server 2003. I create a snapshot view on the H drive (an NTFS drive), since the C drive is not available for use in our project, clean the previously built files by running the shell script clean_view.sh, and then compile our C code with the ClearCase command clearmake.exe all. We used to follow this same procedure and the build succeeded, but now it fails.
This question is an extension of one I asked previously. I am re-posting the whole thing to give more clarity about the issue and so that more ClearCase experts can chime in. Kindly do not treat this as a duplicate or close it, as my issue has not yet been resolved. Also please note that this is the first time I am working with ClearCase.
LINK FOR THE PREVIOUS QUESTION: MVFS error in a snapshot view
Recently there was a development on this issue! We escalated it to IBM with the help of our client, and they suggested we use dynamic views. To our surprise, that works fine and we are able to generate the executables. But the fact remains that we are still not able to use snapshot views.
NOTE: This comment is just to share my knowledge and experience regarding this issue. :)
While a snapshot view isn't in the MVFS, clearmake has MVFS-specific functionality for build auditing.
You mentioned that the "H" drive contains the snapshot view. Is H:
A local or network drive?
A drive letter created via SUBST? In this case, is the parent drive local?
Do builds in dynamic views still work?
Does the C drive exist? Is it remapped in a Terminal Server/Citrix environment?
A caveat: Windows Server 2003 is nearly a year past Microsoft's end of extended support. I would recommend updating the server environment as soon as possible.
Truthfully, issues where a process fails and the ONLY change is the ClearCase version are usually best handled by contacting IBM instead of this venue. Not trying to shill, but if it's a clearmake bug, it has to go there anyway...
Additional questions:
If the C: drive is inaccessible on the system, which is what "can't even get the properties" in the comment seems to imply, where is the OS installed? Where does %SYSTEMROOT% point?
If it worked on a different drive, what's different between those two drives (H: failed and R: worked)?
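To gather answers to the questions above, a few standard commands run from the snapshot view root should tell you most of what you need. This is a sketch for Windows cmd; "myview" is a placeholder view tag.

```shell
:: Confirm which view (if any) ClearCase thinks the current directory is in.
cleartool pwv

:: List any drive letters created via SUBST.
subst

:: Show which drive letters are network mappings.
net use

:: Show where the OS is installed.
echo %SYSTEMROOT%

:: Show the view's type (snapshot vs dynamic) and storage location.
cleartool lsview -long myview
```

Including this output in a support case (or an edit to the question) would make the drive-mapping situation much easier to diagnose.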
I was trying to set up eBot (for those who don't know, it's a program that lets you manage multiple servers and games in CS:GO), but I'm stuck trying to install it. Could someone please help? I have tried to follow the guide but can't get it to work. I get an error saying "# You can find more information about this file on the symfony website: # http://www.symfony-project.org/reference/1_4/en/11-App # default values all:"
The install guide is here: http://www.esport-tools.net/ebot/install
Please check the guide at:
http://www.esport-tools.net/ebot/install
Follow all the steps and it should work.
Could you please check whether you can install this app?
https://github.com/AjayMT/online-todos
I get an error about the Meteor release file.
Please give me the steps to install this app.
The last commit on that repo was two years ago, which is a lifetime in Meteor terms, so it will be pinned to a version of Meteor that is way, way out of date. It would probably take a lot of work to get it functioning with a supported version.
Whether you're looking for this as a way to learn Meteor or for something to actually use, you should be using the official todos app. The (correct) instructions for Meteor installation are here.
Friends,
First of all, I am new to Drupal and Solr. I have installed the Drupal Recruiter module and got Solr installed and configured. Solr is responding at http://localhost:8983/solr/. I followed all the steps for enabling the Solr modules in Drupal Recruiter.
But whenever I use search in Drupal, I repeatedly get the following error:
"An error occurred while trying to search with Solr: '400' Status: Bad Request."
There are a couple of solutions given on Stack Overflow, but I don't understand them, as I am very new to both.
Please help me find where I went wrong.
Thanks in Advance
Dushyant Joshi
In the comments you indicate that you are using a version of Solr (4.0) that is not supported by Search API. The latest non-4.x version is 3.6.1, available for download at http://www.apache.org/dyn/closer.cgi/lucene/solr/3.6.1 .
Update: to anyone coming back to this answer. The problem was that the OP had not used the solr configuration files that come with the Search API Solr module.
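For anyone hitting the same "400 Bad Request" error: the fix is to replace Solr's default configuration with the files shipped inside the Search API Solr module. A rough sketch, where both paths are placeholders and the exact solr-conf subdirectory depends on your module and Solr versions:

```shell
# Copy the module's Solr config (schema.xml, solrconfig.xml, etc.)
# over the Solr core's conf directory.
cp sites/all/modules/search_api_solr/solr-conf/3.x/* \
   /path/to/solr/example/solr/conf/

# Restart Solr so it picks up the new schema and config.
cd /path/to/solr/example
java -jar start.jar
```

Without the module's schema, Solr rejects the field names the module sends, which is what produces the 400 status.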