solr - programmatically create a new collection

I installed Solr 5.3 and ran it in SolrCloud mode.
I easily created a new collection in managed-schema mode at the command prompt:
solr create -c new_collection
This creates a new folder and updates the schema settings for me.
I then tried to duplicate this process programmatically using the Collections API:
https://cwiki.apache.org/confluence/display/solr/Collections+API
I can't CREATE a new collection; I get an error about the config not being found.
I tried to CREATE a new config using the managed schema as a base, but that didn't work either.
What is the proper way to programmatically create a new collection?

SolrCloud stores its configuration in ZooKeeper, so you first need to upload a config set to it (it must contain a schema.xml and a solrconfig.xml). Those files are stored under the path configs/collection_name. You can upload files via a ZooKeeper client library or via the ZooKeeper CLI; read more about it at Using ZooKeeper to Manage Configuration Files.
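
As a minimal sketch of the whole flow, assuming a standard Solr 5.x install with ZooKeeper on localhost:2181 (the config directory /path/to/my_config and the name my_config are placeholders): first upload a config set with the zkcli.sh script that ships with Solr, then call the Collections API with collection.configName pointing at it.

# upload a config set (containing solrconfig.xml and schema.xml) to ZooKeeper
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
  -cmd upconfig -confdir /path/to/my_config -confname my_config

# create the collection, referencing the uploaded config set
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=new_collection&numShards=1&replicationFactor=1&collection.configName=my_config"

This is essentially what bin/solr create does for you behind the scenes, which is why the command-line version works out of the box.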

Related

How to create dashboards for structured log files on disk in a tool like Kibana or Grafana

I have a .NET Web API application where I have used Serilog and was able to generate structured logs with custom fields. All log files are stored in a folder on the server. Can I install Kibana/Grafana on the same server to create dashboards using the info in the structured log files? Their (Kibana/Grafana) websites refer to data sources like Elasticsearch and others, but not directly to structured log files.
Both Kibana and Grafana require some kind of database to connect to. AFAIK, so does Apache Superset.
If you don't want to provision and manage a database, you could just write something to read the files directly and render charts.
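
As a rough sketch of that do-it-yourself route, assuming Serilog's JSON formatter writes one JSON object per line with a Level property (the path is illustrative), you can already get a quick per-level breakdown from the shell before building anything fancier:

# count log entries by level across all JSON log files
jq -r '.Level' /var/logs/myapp/*.json | sort | uniq -c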

How to use UTL_FILE to store a file at the client

I want to use PL/SQL installed on a Windows client to retrieve some PDF files saved as BLOBs on the server. I found a tutorial about UTL_FILE, but it looks like it can only create files on the server side. Is it possible to create files on the client, or is there a way to transfer files from server to client? Can someone give me some suggestions? Thanks.
UTL_FILE has a parameter named "LOCATION". This is where your files will be written, and it refers to an Oracle DIRECTORY object. You should be able to create your own DIRECTORY and point it to a location that can be reached by your Oracle instance.
CREATE OR REPLACE DIRECTORY PDF_Out AS 'C:\Users\Me\PDF_Out';
Then replace what you are currently using as the value for "LOCATION" with the name of your new DIRECTORY; in the sample it is called PDF_Out.
You may need to check running services to find out which user is running the Oracle listener and grant that user appropriate read/write privileges to the location defined by your new DIRECTORY.
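
To make that concrete, here is a minimal sketch that streams a BLOB out through UTL_FILE using the DIRECTORY above; the table and column names (my_docs, pdf_blob) are made up for illustration:

DECLARE
  l_blob   BLOB;
  l_file   UTL_FILE.FILE_TYPE;
  l_buffer RAW(32767);
  l_amount BINARY_INTEGER := 32767;
  l_pos    INTEGER := 1;
  l_len    INTEGER;
BEGIN
  -- hypothetical table/column; adapt to your schema
  SELECT pdf_blob INTO l_blob FROM my_docs WHERE id = 1;
  l_len  := DBMS_LOB.GETLENGTH(l_blob);
  -- 'wb' = write byte mode; PDF_OUT is the DIRECTORY created above
  l_file := UTL_FILE.FOPEN('PDF_OUT', 'doc_1.pdf', 'wb', 32767);
  WHILE l_pos <= l_len LOOP
    DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);
    UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
    l_pos := l_pos + l_amount;
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/

Note that the file still lands on the database server (or a share the server can reach), which is the point of the answer: to get it onto the client you would map the DIRECTORY to a network share, or pull the BLOB with a client-side tool instead.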

WSO2 API Manager - API-specific configuration files

We use a configuration management tool (Chef) for the WSO2 API Manager installation (v2.1.0). For each installation, the WSO2 directory is deleted and overwritten with the new changes / patches.
This process removes already-created APIs from the WSO2 API Publisher. (Since these are still present in the database, they cannot be re-created with the same name.) We had assumed that the entire API configuration is stored in the database, which is obviously not the case.
One API-specific file that stands out to us is:
<wso2am>/repository/deployment/server/synapse-configs/default/api/admin--my-api-definition_vv1.xml
Are there any other such files that must not be deleted during a new installation, or is there a way to re-create these files from the information stored in the database?
We have considered using the "API import / export tool" (https://docs.wso2.com/display/AM210/Migrating+the+APIs+to+a+Different+Environment). However, according to the documentation, this also creates the database entries for the API, which in our case already exist.
You have to keep the content of the server folder (/repository/deployment/server). For this, you can use SVN-based dep-sync: once you enable dep-sync by giving it an SVN server location, all the server-specific data will be written to the SVN server.
When you install the newer pack, you just point it at the same SVN location and the database. (I hope you are using a production-ready database rather than the built-in H2.)
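
For reference, dep-sync in Carbon-based products such as APIM 2.1.0 is configured in repository/conf/carbon.xml; a minimal sketch of the block, with placeholder SVN URL and credentials:

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <!-- placeholder repository and credentials -->
    <SvnUrl>https://svn.example.com/wso2-depsync/</SvnUrl>
    <SvnUser>svnuser</SvnUser>
    <SvnPassword>svnpassword</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>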

How to fix broken Solr 4 in Alfresco 5.0.c (Community Edition)?

I accidentally deleted the solr4.xml file located inside tomcat/conf/Catalina/localhost, and since then Solr has stopped working. I tried many methods, like restoring the solr4.xml file, full Solr 4 reindexing, and generating a new keystore, but it still doesn't work.
Please suggest how I can fix my broken Solr 4 without a fresh installation of Alfresco.
1. Confirm the location of the Solr 4 core directories for the archive-SpacesStore and workspace-SpacesStore cores. This can be determined from the solrcore.properties file for both cores. By default, the solrcore.properties file can be found at /solr4/workspace-SpacesStore/conf or /solr4/archive-SpacesStore/conf. The Solr 4 core location is defined in the solrcore.properties file; for Solr 4, the default data.dir.root path is:
data.dir.root=/alf_data/solr4/indexes/
2. Shut down Alfresco (all nodes, if clustered).
3. Shut down Solr 4 (if running on a separate application server).
4. Delete the contents of the index data directories for each Solr core at ${data.dir.root}/${data.dir.store}:
/alf_data/solr4/index/workspace/SpacesStore
/alf_data/solr4/index/archive/SpacesStore
5. Delete all the Alfresco models for each Solr 4 core at ${data.dir.root}:
/alf_data/solr4/model
6. Delete the contents of the /alf_data/solr4/content directory.
7. Start up the application server that runs Solr 4.
8. Start up the Alfresco application server (if not the same as the Solr 4 application server).
9. Monitor the application server logs for Solr. You will get the following warning messages on bootstrap:
WARNING: [alfresco] Solr index directory '/alf_data/solr/workspace/SpacesStore/index' doesn't exist. Creating new index...
09-May-2012 09:23:42 org.apache.solr.handler.component.SpellCheckComponent inform
WARNING: No queryConverter defined, using default converter
09-May-2012 09:23:42 org.apache.solr.core.SolrCore initIndex
WARNING: [archive] Solr index directory '/alf_data/solr/archive/SpacesStore/index' doesn't exist. Creating new index...
10. Use the Solr 4 administration console to check the health of the Solr 4 index.
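
With both servers stopped, the deletion steps above amount to something like the following, assuming the default /alf_data/solr4 layout (adjust the paths to your data.dir.root):

rm -rf /alf_data/solr4/index/workspace/SpacesStore/*
rm -rf /alf_data/solr4/index/archive/SpacesStore/*
rm -rf /alf_data/solr4/model/*
rm -rf /alf_data/solr4/content/*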
You can follow the procedure reported here: http://docs.alfresco.com/5.0/tasks/solr-reindex.html (the same steps as listed above).
It worked in our environment.
Update:
This procedure works in most cases, but after a situation where the system ran out of disk space, it remained in an unstable state and we were forced to restore a backup.

Meteor: is there any need to create a database using "use mydb" programmatically?

I have a few problems with Meteor. Is there any need to create a database using "use mydb" programmatically? I haven't used it so far; I'm directly creating collections and applying CRUD operations on them. But I have seen things like db.collection.find() a few times, and when I try to apply it to my collection it shows an error that the db is not initialized; how do I initialize it? My main problem is that I tried to import some content from a .json file into my collection, which (I thought) is only possible using a database. I can import it from the shell like this:
mongoimport --db test --collection mobiles <products.json --jsonArray
How do I import it without a db?
You would have to show some code to see what exactly the issue is.
Meteor uses MongoDB, so a schema doesn't need to be created up front for things to work, as it would be in MySQL or a traditional SQL database. You can just insert documents, and if the collection or the database doesn't exist, it is created implicitly on first use.
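For instance, a minimal sketch on the Meteor side (the collection name mobiles is just for illustration); note there is no "use mydb" step anywhere:

// maps to the 'mobiles' collection in the app's MongoDB
Mobiles = new Mongo.Collection('mobiles');
// the collection (and the database) is created implicitly on the first insert
Mobiles.insert({ brand: 'Acme', model: 'X1' });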
To import your files, you need to import into your Meteor app's database, which runs on the Meteor app port + 1 (so port 3001 if your Meteor app is running on port 3000). Something like this should work; the database is called meteor:
mongoimport --db meteor --host localhost:3001 --collection mobiles --jsonArray --file production.json
(Not sure about your file structure, so I'm assuming it's --jsonArray --file production.json.) You can check out the docs at http://docs.mongodb.org/v2.4/reference/program/mongoimport/ for more details.
So again, you don't need to create a database when you do this; using the --db argument loads things into meteor, and if it doesn't exist it is created automatically as you use it.
