I accidentally deleted the solr4.xml file located inside tomcat/conf/Catalina/localhost, and since then Solr has stopped working. I have tried several fixes, such as restoring the solr4.xml file, running a full Solr 4 reindex, and generating a new keystore, but it still does not work.
Please suggest how I can fix my broken Solr 4 without a fresh installation of Alfresco.
Confirm the location of the Solr 4 core directories for the
archive-SpacesStore and workspace-SpacesStore cores. This can be
determined from the solrcore.properties file of each core. By default,
the solrcore.properties file can be found at
/solr4/workspace-SpacesStore/conf or /solr4/archive-SpacesStore/conf.
The core location is defined in solrcore.properties by the
data.dir.root property; for Solr 4, the default is:
data.dir.root=/alf_data/solr4/indexes/
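To confirm the value without opening the files by hand, the property can be read with a short script. This is only a sketch: the relative conf paths mirror the defaults quoted above, and the read_property helper is made up for illustration.

```python
# Sketch: read data.dir.root from solrcore.properties for each core.
# The relative conf paths mirror the defaults quoted above; adjust as needed.
import os

def read_property(path, key):
    """Return the value of `key` from a Java-style .properties file, or None."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("#") or "=" not in line:
                continue
            k, _, v = line.partition("=")
            if k.strip() == key:
                return v.strip()
    return None

if __name__ == "__main__":
    for core in ("workspace-SpacesStore", "archive-SpacesStore"):
        props = os.path.join("solr4", core, "conf", "solrcore.properties")
        if os.path.exists(props):
            print(core, "->", read_property(props, "data.dir.root"))
```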
1. Shut down Alfresco (all nodes, if clustered).
2. Shut down Solr 4 (if running on a separate application server).
3. Delete the contents of the index data directories for each Solr core
   at ${data.dir.root}/${data.dir.store}:
   /alf_data/solr4/index/workspace/SpacesStore
   /alf_data/solr4/index/archive/SpacesStore
4. Delete all the Alfresco models for each Solr 4 core at ${data.dir.root}:
   /alf_data/solr4/model
5. Delete the contents of the /alf_data/solr4/content directory.
6. Start up the application server that runs Solr 4.
7. Start up the Alfresco application server (if not the same as the Solr 4
   application server).
Monitor the application server logs for Solr. You will see the
following warning messages on bootstrap:
WARNING: [alfresco] Solr index directory '/alf_data/solr/workspace/SpacesStore/index' doesn't exist. Creating new index...
09-May-2012 09:23:42 org.apache.solr.handler.component.SpellCheckComponent inform
WARNING: No queryConverter defined, using default converter
09-May-2012 09:23:42 org.apache.solr.core.SolrCore initIndex
WARNING: [archive] Solr index directory '/alf_data/solr/archive/SpacesStore/index' doesn't exist. Creating new index...
Use the Solr 4 administration console to check the health of the
Solr 4 index.
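One way to pull that health information outside the admin console is the cores SUMMARY report. A sketch, assuming Solr answers plain HTTP on localhost:8080; production installs usually require HTTPS with a client certificate.

```python
# Sketch: pull the Alfresco Solr 4 cores SUMMARY report to inspect index
# health. localhost:8080 over plain HTTP is an assumption; production
# installs usually require HTTPS with a client certificate.
import json
import urllib.request

def summary_url(host="localhost", port=8080):
    """Build the URL of the cores SUMMARY report."""
    return f"http://{host}:{port}/solr4/admin/cores?action=SUMMARY&wt=json"

def fetch_summary(url):
    """Fetch and decode the SUMMARY report (requires a reachable Solr)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

The report contains per-core sections with document counts and tracking status, which should climb back toward the repository totals as the reindex proceeds.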
You can follow the procedure reported here: http://docs.alfresco.com/5.0/tasks/solr-reindex.html
1. Shut down Alfresco (all nodes, if clustered).
2. Shut down Solr 4 (if running on a separate application server).
3. Delete the contents of the index data directories for each Solr core
   at ${data.dir.root}/${data.dir.store}:
   /alf_data/solr4/index/workspace/SpacesStore
   /alf_data/solr4/index/archive/SpacesStore
4. Delete all the Alfresco models for each Solr 4 core at ${data.dir.root}:
   /alf_data/solr4/model
5. Delete the contents of the /alf_data/solr4/content directory.
6. Start up the application server that runs Solr 4.
7. Start up the Alfresco application server (if not the same as the Solr 4
   application server).
It worked in our environment.
Update:
This procedure works in most cases, but after the system ran out of disk space ("no space left on device"), it remained in an unstable state and we were forced to restore a backup.
Trying to build an installer using Excelsior JET
I am trying to create an installer for an Eclipse RCP application product.
The product works fine; my only concern is that when I build a Windows installer (using Excelsior JET and Install Creator), the database does not update.
I didn't look at the tutorial (since it is Flash), but is the problem that the MSI does not overwrite an existing database file on installation? If so, this is normally down to MSI's default file-versioning rules and how they preserve modified, unversioned files: in essence, non-versioned files whose create and modify date stamps differ. This is a common point of confusion with MSI deployments.
I will check back to see if this is the problem. In the meantime, here is a link to an answer describing ways to deploy data files and per-user files and settings: Create folder and file on Current user profile, from Admin Profile. You might want to install a read-only database file to a per-machine location, and then copy it to the user profile upon application launch.
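The copy-on-first-launch idea from the last sentence might look like this. All file and folder names here are invented for illustration; the point is that the user's copy is only created when absent, so later installs never clobber user data.

```python
# Sketch of the copy-on-first-launch idea. All names are invented for
# illustration: the installer puts a pristine, read-only database in a
# per-machine folder, and the app copies it to the profile on first run.
import os
import shutil
from pathlib import Path

def ensure_user_database(machine_copy, user_dir):
    """Copy the pristine database into the user profile if not already there."""
    user_dir = Path(user_dir)
    user_dir.mkdir(parents=True, exist_ok=True)
    user_copy = user_dir / Path(machine_copy).name
    if not user_copy.exists():  # never clobber a user's modified data
        shutil.copy2(machine_copy, user_copy)
        os.chmod(user_copy, 0o644)  # the user copy must be writable
    return user_copy
```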
We use a configuration management tool (Chef) to install WSO2 API Manager (v2.1.0). For each installation, the WSO2 directory is deleted and overwritten with the new changes/patches.
This process removes already-created APIs from the WSO2 API Publisher. (Since they are still present in the database, they cannot be re-created with the same name.) We had assumed that the entire API configuration is stored in the database, which is obviously not the case.
We have noticed this API-specific file:
<wso2am>/repository/deployment/server/synapse-configs/default/api/admin--my-api-definition_vv1.xml
Are there any other such files that must not be deleted during a new installation, or is there a way to regenerate these files from the information stored in the database?
We have considered using the "API import/export tool" (https://docs.wso2.com/display/AM210/Migrating+the+APIs+to+a+Different+Environment). However, according to the documentation, this also creates the database entries for the API, which in our case already exist.
You have to keep the contents of the server folder (/repository/deployment/server). For this, you can use SVN-based dep-sync. Once you enable dep-sync by giving it an SVN server location, all the server-specific data will be written to the SVN server.
When you install the newer pack, you just need to point it at the same SVN location and database. (I hope you are using a production-ready database rather than the in-built H2.)
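If SVN-based dep-sync is not an option, the same effect can be approximated by preserving the folder across the Chef run. A sketch, in which WSO2_HOME and the backup location are assumptions for illustration:

```python
# Sketch: preserve repository/deployment/server across a reinstall by
# copying it aside before the WSO2 directory is wiped and restoring it
# afterwards. The directory layout is the standard WSO2 one; the backup
# location is an assumption.
import shutil
from pathlib import Path

def backup_server_dir(wso2_home, backup_dir):
    """Copy repository/deployment/server aside before the install wipes it."""
    src = Path(wso2_home) / "repository" / "deployment" / "server"
    dest = Path(backup_dir) / "server"
    if dest.exists():
        shutil.rmtree(dest)
    shutil.copytree(src, dest)
    return dest

def restore_server_dir(backup_dir, wso2_home):
    """Put the preserved server directory back into the fresh installation."""
    src = Path(backup_dir) / "server"
    dest = Path(wso2_home) / "repository" / "deployment" / "server"
    if dest.exists():
        shutil.rmtree(dest)
    shutil.copytree(src, dest)
    return dest
```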
We are using Tridion 2011 SP1. Some pages/components fail during publishing with the error below.
Phase: Deployer Prepare Phase failed, Unable to unzip,
D:\Inetpub\TridionPublisherFS4SP\incoming\tcm_0-286137-66560.Content.zip (The process
cannot access the file because it is being used by another process),
D:\Inetpub\TridionPublisherFS4SP\incoming\tcm_0-286137-66560.Content.zip (The process
cannot access the file because it is being used by another process), Unable to unzip,
D:\Inetpub\TridionPublisherFS4SP\incoming\tcm_0-286137-66560.Content.zip (The process
cannot access the file because it is being used by another process),
D:\Inetpub\TridionPublisherFS4SP\incoming\tcm_0-286137-66560.Content.zip (The process
cannot access the file because it is being used by another process)
Components/pages are failing at the Preparing Deployment stage; how should we fix this?
Do you have multiple Deployers using the same incoming location?
It looks like you’re running the Deployer as a WebApp – is the Deployer service also running on the system?
If you search for all files named “cd_deployer_conf.xml”, do they have the same incoming folder (D:\Inetpub\TridionPublisherFS4SP\incoming) defined?
Otherwise, you might use ProcMon to watch the folder and see what else is accessing the file.
If you still have this issue, you may try:
1. deleting all files under incoming,
2. making sure there is no encryption enabled for the incoming folder (some companies apply an encryption script to files as soon as they are added to the drive), or
3. making sure your antivirus is not scanning that folder (as Nuno mentioned).
Have you tried restarting the Deployer app and checking the logs?
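The suggestion above to compare every cd_deployer_conf.xml can be scripted. A sketch that simply text-scans each config for lines mentioning an incoming folder; the scan root is an assumption, so point it at the drive your Deployers live on:

```python
# Sketch: find every cd_deployer_conf.xml under a root and list the lines
# that mention an incoming folder, to spot two Deployers sharing one
# location. The scan root below is an assumption.
import os

def find_deployer_configs(root):
    """Yield (path, matching_lines) for each cd_deployer_conf.xml under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower() == "cd_deployer_conf.xml":
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as f:
                    hits = [ln.strip() for ln in f if "incoming" in ln.lower()]
                yield path, hits

if __name__ == "__main__":
    for path, hits in find_deployer_configs("D:\\"):
        print(path)
        for ln in hits:
            print("   ", ln)
```

If two configs report the same incoming path, that is your duplicate Deployer.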
Recently I ran a script to save many items in Tridion and noticed that the Jetty log for that day was around 500MB, compared to only 2MB from other days.
I assume the Solr Search Service, which uses Jetty, is logging when items are updated in the index.
Is it possible to change the log level or disable logging from Jetty while running batch scripts?
You can change Jetty's log level in the file C:\Program Files (x86)\Tridion\solr-jetty\conf\log4j.properties.
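For example, raising the root logger threshold in that file quiets the per-update INFO messages. The appender name after the comma must match whatever your existing file already uses; `file` below is only a placeholder.

```properties
# Raise the threshold so routine index updates are no longer logged.
# WARN is an example level; "file" is a placeholder appender name and
# must match the appender already configured in this file.
log4j.rootLogger=WARN, file
```

Restart the Jetty/Solr service afterwards, and restore the original level once the batch script is done.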
We are using Tridion 2011 SP1 without any hotfix, and a .NET web application (httpupload.aspx) to deploy content to the file system.
We monitored the system and found two issues:
1) Sometimes pages that show as published successfully in the publishing queue are not uploaded/updated in the file system.
2) No transport package is created for the pages that fail with the error:
Deploying FailedPhase: Deployment Processing Phase failed, Could not initialize class com.tridion.storage.StorageManagerFactory, Could not initialize class com.tridion.storage.StorageManagerFactory
Also, neither the deployer log file nor the transporter log file contains any reference to the failed item's transaction ID.
Can anyone help me out with this?
You must have more detail on the failure in your logs than just this.
"Could not initialize class StorageManagerFactory" typically points to a misconfigured cd_storage_conf.xml or a missing jar.
If you get this occasionally then there must be something that fails occasionally (like your database connection or a file system).
Please scan through your deployer and/or core logs for additional information.
[UPDATE]
I think you may have a second deployer "listening" to the same incoming directory, and that 2nd deployer is broken.
Hints of that:
You say no transport package is created. I assume you mean you can't find the transport package; it must have been created in the CM, otherwise it couldn't fail. This means "someone" picked it up.
"Sometimes they're published, sometimes not" == Sometimes they're picked up by the right deployer, sometimes they're picked up by the wrong one.
No references to the transaction in the logs
Search your server for all cd_deployer_conf.xml, and go compare all your "incoming" folder settings. You can only have one deployer per incoming folder.
Try the following:
1) In the Windows event logs, identify the path of the Deployer that is being loaded. Generally it is defined by the Tridion_Home variable, but there is also roll-up logic in place: the Deployer path may be picked up from your application config with priority, if you have placed the deployer config and bin folders within your application's bin folder for processing by the Tridion Content Delivery API.
2) Check that the updated SQL JDBC jar file is present in the deployer bin folder.
3) Verify that you do not have a JRE version between 1.6.0.26 and 1.6.0.30 installed on the CMA and/or CDA server; check both the 32-bit and 64-bit versions.
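Checks 2 and 3 can be scripted. A sketch in which the jar-name pattern and paths are assumptions, and versions are compared in the 1.6.0_26 style that Java itself reports:

```python
# Sketch for checks 2 and 3: confirm a SQL JDBC jar sits in the Deployer
# bin folder, and flag the problematic JRE builds. The jar-name pattern
# and folder paths are assumptions; the affected version range
# (1.6.0_26 .. 1.6.0_30) comes from the advice above.
import glob
import os

def find_jdbc_jars(deployer_bin):
    """Return any sqljdbc*.jar files found in the Deployer bin folder."""
    return glob.glob(os.path.join(deployer_bin, "sqljdbc*.jar"))

def is_problem_jre(version):
    """True for the 1.6.0_26 .. 1.6.0_30 builds called out above."""
    try:
        base, _, update = version.partition("_")
        return base == "1.6.0" and 26 <= int(update) <= 30
    except ValueError:
        return False
```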