We have installed Alfresco 5.1 and Solr on different servers.
Alfresco has been started, but I couldn't find the keystore folder in:
<ALFRESCO_TOMCAT_HOME>/webapps/alfresco/WEB-INF/keystore
The documentation is a bit confusing about how the 7th point relates to the Solr secure communication link. We want to set up Solr secure communication, so we followed the link below:
Generating Secure Keys for Solr Communication
I generated secure keys by executing the generatekeystore.bat file, and I put the newly generated truststore and keystore files in:
<ALFRESCO_HOME>/alf_data/keystore
and pointed to those paths in the Tomcat server.xml files for both the Alfresco and Solr servers.
When should we follow the 7th point?
Please find the text of the mentioned 7th bullet below.
7. Create and populate a keystore directory for the Alfresco and Solr servers. By default, the keystore directory is created in /alf_data/keystore. Note that at this stage the keystore directory will just be a template, containing standard keys. To secure the installation, you must follow the steps to generate new keys as explained in the Generating Secure Keys for Solr Communication section. For example:
For Unix:
mkdir -p <ALFRESCO_HOME>/alf_data/keystore
cp <ALFRESCO_TOMCAT_HOME>/webapps/alfresco/WEB-INF/classes/alfresco/keystore/* <ALFRESCO_HOME>/alf_data/keystore
For Windows:
mkdir <ALFRESCO_HOME>\alf_data\keystore
copy <ALFRESCO_TOMCAT_HOME>\webapps\alfresco\WEB-INF\classes\alfresco\keystore\* <ALFRESCO_HOME>\alf_data\keystore
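If you want to sanity-check what ended up in the copied keystore directory, the JDK's keytool can list its entries. This is only a sketch: the ssl.keystore file name and the JCEKS store type match a default Alfresco install, but your file names and passwords may differ.
keytool -list -storetype JCEKS -keystore <ALFRESCO_HOME>/alf_data/keystore/ssl.keystore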
New to Artifactory so please bear with me.
Trying (and failing) to create a new access token.
The GUI in Artifactory has nothing for this but points to a users guide (https://www.jfrog.com/confluence/display/RTF/Access+Tokens) which talks about managing access tokens through a WAR file.
Here is the blurb:
Access Service
From Artifactory version 5.4, access tokens are managed under a new service
called Access which is implemented in a separate WAR file, access.war. This
change has no impact on how access tokens are used, however, the Artifactory
installation file structure now also includes the added WAR file under the
$ARTIFACTORY_HOME/webapps folder. Artifactory communicates with the Access
service over HTTP and assumes it is running in the same Tomcat using the
context path of "access".
OK, great. So how do I access this thing?
I also don't know much about web apps/servers. Prior to today, I thought WAR was a fight between nations :-)
My Artifactory server proc is running, and I can confirm that the access war file (apparently a jar file of sorts) is in the webapps dir.
I am able to get to Artifactory via "http://myserver:8081/artifactory/webapp/#/home".
As it turns out, I believe the interface to manage access tokens is not provided through a GUI. Rather, you have to use the REST API, e.g. with curl.
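For example, JFrog's Create Token REST endpoint can be called with curl. A minimal sketch, assuming the default port and an admin account; the username, scope, and expiry values below are illustrative:
curl -u admin:password -X POST "http://myserver:8081/artifactory/api/security/token" \
  -d "username=ci-user" \
  -d "scope=member-of-groups:readers" \
  -d "expires_in=3600"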
The documentation mentions:
It is up to the Artifactory administrator to make sure that all participating instances are equipped with the same key pair.
That means you need to have access to the server (where Artifactory is installed).
On that server, the folder where Artifactory is installed is referred to as ARTIFACTORY_HOME.
That is what is used in the next doc extract:
Start up the first Artifactory instance (or cluster node for an HA installation) that will be in your circle of trust. A private key and root certificate are generated and stored under $ARTIFACTORY_HOME/access/etc/keys.
Copy the private key and root certificate files to a location on your file system that is accessible by all other instances/nodes that are in your circle of trust.
Before bootstrapping, for each of the other instances/nodes, create the $ARTIFACTORY_HOME/access/etc folder and create a properties file in it called access.bootstrap.config with the following contents:
key=/path/to/private.key
crt=/path/to/root.crt
When each instance/node starts up, if the $ARTIFACTORY_HOME/access/etc/access.bootstrap.config file exists, then the private key and root certificate are copied from the specified location into the server's home directory under $ARTIFACTORY_HOME/access/etc/keys.
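In shell terms, preparing one of the other nodes might look like this (the key and certificate paths are wherever you copied the files to):
mkdir -p "$ARTIFACTORY_HOME/access/etc"
cat > "$ARTIFACTORY_HOME/access/etc/access.bootstrap.config" <<EOF
key=/path/to/private.key
crt=/path/to/root.crt
EOF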
We use a configuration management tool (Chef) to install WSO2 API Manager (v2.1.0). For each installation, the WSO2 directory is deleted and overwritten with the new changes/patches.
This process removes already-created APIs from the WSO2 API Publisher. (Since these are still present in the database, they cannot be re-created with the same name.) We had assumed that the entire API configuration is stored in the database, which is obviously not the case.
We have noticed this API-specific file:
<wso2am>/repository/deployment/server/synapse-configs/default/api/admin--my-api-definition_vv1.xml
Are there any other such files that must not be deleted during a new installation, or is there a way to re-create these files from the information stored in the database?
We have considered using the API import/export tool (https://docs.wso2.com/display/AM210/Migrating+the+APIs+to+a+Different+Environment). However, according to the documentation, this also creates the database entries for the API, which in our case already exist.
You have to keep the content of the server folder (/repository/deployment/server). For this, you can use SVN-based dep-sync. Once you enable dep-sync by giving an SVN server location, all the server-specific data will be written to the SVN server.
When you are installing the newer pack, what you need to do is point to the SVN location and the database. (I hope you are using a production-ready database rather than the built-in H2.)
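For reference, dep-sync in Carbon 4.x based products such as API Manager 2.1.0 is configured in <wso2am>/repository/conf/carbon.xml. A sketch of the relevant element, with placeholder SVN URL and credentials:
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/repos/wso2</SvnUrl>
    <SvnUser>svn-user</SvnUser>
    <SvnPassword>svn-password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>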
I am trying to install Wordpress on the Swisscom CloudFoundry application cloud. To install it I need SSH with private and public key pairs (not cf ssh).
I follow the steps here:
https://github.com/cloudfoundry-samples/cf-ex-wordpress
Is this possible? What are the correct values for:
SSH_HOST: user@my-ssh-server.name
SSH_PATH: /home/sshfs/remote
Is this possible?
It depends on your CF provider. This method of running Wordpress requires that you use a FUSE filesystem (SSHFS) to mount the remote file system over the wp-content directory of your Wordpress install. In recent versions of CF (I can't remember exactly where this changed) you are no longer allowed to use FUSE-based file systems.
Before you spend a lot of time on this, you might want to validate that your provider still allows FUSE. You can validate with a simple test.
1. Push any test app to your provider.
2. cf ssh into the application container.
3. Check that the sshfs binary is available.
4. Try using sshfs to mount a remote filesystem (see the sketch after this list; the sshfs man page has more examples).
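A rough version of that check; the app name, user, host, and paths are placeholders:
cf push fuse-test
cf ssh fuse-test
# then, inside the application container:
which sshfs
mkdir -p /tmp/remote
sshfs my-user@host.example.com:/home/sshfs/remote /tmp/remote
ls /tmp/remote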
If you can successfully mount a remote filesystem via SSH using the steps above then you should still be able to use the method described in that example application.
If you cannot, the next best option is to use a plugin that allows storing your media on a remote system. Most of these are for S3. Search Google or the WP plugin repo; they're easy enough to find.
There is a better solution on the horizon called Volume Services. You can read more about this here. I have not seen any public CF providers offering volume services though.
What are the correct values for:
SSH_HOST: user@my-ssh-server.name
This should be the user name and host name of your SSH server. This is a server that exists outside of CF. Examples: my-user@192.0.2.10 or some-user@host.example.com. You should be able to run ssh <this-value> and connect without entering a password. This is so that the volume can be mounted automatically, without user interaction, when your app starts.
SSH_PATH: /home/sshfs/remote
This is the full path on the remote server where you'd like to store the Wordpress files. In other words, this directory will be mounted as the wp-content directory of your app.
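One way to set up that passwordless login, with placeholder key name and host (ssh-copy-id will prompt for the password once while installing the key):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/sshfs_key -N ""
ssh-copy-id -i ~/.ssh/sshfs_key.pub my-user@host.example.com
ssh -i ~/.ssh/sshfs_key my-user@host.example.com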
I accidentally deleted the solr4.xml file located inside tomcat/conf/Catalina/localhost, and since then Solr has stopped working. I tried several fixes, such as restoring the solr4.xml file, a full Solr 4 reindex, and generating a new keystore, but it still doesn't work.
Please suggest how I can fix my broken Solr 4 without a fresh installation of Alfresco.
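For reference, solr4.xml is a standard Tomcat context descriptor that points Tomcat at the Solr 4 WAR and Solr home. A minimal sketch is below; the docBase and solr/home values are placeholders, and your original file may have contained additional settings:
<?xml version="1.0" encoding="UTF-8"?>
<Context docBase="/path/to/solr4.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/path/to/solr4" override="true"/>
</Context>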
1. Confirm the location of the Solr 4 core directories for the archive-SpacesStore and workspace-SpacesStore cores. This can be determined from the solrcore.properties file for each core. By default, the solrcore.properties file can be found at /solr4/workspace-SpacesStore/conf or /solr4/archive-SpacesStore/conf. The Solr 4 core location is defined in solrcore.properties; for Solr 4, the default data.dir.root path is:
data.dir.root=/alf_data/solr4/indexes/
2. Shut down Alfresco (all nodes, if clustered).
3. Shut down Solr 4 (if running on a separate application server).
4. Delete the contents of the index data directories for each Solr core at ${data.dir.root}/${data.dir.store}:
/alf_data/solr4/index/workspace/SpacesStore
/alf_data/solr4/index/archive/SpacesStore
5. Delete all the Alfresco models for each Solr 4 core at ${data.dir.root}:
/alf_data/solr4/model
6. Delete the contents of the /alf_data/solr4/content directory.
7. Start up the application server that runs Solr 4.
8. Start up the Alfresco application server (if not the same as the Solr 4 application server).
9. Monitor the application server logs for Solr. You will get the following warning messages on bootstrap:
WARNING: [alfresco] Solr index directory '/alf_data/solr/workspace/SpacesStore/index' doesn't exist. Creating new index...
09-May-2012 09:23:42 org.apache.solr.handler.component.SpellCheckComponent inform
WARNING: No queryConverter defined, using default converter
09-May-2012 09:23:42 org.apache.solr.core.SolrCore initIndex
WARNING: [archive] Solr index directory '/alf_data/solr/archive/SpacesStore/index' doesn't exist. Creating new index...
10. Use the Solr 4 administration console to check the health of the Solr 4 index.
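One quick way to do that check, assuming Solr is still reachable over plain HTTP on the default port (with the secure setup you would instead need to present the browser certificate from the keystore), is the cores SUMMARY report:
curl 'http://localhost:8080/solr4/admin/cores?action=SUMMARY'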
You can follow the procedure reported here: http://docs.alfresco.com/5.0/tasks/solr-reindex.html
1. Shut down Alfresco (all nodes, if clustered).
2. Shut down Solr 4 (if running on a separate application server).
3. Delete the contents of the index data directories for each Solr core at ${data.dir.root}/${data.dir.store}:
/alf_data/solr4/index/workspace/SpacesStore
/alf_data/solr4/index/archive/SpacesStore
4. Delete all the Alfresco models for each Solr 4 core at ${data.dir.root}:
/alf_data/solr4/model
5. Delete the contents of the /alf_data/solr4/content directory.
6. Start up the application server that runs Solr 4.
7. Start up the Alfresco application server (if not the same as the Solr 4 application server).
It worked in our environment.
Update:
This procedure works in most cases, but after the system ran out of space on the device, it remained in an unstable state and we were forced to restore a backup.
Would it be possible to use Google Drive with FUSE to build a filesystem on Unix systems?
https://github.com/jcline/fuse-google-drive
fuse-google-drive is a fuse filesystem wrapper for Google Drive released under GPLv2
Currently in alpha stages. Do not trust this for anything important...
Discussion:
#fuse-google-drive on irc.freenode.net
Usage:
Right now you need to go to http://code.google.com/apis/console and create a new app, then generate a client id and client secret for an installed application. The clientid value and clientsecrets value should each go into:
$XDG_CONFIG_HOME/fuse-google-drive/clientid
$XDG_CONFIG_HOME/fuse-google-drive/clientsecrets
respectively. You should chmod 700 $XDG_CONFIG_HOME/fuse-google-drive as well. If the folder does not exist at runtime, a helpful message is printed and the directory is created with the correct permissions if possible. Note: If $XDG_CONFIG_HOME is unset on your system, it defaults to ~/.config/.
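Spelled out as commands, with placeholder credential values from your own API console project:
$ mkdir -p "${XDG_CONFIG_HOME:-$HOME/.config}/fuse-google-drive"
$ chmod 700 "${XDG_CONFIG_HOME:-$HOME/.config}/fuse-google-drive"
$ echo "your-client-id" > "${XDG_CONFIG_HOME:-$HOME/.config}/fuse-google-drive/clientid"
$ echo "your-client-secret" > "${XDG_CONFIG_HOME:-$HOME/.config}/fuse-google-drive/clientsecrets"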
$ mkdir mountpoint
$ ./fuse-google-drive mountpoint
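When you are done, a FUSE mount like this can be released with:
$ fusermount -u mountpoint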
You can use the Google Documents List API to create a FUSE client.