What caches does Alfresco use and how to properly clean them? - alfresco

After several experiments with business processes, I noticed that the old business process definitions are cached somewhere.
For example, I developed a business process and then installed the AMP file containing it. I worked with it for a while and then decided to make some changes.
To do this, I rebuilt the AMP file and installed it again:
[bykovan@docflow alfresco-community]$ sudo java -jar bin/alfresco-mmt.jar uninstall some-module-repo tomcat/webapps/alfresco.war
...
[bykovan@docflow alfresco-community]$ sudo java -jar bin/alfresco-mmt.jar install amps/some-module-repo-1.0-SNAPSHOT.amp tomcat/webapps/alfresco.war
...
But I don't see my changes after deployment! To make the changes take effect, I have to do quite a lot of extra work.
Sequence of actions:
1 Shut down Tomcat
[bykovan@docflow alfresco-community]$ sudo ./alfresco.sh stop
2 Re-create the alfresco database
[bykovan@docflow alfresco-community]$ sudo -i -u postgres
[postgres@docflow ~]$ psql
psql (9.5.5)
postgres=#
postgres=# drop database alfresco;
...
postgres=# create database alfresco;
...
postgres=# alter database alfresco owner to alfresco;
...
postgres=# \q
[postgres@docflow ~]$ exit
3 Remove everything from alf_data
[bykovan@docflow alf_data]$ sudo rm -r *
4 Remove alfresco and share folders
[bykovan@docflow alfresco-community]$ sudo rm -r alfresco
[bykovan@docflow alfresco-community]$ sudo rm -r share
5 Start Tomcat
[bykovan@docflow alfresco-community]$ sudo ./alfresco.sh start
6 Wait until the database is initialized...
7 Set the administrator's password
SELECT
anp1.node_id, -- paste into the UPDATE statement below
anp1.qname_id, -- paste into the UPDATE statement below
anp1.string_value
FROM alf_node_properties anp1
INNER JOIN alf_qname aq1 ON aq1.id = anp1.qname_id
INNER JOIN alf_node_properties anp2 ON anp2.node_id = anp1.node_id
INNER JOIN alf_qname aq2 ON aq2.id = anp2.qname_id
WHERE aq1.local_name = 'password'
AND aq2.local_name = 'username'
AND anp2.string_value = 'admin';
UPDATE
alf_node_properties
SET
string_value='209c6174da490caeb422f3fa5a7ae634'
WHERE node_id=... and qname_id=...;
(where '209c6174da490caeb422f3fa5a7ae634' is the NTLM hash of the string 'admin')
8 Restart Tomcat
9 Log in as admin with the password admin, add users, etc.
What caches does Alfresco use and how to properly clean them?
I use the following configuration:
Alfresco Share v5.2.d (r134641-b15, Aikau 1.0.101.3, Spring Surf
5.2.d, Spring WebScripts 6.13, Freemarker 2.3.20-alfresco-patched, Rhino 1.7R4-alfresco-patched, Yui 2.9.0-alfresco-20141223)
Alfresco Community v5.2.0 (r134428-b13) schema 10005

You can find the temp files at this location:
<<alfresco-community>>\tomcat\temp
and the content files are stored in alf_data:
<<alfresco-community>>\alf_data
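As a rough additional cleanup (an assumption based on the standard installer layout, not an official procedure), you can also stop Tomcat and clear its temp and work directories before restarting, so that stale exploded resources and compiled JSPs are rebuilt:
[bykovan@docflow alfresco-community]$ sudo ./alfresco.sh stop
[bykovan@docflow alfresco-community]$ sudo rm -rf tomcat/temp/* tomcat/work/*
[bykovan@docflow alfresco-community]$ sudo ./alfresco.sh start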

Axel Faust gave an exhaustive answer to the question "What caches does Alfresco use and how to properly clean them?":
"Any caches that Alfresco uses are emptied / cleared when Alfresco is restarted... Generally you should NEVER have to work with the database directly. It is strongly discouraged and not supported by Alfresco in any way."
This solved my issue.

Related

Can not get flyway-docker to recognize local files in volumes

I am trying to use Flyway to set up a DB2 test/demo environment in a Docker container. I have an image of DB2 running in a docker container and now am trying to get flyway to create the database environment. I can connect to the DB2 docker container and create DB2 objects and load them with data, but am looking for a way for non-technical users to do this (i.e. clone a GitHub repo and issue a single docker run command).
The Flyway Docker site (https://github.com/flyway/flyway-docker) indicates that it supports the following volumes:
| Volume | Description |
|-------------------|--------------------------------------------------------|
| `/flyway/conf` | Directory containing a flyway.conf |
| `/flyway/drivers` | Directory containing the JDBC driver for your database |
| `/flyway/sql` | The SQL files that you want Flyway to use |
I created the conf, drivers, and sql directories. In the conf directory, I placed the file flyway.conf that contained my flyway Url, user name, and password:
flyway.url=jdbc:db2://localhost:50000/apidemo
flyway.user=DB2INST1
flyway.password=mY%tEst%pAsSwOrD
In the drivers directory, I added the DB2 JDBC Type 4 drivers (e.g. db2jcc4.jar, db2jcc_license_cisuz.jar),
And in the sql directory I put in a simple table creation statement (file name: V1__make_temp_table.sql):
CREATE TABLE EDS.REFT_TEMP_DIM (
TEMP_ID INTEGER NOT NULL
, TEMP_CD CHAR (8)
, TEMP_NM VARCHAR (255)
)
DATA CAPTURE NONE
COMPRESS NO;
When I attempt the docker run with the flyway/flyway image as described in the GitHub Readme.md, it does not recognize the flyway.conf file, since it reports that it does not know the url, user, and password.
docker run --rm -v sql:/flyway/sql -v conf:/flyway/conf -v drivers:/flyway/drivers flyway/flyway migrate
Flyway Community Edition 6.5.5 by Redgate
ERROR: Unable to connect to the database. Configure the url, user and password!
I then put the url, user, and password inline, and it could not find the JDBC driver.
docker run --rm -v sql:/flyway/sql -v drivers:/flyway/drivers flyway/flyway -url=jdbc:db2://localhost:50000/apidemo -user=DB2INST1 -password=mY%tEst%pAsSwOrD migrate
ERROR: Unable to instantiate JDBC driver: com.ibm.db2.jcc.DB2Driver => Check whether the jar file is present
Caused by: Unable to instantiate class com.ibm.db2.jcc.DB2Driver : com.ibm.db2.jcc.DB2Driver
Caused by: java.lang.ClassNotFoundException: com.ibm.db2.jcc.DB2Driver
Therefore, I believe it is the way I am setting up the local file system, or how I am associating the local directories with the Flyway volumes, that is causing the issue. Does anyone have an idea of what I am doing wrong?
You need to supply absolute paths to your volumes for docker to mount them.
Changing the relative paths to absolute paths fixed the volume mount issue.
docker run --rm \
-v /Users/steve/github-ibm/flyway-db-migration/sql:/flyway/sql \
-v /Users/steve/github-ibm/flyway-db-migration/conf:/flyway/conf \
-v /Users/steve/github-ibm/flyway-db-migration/drivers:/flyway/drivers \
flyway/flyway migrate
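If you run the command from the directory that contains the sql, conf, and drivers folders, one way to avoid typing the full paths (a sketch, assuming a Unix-like shell) is to build them with $(pwd):
docker run --rm \
-v "$(pwd)/sql:/flyway/sql" \
-v "$(pwd)/conf:/flyway/conf" \
-v "$(pwd)/drivers:/flyway/drivers" \
flyway/flyway migrate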

How to work with a WordPress development server?

I'm testing WordPress for a personal project, but I would like to install my development WordPress website locally and install the final website on my personal production server.
To do that, I am looking for a plugin or program to synchronize my development WordPress site (new pages, templates, and configuration) with my production WordPress site.
Is there a program or plugin to do that? What is the best way to work with WordPress?
Thanks :)
There are two approaches you can try:
- Copy files to production on a schedule, for example with a Linux crontab entry (every minute):
* * * * * scp local_file remote_username@remote_ip:remote_file
But I don't recommend this approach; it is only mentioned because it is easy to understand.
- Use CI/CD. Here is a blog link to introduce the concept first, if you are not already familiar with it:
https://thecodingmachine.io/continuous-delivery-on-a-dedicated-server
Briefly, you can push your project to a private repo on GitLab or GitHub,
then create development (= development server) and production (= production server) branches; an automated job will deploy to the servers whenever you git push.
Here's the main part of an example .gitlab-ci.yml from that link:
deploy_staging:
  stage: deploy
  image: kroniak/ssh-client:3.6
  script:
    # add the server as a known host
    - mkdir ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # log into Docker registry
    - ssh deployer@thecodingmachine.io "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.thecodingmachine.com"
    # stop container, remove image.
    - ssh deployer@thecodingmachine.io "docker stop thecodingmachine.io_${CI_COMMIT_REF_SLUG}" || true
    - ssh deployer@thecodingmachine.io "docker rm thecodingmachine.io_${CI_COMMIT_REF_SLUG}" || true
    - ssh deployer@thecodingmachine.io "docker rmi registry.thecodingmachine.com/tcm-projects/thecodingmachine.io:${CI_COMMIT_REF_SLUG}" || true
    # start new container
    - ssh deployer@thecodingmachine.io "docker run --name thecodingmachine.io_${CI_COMMIT_REF_SLUG} --network=web -d registry.thecodingmachine.com/tcm-projects/thecodingmachine.io:${CI_COMMIT_REF_SLUG}"
  only:
    - branches
  except:
    - master
This may be hard to read at first, but it shows there is a way to do what you need; it may take some time to learn this part.
Hope it works for you.
Thanks to David Négrier for sharing.

Alfresco server startup gets hung

I am trying to start an Alfresco server, but it hangs partway through startup (please see the screenshot below). I copied the Alfresco instance from one server to another server, and I have also made the necessary changes in alfresco-global.properties.
Please help with this.
To back up your database and alf_data, you can download and run the following script:
http://www.contcentric.com/alfresco-backup/
Note: you will have to manually back up the indexes from the solr4 folder and other customizations (such as deployed AMPs and JARs).
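If you prefer to take the database dump by hand, a minimal sketch (assuming the bundled PostgreSQL with the default alfresco database and user; adjust the host, port, and backup paths to your installation) is:
<ALF-HOME>/postgresql/bin/pg_dump -U alfresco -h localhost -p 5432 alfresco > /path/to/backup/alfresco_db_dump
cp -r <ALF-HOME>/alf_data /path/to/backup/alf_data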
Follow these Alfresco restore steps:
1. Install new alfresco instance. Do not start server
2. Start postgresql using the following command
./alfresco.sh start postgresql
3. Go to the <ALF-HOME>/postgresql/bin
4. Run the following command
psql -U alfresco -h <hostname> -p port
e.g. psql -U alfresco -h localhost -p 5422
5. It will ask you to set the password; enter the password and remember it
6. Run the following command
psql -U alfresco -h <host> -p port <dbname> < dumpFile
e.g. psql -U alfresco -h localhost -p 5422 alfresco < /opt/migration-backup/01-10-2018-15-54-47/database/alfresco_db_dump
7. You will notice that multiple tables and indexes are created
8. Start the tomcat using the following command
./alfresco.sh start tomcat
9. Test your migration.

How to Handle Runtime Configuration of Symfony2 Using Consul Service Discovery

Our team is presently exploring the idea of service discovery for a Symfony2 application using Consul. Being in the relative frontier, there's very little out there in the way of discussion. So far we've discovered:
Runtime configuration has previously been shot down.
A bundle exists to handle such a use case, but it also hasn't seen a lot of activity as of late.
One of the contributors of said bundle suggested that External Parameters might be the answer to the problem.
Sensio has created its own Consul SDK. However, there seems to be little in the way of documentation/official blog articles re: Symfony2 integration
Consul provides watches which can be triggered on various changes
Current thoughts are to explore utilizing the Consul watchers to re-trigger a cache build along with external parameters. That said, there is some concern on the overhead of such an operation if services change semi-frequently.
Based on the above, and knowledge of Consul/Symfony internals, would that be an advisable approach? If not, why, and what alternatives are available?
At the company where I work, we took a different route.
Instead of fighting against Symfony to accept runtime configuration (something it should support, like Spring Data Consul, for example), we decided to make Consul update Symfony's configuration, similar in concept to what Frank did, but different in implementation.
We installed Consul and Consul Template. We created a K/V entry that contains the entire parameters.yml file. Example:
Key: eblock/config/parameters.yml
parameters:
    router.request_context.host: dev.eblock.ca
    router.request_context.scheme: http
    router.request_context.base_url: /
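To load that file into Consul's K/V store, one option (a sketch using the standard Consul CLI; the key and file name are simply the ones from this example) is:
consul kv put eblock/config/parameters.yml @parameters.yml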
Then a consul template configuration file was added at location /opt/consul-template/config/eblock.cfg:
template {
    source = "/opt/consul-template/templates/eblock-parameters.yml.ctmpl"
    destination = "/var/www/eblock/app/config/parameters.yml"
    command = "/opt/eblock/scripts/parameters_updated.sh"
}
The contents of ctmpl file are:
{{key "eblock/config/parameters.yml"}}
Finally, our parameters_updated.sh script does:
#!/bin/bash
readonly PROGNAME=$(basename "$0")
readonly LOCKFILE_DIR=/tmp
readonly LOCK_FD=201

lock() {
    local prefix=$1
    local fd=${2:-$LOCK_FD}
    local lock_file=$LOCKFILE_DIR/$prefix.lock

    # create lock file
    eval "exec $fd>$lock_file"

    # acquire the lock
    flock -n $fd \
        && return 0 \
        || return 1
}

lock $PROGNAME || exit 0

export HOME=/root
logger "Starting composer install" && \
/usr/local/bin/composer install -d=/var/www/eblock/ --no-interaction && \
logger "Running composer dump-autoload" && \
/usr/local/bin/composer dump-autoload -d=/var/www/eblock/ --optimize && \
logger "Running app/console c:c/c:w" && \
/usr/bin/php /var/www/eblock/app/console c:c -e=prod --no-warmup && \
/usr/bin/php /var/www/eblock/app/console c:w -e=prod && \
logger "Running doctrine commands" && \
/usr/bin/php /var/www/eblock/app/console doctrine:database:create --env=prod --if-not-exists && \
/usr/bin/php /var/www/eblock/app/console doctrine:migrations:migrate -n --env=prod && \
logger "Restarting php-fpm" && \
/bin/systemctl restart php-fpm &
As long as both the consul and consul-template services are up, as soon as the value changes under the key that consul-template watches, it will render the file into the configured destination and run the configured command for updated parameters.
It works like a charm. =)
A simple KV watcher that writes the value into parameters.yml and triggers a cache:clear is the simplest option in my opinion (a minimal watch command is sketched after this answer); it also provides the benefit of compilation, so the application doesn't have to go to Consul each time to check whether values have been updated.
We're exploring that option now but if you made any progress on this, an update would be appreciated.
[Update 2016-02-23] We've implemented the idea I mentioned above and it works as expected: well. Mind you, we change our parameters only on deploy of a new version (because we also use service discovery by Consul so no need to update service lists in parameters). We mostly did it because it saves us the boring job of changing parameters on several servers. As usual: this might not work for you but I think you would be safe if, like I said before, you don't change your parameters every 5 mins :)
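For reference, the watcher approach mentioned above can be wired up with the stock consul watch command. This is only a sketch: the key name is the one from this thread, and the handler script is hypothetical; it would receive the key data as JSON on stdin, decode the base64 value into parameters.yml, and run app/console cache:clear:
consul watch -type=key -key=eblock/config/parameters.yml /opt/scripts/update_parameters_and_clear_cache.sh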

How to change the owner for an rsync

I understand how rsync preserves permissions.
However, in my case my local computer does not have the user that the files need to be under for the web server. So when I rsync, I need the owner and group to be apache on the web server, but my username on my local computer. Any suggestions?
I want to clarify exactly what I need done.
My personal computer: named 'home' with the user account 'michael'
My web server: named 'server' with the user account 'remote' and user account 'apache'
Current situation: My website is on 'home' with the owner 'michael' and on 'server' with the owner 'apache'. 'home' needs to be using the user 'michael' and 'server' needs to be using the user 'apache'
Task: rsync my website from 'home' to 'server', but have all the files owned by the user 'apache' and the group 'apache'
Problem: rsync will preserve the permissions, owner, and group; however, I need all the files to be owned by apache. I know that not preserving the owner will set the owner to the user on 'server', but since that user is 'remote', the files end up owned by 'remote' instead of 'apache'. I cannot rsync as the user 'apache' (which would be nice); that is a security risk I'm not willing to open up.
My only idea for a solution: after each rsync, manually run chown -R and chgrp -R, but it's a huge system and this takes a long time, especially since this is going to production.
Does anyone know how to do this?
Current command I use to rsync:
rsync --progress -rltpDzC --force --delete -e "ssh -p22" ./ remote@server.com:/website
If you have access to rsync v.3.1.0 or later, use the --chown option:
rsync -og --chown=apache:apache [src] [dst]
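Applied to the command from the question, that would look something like the following sketch. Note that --chown can only take effect if the receiving side is permitted to change ownership (for example, by rsyncing as root on the server):
rsync --progress -rltpDogzC --force --delete --chown=apache:apache -e "ssh -p22" ./ remote@server.com:/website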
More info in an answer from a similar question here: ServerFault: Rsync command issues, owner and group permissions doesn't change
There are hacks you could put together on the receiving machine to get the ownership right -- running 'chown -R apache /website' out of cron would be an effective but pretty kludgey option -- but instead, I'd recommend securely allowing rsync-over-ssh-as-apache.
You'd create a dedicated ssh keypair for this:
ssh-keygen -f ~/.ssh/apache-rsync
and then take ~/.ssh/apache-rsync.pub over to the webserver, where you'd put it into ~apache/.ssh/authorized_keys and carefully specify the allowed command, something like so, all on one line:
command="rsync --server -vlogDtprCz --delete . /website",from="IP.ADDR.OF.SENDER",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAABKEYPUBTEXTsVX9NjIK59wJ+fjDgTQtGwhATsfidQbO6u77dbAjTUmWCZjKAQ/fEFWZGSlqcO2yXXXXXXXXXXVd9DSS1tjE6vAQaRdnMXBggtn4M9rnePD2qlR5QOAUUwhyFPhm6U4VFhRoa3wLvoqCVtCV0cuirB6I45On96OPijOwvAuz3KIE3+W9offomzHsljUMXXXXXXXXXXMoYLywMG/GPrZ8supIDYk57waTQWymUyRohoQqFGMzuDNbq+U0JSRlvLFoVUZ5Piz+gKJwwiFwwAW2iNag/c4Mrb/BVDQAyEQ== comment@email.address
and then your rsync command on your "home" machine would be something like
rsync -av --delete -e 'ssh -i ~/.ssh/apache-rsync' ./ apache@server:/website
There are other ways to skin this cat, but this is the clearest and involves the fewest workarounds, to my mind. It prevents getting a shell as apache, which is the biggest security concern, natch. If you're really deadset against allowing ssh as apache, there are other ways ... but this is how I've done it.
References here: http://ramblings.narrabilis.com/using-rsync-with-ssh, http://www.sakana.fr/blog/2008/05/07/securing-automated-rsync-over-ssh/
Recent versions of rsync (at least 3.1.1) allow you to specify the "remote ownership":
--usermap=tom:www-data
This maps files owned by tom to www-data (the PHP/Nginx user). If you are using a Mac as the client, use brew to upgrade to the latest version. On your server, download the source archive, then "make" it!
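A fuller command for the scenario in the question (a sketch; the paths and host are placeholders) might look like the following. Note that mapping ownership still requires the receiving side to be allowed to change owners, which usually means the receiving rsync runs as root:
rsync -av --usermap=michael:apache --groupmap=michael:apache ./ remote@server.com:/website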
The solution using rsync --chown USER:GROUP [src] [dst] only works if the remote user has write access to the destination directory, which in most cases is not the case.
Here's another solution:
Overview
(srcmachine)    (rsync)    (destmachine)
srcuser      -- SSH -->    destuser
                              |
                              | sudo su jenkins
                              |
                              v
                           jenkins
Let's say that you want to rsync:
From:
Machine: srcmachine
User: srcuser
Directory: /var/lib/jenkins
To:
Machine: destmachine
User: destuser to establish the SSH connection.
Directory: /tmp
Final files owner: jenkins.
Solution
rsync --rsync-path 'sudo -u jenkins rsync' -avP --delete /var/lib/jenkins destuser#destmachine:/tmp
Read more here:
https://unix.stackexchange.com/a/546296/116861
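For the sudo-based approach above to work non-interactively, destuser must be allowed to run rsync as jenkins without a password. A minimal sudoers sketch (an assumption; adjust the user names and the rsync path for your system):
# /etc/sudoers.d/rsync-jenkins
destuser ALL=(jenkins) NOPASSWD: /usr/bin/rsync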
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhnP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
-n : performs a trial run with no changes made; to really execute the command, remove the -n option
