On an Azure CentOS VM, while starting a custom application server that relies on a FLEXnet license, I got the following error:
Error checking out license: System clock has been set back.
Feature: ep_u
License path: /opt/MyApplication/license/MyApplication-license.dat:
FLEXnet Licensing error:-88,309
For further information, refer to the FLEXnet Licensing documentation,
available at "www.flexerasoftware.com".
After searching on the net, I found that this error occurs when some system files have modification dates in the future.
As I didn't find a clear answer to this issue, I'm writing my own.
First, on CentOS, I can check the current timezone:
ls -l /etc/localtime
And update it if needed:
timedatectl list-timezones | grep Paris
sudo timedatectl set-timezone Europe/Paris
Then I need to check whether my system has directories or files with a date in the future:
cd /
sudo find . -newermt "1 days"|more
I don't know why, but this command returns a lot of files and directories, even virtual files like /dev, /sys, /proc...
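If the noise from the virtual filesystems bothers you, a variant like this should skip them (an untested sketch, assuming GNU find; prune any other paths you want to ignore the same way):
cd /
sudo find . \( -path ./proc -o -path ./sys -o -path ./dev \) -prune -o -newermt "1 days" -print | more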
I finally fixed this issue by updating the timestamps of some of the offending files/directories. From the previous results, I fixed the /etc and /var directories.
Here is the command to fix the contents of a given directory (e.g. /var), resetting each entry that has a date in the future to the current date:
cd /var
sudo find . -newermt "1 days"|sudo xargs touch
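If some of those entries have spaces or other special characters in their names, a null-separated variant is safer (an untested sketch, assuming GNU find and xargs):
cd /var
sudo find . -newermt "1 days" -print0 | sudo xargs -0 touch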
hope this helps
I guess this can help:
$ cd /
$ touch ref_file
$ find . -newer ref_file
$ find . -newer ref_file -exec touch {} \;
We are using Drupal 8.7.5 headless, and we continuously get warnings like this.
So my question is: do we need the Twig cache enabled?
How can we solve the warning appearing in the logs?
Those kinds of messages are a folder permission problem most of the time, and that may be the case for your Drupal installation.
So I invite you to verify the owner of the "files" directory:
chown -R :www-data files
Then set the proper permissions on the files directory:
chmod g+ws files
Then fix the permissions of pre-existing files in the files directory:
cd files && find . -type d -exec chmod g+ws {} \; && find . -type f -exec chmod 664 {} \;
As recommended by Chris Toler.
And be careful: maybe you are using a Dockerfile that forces Nginx to run as a user other than www-data, or maybe you are using a shared volume for the "files" directory, as in a cloud web app. In that case you need to verify the permissions on the volumes on the host machine, or use the cloud UI to find the right permissions for your volumes. For further reading you can take a look at this Drupal topic.
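As a quick sanity check when Drupal runs in a container, something like this shows which user the web server runs as and who owns the files directory inside the container (an untested sketch; the container name and path are placeholders, and it assumes ps and GNU stat are available in the image):
docker exec my_drupal_container ps aux | head
docker exec my_drupal_container stat -c '%U:%G %a' /var/www/html/sites/default/files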
I'm getting an error while copying multiple files. The command below copies only the first file and gives an error for the rest of the files. Can someone please help me out?
Command:
scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Result:
user@host:~/scripts/OTA$ scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Password:
Password:
2018084session_event 100% |**********************************************************************************************************| 9765 KB 00:00
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9_2_3
Your command uses Command Substitution to generate a list of files. Your assumption is that there is some magic in the "source" notation for scp that would cause multiple members of the list generated by your find command to be assumed to live on $host, when in fact your command might expand into something like:
scp remotehost:/incoming/someoldfile anotheroldfile /incoming
Only the first file is being copied from $host, because none of the rest include $host: at the beginning of the path. They're not found in your local /incoming directory, hence the error.
Oh, and in addition, you haven't escaped the asterisk in the find command, so 2018* may be expanded to match files in the login directory of the user in question. I can't tell from here; it depends on your OS and shell configuration.
I should point out that you are providing yet another example of the classic "parsing ls" problem. Special characters WILL break your command. The "better" solution usually offered for this problem tends to be a for loop, but that's not really what you're looking for. Instead, I'd recommend making a tar of the files you're looking for. Something like this might do:
ssh "$host" "find /incoming -mmin -120 -name 2018\* -exec tar -cf - {} \+" |
tar -xvf - -C /incoming
What does this do?
ssh runs a remote find command with your criteria.
find feeds the list of filenames (regardless of special characters) to a tar command as options.
The tar command sends its result to stdout (-f -).
That output is then piped into another tar running on your local machine, which extracts the stream.
If your tar doesn't support -C, you can either remove it and run a cd /incoming before the ssh, or you might be able to replace that pipe segment with a curly-braced command: { cd /incoming && tar -xvf -; }
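Put together, that fallback might look something like this (untested):
ssh "$host" "find /incoming -mmin -120 -name 2018\* -exec tar -cf - {} \+" |
  { cd /incoming && tar -xvf -; }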
The curly brace notation assumes a POSIX-like shell (bash, zsh, etc). The rest of this should probably work equally well in csh if that's what you're stuck with.
Limited warranty: Best Effort Only. Untested on animals or computers. Your mileage may vary. May contain nuts.
If this doesn't work for you, poke at it until it does.
I need to securely copy files (only .csv) belonging to May and June 2016 (i.e. last modified between 1st May and 30th June 2016) from one source server to a target server.
Source server: 10.87.87.89
Source folder: /home/server/source
Target server: 10.34.69.32
Target folder: /home/ftp/destination
Example username for TARGET: a_ftp
Example password for TARGET: a_ftp
I have tried the command below:
scp /home/server/source/h.xml a_ftp@10.34.69.32://home/ftp/destination
but it does not take all the files.
First, you will need to create two reference files in the server's filesystem:
$ ssh server@10.87.87.89 touch --date "2016-05-01" /tmp/start.tmp
$ ssh server@10.87.87.89 touch --date "2016-06-30" /tmp/end.tmp
Now, it is possible to list only the files created in May/2016 and June/2016:
$ ssh server@10.87.87.89 find /home/server/source -type f -newer /tmp/start.tmp -not -newer /tmp/end.tmp
And finally, you can use scp inside a for loop, copying only the needed files between the hosts:
for i in `ssh server@10.87.87.89 find /home/server/source -type f -newer /tmp/start.tmp -not -newer /tmp/end.tmp`
do
scp server@10.87.87.89:$i ftp@10.34.69.32:/home/ftp/destination
done
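Since the question mentions copying only .csv files, a -name filter can be added to find; here is an untested variant that also avoids problems with word splitting in the file list:
ssh server@10.87.87.89 "find /home/server/source -type f -name '*.csv' -newer /tmp/start.tmp -not -newer /tmp/end.tmp" |
while read -r i
do
    scp server@10.87.87.89:"$i" ftp@10.34.69.32:/home/ftp/destination
done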
Hope it helps!
Today I installed TestLink. After I selected 'New installation' and chose the 'I agree' option, it failed at the second step. The failure messages are as follows:
Read/write permissions
For security reason we suggest that directories tagged with [S] on following messages, will be made UNREACHEABLE from browser
Checking if C:\xampp\htdocs\testlink\gui\templates_c directory exists OK
Checking if C:\xampp\htdocs\testlink\gui\templates_c directory is writable (by user used to run webserver process) OK
Checking if /var/testlink/logs/ directory exists [S] Failed!
Checking if /var/testlink/upload_area/ directory exists [S] Failed!
So, can anyone give me a hand? Many thanks!
In the C:\xampp\htdocs\testlink\config.inc.php file, set:
$g_repositoryPath = 'C:\xampp\htdocs\testlink\upload_area';
$tlCfg->log_path = 'C:\xampp\htdocs\testlink\logs';
Worked for me. Make sure you don't have the slash at the end,
i.e. make sure that it is NOT:
$g_repositoryPath = 'C:\xampp\htdocs\testlink\upload_area\';
$tlCfg->log_path = 'C:\xampp\htdocs\testlink\logs\';
If you installed the XAMPP or testlink in another directories, change the paths above accordingly.
Go to config.inc.php and edit the log directory ($tlCfg->log_path) to C:\xampp\testlink\logs and the upload directory ($g_repositoryPath) to C:\xampp\testlink\upload_area.
In some cases, you would do it like this:
Go to C:\xampp\htdocs\testlink\config.inc.php
and edit the log directory ($tlCfg->log_path) to C:\xampp\htdocs\testlink\logs
and the upload directory ($g_repositoryPath) to C:\xampp\htdocs\testlink\upload_area
Then you have:
$g_repositoryPath = 'C:\xampp\htdocs\testlink\upload_area';
$tlCfg->log_path = 'C:\xampp\htdocs\testlink\logs';
I had the paths set correctly, and the user, group and access rights were also set correctly, and I still could not get rid of the issue. It took me very long to get to the root cause: the HTTP daemon does not have access to the files in question because of SELinux policies, so a simple chown/chmod (user and group access) would not help. For TestLink 1.16 I resolved it by re-installing as a sudo user, but for an upgrade the issue arose again even with a sudo user.
I resolved the issue by executing the following commands; I hope this helps. (Note: you might have to adjust the paths/attributes to run them successfully.)
$ chcon -t httpd_sys_content_rw_t "<path_to_testlink_folder>/gui/templates_c/"
$ chcon -t httpd_sys_content_rw_t "<path_to_testlink_folder>/upload_area/"
$ chcon -t httpd_sys_content_rw_t "<path_to_testlink_folder>/logs"
$ semanage fcontext -a -t httpd_sys_content_rw_t "<path_to_testlink_folder>(/.*)?"
$ restorecon -R -v <path_to_testlink_folder>
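To confirm the new contexts were actually applied, you can check them with ls -Z (a quick sketch):
$ ls -dZ "<path_to_testlink_folder>/gui/templates_c" "<path_to_testlink_folder>/upload_area" "<path_to_testlink_folder>/logs"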
Ubuntu 12.04 - all you have to do is chmod 777 these directories and the Fails will become Pass.
~$ cd /var/www/testlink
~$ sudo chmod 777 ./gui/templates_c/
~$ sudo chmod 777 ./upload_area/
~$ sudo chmod 777 ./logs/
Whatever the instructions say is total BS. Making these directories unreachable from the browser is optional, and that created confusion. If you chmod 777 them, your Fails will turn into Pass and you'll be able to proceed to step 3 of your TestLink installation. Tested with TestLink version 1.9.5.
For Mac OS users, try this in version 1.9.19.
Make sure the folder names match yours.
In config.inc.php file:
$tlCfg->log_path = TL_ABS_PATH . 'logs' . DIRECTORY_SEPARATOR;
$g_repositoryPath= TL_ABS_PATH . 'upload_area' . DIRECTORY_SEPARATOR;
If you still get a read/write permission failure after this:
go to the testlink logs / upload_area folders, press Command+I, and under Sharing & Permissions enable Read & Write for everyone.
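If you prefer the terminal over the Finder dialog, the rough equivalent would be something like this (a sketch; substitute the actual path to your TestLink folder):
chmod -R a+rw /path/to/testlink/logs /path/to/testlink/upload_area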
On Linux, ensure the paths are set as follows:
$tlCfg->log_path = '/var/www/html/testlink/logs/';
$g_repositoryPath = '/var/www/html/testlink/upload_area/';
Valid for Ubuntu 16.04 LTS; add permissions.
Change:
$g_repositoryPath = '/var/www/html/testlink/upload_area'; // linux user
$tlCfg->log_path = '/var/www/html/testlink/logs';
~$ cd /var/www/testlink
~$ sudo chmod 777 ./gui/templates_c/
~$ sudo chmod 777 ./upload_area/
~$ sudo chmod 777 ./logs/
In CentOS, go to /var/www/html/testlink-code-1.9.16 and edit the file custom_config.inc.php, replacing these two lines:
// $tlCfg->log_path = '/var/testlink-ga-testlink-code/logs/'; /* unix example */
// $g_repositoryPath = '/var/testlink-ga-testlink-code/upload_area/'; /* unix example */
with
$tlCfg->log_path = '/var/www/html/testlink-code-1.9.16/logs/';
$g_repositoryPath = '/var/www/html/testlink-code-1.9.16/upload_area/';
Make sure you have disabled SELinux. To do so, edit the file /etc/sysconfig/selinux, change the variable SELINUX to disabled, and reboot the machine. Now these errors should be gone.
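If you want to confirm that SELinux is the culprit before rebooting, you can switch it to permissive mode temporarily (a quick sketch; this only lasts until the next reboot):
getenforce
sudo setenforce 0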
On Ubuntu 18.04, you will need to run
apt-get remove apparmor
in order to complete the installation.
To solve the problem
"Checking if /var/www/html/testlink-1.9.16/gui/templates_c directory is writable (by user used to run webserver process)" on CentOS 7:
Disable SELinux, and then restart your system.
You should no longer have this error message.
I understand how to preserve permissions with rsync.
However, in my case my local computer does not have the user the files need to be under for the web server. So when I rsync, I need the owner and group to be apache on the web server, but my username on my local computer. Any suggestions?
I wanted to clarify exactly what I need done.
My personal computer: named 'home' with the user account 'michael'
My web server: named 'server' with the user account 'remote' and user account 'apache'
Current situation: my website is on 'home' with the owner 'michael' and on 'server' with the owner 'apache'. 'home' needs to keep using the user 'michael' and 'server' needs to keep using the user 'apache'.
Task: rsync my website from 'home' to 'server' but have all the files owned by the user 'apache' and the group 'apache'.
Problem: rsync will preserve the permissions, owner, and group; however, I need all the files to be owned by apache. I know that not preserving the owner will set the owner to the user on 'server', but since that user is 'remote', it uses that instead of 'apache'. I cannot rsync as the user 'apache' (which would be nice), because that is a security risk I'm not willing to open up.
My only idea on how to solve this: after each rsync, manually chown -R and chgrp -R, but it's a huge system and this takes a long time, especially since this is going to production.
Does anyone know how to do this?
Current command I use to rsync:
rsync --progress -rltpDzC --force --delete -e "ssh -p22" ./ remote@server.com:/website
If you have access to rsync v.3.1.0 or later, use the --chown option:
rsync -og --chown=apache:apache [src] [dst]
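Applied to the command in the question, that would look something like this (an untested sketch; -o and -g are needed for --chown to take effect, and the receiving side still needs sufficient privileges to change ownership, e.g. running as root):
rsync --progress -rltpDzCog --force --delete --chown=apache:apache -e "ssh -p22" ./ remote@server.com:/website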
More info in an answer from a similar question here: ServerFault: Rsync command issues, owner and group permissions doesn´t change
There are hacks you could put together on the receiving machine to get the ownership right -- running 'chown -R apache /website' out of cron would be an effective but pretty kludgey option -- but instead, I'd recommend securely allowing rsync-over-ssh-as-apache.
You'd create a dedicated ssh keypair for this:
ssh-keygen -f ~/.ssh/apache-rsync
and then take ~/.ssh/apache-rsync.pub over to the webserver, where you'd put it into ~apache/.ssh/authorized_keys and carefully specify the allowed command, something like so, all on one line:
command="rsync --server -vlogDtprCz --delete . /website",from="IP.ADDR.OF.SENDER",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAABKEYPUBTEXTsVX9NjIK59wJ+fjDgTQtGwhATsfidQbO6u77dbAjTUmWCZjKAQ/fEFWZGSlqcO2yXXXXXXXXXXVd9DSS1tjE6vAQaRdnMXBggtn4M9rnePD2qlR5QOAUUwhyFPhm6U4VFhRoa3wLvoqCVtCV0cuirB6I45On96OPijOwvAuz3KIE3+W9offomzHsljUMXXXXXXXXXXMoYLywMG/GPrZ8supIDYk57waTQWymUyRohoQqFGMzuDNbq+U0JSRlvLFoVUZ5Piz+gKJwwiFwwAW2iNag/c4Mrb/BVDQAyEQ== comment#email.address
and then your rsync command on your "home" machine would be something like
rsync -av --delete -e 'ssh -i ~/.ssh/apache-rsync' ./ apache@server:/website
There are other ways to skin this cat, but this is the clearest and involves the fewest workarounds, to my mind. It prevents getting a shell as apache, which is the biggest security concern, natch. If you're really dead set against allowing ssh as apache, there are other ways ... but this is how I've done it.
References here: http://ramblings.narrabilis.com/using-rsync-with-ssh, http://www.sakana.fr/blog/2008/05/07/securing-automated-rsync-over-ssh/
Recent versions of rsync (at least 3.1.1) allow you to specify the "remote ownership":
--usermap=tom:www-data
This maps tom's ownership to www-data (aka the PHP/Nginx user). If you are using a Mac as the client, use brew to upgrade to the latest version. And on your server, download the source archive, then "make" it!
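For example, something like this (a sketch; 'tom', the paths and the host are placeholders; like --chown, this needs -o/-g, which -a implies, plus sufficient privileges on the receiving side):
rsync -av --usermap=tom:www-data --groupmap=tom:www-data ./website/ username@hostname:/var/www/html/website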
The solution using rsync --chown USER:GROUP [src] [dst] only works if the remote user has write access to the destination directory, which in most cases is not the case.
Here's another solution:
Overview
(srcmachine)     (rsync)      (destmachine)
  srcuser  ------- SSH ------->  destuser
                                     |
                                     | sudo su jenkins
                                     |
                                     v
                                  jenkins
Let's say that you want to rsync:
From:
Machine: srcmachine
User: srcuser
Directory: /var/lib/jenkins
To:
Machine: destmachine
User: destuser (used to establish the SSH connection)
Directory: /tmp
Final files owner: jenkins.
Solution
rsync --rsync-path 'sudo -u jenkins rsync' -avP --delete /var/lib/jenkins destuser#destmachine:/tmp
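Note that for the sudo -u jenkins part to work non-interactively, destuser typically needs a sudoers entry on destmachine along these lines (a sketch; the rsync path may differ on your system):
destuser ALL=(jenkins) NOPASSWD: /usr/bin/rsync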
Read more here:
https://unix.stackexchange.com/a/546296/116861
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user#user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhnP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username#hostname:/var/www/html/website
-n: perform a trial run with no changes made; to actually execute the command, remove the -n option.