sh script: no output when run in mounted filesystem - unix

Need some help to understand what's wrong.
In short: I've written a Bourne shell script which creates links to the contents of a source directory in a target directory.
It worked fine on the host system, but when pointed at directories on a mounted fs (both from a chroot and from the native system) it doesn't work and produces no output at all.
Details:
mounted fs: ext3, rw
host system: 3.2.0-48-generic #74-Ubuntu SMP GNU/Linux
To narrow the question, "/usr" was taken as an example.
permissions for "/usr" in the host system: drwxr-xr-x
permissions for "/usr" on mounted partition: drwxr-xr-x
I tried both bash and dash from the host system. Same result: it works for native file systems but not for the mounted one.
script (cord.sh; run from root in my cases):
#!/bin/sh
SRCFOLDER=$2   # folder with package installation
DESTFOLDER=$3  # destination folder to install symlinks to ('/' - for base sys; '/usr' - userland)
TARGETS=$(ls $SRCFOLDER)    # targets to handle
SRCFOLDER=${SRCFOLDER%/}    # stripping slashes from the end, if they are present
DESTFOLDER=${DESTFOLDER%/}  #
##
## LINKING
##
if [ "$1" = "-c" ]; then
    printf %s "$TARGETS" | while IFS= read -r line
    do
        current_target=$(file $SRCFOLDER/$line)  # had an issue with different output in different systems
        if [ "${current_target% }" = "$SRCFOLDER/$line: directory" ]  # stripping space helped
        then
            mkdir -v $DESTFOLDER/$line  # if other package created it - it'll fail
            /usr/local/bin/cord.sh -c $SRCFOLDER/$line $DESTFOLDER/$line  # RECURSION
        else
            ln -sv $SRCFOLDER/$line $DESTFOLDER/$line  # will fail, if exists
        fi
    done
##
## REMOVING LINKS
##
elif [ "$1" = "-d" ]; then
    printf %s "$TARGETS" | while IFS= read -r line
    do
        current_target=$(file $SRCFOLDER/$line)
        if [ "${current_target% }" = "$SRCFOLDER/$line: directory" ]
        then
            /usr/local/bin/cord.sh -d $SRCFOLDER/$line $DESTFOLDER/$line  # RECURSION
        else
            rm -v $DESTFOLDER/$line
        fi
    done
elif [ "$1" = "-h" ]; then
    echo "Usage:"
    echo "cord -c /path/to/pkgdir /path/to/linkdir - create symlinks for package contents"
    echo "cord -d /path/to/pkgdir /path/to/linkdir - delete links for package"
    echo "cord -h - displays this help note"
else
    echo "Usage:"
    echo "cord -c /path/to/pkgdir /path/to/linkdir - create symlinks for package contents"
    echo "cord -d /path/to/pkgdir /path/to/linkdir - delete links for package"
    echo "cord -h - displays this help note"
fi
The most obvious thing to suspect was some issue with permissions, yet everything looks sane. Maybe I've missed something?

I don't know what your main problem might be (permissions or something else; you should include an example of how you run the script and how you prepare for it, with the mounts and everything). But this script can be cleaned up.
First, if you want to test whether something is a directory, use
if [ -d "$something" ]
That'll get rid of the clumsy file usage.
Second, don't go through the redundant steps of converting your $TARGETS array to a series of lines and then reading the lines with a loop. Just loop over the array directly.
for line in $TARGETS
Also, instead of using ls to populate a list of filenames, I'd use a glob. But instead of either of those, I'd use find, so it can take care of the recursion and eliminate the tree of processes you're creating by recursing into the same script. And instead of writing a symlink-tree-maker script at all, I'd use something like lndir, which already exists for that purpose...
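For illustration, a minimal find-based sketch of the -c case (my own sketch, not the answerer's code; it assumes GNU find for -mindepth and, like the original, breaks on filenames containing newlines):
#!/bin/sh
# create-links.sh <srcdir> <destdir> - hypothetical standalone example
SRCFOLDER=${1%/}
DESTFOLDER=${2%/}
# recreate the directory tree first...
find "$SRCFOLDER" -mindepth 1 -type d | while IFS= read -r dir; do
    mkdir -pv "$DESTFOLDER/${dir#"$SRCFOLDER"/}"
done
# ...then symlink everything that is not a directory
find "$SRCFOLDER" -mindepth 1 ! -type d | while IFS= read -r file; do
    ln -sv "$file" "$DESTFOLDER/${file#"$SRCFOLDER"/}"
done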

DDEV multisite setup with Acquia pull

I've just gotten DDEV setup and I have multisite working by manually running ddev import-db --target-db=[db-name]. It's working just fine but I would like to figure out how to get database pulls from Acquia to work where I can specify the site to pull from.
I have this script working but is there a way to do this with DDEV commands that would be a little cleaner?
First I modified acquia.yaml to this:
environment_variables:
  project_id: mysite.dev
  uri: mysite.com
  db_name: mysite_us
  #uri: mysite.ca
  #db_name: mysite_canada
  #uri: mysite.co.uk
  #db_name: mysite_unitedkingdom
  # etc etc
db_pull_command:
  command: |
    # set -x   # You can enable bash debugging output by uncommenting
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    acli remote:drush -n ${project_id} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db_name}.sql
Then I wrote the following script, which I call like:
./ddev-refresh-db.sh mysite_us mysite.com
#!/bin/bash
site="$1"
uri="$2"
ddev pull acquia
ddev import-db --target-db=${site} --src=.ddev/.downloads/${site}.sql
ddev drush --uri=${uri} cr
However, this still requires us to change the site and URI in the acquia.yaml file before running the command.
Is there a way to pass a variable through to ddev pull acquia? And is there a way to mimic what this script is doing with a real DDEV command?
Here's a more complete answer for Acquia multisite pull, pulling all sites. As of DDEV v1.18.0, ddev pull itself really isn't robust enough to pull multiple sites, because it assumes one database and one set of files. This works where @kelly howard's answer in https://stackoverflow.com/a/68553116/215713 is inadequate. (In her example, she pulls just one of the multisites, and it works great for that situation.)
But here we'll put all the logic in a DDEV custom command and pull all databases and files for any named site, with ddev acquiapull <sitename>.
Place this file in the project as .ddev/commands/web/acquiapull
#!/bin/bash

# This DDEV custom command is set up to pull database and files from Acquia for several subsites.
# Usage: `ddev acquiapull [ --skip-db ] [ --skip-files ] <site1> <site2>`
# Example: `ddev acquiapull subsite1`
# This assumes that each subsite has its own database (named for the site)
# and that each subsite has its own files in sites/<sitename>/files.
# To use it, set up the needed ACQUIA_API_KEY and ACQUIA_API_SECRET in global
# or project config, just as described in
# https://ddev.readthedocs.io/en/stable/users/providers/acquia/

acquia_project_id=myprojectid.dev
tmpdir=/tmp # inside web container

set -eu -o pipefail

while :; do
    case ${1:-} in
        -h | -\? | --help)
            printf "Usage: ddev acquiapull [ --skip-db ] [ --skip-files ] <sitename>\n"
            exit
            ;;
        -y | --yes)
            SKIP_CONFIRMATION=true
            ;;
        --skip-files)
            SKIP_FILES=true
            ;;
        --skip-db)
            SKIP_DB=true
            ;;
        --) # End of all options.
            shift
            break
            ;;
        -?*)
            printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
            ;;
        *) # Default case: no more options, so break out of the loop.
            break ;;
    esac
    shift
done

# Map sitename to database name
function target_db_name() {
    site_name=$1
    echo $site_name
}

# Map sitename to files dir
function target_files_dir() {
    site_name=$1
    echo "sites/${site_name}/files"
}

# Get the files from upstream and load them.
function files_pull() {
    # set -x   # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    site_name=$1
    files_dir=$(target_files_dir $1)
    mkdir -p ${DDEV_DOCROOT}/${files_dir}/
    echo "Using drush rsync to update files for ${site_name}..."
    drush rsync --alias-path=~/.drush -q -y -r ${DDEV_DOCROOT} --verbose @${acquia_project_id}:${files_dir}/ ${DDEV_DOCROOT}/${files_dir}/
}

# Get the db from upstream and load it
function db_pull() {
    # set -x   # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    site_name=$1
    target_db=$(target_db_name ${site_name})
    echo "Downloading ${site_name} database..."
    acli remote:drush -n ${acquia_project_id} -- sql-dump --uri=${site_name} --extra-dump=--no-tablespaces >${tmpdir}/${site_name}.sql
    echo "Loading ${site_name} into database '${target_db}'..."
    mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS ${target_db}; GRANT ALL ON ${target_db}.* TO 'db'@'%';"
    mysql -uroot -proot ${target_db} <${tmpdir}/${site_name}.sql
    drush -r root --uri=${site_name} cr
}

# Handle initial authentication via Acquia secrets and ssh
function authenticate() {
    if [ -z "${ACQUIA_API_KEY:-}" ] || [ -z "${ACQUIA_API_SECRET:-}" ]; then echo "Please make sure you have set ACQUIA_API_KEY and ACQUIA_API_SECRET in your project or global config" && exit 1; fi
    if ! command -v drush >/dev/null; then echo "Please make sure your project contains drush, ddev composer require drush/drush" && exit 1; fi
    ssh-add -l >/dev/null || (echo "Please 'ddev auth ssh' before running this command." && exit 1)
    acli auth:login -n --key="${ACQUIA_API_KEY}" --secret="${ACQUIA_API_SECRET}"
    acli remote:aliases:download -n >/dev/null
}

# Main script
authenticate || (printf "Failed to authenticate\n" && exit $?)

if [ $# -eq 0 ]; then
    printf "Usage: ddev acquiapull [ --skip-db ] [ --skip-files ] <sitename>\n"
    exit 1
fi

if [ "${SKIP_CONFIRMATION:-}" != "true" ]; then
    echo "This will overwrite your database and files for sites $*. OK?"
    select yn in "Yes" "No"; do
        case $yn in
            Yes ) break;;
            No ) exit;;
        esac
    done
fi

for subsite in $*; do
    echo "Pulling subsite: $subsite"
    if [ "${SKIP_DB:-}" != "true" ]; then
        db_pull ${subsite} || (printf "Failed to pull db for ${subsite}\n" && exit $?)
    else
        echo "Skipping db pull for ${subsite}"
    fi
    if [ "${SKIP_FILES:-}" != "true" ]; then
        files_pull ${subsite} || (printf "Failed to pull files for ${subsite}\n" && exit $?)
    else
        echo "Skipping files pull for ${subsite}"
    fi
done
Thanks to the guidance from @rfay, I set up a set of files in .ddev/providers, one per country. Each one is structured like this:
environment_variables:
  uri: mysite.be
  db_name: belgium
auth_command:
  command: |
    <no changes>
db_pull_command:
  command: |
    # set -x   # You can enable bash debugging output by uncommenting
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    acli remote:drush -n ${ACQUIA_PROJECT_ID} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db_name}.sql
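With one provider file per country (for example .ddev/providers/belgium.yaml), each database can then be pulled by the provider's file name, which is what ddev pull expects:
ddev pull belgium -y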
Then I created a custom command in .ddev/commands/host that has the contents of my script. The real script has more cases, to cover all the countries.
#!/usr/bin/env bash

## Description: Refresh a database from Acquia and run post-db commands
## Usage: refresh-db [dbname]
## Example: "ddev refresh-db france"

site="$1"

case $site in
    canada)
        uri="mysite.ca"
        ;;
    australia)
        uri="mysite.com.au"
        ;;
    belgium)
        uri="mysite.be"
        ;;
    brazil)
        uri="mysite.com.br"
        ;;
    *)
        site="db"
        uri="mysite.com"
        ;;
esac

ddev pull ${site} -y 2>/dev/null # suppress the "pull failed" message, since it didn't really fail
ddev import-db --target-db=${site} --src=${DDEV_APPROOT}/.ddev/.downloads/${site}.sql
ddev drush --uri=${uri} cr
ddev drush --uri=${uri} -y pmu simplesamlphp_auth
ddev drush --uri=${uri} -y config-set system.performance css.preprocess 0
ddev drush --uri=${uri} -y config-set system.performance js.preprocess 0
I tried to handle the db import during the db_pull_command as suggested, but I couldn't get past database permission errors when importing a DB that I had not already imported using ddev import-db. However, with the custom command I can also incorporate the post-db-import steps, which would normally only run against the default DB if done through config.yaml.
The other change I made was to move the project ID into the web environment settings in the global_config.yaml file. This way, if we want to change the environment we pull from, we just edit the project ID there and don't have to edit the provider files.
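That change looks something like this in ~/.ddev/global_config.yaml (web_environment is the standard DDEV key; the project ID value is a placeholder):
web_environment:
- ACQUIA_PROJECT_ID=mysite.dev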
I'm not experienced with contributing back to open source projects, but if this can be helpful to others I'd love to work with someone to do a pull request on the documentation or wherever it belongs.
I'm going to go ahead and answer in general, but you can add a full answer when you get this sorted out. (I don't have access to an Acquia multisite.)
You're on the right track, but you can do all of this in the pull script. The problem you're having is that ddev just assumes a single database, and you have multiple.
Here's a strategy for your acquia.yaml:
Create all the databases. You can use mysql -e "CREATE DATABASE IF NOT EXISTS <dbname>;" on several lines or in a for loop.
Pull all the databases. You can do this with separate acli lines, or use a for loop.
Import the databases that aren't the primary db using the mysql command: mysql <dbname> < <dbname>.sql. Again, this can be a few lines or a for loop. (You can also just import the primary db; it will simply be re-imported by ddev, no harm done if it's not large.) All three steps are sketched below.
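For concreteness, a hedged sketch of what that strategy could look like inside acquia.yaml's db_pull_command (the database names are placeholders, the root/root mysql credentials and the 'db'@'%' grant match the acquiapull script above, and it uses each database name as the --uri the way that script does):
db_pull_command:
  command: |
    set -eu -o pipefail
    pushd /var/www/html/.ddev/.downloads >/dev/null
    # placeholder list of multisite databases; adjust to your sites
    for db in mysite_us mysite_canada mysite_unitedkingdom; do
      # 1. create the database and let the web user at it
      mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS ${db}; GRANT ALL ON ${db}.* TO 'db'@'%';"
      # 2. pull the dump from Acquia
      acli remote:drush -n ${ACQUIA_PROJECT_ID} -- sql-dump --extra-dump=--no-tablespaces --uri=${db} >${db}.sql
      # 3. import everything that isn't the primary db
      mysql -uroot -proot ${db} <${db}.sql
    done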
Thanks for the great question, and I hope you'll give a full answer here. Your answer could also be incorporated into https://ddev.readthedocs.io/en/stable/users/providers/acquia/ - you could do a PR there by clicking the pencil link at the upper right.

NordVPN setup on linux

NordVPN does not offer an automatic setup for Linux, just VPN config files. What's the best way to implement this?
(my own implementation below, please feel free to comment or suggest improvements!)
EDIT: When I wrote this, I did not know that NordVPN had recently introduced a command line tool for Linux.
I have written a little script that downloads the config files, renames them and enables automatic authentication. Insert your NordVPN login credentials in the "generate authentication file" part.
#!/bin/bash
# run as root!!!
# install openvpn. I'm running arch, this might be different on your system.
pacman -S openvpn
# go to openvpn config folder
cd /etc/openvpn
# download config files, extract and clean up
wget https://downloads.nordcdn.com/configs/archives/servers/ovpn.zip
unzip ovpn.zip
rm ovpn.zip
# rename tcp config files and put them in /etc/openvpn/client
cd ovpn_tcp
for file in *; do mv "${file}" "${file/.nordvpn.com.tcp.ovpn/}tcp.conf"; done
cp * ../client
# rename udp config files and put them in /etc/openvpn/client
cd ../ovpn_udp
for file in *; do mv "${file}" "${file/.nordvpn.com.udp.ovpn/}udp.conf"; done
cp * ../client
# generate authentication file
cd ../client
printf "<your email>\n<your password>" > auth.txt
# make all configs use authentication file
find . -name '*.conf' -exec sed -i -e 's/auth-user-pass/auth-user-pass\ auth.txt/g' {} \;
# clean up
cd ..
rm -r ovpn_tcp/
rm -r ovpn_udp
You can now start and stop VPN connections via e.g.
systemctl start openvpn-client@de415tcp.service
and
systemctl stop openvpn-client@de415tcp.service
To automate this, and to connect to the server recommended by NordVPN, I have written two scripts. Make them executable and put them somewhere in your $PATH.
Pass a country code (like us, de or uk) as a command line argument to start-vpn if you want to choose a specific country. It automatically chooses a TCP connection; you can change that to UDP if you want.
start-vpn
#!/usr/bin/python
import sys
import requests
import os
import time

# you don't necessarily need the following. It's for monitoring via i3blocks.
def notify_i3blocks():
    os.system('pkill -RTMIN+12 i3blocks')

def fork_and_continue_notifying_in_background():
    newpid = os.fork()
    if newpid == 0:  # if this is the child process
        for i in range(60):
            notify_i3blocks()
            time.sleep(1)

if __name__ == '__main__':
    notify_i3blocks()
    # below is what you do need.
    suffix = ''
    if len(sys.argv) > 1:
        countries = requests.get('https://nordvpn.com/wp-admin/admin-ajax.php?action=servers_countries').json()
        for country in countries:
            if country["code"].lower() == sys.argv[1].lower():
                suffix = '&filters={"country_id":' + str(country["id"]) + '}'
    result = requests.get('https://nordvpn.com/wp-admin/admin-ajax.php?action=servers_recommendations' + suffix)
    profile = result.json()[0]['subdomain'] + 'tcp'
    command = 'systemctl start openvpn-client@' + profile + '.service'
    os.system(command)
    # the following is for i3blocks again.
    fork_and_continue_notifying_in_background()
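For example, to connect to the recommended German server (assuming the script is in your $PATH and run as root, as with the aliases below):
sudo start-vpn de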
stop-vpn
#!/bin/bash
function service {
systemctl |
grep openvpn |
grep running |
head -n1 |
awk '{print $1;}'
}
while [[ $(service) ]]; do
systemctl stop $(service)
done
# notify i3blocks
pkill -RTMIN+12 i3blocks
For convenience, I have two aliases in my ~/.bashrc:
alias start-vpn='sudo start-vpn'
alias stop-vpn='sudo stop-vpn'
If you do want to monitor it via i3blocks, put this in your i3blocks config:
[vpn]
interval=once
signal=12
and this in your i3blocks scripts directory (named vpn):
#!/bin/bash
function name {
    systemctl |
    grep openvpn |
    grep running |
    head -n1 |
    awk '{print $1;}' |
    cut -d @ -f 2 |
    cut -d . -f 1
}
starting=$(pgrep -f start-vpn) # this might not be the most accurate, but it works for me. Improvement suggestions are welcomed.
if [[ $(name) ]]; then
    echo $(name)
    echo && echo "#00FF00"
else
    if [[ ${starting} ]]; then
        echo "starting vpn..."
        echo && echo "#FFFF00"
    else
        echo "no vpn"
        echo && echo "#FF0000"
    fi
fi
In order to automatically start and stop the VPN when a network interface goes up or down, put the following in /etc/NetworkManager/dispatcher.d/10-openvpn. To activate the feature, you need to enable and start the NetworkManager-dispatcher.service.
At my university I connect to eduroam, which does not allow VPN; that's why I exclude it.
/etc/NetworkManager/dispatcher.d/10-openvpn
#!/bin/bash
case "$2" in
    up)
        if ! nmcli -t connection | grep eduroam | grep wlp3s0 ; then
            start-vpn
        fi
        ;;
    down)
        stop-vpn
        ;;
esac
I hope this helps other people who want to use NordVPN on Linux. Again, feel free to comment and suggest improvements.
In particular, I am not sure how much of a security risk it is to have the NordVPN password written out in plain text in a file.
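One partial mitigation (my suggestion, not part of the original setup) is to make the credentials file readable by root only, since the openvpn-client units run as root anyway:
chmod 600 /etc/openvpn/client/auth.txt
chown root:root /etc/openvpn/client/auth.txt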

How do I reset and put the zshrc file back to default?

/Users/ello/.zshrc:source:3: no such file or directory:
/Users/ello/Projects/config/env.sh
Ello-MacBook-Pro% /Users/ello/.zshrc:source
zsh: no such file or directory: /Users/ello/.zshrc:source
Ello-MacBook-Pro% /Users/ello/.zshrc
zsh: permission denied: /Users/ello/.zshrc
Ello-MacBook-Pro%
This has been happening after I foolishly edited the .zshrc file. All that remains in the file now, after attempting to reset the shell, is this:
# Created by newuser for 5.3.1
# Add env.sh
How do I undo everything, reinstall zsh, or remake the .zshrc file?
This is on macOS Sierra.
Edit: I reinstalled oh-my-zsh, leading to this message:
main() {
  # Use colors, but only if connected to a terminal, and that terminal
  # supports them.
  if which tput >/dev/null 2>&1; then
    ncolors=$(tput colors)
  fi
  if [ -t 1 ] && [ -n "$ncolors" ] && [ "$ncolors" -ge 8 ]; then
    RED="$(tput setaf 1)"
    GREEN="$(tput setaf 2)"
    YELLOW="$(tput setaf 3)"
    BLUE="$(tput setaf 4)"
    BOLD="$(tput bold)"
    NORMAL="$(tput sgr0)"
  else
    RED=""
    GREEN=""
    YELLOW=""
    BLUE=""
    BOLD=""
    NORMAL=""
  fi

  # Only enable exit-on-error after the non-critical colorization stuff,
  # which may fail on systems lacking tput or terminfo
  set -e

  CHECK_ZSH_INSTALLED=$(grep /zsh$ /etc/shells | wc -l)
  if [ ! $CHECK_ZSH_INSTALLED -ge 1 ]; then
    printf "${YELLOW}Zsh is not installed!${NORMAL} Please install zsh first!\n"
    exit
  fi
  unset CHECK_ZSH_INSTALLED

  if [ ! -n "$ZSH" ]; then
    ZSH=~/.oh-my-zsh
  fi

  if [ -d "$ZSH" ]; then
    printf "${YELLOW}You already have Oh My Zsh installed.${NORMAL}\n"
    printf "You'll need to remove $ZSH if you want to re-install.\n"
    exit
  fi

  # Prevent the cloned repository from having insecure permissions. Failing to do
  # so causes compinit() calls to fail with "command not found: compdef" errors
  # for users with insecure umasks (e.g., "002", allowing group writability). Note
  # that this will be ignored under Cygwin by default, as Windows ACLs take
  # precedence over umasks except for filesystems mounted with option "noacl".
  umask g-w,o-w

  printf "${BLUE}Cloning Oh My Zsh...${NORMAL}\n"
  hash git >/dev/null 2>&1 || {
    echo "Error: git is not installed"
    exit 1
  }
  # The Windows (MSYS) Git is not compatible with normal use on cygwin
  if [ "$OSTYPE" = cygwin ]; then
    if git --version | grep msysgit > /dev/null; then
      echo "Error: Windows/MSYS Git is not supported on Cygwin"
      echo "Error: Make sure the Cygwin git package is installed and is first on the path"
      exit 1
    fi
  fi
  env git clone --depth=1 https://github.com/robbyrussell/oh-my-zsh.git $ZSH || {
    printf "Error: git clone of oh-my-zsh repo failed\n"
    exit 1
  }

  printf "${BLUE}Looking for an existing zsh config...${NORMAL}\n"
  if [ -f ~/.zshrc ] || [ -h ~/.zshrc ]; then
    printf "${YELLOW}Found ~/.zshrc.${NORMAL} ${GREEN}Backing up to ~/.zshrc.pre-oh-my-zsh${NORMAL}\n";
    mv ~/.zshrc ~/.zshrc.pre-oh-my-zsh;
  fi
zsh itself does not have a default user configuration, so the "default" ~/.zshrc is actually no ~/.zshrc at all.
But as you tagged the question with oh-my-zsh, I assume that you want to restore the default oh-my-zsh configuration. For this it should be sufficient to copy templates/zshrc.zsh-template from your oh-my-zsh installation path, usually ~/.oh-my-zsh:
cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc
You may want to back up your current ~/.zshrc beforehand. Although it may have some problems now, you still might want to look up some settings once you have reverted to the default.
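For example (the backup file name is arbitrary):
cp ~/.zshrc ~/.zshrc.broken
cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc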
There is no such thing as a "default". The best you can do is check whether your system has /etc/skel/.zshrc. If yes, copy that into your home.
When you log in for the first time, your home is populated with everything from /etc/skel.
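In other words, something like:
[ -f /etc/skel/.zshrc ] && cp /etc/skel/.zshrc ~/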
My dumbass self decided to just put a crash command into the zsh config file. Now when I open the terminal, it just kernel panics. So I deleted the config file using rm -f ~/.zshrc*, and by default it just got replaced with another copy. So good luck.
You can copy the .zshrc template from
https://github.com/ohmyzsh/ohmyzsh/blob/master/templates/zshrc.zsh-template
and paste all of its content into ~/.zshrc.
[MS Windows friendly solution, if the terminal (vim editor) steps are confusing]
Actually, there is no default .zshrc file, but if you want to edit it like a simple notepad, do this:
Go to the /Users/ folder via the Finder app.
Press Shift + Command + . (dot) to show hidden system files.
Look for the .zshrc file and double-click it; it will open in a notepad (TextEdit.app) by default.
Clear whichever lines need to be removed.
Retype/edit the file as per the paths to be added.
Hit Command + S to save and exit.
Make zsh your default shell using this command:
chsh -s $(which zsh)

Unix script changing directory

I am in the root directory. I am creating a script that will take me from root > Home > Logs and, inside Logs, delete 3 log files.
The script will check if they exist; if YES, it will delete them.
I am facing some syntax problems; I'd appreciate your help.
Thanks
My code:
#!/bin/sh
cd Home/Log
if [ -e error1.log ]
then
rm error1
fi
if [ -e error2.log ]
then
rm error1
fi
if [ -e error3.log ]
then
rm error1
fi
When I execute the file in root using ./delete, here is what I am getting as errors:
$ ./delete
: No such file or directoryme/Log
./delete: line 14: syntax error near unexpected token `fi'
"I am in root directory"
When writing a script, it's almost always better not to assume things like that. If you know where the files are and it's not important that they're somewhere relative to what happens to be your current working directory, just name them.
Here are three ways you could accomplish what you want safely.
#!/bin/sh
dir=/Home/Log
rm -f ${dir}/error1.log ${dir}/error2.log ${dir}/error3.log
or
#!/bin/sh
dir=/Home/Log
rm -f ${dir}/error{1,2,3}.log
or
#!/bin/sh
set -e
cd /Home/Log && rm -f error1.log error2.log error3.log
For anything nontrivial, set -e is your friend. In your example nothing happens later in the script, but in general what you don't want is to keep going thinking you've changed directories when you haven't, and wind up scribbling somewhere you didn't intend. Many have lost much that way.
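If you'd rather not rely on set -e, the same protection can be spelled out explicitly (a sketch using the asker's /Home/Log path):
#!/bin/sh
cd /Home/Log || { echo "cannot cd to /Home/Log" >&2; exit 1; }
rm -f error1.log error2.log error3.log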

Shell script to sort & mv file based on date

I'm new to Unix. I have searched a lot of info but still don't know how to do this in bash.
What I know is to use the command ls -tr|xargs -i ksh -c "mv {} ../tmp/" to move files one by one.
Now I need to make a script that sorts all of these files by system date and moves the first 1000 oldest files into a directory.
Example files are like these:
KPK.AWQ07102011.66.6708.01
KPK.AWQ07102011.68.6708.01
KPK.EER07102011.561.8312.13
KPK.WWS07102011.806.3287.13
----------- This is the script that I have created -----------
if [ ! -d /app/RAID/Source_Files/test/testfolder ] then
echo "test directory does not exist!"
mkdir /app/RAID/Source_Files/calvin/testfolder
echo "unused_file directory created!"
fi
echo "Moving xx oldest files to test directory"
ls -tr /app/RAID/Source_Files/test/*.Z|head -1000|xargs -i ksh -c "mv {} /app/RAID/Source_Files/test/testfolder/"
The problems with this script are:
1) Unix prompts a syntax error at 'if'.
2) The move command works, but it creates a new file named testfolder instead of moving into the directory testfolder (testfolder has already been created in this path).
Can anyone give me a hand? Thanks.
Could this help?
mv `ls -tr|head -1000` ../tmp/
head -1000 takes the first 1000 lines of the previous command's output (here, the 1000 oldest files). The backticks allow the result of the ls and head commands to be used as arguments to mv.
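As an aside on the original script: a sketch of the same task that also shows the "; then" the original if statement was missing, and that survives spaces in filenames (though not newlines). The paths are the asker's own:
#!/bin/sh
dest=/app/RAID/Source_Files/test/testfolder
if [ ! -d "$dest" ]; then   # note the ';' (or a newline) before 'then'
    mkdir -p "$dest"
fi
# move the 1000 oldest files; read handles spaces that backticks would split on
ls -tr /app/RAID/Source_Files/test/*.Z | head -1000 | while IFS= read -r f; do
    mv "$f" "$dest"/
done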
