I have about 2000 folders all named like this
dataul_-_ppfsefs
music_-_ppfsefs
fun [gghhhses]
pictures_-_ppfsefs
backup [gghhhses]
tempfiles_-_ppfsefs
trash_it_-_ppfsefs
There are two unwanted portions at the end of the names, "_-_ppfsefs" and " [gghhhses]". How do I rename them to look like this:
dataul
music
fun
pictures
backup
tempfiles
trash_it
Edit: I'm on Mac OS X (with Homebrew and MacPorts installed).
The following two commands should do it. Note that the pattern in ${dir/pattern/} is a glob, so the [ must be escaped to match literally; restricting each loop's glob to the folders that actually carry the suffix also avoids pointless "mv x x" errors:
for dir in *_-_ppfsefs ; do mv "${dir}" "${dir/_-_ppfsefs/}" ; done
for dir in *" [gghhhses]" ; do mv "${dir}" "${dir/ \[gghhhses]/}" ; done
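If you want to preview the renames before committing, here is a dry-run variant of the same idea (a sketch; swap echo for mv once the output looks right):
shopt -s nullglob                # so an unmatched pattern expands to nothing
for dir in *_-_ppfsefs *" [gghhhses]" ; do
  new="${dir/_-_ppfsefs/}"       # strip the first suffix, if present
  new="${new/ \[gghhhses]/}"     # strip the second suffix, if present
  echo mv -- "$dir" "$new"       # change echo to mv to actually rename
done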
I have a Bash shell script which takes three input files as arguments. I would like to package them all together, so I can place that package on any UNIX machine and run it.
There is an old technique, well known to grey-haired admins, for embedding a tar file in an executable script. ;)
First, create a script like the following and put it into a file named script:
#!/bin/bash
TAR_STARTS=$(awk '/^__TARMAN BEGINS__/ { print NR + 1; exit 0; }' "$0")
NAME_OF_SCRIPT="$(pwd)/$0"
tail -n +"$TAR_STARTS" "$NAME_OF_SCRIPT" | gunzip -c | tar -xvf -
# Insert commands to execute after untarring here, using relative
# pathnames (e.g. if there is a directory "bin" with a file "executable",
# insert the line: bin/executable)
exit
__TARMAN BEGINS__
Note: there must be no newline after the last __.
Of course, this script is derived from somewhere on the internet; it's not mine, and I just cannot remember where, so I can't give proper kudos.
Then create your tar file and append it to the end of the script. This is the reason why it's necessary that there is no newline after the __:
$ cat script test.tar.gz > selfexploding.sh
Now you can just try it
$ bash ./selfexploding.sh
tar: blocksize = 9
x testtar, 0 bytes, 0 tape blocks
x testtar/test2, 1024 bytes, 2 tape blocks
x testtar/test1, 1024 bytes, 2 tape blocks
You could of course put the name of a script from the unpacked archive before the exit; the path must be relative to the working directory at execution time. I don't know whether this works on AIX, but it works at least on Solaris 11.3, and since it only uses standard commands it should work everywhere. Beyond that, you could of course create native packages for Solaris and AIX instead.
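For example, assuming the tarball contains a launcher script bin/run.sh (a hypothetical name), the stub would hand control to it just before the exit:
tail -n +"$TAR_STARTS" "$NAME_OF_SCRIPT" | gunzip -c | tar -xvf -
bin/run.sh "$@"   # hypothetical launcher from the tarball; path is relative to $PWD
exit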
I am using BPXBATCH to concatenate an unknown number of files into one single file, then porting that file to the mainframe. The files are VB. Currently each file is appended right after the last byte of the previous file, but I would like each new file to start at the beginning of a new record in the single file.
What the data looks like:
File1BDT253748593725623.....File2BDT253748593725623.......
...............File3BDT253748593725623....
Here is what I would like it to look like:
File1BDT253748593725623.....
File2BDT253748593725623.......
...............
File3BDT253748593....
725623
Here is the BPXBATCH SH command I am using.
BPXBATCH SH cat /u/icm/comq/tmp1/rdq40.img.bin* > +
/u/icm/comq/tmp1/rdq40.img.all
Does anyone know a way to accomplish this?
You should use something like:
SH for f in /u/icm/comq/tmp1/rdq40.img.bin* ; do cat "$f" >> /u/icm/comq/tmp1/rdq40.img.all ; done
You can also copy your file to an MVS sequential dataset using the syntax "//'RDQ40.IMG.ALL'". Not all shell commands understand it, but cp and mv do.
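For instance, a copy step using that syntax could look like this (a sketch; the target dataset must already be allocated with suitable record attributes):
BPXBATCH SH cp /u/icm/comq/tmp1/rdq40.img.all "//'RDQ40.IMG.ALL'"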
My zip file contains several folders. After unzipping it, I want to loop over the extracted folders.
The condition inside the loop is as follows:
If a folder has an index file (a file containing some data), then and only then I want to run some processing (I know what this processing is). Otherwise we can ignore that folder.
The loop should then continue with the other folders, if there are any.
Thanks in advance.
Something like this?
(Note: I assume $destdir will only contain the zip file and its extracted contents!)
zipfile="/path/to/the/zipfile.zip"
destdir="/path/to/where/you/want/to/unzip"
indexfile="index.txt" # name of the index files
mkdir -p "$destdir" # make sure it exists (mkdir -p does not complain if it already does)
cd "$destdir" || { echo "Can not go into destdir=$destdir" ; exit 1 ; }
# at that point, we are inside $destdir : we can start to work:
unzip "$zipfile"
for i in ./*/ ; do # you could change ./*/ to ./*/*/ if the zip contains a master directory too
  cd "$i" && { # the && is important: you want to be sure you could enter that subdir!
    if [ -e ./"$indexfile" ]; then
      dosomething # you can define the function dosomething and use it here,
                  # or just place commands here
    fi
    cd - >/dev/null # safe: we started from $destdir, so this takes us back there
  }
done
Note: I iterate over ./*/ instead of */ because a directory name could begin with a leading -, which would make cd -something fail (cd would complain about unrecognised options). The ./ prefix avoids this: cd ./-something works.
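For completeness, dosomething could be defined as a placeholder like this (hypothetical; put it near the top of the script, before the loop, and replace the body with the real processing):
dosomething() {
  # placeholder: report the folder being processed and the size of its index file
  echo "processing folder: $PWD"
  wc -c "$indexfile"
}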
Hi, this is a simple question, but the solution eludes me at the moment.
I can find out the name of the folder I want to rename, and I know the command to rename a folder is mv.
So from the current directory, if I run
ls ~/relevant.directory.containing.directory.name.i.want.to.change
I see that the directory is called, say, lorem-ipsum-v1-3.
The directory name may change in the future, but it is the only directory in the directory:
~/relevant.directory.containing.directory.name.i.want.to.change
How do I programmatically change it to a specific name like correct-files?
I can do it manually by just doing something like
mv lorem-ipsum-v1-3 correct-files
but I want to start automating this so that I don't need to keep copying and pasting the directory name.
Any help would be appreciated.
Something like:
find . -depth -maxdepth 1 -type d | head -n 1 | xargs -I '{}' mv '{}' correct-files
should work fine as long as there is only one directory to move. (The -depth option makes find list the subdirectory before . itself, so head -n 1 picks up the subdirectory rather than the current directory.)
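If the directory name might contain spaces or other awkward characters, a variant that skips xargs entirely (a sketch, assuming GNU find for -maxdepth and -quit):
find . -maxdepth 1 -type d ! -name '.*' -exec mv '{}' correct-files \; -quit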
If you are absolutely certain that relevant.directory.containing.directory.name.i.want.to.change only contains the directory you want to rename, then you can simply use a wildcard:
mv ~/relevant.directory.containing.directory.name.i.want.to.change/*/ ~/relevant.directory.containing.directory.name.i.want.to.change/correct-files
This can also be simplified further, using bash brace expansion, to:
mv ~/relevant.directory.containing.directory.name.i.want.to.change/{*/,correct-files}
cd ~/relevant.directory.containing.directory.name.i.want.to.change
find . -type d -print | while read -r a ;
do
  mv "$a" correct-files ;
done
Caveats:
No error handling
There may be a way of reversing the parameters to mv so you can use xargs instead of a while loop, but that's not standard (as far as I'm aware)
Not parameterised
If there are any subdirectories it won't work. The depth parameters on the find command are (again, AFAIK) not standard; they do exist in the GNU versions but seem to be missing on Solaris.
Probably others...
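A slightly safer variant that addresses the first and third caveats (a sketch, assuming bash; the script name and error message are made up for illustration):
#!/bin/bash
# rename-only-subdir.sh (hypothetical): rename the sole subdirectory of a parent
parent="${1:?usage: rename-only-subdir.sh parent-dir new-name}"
newname="${2:?usage: rename-only-subdir.sh parent-dir new-name}"
cd "$parent" || exit 1
dirs=( */ ) # glob the immediate subdirectories only
if [ "${#dirs[@]}" -ne 1 ] || [ ! -d "${dirs[0]}" ]; then
  echo "expected exactly one subdirectory in $parent" >&2
  exit 1
fi
mv -- "${dirs[0]%/}" "$newname"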
I have a feeling that I already know the answer to this one, but I thought I'd check.
I have a number of different folders:
images_a/
images_b/
images_c/
Can I create some sort of symlink such that this new directory has the contents of all those directories? That is, this new "images_all" would contain all the files in images_a, images_b and images_c?
No. You would have to symbolically link all the individual files.
What you could do is create a job that runs periodically and basically removes all of the existing symbolic links in images_all, then re-creates the links for all files from the three other directories. It's a bit of a kludge, something like this:
rm -f images_all/*
# link via ../ so each link's target resolves correctly from inside images_all
for i in images_[abc]/* ; do ln -s "../$i" images_all/"$(basename "$i")" ; done
Note that, while this job is running, it may appear to other processes that the files have temporarily disappeared.
You will also need to watch out for the case where a single file name exists in two or more of the directories.
Having come back to this question after a while, it also occurs to me that you can minimise the time during which the files are unavailable.
If you create the links in a different directory and then switch it into place with two relatively fast mv operations, the window shrinks to almost nothing. Something like:
mkdir images_new
for i in images_[abc]/* ; do
  ln -s "../$i" images_new/"$(basename "$i")" # ../ keeps the target valid after the rename below
done
# These next two commands are the minimal-time switchover.
mv images_all images_old
mv images_new images_all
rm -rf images_old
I haven't tested that so anyone implementing it will have to confirm the suitability or otherwise.
You could try a unioning file system like unionfs!
http://www.filesystems.org/project-unionfs.html
http://aufs.sourceforge.net/
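For example, with aufs the merge could be mounted like this (a sketch, assuming the aufs module is installed; branches are searched left to right):
# mount images_a, images_b and images_c read-only as one union at images_all
sudo mount -t aufs -o br="$PWD/images_a"=ro:"$PWD/images_b"=ro:"$PWD/images_c"=ro none images_all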
To add on to paxdiablo's great answer, I think you could use cp -s
(-s or --symbolic-link),
which makes symbolic links instead of literally copying the files,
to maybe speed up or simplify the bulk adding of symlinks to the "merge" folder A of the files from folders B and C.
(I have not tested this, though.)
I can't recall off the top of my head, but I'm fairly sure GNU cp has an option to NOT overwrite existing files (-n, --no-clobber), so only symlinks for new files would be "cp -s"ed.
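A minimal sketch of that idea, assuming GNU cp (note that cp -s refuses to create relative symlinks into another directory, so the source paths are given absolutely via $PWD):
# -s makes symlinks instead of copies; -n skips files already present in images_all
cp -sn "$PWD"/images_a/* "$PWD"/images_b/* "$PWD"/images_c/* images_all/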