In Unix, is it possible to use just ONE command to list a directory only if it contains a given sub-directory?
For example, I would like to list the directory name if it contains a sub-directory called "division_A"
/data/data_file/form_100/division_A
/data/data_file/form_101/division_A
/data/data_file/form_102/division_A
The desired result would be
form_100
form_101
form_102
So far I can only achieve this with two commands:
cd /data/data_file
echo `ls -d */division_A 2> /dev/null | sed 's,/division_A,,g'`
So I would like to ask if anyone can do this with a single command.
Many thanks!
Using find:
find /data/data_file -type d -name division_A -exec sh -c 'basename "$(dirname "$1")"' _ {} \; 2> /dev/null
If you don't mind the weird .., you can just do:
$ ls -d /data/data_file/*/division_A/..
It will output something like /data/data_file/form_100/division_A/.., and you can access those paths like normal directories.
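To see the find variant in action, here is a self-contained sketch that builds a throwaway copy of the layout from the question (the form_102 branch deliberately lacks a division_A, so it is filtered out):

```shell
#!/bin/sh
# Build a scratch tree mirroring the question's layout.
tmp=$(mktemp -d)
mkdir -p "$tmp/data_file/form_100/division_A" \
         "$tmp/data_file/form_101/division_A" \
         "$tmp/data_file/form_102/other_dir"

# Print the parent directory name of every division_A found.
# Passing the path to sh as $1 keeps odd filenames safe.
find "$tmp/data_file" -type d -name division_A \
    -exec sh -c 'basename "$(dirname "$1")"' _ {} \;

rm -rf "$tmp"
```

This prints form_100 and form_101 (in no guaranteed order) and skips form_102.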
My use case is I want to search a collection of JARs for a specific class file. More specifically, I want to search recursively within a directory for all *.jar files, then list their contents, looking for a specific class file.
So this is what I have so far:
find . -name '*.jar' -type f -exec echo {} \; -exec jar tf {} \;
This will list the contents of all JAR files found recursively. I want to put a grep within the second exec, because I want the second exec to only print the contents of the JAR that grep matches.
If I just put a pipe and pipe it all to grep afterward, like:
find . -name '*.jar' -type f -exec echo {} \; -exec jar tf {} \; | grep "$CLASSNAME"
Then I lose the output of the first exec, which tells me where the class file is (the name of JAR file is likely to not match the class file name).
So if there was a way for the exec to run two commands, like:
-exec "jar tf {} | grep $CLASSNAME" \;
Then this would work. Using a grep $(...) in the exec command wouldn't work because I need the {} from the find to take the place of the file that was found.
Is this possible?
(Also I am open to other ways of doing this, but the command line is preferred.)
I find it difficult to execute multiple commands within find -exec, so I usually just grab the results with find and loop over them.
Maybe something like this might help?
find . -type f -name '*.jar' | while read -r jarfile; do echo "$jarfile"; jar tf "$jarfile"; done
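If any JAR paths could contain spaces or newlines, a NUL-delimited variant of the same loop is more robust (a sketch, assuming GNU find's -print0 and bash's read -d ''):

```shell
#!/usr/bin/env bash
# Same loop as above, but entries are NUL-delimited, so
# whitespace in filenames cannot split or mangle them.
find . -type f -name '*.jar' -print0 |
while IFS= read -r -d '' jarfile; do
    echo "$jarfile"
    jar tf "$jarfile"
done
```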
I figured it out - still using "one" command. What I was looking for was actually answered in the question How to use pipe within -exec in find. What I have to do is use a shell command with my exec. This ends up making the command look like:
find . -name '*.jar' -type f -exec echo {} \; -exec sh -c "jar tf {} | grep --color $CLASSNAME" \;
The --color will help the final result to stick out while the command is recursively listing all JAR files.
A couple points:
This assumes I have a $CLASSNAME set. The class name has to appear as it would in a JAR, not within a Java package. So com.ibm.json.java.JSONObject would become com/ibm/json/java/JSONObject.class.
This requires a JDK - that is where we get the jar command. The JDK must be accessible on the system path. If you have a JDK that is not on the system path, you can set an environment variable, such as JAR to point to the jar executable. I am running this from cygwin, so it turns out my jar installation is within the "Program Files" directory. The presence of a space breaks this, so I have to add these two commands:
export JAR=/cygdrive/c/Program\ Files/Java/jdk1.8.0_65/bin/jar
find . -name '*.jar' -type f -exec echo {} \; -exec sh -c "\"$JAR\" tf {} | grep --color $CLASSNAME" \;
The $JAR in the shell command must be escaped otherwise the terminal will not know what to do with the space in "Program Files".
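One further hardening note, offered as a sketch: substituting {} inside the quoted sh -c string breaks on paths containing spaces or quote characters. Passing the path and the class name as positional parameters sidesteps the escaping problem entirely ($CLASSNAME is still assumed to be set, as above):

```shell
# $1 becomes the JAR path, $2 the class name; no {} inside the
# command string, so no quoting surprises from odd filenames.
find . -name '*.jar' -type f \
    -exec sh -c 'echo "$1"; jar tf "$1" | grep --color "$2"' _ {} "$CLASSNAME" \;
```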
I had a directory with many files and sub-directories. To move only the sub-directories, I just learned you can use:
ls -d BASEDIR/*/ | xargs -n1 -I% mv % TARGETDIR/
I use the following:
$ mv ./*/ DirToMoveTo
For example:
Say I wanted to move all directories with "Old" in the name into a folder called "Old_Dirs" on /data.
The command would look like this:
mv ./*Old*/ /data/Old_Dirs/
Why not use find?
find . -mindepth 1 -maxdepth 1 -type d -exec mv '{}' /tmp \;
-mindepth 1 keeps find from matching the current directory (.) itself
-maxdepth 1 makes sure find won't go deeper than the current directory
-type d tells find to only find directories
-exec execute a command with the result of the find referenced by {}
In my opinion this is a cleaner solution, and it also works better than xargs when you have files with whitespace or tabs in their names.
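That said, xargs can also be made whitespace-safe by switching both sides of the pipe to NUL delimiters; a sketch, assuming GNU find and xargs, with BASEDIR and TARGETDIR as placeholders as in the earlier answers:

```shell
# -print0 emits NUL-delimited paths; xargs -0 consumes them,
# so directory names with spaces or tabs survive intact.
find "$BASEDIR" -mindepth 1 -maxdepth 1 -type d -print0 |
    xargs -0 -I{} mv {} "$TARGETDIR"/
```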
With a file structure like this:
/dir2move2
/dir
    /subdir1
    /subdir2
    index.js
To move only the sub directories and not the files you could just do:
mv ./dir/*/ ./dir2move2
Possible solution:
find BASEDIR/ -maxdepth 1 -mindepth 1 -type d -exec mv '{}' TARGETDIR \;
You can simply use the same command as for moving a file, but put a slash after the name of the subdirectory:
sudo mv */ path/to/destination
sudo mv subdir/ path/to/subdirectory
I have a bunch of directories that all contain a file /SubDir1/SubDir2/File, and I want to see the memory of each file under directories matching a certain pattern. How do I do this?
So far I have ls -l | grep "pattern*" to get a list of the directories, but I am stuck there.
You should use the find command:
find . -name 'pattern*' -printf '%s\t%p\n'
By "memory of each file" I guess you mean file size.
The find command will do a better job:
find . -name "pattern*" -exec du -b {} \;
This will print the size in bytes of everything matching pattern*, along with its path.
Bash Pitfall #1: Don't parse ls
You can use find or shell patterns:
for i in pattern*; do
    cat "$i"
done
Part of your problem is getting a list of all files under a set of matching directories; you can do that with a more elaborate pattern:
for i in pattern*/*; do
    if [ -f "$i" ]; then
        cat "$i"
    fi
done
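Since the question asks for sizes rather than contents, the same glob-loop idea can report a byte count per file; a sketch using the portable wc -c, with the SubDir1/SubDir2/File layout from the question:

```shell
# For each matching directory, print "<bytes>\t<path>" for its file.
for i in pattern*/SubDir1/SubDir2/File; do
    if [ -f "$i" ]; then
        printf '%s\t%s\n' "$(wc -c < "$i")" "$i"
    fi
done
```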
In addition to what SirDarius said, you can also use the -R option to ls to get a recursive listing.
Something like ls -lRh | grep "pattern" should do what you want.
Using find I create a file that contains all the files that use a specific key word:
find . -type f | xargs grep -l 'foo' > foo.txt
I want to take that list in foo.txt and maybe run some commands using that list, i.e. run an ls command on the list contained within the file.
You don't need xargs to create foo.txt. Just execute the command with -exec like this:
find . -type f -exec grep -l 'foo' {} \; > foo.txt
Then you can run ls against the file by looping through the file:
while IFS= read -r file
do
ls "$file"
done < foo.txt
Maybe it is a little ugly, but this also works:
ls $(cat foo.txt)
You can use xargs like this:
xargs ls < foo.txt
The advantage of xargs is that it passes many arguments to a single invocation of the command, which is more efficient than running the command once per argument in a loop.
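One caveat, as a sketch: plain xargs splits its input on any whitespace, so entries in foo.txt that contain spaces get broken into several arguments. GNU xargs can be told to split on newlines only:

```shell
# -d '\n' (GNU xargs): one line of foo.txt = one argument to ls.
xargs -d '\n' ls -ld < foo.txt
```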
UNIX:
How do you find the number of users who have a given file in their home directory?
That is, how can we check for the file in other users' home directories, and what command is required for that?
I tried the find command and the various options of who.
Assuming all the users' home directories are under /home and you're trying to find all users that have a file foo.txt, you can use this find command:
find /home -name "foo.txt" -exec bash -c 'IFS=/ read -ra arr <<< "$1"; echo "${arr[2]}"' _ {} \;
Assuming you have root privilege and assuming foo.txt is in the home directory, not a subdirectory thereof:
sudo find /home -maxdepth 2 -name "foo.txt" | wc -l
will give you the user count and
sudo find /home -maxdepth 2 -name "foo.txt" -printf "%u\n"
will give you a list of their names (assuming each foo.txt is owned by the owner of the home directory it is found in).
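If one user could own several matches (say, copies in more than one place within -maxdepth 2), deduplicating before counting gives the true number of users; a small sketch building on the same -printf "%u\n" output:

```shell
# Count distinct owners rather than matching files.
sudo find /home -maxdepth 2 -name "foo.txt" -printf "%u\n" | sort -u | wc -l
```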