Using grep to find a file that contains a string - unix

My .htaccess file in my htdocs folder does not work. I tried to redirect to Google when a particular filename is accessed. I want to find out where the settings for my httpd.conf are, so I can enable mod_rewrite. I ran the following UNIX command to find out whether an httpd.conf file existed on my hard drive:
find * -name "httpd.conf"
The file does not exist. I am thinking that maybe there is another file that controls mod_rewrite. I want to see if "AllowOverride" exists in any directory. I entered the following UNIX command:
grep -r "AllowOverride" *
But the output is hard to read because it prints out so many folders, with messages like "Permission denied" or "No such file or directory". How do I get only the paths of the files that contain AllowOverride?

Many Unix and similar systems provide a locate(1) command that uses a database to speed finding individual files. Try this:
locate httpd.conf
Note, of course, that Apache configurations are stored in files of all sorts of names; I've seen apache.conf, httpd.conf, httpd2.conf, and then there's the giant pile of /etc/apache2/conf.d/ -- entire directory structures set aside for configuring Apache. Your distribution may vary.
Perhaps apachectl configtest will show the paths? (currently not installed on my machine, so I can't easily test.)
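If apachectl (or apache2ctl on Debian-style systems) is installed, passing it -V should dump the compile-time settings, which include the default configuration file; for example:
apachectl -V | grep -i server_config_file
Expect a line along the lines of -D SERVER_CONFIG_FILE="conf/httpd.conf", a path relative to the -D HTTPD_ROOT value printed by the same command.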

Try this command:
find / -name "httpd.conf" 2>&1 | grep -v "Permission denied"
The 2>&1 funnels stderr to stdout so that both can be piped into the grep utility. grep in turn prints any line that doesn't contain the string "Permission denied" (the -v flag negates/inverts the matching of the search string).
If you don't redirect stderr to stdout, anything written to stderr goes straight to the console and bypasses the rest of the pipeline.
You could extend the above command line by appending this:
| grep -v "No such file or directory"
if that string was coming up and you wanted to suppress it too.
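Put together, the extended command would be:
find / -name "httpd.conf" 2>&1 | grep -v "Permission denied" | grep -v "No such file or directory"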

Use the following:
find / -type f -exec grep -n "AllowOverride" {} \; -print 2>/dev/null
To scan files containing the "AllowOverride" string from the root, if you want to run the search in a particular directory, use the following instead:
find /path/to/directory -type f -exec grep -n "AllowOverride" {} \; -print 2>/dev/null
The output prints each matching line with its line number, followed by the path of the file that contains it (the trailing -print only fires for files where grep found a match).
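If you only want the paths of the matching files, as the original question asks, one variant is to swap grep's -n for -l and drop -print, letting grep print the file name itself:
find /path/to/directory -type f -exec grep -l "AllowOverride" {} \; 2>/dev/null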

Related

SCP issue with multiple files - UNIX

I am getting an error when copying multiple files. The command below copies only the first file and gives an error for the rest. Can someone please help me out?
Command:
scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Result:
user#host:~/scripts/OTA$ scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Password:
Password:
2018084session_event 100% |**********************************************************************************************************| 9765 KB 00:00
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9_2_3
Your command uses Command Substitution to generate a list of files. Your assumption is that there is some magic in the "source" notation for scp that would cause multiple members of the list generated by your find command to be assumed to live on $host, when in fact your command might expand into something like:
scp remotehost:/incoming/someoldfile anotheroldfile /incoming
Only the first file is being copied from $host, because none of the rest include $host: at the beginning of the path. They're not found in your local /incoming directory, hence the error.
Oh, and in addition, you haven't escaped the asterisk in the find command, so 2018* may expand to match files in the login directory for the user in question. I can't tell from here; it depends on your OS and shell configuration.
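For reference, quoting the pattern so the remote shell passes it to find untouched would look like this, though the multiple-file scp problem described above remains:
ssh -n $host "find /incoming -mmin -120 -name '2018*'"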
I should point out that you are providing yet another example of the classic Parsing LS problem. Special characters WILL break your command. The "better" solution usually offered for this problem tends to be to use a for loop, but that's not really what you're looking for. Instead, I'd recommend making a tar of the files you're looking for. Something like this might do:
ssh "$host" "find /incoming -mmin -120 -name 2018\* -exec tar -cf - {} \+" |
tar -xvf - -C /incoming
What does this do?
ssh runs a remote find command with your criteria.
find feeds the list of filenames (regardless of special characters) to a tar command as options.
The tar command sends its result to stdout (-f -).
That output is then piped into another tar running on your local machine, which extracts the stream.
If your tar doesn't support -C, you can either remove it and run a cd /incoming before the ssh, or you might be able to replace that pipe segment with a curly-braced command: { cd /incoming && tar -xvf -; }
The curly brace notation assumes a POSIX-like shell (bash, zsh, etc). The rest of this should probably work equally well in csh if that's what you're stuck with.
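Put together, that alternative would be something like:
ssh "$host" "find /incoming -mmin -120 -name 2018\* -exec tar -cf - {} \+" | { cd /incoming && tar -xvf -; }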
Limited warranty: Best Effort Only. Untested on animals or computers. Your mileage may vary. May contain nuts.
If this doesn't work for you, poke at it until it does.

Omit "Is a directory" results while using find command in Unix

I use the following command to find a string recursively within a directory structure.
find . -exec grep -l samplestring {} \;
But when I run the command within a large directory structure, there will be a long list of
grep: ./xxxx/xxxxx_yy/eee: Is a directory
grep: ./xxxx/xxxxx_yy/eee/local: Is a directory
grep: ./xxxx/xxxxx_yy/eee/lib: Is a directory
I want to omit those results and just get the names of the files containing the string. Can someone help?
grep -s or grep --no-messages
It is worth reading the portability notes in the GNU grep documentation if you are hoping to use this code multiple places, though:
-s
--no-messages
Suppress error messages about nonexistent or unreadable files. Portability note: unlike GNU grep, 7th Edition Unix grep did not conform to POSIX, because it lacked -q and its -s option behaved like GNU grep’s -q option. USG-style grep also lacked -q but its -s option behaved like GNU grep’s. Portable shell scripts should avoid both -q and -s and should redirect standard and error output to /dev/null instead. (-s is specified by POSIX.)
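Applied to the command in the question, that would be:
find . -exec grep -ls samplestring {} \;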
Whenever you say find ., the utility returns every element within your current directory structure: files, directories, links...
If you just want to find files, just say so!
find . -type f -exec grep -l samplestring {} \;
# ^^^^^^^
However, if all you want is every file containing the string, you can simply say:
grep -lR "samplestring"
Exclude directory warnings in grep with the --exclude-dir option:
grep --exclude-dir='*' 'search-term' *
Just look at the grep --help page:
--exclude-dir=PATTERN directories that match PATTERN will be skipped.

Formatting Find output before it's used in next command

I am batch uploading files to an FTP server with find and curl using this command:
find /path/to/target/folder -not -path '*/\.*' -type f -exec curl -u username:password --ftp-create-dirs -T {} ftp://ftp.myserver.com/public/{} \;
The problem is find is outputting paths like
/full/path/from/root/to/file.txt
so on the FTP server I get the file uploaded to
ftp.myserver.com/public/full/path/from/root/to/file.txt
instead of
ftp.myserver.com/public/relative/path/to/file.txt
The goal was to have all files and folders that are in the target folder get uploaded to the public folder, but this problem is destroying the file structure. Is there a way to edit the find output to trim the path before it gets fed into curl?
Not sure exactly what you want to end up with in your path, but this should give you an idea. The trick is to exec sh to allow you to modify the path and run a command.
find . -type f -exec sh -c 'joe=$(basename "$1"); echo "$joe"' sh {} \;
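Applied to the curl upload from the original question, the same trick might look like the sketch below. It is untested; the paths and credentials are the placeholders from the question, and the ${1#...} expansion (which strips the leading part of the path) is an assumption about where the relative path should start:
find /path/to/target/folder -not -path '*/\.*' -type f -exec sh -c '
  # $1 is the full path found by find; strip the target-folder prefix (assumed)
  rel=${1#/path/to/target/folder/}
  # upload to the public folder, keeping only the remaining relative path
  curl -u username:password --ftp-create-dirs -T "$1" "ftp://ftp.myserver.com/public/$rel"
' sh {} \;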

How to do multi-file find and replace from the Unix prompt

I want to replace 'localhost' with an actual IP like '1.1.1.1' in every file in a directory, including subfolders, and I want it to log the filenames it changed. I'm having a difficult time doing this; what command should I use?
grep -r --files-with-matches localhost * | tee changed_files | xargs sed -i 's/localhost/1.1.1.1/g'
The files changed will be logged to changed_files.
find /path/to/all/files -type f -exec sed -i 's/localhost/IP/g' {} \; should work. It should also give you an idea of how to make sed work on every file that find finds.
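A hybrid of the two answers (a sketch with the same simple-filename caveats as above) would be to let grep -l pick out only the files that actually contain the string, log them with tee, and hand them to sed:
find /path/to/all/files -type f -exec grep -l localhost {} \; | tee changed_files | xargs sed -i 's/localhost/1.1.1.1/g'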

Unix shell file copy flattening folder structure

On the UNIX bash shell (specifically Mac OS X Leopard) what would be the simplest way to copy every file having a specific extension from a folder hierarchy (including subdirectories) to the same destination folder (without subfolders)?
Obviously there is the problem of having duplicates in the source hierarchy. I wouldn't mind if they are overwritten.
Example: I need to copy every .txt file in the following hierarchy
/foo/a.txt
/foo/x.jpg
/foo/bar/a.txt
/foo/bar/c.jpg
/foo/bar/b.txt
To a folder named 'dest' and get:
/dest/a.txt
/dest/b.txt
In bash:
find /foo -iname '*.txt' -exec cp \{\} /dest/ \;
find will find all the files under the path /foo matching the wildcard *.txt, case insensitively (that's what -iname means). For each file, find will execute cp {} /dest/, with the found file's path in place of {}.
The only problem with Magnus' solution is that it forks off a new "cp" process for every file, which is not terribly efficient especially if there is a large number of files.
On Linux (or other systems with GNU coreutils) you can do:
find /foo -iname '*.txt' -print0 | xargs -0 cp -t /dest
(The -0 allows it to work when your filenames have weird characters -- like spaces -- in them.)
Unfortunately I think Macs come with BSD-style tools. Anyone know a "standard" equivalent to the "-t" switch?
The answers above don't allow for name collisions, since the asker didn't mind files being overwritten.
I do mind files being overwritten, so I came up with a different approach. Replacing each / in the path with - keeps the hierarchy in the names and puts all the files in one flat folder.
We use find to get the list of all files, then awk to create a mv command with the original filename and the modified filename, and then pass those commands to bash to be executed.
find ./from -type f | awk '{ str=$0; sub(/\.\//, "", str); gsub(/\//, "-", str); print "mv " $0 " ./to/" str }' | bash
where ./from and ./to are the directories to mv from and to. For example, ./from/bar/a.txt ends up as ./to/from-bar-a.txt.
If you really want to run just one command, why not cons one up and run it? Like so:
$ find /foo -name '*.txt' | xargs echo | sed -e 's/^/cp /' -e 's|$| /dest|' | bash -sx
But that won't matter too much performance-wise unless you do this a lot or have a ton of files. Be careful of name collisions, however. I noticed in testing that GNU cp at least warns of collisions:
cp: will not overwrite just-created `/dest/tubguide.tex' with `./texmf/tex/plain/tugboat/tubguide.tex'
I think the cleanest is:
$ find /foo -name '*.txt' | xargs -i cp {} /dest
Less syntax to remember than the -exec option.
As far as the man page for cp on a FreeBSD box goes, there's no need for a -t switch. cp will assume the last argument on the command line to be the target directory if more than two names are passed.
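If your xargs is the BSD flavour (as on macOS) and supports the -J flag, that lets you splice the collected file names in before the target directory in a single cp invocation; a sketch, untested here:
find /foo -iname '*.txt' -print0 | xargs -0 -J % cp % /dest/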
