Grep: could not find file - unix

In a Unix environment, I need to write a report to an x_out file, and at the end of the process the file needs to be removed. But it always throws the following error:
grep: can't open /XYZ/123/Tmp/x_out
rm: /XYZ/123/Tmp/x_out non-existent
But I can find the file x_out at the corresponding location, and I'm able to open and view its contents too. I have found that sometimes the file name changes, with some '~'-like characters appended to it. Is there a way to resolve this?
Edit: There is no '~' appended to it, but I suspect some unreadable characters like that may have been appended.
Edit: I have added the actual error here.
Edit: the command I used:
grep "Report_values" ${REPORTOUT}|cut -d "|" -f 6
rm ${REPORTOUT}

Well, there are two possibilities I can see off the top of my head. There are undoubtedly more, but the top of my head isn't a very big space :-)
The first is that the file doesn't exist, despite your assertions.
The second is that it does exist, but you're looking for it in the wrong place (for example, you've changed into a different directory).
If you place a line similar to:
( pwd ; cd ../.. ; pwd ; ls )
in your script before the grep/rm, it should tell you if either of those two possibilities is correct.
It will give your current directory, the directory you're looking in for the file and the files in that directory.
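Since the path in your commands comes from ${REPORTOUT}, it is also worth checking that the variable itself doesn't carry invisible characters. A quick sketch, assuming a POSIX od:
printf '%s' "${REPORTOUT}" | od -c
Any stray carriage return, trailing blank, or other control character in the variable will show up in the octal dump.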

Just check whether you have a non-printable/non-graphic character in the filename. Use the -Q or -q flag of ls to see it; the demo session below shows how it looks.
Flag descriptions from the ls man page:
-q, --hide-control-chars
print ? instead of non graphic characters
--show-control-chars
show non graphic characters as-is (default unless program is `ls' and output is a terminal)
-Q, --quote-name
enclose entry names in double quotes
--quoting-style=WORD
use quoting style WORD for entry names: literal, locale, shell, shell-always, c, escape
Demo Session
$ ls
demo.txt test.dat
$ ls -1
demo.txt
test.dat
$ cat demo.txt
cat: demo.txt: No such file or directory
$ rm demo.txt
rm: cannot remove `demo.txt': No such file or directory
$ ls -Q
"demo.txt " "test.dat"
$ ls -1Q
"demo.txt "
"test.dat"
$ rm "demo.txt "
$
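If your ls lacks those flags, a hedged alternative (cat -A assumes GNU coreutils) is to make line ends visible and then let a glob match the unknown trailing character:
$ ls | cat -A
demo.txt $
test.dat$
$ rm demo.txt?
Here cat -A prints '$' at each end of line, so the blank before it betrays the trailing space, and the '?' in the glob matches exactly one character, whatever it is.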

Related

Piping the results of *nix commands into Vim's set of open files

I have a folder resembling this structure:
nietzsche.txt
kant.org
buddha.txt
kierkegaard.org
aristotle.txt
plato.org
I wish to read the text files that have the *.org extension, so I use the command:
ls | grep .org
The above command neatly sends the following to stdout:
> kant.org
> kierkegaard.org
> plato.org
I would like to open the files listed above in vim all at once. With the example given above this is trivial; it would just mean typing out the list of files prefixed with "vim", for example:
vim kant.org kierkegaard.org plato.org
...but in my actual folder of articles there are several hundred plain-text files, with the *.org and the *.txt extensions. It isn't a matter of converting the org files to true plain text; it's about getting vim to use the output of other commands through pipes. In reality, the conditions for generating the "books-to-read" list are far more complicated (i.e. using date last read, author, date written, etc.), so a simple find-and-replace of org-to-txt wouldn't work. I currently have a bash script that generates the list and spits it to stdout.
How would I get vim to accept the output of a command like grep as a list of files to open immediately?
In this specific example, ls | grep .org is pointless since you can simply do:
$ vim *.org
As for the general case, you would use xargs (see man xargs) on Unix-like systems:
$ <command that generates a list of files> | xargs -o vim
or:
$ <command that generates a list of files> | xargs vim --not-a-term
Note that xargs' -o and Vim's --not-a-term are more or less the opposite of each other. The former ensures that xargs passes a proper tty to Vim, while the latter ensures that Vim doesn't complain if there is no attached tty.
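For example, with the list-generating script from the question (generate-list.sh is a hypothetical name standing in for whatever produces the list):
$ ./generate-list.sh | xargs -o vim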
You can use command mode completion inside vim:
:e *.org<C-a>
Read more at :h c_CTRL-A.
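Vim can also expand a shell command right on its own command line via backtick expansion (see :h backtick-expansion), which keeps everything in one step:
:args `ls *.org`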

Delete files from a list in a text file

I have a text file containing around 500 lines. Each line is an absolute path to a file. I want to delete these files using a script.
There's a suggestion here, but my files have spaces in them. They have been treated with \ to escape the space, but it still doesn't work. There is discussion on that thread about problems with whitespace, but no solutions.
I can't simply use the find command as that won't give me the precise result, I need to use the list (which was created by running find and editing out the discrepancies).
Edit: some context. I noticed that iTunes has re-downloaded and copied multiple songs and put them in the same directory as the original songs, e.g., inside a particular album directory is '01 This Song.aac' and '01 This Song 1.aac'.
I ran a find to produce a text file with all songs matching "* 1.*" to get songs ending in 1 but of any file type. I ran this in my iTunes Media/Music directory.
Some of these songs included in the file had the number 1 in but weren't actually duplicates (victims of circumstance), so I manually deleted them.
The file I am left with is around 500 lines, with songs that all include spaces in their filenames. Because it's an iTunes issue, there are just a few songs in one directory, then more in another, then another, and so on. I can't just run a script on a single directory; it has to work recursively and run only on the files named in my list.txt.
As you would expect, the trick is to get the quoting right:
while read line; do rm "$line"; done < filename
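A slightly more defensive variant (a sketch, assuming the list has one literal path per line with no backslash escapes left in it): IFS= keeps leading and trailing blanks intact, -r stops read from eating backslashes, and -- guards against paths that begin with a dash.
while IFS= read -r line; do rm -- "$line"; done < list.txt
Note that if the paths still contain the \ escapes, plain read (without -r) will actually collapse "\ " back into a space for you.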
To remove a file whose name has spaces, you can just wrap the whole path in quotes.
And to delete the list of files, I would recommend changing each line of your file so that it looks like an rm call. The fastest way is to use sed. So if your file is in the following format:
/home/path/file name.asd
/opt/some/string/another name.wasd
...
The one-liner for that would be something like this:
sed -e 's/^/rm -f "/' file.txt | sed -e 's/$/" ;/' > newfile.sh
The first sed replaces the beginning of each line with rm -f ", and the second sed replaces the end of each line with " ;.
It would produce a file with the following content:
rm -f "/home/path/file name.asd" ;
rm -f "/opt/some/string/another name.wasd" ;
...
So you can just execute this file as a bash script.
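The two passes can also be folded into a single sed expression using &, which stands for the whole matched line (a hedged equivalent of the pipeline above):
sed 's/.*/rm -f "&" ;/' file.txt > newfile.sh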

How to get the placeholder's value which is stored in a different file (same directory) using JSch exec

With the conditions:
I cannot use any XML parser tool, as I don't have permission (read-only)
My xmllint version does not support xpath, and I cannot update it (read-only)
I don't have xmlstarlet and cannot install it
I run my script using the Java JSch exec channel (I have to run it there)
So we have 3 files in a directory.
sample.xml
values1.properties
values2.properties
The contents of the files are as follows:
sample.xml
<block>
<name>Bob</name>
<address>USA</address>
<email>$BOB_EMAIL</email>
<phone>1234567</phone>
</block>
<block>
<name>Peter</name>
<address>France</address>
<cell>123123123</cell>
<drinks>Coke</drinks>
<car>$PETER_CAR</car>
<bike>Mountain bike</bike>
</block>
<block>
<name>George</name>
<hobby>$GEORGE_HOBBY</hobby>
<phone>$GEORGE_PHONE</phone>
</block>
values1.properties
JOE_EMAIL=joe#google.com
BOB_EMAIL=bob#hotshot.com
JACK_EMAIL=jack#jill.com
MARY_EMAIL=mary#rose.com
PETER_EMAIL=qwert1#abc.com
GEORGE_PHONE=Samsung
values2.properties
JOE_CAR=Honda
DAISY_CAR=Toyota
PETER_CAR=Mazda
TOM_CAR=Audi
BOB_CAR=Ferrari
GEORGE_HOBBY=Tennis
I use this script to get the xml block to be converted to a properties file format
NAME="Bob"
sed -n '/name>'${NAME}'/,/<\/block>/s/.*<\(.*\)>\(.*\)<.*/\1=\2/p' sample.xml
OUTPUT:
name=Bob
address=USA
email=$BOB_EMAIL
phone=1234567
How do I get the value of $BOB_EMAIL from values1.properties and values2.properties, assuming that I do not know which of the two (or possibly more) properties files it is located in? It should work for any name; if I entered
Name=Peter
in the script, it should get
name=Peter
address=France
cell=123123123
drinks=Coke
car=$PETER_CAR
bike=Mountain bike
and the token that will be searched for will be PETER_CAR.
EXPECTED OUTPUT (The user only needs to input 1 Name at a time and the output expected is one set of data in properties format with the $PLACEHOLDER replaced with the value from the properties file):
User Input: Name=Bob
name=Bob
address=USA
email=bob#hotshot.com
phone=1234567
User Input: Name=Peter
name=Peter
address=France
cell=123123123
drinks=Coke
car=Mazda
bike=Mountain bike
Ultimately, the script that I need has this logic:
for every word prefixed with $
in the result of sed -n '/name>'${name}'/,/<\/block>/s/.*<\(.*\)>\(.*\)<.*/\1=\2/p' sample.xml,
it will search for the value of that word in all of the properties files in that directory (or in specified properties files),
then replace the $-word with the value found in the properties file.
PARTIALLY WORKING ANSWER:
Walter A's answer works on the command line (PuTTY) but not in JSch exec; there I keep getting the error No value found for token 'var'.
The solution below looks through the properties files many times, so I think there is a faster solution to the problem.
The solution below will get you started, and with small files you might be happy with it.
# Question has a bash and a ksh tag; choose the shebang line you want.
# Make sure it is the first line, without a space or ^M after it.
#!/bin/ksh
#!/bin/bash
# Remove the next line (debugging) when all is working
# set -x
for name in Bob Peter; do
    sed -n '/name>'${name}'/,/<\/block>/s/.*<\(.*\)>\(.*\)<.*/\1=\2/p' sample.xml |
        while IFS="\$" read line var; do
            if [ -n "${var}" ]; then
                echo "${line}$(grep "^${var}=" values[12].properties | cut -d= -f2-)"
            else
                echo "${line}"
            fi
        done
    echo
done
EDIT: Commented out the two possible shebang lines and set -x, and added the output.
Result:
name=Bob
address=USA
email=bob#hotshot.com
phone=1234567
name=Peter
address=France
cell=123123123
drinks=Coke
car=Mazda
bike=Mountain bike
Another approach is to source the properties files and let the shell expand the placeholders via generated echo statements:
. values1.properties
. values2.properties
sed -n '/name>'${NAME}'/,/<\/block>/s/.*<\(.*\)>\(.*\)<.*/echo \1="\2"/p' sample.xml >output
. output
Dangerous, and not the way I would prefer to do it.
A sed based version:
$ temp_properties=`mktemp`
$ NAME=Bob
$ sed '/./{s/^/s|$/;s/=/|/;s/$/|g/}' values*.properties > $temp_properties
$ sed -n '/name>'${NAME}'/,/<\/block>/s/.*<\(.*\)>\(.*\)<.*/\1=\2/p' sample.xml | sed -f $temp_properties
Gives:
name=Bob
address=USA
email=bob#hotshot.com
phone=1234567
It does have script-injection issues. However, if you trust the values*.properties files and the contents of the NAME variable, you are good to go.
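If you only need to resolve individual tokens, a small helper is enough. A minimal sketch (lookup is a hypothetical function name; grep -h, which suppresses file-name prefixes, is common to GNU and BSD grep but not strictly POSIX):
lookup() {
    # Print the value of token $1 from any properties file; first match wins.
    grep -h "^$1=" values*.properties | head -n 1 | cut -d= -f2-
}
lookup BOB_EMAIL    # prints: bob#hotshot.com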

Unix: prepending a file without a dummy-file?

I do not want:
$ cat file > dummy; $ cat header dummy > file
I want something similar to the command below, but writing to the beginning of the file, not to the end:
$ cat header >> file
You can't write to the beginning of a file without rewriting the whole file. The first way you gave is the correct way to do this.
This is easy to do in sed if you can embed the header string directly in the command:
$ sed -i "1iheader1,header2,header3" file
Or if you really want to read it from a file, you can do so with bash's help:
$ sed -i "1i$(<header)" file
BEWARE that -i overwrites the input file with the results. If you want sed to make a backup, change it to -i.bak or similar, and of course always test first with sample data in a temp directory to be sure you understand what's going to happen before you apply it to your real data.
The whole dummy file thing is pretty annoying. Here's a 1-liner solution that I just tried out which seems to work.
echo "`cat header file`" > file
The backticks make the part inside the quotes execute first, so the shell doesn't complain about the output file also being an input file. It seems related to hhh's solution, but a bit shorter. If the files are really large this might cause problems, though, since the expanded contents have to be held in memory as part of the command line, and shells limit how long a command line can be; I'm not enough of an expert to know where that buffer lives or how large it can be.
You can't prepend to a file without reading all of its contents and writing a new file with your prepended text plus those contents. Think of a file in Unix as a stream of bytes: it's easy to append to the end of a stream, but there is no easy operation to "rewind" the stream and write before it. Even a seek to the beginning of the file will overwrite the beginning with any data you write.
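Spelled out with a temporary file, this is essentially the asker's first approach, but mktemp means no named dummy file lingers around (a sketch, assuming mktemp is available):
tmp=$(mktemp) &&
cat header file > "$tmp" &&
mv "$tmp" file
Note that mv replaces file with a new inode, which matters if anything else holds the file open or if you rely on its permissions or hard links.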
One possibility is to use a here-document:
cat > "prependedfile" << ENDENDEND
prepended line(s)
`cat "file"`
ENDENDEND
There may be a memory limitation to this trick.
Thanks to the right search term!
echo "include .headers.java\n$(cat fileObject.java )" > fileObject.java
Then with a file:
echo "$(cat .headers.java)\n\n$(cat fileObject.java )" > fileObject.java
(Whether \n is interpreted depends on your shell's echo; in bash you may need echo -e or printf instead.)
If you want to prepend "header" to "file", why not append "file" to "header"?
cat file >> header
Below is a simple C-shell attempt to solve this problem. This "prepend.sh" script takes two parameters:
$1 - the file containing the header text to prepend.
$2 - the original/target file to be modified.
#!/bin/csh
if (-e ./tmp.txt) then
    rm ./tmp.txt
endif
cat $1 > ./tmp.txt
cat $2 >> ./tmp.txt
mv $2 $2.bak
mv ./tmp.txt $2

Why did my use of the read command not do what I expected?

I wreaked some havoc on my computer when I played with the commands suggested by vezult [1]. I expected the one-liner to ask for the file names to be removed. However, it immediately removed my files in a folder:
> find ./ -type f | while read x; do rm "$x"; done
I expected it to wait for me to type something on stdin [2]. I cannot understand its action. How does the read command work, and where do you use it?
What happened there is that read reads from stdin. When you put it at the end of a pipe, it reads from that pipe.
So the output of your find becomes
file1
file2
and so on; read reads that and sets x successively to file1, then file2, and so your loop becomes
rm "file1"
rm "file2"
and sure enough, that rm's every file starting at the current directory ".".
A couple of hints.
You didn't need the "/".
It's better and safer to say
find . -type f
because should you happen to type ". /" (i.e., dot SPACE slash), find will start at the current directory and then also go looking from the root directory. That, given the right privileges, would delete every file on the computer. "." is already the name of a directory; you don't need to add the slash.
The find or rm commands can do this for you.
It sounds like what you wanted to do was go through all the files in all the directories starting at the current directory ".", and have it ASK if you want to delete it. You could do that with
find . -type f -exec rm -i {} \;
or
find . -type f -ok rm {} \;
and not need a loop at all. You can also do
rm -r -i *
and get nearly the same effect, except that it will try to delete directories too. If the directory is empty, that'll even work.
Another thought
Come to think of it, unless you have a LOT of files, you could also do
rm -i `find . -type f`
Now the find in backquotes will become a bunch of file names on the command line, and the '-i' interactive flag on rm will ask the yes or no question.
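Note that the backquote trick breaks on file names containing spaces, since the shell splits the expansion on whitespace. A hedged alternative for that case (assuming find and xargs support -print0/-0, as the GNU and BSD versions do):
find . -type f -print0 | xargs -0 -n 1 -p rm
The -p flag makes xargs prompt before each invocation, and -n 1 makes that one prompt per file.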
Charlie Martin gives you a good dissection and explanation of what went wrong with your specific example, but doesn't address the general question of:
When should you use the read command?
The answer to that is - when you want to read successive lines from some file (quite possibly the standard output of some previous sequence of commands in a pipeline), possibly splitting the lines into several separate variables. The splitting is done using the current value of '$IFS', which normally means on blanks and tabs (newlines don't count in this context; they separate lines). If there are multiple variables in the read command, then the first word goes into the first variable, the second into the second, ..., and the residue of the line into the last variable. If there's only one variable, the whole line goes into that variable.
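For instance, a minimal illustration of the splitting: the first two words land in their own variables, and the residue of the line lands in the last one.
$ echo "alpha beta gamma delta" | while read first second rest; do
>   echo "first=$first second=$second rest=$rest"
> done
first=alpha second=beta rest=gamma delta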
There are many uses. This is one of the simpler scripts I have that uses the split option:
#!/bin/ksh
#
# @(#)$Id: mkdbs.sh,v 1.4 2008/10/12 02:41:42 jleffler Exp $
#
# Create basic set of databases
MKDUAL=$HOME/bin/mkdual.sql
ELEMENTS=$HOME/src/sqltools/SQL/elements.sql
cat <<! |
mode_ansi with log mode ansi
logged with buffered log
unlogged
stores with buffered log
!
while read dbs logging
do
    if [ "$dbs" = "unlogged" ]
    then bw=""; cw=""
    else bw="-ebegin"; cw="-ecommit"
    fi
    sqlcmd -xe "create database $dbs $logging" \
           $bw -e "grant resource to public" -f $MKDUAL -f $ELEMENTS $cw
done
The cat command with a here-document has its output sent to a pipe, so the output goes into the while read dbs logging loop. The first word goes into $dbs and is the name of the (Informix) database I want to create. The remainder of the line is placed into $logging. The body of the loop deals with unlogged databases (where begin and commit do not work), then runs a program called sqlcmd (completely separate from the Microsoft newcomer of the same name; it has been around since about 1990) to create a database and populate it with some standard tables and data: a simulation of the Oracle 'dual' table, and a set of tables related to the 'table of elements'.
Other scripts that use the read command are bigger (by far), but generally read lines containing one or more file names and some other attributes of relevance, and then apply an appropriate transform to the files using the attributes.
Osiris JL: file * | grep 'sh.*script' | sed 's/:.*//' | xargs wgrep read
esqlcver:read version letter
jlss: while read directory
jlss: read x || exit
jlss: read x || exit
jlss: while read file type link owner group perms
jlss: read x || exit
jlss: while read file type link owner group perms
kb: while read size name
mkbod: while read directory
mkbod:while read dist comp
mkdbs:while read dbs logging
mkmsd:while read msdfile master
mknmd:while read gfile sfile version notes
publictimestamp:while read name type title
publictimestamp:while read name type title
Osiris JL:
'Osiris JL: ' is my command-line prompt; I ran this in my 'bin' directory. 'wgrep' is a variant of grep that only matches whole words (to avoid matching words like 'already'). This gives some indication of how I've used read.
The 'read x || exit' lines are for an interactive script that reads a response from standard input, but exits if the command gets EOF (for example, if standard input comes from /dev/null).
