Transfer one line of text to another line in the same text file - unix

I would like to know if there is a particular command for transferring one line of text to another line in the same text file in Unix. Suppose I have Wow.txt and it contains:
The quick brown fox
jumps over the lazy dog
The dog is my pet
Oh yeah!
I would like to have an output of:
The quick brown fox jumps over the lazy dog
The dog is my pet
Oh yeah!
Is it possible? Thank you!

Try:
cat Wow.txt | tr -d '\n' > Wow-oneline.txt
Edit: or, for a slightly cleaner, more correct way:
cat Wow.txt | tr -s '\n' | tr '\n' ' ' > Wow-oneline.txt
Edit x2:
If you're going to be doing any significant file processing, I would recommend reading up on sed and/or awk.

awk '!/^ *$/{print}' < Wow.txt | fmt
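If the goal is to join only the first two lines and leave the rest untouched (as in the example output), a small sed sketch along these lines should work too (shown for GNU sed; BSD sed may want slightly different punctuation inside the braces):
# on line 1, pull line 2 into the pattern space, then turn the embedded newline into a space
sed '1{N;s/\n/ /;}' Wow.txt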

Related

How to read from a file which starts with a particular string using unix

I have a file which has one particular string which never repeats, and all my data starts from this string. My requirement is to read all the data beneath this string (say [string-start]) and redirect the data read into another file.
#Krishna Kanth: This command may be helpful. Try it:
sed -e 's/^.*\(search-string\)/\1/' input-file > output-file
#Landys:
I tried using the command below but got a parsing error:
$ sed -ne 'H;1h;${g;s/.*\START-OF-DATA//g;p}' < file.txt > file.out
sed: 0602-404 Function H;1h;${g;s/.*\START-OF-DATA//g;p} cannot be parsed.
Please suggest!!!
It's easy to achieve this with sed in one line.
sed -ne 'H;1h;${g;s/.*string-start//;p}' input.txt > output.txt
Here's the decomposition.
-ne - run the following script (-e) in quiet mode (-n suppresses automatic printing).
h/H - copy/append the pattern space to the hold space; H appends a \n to the hold space first.
H;1h - used to collect all the text into the hold space; the address 1 matches the first line.
s/.../.../ - replaces everything up to and including string-start with nothing, i.e. deletes it.
p - print the current pattern space.
${...} - run the enclosed commands only on the last line ($ matches the last line).
For example, the input.txt is as follows.
abc
def
ghistring-startjkl
mno
pqr
The output.txt will be as follows.
jkl
mno
pqr
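For comparison, an awk sketch (not part of the answer above) that should produce the same output: sub() deletes everything up to and including string-start on the matching line, and the flag p marks all following lines for printing.
awk 'p { print } sub(/.*string-start/, "") { p = 1; print }' input.txt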

In zsh, how to expand a file of glob patterns?

Given a file globs.txt containing lines of glob patterns, what's a nice way to expand them all into one line?
I.e., given
$ cat globs.txt
a/b*
c/d*
and
$ ls prefix/*
a:
brunch lunch
c:
dance lance
x:
banana
I want to get prefix/a/brunch prefix/c/dance.
My current approach is:
(for line in $(cat globs.txt); do g=prefix/$line; print $~g; done) | tr "\n" " "
You are right, there is a simpler way ;-)
echo prefix/${^~$(<globs.txt)}
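A slightly more explicit variant, as a sketch (the array name patterns is arbitrary): read the file into an array split on newlines, then let ${^~...} distribute the prefix and glob each element. Splitting on newlines also keeps patterns that contain spaces intact.
patterns=(${(f)"$(<globs.txt)"})   # one glob pattern per array element
print -r -- prefix/${^~patterns}   # -> prefix/a/brunch prefix/c/dance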

How can I delete the second word of every line of top(1) output?

I have a formatted list of processes (top output) and I'd like to remove unnecessary information. How can I remove, for example, the second word of each line along with the whitespace after it?
Example:
1 a hello
2 b hi
3 c ahoi
I'd like to delete a, b and c.
You can use the cut command.
cut -d' ' -f2 --complement file
--complement inverts the selection: with -f2 the second field is chosen, and with --complement it prints all fields except the second. This is useful when you have a variable number of fields.
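For example, on the sample input above (assuming the words are single-space separated, and a GNU cut that provides --complement):
$ cut -d' ' -f2 --complement file
1 hello
2 hi
3 ahoi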
GNU cut has the --complement option. If --complement is not available, the following does the same:
cut -d' ' -f1,3- file
Meaning: print the first field, then print from the 3rd field to the end, i.e. exclude the second field and print the rest.
Edit:
If you prefer awk you can do: awk '{$2=""; print $0}' file
This sets the second field to empty and prints the whole line (line by line).
Using sed to substitute the second column:
sed -r 's/(\w+\s+)\w+\s+(.*)/\1\2/' file
1 hello
2 hi
3 ahoi
Explanation:
(\w+\s+) # Capture the first word and trailing whitespace
\w+\s+ # Match the second word and trailing whitespace
(.*) # Capture everything else on the line
\1\2 # Replace with the captured groups
Notes: use the -i option to save the results back to the file; -r enables extended regular expressions (check the man page, as it may be -E depending on the implementation).
Or use awk to only print the specified columns:
$ awk '{print $1, $3}' file
1 hello
2 hi
3 ahoi
Both solutions have their merits: the awk solution is nice for a small, fixed number of columns, but you need to use a temp file to store the changes (awk '{print $1, $3}' file > tmp; mv tmp file), whereas the sed solution is more flexible, as extra columns aren't an issue and the -i option does the edit in place.
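If the awk in use happens to be GNU awk 4.1 or newer, its inplace extension can avoid the temp file; this is a sketch, not part of the answers above:
# -i inplace loads gawk's in-place editing extension, so print output replaces the file
gawk -i inplace '{print $1, $3}' file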
One way using sed:
sed 's/ [^ ]*//' file
Results:
1 hello
2 hi
3 ahoi
Using Bash:
$ while read f1 f2 f3
> do
> echo $f1 $f3
> done < file
1 hello
2 hi
3 ahoi
This might work for you (GNU sed):
sed -r 's/\S+\s+//2' file
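On the sample input this gives (assuming GNU sed, where -r, \S and \s are available; the trailing 2 tells s/// to replace the second match on each line):
$ sed -r 's/\S+\s+//2' file
1 hello
2 hi
3 ahoi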

problems with cut (unix)

I've got a strange problem with cut.
I wrote a script, and in it I have this line:
... | cut -d" " -f3,4 >! out
cut receives this data (I checked it with echo):
James James 033333333 0 0.00
but I receive empty lines in out. Can somebody explain why?
You need to compress out the sequences of spaces, so that each string of spaces is replaced by a single space. The tr command's -s (squeeze) option is perfect for this:
$ ... | tr -s " " | cut -d" " -f3,4 >! out
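For example (the doubled spaces here are only an assumption standing in for whatever run of spaces the real data contains):
$ echo 'James  James  033333333  0  0.00' | tr -s " " | cut -d" " -f3,4
033333333 0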
If you want fields from a text file, awk is almost always the answer:
... | awk '{print $3" "$4}'
For example:
$ echo 'James James 033333333 0 0.00' | cut -d" " -f3,4
$ echo 'James James 033333333 0 0.00' | awk '{print $3" "$4}'
033333333 0
cut doesn't treat a run of spaces as a single delimiter, so it sees empty fields between consecutive spaces.
Do you get empty lines when you leave out the >! out part? I.e., are you targeting the correct fields?
If your input uses fixed-width spacing, you might want to use cut -c 4-10,15-20 | tr -d ' ' to extract character positions 4-10 and 15-20 and remove the spaces from them.
... | grep -o "[^ ]*"
will extract the fields, each on a separate line. Then you might head/tail them. Not sure about putting them on the same line again.
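One way to put them back on a single line, sketched for a single line of input: pick the wanted fields by line number and rejoin them with paste.
# sed -n '3p;4p' keeps the 3rd and 4th fields; paste -s glues them back together with a space
... | grep -o "[^ ]*" | sed -n '3p;4p' | paste -s -d' ' -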

Forcing the order of output fields from cut command

I want to do something like this:
cat abcd.txt | cut -f 2,1
and I want the order to be 2 and then 1 in the output. On the machine I am testing (FreeBSD 6), this is not happening (it's printing in 1,2 order). Can you tell me how to do this?
I know I can always write a shell script to do this reversing, but I am looking for something using the 'cut' command options.
I think I am using version 5.2.1 of coreutils containing cut.
This can't be done using cut. According to the man page:
Selected input is written in the same order that it is read, and is
written exactly once.
Patching cut has been proposed many times, but even complete patches have been rejected.
Instead, you can do it using awk, like this:
awk '{print($2,"\t",$1)}' abcd.txt
Replace the \t with whatever you're using as field separator.
Lars' answer was great, but I found an even better one. The issue with it is that awk's default field splitting treats consecutive tabs (\t\t) as a single separator, so empty columns are lost. To fix this, use the following:
awk -v OFS="  " -F"\t" '{print $2, $1}' abcd.txt
Where:
-F"\t" is what to cut on exactly (tabs).
-v OFS=" " is what to seperate with (two spaces)
Example:
echo 'A\tB\t\tD' | awk -v OFS="  " -F"\t" '{print $2, $4, $1, $3}'
This outputs:
B  D  A
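If staying close to cut matters, one workaround (a sketch, assuming tab-separated input and a shell with process substitution, such as bash or zsh) is to cut each field separately and paste them back in the desired order:
# emits field 2, a tab, then field 1 for every line of abcd.txt
paste <(cut -f2 abcd.txt) <(cut -f1 abcd.txt)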
