I am using BPXBATCH to concatenate an unknown number of files into one single file, then porting that single file to the mainframe. The files are VB (variable-length records). Right now each file is appended directly after the last byte of the previous file; I would like each new file to start at the beginning of a new record in the single file.
What the data looks like now:
File1BDT253748593725623.....File2BDT253748593725623.......
...............File3BDT253748593725623....
Here is what I would like it to look like:
File1BDT253748593725623.....
File2BDT253748593725623.......
...............
File3BDT253748593....
725623
Here is the BPXBATCH SH command I am using.
BPXBATCH SH cat /u/icm/comq/tmp1/rdq40.img.bin* > +
/u/icm/comq/tmp1/rdq40.img.all
Does anyone know a way to accomplish this?
You should use something like:
SH for f in /u/icm/comq/tmp1/rdq40.img.bin* ; do cat "$f" >> /u/icm/comq/tmp1/rdq40.img.all ; done
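If the underlying problem is that the individual *.bin files do not end with a newline, a variation like this may help (a sketch only: it assumes your tail supports -c and that a newline is what will separate records once the file lands in the VB dataset):

for f in /u/icm/comq/tmp1/rdq40.img.bin*
do
  cat "$f" >> /u/icm/comq/tmp1/rdq40.img.all
  # append a newline unless the last byte of $f is already one
  if [ -n "$(tail -c 1 "$f")" ]; then
    printf '\n' >> /u/icm/comq/tmp1/rdq40.img.all
  fi
done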
You can also copy your file to an MVS sequential dataset with the syntax "//'RDQ40.IMG.ALL'". Not all shell commands understand it, but cp and mv do.
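For example (a sketch; the dataset attributes, record format and LRECL, are assumed to suit the data):

cp /u/icm/comq/tmp1/rdq40.img.all "//'RDQ40.IMG.ALL'"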
Related
Could someone steer me towards a resource for doing the following in Unix: I want to set a variable equal to a filename so that I can pass that variable/filename to a command line tool. I am trying to automate the process of running this command line tool.
My input files will always have the same string at the end of their unique names.
How can I get this filename by searching the directory for a string AND successfully input that variable into command line tool?
So the Unix code would look something like:
file1="find . -maxdepth 1 -name "string""
my command line tool --input $file1
thanks for your patience!
P.S. Only one file with that string will be in the directory at a time.
Instead of working with variables, you can use the output of the find command directly as a parameter on your command line:
my_command_line_tool --input "$(find . -maxdepth 1 -name "*string*")"
If you expect more than one file, you may remove the outer quotation marks. But this may break the command if you have files which match the string but contain special characters, like spaces or newlines, in the filename.
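If you do want the filename in a variable first, as in the question, command substitution does it (a sketch, assuming exactly one matching file, per the P.S.):

file1=$(find . -maxdepth 1 -name "*string*")
my_command_line_tool --input "$file1"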
I have a bunch of text files that start with the prefix r_, and I want to display the contents of all these files at once.
I tried to use cat with a command like this:
cat [f_*]
but this doesn't work as I expect.
Using cat properly:
$ cat r_*
As there is some mixup in the OP about the starting letter (f_ in the attempt, r_ in the description), go ahead and use cat [fr]_* ; the character class [fr] matches names beginning with either letter.
You can also use the tail or head commands; given more than one file, tail prints a ==> filename <== header before each file's contents:
tail -n +1 r_*
You can use a script something like this
path="<Your Path Here>"
find $path -name "r_*" | while read -r currentFile;
do
while read -r line;
do
HERE $line will be each line
done < $currentFile
done
Here find "$path" -name "r_*" finds all the files starting with r_ under the given path and iterates over them one by one.
The inner while read -r line loop reads each line of the current file so you can perform any action on it.
I use Unix fairly infrequently, so I apologize if this seems like an easy question. I am trying to loop through subdirectories and files, generate an output from the specific files that the loop grabs, then pipe that output to a file in another directory whose name is identifiable from the input file. So far I have:
for file in /home/sub_directory1/samples/SSTC*/
do
samtools depth -r chr9:218026635-21994999 < $file > /home/sub_directory_2/level_2/${file}_out
done
I was hoping to generate an output from file_1_novoalign.bam in sub_directory1/samples/SSTC*/ and to send that output to /home/sub_directory_2/level_2/ as an output file called file_1_novoalign_out.bam. However, it doesn't work; it says 'bash: /home/sub_directory_2/level_2/file_1_novoalign.bam.out: No such file or directory'.
I would ideally like to strip off the '_novoalign.bam' part of the output filename and replace it with '_out.txt'. I'm sure this will be easy for a regular Unix user, but I have searched and can't find a quick answer, and I don't really have time to spend ages searching. Thanks in advance for any suggestions building on the code I have so far; alternate suggestions are also welcome.
P.S. I don't have permission to write files to the directory containing the input folders.
Below is an explanation for filenames without spaces, keeping it simple.
When you want files, not directories, you should end your for-loop glob with * and not */.
When you only want to process files ending with _novoalign.bam, you should say so in the glob.
The easiest way to replace part of a string is sed.
A dollar sign anchors the match to the end of the string. The complete script becomes:
OUTDIR=/home/sub_directory_2/level_2
for file in /home/sub_directory1/samples/SSTC/*_novoalign.bam; do
    echo "Debug: Inputfile including path: ${file}"
    OUTPUTFILE=$(basename "$file" | sed -e 's/_novoalign\.bam$/_out.txt/')
    echo "Debug: Outputfile without path: ${OUTPUTFILE}"
    samtools depth -r chr9:218026635-21994999 < "${file}" > "${OUTDIR}/${OUTPUTFILE}"
done
Note 1:
You can use parameter expansion like file=${fullfile##*/} to get the filename without path, but you will forget the syntax in one hour.
Easier to remember are basename and dirname, but you still have to do some processing.
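For reference, a sketch of those expansions (the path is hypothetical, just to illustrate):

fullfile=/home/sub_directory1/samples/SSTC/sample1_novoalign.bam
file=${fullfile##*/}           # sample1_novoalign.bam (strip the path)
dir=${fullfile%/*}             # /home/sub_directory1/samples/SSTC (strip the filename)
base=${file%_novoalign.bam}    # sample1 (strip the suffix)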
Note 2:
When your script first changes to the input directory /home/sub_directory1/samples/SSTC, you can skip the basename call.
When all the files in the dir are to be processed, you can use the asterisk.
When all files have at most one underscore, you can use cut.
You might want to add some error handling. When you want the STDERR from samtools in your outputfile, add 2>&1.
These changes turn your script into:
OUTDIR=/home/sub_directory_2/level_2
cd /home/sub_directory1/samples/SSTC
for file in *; do
    echo "Debug: Inputfile: ${file}"
    OUTPUTFILE="$(basename "$file" | cut -d_ -f1)_out.txt"
    echo "Debug: Outputfile: ${OUTPUTFILE}"
    samtools depth -r chr9:218026635-21994999 < "${file}" > "${OUTDIR}/${OUTPUTFILE}" 2>&1
done
So I'm running a program that works, but the issue is my computer is not powerful enough to handle the task. I have code written in R, but I have access to a supercomputer that runs a Unix system (as one would expect).
The program is designed to read a .csv file, find every row with the unit ft3(monthly total) in the "Units" column, and select the value in the column before it. The files are charts that list things in multiple units.
Here is the program in R:
getwd()
setwd("/Users/youruserName/Desktop")
myData= read.table("yourFileName.csv", header=T, sep=",")
funData= subset(myData, units=="ft3(monthly total)", select=units:value)
write.csv(funData, file="funData.csv")
To convert it to a shell script program, I tried:
pwd
cd /Users/yourusername/Desktop
touch RunThisProgram
nano RunThisProgram
(((In nano, I wrote)))
if
grep -r yourFileName.csv ft3(monthly total)
cat > funData.csv
else
cat > nofun.csv
fi
control+x (((used control x to close nano)))
chmod -x RunThisProgram
./RunThisProgram
(((It runs for a while)))
We get a funData.csv output file, but that file is empty.
What am I doing wrong?
It isn't actually running, because there are a couple of problems with your script:
grep needs the pattern first, and quoted; -r is for recursing a directory...
if without a then
cat is called wrongly, so it is actually reading from stdin.
You really only need one line:
grep -F "ft3(monthly total)" yourFileName.csv > funData.csv
I'm writing an application that acts like a filter: it reads input from a file (stdin), processes it, and writes output to another file (stdout). The input file is completely read before the application starts to write the output file.
Since I'm using stdin and stdout, I can run it like this:
$ ./myprog <file1.txt >file2.txt
It works fine, but if I try to use the same file as input and output (that is, read from a file and write to the same file), like this:
$ ./myprog <file.txt >file.txt
it cleans file.txt before the program has the chance to read it.
Is there any way I can do something like this in a command line in Unix?
There's a sponge utility in moreutils package:
./myprog < file.txt | sponge file.txt
To quote the manual:
Sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before opening the output file. This allows constructing pipelines that read from and write to the same file.
The shell is what clobbers your output file, as it prepares the output filehandles before executing your program. There's no way, in a single shell command line, to make your program read the input before the shell clobbers the file.
You need to use two commands, either moving or copying the file before reading it:
mv file.txt filecopy.txt
./myprog < filecopy.txt > file.txt
Or else outputting to a copy and then replacing the original:
./myprog < file.txt > filecopy.txt
mv filecopy.txt file.txt
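A variation on the copy-then-replace approach uses mktemp, so the temporary name cannot collide, and only replaces the original if myprog succeeds (a sketch):

tmp=$(mktemp) &&
./myprog < file.txt > "$tmp" &&
mv "$tmp" file.txt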
If you can't do that, then you need to pass the filename to your program, which opens the file in read/write mode, and handles all the I/O internally.
./myprog file.txt # reads and writes according to its own rules
For a solution of a purely academic nature:
$ ( unlink file.txt && ./myprog >file.txt ) <file.txt
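Spelled out with comments, the same command works like this:

( unlink file.txt &&   # remove the directory entry; the data survives because
                       # it is already open as the subshell's stdin
  ./myprog >file.txt   # this therefore creates a brand-new file.txt
) <file.txt            # the redirection opens file.txt before the subshell body runs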
Possibly problematic side-effects are:
If ./myprog fails, you destroy your input. (Naturally...)
./myprog runs from a subshell (use { ... ; } instead of ( ... ) to avoid; see the sketch after this list).
file.txt becomes a new file with a new inode and file permissions.
You need +w permission on the directory housing file.txt.
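For reference, the brace-group variant mentioned in the list looks like this (a sketch; the brace group runs in the current shell rather than a subshell):

{ unlink file.txt && ./myprog >file.txt ; } <file.txt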