I'm using this script to reduce the size of some scanned PDF files that users upload via FTP. What I need is to shrink each PDF, to cut the time needed to process them (upload to S3, etc.).
The script:
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf
So my question is: how can I make this recursive? I need all the files in the folder to be reduced. If it can overwrite the original files, that would be perfect.
Thanks in advance.
Your script is not a script. It is a command line.
You could write a shell script that iterates over all *.pdf files in the folder and calls gs ... for each one.
Something like this:
#!/bin/bash
# Write each reduced PDF to the processed/ subdirectory,
# leaving the originals untouched.
mkdir -p processed
for f in *.pdf
do
    gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -dNOPAUSE -dQUIET -dBATCH -sOutputFile=processed/"$f" "$f"
done
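If you also need to descend into subdirectories and overwrite the originals, here is a rough sketch, assuming bash and a find that supports -print0. Note that gs can't safely write its output over its own input, so write to a temporary file and replace the original only when gs succeeds:
#!/bin/bash
# Recursively shrink every PDF in place, via a temporary file.
find . -type f -name '*.pdf' -print0 |
while IFS= read -r -d '' f; do
    gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen \
       -dNOPAUSE -dQUIET -dBATCH -sOutputFile="$f.tmp" "$f" \
    && mv "$f.tmp" "$f"
done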
I need the output of the "gsutil cp" command written to an output.txt file.
Using the ">" symbol, it just prints to the terminal and doesn't write to output.txt.
How can I do it? Since the command takes a few seconds to execute, I don't know how to deal with an asynchronous command's output.
Does anybody know how to get the generation of the file after a successful "cp"?
I know how to display it on the terminal (using the -v option of the cp command), but I need only the generation, as it prints so many lines of output.
As the documentation states [1], the gsutil cp command allows you to copy data between your local file system and the cloud, within the cloud, and between cloud storage providers.
If you want to save the output of a gsutil cp command to a text file, note that gsutil writes its progress messages to stderr rather than stdout, which is why a plain > captures nothing. Here is an example:
gsutil cp your-file-here gs://your-bucket-here 2> result.txt
Using a file called hello.txt and one of my buckets, the content of result.txt is
Copying file://hello.txt [Content-Type=text/plain]...
/ [1 files][ 92.0 B/ 92.0 B]
Operation completed over 1 objects/92.0 B.
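If what you actually need is just the generation after a successful copy, one option is to query the object afterwards. A sketch (I'm assuming the gsutil stat command here, whose output includes a Generation: line):
gsutil cp hello.txt gs://your-bucket-here 2> result.txt &&
gsutil stat gs://your-bucket-here/hello.txt | grep Generation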
[1] - https://cloud.google.com/storage/docs/gsutil/commands/cp
Can you help me with a shell script that live-tails every new file in a folder, reads the lines as they come in, greps for specific lines, and writes them to a file? For example:
Find the latest file in the folder,
then read that newest file (like cat would), grep out specific lines (like grep -A3 "some word"), and append those lines to another file (>> someotherfile).
To detect file-system changes, you might want to have a look at inotify.
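For example, a rough sketch using inotifywait from the inotify-tools package (the folder, search word, and output file below are placeholders, not names from your question):
#!/bin/bash
# Watch a folder; for each newly created file, follow it and append
# matching lines (plus 3 lines of trailing context) to an output file.
watch_dir=/path/to/folder
out_file=/path/to/someotherfile

inotifywait -m -e create --format '%f' "$watch_dir" |
while read -r newfile; do
    tail -f "$watch_dir/$newfile" | grep -A3 "some word" >> "$out_file" &
done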
There are double-compressed files with the extension xxx.zip.gz.
On gunzip, an xxx.zip file of 0.25 GB is created.
On unzip after the gunzip, the xxx.zip file's extension does not change.
Output of unzip :
Archive: xxx.zip
inflating: xxx.txt
also
echo $? shows 0
So even though the unzip command completed successfully, the file still keeps its .zip extension. Any help?
OS - SunOS 5.10
You're finding that xxx.txt is being created, right?
unzip and gunzip have different "philosophies" about dealing with their archives: gunzip gets rid of the .gz file, while unzip leaves its zip file in place. So in your case, unzip is working as designed.
I think the best you can do is
unzip -q xxx.zip && /bin/rm xxx.zip
This will only delete the zip file if unzip exits without error. The -q option makes unzip quiet, so you won't get the status messages you included above.
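Putting both steps together for your double-compressed files, a sketch (assuming the xxx.zip.gz naming from your question):
for f in *.zip.gz; do
    gunzip "$f" &&            # produces xxx.zip and removes xxx.zip.gz
    unzip -q "${f%.gz}" &&    # extracts xxx.txt; leaves xxx.zip behind
    /bin/rm "${f%.gz}"        # remove the zip only if unzip succeeded
done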
Edit
As you asked: when the zip file itself is 10+ GB in size, unzip does not succeed.
Assuming you are certain there is enough disk space to hold the expanded original file, then it's hard to say. How big is the expanded file? Over 2 GB? SunOS 5, I believe, used to have a 2 GB file-size limit, requiring 'large-file' support to be added to the kernel and utilities. I don't have access to a Sun machine anymore, so I can't confirm. I think you'll find places to look with apropos largefile (assuming your $MANPATH is set up correctly).
But the basic test for whether the unzip worked correctly would be something like:
if unzip "${file}" ; then
echo "clean unzip for ${file}, deleting the archive file" >&2
/bin/rm "${file}"
else
echo "error running unzip for ${file}, archive file remains in place" >&2
fi
(Or maybe I don't understand your use case.) Feel free to post another question showing ls -l xxx.zip.gz xxx.zip and other details to help reconstruct your expected workflow.
IHTH.
I have some files that are .py and others that are .txt. Instead of
cp *.py myDir/
cp *.txt myDir/
is there a way to perform this in one line on the command line?
Thanks
Try this:
cp *.{py,txt} myDir/
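Brace expansion is done by the shell before cp ever runs, so this is equivalent to passing both globs in one command. You can preview what the shell hands to cp (hypothetical file names shown) with:
echo cp *.{py,txt} myDir/
# prints, e.g.: cp a.py b.py notes.txt myDir/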
You can find more info about *nix wildcards here.
I have multiple files in a Unix directory.
The file names are as below:
EnvName.Fullbkp.schema_121212_1212_Part1.expd
EnvName.Fullbkp.schema_121212_1212_Part2.expd
EnvName.Fullbkp.schema_121212_1212_Part3.expd
Each of the above files contains a common line, like below. E.g.
EnvName.Fullbkp.schema_121212_1212_Part1.expd
contains the data:
Log=EnvName.Fullbkp.schema_10022012_0630_Part1.log
file=EnvName.Fullbkp.schema_10022012_0630_Part1.lst
EnvName.Fullbkp.schema_121212_1212_Part2.expd
contains the data:
Log=EnvName.Fullbkp.schema_10022012_0630_Part2.log
file=EnvName.Fullbkp.schema_10022012_0630_Part2.lst
I want to replace the 10022012_0630 in the EnvName.Fullbkp.schema_121212_1212_Part*.expd files with 22052013_1000 without actually opening those files. The changes should happen in all EnvName.Fullbkp.schema_121212_1212_Part*.expd files in the directory at once.
Assuming you mean you don't want to manually open the files:
sed -i 's/10022012_0630/22052013_1000/' EnvName.Fullbkp.schema_121212_1212_Part*.expd
Update: since the -i switch is not available in AIX's sed, but assuming you have ksh (or a compatible shell):
mkdir -p modified
for file in EnvName.Fullbkp.schema_121212_1212_Part*.expd; do
    sed 's/10022012_0630/22052013_1000/' "$file" > modified/"$file"
done
Now the modified files will be in the modified directory.
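Once you have spot-checked the results, you could move them back over the originals, e.g. (a sketch; only do this after verifying the output):
for file in EnvName.Fullbkp.schema_121212_1212_Part*.expd; do
    mv modified/"$file" "$file"
done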
It's some kind of extreme optimist who suggests sed -i on AIX.
It's a bit more likely that perl will be installed.
perl -pi -e 's/10022012_0630/22052013_1000/' EnvName.Fullbkp.schema_121212_1212_Part*.expd
If no perl, then you'll just have to do it like a Real Man:
# Edit each file in place: substitute on every line, then write and quit.
for i in EnvName.Fullbkp.schema_121212_1212_Part*.expd
do
ed -s "$i" <<'__EOF'
1,$s/10022012_0630/22052013_1000/g
wq
__EOF
done
Have some backups ready before trying these.
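A quick sanity check before and after any of these: list which files still contain the old timestamp.
grep -l '10022012_0630' EnvName.Fullbkp.schema_121212_1212_Part*.expd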