zip files of a certain date to zip file with batch - datetime

I'm trying to write a batch file that runs every day. The batch file needs to zip all the files with the current system date.
I have the following for now, and it works, but it zips all the files.
@echo off
"c:\Program Files\7-Zip\7z.exe" a -tzip "C:\test\location_zip\zip_file_%date:~6,4%-%date:~3,2%-%date:~0,2%" "C:\test\*.*"
pause
Can someone please help me zip only the files created on the current system date instead of all of them?
Many thanks,

I do not have a 7-Zip solution, but with WinRAR this can be achieved very easily:
"%ProgramFiles%\WinRAR\WinRar.exe" a -ac -afzip -agYYYY-MM-DD -ao -cfg- -ed -ep1 -inul -m5 -r -tn24h "C:\test\location_zip\zip_file_" "C:\test\*.*"
a ... add files to archive.
-ac ... clear archive attribute after compression.
-afzip ... create a ZIP archive instead of a RAR archive.
-agYYYY-MM-DD ... append current date to archive file name in format YYYY-MM-DD.
-ao ... add only files with archive attribute set.
-cfg- ... ignore configuration file and RAR environment variable.
-ed ... do not add empty directories.
-ep1 ... exclude base directory from names, i.e. C:\test\
-inul ... disable all error messages.
-m5 ... use best compression method.
-r ... recurse subdirectories.
-tn24h ... add only files with a last modification date within last 24 hours (newer than 24 hours).
A single WinRAR license is very cheap and includes unlimited upgrades. The few dollars/euros for a WinRAR license are well invested considering the time needed to code a batch file for archiving purposes with free 7-Zip, which lacks features WinRAR has had for more than 10 years. (I bought my WinRAR license in 1999 and have never needed any other compression tool.)
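If you want to stay with free 7-Zip, one commonly used workaround (an untested sketch, not part of the answer above; the staging folder is an assumption) is to stage recently modified files with robocopy and zip the staging folder:
@echo off
rem Stage files modified within the last day; /MAXAGE:1 excludes older files.
set "STAGE=%TEMP%\zip_stage"
robocopy "C:\test" "%STAGE%" *.* /MAXAGE:1
rem Note: the %date% slicing below is locale-dependent, as in the question.
"C:\Program Files\7-Zip\7z.exe" a -tzip "C:\test\location_zip\zip_file_%date:~6,4%-%date:~3,2%-%date:~0,2%.zip" "%STAGE%\*"
rmdir /s /q "%STAGE%"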

Related

zipping the files in unix having extension after

I have log files in the directory /opt/app/logs, named:
coa.log.1
uoa.log.2
erete-rere.log.1
Now I am looking for a Unix command that will zip the files having extension .log.1 or .log.2, or .log. followed by anything.
Please advise which Unix command will zip these files.
You can compress to a lot of formats in Unix. The most common is tar.gz. If you are looking for .zip specifically, there is a zip command: zip <zipname> <files>
To pass the file list you may use the * wildcard, indicating any name. For example, *.log.*, as suggested by glen, will match every file that contains .log. in its name.
You probably want something like this:
zip backup_logs.zip /opt/app/logs/*.log.*
This will zip all files in the folder that match the pattern and create a zip file in your current directory.
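If a gzipped tar is acceptable instead of .zip, an equivalent one-liner for the same pattern (assuming GNU tar; the archive name is illustrative) would be:
tar -czvf backup_logs.tar.gz /opt/app/logs/*.log.*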

Can a pre-commit Git hook zip a directory and add it to the repository?

I'm doing development on a Wordpress plugin. My development directory contains a lot of development-specific stuff (e.g. Grunt files, Sass files, the git repository itself, etc.).
Obviously, I don't want to distribute this folder containing all of those development files; people don't want a few MB of Grunt files when they download my Wordpress plugin.
Up until now, though, my "release" process has been cumbersome:
Commit the Git changes
Zip the entire folder
Open the zip file and delete the .git folder, grunt files, and all the other development-specific files
Release the new zip
I don't know the best way to accomplish this, but I'm very vaguely familiar with Git hooks, and I had this thought: could I set up a Git hook that would zip ONLY the needed production files into a ZIP file and store it with the repo? That way, every time I commit it would automatically create a new release ZIP.
Is that possible? If so, could someone point me in the right direction?
Oh also, I'm on Windows (・_・;). So I'm hoping that there's a way to do it on Windows.
I can't speak for Windows, but:
It's technically possible to do that sort of thing in a pre-commit hook.
Don't.
A pre-commit hook that modifies "what you will commit" is annoying (if nothing else, it violates the "rule of least astonishment", where your version control system simply stores the versions you tell it to store). Apart from that, storing large pre-compressed binaries interferes with git's attempt to save space in pack files, and will cause rapid repository bloat, poor performance, running out of memory, and so on. A ZIP-archive is a pre-compressed binary and hence will behave badly.
In general, a more reasonable "hook-y" way to handle releases is to set up a "release server" to which you push new releases, and have the push trigger the archive-generation. (There are ways to do this without a separate server / repository, and you can do it in a more pull-style fashion, but the push-style is easy to illustrate.)
[Edit: I had originally considered git archive but did not realize you could get it to exclude files conveniently, so wrote up the below instead. So, jthill's answer is better and should be one's first resort. I'll leave this in place as an alternative for some case where for some reason, git archive might not do.]
For instance, here's a server-side post-receive hook code fragment that checks whether a branch whose name matches release* has been pushed-to, and if so, invokes a shell function with the name of the branch (once for each such branch):
#! /bin/sh
NULL_SHA1=0000000000000000000000000000000000000000

scan()
{
    local oldsha newsha fullref shortref
    local optype reftype
    while read oldsha newsha fullref; do
        case $oldsha,$newsha in
        $NULL_SHA1,*) optype=create;;
        *,$NULL_SHA1) optype=delete;;
        *) optype=update;;
        esac
        case $fullref in
        refs/heads/*)
            reftype=branch
            shortref=${fullref#refs/heads/}
            ;;
        *)
            reftype=other
            shortref=$fullref
            ;;
        esac
        case $optype,$reftype,$shortref in
        create,branch,release*|update,branch,release*)
            do_release $shortref;;
        esac
    done
}

scan
(much of the above is boilerplate, which I have stripped down to essentials). You would have to write the do_release function, which might resemble (totally untested):
do_release()
{
    local tmpdir=/tmp/build.$$   # or use mktemp -d
    # $tmpdir/index is git's index; $tmpdir/t is the work tree
    trap "rm -rf $tmpdir; exit 1" 1 2 3 15
    rm -rf $tmpdir
    mkdir -p $tmpdir/t
    GIT_INDEX_FILE=$tmpdir/index GIT_WORK_TREE=$tmpdir/t git checkout -f $1
    # now clean out grunt files and make zip archive
    (cd $tmpdir/t; rm -rf grunt; zip -r ../t.zip .)
    # put completed zip archive in export location, name it
    # based on the branch name
    mv $tmpdir/t.zip /place/where/zip/files/live/$1.zip
    # clean up temp dir now, and no longer need to clean up
    # on signal related abort
    rm -rf $tmpdir
    trap - 1 2 3 15
}
There's actually a command for this, git archive.
git archive master -o wizzo-v1.13.0.zip
See the EXAMPLES section, you can select paths, add prefixes to them, define custom postprocessing by output extension, and some more minor tweaks.
Also see the ATTRIBUTES section: you can give files -- arbitrary patterns, really -- an export-ignore attribute to exclude them from archives.
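For example, a minimal sketch (the file names are illustrative for a Grunt/Sass plugin layout), placed in a .gitattributes file at the repository root:
# keep development-only files out of `git archive` output
.gitattributes export-ignore
Gruntfile.js   export-ignore
sass           export-ignore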
It's got a bunch more handy-dandies: you can get archives from remote repos, expand arbitrary git log --pretty=format: placeholders, and more. The git manpages are definitely worth whatever time you can invest in them.
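For instance, archiving straight from a remote (a sketch; the repository URL is a placeholder, and the remote must allow the upload-archive service):
git archive --remote=git@example.com:me/wizzo.git -o wizzo-v1.13.0.zip master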

An appendable compressed archive

I have a requirement to maintain a compressed archive of log files. The log filenames are unique and the archive, once expanded, is simply one directory containing all the log files.
The current solution isn't scaling well, since it involves a gzipped tar file: every time a log file is added, the entire archive must be decompressed, the file added, and everything re-gzipped.
Is there a Unix archive tool that can add to a compressed archive without completely expanding and re-compressing? Or can gzip perform this, given the right combination of arguments?
I'm using zip -Zb for that (appending text logs incrementally to compressed archive):
fast append (index is at the end of archive, efficient to update)
-Zb uses bzip2 compression method instead of deflate. In 2018 this seems safe to use (you'll need a reasonably modern unzip -- note some tools do assume deflate when they see a zip file, so YMMV)
7z was a good candidate: compression ratio is vastly better than zip when you compress all files in the same operation. But when you append files one by one to the archive (incremental appending), compression ratio is only marginally better than standard zip, and similar to zip -Zb. So for now I'm sticking with zip -Zb.
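A minimal usage sketch (archive and file names are illustrative); each run appends the new log to the existing archive, compressing it with bzip2 and rewriting only the trailing index:
zip -Zb logs.zip app-2018-06-01.log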
To clarify what happens and why having the index at the end is useful for "appendable" archive format, with entries compressed individually:
Before:
############## ########### ################# #
[foo1.png    ] [foo2.png ] [foo3.png       ] ^
                                             |
                                           index
After:
############## ########### ################# ########### #
[foo1.png    ] [foo2.png ] [foo3.png       ] [foo4.png ] ^
                                                         |
                                                 new index
So this is not fopen in append mode, but presumably fopen in write mode, then fseek, then write (that's my mental model of it, someone let me know if this is wrong). I'm not 100% certain that it would be so simple in reality, it might depend on OS and file system (e.g. a file system with snapshots might have a very different opinion about how to deal with small writes at the end of a file… huge "YMMV" here 🤷🏻‍♂️)
It's rather easy to have an appendable archive of compressed files (not the same as an appendable compressed archive, though).
tar has an option to append files to the end of an archive (Assuming that you have GNU tar)
-r, --append
append files to the end of an archive
You can gzip the log files before adding to the archive and can continue to update (append) the archive with newer files.
$ ls -1
foo-20130101.log
foo-20130102.log
foo-20130103.log
$ gzip foo*
$ ls -1
foo-20130101.log.gz
foo-20130102.log.gz
foo-20130103.log.gz
$ tar cvf backup.tar foo*gz
Now you have another log file to add to the archive:
$ ls -1
foo-20130104.log
$ gzip foo-20130104.log
$ tar rvf backup.tar foo-20130104.log.gz
$ tar tf backup.tar
foo-20130101.log.gz
foo-20130102.log.gz
foo-20130103.log.gz
foo-20130104.log.gz
If you don't need to use tar, I suggest 7-Zip. It has an 'add' command, which I believe does what you want.
See related SO question: Is there a way to add a folder to existing 7za archive?
Also, the 7-Zip documentation: https://sevenzip.osdn.jp/chm/cmdline/commands/add.htm
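A minimal usage sketch (archive and file names are illustrative); the first run creates the archive, and later runs append to it:
7z a logs.7z foo-20130105.log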

unzip all files and save all contents in a single folder - unix

I have a single directory containing multiple zip files that contain .jpg files.
I need to unzip all files and save all contents (the .jpg files) into a single folder.
Any suggestions on a unix command that does that?
Please note that some of the contents (jpgs) might exist with the same name in multiple zip files; I need to keep all the jpgs.
thanks
unzip '*.zip' -o -B
Note that these utilities are often not installed by default;
see http://www.cyberciti.biz/tips/how-can-i-zipping-and-unzipping-files-under-linux.html regarding installation.
Read about the -B flag to understand its limitations.
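If you also want every jpg in one target folder regardless of the paths inside the zips, a hedged variant (the target directory name is illustrative): -j junks internal paths, -d sets the output folder, and with -B a colliding name is kept as a renamed backup instead of being lost:
unzip -o -B -j '*.zip' -d all_jpgs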

Add last n lines of files to tar/zip

I need to regularly send a collection of log files that can grow quite large, so I would like to send only the last n lines of each of the files.
for example:
/usr/local/data_store1/file.txt (500 lines)
/usr/local/data_store2/file.txt (800 lines)
Given a file with a list of needed files named files.txt, I would like to create an archive (tar or zip) with the last 100 lines of each of those files.
I can do this by creating a separate directory structure with the tail-ed files, but that seems like a waste of resources when there's probably some piping magic that can accomplish it. The full directory structure must also be preserved, since files can have the same names in different directories.
I would like the solution to be a shell script if possible, but perl (without added modules) is also acceptable (this is for Solaris machines that don't have ruby/python/etc.. installed on them.)
You could try
tail -n 10 your_file.txt | while read line; do zip /tmp/a.zip $line; done
where a.zip is the zip file and 10 is n, or
tail -n 10 your_file.txt | xargs tar -czvf test.tar.gz --
for tar.gz
You are focusing on a specific implementation instead of looking at the bigger picture.
If the final goal is to have an exact copy of the files on the target machine while minimizing the amount of data transferred, what you should use is rsync, which automatically sends only the parts of the files that have changed, and can also compress while sending and decompress while receiving.
Running rsync doesn't need any more daemons on the target machine than the standard sshd one, and to set up automatic transfers without passwords you just need to use public key authentication.
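A minimal sketch, assuming files.txt holds one path per line (rsync strips any leading slashes and preserves the directory structure) and the remote host and path are placeholders:
# send only the changed parts of the listed files, compressing in transit
rsync -avz --files-from=files.txt / user@remote:/backups/logs/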
There is no piping magic for that, you will have to create the folder structure you want and zip that.
#!/bin/bash
mkdir -p tmp
for i in /usr/local/*/file.txt; do
    mkdir -p "$(dirname "tmp/${i:1}")"   # ${i:1} strips the leading / (bash)
    tail -n 100 "$i" > "tmp/${i:1}"
done
zip -r zipfile tmp/*
Use logrotate.
Have a look inside /etc/logrotate.d for examples.
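An illustrative logrotate entry (the path and the rotation policy are assumptions) that rotates each log daily and keeps seven compressed generations:
/usr/local/data_store*/file.txt {
    daily
    rotate 7
    compress
    missingok
}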
Why not put your log files in SCM?
Your receiver creates a repository on his machine, from which he retrieves the files by checking them out.
You send the files just by committing them; only the diff will be transmitted.
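A hedged sketch with git, assuming the log directories live inside a shared repository's work tree (paths are illustrative):
# sender: commit the updated logs and push; only the delta travels
git add usr/local/data_store1/file.txt usr/local/data_store2/file.txt
git commit -m "log update $(date +%F)"
git push
# receiver: pull to pick up just the changes
git pull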
