AccuRev binaries and recursive keep

My problem is in two parts:
My team and I are using Test Design Studio to write .vbs files in an AccuRev workspace. The problem is that AccuRev recognizes them as binary instead of text/ptext files, which causes problems when merging. Is there a setting in AccuRev I can change to force it to recognize .vbs files as text/ptext?
For all those binaries that are already in the stream, I need a way to convert them all to text/ptext. I've given up on the client UI, because it would mean going into the Workspace Explorer, walking every single folder one by one, and keeping those binaries. Then I thought of the commands. I tried
2.1. accurev keep -c "keep ptext" -n -E ptext -R target_folder
2.2. accurev keep -c "keep ptext" -n -E ptext -R .
2.3. But I get a "No Element Selected" error. That's because the "-n" flag is required for recursion, but it means non-modified files are ignored, and most of my files are backed and not modified; without "-n" I can't even select the directory for keeping (it reports "can't keep a directory"). I could create a file list, but that would take as long as manually keeping all the files one by one. I also tried working directly in the stream (since there is an empty stream above it, it lists all its files as outgoing), but there is no keep option in the stream. Is there an easy way to convert all files in a stream/workspace to text/ptext?

Yes, you will need to enable a pre-create trigger using the elem_type.pl script found in "accurev install dir/examples" on your server. Inside the elem_type.pl file you will see the directions for setting up this trigger.
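For reference, such a trigger is usually registered with the mktrig command; a rough sketch (the depot name here is only a placeholder, and the exact installation steps for your AccuRev version are described in the comments at the top of elem_type.pl):
accurev mktrig -p MyDepot pre-create-trig elem_type.pl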
Yes, run the following command to generate a list of all the files in your workspace.
"accurev stat -a -ffl > list.txt"
Then run this command to convert the files to ptext:
"accurev keep -c "ptext conversion" -E ptext -l list.txt"
Then you can promote those files.
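If you only want the .vbs files converted rather than everything in the workspace, you could narrow the list first; a sketch, assuming each line of the stat output contains the element path:
accurev stat -a -ffl | grep -i '\.vbs' > list.txt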

Check the files with a hex editor to see if there are any non-ASCII characters.
If there's binary content in the file AccuRev will see those files as binary.
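If you don't have a hex editor handy, a quick check from the shell can do the same job; a sketch (the file name is only an example):
LC_ALL=C grep -n '[^[:print:][:space:]]' file.vbs   # lists lines containing non-printable bytes
od -c file.vbs | head                               # or just eyeball the first bytes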
Overwrite the keep as jstanley suggested to change the type.
When adding new files, use: accurev add -E ptext -c "your favorite comment" file.vbs

Related

Folder structure with rsync in bash

I searched the forum but didn't find a post that matches my problem. Maybe there is one and you can point me to it.
My problem is that I want to sync a folder with the command rsync -a -v. I have 5 different machines, and on every machine there is a scratch folder that I want to sync into ~/work_dir/scratch_maschines; inside the scratch_maschines folder there should be a folder for maschine_a, maschine_b, and so on.
On the machines it is always the same path: /scratch/my_name. So when I use this command for the first two machines:
rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete sp02:/scratch/my_name ~/work_dir/scratch_maschine01; rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete maschine02:/scratch/my_name ~/work_dir/scratch_maschine02
I get folders scratch_maschine01 and scratch_maschine02 in my working directory, but inside these folders my data is not placed directly; there is first a folder named my_name, and that folder contains the data. So my question is: how can I use the rsync command to get the files from the scratch directories straight into the folder for each machine?
You might want to consider reformulating your commands similar to the following:
START=$(pwd)
# keep the excludes in an array so the patterns survive expansion intact
EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')
{
  SOURCE="sp02:/scratch/my_name"
  REMOTE="${HOME}/work_dir/scratch_maschine01"
  rsync --recursive -v --delete "${EXCLUDES[@]}" "${SOURCE}/" "${REMOTE}/"
} >"${START}/job.log" 2>"${START}/job.err"
The key elements there are:
the --recursive option, which tells rsync to include all content and subdirectories of the SOURCE directory.
the / behind ${SOURCE}, which tells rsync to copy the contents of the SOURCE directory rather than the directory itself.
the / behind ${REMOTE}, which makes it explicit that the content is to be deposited inside that directory, which is expected to already exist; that way rsync does not quietly fall back and deposit the files somewhere other than where you expected.
The above approach lends itself to a function form that can be called in a loop, with pre-attempt condition checks, and with the per-destination variable assignments grouped in a case statement.
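A minimal sketch of that shape, using the machine names and paths from the question (they are only examples):
sync_scratch() {
    # pick the source for the requested machine
    case "$1" in
        maschine01) SOURCE="sp02:/scratch/my_name" ;;
        maschine02) SOURCE="maschine02:/scratch/my_name" ;;
        *) echo "unknown machine: $1" >&2; return 1 ;;
    esac
    REMOTE="${HOME}/work_dir/scratch_${1}"
    mkdir -p "${REMOTE}"
    rsync --recursive --verbose --delete \
        --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' \
        "${SOURCE}/" "${REMOTE}/"
}

for m in maschine01 maschine02; do
    sync_scratch "$m"
done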
Using meaningful names for the variables in this way acts as a kind of implicit documentation, making the code easier to follow for someone unfamiliar with it, and serving as a refresher for yourself after a long period away from the code.
I try to avoid "~" because I prefer to always enclose variable definitions in double quotes, to avoid issues that can arise from paths containing unexpected characters or spaces. That way you can be sure your paths are interpreted correctly by the commands in your scripts.
Lastly, I prefer the long form of the rsync options (and of almost every other command) so that I don't have to go back to the manual to translate single-character options when reading the code later or troubleshooting unexpected errors (I have always had a poor memory).
In my own backup command, the only reason the
${PathMirror}${dirC}/
part is not additionally quoted inside the double-quoted COM string is that I know those variables all evaluate to simple strings which cannot be misinterpreted.

Restrict zsh tab completion behavior

My zsh has some completion features I don't understand and can't find where to change. I have two issues that I suspect share a similar fix. I initialize the zsh completion system with
autoload -Uz compinit
compinit
to get advanced completion features, but I also get the following problems that I don't have without compinit.
First: I happen to have a directory called mydir in my home directory and unfortunately, there is also a user called mydir. When I want to change into my directory and then use tab completion, i.e.
cd mydir/<TAB>
I get the subdirectories of ~myusername/mydir/ along with all directories available under ~mydir/. I already tried putting
zstyle ':completion:*' users myusername
in my .zlogin file, but it only changes the completion of the username itself, not the subsequent directories. Is there a similar switch to turn off completion of other users' home directories? Alternatively, it would already help if the current directory's completions appeared first in the completion menu.
Second: I wrote a script called setup-file-with-a-long-name.sh that resides in my home directory. When I want to execute it via
source setup-file-with-a-long-name.sh
I type the first few characters, press <TAB>, and get a list of lots of executable files that are presumably installed by the system somewhere on my $PATH. I don't care about all those files; I just want my file (which is the only match in the current directory) to be displayed first in the menu and accessible via <TAB> <TAB>, or better yet, accepted after the first <TAB>. (If I select any of the others, they don't work anyway because source needs the path, not just the filename. So this is behavior I don't understand and can't see how it is useful as a default for anybody.)
Possible workarounds:
1. Write ~/ explicitly - this is what I want to avoid, because I have to ssh into a new shell pretty often and want to start navigating without thinking about whether I am in $HOME or not.
2. Don't use compinit - well, I like the context-aware completion in principle, I just want to adapt it to my needs.
The following works in bash,
From the bash man page, under source:
source filename [arguments]
    Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename. If filename does not contain a slash, file names in PATH are used to find the directory containing filename. The file searched for in PATH need not be executable. When bash is not in posix mode, the current directory is searched if no file is found in PATH. If the sourcepath option to the shopt builtin command is turned off, the PATH is not searched.
Instructions for disabling that behavior are a little above the description of sourcepath, under shopt:
shopt [-pqsu] [-o] [optname ...]
    Toggle the values of variables controlling optional shell behavior. With no options, or with the -p option, a list of all settable options is displayed, with an indication of whether or not each is set. The -p option causes output to be displayed in a form that may be reused as input. Other options have the following meanings:
    -s  Enable (set) each optname.
    -u  Disable (unset) each optname.
    -q  Suppresses normal output (quiet mode); the return status indicates whether the optname is set or unset. If multiple optname arguments are given with -q, the return status is zero if all optnames are enabled; non-zero otherwise.
    -o  Restricts the values of optname to be those defined for the -o option to the set builtin.
    ...
    sourcepath
        If set, the source (.) builtin uses the value of PATH to find the directory containing the file supplied as an argument. This option is enabled by default.
so executing the following should stop source from searching your PATH:
shopt -u sourcepath
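If that behaves the way you want, you can make it permanent by adding the same line to your bash startup file:
# in ~/.bashrc: stop the source/. builtin from searching PATH
shopt -u sourcepath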

copy with rsync when files are different

I have to copy a big directory to my NAS using rsync. I would like to tell rsync to copy files only when source and destination differ, to avoid copying files that have already been copied.
Skipping identical files is the whole reason people use rsync; it is rsync's default behavior. Most of the time the only option you want to use is -a:
rsync -a -P <source> <dest>
The -P just means show progress, and -a means "archive", which means "when copying files, try to make the copy as identical as possible" (try to keep permissions, ownership, timestamps, etc.), but it also means "only update files if you have to". It's like saying "make sure <dest> is an up-to-date backup of <source>".
However, by default rsync already considers two files identical if they have the same size and the same last modification date. Of course, two files may have the same size and the same last modification date and still not be identical. So when running that command for the very first time, if you are not sure which files need updating and which ones don't, try this:
rsync -a -c -P <source> <dest>
-c means: don't rely just on size and date, checksum every file and compare the checksums; only if the checksums are identical are the files considered identical. Note that even when a file does need to be transferred, rsync will not necessarily send the whole file: big files are broken into smaller chunks, every chunk is checksummed separately, and only the chunks that have changed are transferred.
So even with checksumming you can save a lot of time when copying over a network connection. It won't save you any time when copying locally, because just copying everything is probably faster than checksumming everything. A plain copy will therefore always beat a checksumming rsync in speed when both source and destination are local drives. In that case use
cp -a -v <source> <dest>
or if your system doesn't know -a, use
cp -pPR -v <source> <dest>
which is equivalent to -a. Again, the -v is just to see some progress.
And I'd only use -c for the very first sync; after that, relying on file size and last modification date usually works very well for updating, and it is a whole lot faster. It works because if a file has been altered since the last sync, it will have a different last modification date, so by just comparing the dates rsync knows the file must be updated at the destination. Of course, that only works if your systems all have the correct date/time set, if you don't manipulate the last modification dates of files, and if you don't prevent your system from updating them.
If you want to skip files solely on presence, use this:
rsync -a -P --ignore-existing <source> <dest>
That's like telling rsync "If you see a file with the same name at the destination, always consider it to be identical and never update it".
Please note that if -a detects that a file in <source> is different from a file in <dest>, whether this is determined by size and modification date or by checksumming, it will always update the file at <dest> to match the file at <source>. If multiple sources are syncing to the same destination, you might also want to add -u, which means "in case two files are different, only update if the file at <source> has a newer last modification date than the file at <dest>".
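Putting those pieces together, a typical call for that multi-source case might look like this (the paths are only placeholders):
rsync -a -u -P /data/projects/ /mnt/nas/projects/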
Just as a general tip, if you type
man <command>
in a terminal, you will get a nice help page on most systems (Linux, macOS, and other UNIX systems), explaining all the options in detail. You can scroll up and down using the arrow keys or Page Up/Down, and you can leave that view by hitting "q" for quit. E.g.
man rsync

How to delete Excel files from a Unix machine?

I have to delete a number of files with names like "test excel-27-03-2016.xls" from a directory on a Unix machine. Can you please suggest how? I tried using the command
rm -f test excel-27-03-2016.xls
but it is not deleting the file.
Does the name of the file contain a space? It seems so.
If this is the case, rm -f "test excel-27-03-2016.xls" (note double quotes around the file name) ought to do it.
Running rm -f test excel-27-03-2016.xls means trying to erase two files, one named test and the other excel-27-03-2016.xls.
So if 'test excel-27-03-2016.xls' is one filename, you have to escape the space in the rm command.
rm test\ excel-27-03-2016.xls
or
rm 'test excel-27-03-2016.xls'
otherwise rm will think 'test' and 'excel-27-03-2016.xls' are two different files.
(Also you shouldn't need to use -f.)
For a single file, if the file name contains spaces, you have to protect those spaces. By default, the shell splits file names (arguments in general) at spaces. So, enclose the name in double quotes or single quotes:
rm -f "test excel-27-03-2016.xls"
or use a backslash if you prefer (but I don't prefer backslashes; I normally use quotes):
rm -f test\ excel-27-03-2016.xls
When a delete operation doesn't work, the -f option to rm becomes your enemy; it suppresses the error messages that rm would otherwise give. For example, without the -f, you might see:
$ rm test excel-27-03-2016.xls
rm: test: No such file or directory
rm: excel-27-03-2016.xls: No such file or directory
$
That tells you that rm was given two names, not one as you intended.
From a comment:
I have 20-30 files; do I have to give rm 'test excel-27-03-2016.xls' each time and provide "Yes" permission to delete each file?
Time to learn wildcards. The first thing to learn: be careful! Do not destroy files carelessly.
Run a command such as:
ls -ld *.xls
Does that list the files you want deleted — all the files you want deleted and nothing but the files you want deleted? If it doesn't contain any extra file names (and no directory names), then you can run:
rm -f *.xls
If it doesn't contain all the file names you need deleted, but everything it does contain should be deleted, then run the rm to reduce the size of the problem, and then devise an alternative pattern to delete the others:
ls -ld *.xlsx # Semi-plausible
If it contains too many names, you have a couple of options. One is to use rm interactively:
rm -i *.xls
and respond yes to those that should be deleted and no to those that should be kept. Another is to work out a more precise wildcard, perhaps *-27-03-2016.xls.
When you use wildcards, the shell keeps each file name as a single argument, so the fact that the generated names contain spaces isn't a problem. Be aware that many shell techniques, such as capturing that list of file names in a plain variable, do not preserve the spaces properly, which is a cause of much confusion (see the sketch below).
And, with any mass file removal, be very careful. The Unix system will not stop you from doing immense damage to your files; it will take you at your word. If you say 'remove everything', it will try to do so.
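As an illustration of that last point about spaces (the wildcard is just an example), looping over the glob directly, or collecting the names in a bash array instead of a plain string variable, keeps each name intact:
# each glob match stays a single argument, spaces and all
for f in *-27-03-2016.xls; do
    rm -- "$f"
done

# or collect the names in a bash array
files=( *-27-03-2016.xls )
rm -- "${files[@]}"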
From another comment:
I have taken root access so I will have all permissions.
Don't run as root when you have problems working out what you are doing. Running as root means that any mistake has the potential to be dramatically more devastating than if you run as yourself.
If you are running as root, the -f option to rm really isn't needed (unless someone has attempted to protect you by creating an alias for the rm command).
When you're root, the system does what you tell it to do. root: Remove the kernel. system: Yes, sir! Right away, sir! root: Remove the complete root file system. system: Yes, sir! Right away, sir!
Be very, very careful when running as root. It is a bad idea to experiment when running as root. It is very important to know exactly what you plan to do as root, to gain root privileges and do what you plan to do, and then lose the root privileges as soon as possible. Use sudo (or su) to temporarily gain root privileges.

Add last n lines of files to tar/zip

I need to regularly send a collection of log files that can grow quite large, so I would like to send only the last n lines of each of the files.
for example:
/usr/local/data_store1/file.txt (500 lines)
/usr/local/data_store2/file.txt (800 lines)
Given a file with a list of needed files named files.txt, I would like to create an archive (tar or zip) with the last 100 lines of each of those files.
I can do this by creating a separate directory structure with the tail-ed files, but that seems like a waste of resources when there is probably some piping magic that can accomplish it. The full directory structure must also be preserved, since files can have the same name in different directories.
I would like the solution to be a shell script if possible, but perl (without added modules) is also acceptable (this is for Solaris machines that don't have ruby/python/etc. installed on them).
You could try
tail -n 10 your_file.txt | while IFS= read -r line; do zip /tmp/a.zip "$line"; done
where a.zip is the zip file and 10 is n, or
tail -n 10 your_file.txt | xargs tar -czvf test.tar.gz --
for tar.gz.
You are focusing on a specific implementation instead of looking at the bigger picture.
If the final goal is to have an exact copy of the files on the target machine while minimizing the amount of data transferred, what you should use is rsync, which automatically sends only the parts of the files that have changed, and can also compress while sending and decompress while receiving.
Running rsync doesn't need any daemon on the target machine other than the standard sshd, and to set up automatic transfers without passwords you just need to use public key authentication.
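For example, a push over plain ssh might look like this (the host name and paths are only placeholders):
rsync -az /usr/local/data_store1/ loguser@receiver:/backups/data_store1/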
There is no piping magic for that, you will have to create the folder structure you want and zip that.
mkdir tmp
for i in /usr/local/*/file.txt; do
    # ${i:1} strips the leading / so the directory tree is recreated under tmp/
    mkdir -p "$(dirname "tmp/${i:1}")"
    tail -n 100 "$i" > "tmp/${i:1}"
done
zip -r zipfile tmp/*
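If the paths come from files.txt rather than a glob, the same idea works; a sketch assuming one absolute path per line (bash syntax, since ${f:1} strips the leading slash):
mkdir tmp
while IFS= read -r f; do
    mkdir -p "$(dirname "tmp/${f:1}")"
    tail -n 100 "$f" > "tmp/${f:1}"
done < files.txt
zip -r zipfile tmp/*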
Use logrotate.
Have a look inside /etc/logrotate.d for examples.
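A minimal sketch of what an entry under /etc/logrotate.d might look like for one of those files (the path and settings are only examples):
/usr/local/data_store1/file.txt {
    weekly
    rotate 4
    compress
    missingok
}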
Why not put your log files in SCM?
Your receiver creates a repository on his machine from where he retrieves the files by checking them out.
You send the files just by committing them. Only the diff will be transmitted.
