eyeD3 -- recursive tag removal

I want to recursively remove all ID3v1/ID3v2-tags of my mp3-files with eyeD3.
I can't get it to work.
The slim documentation doesn't say much about the PATH argument and its usage.
usage: eyeD3 [-h] [--version] [--exclude PATTERN]
[--plugins] [--plugin NAME]
[PATH [PATH ...]]
How do I apply and use the PATH argument correctly?

According to the online documentation:
The PATH argument(s) along with optional usage of --exclude are used to tell eyeD3 what files or directories to process.
Directories are searched recursively and every file encountered is passed to the plugin until no more files are found.
Are you sure that the PATH argument doesn't work this way?
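For what it's worth, a minimal sketch of the invocation (assuming a recent eyeD3 whose default classic plugin provides the --remove-all option; check eyeD3 --help to confirm):
# strip ID3v1 and ID3v2 tags from every file found under Music/, recursively
eyeD3 --remove-all Music/
Since directories passed as PATH are searched recursively, pointing the command at the top of your library should be all that's needed.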

Related

Rsync all files (recursively) from one dir to another, maintaining only a portion of the original dir structure

I have two directories:
Directory #1, 'C'
C's absolute path:
/A/B/C
Directory #2, 'T'
T's absolute path:
/Q/R/T
I want to use rsync to copy all files, recursively, from C into T, while maintaining the original directory structure, but only from B onwards.
Example to make it clearer: suppose 'B' has only 3 files nested within it:
/A/B/f1.txt
/A/B/C/f2.txt
/A/B/C/D/f3.txt
Then I want to end up with only f2.txt and f3.txt being copied over, with the final filepaths as follows (notice how I keep the directory structure, only from B onwards):
/Q/R/T/B/C/f2.txt
/Q/R/T/B/C/D/f3.txt
Here is the catch: I must execute the rsync cmd from within /Q/R/. So when I execute this command, my pwd must be /Q/R/.
Can anyone help me figure out how to do this?
[If I did not have this constraint on where my cwd must be, I could cd to /A/B and then execute: rsync . /Q/R/T/ --recursive --relative. Unfortunately, I cannot do that for reasons that would take a lot of pointless explaining here. And when I try to execute rsync /A/. /Q/R/T/ --recursive --relative, I end up with not only everything within A, but also that first part of the dir structure (/A/) that I don't want. (Note: in the real-life scenario the dir structure is much more complex than this; this is just the general problem.)]
The rsync command includes a couple of options which are suitable for this scenario. They are:
--include=PATTERN - Don't exclude files matching PATTERN
--exclude=PATTERN - Exclude files matching PATTERN
An excellent description and examples of the --exclude flag can be found here.
Solution
Given the directory structures provided in your question, and with your pwd set to /Q/R/, running the following command will meet your requirement:
rsync ../../A/ T/ --recursive --include 'A/B/**' --exclude 'B/*.*'
(The quotes keep the shell from expanding the glob patterns before rsync sees them.)
Edit:
If you do want /A/B/f1.txt to copy to /Q/R/T/B/f1.txt (it's unclear in your question, because you don't show it in the "I want to end up with" example), then omit the --exclude 'B/*.*' part, so the complete command is reduced to:
rsync ../../A/ T/ --recursive --include 'A/B/**'
or reduced even further in complexity to just:
rsync ../../A/** T/ --recursive
Explanation of the command
../../A/
The first argument provides the path to the source directory, i.e. its position in the directory tree relative to your pwd of /Q/R.
T/
The second argument provides the path to the destination directory, again relative to the pwd of /Q/R.
--recursive
This option tells rsync to recurse into the directories.
--include 'A/B/**'
This says that you want to include all the assets (files/folders), however many levels deep, from within the folder named B which resides inside folder A.
--exclude 'B/*.*'
This says that you want to exclude any assets (files/folders) whose name includes a dot plus extension and which reside directly inside folder B (at the top level). This will prevent the file named f1.txt from being copied. You could be even more specific here and use --exclude B/f1.txt instead; however, I'm assuming that in real life you perhaps have additional files you want to exclude here too.
Additional notes
Both the --include and --exclude options can be utilized multiple times. This can be very useful in some scenarios, as it enables you to be specific about what to include and/or exclude during the copy process.
For example, let's assume that your source directory /A/B/ (as described in your question) also contains a folder named X, so its path is /A/B/X.
Let's say that we also do not want to copy this folder named X (in the same way as you currently do not want to copy /A/B/f1.txt).
For this scenario we add another --exclude option as follows:
rsync ../../A/ T/ --recursive --include 'A/B/**' --exclude 'B/*.*' --exclude X/
Note the additional --exclude X/ at the end.
You mention...
(Note: in the real-life scenario the dir structure is much more complex than this; this is just the general problem.)
... in your question, so you may find it necessary to add further --exclude=PATTERN options to truly meet your requirements.
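While building up such a pattern list, it helps to preview the transfer first; rsync's --dry-run option (combined with --verbose) prints what would be copied without transferring anything:
# preview only; remove --dry-run to perform the actual copy
rsync ../../A/ T/ --recursive --include 'A/B/**' --exclude 'B/*.*' --dry-run --verbose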
Grunt
As you have included the gruntjs tag with your question, you may want to consider utilizing plugins which can run shell commands such as rsync, for example:
grunt-shell
grunt-exec

Makefile rule depend on directory content changes

Using Make, is there a nice way to depend on a directory's contents?
Essentially I have some generated code which the application code depends on. The generated code only needs to change if the contents of a directory changes, not necessarily if the files within change their content. So if a file is removed or added or renamed I need the rule to run.
My first thought is generate a text file listing of the directory and diff that with the last listing. A change means rerun the build. I think I will have to pass off the generate and diff part to a bash script.
I am hoping someone in their infinite intelligence might have an easier solution.
Kudos to gjulianm who got me on the right track. His solution works perfectly for a single directory.
To get it working recursively I did the following.
ASSET_DIRS = $(shell find ../../assets/ -type d)
ASSET_FILES = $(shell find ../../assets/ -type f -name '*')
codegen: ../../assets/ $(ASSET_DIRS) $(ASSET_FILES)
	generate-my-code
It appears now any changes to the directory or files (add, delete, rename, modify) will cause this rule to run. There is likely some issue with file names here (spaces might cause issues).
Let's say your directory is called dir, then this makefile will do what you want:
FILES = $(wildcard dir/*)
codegen: dir # Add $(FILES) here if you want the rule to run on file changes too.
	generate-my-code
As the comment says, you can also add the FILES variable if you want the code to depend on file contents too.
A disadvantage of having the rule depend on a directory is that any change to that directory will cause the rule to be out-of-date — including creating generated files in that directory. So unless you segregate source and target files into different directories, the rule will trigger on every make.
Here is an alternative approach that allows you to specify a subset of files for which additions, deletions, and changes are relevant. Suppose for example that only *.foo files are relevant.
# replace indentation with tabs if copy-pasting
.PHONY: codegen
codegen:
	find . -name '*.foo' | sort > .filelist.new
	diff .filelist.current .filelist.new || cp -f .filelist.new .filelist.current
	rm -f .filelist.new
	$(MAKE) generate

generate: .filelist.current $(shell cat .filelist.current)
	generate-my-code

.PHONY: clean
clean:
	rm -f .filelist.*
The second line in the codegen rule ensures that .filelist.current is only modified when the list of relevant files changes, avoiding false-positive triggering of the generate rule.
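To illustrate, a hypothetical session (the touched file name is an assumption; the point is when .filelist.current gets rewritten):
$ make codegen      # first run: .filelist.current is created, generation runs
$ make codegen      # list unchanged: .filelist.current keeps its timestamp
$ touch extra.foo   # add a file matching the relevant pattern
$ make codegen      # diff now fails, the list is refreshed, generation re-runs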

Combine multiple scripts in an "index.html" like fashion?

Is there a standard way on a unixesque (sh/bash/zsh) system to execute a group of scripts as if the group was one script? (Think index.html.) The point is to avoid the additional helper scripts you usually find, and to keep small programs self-sufficient and easier to maintain.
Say I have two Ruby scripts (main and helper.rb in the listing below):
/bin
/bin/foo_master
/bin/foo_master/main
/bin/foo_master/helper.rb
So now when I execute foo_master:
seo#macbook ~ $ foo_master
[/bin/foo_master/main]: Make new friends, but keep the old.
[/bin/foo_master/helper.rb]: One is silver and the other gold.
If you're trying to do this without creating a helper script, the typical way to do this would just be to execute both (note: I'll use : $; to represent the shell prompt):
: $; ./main; ./helper.rb
Now, if you're trying to capture the output of both into a file, say, then you can group these into a subshell, with parenthesis, and capture the output of the subshell as if it was a single command, like so:
: $; (./main; ./helper.rb) > index.html
Is this what you're after? I'm a little unclear on what your final goal is. If you want to make this a heavily repeatable thing, then one probably would want to create a wrapper command... but if you just want to run two commands as one, you can do one of the above two options, and it should work for most cases. (Feel free to expand the question, though, if I'm missing what you're after.)
I figured out how to do this in a semi-standards-compliant fashion.
I used the eval syntax in shell scripting to lambda evaluate the $PATH at runtime. So in my /etc/.zshrc
REALPATH=$PATH
PATH=$REALPATH:`find_paths`
where find_paths is a function that recursively searches the $PATH directories for folders (pseudocode below)
(foreach path in $PATH => ls -d -- */)
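For the curious, here is one portable way such a function could be written (this is my reading of the pseudocode above, not the author's actual implementation):
find_paths() {
    # Print the immediate subdirectories of every entry on $PATH,
    # joined with ':' so the result can be appended to PATH.
    echo "$PATH" | tr ':' '\n' | while read -r dir; do
        for sub in "$dir"/*/; do
            [ -d "$sub" ] && printf '%s:' "${sub%/}"
        done
    done | sed 's/:$//'
}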
So we go from this:
seo#macbook $ echo $PATH
/bin/:/usr/bin/
To this, automagically:
seo#macbook $ echo $PATH
/bin/:/usr/bin/:/bin/foo_master/
Now I just rename main to "foo_master" and voilà! Self contained executable, dare I say "app".
Yep that's an easy one!
#!/bin/bash
/bin/foo_master/main
/bin/foo_master/helper.rb
Save the file as foo_master.sh and type this in the shell:
seo#macbook ~ $ sudo chmod +x foo_master.sh
Then to run, type:
seo#macbook ~ $ ./foo_master.sh
EDIT:
The reason an index.html file is served for any given directory is that the HTTP server explicitly looks for one. (In the server config you can specify the names of files to look for and serve as the index, e.g. index.html, index.php, index.htm, foo.html, etc.) Thus it is not magical: at some point, a "helper script" is explicitly looking for files. I don't think writing a script like the one above is a step you can skip.

How to generate translation file (.po, .xliff, .yml,...) from a Symfony2/Silex project?

I'm going to build a Silex/Symfony2 project and I have been looking around for a method to generate XLIFF/PO/YAML translation files based on the texts-to-be-translated inside the project, but I haven't found any instructions or documentation on it.
My question is: Is there an automated way to generate translation file(s) in specific format for a Symfony2/Silex project?
If yes, please tell me how to generate the file and then how to update the translations after that.
If no, please tell me how to create the translation file(s) and then add more text for my project. I am looking for an editor, desktop-based or web-based, instead of using a normal text editor (I have looked at Transifex and GetLocalization, but they don't have an option to create a new file or add more text).
After a long time searching the internet, I found a good one:
https://github.com/schmittjoh/JMSTranslationBundle
I see you've found a converter, but to answer your first question about generating your initial translation file -
If you have Gettext installed on your system you could generate a PO file from your "texts-to-be-translated inside the project". The command line program xgettext will scan the source files looking for whatever function you're using.
Example:
To scan PHP files for instances of the trans method call as shown here you could use the following command -
find . -name "*.php" | xargs xgettext --language=PHP --keyword=trans --output=messages.pot
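From that template you can then seed a per-language PO file with msginit, which also ships with Gettext (the locale here is just an example):
# create an initial French catalogue from the extracted template
msginit --input=messages.pot --locale=fr --output-file=messages.fr.po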
To your question about editors:
You could use any PO editor, such as POEdit, to manage your translations, but as you say you eventually need to convert the PO file to either an XLIFF or YAML language pack for Symfony.
I see you've already found a converter tool. You may also like to try the one I wrote for Loco. It supports PO to YAML, and PO to XLIFF.
Workaround for busy people (UNIX)
You can run the following command in the Terminal:
$ grep -rEo --no-filename "'.+'\|\btrans\b" templates/ > output.txt
This will output the list of messages to translate:
'Please provide your email'|trans
'Phone'|trans
'Please provide your phone number'|trans
...
Well, almost. But you can usually do some work from here...
Obviously you must tweak the command to your liking (transchoice, double-quotes instead of single...).
Not ideal but can help!
grep options
grep -R, -r, --recursive: Read all files under each directory, recursively; this is equivalent to the -d recurse option.
grep -E, --extended-regexp: Interpret PATTERN as an extended regular expression.
grep -o, --only-matching: Show only the part of a matching line that matches PATTERN.
grep -h, --no-filename: Suppress the prefixing of filenames on output when multiple files are searched.
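If you want to take the workaround one step further, the same output can be massaged into a rough YAML skeleton with sed (a sketch: the output file name is an assumption, and keys containing quotes or colons will need manual fixing):
grep -rhEo "'.+'\|\btrans\b" templates/ \
  | sed -e "s/|trans$//" -e "s/^'\(.*\)'$/\1: \1/" \
  | sort -u > messages.en.yml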

Nmake getting a list of all .o files from .cpp files

I'm using nmake to compile multiple source files into an ELF. However, I do not want to specify the .o files in a long list like this:
OBJS = file1.o file2.o file3.o
What I would prefer is to use a wildcard that specifies all .o files in the current directory as dependencies for the .elf. However, the .o files don't exist until I've compiled them from the .cpp files. Is there any way to get a list of .cpp files using wildcard expansion and then do a string replacement to turn .cpp into .o?
There's not a particularly elegant way to do this in NMAKE. If you can, you should use GNU Make instead, which is available on Windows and makes many tasks much easier.
If you must use NMAKE, then you must use recursive make in order to do this automatically, because NMAKE only expands wildcards in prerequisites lists. I demonstrated how to do this in response to another similar question here.
Hope that helps.
I'm more familiar with Unix make and gmake, but you could possibly use:
OBJS = $(SOURCES:.cpp=.o)
(assuming your source files could be listed in SOURCES)
Here is another answer that might help you.
Another solution may be to use a wrapper batch file, where you create a list of all .cpp files with a "for" loop, like
del listoffiles.txt
echo SOURCES= \>> listoffiles.txt
for %%i in (*.cpp) do @echo %%i \>> listoffiles.txt
echo. >> listoffiles.txt
Afterwards, you can try to use this with the !INCLUDE preprocessing directive in nmake:
!INCLUDE listoffiles.txt
(I am sure this won't work from scratch, but the general idea should be clear).
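For illustration, with three sources in the directory the generated listoffiles.txt would look roughly like this (the trailing backslashes are NMAKE line continuations; the blank line written by echo. terminates the list):
SOURCES= \
file1.cpp \
file2.cpp \
file3.cpp \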
