What is the Unix way for a console script to use config files?

Let's imagine we have some script 'm12' (I've just invented this name) that runs
on Linux computers. If it is located in a directory on your $PATH, you can easily run it
from the console like this:
m12
It will work with the default parameters. But you can customize the behavior of
this script by running it with options, something like:
m12 --enable_feature --select=3
That is great and it will work. But I want to create a config file ~/.m12rc so I
will not need to specify --enable_feature --select=3 every time I run it.
That can be done easily.
The difficult part starts here.
So, I have the ~/.m12rc config file, but I want to start m12 without the parameters that
are stored in that config file. What is the Unix way to do this? Should I run the
script like this:
m12 --ignore_config
or is there a better solution?
Next: let's imagine I have a config file ~/.m12rc and I want some parameters from that
file, but want to change them a bit. How should I run the script, and how should the
script behave?
And the last question: is it a good idea for the script to look for .m12rc first
in the current directory, then in ~/, and then in /etc?
I'm asking all these questions because I want to implement config files in my
small script and I want to make the correct design decisions.

The book 'The Art of Unix Programming' by E. S. Raymond discusses such issues.
You can override the config file with --config-file=/dev/null.
You would normally use the order:
System-wide configuration (/etc/m12/m12rc, or just /etc/m12).
User's personal configuration (~/.m12rc)
Local directory configuration (./.m12rc)
Command-line options
with each later-listed item overriding earlier-listed ones. You should be able to specify the configuration file to read on the command line; arguably, that should take precedence over the other options. Think about --no-system-config, --no-user-config or --no-local-config. Many scripts do not warrant a system config file, and most scripts I've developed would not use both local config and user config. But that's the way my mind works.
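A minimal sketch of that precedence, assuming m12 is itself a shell script, that each config file is a plain key=value shell fragment, and using short options (-e, -s) in place of the question's long ones, since plain getopts cannot parse long options:
#!/bin/sh
# Source each configuration layer in order; later layers override earlier ones.
for conf in /etc/m12/m12rc "$HOME/.m12rc" ./.m12rc; do
    [ -f "$conf" ] && . "$conf"
done
# Command-line options are parsed last, so they override every config file.
while getopts "es:" opt; do
    case $opt in
        e) enable_feature=1 ;;
        s) select_opt="$OPTARG" ;;
    esac
done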
The way I package standard options is to have a script in $HOME/bin (say m12a) that does it for me:
#!/bin/sh
exec m12 --enable_feature --select=3 "$@"
If I want those options, I run m12a. If I want some other options, I run raw m12 with the requisite options. I have several hundred files in my personal bin directory (about 500 on my main machine, a Mac; some of those are executables, but many are scripts).

Let me share my experience. I normally source the config file at the beginning of the script. In the config file I also handle all the parameter switches:
DEFAULT_USER=blabla
while getopts ":u:" opt; do
    case $opt in
        u)
            export APP_USER=$OPTARG
            ;;
    esac
done
export APP_USER=${APP_USER-$DEFAULT_USER}
Then within the script I just use the variables; this lets me have a number of scripts sharing the same input parameters.
In your case I imagine you would move the getopts section into the script and source the config file after it (if there was no switch to skip sourcing), as in the sketch below.
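A minimal sketch of that arrangement, assuming a hypothetical -n switch to skip the config file:
#!/bin/sh
DEFAULT_USER=blabla
# Parse the switches first; -n (a hypothetical name) skips sourcing the config file.
skip_config=0
while getopts ":nu:" opt; do
    case $opt in
        n) skip_config=1 ;;
        u) cli_user=$OPTARG ;;
    esac
done
# Source the config unless -n was given; it may set APP_USER.
if [ "$skip_config" -eq 0 ] && [ -f "$HOME/.m12rc" ]; then
    . "$HOME/.m12rc"
fi
# Precedence: command line, then config file, then built-in default.
export APP_USER=${cli_user-${APP_USER-$DEFAULT_USER}}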
You should not put your script's config file in /etc: that requires root privileges, and you can perfectly well live with a config file in your home directory.
If you nevertheless want to share your script with other users, it should go to /usr/share...
Another solution is to use thor (a Ruby gem); it is far simpler for handling input parameters and avoids the work needed to get the same result in bash, e.g. getopts supports only single-letter switches.

Related

How do I unzip a password-protected file with Deflate64 compression? I have the password already. In Python or VB.NET

So I have a series of thousands of .zip files that need to be opened.
They are password-protected, but I have the passwords.
I'm trying to automate the opening of these, and the Deflate64 issue is causing a lot of pain.
Okay, so Deflate64 is proprietary, which is annoying as that stops you from using the normal zipfile library in Python. As a workaround I typically make a subprocess call to 7-Zip or similar. So something like:
import subprocess

# 7z flags: e = extract, -o<dir> = output directory, -y = assume yes, -p<pass> = password.
subprocess.run(["7z", "e", filename, f"-o{destination}", "-y", f"-p{password}"])
Then naturally just run that in a loop over your files. Depending on how they are laid out, you might want to just glob everything in a directory, or pipe the names in via stdin, etc.
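For example, a shell version of that loop might look like this (a sketch assuming 7z is on PATH and that all archives share one password):
#!/bin/sh
PASS='secret'        # placeholder password
DEST=extracted       # placeholder output directory
# Extract every .zip in the current directory; 7z handles Deflate64.
for f in *.zip; do
    7z e "$f" -o"$DEST" -y -p"$PASS"
done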
Often tasks like this are well suited to shell scripts, so you might want to consider that. I'm not a Windows user, but I think something like the following batch script would work as well:
@echo off
set pass=[password]
set folder=[folder]
for /R "%folder%" %%I in ("*.zip") do (
"C:\somedirectory\7z.exe" x -p%pass% -y -o"%%~dpI" "%%~fI"
)

What does $$[QT_HOST_DATA/get] do in a Qt Feature configuration (.prf) file?

Where is the following syntax, used in a feature configuration (.prf) file, defined:
$$[QT_HOST_DATA/get]
I know $$[ ... ] is to access QMake properties as explained in the Qt doc, but where is the /get part of the notation in $$[QT_HOST_DATA/get] clarified? And what does it precisely do?
Also, inside a Qt .conf file, what is the difference between include() (for other .conf files) and load() (for .prf files)?
If include(some.conf) merely causes the contents of some.conf to be literally pasted into the including .conf file, what does load() do exactly?
I have found no info about the structure of .prf files.
https://doc.qt.io/qt-5/qmake-advanced-usage.html says that you can create .prf files, but says nothing about how these files are processed or should be structured.
Thanks for any clarifications you can provide!
where is the /get part of the notation in $$[QT_HOST_DATA/get] clarified? And what does it precisely do?
Nowhere, except the qmake source code. It looks like every qmake property may have up to four special "subproperties": xxx/dev, xxx/src, xxx/raw and xxx/get. However, what they are used for is a mystery. Executing qmake -query QT_HOST_DATA/get produces (on my machine) just the same value as plain $$[QT_HOST_DATA].
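If you want to check this on your own installation, you can query each subproperty in turn from the shell (a quick exploratory snippet, nothing official):
# Compare the plain property with its four undocumented subproperties.
qmake -query QT_HOST_DATA
for sub in dev src raw get; do
    printf 'QT_HOST_DATA/%s = ' "$sub"
    qmake -query "QT_HOST_DATA/$sub"
done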
I have found no info about the structure of .prf files.
Basically, a .prf is just a "system include file". There are two points, though:
All .prf files reside in known location(s) pointed to by the QMAKEFEATURES variable.
BTW, QMAKEFEATURES is a sort of "protected variable". I managed to change it only with the help of the (also undocumented) cache() function:
QMAKEFEATURES *= mydir # '*=' because of 3 passes under Windows
# 'transient' prevents creating a file on disk
# only 'super' seems to work OK; no idea what's wrong with 'stash' or 'cache'
cache(QMAKEFEATURES, set transient super)
# now I can load .prf files from <mydir> too...
A .prf can be loaded implicitly by mentioning it in the CONFIG variable. For example, CONFIG += qt (which is the default, btw.) results in the inclusion of <SomePrefix>/share/qt5/mkspecs/features/qt.prf. Note that this takes place after the whole .pro has been processed, so a .prf file can be used to post-process user options.
what does load() do exactly?
It's just a version of include() designed specially for .prf files. All it does is include the .prf file. But, unlike CONFIG += xxx, it does this immediately, and, unlike plain include(), you shouldn't specify the path or extension.

Testing plugins live with Varying Vagrant Vagrants

I'm currently trying to use VVV to develop and test my plugins. My host OS is Win10.
My plugins are in D:\Workshop\projects\vendor\module. I've used this folder structure for a long time, and it is really convenient, especially for use with Composer and friends.
Now I've installed VVV and created a site with VV. I want to test a plugin, the source code of which is in D:\Workshop\projects\XedinUnknown\my-project. So I create a symlink in D:\Workshop\projects\XedinUnknown\vvv-local\www\my-test-site\htdocs\wp-content\plugins that points to that project's folder. Alas, it doesn't work. If I SSH into VVV and ls /srv/www/my-test-site/htdocs/wp-content/plugins, I can see my-project there, but it points to ../../../../../../../XedinUnknown/my-project, which, of course, doesn't exist. If I create a junction instead of a symlink, it's just an empty file.
I suspect that this has to do with how the Linux environment handles Windows symlinks, but I'm not entirely sure. Is it possible to make this work somehow? I really don't wanna copy the whole project folder into VVV.
This is also addressed here.
So, it would seem I've found somewhat of a solution. I added a synced folder which maps to my projects home, then created a symlink to that folder from the WP plugins directory, inside the VM.
Step 1 - Add Shared Folder
This should be done in a Customfile, as explained here. This file should go into the same directory as the Vagrantfile, i.e. it will become the Vagrantfile's sibling. In my case, if you're following along from my question, it is in D:\Workshop\projects\XedinUnknown\vvv-local. Anything put here becomes global for the whole of VVV, which also gives you the ability to use different combinations of your projects in different websites. Add these contents to your Customfile, creating the file if it does not exist:
config.vm.synced_folder "D:/Workshop/projects", "/srv/projects", :owner => "www-data", :mount_options => [ "dmode=775", "fmode=774" ]
Of course, you should replace D:/Workshop/projects with the path to where you store your projects. Note the forward slashes (/). This works on Win/Nix. For a Windows-only configuration, I suspect you'd have to replace them with \\, because this is an escape sequence.
Step 2 - Add Link to Project
This should be done in your site's vvv-init.sh file. In my case, this file was in D:\Workshop\projects\XedinUnknown\vvv-local\www\my-test-site\, because I want to create this symlink specifically for the my-test-site site. Please note that your VVV path will probably be different, and it doesn't have to be inside the projects directory. It's wherever you cloned VVV into. Add the below lines to your site's vvv-init.sh file.
# Create the symlink only if something is not already there
# (-e rather than -f, because the link target is a directory).
if [ ! -e "htdocs/wp-content/plugins/my-project" ]; then
    echo 'Creating symlink to plugin project...'
    cd ./htdocs/wp-content/plugins
    ln -s /srv/projects/XedinUnknown/my-project my-project
    cd -
fi
In the above snippet, change the path to your desired project path, keeping in mind that /srv/projects/ now maps live to the projects root in your host OS. You can also replace the second occurrence (last word) of my-project in ln -s /srv/projects/XedinUnknown/my-project my-project with whatever you want. As long as you don't change it later, your plugin should not suddenly get de-activated.
Also, from what I understood, vvv-init.sh runs during provisioning, not every time the machine is brought up. So, if you want to run the code in there, you have to run vagrant up --provision from the VVV directory. If you don't want to re-provision, you can run it manually: SSH into VVV with vagrant ssh, then cd /srv/www/my-test-site (replacing my-test-site with the name of your site) and run . vvv-init.sh.
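In concrete terms, using the my-test-site name from above, that is:
# From the VVV directory on the host: re-run provisioning...
vagrant up --provision

# ...or run the site's init script by hand inside the VM.
vagrant ssh
cd /srv/www/my-test-site
. vvv-init.sh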
Afterword
I am quite new to Bash scripting, and I don't know if my solution is the best one, so please feel free to suggest better versions of the Bash script. I also don't know Ruby, and am new to Vagrant, so please feel free to suggest improvements to the Customfile, which is in essence the same as the Vagrantfile.
One possible issue that I can anticipate with this solution (and this is inherent in the design of the filesystem architecture) is that if WordPress decides to make changes to your plugin, e.g. if you run a WP update, it will effectively delete all files in your project, including the repository. So, on the testing site I would recommend using something like this. I am in no way associated with that plugin.

Making multiple files from multiple files with one command in gnu make

Assume 1000 files with extension .xhtml are in directory input, and that a certain subset of those files (with output paths in $(FILES), say) need to be transformed via xslt to files with the same name in directory output. A simple make rule would be:
$(FILES): output/%.xhtml : input/%.xhtml
    saxon -s:$< -o:$@ foo.xslt
This works, of course, doing the transform one file at a time. The problem is that I want to use saxon's batch processing to do all the files at one time, since, given the number of files, that would be much faster, considering the overhead of loading Java and saxon for each file. Saxon allows the -s: (source) option to be a directory and processes all files in that directory, placing the results, with the same names, in the directory specified in the -o: option.
I'm aware of the well-known technique to get GNU make to do a single command to update multiple files by using pattern rules:
output/%.xhtml: input/%.xhtml
    saxon -s:input -o:output foo.xslt
But in my case this suffers from two problems. First, it will run the transform on all files in the input directory, not just the ones that have changed; and second, it will not limit the transform to the subset of files specified in $(FILES). The GNU make feature of running a recipe given in a pattern rule only once for all matched targets does not work for so-called "static pattern rules" (see here), which is what the rule given at the top of the post is.
In order to use the saxon batching feature, I need to create a temporary directory, copy to it only those files to be processed, then run the transform with that temporary directory as the input directory. I tried creating a temporary directory, remembering its name in a target-specific variable for future use, using
$(FILES): TMPDIR:=$(shell mktemp -d)
but this creates a new temporary directory for every single target that is out-of-date. In any case, I'm not sure how to structure the rule that would then copy the necessary files into that directory. I don't want to create the temporary directory at the time the makefile is parsed, since I have a non-recursive make system that will parse all make files, even those not related to the current top-level target, and don't want to create the temporary directory for situations in which it is not necessary/will not be used.
I'm well aware that many questions have been asked on SO in the past about creating multiple files from a single input; one solution is (non-static) pattern rules; other solutions involve phony targets. However, in this case I'm stuck as to how to put all this together.
I can identify the files that changed and copy them using the static pattern rule
$(FILES): output/%.xhtml : input/%.xhtml
    TMPDIR=`mktemp -d`; cp $< $$TMPDIR
but actually I would prefer to copy the files with a single cp command, whereas this copies them one by one. Perhaps there is some application here of cp -u?
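For what it's worth, the cp -u idea can at least collapse the copying into one command, though it does not by itself restrict the set of staged files (a sketch, assuming GNU cp and a persistent staging directory):
# -u copies only those inputs that are newer than their staged copies,
# so the copy step is one command; saxon still sees everything in staging/.
mkdir -p staging
cp -u input/*.xhtml staging/
saxon -s:staging -o:output foo.xslt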
I also considered using an ad-hoc extension for those files needing updating but could not see how to get this to work either. I'm about to give up and just run the saxon transform on all files when any of them have changed, but is there any better way?
Personally, I wouldn't try to do this from the command line. That's partly because I'm not a shell scripting wizard. I'm not an Ant wizard either, but because the requirement is to process files that haven't changed, this seems to fall very much into Ant territory. On the other hand, Ant will recompile the stylesheet for each transformation, which is an overhead you might want to avoid; if that's the case then your best bet is probably to write a little Java application. It's probably only 100 lines or less.
Final possibility is to do the processing within Saxon: that is, a single transformation that reads multiple input files using the collection() function and generates multiple result files using xsl:result-document. Saxon (commercial editions) offers an extension function last-modified that allows you to filter the files to be processed. With 1000 files you might also want the extension function saxon:discard-document() to prevent the heap filling.
Personally, I like your original one-compiler-per-file formulation. Doesn't this work well with make's -j n flag?
You can of course batch up files by copying, and then running saxon at the end. Recursive make (ugh!) can sort out the ordering. Something like:
.PHONY: all
all:
    rm -rf tmpdir
    mkdir -p tmpdir
    ${MAKE} tmpdir/sentinel
    saxon -s:tmpdir -o:output foo.xslt
tmpdir/sentinel: $(FILES) ; touch $@
$(FILES): output/%.xhtml: input/%.xhtml
    ln $< $(patsubst input/%,tmpdir/%,$<)
This does work, though I am very queasy about lying to make (the static pattern rule purports to create the target in output/, but in fact does its dirty deed in tmpdir/).
Note that in the recipe for tmpdir/sentinel, $? is correctly set to the list of output files that are out of date. This might be useful if you can pass a bunch of files to saxon rather than a folder.
I think one issue here is that saxon supports either one file or all files in a directory, so it isn't suitable for batch processing without copying to temporary directories.
Otherwise, this is quite simple to do by using a timestamp marker file as a proxy target. For example:
output/.timestamp : $(FILES)
    mkdir -p $(@D)
    $(COMMAND) -outputdir=output $?
    touch $@
The three commands are:
Ensure that the output directory exists.
Run the batch command on files newer than the timestamp file.
Update the timestamp file (creating it if necessary).
Remember that each line of a recipe is executed in its own subshell, and that if any command line fails, subsequent lines are not invoked.
This approach is useful with Java builds.

Location of configuration in unix program

I want to write a Unix/Linux program that will use a configuration file.
My problem is, where should I put the location of the file?
I could "hardcode" the location (like /etc) into the program itself.
However, I would like it, if the user without privileges could install it (through make) somewhere else, like ~.
Should the makefile edit the source code? Or is it usually done in a different way?
Create some defaults:
/etc/appname
~/.appname
Then, if you want to allow these to be overridden, have your application inspect an environment variable, e.g.
$app_userconfig
$app_config
which would contain an override path/filename.
Lastly, add a command-line option that allows a config to be specified at runtime, e.g.
-c | --config {filename}
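Putting those three layers together might look like this in shell (a sketch using only the short -c form, since getopts cannot parse long options; appname and the variable names are the placeholders from above):
#!/bin/sh
# Resolve the config file: command line beats environment beats defaults.
CONFIG=""
while getopts "c:" opt; do
    case $opt in
        c) CONFIG=$OPTARG ;;
    esac
done
[ -z "$CONFIG" ] && CONFIG=${app_config-}
[ -z "$CONFIG" ] && [ -f "$HOME/.appname" ] && CONFIG=$HOME/.appname
[ -z "$CONFIG" ] && [ -f /etc/appname ] && CONFIG=/etc/appname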
It is common to use a series of places to get the location:
Supplied by the user as a command line argument (i.e. ./program -C path/to/config/file.cfg).
From an environment variable (char *path_to_config = getenv("PROGRAMCONFIG");).
Possibly look for a user-specific or local version (stat("./program.cfg"), or build up a string to specify either "$HOME/.program/config.cfg" or "$HOME/.program.cfg" and stat that).
Hardcoded as a backup (stat("/etc/program/config.cfg",...)).
Keeping a global config file under /etc/progname is a standard. Also allowing a .local config file for individual users, which overrides the global settings, lets each user personalize the program to their preference.
As skaffman says, the canonical locations for things like config files are specified in FHS. There appears to be a convention that a program will read a config file from the directory from which it is run as an alternative to the one in the hard-coded location. You may wish to consider adding a command-line switch that allows a user to specify an alternative config file location, as well.
The makefile shouldn't modify the source directly, but it can pass a folder path/name to the compiler through the -D option. One way to handle it would be to #define something like DEFAULT_PATH to be the default installation path. If the user wants to define a path, the makefile would add -DUSER_PATH=whatever to the compiler options. You would write your code to use USER_PATH if it exists, and DEFAULT_PATH otherwise.
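As a sketch of that build step (the macro names follow the paragraph above; the paths and the app.c file name are hypothetical):
# Default build: only DEFAULT_PATH is baked in.
cc -DDEFAULT_PATH='"/etc/appname"' -o app app.c

# User install: additionally define USER_PATH; inside the source,
# '#ifdef USER_PATH' selects it over DEFAULT_PATH.
cc -DDEFAULT_PATH='"/etc/appname"' -DUSER_PATH="\"$HOME/.appname\"" -o app app.c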
