Combine multiple scripts in an "index.html" like fashion? - unix

Is there a standard way in a unixesque (sh/bash/zsh) system to execute a group of scripts as if the group of scripts was one script? (Think index.html). The point is to avoid additional helper scripts like you usually find and keep small programs self sufficient and easier to maintain.
Say I have two Ruby scripts, main and helper.rb:
/bin
/bin/foo_master
/bin/foo_master/main
/bin/foo_master/helper.rb
So now when I execute foo_master
seo#macbook ~ $foo_master
[/bin/foo_master/main]: Make new friends, but keep the old.
[/bin/foo_master/helper.rb]: One is silver and the other gold.

If you're trying to do this without creating a helper script, the typical way to do this would just be to execute both (note: I'll use : $; to represent the shell prompt):
: $; ./main; ./helper.rb
Now, if you're trying to capture the output of both into a file, say, then you can group them into a subshell, with parentheses, and capture the output of the subshell as if it were a single command, like so:
: $; (./main; ./helper.rb) > index.html
Is this what you're after? I'm a little unclear on what your final goal is. If you want to make this a heavily repeatable thing, then one probably would want to create a wrapper command... but if you just want to run two commands as one, you can do one of the above two options, and it should work for most cases. (Feel free to expand the question, though, if I'm missing what you're after.)

I figured out how to do this in a semi-standards-compliant fashion.
I used shell evaluation to build the $PATH dynamically at runtime. So in my /etc/.zshrc:
REALPATH=$PATH
PATH=$REALPATH:$(find_paths)
where find_paths is a function that searches the $PATH directories for subfolders (pseudocode below):
for each path in $PATH => ls -d -- */
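A runnable sketch of that idea, assuming POSIX sh (the function name find_paths comes from the answer; the implementation below is my own assumption):

find_paths() {
    # For every directory already on $PATH, emit its immediate
    # subdirectories, then join the results with colons.
    printf '%s\n' "$PATH" | tr ':' '\n' | while IFS= read -r dir; do
        [ -d "$dir" ] && find "$dir" -mindepth 1 -maxdepth 1 -type d 2>/dev/null
    done | paste -sd ':' -
}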
So we go from this:
seo#macbook $ echo $PATH
/bin/:/usr/bin/
To this, automagically:
seo#macbook $ echo $PATH
/bin/:/usr/bin/:/bin/foo_master/
Now I just rename main to "foo_master" and voilà! Self contained executable, dare I say "app".

Yep that's an easy one!
#!/bin/bash
/bin/foo_master/main
/bin/foo_master/helper.rb
Save the file as foo_master.sh and type this in the shell:
seo#macbook ~ $sudo chmod +x foo_master.sh
Then to run type:
seo#macbook ~ $./foo_master.sh
EDIT:
The reason that an index.html file is served at any given directory is that the HTTP server explicitly looks for one. (In server config files you can specify the names of files to look for and serve, e.g. index.html, index.php, index.htm, foo.html, etc.) Thus it is not magical. At some point, a "helper script" is explicitly looking for files. I don't think writing a script like the one above is a step you can skip.
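To make that lookup explicit, here is a minimal sketch of a dispatcher that runs every executable it finds in the directory (the path is from the question; the loop itself is my assumption, and run-parts on many Linux systems does essentially this):

#!/bin/bash
# Run every executable regular file in the directory, in glob order.
for f in /bin/foo_master/*; do
    [ -f "$f" ] && [ -x "$f" ] && "$f"
done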

Related

How to rename multiple filenames in cshell script?

I have a C shell script which has the following two lines; it creates a directory and copies some files into it. My question is the following: the files being copied look like this: abc.hello, abc.name, abc.date, etc. How can I strip the abc and just copy them over as .hello, .name, .date, and so forth? I'm new to this; any help will be appreciated!
mkdir -p $home_dir$param
cp /usr/share/skel/* $home_dir$param
You're looking for something like basename:
In Bash, for example, you could get the base name, file suffix, and root name like this:
filepath=/my/folder/readme.txt
filename=$(basename "$filepath") # $filename == "readme.txt"
extension="${filename##*.}" # $extension == "txt"
rootname="${filename%.*}" # $rootname == "readme"
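Applied to the question's cp line, a minimal sketch that strips the abc prefix while copying (assuming Bash, and that all the relevant files start with abc.):

mkdir -p "$home_dir$param"
for f in /usr/share/skel/abc.*; do
    base=$(basename "$f")                 # e.g. "abc.hello"
    cp "$f" "$home_dir$param/${base#abc}" # copied as ".hello"
done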
ADDENDUM:
The key takeaway is "basename"; see its man page. Here's another example that should make things clearer:
basename readme.txt .txt # prints "readme"
"basename" is a standard *nix command. It works in any shell; it's available on most any platform.
Going forward, I would strongly discourage you from writing scripts in csh, if you can avoid it:
bash vs csh vs others - which is better for application maintenance?
Csh Programming Considered Harmful

Makefile rule depend on directory content changes

Using Make, is there a nice way to depend on a directory's contents?
Essentially I have some generated code which the application code depends on. The generated code only needs to change if the contents of a directory changes, not necessarily if the files within change their content. So if a file is removed or added or renamed I need the rule to run.
My first thought is to generate a text file listing of the directory and diff that with the last listing. A change means rerunning the build. I think I will have to pass off the generate-and-diff part to a bash script.
I am hoping someone in their infinite intelligence might have an easier solution.
Kudos to gjulianm, who got me on the right track. His solution works perfectly for a single directory.
To get it working recursively I did the following.
ASSET_DIRS = $(shell find ../../assets/ -type d)
ASSET_FILES = $(shell find ../../assets/ -type f -name '*')

codegen: ../../assets/ $(ASSET_DIRS) $(ASSET_FILES)
	generate-my-code
It appears now that any changes to the directory or files (add, delete, rename, modify) will cause this rule to run. There is likely some issue with file names here (spaces, for example, might cause problems).
Let's say your directory is called dir, then this makefile will do what you want:
FILES = $(wildcard dir/*)

codegen: dir # Add $(FILES) here if you want the rule to run on file changes too.
	generate-my-code
As the comment says, you can also add the FILES variable if you want the code to depend on file contents too.
A disadvantage of having the rule depend on a directory is that any change to that directory will cause the rule to be out-of-date — including creating generated files in that directory. So unless you segregate source and target files into different directories, the rule will trigger on every make.
Here is an alternative approach that allows you to specify a subset of files for which additions, deletions, and changes are relevant. Suppose for example that only *.foo files are relevant.
# indentation below must be tabs if copy-pasting
.PHONY: codegen
codegen:
	find . -name '*.foo' | sort > .filelist.new
	diff .filelist.current .filelist.new || cp -f .filelist.new .filelist.current
	rm -f .filelist.new
	$(MAKE) generate

generate: .filelist.current $(shell cat .filelist.current)
	generate-my-code

.PHONY: clean
clean:
	rm -f .filelist.*
The second line in the codegen rule ensures that .filelist.current is only modified when the list of relevant files changes, avoiding false-positive triggering of the generate rule.

Complex command execution in Makefile

I have a query regarding the execution of a complex command in the makefile of the current system.
I am currently using a shell command in the makefile to execute the command. However, my command fails, as it is a combination of many commands and its execution collects a huge amount of data. The makefile content is something like this:
variable=$(shell ls -lart | grep name | cut -d/ -f2- )
However, the make execution fails with an execvp failure, since the file listing is huge and I need to parse all of it.
Please suggest any ways to overcome this issue. Basically I would like to execute a complex command, assign its output to a makefile variable, and use that variable later in the program.
(This may take a few iterations.)
This looks like a limitation of the architecture, not a Make limitation. There are several ways to address it, but you must show us how you use variable, otherwise even if you succeed in constructing it, you might not be able to use it as you intend. Please show us the exact operations you intend to perform on variable.
For now I suggest you do a couple of experiments and tell us the results. First, try the assignment with a short list of files (e.g. three) to verify that the assignment does what you intend. Second, in the directory with many files, try:
variable=$(shell ls -lart | grep name)
to see whether the problem is in grep or cut.
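In the meantime, if the eventual goal is simply to act on each name, a minimal sketch that streams the listing through the shell instead of storing it all in a make variable (the per-entry action here is a hypothetical placeholder):

ls -lart | grep name | cut -d/ -f2- | while IFS= read -r entry; do
    printf '%s\n' "$entry"   # replace with the real per-entry action
done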
Rather than store the list of files in a variable, you can easily use shell functionality to get the same result. It's a bit odd that you're flattening a recursive ls to only get the leaves, and then running mkdir -p, which is really only useful if the parent directory doesn't exist, but if you know which depths you want to cover (for example the current directory and all subdirectories one level down) you can do something like this:
directories:
	for path in ./*name* ./*/*name*; do \
		mkdir "/some/path/$$(basename "$$path")" || exit 1; \
	done
(Note the doubled $$: inside a make recipe, a literal $ must be written as $$ so that the shell, not make, expands the command substitution and the variable.)
or even:
find . -name '*name*' -exec sh -c 'mkdir "/some/path/$(basename "$1")"' sh {} \;
(The sh -c wrapper is needed so basename runs on each path found; a bare $(basename {}) would be expanded before find ever substitutes {}. Inside a make recipe, each $ above would again need to be doubled.)

Change to xth directory terminal

Is there a way in a unix shell (specifically Ubuntu) to change directory into the xth directory that was printed from the ls command?
I know you can sort a directory listing in multiple ways, but how do I use the output from ls to get the xth directory?
An example shell:
$ ls
first_dir second_dir third_really_long_and_complex_dir
where I want to move into third_really_long_and_complex_dir by passing 3 (or 2 with zero-based indexing).
I know I could simply copy and paste, but if I'm already using the keyboard, it would be easier to type something like "cdls 2" or something like that if I knew the index.
The main problem with cd in an interactive session is that you generally want to change the current directory of the shell that is processing the command prompt. That means that launching a sub-shell (e.g. a script) would not help, since any cd calls would not affect the parent shell.
Depending on which shell you are using, however, you might be able to define a function to do this. For example in bash:
function cdls() {
    # Save the current state of the nullglob option
    SHOPT=$(shopt -p nullglob)
    # Make sure that */ expands to nothing when no directories are present
    shopt -s nullglob
    # Get a list of directories
    DIRS=(*/)
    # Restore the nullglob option state
    $SHOPT
    # cd using a zero-based index
    cd "${DIRS[$1]}"
}
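Usage, assuming the directory listing from the question (zero-based, so index 2 is the third entry):

$ cdls 2   # changes into third_really_long_and_complex_dir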
Note that in this example I absolutely refuse to parse the output of ls, for a number of reasons. Instead I let the shell itself retrieve a list of directories (or links to directories)...
That said, I suspect that using this function (or anything to this effect) is a very good way to set yourself up for an enormous mess - like using rm after changing to the wrong directory. File-name auto-completion is dangerous enough already, without forcing yourself to count...

get back to the previous location after 'cd' command?

I'm writing a shell script that needs to cd to some location. Is there any way to get back to the previous location, that is, the location before cd was executed?
You can simply do
cd -
that will take you back to your previous location.
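For example (cd - echoes the directory it returns to; it is equivalent to cd "$OLDPWD" followed by pwd; /home/seo is a hypothetical starting point):

$ pwd
/home/seo
$ cd /tmp
$ cd -
/home/seo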
Some shells let you use the pushd/popd commands to maintain a stack of directories.
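For example (pushd saves the current directory on the stack before changing; popd pops it and returns):

$ pushd /tmp   # remember the current directory, then cd to /tmp
$ popd         # return to the remembered directory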
If you're running inside a script, you can also run part of the script inside a sub-process, which will have a private value of $PWD.
# do some work in the base directory, eg. echoing $PWD
echo $PWD
# Run some other command inside subshell
( cd ./new_directory; echo $PWD )
# You are back in the original directory here:
echo $PWD
This has its advantages and disadvantages... it does isolate the directory nicely, but spawning sub-processes may be expensive if you're doing it a lot. (EDIT: as @Norman Gray points out below, the performance penalty of spawning the sub-process probably isn't significant relative to whatever else is happening in the rest of the script.)
For the sake of maintainability, I typically use this approach unless I can prove that the script is running slowly because of it.
You could store $PWD in a variable and then cd back to it afterwards. It may be quicker than a subshell.
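A minimal sketch (the variable name old_pwd is arbitrary):

old_pwd=$PWD       # remember where we are
cd ./new_directory
# ... do some work ...
cd "$old_pwd"      # and go back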
Another alternative is a small set of functions that you can add to your .bashrc that allow you to go to named directories:
# cd /some/horribly/long/path
# save spud
# cd /some/other/horrible/path
# save green
...
# go spud
/some/horribly/long/path
This is documented at A directory navigation productivity tool, but basically involves saving the output of "pwd" under the named mnemonics ("spud" and "green" in the above case), and then cd'ing to the contents of the files.
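A minimal sketch of the idea (the save/go names come from the answer; this implementation and the ~/.dirs storage location are my assumptions):

# add to ~/.bashrc
save() { mkdir -p ~/.dirs && pwd > ~/.dirs/"$1"; }
go()   { cd "$(cat ~/.dirs/"$1")"; }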
