How to convert relative path to absolute path in Unix [closed] - unix

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 12 months ago.
The community reviewed whether to reopen this question 12 months ago and left it closed:
Original close reason(s) were not resolved
I want to convert
Relative Path - /home/stevin/data/APP_SERVICE/../datafile.txt
to
Absolute Path - /home/stevin/data/datafile.txt
Is there a built-in tool in Unix to do this, or any good ideas as to how I could implement it?

readlink -f /home/stevin/data/APP_SERVICE/../datafile.txt should do what you're looking for, assuming your Unix/Linux has readlink.
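For example (the demo tree under /tmp is made up for illustration):

```shell
# Build a small tree mirroring the question's layout.
mkdir -p /tmp/rl_demo/data/APP_SERVICE
echo x > /tmp/rl_demo/data/datafile.txt

# readlink -f canonicalizes the path: it resolves "..", ".", and symlinks.
readlink -f /tmp/rl_demo/data/APP_SERVICE/../datafile.txt
```

Note that readlink -f resolves the path on disk, so every component except the last must exist.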

Something like this can help for directories (for files, append the basename):
echo "$(cd ../dir1/dir2/ && pwd)"
For files:
filepath=../dir1/dir2/file3
echo "$(cd "$(dirname "$filepath")" && pwd)/$(basename "$filepath")"

I'm surprised no one has mentioned realpath yet. Pass your paths to realpath and it will canonicalize them.
$ ls
Changes
dist.ini
$ ls | xargs realpath
/home/steven/repos/perl-Alt-Module-Path-SHARYANTO/Changes
/home/steven/repos/perl-Alt-Module-Path-SHARYANTO/dist.ini

Based on Thrustmaster's answer but with pure bash:
THING="/home/stevin/data/APP_SERVICE/../datafile.txt"
echo "$(cd "${THING%/*}" && pwd)/${THING##*/}"
Of course the cd requires the path to actually exist, which may not always be the case; in that situation you'll probably have an easier life writing a small Python script instead...
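If the path need not exist, a string-only normalization can be borrowed from Python (a sketch, assuming python3 is available):

```shell
# os.path.normpath collapses ".." purely textually; the file need not exist.
python3 -c 'import os, sys; print(os.path.normpath(sys.argv[1]))' \
  "/home/stevin/data/APP_SERVICE/../datafile.txt"
# prints: /home/stevin/data/datafile.txt
```

Be aware that textual collapsing of ".." ignores symlinks, unlike readlink -f.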

This is my helper function
# Get an absolute path from a relative path.
function realPath() {
  if [[ -d "$1" ]]; then
    (cd "$1" && pwd)  # subshell: the caller's working directory is untouched
  else
    echo "$(cd "$(dirname "$1")" && pwd -P)/$(basename "$1")"
  fi
}
Using a subshell for the directory case avoids having to save and restore the current directory, and avoids exit 1 terminating the calling shell when the function is sourced.
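A quick sanity check of a helper like this (a sketch; the demo tree under /tmp is made up, and the POSIX [ test is used for portability):

```shell
# Self-contained copy of the helper.
realPath() {
  if [ -d "$1" ]; then
    (cd "$1" && pwd)
  else
    echo "$(cd "$(dirname "$1")" && pwd -P)/$(basename "$1")"
  fi
}

# Hypothetical demo tree.
mkdir -p /tmp/rp_demo/a/b
touch /tmp/rp_demo/a/b/file.txt
cd /tmp/rp_demo/a

realPath ../a/b        # absolute path of the directory
realPath b/file.txt    # absolute path of the file
```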

Related

How can I execute script with xargs after find command [closed]

find . -name "recovery_script" | xargs
I'm trying to execute the scripts, but this only prints their paths. How can I run them in parallel?
find . -name "recovery_script" | xargs -n1 -P8 sh
for 8 processes in parallel.
Provided there are at least 8 places where "recovery_script" can be found.
The -n1 argument is necessary to feed one argument at a time to sh. Otherwise, xargs will feed a reasonable number of arguments all at once to sh, meaning it's trying to execute something like
sh dir1/recovery_script dir2/recovery_script dir3/recovery_script ...
instead of
sh dir1/recovery_script
sh dir2/recovery_script
sh dir3/recovery_script
...
in parallel.
Bonus: your command can be longer than just a single command, including options. I often use nice to allow other processes to still continue without problems:
find . -name "recovery_script" | xargs -n1 -P8 nice -n19
where -n19 is an option to nice, not to xargs.
(Aside: if you ever use wildcards for -name in find, use the -print0 option to find, and the -0 option to xargs: that separates output and input by the null character, instead of whitespace (since the latter may be part of the filename). Since you search for the full name here, that is not a problem.)
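The NUL-separated form from the aside can be sketched like this (the demo directory and script contents are made up):

```shell
# A script in a directory whose name contains a space.
mkdir -p "/tmp/x0_demo/dir one"
printf '#!/bin/sh\necho ran\n' > "/tmp/x0_demo/dir one/recovery_script"
chmod +x "/tmp/x0_demo/dir one/recovery_script"

# NUL-separated pipeline: safe even with whitespace in paths.
cd /tmp/x0_demo
find . -name "recovery_script" -print0 | xargs -0 -n1 -P8 sh
```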
From the xargs manual page:
SYNOPSIS: xargs ... [command [initial-arguments]]
and
... and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input.
The default behaviour is thus to echo whatever arguments you give to xargs. Providing a command like sh (perhaps depending on what executable you're trying to run) then works.
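The default-echo behavior is easy to see with a minimal sketch:

```shell
# With no command, xargs hands its input to /bin/echo in one batch:
printf '%s\n' a b c | xargs            # prints: a b c

# With an explicit command and -n1, each item gets its own invocation:
printf '%s\n' a b c | xargs -n1 echo   # prints a, b, c on separate lines
```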
This solution does not use xargs but a plain shell script. Maybe it can help:
#!/bin/sh
# Note: $(find ...) word-splits on whitespace; fine for simple filenames.
for i in $(find . -name recovery_script)
do
  {
    echo "Started $i"
    "$i"
    echo "Ended $i"
  } &
done
wait

How to store in file outputs of a c/c++ program in unix [closed]

I have a program written in C++, let's call it program, which produces a lot of output while running. I want to log that output to a file so it's easier to look through afterwards. I already tried ./program & > file and ./program & |tee file, but neither works (I want to run it in the background if possible).
Any ideas?
Thanks
EDIT: ./program > file doesn't work either. I tried putting it in the background and in the foreground, but the file is empty...
Redirect (>, 2>, 2>&1, etc) is the standard way of achieving this. However, this doesn't work in all possible scenarios.
Use script to capture everything displayed on your terminal:
script -c "./program arg1 --arg2" output.log
If you want to redirect stdout and put the program in the background, use (./program > file) &
You might try
program >& somefile &
The ending & means you want to put that job in the background, and (with bash or zsh) the >& redirects both stdout and stderr.
Then you could use tail -f somefile (perhaps in a different terminal window) to look at that growing somefile. See tail(1)
BTW, there are other possibilities. Look perhaps into batch command (you may need to install the at package to have it) and into nohup(1).
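The redirection forms above can be sketched with a stand-in for ./program (here a shell function, since the questioner's program is not available; the portable > file 2>&1 form is used instead of >&):

```shell
# Stand-in for ./program: writes to both stdout and stderr.
progdemo() { echo "normal output"; echo "error output" >&2; }

# Redirect both streams to a file and run in the background.
progdemo > /tmp/program.log 2>&1 &
wait

cat /tmp/program.log   # both lines are captured
```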

First tab completion enhancement [closed]

Customizing zsh lets you simply hit the tab key and cycle through directories. See this answer.
That is an amazing workflow improvement, but I need help with the following:
How can I get zsh tab completion to show me ALL files and folders and let me cycle through them? (Currently it only shows files when there is no further directory to change into.)
In addition, it would be very useful if it did not put "cd" in front of the completion when the choice is a file rather than a folder.
(I use the system's MIME settings to open files from the terminal.)
Thanks.
Modifying the answer here slightly:
function complete_pwd_items_on_empty_buffer
{
  if [[ -z $BUFFER ]]; then
    BUFFER="./"
    CURSOR=2
    zle list-choices
  else
    zle expand-or-complete
  fi
}
zle -N complete_pwd_items_on_empty_buffer
bindkey '^I' complete_pwd_items_on_empty_buffer
This will insert ./ and list executable files or directories if the command line is empty and you press the TAB key. You can execute an executable file in the current directory tree this way, or cd into a subdirectory this way if you have set the AUTO_CD option.
In fact, we can do a little better than that by enabling this trick on a command line containing only whitespace:
function complete_pwd_items_on_empty_buffer
{
  if [[ $BUFFER =~ ^[[:space:]]*$ ]]; then
    BUFFER+="./"
    CURSOR+=2
    zle list-choices
  else
    zle expand-or-complete
  fi
}
zle -N complete_pwd_items_on_empty_buffer
bindkey '^I' complete_pwd_items_on_empty_buffer

Using the mv command - file deleted? [closed]

This is probably a very stupid question, but is it possible to delete files with the "mv" command?
I'm asking because when I was attempting to move a file up to its parent directory, I accidentally typed one "." too many, and now I can't find my file.
So instead of:
$ mv myfile.txt ..
I had put:
$ mv myfile.txt ...
Now my file is gone. Did I delete it accidentally, and is it possible to get it back at all?
Thanks!
Your file has been renamed to "...". Do an ls -a to see dot files.
Try mv ... ../myfile.txt to do what you originally wanted.
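The mishap and the recovery can be reproduced like this (a sketch; the demo directory under /tmp is made up):

```shell
mkdir -p /tmp/mv_demo/sub && cd /tmp/mv_demo/sub
touch myfile.txt
mv myfile.txt ...        # oops: "..." is just an ordinary (hidden) filename
ls -a                    # shows ".", "..", and "..."
mv ... ../myfile.txt     # recover the file into the parent directory
```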
Your file is now named ...; check with ls -al in your current directory.
On UNIX systems, file names starting with a dot are hidden from directory listings by default.
ls -lA
will display dot files.
You can rename the file back
mv ... myfile.txt
Your file is now called ... and is not visible via a plain ls command.
Use ls -a to make "hidden" files (those starting with a dot) visible, or rename it back with mv ... your_file.
And to answer the title-question:
Yes and no.
It's not possible, but moving the file to /dev/null will lose it just as well. :D

How to copy recursive directory structure while creating symbolic file links only? [closed]

What's the best way to copy a whole recursive directory structure where all files are just copied as symbolic links?
In other words, the copy should mirror the whole directory (sub-)structure of the original directory but each file should just be a symbolic link.
I guess ... first you want to make your directories...
cd "$source"
find . -type d -exec mkdir -p "$target/{}" \;
Next, make your symlinks...
cd "$source"
find . -type f -print | (
  cd "$target"
  while read -r one; do
    deep=$(echo "${one:2}" | sed 's:[^/][^/]*:..:g')
    ln -s "${deep:3}/${one:2}" "$one"   # create the link at its mirrored location
  done
)
Note that this will fail if you have linefeeds or other odd characters in your filenames. I can't think of a quick way around that (say, by doing it all in a single find -exec), since $deep has to be calculated differently for each directory level.
Also, this is untested, and I'm not planning to test it. If it gives you inspiration, that's great. :)
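With GNU cp there is also a one-step option for this: -s creates symbolic links instead of copying file contents. A sketch (the demo tree is made up; note GNU cp requires the source to be an absolute path here, since it refuses to fabricate relative symlinks from a relative source):

```shell
# Hypothetical demo tree.
mkdir -p /tmp/cps_demo/src/a
echo hi > /tmp/cps_demo/src/a/f.txt

# -r recurses over the tree, -s makes symbolic links for regular files.
cp -rs /tmp/cps_demo/src /tmp/cps_demo/dst

ls -l /tmp/cps_demo/dst/a/f.txt   # a symlink to the original file
```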
This is not a solution that makes symbolic links, but it will make hard links:
cp -rl "$src" "$dst"
Cons:
it is harder to see whether a file in the tree has been replaced
both trees must be on the same filesystem
