Unix utility du -b vs du -h [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 years ago.
I was using the Unix utility du -h to check the file sizes of a text file and a UTF file. The man page says the -h / --human-readable flag will "print sizes in human readable format (e.g., 1K 234M 2G)". After reading the man page, I decided to try du -b. The man page says the -b / --bytes flag is "equivalent to '--apparent-size --block-size=1'". My understanding is that du -b lists how many bytes are in the file.
The output of du -b is 122, but running du -h on the same file gives 4.0K.
What does the K stand for? According to the man page it represents kilobytes, but 122 bytes can't be 4 kilobytes. What am I missing here?

The difference comes down to the block size of your filesystem. https://unix.stackexchange.com/questions/62049/why-is-a-text-file-taking-up-at-least-4kb-even-when-theres-just-one-byte-of-tex gives a good answer.
The gist is that many filesystems allocate disk space in 4-kilobyte chunks, so even if a file contains only a few bytes of data, it occupies 4K on the filesystem.
$ echo "foo" > foo
$ du -h foo
4.0K foo
The file's contents may be far less than 4K, 122 bytes in this case, but the file itself still takes up 4K on your filesystem.
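A quick way to see the two numbers side by side (a minimal sketch assuming GNU coreutils; tiny.txt is just a scratch file created here):

```shell
# Create a tiny file and compare apparent size vs. disk usage
printf 'hello\n' > tiny.txt   # 6 bytes of content
du -b tiny.txt                # apparent size in bytes: 6
du -h tiny.txt                # space allocated on disk, typically 4.0K
stat -c '%b blocks of %B bytes allocated' tiny.txt
```

The stat line shows the allocation that du -h is reporting, which is what actually gets reserved on disk.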

Related

How to store in file outputs of a c/c++ program in unix [closed]

I have a program written in C++, let's call it program, which produces a lot of output while running. I want to log that output to a file so it is easier to look at. I already tried ./program & > file and ./program & | tee file, but neither works (I want to run it in the background if possible).
Any ideas?
Thanks
EDIT: ./program > file doesn't work either. I tried running it in the background and in the foreground, but the file is empty....
Redirection (>, 2>, 2>&1, etc.) is the standard way of achieving this. However, it doesn't work in every scenario.
Use script to capture everything displayed on your terminal:
script -c "./program arg1 --arg2" output.log
If you want to redirect stdout and put the program in the background, use (./program > file) &
You might try
program >& somefile &
The trailing & puts the job in the background, and (in bash or zsh) >& redirects both stdout and stderr.
You can then use tail -f somefile (perhaps in a different terminal window) to watch somefile grow. See tail(1).
BTW, there are other possibilities. Look into the batch command (you may need to install the at package to get it) and into nohup(1).
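Combining the suggestions above into one line (a sketch; sh -c 'echo out; echo err >&2' stands in for the OP's ./program, which isn't available here):

```shell
# Run the command in the background, immune to hangups,
# with both stdout and stderr captured in program.log
nohup sh -c 'echo out; echo err >&2' > program.log 2>&1 &
wait            # in practice you'd leave it running instead of waiting
cat program.log # both streams end up in the file
```

While the job runs you can follow the log with tail -f program.log from another terminal.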

zsh history is too short [closed]

When I run history in Bash, I get a load of results (1000+). However, when I run history in the zsh shell I only get 15 results. This makes grepping history in zsh mostly useless.
My .zshrc file contains the following lines:
HISTFILE=~/.zhistory
HISTSIZE=SAVEHIST=10000
setopt sharehistory
setopt extendedhistory
How can I fix zsh to make my shell history more useful?
UPDATE
If in zsh I call history 1 I get all of my history, just as I do in Bash with history. I could alias the command to get the same result, but I wonder why history behaves differently in zsh and in Bash.
NVaughan (the OP) has already stated the answer in an update to the question: history behaves differently in bash than it does in zsh:
In short:
zsh:
history lists only the 15 most recent history entries
history 1 lists all - see below.
bash:
history lists all history entries.
Sadly, passing a numerical operand to history behaves differently, too:
zsh:
history <n> shows all entries starting with <n> - therefore, history 1 shows all entries.
(history -<n> - note the - - shows the <n> most recent entries, so the default behavior is effectively history -15)
bash:
history <n> shows the <n> most recent entries.
(bash's history doesn't support listing from an entry number; you can use fc -l <n>, but a specific entry <n> must exist, otherwise the command fails - see below.)
Optional background info:
In zsh, history is effectively (not actually) an alias for fc -l: see man zshbuiltins
For the many history-related features, see man zshall
In bash, history is its own command whose syntax differs from fc -l
See: man bash
Both bash and zsh support fc -l <fromNum> [<toNum>] to list a given range of history entries:
bash: specific entry <fromNum> must exist.
zsh: the command succeeds as long as at least 1 entry falls in the (explicit or implied) range.
Thus, fc -l 1 works in zsh to return all history entries, whereas in bash it generally won't, given that entry #1 typically no longer exists (but, as stated, you can use history without arguments to list all entries in bash).
#set history size
export HISTSIZE=10000
#save history after logout
export SAVEHIST=10000
#history file
export HISTFILE=~/.zhistory
#append into history file
setopt INC_APPEND_HISTORY
#don't record a command that duplicates the previous one
setopt HIST_IGNORE_DUPS
#add timestamp for each entry
setopt EXTENDED_HISTORY
This is my setup, and it works.
Perhaps late, but I came across this post, tried to apply the above, and failed... so in practical terms, put this in .zshrc:
alias history='history 1'
and you'll see everything until HISTSIZE runs out. To find a command I use (after the .zshrc change)
history | grep "my_grep_string"

Please help me understand grep and fgrep [closed]

I am trying to grep a list of IDs present in file1 from file2.
I write:
grep -f file1 file2
The command gets stuck, as if it were perpetually running.
Then I try:
fgrep -f file1 file2
This works in a flash.
The man page of grep says that fgrep is the same as "grep -f". But then how come I get no output for "grep -f"?
You cite the man page incorrectly! What it actually says is this:
fgrep is the same as grep -F
Note the uppercase -F, which is quite different from -f!
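The performance difference in the question follows from this: with lowercase -f, every line of file1 is compiled as a regular expression, while adding -F matches the same lines as fixed strings. A small illustration (ids.txt and log.txt are made up here):

```shell
printf 'a.c\n' > ids.txt            # '.' is a regex metacharacter
printf 'abc\naXc\na.c\n' > log.txt
grep -c -f ids.txt log.txt          # 3: as a regex, 'a.c' matches any middle character
grep -cF -f ids.txt log.txt         # 1: as a fixed string, only the literal 'a.c' matches
```

With thousands of ID patterns, the regex path can be dramatically slower, which is why fgrep (grep -F) finished "in a flash".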

How to convert relative path to absolute path in Unix [closed]

I want to convert
Relative Path - /home/stevin/data/APP_SERVICE/../datafile.txt
to
Absolute Path - /home/stevin/data/datafile.txt
Is there a built-in tool in Unix to do this, or any good ideas as to how I can implement this?
readlink -f /home/stevin/data/APP_SERVICE/../datafile.txt should do what you're looking for, assuming your Unix/Linux has readlink.
Something like this can help for directories (for files, append the basename):
echo $(cd ../dir1/dir2/; pwd)
For files,
filepath=../dir1/dir2/file3
echo $(cd $(dirname $filepath); pwd)/$(basename $filepath)
I'm surprised nobody has mentioned realpath yet. Pass your paths to realpath and it will canonicalize them.
$ ls
Changes
dist.ini
$ ls | xargs realpath
/home/steven/repos/perl-Alt-Module-Path-SHARYANTO/Changes
/home/steven/repos/perl-Alt-Module-Path-SHARYANTO/dist.ini
Based on Thrustmaster's answer but with pure bash:
THING="/home/stevin/data/APP_SERVICE/../datafile.txt"
echo "$(cd ${THING%/*}; pwd)/${THING##*/}"
Of course the cd requires the path to actually exist, which may not always be the case - in that case, you'll probably have a simpler life by writing a small Python script instead...
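If GNU coreutils is available, realpath -m ("allow missing components") also sidesteps the existence requirement; since a nonexistent component can't be a symlink, the .. is resolved textually:

```shell
# No component of the path needs to exist
realpath -m /home/stevin/data/APP_SERVICE/../datafile.txt
# prints /home/stevin/data/datafile.txt (assuming APP_SERVICE is not an existing symlink)
```

This avoids the cd entirely and works for both files and directories.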
This is my helper function:
# Get an absolute path from a relative path.
function realPath() {
  if [[ -d "$1" ]]; then
    currentPath="$(pwd)"         # save current path
    cd "$1" || exit 1            # cd to the directory being checked
    pwd
    cd "$currentPath" || exit 1  # restore original path
  else
    echo "$(cd "$(dirname "$1")" && pwd -P)/$(basename "$1")"
  fi
}

ffmpeg returning nonsense [closed]

I use this command
ffmpeg -i X.mpg -b 533k –vcodec h263 -ac 1 -ab 48k -acodec aac -strict experimental -s 352x288 X.3gp
from cmd to convert file X from mpg to 3gp. I even used this yesterday and it worked.
Today I decided to improve the command:
ffmpeg –i X.mpg -b 1000k –r 25 –vcodec h263 -ac 1 -ab 15750 –ar 8000 -acodec libopencore_amrnb -s 352x288 X.3gp
Now ffmpeg is completely screwed up, it returns garbage like
[NULL # 02EFF020] Unable to find a suitable output format for 'ÔÇôvcodec'
ÔÇôvcodec: Invalid argument
or
[NULL # 02CBEA80] Unable to find a suitable output format for 'ÔÇôi'
ÔÇôi: Invalid argument
Even if I use the first command, which worked before, it no longer converts anything, on the same file, in the same directory, with a fresh ffmpeg executable extracted from the same archive as before.
If I type a nonexistent file as input, ffmpeg gives
[NULL # 02CBEA80] Unable to find a suitable output format for 'ÔÇôr'
ÔÇôr: Invalid argument
I really don't know what to do. Looks like something really basic has been changed...
The dash in the –vcodec option is the wrong character (an en dash, byte 0x96 in Windows-1252, instead of the ASCII hyphen-minus 0x2D). Delete and retype it. That should fix the problem.
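To confirm which character you actually have, dumping the bytes makes it obvious (a sketch; the en dash is spelled out with octal escapes here, and in a UTF-8 file it encodes as e2 80 93 rather than Windows-1252's 0x96):

```shell
printf -- '-vcodec\n' | od -An -tx1          # ASCII hyphen-minus: starts with 2d
printf '\342\200\223vcodec\n' | od -An -tx1  # en dash: starts with e2 80 93
```

Anything other than a plain 2d at the start of an option will make ffmpeg treat the word as an output filename, producing exactly the "Unable to find a suitable output format" error above.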
