Creating a torrent with Transmission doesn't work - transmission

I'm trying to create my own torrent with Transmission on the command line.
I followed a basic tutorial but I'm missing something: https://forum.transmissionbt.com/viewtopic.php?f=8&p=55854
transmission-create --comment "my comment" --tracker "udp://tracker.openbittorrent.com:80/announce" --outfile test.torrent MyFile.txt
Creating torrent "test.torrent" ..... done!
And
transmission-show test.torrent
Name: MyFile.txt
File: test.torrent
GENERAL
Name: MyFile.txt
Hash: 67bd20d96046a0a80753fc6f0c93006077f99d7d
Created by: Transmission/2.52 (13304)
Created on: Sun Aug 3 12:31:29 2014
Comment: my comment
Piece Count: 1574
Piece Size: 32.00 KiB
Total Size: 51.57 MB
Privacy: Public torrent
TRACKERS
Tier #1
udp://tracker.openbittorrent.com:80/announce
FILES
MyFile.txt
OK, now when I load the torrent on my computer I use uTorrent. It gets stuck at:
Connecting to peers 0%
So I'm sure I've done something wrong, or forgotten something, but I can't find what...

Related

Ring buffer log file on unix

I'm trying to come up with a unix pipeline of commands that will allow me to log only the most recent n lines of a program's output to a text file.
The text file should never be more than n lines long (it may be shorter while the file is first filling up).
It will be run on a device with limited memory/resources, so keeping the filesize small is a priority.
I've tried stuff like this (n=500):
program_spitting_out_text > output.txt
cat output.txt | tail -500 > recent_output.txt
rm output.txt
or
program_spitting_out_text | tee output.txt | tail -500 > recent_output.txt
Obviously neither works for my purposes...
Anyone have a good way to do this in a one-liner? Or will I have to write a script/utility?
Note: I don't want anything to do with dmesg and must use standard BSD unix commands. The "program_spitting_out_text" prints out about 60 lines per second.
Thanks in advance!
If program_spitting_out_text runs continuously and keeps its file open, there's not a lot you can do.
Even deleting the file won't help, since the program will continue writing to the now "hidden" file (the data still exists but there is no directory entry for it) until it closes it, at which point it is really removed.
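You can see this "hidden file" behaviour for yourself; here is a hedged sketch (the writer loop, demo.log, and the use of lsof are all just illustrative):
( while true; do date; sleep 1; done ) > demo.log &
writer=$!
sleep 3
rm demo.log                      # directory entry gone, but space not yet freed
lsof -p "$writer" | grep demo    # the file still shows as open, marked "(deleted)"
kill "$writer"                   # once the writer exits, the space is reclaimed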
If it closes and reopens the log file periodically (every line or every ten seconds or whatever), then you have a relatively easy option.
Simply monitor the file until it reaches a certain size, then roll the file over, something like:
while true; do
    sleep 5
    lines=$(wc -l <file.log)
    if [[ $lines -ge 5000 ]]; then
        rm -f file2.log
        mv file.log file2.log
        touch file.log
    fi
done
This script will check the file every five seconds and, if it's 5000 lines or more, will move it to a backup file. The program writing to it will continue to write to that backup file (since it has the open handle to it) until it closes it, then it will re-open the new file.
This means you will always have (roughly) between five and ten thousand lines in the log file set, and you can search them with commands that combine the two:
grep ERROR file2.log file.log
Another possibility is if you can restart the program periodically without affecting its function. By way of example, a program which looks for the existence of a file once a second and reports on that can probably be restarted without a problem. One calculating pi to a hundred billion significant digits will probably not be restartable without impact.
If it is restartable, then you can do basically the same trick as above. When the log file reaches a certain size, kill off the current program (which you will have started as a background task from your script), do whatever magic you need to in rolling over the log files, then restart the program.
For example, consider the following (restartable) program prog.sh which just continuously outputs the current date and time:
#!/usr/bin/bash
while true; do
    date
done
Then, the following script will be responsible for starting and stopping the other script as needed, by checking the log file every five seconds to see if it has exceeded its limits:
#!/usr/bin/bash
exe=./prog.sh
log1=prog.log
maxsz=500
pid=-1
touch ${log1}
log2=${log1}-prev
while true; do
    if [[ ${pid} -eq -1 ]]; then
        lines=${maxsz}
    else
        lines=$(wc -l <${log1})
    fi
    if [[ ${lines} -ge ${maxsz} ]]; then
        if [[ $pid -ge 0 ]]; then
            kill $pid >/dev/null 2>&1
        fi
        sleep 1
        rm -f ${log2}
        mv ${log1} ${log2}
        touch ${log1}
        ${exe} >> ${log1} &
        pid=$!
    fi
    sleep 5
done
And this output (from an every-second wc -l on the two log files) shows what happens at the time of switchover, noting that it's approximate only, due to the delays involved in switching:
474 prog.log 0 prog.log-prev
496 prog.log 0 prog.log-prev
518 prog.log 0 prog.log-prev
539 prog.log 0 prog.log-prev
542 prog.log 0 prog.log-prev
21 prog.log 542 prog.log-prev
Now keep in mind that's a sample script. It's relatively intelligent but probably needs some error handling so that it doesn't leave the executable running if you shut down the monitor.
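As a hedged sketch of that error handling (assuming the same pid variable as in the script above, placed near its top), a trap can tidy up the child when the monitor itself is stopped:
# Kill the background program when this monitor script exits
cleanup() {
    [[ ${pid} -ge 0 ]] && kill ${pid} >/dev/null 2>&1
}
trap cleanup EXIT
trap 'exit 1' INT TERM    # make Ctrl-C and kill run the EXIT trap too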
And, finally, if none of that suffices, there's nothing stopping you from writing your own filter program which takes standard input and continuously outputs that to a real ring buffer file.
Then you would simply do:
program_spitting_out_text | ringbuffer 4096 last4k.log
That program could be a true ring buffer in that it treats the 4k file as a circular character buffer but, of course, you'll need a special marker in the file to indicate the write-point, along with a program that can turn it back into a real stream.
Or, it could do much the same as the scripts above, rewriting the file so that it's always below the size desired.
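As a hedged, line-based sketch of that second variant of the ringbuffer filter (the rewrite-per-line strategy is my simplification; at 60 lines/second a real tool would batch its rewrites):
#!/usr/bin/env bash
# Keep only the last N lines of stdin in OUTFILE, rewriting it as input arrives.
n=${1:?usage: ringbuffer N outfile}
out=${2:?usage: ringbuffer N outfile}
buf=()
while IFS= read -r line; do
    buf+=("$line")
    (( ${#buf[@]} > n )) && buf=("${buf[@]: -n}")
    printf '%s\n' "${buf[@]}" > "${out}.tmp" && mv "${out}.tmp" "${out}"
done
It would then be used much like the original attempt: program_spitting_out_text | ringbuffer 500 recent_output.txt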
Since apparently this basic feature (a circular file) does not exist on GNU/Linux, and because I needed it to track logs on my Raspberry Pi with limited storage, I just wrote the code as suggested above!
Behold: circFS
Unlike other tools mentioned in this post and similar ones, the maximum size is arbitrary and limited only by the actually available storage.
It does not rotate through several files; everything is kept in the single file, which is rewritten on "release".
You can have as many log files as needed in the virtual directory.
It is a single C file (~600 lines including comments), and it builds with a single compile line after installing the FUSE development dependencies.
This first version is very basic (see the README); if you want to improve it with some of the TODOs (see the TODO file), you are welcome to submit pull requests.
As a joke, this is my first "write-only" FUSE driver! :-)

How to restore the modification time of a file after changing it?

I am modifying some files with the help of a script on Unix. I don't want the modification times of the files to be changed. I used the touch command, but that was no use. Is there any other way?
I want to restore the previous modification time of the file. Is that possible?
Touch is the way to go. Was your syntax correct?
[01:35:42 root@~]# touch -t 201107262235.34 foo
[01:35:49 root@~]# stat foo
File: `foo'
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: ca20h/51744d Inode: 642445 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2011-07-26 22:35:34.000000000 -0400
Modify: 2011-07-26 22:35:34.000000000 -0400
Change: 2011-07-27 01:35:49.000000000 -0400
[01:35:50 root@~]#
backup:
# savedate=$(stat -c %Y filename.ext)
restore:
# touch -d @${savedate} filename.ext
Capture the modification time before changing the file:
oldFileTime=`find theFileThatIsBeingChanged -maxdepth 0 -printf "%Ty%Tm%Td%TH%TM.%.2TS"`
Make your modifications, then use touch to reset the time:
touch -t "$oldFileTime" theFileThatIsBeingChanged

Need help to write a shell script to calculate the directory space used & free

I need help writing a shell script to calculate directory space usage in tabular format. The output should contain the used space, total space & free space. Please kindly help me with this.
It sounds like you are looking for the command "df". Unfortunately, the output and options depend on whatever flavor of UNIX you happen to be using.
Type "man df" or try "df -h" on a command line to learn more.
Here's an example from my Mac:
kim-burgaards-macbook-pro:~ kim$ df
Filesystem     512-blocks        Used  Available  Capacity  Mounted on
/dev/disk0s2    976101344   912544576   63044768       94%  /
devfs                 224         224          0      100%  /dev
map -hosts              0           0          0      100%  /net
map auto_home           0           0          0      100%  /home
/dev/disk1s1    976773104   761379976  215393128       78%  /Volumes/Photo Vault
/dev/disk2s2   1952853344  1844058136  108795208       95%  /Volumes/Backup
As far as I remember, the output looks very similar on Solaris.
AFAIK, directories don't have a notion of total space and free space; that depends on the disk the directory resides on.
To know the space used, you can use du -s 'directory name'.
You can use df 'directory name' to find the available space on that storage medium, and probably combine the two.
For example, consider the directory '~/Desktop'
[foo@bar ~] df -h ~/Desktop
Filesystem Size Used Avail Use% Mounted on
foobar:/vol/arbit/foo
126G 84G 43G 67% /home/foo
Clearly ~/Desktop has not used 84G,
[foo@bar ~] du -sh ~/Desktop
30M /home/foo/Desktop
which is the correct usage.
You can use awk to grab required fields and populate your information.
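For instance, here is a hedged sketch of that du/df/awk combination (the -P flag keeps df output on a single line even for long filesystem names like the NFS mount above; the script layout is my own):
#!/bin/sh
# Tabular used/total/free report for one directory.
dir=${1:-.}
used_kb=$(du -sk "$dir" | awk '{print $1}')
df -Pk "$dir" | awk -v used="$used_kb" 'NR == 2 {
    printf "%-12s %-12s %-12s\n", "Used(KB)", "Total(KB)", "Free(KB)"
    printf "%-12s %-12s %-12s\n", used, $2, $4
}'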

cat file | ... vs ... <file

Is there a case, or a context, where cat file | ... behaves differently than ... <file?
When reading from a regular file, cat is in charge of reading the data: it does so however it pleases, and it might constrain the way it writes the data to the pipeline. Obviously the contents themselves are preserved, but anything else could be affected, for example the block size and the timing at which the data arrives. Additionally, the pipe itself isn't always neutral: it serves as an additional buffer between the input and ....
Quick and easy way to make the block size issue apparent:
$ cat large-file | pv >/dev/null
5,44GB 0:00:14 [ 393MB/s] [ <=> ]
$ pv <large-file >/dev/null
5,44GB 0:00:03 [1,72GB/s] [=================================>] 100%
Besides the points posted by other users: when using input redirection from a file, standard input is the file itself, but when piping the output of cat, standard input is a pipe carrying the contents of the file. When standard input is the file, the program will be able to seek within it; the pipe will not allow that. You can see this by finding a zip file and running the following commands:
zipinfo /dev/stdin < thezipfile.zip
and
cat thezipfile.zip | zipinfo /dev/stdin
The first command will show the contents of the zip file, while the second will show an error, though it is a misleading error because zipinfo does not check the result of the seek call and only errors out later on.
A useless use of cat is always to be avoided. It's like driving with the handbrake on. It wastes CPU cycles for nothing, with the OS constantly context-switching between the cat process and the next one in the pipe. If all the world's useless cats were gone and stopped being invented, reinvented, and passed on from father to son, we wouldn't have global warming, because we could easily live with the 1.21 gigawatts of power saved.
Thanks. I feel better now. Please join me in my crusade to stamp out the useless use of cat on Stack Overflow. This site is, as far as I perceive it, a major contributor to the proliferation of useless cats. I don't blame the newbies, but I do want to teach them. Workers and newbies of the world, loosen the handbrakes and save the planet!!!1!
cat will allow you to pipe multiple files in sequentially. Otherwise, < redirection and cat file | produce the same side effects.
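For example (hypothetical file names), concatenating several inputs is something plain < redirection cannot express:
cat part1.log part2.log part3.log | grep ERROR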
Pipes cause a subshell to be invoked for the command on the right. This interferes with shell variables: anything set inside the piped-to command is lost when the subshell exits, as demonstrated below.
cat foo | while read line
do
    ...
done
echo "$line"
versus
while read line
do
    ...
done < foo
echo "$line"
One further difference is behavior on a blocking open() of the input file.
For example, assuming input is a FIFO with no writers, one invocation will not spawn any child programs until the input file is opened, while the other will spawn two processes:
prog ... < a_fifo # 'prog' not launched until shell can open file
cat a_fifo | prog ... # 'prog' and 'cat' are running (latter may block on open)
In practice this rarely matters except in contrived circumstances. prog might periodically log or do some cleanup work while waiting for input, for example, which you might want to happen even if no input is available. (Why wouldn't prog be sophisticated enough to open its own input fifo nonblocking?)
cat file | starts up another program (cat) that doesn't have to be started in the second case. It also makes things more confusing if you want to use "here documents". Otherwise, it should behave the same.

How can I display rsync progress from a log file without the carriage returns?

I have a log file from an rsync command run with progress on. While running, the progress updates rewrite the display information on the same line. When I capture the output from this command, I get a file that displays normally with cat on a terminal (all the backspaces and re-edits are replayed), but when I grep the file or otherwise process it, I see all the backspace edit commands. How can I process the file to remove all the progress updates and just get a file that has the final edits?
If you want to see what your captured file looks like without the progress lines overwriting the previous ones:
sed 's/\r/\n/g' capture-file
which might yield something like this:
created directory /some/dest/dir
filename
32768 0% 0.00kB/s 0:00:00
6717440 21% 6.38MB/s 0:00:03
13139968 41% 6.25MB/s 0:00:02
19791872 62% 6.28MB/s 0:00:01
26214400 82% 6.24MB/s 0:00:00
31784420 100% 6.25MB/s 0:00:04 (xfer#1, to-check=0/1)
sent 31788388 bytes received 31 bytes 3346149.37 bytes/sec
total size is 31784420 speedup is 1.00
If you want to see just the final step of the progress message and eliminate the previous ones:
sed 's/.*\r/\n/g' capture-file
Which might look like this:
created directory /some/dest/dir
filename
31784420 100% 6.25MB/s 0:00:04 (xfer#1, to-check=0/1)
sent 31788388 bytes received 31 bytes 3346149.37 bytes/sec
total size is 31784420 speedup is 1.00
You can run rsync with the --log-file=name option to capture log information in a file. Replace "name" with the file name that you want. You can control the information that gets logged using the --log-file-format option (see the "log format" section in man rsyncd.conf for details).
On my system the default rsync log file looks like this:
2009/11/01 17:19:20 [23802] building file list
2009/11/01 17:19:20 [23802] created directory /some/dest/dir
2009/11/01 17:19:25 [23802] <f+++++++++ filename
2009/11/01 17:19:25 [23802] sent 31788388 bytes received 31 bytes 3346149.37 bytes/sec
2009/11/01 17:19:25 [23802] total size is 31784420 speedup is 1.00
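For example, a hedged invocation (the paths are placeholders) that keeps the on-screen progress while writing a clean log:
rsync -av --progress --log-file=/tmp/rsync.log /some/src/ /some/dest/dir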
First, I'm guessing the progress indicators are important for you to keep for various reasons, because the simplest solution is to just turn them off. Secondly, I'm assuming there is no way to tell rsync to write a log, with everything but the progress indicators, to a file separate from what it shows on the screen. Lastly, I'm assuming that you do not want any of the progress indicator output at all in the log file you later process with other tools.
Given these assumptions, you can strip out the progress indicators by looking through each line for \b or \r characters and tossing every line that contains them. This can be as simple as a grep command line that looks like this:
grep -ve "$(echo -ne '[\b\r]')" logfile
This presupposes you have a shell that supports "$(...)"-style command substitution and an echo command that supports the -e argument.
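A hedged variant of the same idea using printf, which is more portable than echo -e, piped into a further search (the 'speedup' pattern is just an example target from the summary line):
grep -v "$(printf '[\b\r]')" logfile | grep speedup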
