Easiest way to merge partitions under debian (unix)? [closed] - unix

I have two unix partitions under debian which I would like to merge (disk space problems :/). What would be the easiest way to do it? I think it would be best to tar or copy files from one partition to the other, delete one and resize the other. I will use parted to resize but how should I copy the files? There are links, permissions and devices which need to be moved without change.

You could run the following (as root) to copy the files. It works for symlinks, devices and ordinary files.
cd /partition2
tar cf - . | ( cd /partition1 && tar xf - )
Another way is to use cpio, but I never remember the correct syntax.
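For reference, the cpio equivalent uses its pass-through mode; here is a minimal sketch, assuming GNU cpio and the same mount points as above:
cd /partition2
# -0 = NUL-separated input, -p = pass-through, -d = create directories, -m = preserve mtimes
find . -xdev -depth -print0 | cpio -0pdm /partition1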

Since this is Debian with GNU fileutils, cp --archive should work fine.
cp --archive --sparse=always --verbose --one-file-system --target-directory=/TARGET /ORIGIN
If for some reason you’d want to go via GNU tar, you’d need to do something like this:
cd /origin
find . -xdev -depth -not -path ./lost+found -print0 \
| tar --create --atime-preserve=system --null --files-from=- \
--format=posix --no-recursion --sparse \
| { cd /target; tar --extract --overwrite --preserve-permissions --sparse; }
(I’ve done this so many times that I’ve got a file with all these command lines for quick reference.)
Warning: Using GNU "tar" will not copy POSIX ACLs; you'll need to use either the above "cp --archive" method or "bsdtar":
mkdir /target
cd /origin
find . -xdev -depth -not -path ./lost+found -print0 \
| bsdtar -c -n --null -T - --format pax \
| { cd /target; bsdtar -x -pS -f -; }
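If you want to verify that the ACLs actually survived, a quick check is to dump them on both sides and compare; this sketch assumes the getfacl utility from the acl package is available:
cd /origin && getfacl -R . > /tmp/acl-origin.txt
cd /target && getfacl -R . > /tmp/acl-target.txt
diff /tmp/acl-origin.txt /tmp/acl-target.txt  # no output means the ACLs match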

You can also use SquashFS to create a mirror of the partition and copy that over. After resizing your 2nd partition, mount the SquashFS image and copy over the necessary files. Keep in mind that your kernel will need SquashFS support to mount the image.
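A minimal sketch of that approach, assuming squashfs-tools is installed and /partition2 is the filesystem being retired (paths are illustrative):
# build a compressed, read-only image of the old partition
mksquashfs /partition2 /root/partition2.sqsh
# after resizing, loop-mount the image and copy the files across
mount -t squashfs -o loop /root/partition2.sqsh /mnt
cp --archive /mnt/. /partition1/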

Related

SCP issue with multiple files - UNIX

I'm getting an error when copying multiple files. The command below copies only the first file and reports an error for each of the rest. Can someone please help me out?
Command:
scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Result:
user#host:~/scripts/OTA$ scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Password:
Password:
2018084session_event 100% |**********************************************************************************************************| 9765 KB 00:00
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9_2_3
Your command uses command substitution to generate a list of files. You're assuming there is some magic in scp's "source" notation that would cause every member of the list generated by your find command to be treated as living on $host, when in fact your command might expand into something like:
scp remotehost:/incoming/someoldfile anotheroldfile /incoming
Only the first file is being copied from $host, because none of the rest include $host: at the beginning of the path. They're not found in your local /incoming directory, hence the error.
Oh, and in addition, you haven't escaped the asterisk in the find command, so 2018* may be expanded by the remote shell into multiple file names from the login directory of the user in question. I can't tell from here; it depends on your OS and shell configuration.
I should point out that you are providing yet another example of the classic Parsing LS problem. Special characters WILL break your command. The "better" solution usually offered for this problem tends to be to use a for loop, but that's not really what you're looking for. Instead, I'd recommend making a tar of the files you're looking for. Something like this might do:
ssh "$host" "find /incoming -mmin -120 -name 2018\* -exec tar -cf - {} \+" |
tar -xvf - -C /incoming
What does this do?
ssh runs a remote find command with your criteria.
find feeds the list of filenames (regardless of special characters) to a tar command as arguments.
The tar command sends its result to stdout (-f -).
That output is then piped into another tar running on your local machine, which extracts the stream.
If your tar doesn't support -C, you can either remove it and run a cd /incoming before the ssh, or you might be able to replace that pipe segment with a curly-braced command: { cd /incoming && tar -xvf -; }
The curly brace notation assumes a POSIX-like shell (bash, zsh, etc). The rest of this should probably work equally well in csh if that's what you're stuck with.
Limited warranty: Best Effort Only. Untested on animals or computers. Your mileage may vary. May contain nuts.
If this doesn't work for you, poke at it until it does.

SCP alternative to copy file from one unix host to another unix host [closed]

I have the constraints below when copying a file from one unix host to another:
1) The target host doesn't have FTP installed
2) scp is very slow for files in the gigabyte range
Is there any alternative that copies the file in less time? Currently it takes 90 hours to copy a 3 GB file with scp.
Faster alternatives to scp are bbcp, gzip+nc or pigz+nc.
This link describes all the commands in detail and explains why scp is slow:
http://intermediatesql.com/linux/scrap-the-scp-how-to-copy-data-fast-using-pigz-and-nc/
Here is a short summary of the commands used in the link.
bbcp:
bbcp -P 10 -f -T 'ssh -x -a %I -l %U %H bbcp' /u02/databases/mydb/data_file-1.dbf remote_host:/u02/databases/mydb/data_file-1.dbf
gzip+nc (run the first command on the source host, then the second on the destination host):
tar -cf - /u02/databases/mydb/data_file-1.dbf | gzip -1 | nc -l 8888
nc <source host> 8888 | gzip -d | tar xf - -C /
pigz+nc (same pattern, with parallel compression):
tar -cf - /u02/databases/mydb/data_file-1.dbf | pigz | nc -l 8888
nc <source host> 8888 | pigz -d | tar xf - -C /

Find and tar files on Solaris

I've got a little problem with my bash script. I'm a newbie in the unix world, so I find this exercise difficult. What I have to do is find files on a Solaris server with a specific name, modified within a specific time, and archive them in one .tar file. The first two points are easy, but the archiving is a nightmare: I keep archiving the whole directory tree leading down to the file (with the file at the end) into the .tar file, when I need just the file. My code looks like this:
find ~ -name "$maska" -mtime -$dni | xargs -t -L 1 tar -cvf $3 -C
where $maska is the name of the file, $dni refers to the modification time and $3 is just the archive name. I found out about the -C switch, which lets me jump into the folder where the desired file is, but when I use it with xargs, it seems just to jump there and do nothing else.
So my question is:
1) is there any possibility of achieving my goal this way?
Please remember, I'm not working with GNU tar. And I HAVE TO use these commands: tar and find.
Edit: I'd like to specify my problem further. When I use the script for, say, file a, it should look for it starting from the point shown in the script (it's ~), and everything it finds should go into one tar file.
What I got right now is (I'm in /home/me/Scripts):
-bash-3.2$ ./Script.sh a 1000 backup
a /home/me/Program/Test/a/ 0K
a /home/me/Program/Test/a/a.c 1K
a /home/me/Program/Test/a/a.out 8K
So script has done some packing. Next I want to see my packed file, so:
-bash-3.2$ tar -tf backup
/home/me/Program/Test/a/
/home/me/Program/Test/a/a.c
/home/me/Program/Test/a/a.out
And that's the problem. The tar file has all the paths in it, so if I untar it, instead of getting just the files I wanted to archive, it will put them back in their old places. For visualisation:
-bash-3.2$ ls
Script.sh* Script.sh~* backup
-bash-3.2$ tar -xvf backup
x /home/me/Program/Test/a, 0 bytes, 0 tape blocks
x /home/me/Program/Test/a/a.c, 39 bytes, 1 tape blocks
x /home/me/Program/Test/a/a.out, 7928 bytes, 16 tape blocks
-bash-3.2$ ls
Script.sh* Script.sh~* backup
That's the problem.
So all I want is to pack all those desired files (a in the example above) into one tar file without those paths, so that it will simply untar into the directory where I run Script.sh.
I'm not sure I understand what you want, but this might be it:
find ~ -name "$maska" -mtime -$dni -exec tar cvf $3 {} +
Edit: second attempt after your wrote the main issue is the absolute path:
( cd ~; find . -name "$maska" -type f -mtime -$dni -exec tar cvf $3 {} + )
Edit: third attempt, after you wrote that you want no path at all in the archive, that $maska is a directory name, and that $3 needs to be in the current directory (note the h flag on tar below, which makes it follow the symlinks so the archive contains the real files):
mkdir ~/foo && \
find ~ -name "$maska" -type d -mtime -$dni -exec sh -c 'ln -s $1/* ~/foo/' sh {} \; && \
( cd ~/foo ; tar chf - * ) > $3 && \
rm -rf ~/foo
Replace ~/foo by ~/somethingElse if ~/foo already exists for some reason.
Maybe you can do something like this:
#!/bin/bash
find ~ -name "$maska" -mtime -$dni -print0 | while read -d $'\0' file; do
d=$(dirname "$file")
f=$(basename "$file")
echo $d: $f # Show directory and file for debug purposes
tar -rvf tarball.tar -C "$d" "$f"
done
I don't have a Solaris box at hand for testing :-)
First of all, my assumptions:
1. "one tar file", like you said, and
2. no absolute paths, i.e. if you back up ~/dir/file, you should be able to test-extract it in /tmp and obtain /tmp/dir/file.
If the problem is the full paths, you should replace
find ~ # etc
with
cd ~ || exit
find . # etc
If the tar archive isn't an absolute name, instead, it should be something like
(
cd ~ || exit
find . etc etc | xargs tar cf - etc etc
) > $3
Explanation
"(...)" runs a subshell, meaning some of the tings you change in there have no effects outside of the parens; the current directory is one of them, so "(cd whatever; foo)" means you run another shell, change its current directory, run foo from there, and then you're back in your script which never changed directory.
"cd ~ || exit" is paranoia, it means "cd ~; if that fails, exit".
"." is an alias meaning "the current directory, whatever that is"; play with "find ." vs "find ~" if you don't know what it means, you'll understand it better than if I explained it here.
"tar cf -" means that you create the tar archive on standard output; I think the syntax is portable enough, you may have to replace "-" with "/dev/stdout" or whatever works on solaris (the simplest solution is simply "tar", without the "c" command, but it's ugly to read).
The final "> $3", outside of the parens, is output redirection: rather than writing the output to the terminal, you save it into a file.
So the whole script reads like this:
- open a subshell
- change the subshell's current directory to ~
- in the subshell, find the files newer than requested, archive them, and write the contents of the resulting tar archive to standard output
- the subshell's stdout is saved to $3; because the redirection is outside the parens, relative paths are resolved relatively to your script's $PWD, meaning that eg if you run the script from the /tmp directory you'll get a tar archive in the /tmp directory (it would be in ~ if the redirection happened in the subshell).
If I misunderstood your question, the solution doesn't work or the explanation isn't clear let me know (the answer is too long, but I already know that :).
The pax command will output tar-compatible archives and has the flexibility you need to rewrite pathnames.
find ~ -name "$maska" -mtime -$dni | pax -w -x ustar -f "$3" -s '!.*/!!'
Here are what the options mean, paraphrasing from the man page:
-w write the contents of the file operands to the standard output (or to the pathname specified by the -f option) in an archive format.
-x ustar the output archive format is the extended tar interchange format specified in the IEEE POSIX standard.
-s '!.*/!!' Modifies file operands according to the substitution expression, using regular expression syntax. Here, it deletes all characters in each file name from the beginning to the final /.
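To confirm that the paths were really stripped, you can list the resulting archive; invoked with only -f, pax reads and lists the archive contents:
pax -f "$3"  # should print bare file names with no leading directories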

Conditional remove in unix [closed]

I need to remove all the files in the current directory except one file, say abc.txt. Is there any command to rm all the other files in the directory except abc.txt?
If you're after a succinct command, then with extended globbing enabled in bash (shopt -s extglob), you should be able to use:
rm !(abc.txt)
There are however several caveats to this approach.
This will run rm on all entries in the directory (apart from "abc.txt") and this includes subdirectories. You will therefore end up with the "cannot remove directory" error if subdirs exist. If this is the case, use find instead:
find . -maxdepth 1 -type f \! -name "abc.txt" -exec rm {} \;
# omit -maxdepth 1 if you also want to delete files within subdirectories.
If !(abc.txt) returns a very long list of files, you will potentially get the infamous "argument list too long" error. Again, find would be the solution to this issue.
rm !(abc.txt) will fail if the directory is empty or if abc.txt is the only file. Example:
[me#home]$ ls
abc.txt
[me#home]$ rm !(abc.txt)
rm: cannot remove `!(abc.txt)': No such file or directory
You can work around this using nullglob, but it can often be cleaner to simply use find. To illustrate, a possible workaround would be:
shopt -s nullglob
F=(!(abc.txt)); if [ ${#F[*]} -gt 0 ]; then rm !(abc.txt); fi # not pretty
1)
mv abc.txt ~/saveplace
rm *
mv ~/saveplace/abc.txt .
2)
find . -type f ! -name abc.txt -exec rm {} +
Try
find /your/dir/here -type f ! -name abc.txt -exec rm {} \;
Provided you don't have files with spaces in their names, you can use a for loop over the result of ls:
for FILE in `ls -1`
do
if [[ "$FILE" != "abc.txt" ]]; then
rm $FILE
fi
done
You could write it as a script, or you can type it directly at the bash prompt: write the first line and press enter, then write the other lines, and bash will wait for you to type done before executing. Otherwise you can write it on a single line:
for FILE in `ls -1`; do if [[ "$FILE" != "abc.txt" ]]; then rm $FILE; fi; done
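A safer variant of the same loop avoids parsing ls altogether by letting the shell glob directly, so file names with spaces survive; a sketch:
for FILE in *; do
    if [[ "$FILE" != "abc.txt" ]]; then
        rm -- "$FILE"   # -- guards against names starting with a dash
    fi
done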

Removing only my files in Unix [closed]

I need to rm files from a unix directory that only belong to my id. I tried building this command, but to no avail:
ls -la | grep 'myid' | awk ' { print $9 } ' | rm
My result: Usage: rm [-firRe] [--] File...
You were really close. Try:
rm `ls -la | grep 'myid' | awk ' { print $9 } '`
Note that those are backticks, not single quotes surrounding the first three segments from your original pipeline. Also for me the filename column was $8, but if $9 is the right column for you, then that should do it.
find . -user myuser -print0 |xargs -0 rm
Put your own userid (or maybe user number) in for "myuser".
rm doesn't read from stdin.
find -user $(whoami) -delete
Please always test without the -delete first.
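For example, do a dry run first and only add -delete once the list looks right (this assumes GNU find, which permits omitting the starting path):
find -user "$(whoami)" -print    # review this list first
find -user "$(whoami)" -delete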
Try find, which can search for files belonging to a given user and then delete them:
find . -user username -delete
More info: man find
rm does not accept a list of files to delete on stdin (which is what you are doing by passing the list through the pipe).
Try this
find . -type f -user username -exec rm -f {} \;
Delete files belonging to user_name from the folder /tmp (you can replace this with your folder) that are older than 60 days (you can use any age here), and keep evidence in a deleted.txt file in user_name's home folder:
find /tmp -user user_name -mtime +60 -exec rm -rfv {} \; >> /home/user_name/deleted.txt
You could use find:
find . -maxdepth 1 -type f -user myid -print0 | xargs -0 rm -f
Drop the -maxdepth 1 if you want it to handle subdirectories as well.
