Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 years ago.
I have a folder which is 60 GB in size on a server that I need to destroy. But I only have 6 GB of space remaining on the server.
Besides the size of the folder, it contains literally hundreds of thousands of small files, so a simple scp would take forever. I really want to tar czf the folder, but again, I don't have the space.
Is there any way to do this?
Another method that you can use (and which fulfills the "transfer to another server" part of the request):
tar cz sourcedir/ | ssh somewhere 'cat > dest.tar.gz'
Unlike scp, it's not doing individual operations, with separate round-trips, for every little file, so it will go just as fast as you can gzip (or just as fast as your network can transfer, if that's slower). Since the archive is getting written to a remote server, you don't have to worry about disk space. And since it isn't deleting as it goes, you can ^C it without being left with half of your files in their original locations and the other half in the tarball.
You can also get a live filesystem (instead of an archive) on the destination end just by changing to
tar cz sourcedir/ | ssh somewhere 'tar xC destdir/'
which operates a bit like rsync without the "sync". Add v to the tar command on the right-hand side to list files as they are received by the destination server.
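The same streaming idea can be tried locally without ssh (the second tar simply stands in for the remote end; directory names here are made up for the demo):

```shell
# Build a small source tree to stream.
mkdir -p /tmp/streamdemo/sourcedir /tmp/streamdemo/destdir
echo "hello" > /tmp/streamdemo/sourcedir/a.txt
echo "world" > /tmp/streamdemo/sourcedir/b.txt

# Stream a compressed archive from one tar straight into another,
# exactly like the ssh pipeline, minus the network hop.
# (-f - makes stdin/stdout explicit; -C changes into the target dir.)
( cd /tmp/streamdemo && tar czf - sourcedir/ | tar xzf - -C destdir/ )
```

Nothing is ever written to disk as an archive: the bytes flow directly from the creating tar to the extracting tar, which is why the disk-space constraint disappears.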
I discovered the solution and wanted to share it: tar's --remove-files option is the way to achieve this.
So my command is this:
tar --remove-files -czf cpm006.tar.gz cpm006/
In a second terminal window, running du -sh /home/cpm006 several times confirmed that the files are deleted as soon as they are added to the tar archive.
The obvious benefit to this is that you can do archives for the purpose of freeing disk space even if that space is limited.
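A quick local experiment (throwaway paths, GNU tar assumed) shows the behaviour:

```shell
# Build a throwaway directory with a few files in it.
mkdir -p /tmp/rmdemo/cpmdemo
for i in 1 2 3; do echo "data $i" > /tmp/rmdemo/cpmdemo/file$i.txt; done

# Archive and delete in one pass; each file is removed
# right after it has been written into the archive.
( cd /tmp/rmdemo && tar --remove-files -czf cpmdemo.tar.gz cpmdemo/ )

# The archive now exists, and the original files are gone.
ls /tmp/rmdemo
```

Because deletion is incremental, peak disk usage is roughly the size of the compressed archive plus the largest single file, not source plus archive.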
Reference:
https://linux.die.net/man/1/tar
I have three directories on a UNIX box, as described below:
/tmp mapped on /dev/mapper/data-tmpVol1
/var mapped on /dev/mapper/data-varVol1
/opt mapped on /dev/mapper/data-optVol1
If I perform a move operation from /tmp to /var, will UNIX in fact do a copy (followed by a delete), since there are two different file systems behind the scenes?
If I want an instant move, is it better to copy the file first into /var/staging and then perform a move from /var/staging to /var/input?
Context around the issue: I have a process which scans for files in /var/input, and I've seen cases when it picked up half-copied files (when moving directly from /tmp to /var/input).
Regards,
Cristi
When moving across file systems, you may like to create a file in the destination directory with a temporary filename, e.g. my-file.txt~. The scanning process must ignore such temporary filenames. When the file is complete you rename it to the final name. This way when the file (with a final name) exists it is complete, or it doesn't exist at all.
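A sketch of that pattern in shell (paths are made up for the demo; the final mv is an atomic rename precisely because source and destination sit on the same filesystem):

```shell
# Destination directory watched by the scanning process (demo path).
mkdir -p /tmp/atomicdemo/var/input
echo "payload" > /tmp/atomicdemo/my-file.txt   # stands in for the file in /tmp

# Step 1: copy across filesystems under a temporary name the scanner ignores.
cp /tmp/atomicdemo/my-file.txt /tmp/atomicdemo/var/input/my-file.txt~

# Step 2: rename within the same filesystem. This is atomic, so the
# scanner either sees the complete file under its final name, or no file.
mv /tmp/atomicdemo/var/input/my-file.txt~ /tmp/atomicdemo/var/input/my-file.txt
```

The key design choice is that rename(2) within one filesystem never exposes a partially written file, while a cross-filesystem mv degenerates to copy-then-delete and can.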
After over a month, I have managed to piece together how to set up an AWS EC2 server. It has been very hard to upload files, as there are very conservative size limits when using the upload button in RStudio Server. The error message when this is attempted is "Unexpected empty response from server".
I am not unique in this respect e.g. Trouble Uploading Large Files to RStudio using Louis Aslett's AMI on EC2
I have managed to use the following commands through PuTTY, and this has allowed me to upload files via either FileZilla or WinSCP.
sudo chown -R ubuntu /home/rstudio
sudo chmod -R 755 /home/rstudio
Once I use these commands and log out, I can no longer access RStudio on the instance in future logins. I can still log in to my instance via my browser, but I get the error message:
Error Occurred During Transmission
Everything is fine except that once I use PuTTY I lose browser access to my instance.
I think this is because the command changes ownership or permissions. Should I be using a different command?
If I don't run any such command, I cannot connect FileZilla/WinSCP to the instance.
If anyone is thinking of posting a comment that this should be closed as a hardware issue: I don't have a problem with hardware. I am interested in the correct commands.
Thank you :)
OK, so eventually I realised what was going on here. The default home directory size for AWS is less than 8-10 GB regardless of the size of your instance. As the upload was going to the home directory, there was not enough room.
An experienced Linux user would not have fallen into this trap, but hopefully any other Windows users new to this who come across the problem will see this. If you upload into a different drive on the instance, the problem is solved. As the Louis Aslett RStudio AMI lives in this 8-10 GB space, you will have to set your working directory outside it, i.e. outside the home directory. This is not intuitively apparent from the RStudio Server interface.
Whilst this is an advanced forum and this is a rookie error, I am hoping no one deletes this question, as I spent months on this and I think someone else will too.
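Before uploading, it is worth checking how much space the volume holding the home directory actually has. These are standard commands, nothing AMI-specific assumed:

```shell
# Free/used space on the filesystem that holds the home directory.
df -h "$HOME"

# Compare with another mount point, e.g. /tmp, which may be on a
# different (larger) volume on some instances.
df -h /tmp
```

If the "Avail" column for the home filesystem is near zero, uploads into it will fail regardless of the instance size, which matches the symptom described above.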
Don't change the permissions of /home/rstudio unless you know what you are doing; this may cause unexpected issues (and it actually does cause issues in your case). Instead, copy the files with FileZilla or WinSCP to a temporary location (say, /tmp), then ssh to your instance with PuTTY and move the file to the RStudio directory with sudo (e.g. sudo mv /tmp/myfile /home/rstudio).
The infamous CryptoWall has encrypted a large number of my files/folders.
While I have restored most of my files from backup, I am now looking for a way to scan the remaining encrypted files scattered across my local and network drives.
Is there a way of generating a list of those encrypted files (by scanning the file headers, or by verifying file integrity)? Is it possible on the command line or with specific software?
Cheers,
Florian
You can try to generate a list of encrypted files with ListCWall:
http://www.bleepingcomputer.com/download/listcwall/
You can also try this link to decrypt files, which is better than nothing:
https://www.fireeye.com/blog/executive-perspective/2014/08/your-locker-of-information-for-cryptolocker-decryption.html
Original post taken from
http://www.bleepingcomputer.com/virus-removal/cryptowall-ransomware-information
CryptoWall stores the list of all encrypted files in the Windows registry.
After restoring from backup, some files may have been missed and might still be encrypted.
Looking at the last-modified attribute gives a short list of files that have likely been, and remain, encrypted.
Using Recuva (a Windows recovery tool), I have noticed that an encrypted file's magic number is random, while for a normal file the magic number (first four bytes) is the same per file type.
JPEG : FF D8 FF E0
EDIT: I have found a handy Unix command named file. It is available on Linux, Cygwin, and OS X.
With a quick script to scan every file on the system, files of unknown type are likely to be the remaining encrypted ones.
Comparing these magic numbers with the file extension is scriptable and should make it possible to determine what is encrypted/corrupted. Yet I have not found a tool able to do this and compare against a well-known database of magic numbers (first four bytes).
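A minimal sketch of such a scan using file (paths here are placeholders; the idea is that a plain "data" verdict in file's output means the magic number no longer matches any known type, which is a strong hint the file was encrypted):

```shell
# Create one normal file and one "encrypted-looking" file whose first
# bytes match no known magic number (0xDEADBEEF stands in for random
# ciphertext in this demo).
mkdir -p /tmp/scandemo
printf 'hello world\n' > /tmp/scandemo/notes.txt
printf '\xde\xad\xbe\xef\xde\xad\xbe\xef' > /tmp/scandemo/photo.jpg

# List every file whose content file cannot identify: these are the
# likely encrypted/corrupted leftovers.
find /tmp/scandemo -type f -exec file {} + | grep ': data$'
```

Running this over a whole drive (find /mnt/restored -type f ...) produces exactly the "unknown filetype" shortlist described above, without relying on extensions.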
I have recently set up RStudio on an EC2 instance using the AMI and process generously laid out by Louis Aslett on his website. But in an embarrassing turn of events, I can't access the data I need because it resides on my personal computer. I am new to cloud computing and have zero functional knowledge of Linux, but I do know SQL and R well. Any help or suggestions would be greatly appreciated.
Have you tried the "Upload" button in the "Files" window of Rstudio?
Use scp in a terminal.
To put files onto your remote server:
Example: if the files are located locally in ~/mylocalfolder and you want to put them in /home/rstudio/mydata, you would execute in a terminal:
scp ~/mylocalfolder/*.csv ubuntu@<your address>:/home/rstudio/mydata/
Note that if you want to access them as a different user, e.g. rstudio, you need to change the owner of the files. Use chown.
To grab data from your remote server:
Example: if the files are located in /home/rstudio/mydata and you want to put them locally in ~/mylocalfolder, you would use:
scp 'ubuntu@<your address>:/home/rstudio/mydata/*.Rda' ~/mylocalfolder
I use the RStudio AMI all the time and what works for me is to use Dropbox. I can't remember exactly how I did it but I think I may have started the shell from within RStudio and installed Dropbox from the command line.
This link has a little more info:
http://www.louisaslett.com/RStudio_AMI/#comment-1041983219
In a UNIX filesystem, if the inode of the root directory is in memory, what is the sequence of disk operations needed to remove a file in Desktop?
I came across this question while trying to solve an exercise in a textbook, but I could not solve it. Can anyone help me?
If you know much about Unix, can you also tell me what sequence of disk operations is needed to create a file in Desktop?
Use rm to remove files in Unix. e.g.,
rm file_to_delete
or, better yet, if you are uncertain about working in Unix:
rm -i file_to_delete
which will prompt with the name of the file to be deleted to confirm the operation.
The file_to_delete can be in the current directory, or in some other directory as long as the path is properly specified.
See rm man page for more information.
As for creating a file, you can create an empty file with the touch command. E.g.,
touch some_file
will create an empty file named some_file. Again, this can be in the current directory, or any directory with the path specified.
For more information see the touch man page.
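Putting both commands together, a minimal session (throwaway filenames, run in /tmp):

```shell
# Create an empty file with touch; it exists with size 0.
touch /tmp/some_file
ls -l /tmp/some_file

# Create a second file, then remove it with rm.
touch /tmp/file_to_delete
rm /tmp/file_to_delete
```

After this runs, /tmp/some_file remains and /tmp/file_to_delete is gone.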
Your question wasn't quite clear to me, so if this doesn't answer it, please add a comment or (better yet) consider rephrasing your original question, or at least its title (removing a file in Unix).