I have a directory with around 50,000 .jpg images.
Let's call this directory "imageDir", and the empty directory I'm trying to copy to "outputDir".
When I execute:
cp imageDir/* outputDir/
around 30,000 images in, I get:
cp: cannot open `imageDir/234235.jpg' for reading: Bad address
(this does not always occur on the same file) and then the copy operation will cease without copying the rest of the files.
I tried adding the -R option after reading that it would continue the copy even if errors occurred:
cp -R imageDir/* outputDir/
but this did nothing to solve my problem.
Is there some sort of limit to the number of files you can successfully copy at a time?
Why am I seeing this error, and how can I solve it? (If it happened for just a few photos here and there, I'd be fine with it, as long as the rest completed!)
Additionally: this is using Cygwin on Windows 7.
Thanks!
Looks like an issue with Cygwin to me. Since you said it happens randomly, you might just want to try again when it happens. Here's a script (untested) that will do that:
#!/bin/sh
# Retry each file until cp succeeds.
for i in imageDir/*
do
    cp "$i" outputDir/
    while [ $? -ne 0 ]
    do
        cp "$i" outputDir/
    done
done
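If you save it as, say, copy_retry.sh (a name chosen here just for illustration), run it from the directory that contains imageDir and outputDir:
sh copy_retry.sh
Be aware that the loop retries a failing file indefinitely, so a file that can never be read will stall the script on that file.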
I know this is a basic question but I'm missing something fundamental about makefiles.
Take this simple rule/action:
doc: ${SRC_DIR}/doc/dir1/file1.pdf ${SRC_DIR}/doc/dir1/file2.pdf
cp $? ${DEST_DIR}/doc/
The first time I run it, it copies file1.pdf and file2.pdf to the destination doc/ directory. Perfect. I'm expecting it to do nothing the next time I run it. The source files haven't changed; aren't they a dependency? But when I run it again I get:
cp: cannot create regular file .....: Permission denied
So, two questions:
1) Why is it trying to do it again? When I run make -d, I see it eventually says "No need to remake target" for .../file1.pdf and .../file2.pdf, but then it says it "must remake target 'doc'".
If it doesn't need to make either PDF file, why does it need to make doc?
2) Say the PDF files had changed in the source. They are read-only, though, so cp gets the permission-denied error. How do you get around this?
A make rule:
target: prereq0 prereq1 ...
command
...
says that target needs to be (re)made if it does not exist or is older than
any of the prerequisites prereq0 prereq1 ..., and that target shall be
(re)made by running the recipe command ....
Your rule:
doc: ${SRC_DIR}/doc/dir1/file1.pdf ${SRC_DIR}/doc/dir1/file2.pdf
cp $? ${DEST_DIR}/doc/
never creates a file or directory doc, so doc will never exist when
the rule is evaluated (unless you create doc by other means), so the recipe
will always be run.
The kind of target that I believe you want doc to be is a phony target,
but you are going about it wrongly. A reasonable makefile for the purpose would
be:
SRC_DIR := .
DEST_DIR := .
PDFS := file1.pdf file2.pdf
PDF_TARGS := $(patsubst %,$(DEST_DIR)/doc/%,$(PDFS))
.PHONY: doc clean
doc: $(PDF_TARGS)
$(DEST_DIR)/doc/%.pdf: $(SRC_DIR)/doc/dir1/%.pdf
	cp $< $@
clean:
	rm -f $(PDF_TARGS)
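A quick way to see the behaviour (a sketch, assuming GNU make and the layout above with SRC_DIR and DEST_DIR both left as .):
make doc                     # copies any PDFs that are missing or out of date
make doc                     # run again: nothing to copy, no cp is executed
touch doc/dir1/file1.pdf     # pretend one source PDF changed
make doc                     # only file1.pdf is copied again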
I recommend The GNU Make documentation
As for your second problem, how to overwrite "readonly" files, it is unrelated to make.
You cannot overwrite files to which you do not have write permission, regardless
of the means by which you try to do it. You must get write permission to any files
that you need to write to. It is a system administration matter. If you do not
understand file permissions, you may find help at the sister sites Unix & Linux or Server Fault.
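For example, to inspect and then grant yourself write permission on the destination copies (a sketch; ${DEST_DIR} stands for the destination directory from your makefile):
ls -l ${DEST_DIR}/doc/
chmod u+w ${DEST_DIR}/doc/*.pdf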
I'm copying from one NAS to another (Netgear ReadyNAS -> QNAP). I tried pulling the files by running rsync on the QNAP, and that took forever, so I'm currently trying to push them from the Netgear. The command I'm using is:
rsync -avhr /sauce/folder admin@xxx.xxx.xxx.xxx:/dest/folder
I'm seeing:
sending incremental file list
and nothing after that.
The transfer is 577 GB and there are a lot of files; however, I'm seeing almost no network traffic on the QNAP (it fluctuates between 0 KB/s and 6 KB/s), so it looks like it is not sending any kind of incremental file list.
All folders are created on the destination, and then nothing happens after that.
Anyone have any thoughts, or any ideas on whether there is a better way to copy files from a ReadyNAS to a QNAP?
The documentation for -v says increase verbosity.
If the only thing you're interested in is seeing more progress, you can chain -v together like so:
rsync -avvvhr /sauce/folder/ admin@xxx.xxx.xxx.xxx:/dest/folder/
and you should see more interesting progress.
This could tell you whether your copying requirements (-a) are stricter than you need and are thus taking a lot of unnecessary processing time.
For example, I attempted to use -a, which is equivalent to -rlptgoD, on over 100,000 images. Sending the incremental file list did not finish, even overnight.
After changing it to
rsync -rtvvv /sauce/folder/ admin@xxx.xxx.xxx.xxx:/dest/folder/
sending the incremental file list became much faster, and I could see file transfers within 15 minutes.
After leaving it overnight and it doing nothing, I came in and tried again.
The command that worked appended a '*' to the end of the source path, so this is what worked:
rsync -avhr /sauce/folder/* admin@xxx.xxx.xxx.xxx:/dest/folder
If anyone else has trouble, give this a shot.
My encounter with this was a large file that was incomplete but considered "finished transfer".
I deleted the large (incomplete) file on the remote side and did another sync, which appears to have resolved the issue.
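A sketch of that recovery, reusing the host and destination from the question (the path to the incomplete file is hypothetical):
ssh admin@xxx.xxx.xxx.xxx 'rm /dest/folder/path/to/incomplete_file'
rsync -avhr /sauce/folder admin@xxx.xxx.xxx.xxx:/dest/folder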
I am using QNAP 1 as the production system and QNAP 2 as a backup server. On QNAP 1, I use the following script as a cronjob to copy files at regular intervals to the backup QNAP. Maybe you could try this:
#!/bin/sh
# Build a mail report, run the rsync backup, then send the report.
DATUM=`date '+%Y-%m-%d'`
MAILFILE="/tmp/rsync_svn.txt"
EMAIL="my.mail@mail.com"
echo "Subject: SVN Sync" > $MAILFILE
echo "From: $EMAIL" >> $MAILFILE
echo "To: $EMAIL" >> $MAILFILE
echo "" >> $MAILFILE
echo "-----------------------------------" >> $MAILFILE
# Copy the subversion share to the backup QNAP, logging the output into the mail file
rsync -e ssh -av /share/MD0_DATA/subversion 192.168.2.201:/share/HDA_DATA/subversion_backup >> $MAILFILE 2>&1
echo "-----------------------------------" >> $MAILFILE
sendmail -t < $MAILFILE
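To run it at regular intervals, register the script with cron; a hypothetical crontab entry that runs it every night at 02:00 (assuming the script is saved as /share/scripts/rsync_backup.sh) would be:
0 2 * * * /share/scripts/rsync_backup.sh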
I encountered the same thing and determined that it was because rsync was attempting to calculate checksums for the comparison, which is very slow. By default rsync does not do this: it decides whether two files are identical using the file size and modification time.
To avoid this, make sure you are not passing -c / --checksum, so that rsync can use its default quick check.
This will be a problem with large files or large numbers of files, so it may look like an issue with the file list, but most often is not.
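For comparison (a sketch reusing the paths from the question): the first command forces a full checksum of every file, the second relies on the default size-and-modification-time quick check:
rsync -avhc /sauce/folder admin@xxx.xxx.xxx.xxx:/dest/folder    # -c: checksum everything, slow for many files
rsync -avh /sauce/folder admin@xxx.xxx.xxx.xxx:/dest/folder     # default quick check: size + mtime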
I am currently trying to remove a number of files from my root directory. There are about 110 files with almost the exact same file name.
The file names appear as wp-cron.php?doing_wp_cron=1.93, where the trailing number (93 here) is an integer from 1 to 110.
However, when I run:
sudo rm /root/wp-cron.php?doing_wp_cron=1.*
it actually looks for a file with a literal asterisk * in the filename, leaving me with a file-not-found error.
What is the correct notation for removing a series of files using wildcard notation?
NOTE: I have already tried quoting the file path with both single (') and double (") quotes. This was to no avail.
Any thoughts on the matter?
Take a look at the permissions on the /root directory with ls -ld /root; typically a non-root user will not have r-x permissions, which means they cannot read the directory listing.
In your command sudo rm /root/wp-cron.php?doing_wp_cron=1.* the filename expansion happens in the shell, which is running as your non-root user. The expansion fails to produce the individual filenames because you do not have permission to read /root.
The shell then execs sudo\0rm\0/root/wp-cron.php?doing_wp_cron=1.*\0. (Three separate, explicit arguments).
sudo, after satisfying its conditions, execs rm\0/root/wp-cron.php?doing_wp_cron=1.*\0.
rm runs and attempts to unlink the literal path /root/wp-cron.php?doing_wp_cron=1.*, failing as you've seen.
How to remove the files depends on your sudo permissions. If permitted, you may run a bash sub-process to do the filename expansion as root:
sudo bash -c "rm /root/a*"
If not permitted, do the sudo rm with explicit filenames.
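Applied to the pattern from the question, the sub-process approach would look like this (the single quotes keep your own non-root shell from attempting the expansion, leaving it to the root shell):
sudo bash -c 'rm /root/wp-cron.php?doing_wp_cron=1.*'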
Brandon,
I agree with @arkascha. That glob should match, so something is amiss here. Do you get the appropriate list of files if you use a different binary, say ls? Try this:
ls /root/wp-cron.php?doing_wp_cron=1.*
If that returns the full list of files, then you know there's something funny with your environment regarding rm. This could be an alias as suggested.
If you cannot determine what is different or wrong with your environment, you could loop over the matching files and remove each one as a work-around:
for file in /root/wp-cron.php?doing_wp_cron=1.*
do
    rm "$file"
done
I have a simple egrep command that searches through all the files in the current directory for lines that contain the word "error":
egrep -i "error" *
This command goes through the sub-directories as well. Here is a sample of what the whole folder looks like:
/Logfile_20120630_030000_ID1.log
/Logfile_20120630_030001_ID2.log
/Logfile_20120630_030005_ID3.log
/subfolder/Logfile_20120630_031000_Errors_A3.log
/subfolder/Logfile_20120630_031001_Errors_A3.log
/subfolder/Logfile_20120630_031002_Errors_A3.log
/subfolder/Logfile_20120630_031003_Errors_A3.log
The logfiles at the top directory contain "error" lines. But the logfiles in the "subfolder" directory do not contain lines with "error". (only in the filename)
So the problem is that the egrep command seems to be looking at the information within "subfolder". My result is a chunk of what seems to be a binary block, followed by the text lines that contain the word "error" from the top-level logfiles.
If I delete all the files underneath "subfolder" but do not delete the folder itself, I get the exact same results.
So does Unix keep file history information inside a folder?
The problem was corrected by running:
find . -type f | egrep -i "error" *
But I still don't understand why it was a problem. I'm running the C shell on SunOS.
egrep -i error *
The * metacharacter matches ANY file name. Directories are files, too. * is expanded by the shell into any and all files in the current directory; this is traditionally called globbing.
set noglob
turns off that behavior. However, it is unlikely there is a file literally named * in your directory, so with globbing off the command would find no files at all. By the way, do not create a file named * to test this, because files named * can cause all kinds of interesting and unwanted things to happen. Think about what happens when you try to delete it: rm '*' would be the right command, but if you or someone else ran rm * unthinkingly, you would have problems...
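Since every file name in the question ends in .log, the simplest way to keep the directory entry "subfolder" out of the match is to tighten the glob (a sketch):
egrep -i "error" *.log
If you need to be explicit about searching only regular files in the current directory, a portable alternative (assuming the file names contain no spaces) is:
find . ! -name . -prune -type f -print | xargs egrep -i "error"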
When attempting to run R, I get this error:
Fatal error: cannot mkdir R_TempDir
I found two possible fixes for this problem by googling around. The first was to ensure my tmp directory didn't contain a load of subdirectories - it doesn't and it's virtually empty. The second fix was to ensure that TMP, TMPDIR, and R_USER in my environment weren't set to non-existent paths - I didn't even have these set. Therefore, I created a tmp directory in my home directory and added its path to TMP in my environment. I was able to run R once, and then I got the fatal error again. Nothing was in the TMP directory that I set in my environment. Does anyone know what else I can try? Thanks.
Dirk is right, but misses a point: If /tmp is full, you can't create subdirectories there. Try
df /tmp
I just hit this on a shared server, where /tmp is mounted on its own partition and is shared by many users. In this particular case you can't really see whose fault it is, because permissions prevent you from seeing who is filling up the tmp partition. You basically have to ask the sysadmins to figure it out.
Your default temporary directory appears to have the wrong permissions. Here I have
$ ls -ld /tmp
drwxrwxrwt 22 root root 4096 2011-06-10 09:17 /tmp
The key part is 'everybody' can read or write. You need that too. It certainly can contain subdirectories.
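If your /tmp does not show those permissions, the usual fix (as root; a sketch) is to restore mode 1777, which corresponds exactly to the drwxrwxrwt shown above:
chmod 1777 /tmp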
Are you running something like AppArmor or SE Linux?
Edit 2011-07-21: As someone just deemed it necessary to downvote this answer -- help(tempfile) is very clear on what values tmpdir (the default directory for temporary files or directories) tries:
By default, 'tmpdir' will be the directory given by 'tempdir()'. This
will be a subdirectory of the temporary directory found by the
following rule. The environment variables 'TMPDIR', 'TMP' and 'TEMP'
are checked in turn and the first found which points to a writable
directory is used: if none succeeds '/tmp' is used.
So my money is on checking those three environment variables. But AppArmor and SELinux have been shown to be an issue too on some distributions.
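A quick way to check all three variables, plus the free space, from a shell before starting R (a sketch):
echo "TMPDIR=$TMPDIR TMP=$TMP TEMP=$TEMP"
df /tmp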
Go to your user directory, create a file called .Renviron, add the following line, save it, and reopen RStudio, Rgui, or Rterm:
TMP = '<path to folder where Everyone has full control>'
This worked for me on Windows 7.
If you are running one of the rocker docker images (e.g., rocker/verse), you need to map a local directory to the /tmp directory in the container. For example,
docker run --rm -v ${PWD}/tmp:/tmp -p 8787:8787 -e PASSWORD=password rocker/verse:4.0.4
where ${PWD} for me is ~/devProjs/r, and I created a tmp directory inside it, so that the container's /tmp is mapped to my ~/devProjs/r/tmp directory.
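To confirm the mapping works (a sketch; substitute your own container name or ID), create a file under the container's /tmp and look for it on the host:
docker exec <container_id> touch /tmp/mount_check
ls ${PWD}/tmp/mount_check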
Just had this issue and finally solved it. It was simply a Windows permission issue. Go to your environment variables and find the location of the temp folders. Then right-click on the folder > Properties > Security > Advanced > change Everyone to Full control > tick "replace all child object permission entries with inheritable permission entries from this object" > OK > OK.
This will also happen when your computer is completely, utterly out of space. Currently, my Mac has 0 kb free and it's causing this error. Freeing up some space solved the problem.
Check the user account with which you are launching RStudio. Then check the TMP system environment variable for its location. If the user launching RStudio has write access to those directories, you will not face this issue. Since you are facing it, all you have to do is change the permissions so that the user has write access to those directories.
Running R on a CentOS system, I had the same issue. I had to remove all the R folders from the tmp directory. Usually the R folders are of the form /tmp/Rtmp*****, so I deleted them from /tmp by running the commands below.
cd /tmp
rm -rf Rtmp*
The R shell worked for me afterwards.
I had this issue, but the solution was slightly different. I run R on a Linux server, and it turned out that R had made a whole load of temp dirs when running jobs with cron that had hung and not been cleaned up, clogging up the root /tmp directory with ~300 RtmpXXXXXX folders.
Using terminal access, I navigated to the /tmp folder did a recursive find/rm - deleting all of them using this command:
find . -type d -name 'Rtmp*' -exec rm -r -v {} \;
After this, Rstudio took a while to load up, but was once again happy and my scripts began to run again.
You will need the appropriate admin rights for this solution. And always be careful when running rm -r, especially with a find command, as it's easy to remove things unexpectedly.
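To preview which directories the command would remove, you can first run the same find without the destructive -exec action (a sketch):
find . -type d -name 'Rtmp*'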
When it comes to deleting the tmp files, first work out whether the files are on your local machine or on a remote server.
Check df /tmp (locally, or on the server) to see what is using the most storage.
Then use rm <file_name> to remove the files that are causing the blockage; on the remote side, that would be rm /tmp/<file_name>.
You can also refer to https://support.rstudio.com/hc/en-us/articles/218730228-Resetting-a-user-s-state-on-RStudio-Server