I am trying to untar a UNIX-based operating system from a .tar.gz file. In order to do so I use the following command:
tar -xvf rootfs.tar.gz -o
The -o flag is there so that the ownership of the files is not preserved (it gave some problems). The problem is that when a symbolic link is untarred, the following message shows up:
Cannot create symlink to `toto': Operation not permitted
Moreover, mknod also gives problems
dev/tty0: Cannot mknod: Operation not permitted
I am on a FAT filesystem. Does anyone know how to untar that file?
Thanks in advance
If the file is a tar.gz you must use:
tar -xvzf rootfs.tar.gz
And note that a FAT filesystem doesn't support symbolic links, so tar has no way to create them on that filesystem, which explains the "Operation not permitted" error.
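For example, a rough sketch would be to extract onto a Linux-native (e.g. ext4) volume instead; the mount point below is just a placeholder:
mkdir -p /mnt/ext4-volume/rootfs
sudo tar -xvzf rootfs.tar.gz -C /mnt/ext4-volume/rootfs    # root is needed for the device nodes, as noted below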
+1 for Ivan's answer.
Please note that flags always go right after the name of the command!
You will need to study "man tar" to see what other options you want, e.g. preserving owner, permissions, creation time, etc.
The correct answer is that if you're trying to untar a UNIX root file system, that's going to include special files such as device nodes (which is why tar is invoking mknod).
To create those successfully, tar must be allowed to run as root. Therefore, the correct answer is to use sudo, like so:
sudo tar -xvzf rootfs.tar.gz
Try this to untar a tar file. Hopefully it will work without any problem, as it solved my issue:
tar -xvvf foo.tar
I was trying to deploy my application on Ubuntu 16.04, so I made a package with the following hierarchy:
Package
|
----bin
    |
    -----application
    -----application.sh
    -----Qt
         |
         -----necessary qt libraries
         -----platforms
Here is the application.sh file -
#!/bin/sh
export LD_LIBRARY_PATH=`pwd`/Qt
./application
When I execute the application.sh file, it tells me that it can't find the libQt5MultimediaWidgets.so.5 file, but it's in the Qt folder. Also, when I print the output of ldd application from the application.sh file after exporting LD_LIBRARY_PATH, it gives me the following output -
Please check the marked parts. Can anyone please explain why the libraries from the Qt folder are not found even after exporting LD_LIBRARY_PATH?
Edit:
So, as suggested by @Zang, I have checked the debug log and here it is -
Please check the marked parts.
It seems like it is actually trying the actual libQt5MultimediaWidgets.so and then reporting that it is unable to find it. Can anyone please help me understand what's happening here?
Edit 2: As per the suggestion from @Tarun, I have run ls -al on my Qt folder. Here is the output -
All files in your Qt directory are actually symlinks to non-existing files in the same directory, therefore they cannot be found.
If you look at the output of your ls -al, these are soft links that you have. Your soft link libQt5MultimediaWidgets.so.5 points to libQt5MultimediaWidgets.so.5.9.2 in the same directory, and that file is not there at all. So you need to either set the correct soft link path or have the file in the same directory.
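For example, if the real library does exist somewhere else on the machine, the link could be recreated to point at it (both paths below are placeholders):
cd /path/to/package/bin/Qt    # the bundled Qt directory from the question
ln -sf /path/to/real/libQt5MultimediaWidgets.so.5.9.2 libQt5MultimediaWidgets.so.5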
First
Could it be that the pwd is not where you assume it is?
You could try adding
# Figure out where the application.sh script is located
scriptpath="$( cd "$(dirname "$0")" ; pwd -P )"
# Make sure our pwd is that location
cd "$scriptpath"
at the top of your script (this assumes a bash shell, from here).
By doing this, all relative paths to the Qt folder will be valid.
Second
Maybe you should consider exporting your new LD_LIBRARY_PATH, like so (from here):
LD_LIBRARY_PATH=whatever
export LD_LIBRARY_PATH
Third
It may be useful to run the ldconfig command so that ld is updated after changing the variable (from here):
sudo ldconfig
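Putting the first two suggestions together, a revised application.sh might look like this (a sketch that assumes the Qt directory sits next to the script, as in your package layout):
#!/bin/bash
# figure out where this script lives and cd there, so relative paths work
scriptpath="$( cd "$(dirname "$0")" ; pwd -P )"
cd "$scriptpath"
# point the dynamic linker at the bundled Qt libraries and export the variable
LD_LIBRARY_PATH="$scriptpath/Qt"
export LD_LIBRARY_PATH
./application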
The file libQt5MultimediaWidgets.so is not present in /Desktop/package/bin/Qt according to the screenshots shown.
I am having trouble symlinking dotfiles. I have a folder in my home directory ~/dotfiles which I have synced to a github repo. I am trying to take my .vimrc file in ~/dotfiles/.vimrc and create a symbolic link to put it at ~/.vimrc. To do this I type in
ln -s ~/dotfiles/.vimrc ~/.vimrc
But when I run that it says
ln: /Users/me/.vimrc: File exists
What am I doing wrong?
That error message means that you already have a file at ~/.vimrc, which ln is refusing to overwrite. Either delete the ~/.vimrc and run ln again or let ln delete it for you by passing the -f option:
ln -s -f ~/dotfiles/.vimrc ~/.vimrc
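If you are not sure what the existing ~/.vimrc is before removing it, a quick check is:
ls -l ~/.vimrc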
There is a better solution for managing dotfiles without using symlinks or any other tool: just a git repo initialized with --bare.
A bare repository is special in that it omits the working directory, so you can create your repo anywhere, set --work-tree=$HOME, and then you don't need to do any extra work to maintain it.
Approach
The first thing to do is create a bare repo:
git init --bare $HOME/.dotfiles
To use this bare repo, you need to specify --git-dir=$HOME/.dotfiles/ and --work-tree=$HOME; it is better to create an alias:
alias dotfiles='/usr/bin/git --git-dir=$HOME/.dotfiles/ --work-tree=$HOME'
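With the alias in place, a common extra step (not part of the original steps here) is to stop git from listing every untracked file in your home directory when you check the status:
dotfiles config --local status.showUntrackedFiles no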
At this point you can use the newly registered dotfiles command to manage the configuration files in your home directory, for example:
# to check the status of the tracked and untracked files
dotfiles status
# to start tracking a file and commit it
dotfiles add .tmux.conf
dotfiles commit -m ".tmux.conf added"
# push new files or changes to the github
dotfiles push origin main
I also use this approach to sync and store my dotfiles; see my dotfiles repository, and you can read Storing dotfiles with Git, where I wrote about managing them across multiple devices.
How to symlink all dotfiles in a directory recursively
Have a dotfiles directory whose contents are structured the same way they should be structured under $HOME.
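For instance, such a directory might contain (a purely hypothetical layout):
~/dotfiles/home/.bashrc
~/dotfiles/home/.vimrc
~/dotfiles/home/.config/git/config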
dotfiles_home=~/dotfiles/home # for example
cp -rsf "$dotfiles_home"/. ~
-r: Recursive, create the necessary directory for each file
-s: Create symlinks instead of copying
-f: Overwrite existing files (previously created symlinks, default .bashrc, etc)
/.: Makes sure cp "copies" the contents of home instead of the home directory itself.
Tips
Just like with ln, if you want no headaches or drama, use an absolute path for the first argument, as in the example above.
Note
This only works with GNU cp (preinstalled on Ubuntu), not POSIX cp. Check your man cp; you can install GNU coreutils if needed.
Thanks
To this and this.
I used tar -cvf sample_directory/* and didn't specify file.tar.gz, so now the Makefile within the folder is in some unreadable format. Is there a way to recover my Makefile?
The Makefile within the folder contains the output from the tar command (tar treated it as the archive name, since it was the first argument after -f), so it's not "some unreadable format", it's tar format. That tar archive won't contain your missing Makefile, though.
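If you want to confirm that, you could inspect the overwritten file (assuming it is the one your glob expanded first):
file sample_directory/Makefile
tar -tvf sample_directory/Makefile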
The comments about recovering the Makefile from your backups or from your version control system are apt. This is in fact what you need to do.
If you don't have a backup or the Makefile wasn't checked in to a version control system, then there isn't a feasible way to recover its contents.
Aside from the issue of your poor lost Makefile, a piece of advice about using tar: never tar up a bunch of individual files inside a directory; always tar up the directory itself instead. There is not much more annoying than untarring an archive that contains a big bunch of files instead of a single directory (which then contains the files). Doing that makes a mess by littering files all over whatever directory happens to be the current one. Please be nice to whoever is going to extract your tar files (which might be yourself, later on!), follow convention, and tar up complete directories:
tar -czf file.tar.gz sample_directory
As a bonus, if you do it that way, and you forget the output filename like this:
tar -czf sample_directory
You won't squash anything, you'll just get an error.
When attempting to run R, I get this error:
Fatal error: cannot mkdir R_TempDir
I found two possible fixes for this problem by googling around. The first was to ensure my tmp directory didn't contain a load of subdirectories - it doesn't and it's virtually empty. The second fix was to ensure that TMP, TMPDIR, and R_USER in my environment weren't set to non-existent paths - I didn't even have these set. Therefore, I created a tmp directory in my home directory and added its path to TMP in my environment. I was able to run R once, and then I got the fatal error again. Nothing was in the TMP directory that I had set in my environment. Does anyone know what else I can try? Thanks.
Dirk is right, but misses a point: If /tmp is full, you can't create subdirectories there. Try
df /tmp
I just hit this on a shared server, where /tmp is mounted on its own partition and is shared by many users. In this particular case, you can't really see whose fault it is, because permissions prevent you from seeing who is filling up the tmp partition. You basically have to ask the sysadmins to figure it out.
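If you do have sufficient permissions, something along these lines (a sketch) shows which entries under /tmp take the most space:
sudo du -sh /tmp/* 2>/dev/null | sort -h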
Your default temporary directory appears to have the wrong permissions. Here I have
$ ls -ld /tmp
drwxrwxrwt 22 root root 4096 2011-06-10 09:17 /tmp
The key part is that 'everybody' can read and write. You need that too. It certainly can contain subdirectories.
Are you running something like AppArmor or SE Linux?
Edit 2011-07-21: As someone just deemed it necessary to downvote this answer -- help(tempfile) is very clear on what values tmpdir (the default directory for temporary files or directories) tries:
By default, 'tmpdir' will be the directory given by 'tempdir()'. This
will be a subdirectory of the temporary directory found by the
following rule. The environment variables 'TMPDIR', 'TMP' and 'TEMP'
are checked in turn and the first found which points to a writable
directory is used: if none succeeds '/tmp' is used.
So my money is on checking those three environment variables. But AppArmor and SELinux have shown to be an issue too on some distributions.
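To see which of those variables R actually picks up, and which temporary directory it ends up with, a quick check from the shell (this assumes Rscript is on your PATH) is:
Rscript -e 'print(Sys.getenv(c("TMPDIR", "TMP", "TEMP"))); print(tempdir())'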
Go to your user directory, create a file called .Renviron, add the following line, save it, and reopen RStudio, Rgui, or Rterm:
TMP = '<path to folder where Everyone has full control>'
This worked with me on Windows 7
If you are running one of the rocker docker images (e.g., rocker/verse), you need to map a local directory to the /tmp directory in the container. For example,
docker run --rm -v ${PWD}/tmp:/tmp -p 8787:8787 -e PASSWORD=password rocker/verse:4.0.4
where ${PWD} for me is ~/devProjs/r, and I created a /tmp directory inside it, so that the container's /tmp is mapped to my ~/devProjs/r/tmp directory.
Just had this issue and finally solved it. It was simply a Windows permission issue. Go to the environment variables and find the location of the temp folders. Then right-click on the folder > Properties > Security > Advanced > change Everyone to Full control > tick "Replace all child object permission entries with inheritable permission entries from this object" > OK > OK.
This will also happen when your computer is completely, utterly out of space. Currently, my Mac has 0 kb free and it's causing this error. Freeing up some space solved the problem.
Check the user account with which you are launching RStudio. Then check the TMP (system environment variable) for its location. If the user launching RStudio has write access to those directories, you will not face this issue. Since you are facing it, all you have to do is change the permissions so that the user has write access to those directories.
Running R on a CentOS system, I had the same issue. I had to remove all R folders from the tmp directory; usually they are of the form /tmp/Rtmp*****,
so I deleted those folders from /tmp by running the below:
cd into the /tmp directory and run rm -rf Rtmp*
The R shell worked for me afterwards.
I had this issue, but the solution was slightly different. I run R on a Linux server - it turned out that R had made a whole load of tempdirs when running cron jobs that had hung and not been cleaned up, clogging up the root /tmp directory with ~300 RtmpXXXXXX folders.
Using terminal access, I navigated to the /tmp folder and did a recursive find/rm, deleting all of them using this command:
find . -type d -name 'Rtmp*' -exec rm -r -v {} \;
After this, Rstudio took a while to load up, but was once again happy and my scripts began to run again.
You will need the appropriate admin rights for this solution. And always be careful when running rm -r, especially with a find command, as it's easy to remove things unexpectedly.
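If you want to preview what would be deleted first, a dry run that only lists the matching directories (assuming the same /tmp location) is:
find /tmp -maxdepth 1 -type d -name 'Rtmp*'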
When it comes to deleting tmp files, first work out whether the tmp files are on the server (remote) or local.
If they are on the remote, first check df /tmp on the server to see what is using the most storage.
Then use rm <file_name> to remove the files that are causing the blockage; on the remote, that would be rm /tmp/<file_name>.
Moreover, you can also refer to https://support.rstudio.com/hc/en-us/articles/218730228-Resetting-a-user-s-state-on-RStudio-Server
I'm getting a confusing error from rsync and the initial things I'm finding from web searches (as well as all the usual chmod'ing) are not solving it:
rsync: failed to set times on "/foo/bar": Operation not permitted (1)
rsync error: some files could not be transferred (code 23)
at /SourceCache/rsync/rsync-35.2/rsync/main.c(992) [sender=2.6.9]
It seems to be working despite that error, but it would be nice to get rid of that.
If /foo/bar is on NFS (or possibly some FUSE filesystem), that might be the problem.
Either way, adding -O / --omit-dir-times to your command line will avoid it trying to set modification times on directories.
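For example (a sketch; the source and destination paths are placeholders):
rsync -av --omit-dir-times /foo/bar/ user@remotehost:/backup/bar/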
The issue is probably due to /foo/bar not being owned by the writing process on a remote darwin (OS X) system.
A solution to the issue is to set adequate owner on the remote site.
Since this answer has been voted, and therefore has been hopefully useful to someone, I'm extending it to make it clearer.
The reason why this happens is that rsync is probably trying to set an arbitrary modification time (mtime) when copying files.
In order to do this, Darwin's utime() system function requires that the writing process's effective uid is either the same as the file's uid or the superuser's; see the Open Group's utime page.
Check this discussion on rsync mailing list as reference.
As @racl101 has commented on an answer, this problem might be related to the folder owner. The rsync command should be run by the same user that owns the folder. If it's not the same, you can change it:
chown -R userCorrect /remote/path/to/foo/bar
I had the same problem. For me the solution was to delete the remote file and let rsync create it again.
The problem in my case was that the "receiver mountpoint" was incorrectly mounted. It was in read-only mode (for some strange reason).
It looked like rsync was copying the files, but it was not.
I checked my fstab file, changed the mount options back to defaults, re-mounted the file system, and executed rsync again. All fine then.
I've seen that problem when I'm writing to a filesystem which doesn't (properly) handle times -- I think SMB shares or FAT or something.
What is your target filesystem?
This happened to me on a partition of type xfs (rw,relatime,seclabel,attr2,inode64,noquota), where the directories were owned by another user in a group we were both members of. The group membership was already established before login, and the whole directory structure was group-writeable. I had manually run sudo chown -R otheruser.group directory and sudo chmod -R g+rw directory to confirm this.
I still have no idea why it didn't work originally, but taking ownership with sudo chown -R myuser.group directory fixed it. Perhaps it is SELinux-related?
I came across this problem as well, and the issue I was having was a permissions issue with the root folder that contained the files I was trying to send over. I don't care about that root folder being included with rsync; I just care what's in it. The error was coming from my command, where I needed to specify an additional / at the end of the destination. If you do not have that trailing slash, rsync will attempt to set times on the folder itself.
Example:
This will attempt to set times on html
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html
This will not
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html/
This error might also pop up if you run the rsync process on files that were not recently modified in the source or destination... because it can't set the time for the recently modified files.
I ran into this error trying to fix timestamps on a new macOS Monterey install, after the Migration Assistant decided to set all of them to the time the copy operation occurred instead of the original files' times.
anddam's answer did not help me, as the remote user used in the rsync command did match the directories and files owner.
After further research, I realised that I had no access to the Mac's Documents directory over SSH (error ls: Documents: Operation not permitted).
I managed to fix the problem by opening System Preferences on the Mac, selecting Security & Privacy, going to the Privacy tab, selecting Full Disk Access, and checking the box next to sshd-keygen-wrapper.
It could be that you don't have privileges to some of the files. From an administrator account, try "sudo rsync -av". Alternately, enable the root account and sign in as root. That should allow you to completely hose your system and brute-force your rsync! ;-) I'm not sure if the above-mentioned --extended-attributes will help, but I threw it in too, just for good measure.