I am running Neutrino 6.4.0 with an attached standard 2.5", 250 GB Toshiba hard disk with 6 pre-existing qnx6 partitions. All of them can be mounted read-only; however, I cannot mount any of them read/write. For instance:
mount -r -t qnx6 /dev/hd1t77.3 /home/p3
works fine, while
mount -t qnx6 /dev/hd1t77.3 /home/p3
returns the following:
mount: Can't mount /home/p3 (type qnx6)
mount: Possible reason: Read-only file system
I have even tried different sync options (-o sync=ignore and -o sync=optional), to no avail.
Interestingly, I have created an additional partition on the same disk using mkqnx6fs /dev/hd1t77.6, and that partition CAN be mounted read/write.
My question is, what might be causing the existing partitions to be read-only, and is there any way to make them read-write?
Your system defines those partitions as qnx4 (t77). You should probably try mounting them with the qnx4 type. See the official help for details.
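For example, reusing the device and mount point from the question, that would be something along the lines of:
mount -t qnx4 /dev/hd1t77.3 /home/p3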
Related
I am trying to copy a directory full of directories and small files to a new server for an app migration. rsync is always my go-to tool for this type of migration, but this time it is not working as expected.
The directory has 174,412 files and is 136G in size. Based on this, I created a 256G disk for them on the new server.
The issue is that when I rsync'd the files over to the new server, the new partition ran out of space before all the files were copied.
I did some tests with a bigger destination disk on my test machine, and when it finishes the total size on the new disk is 272G:
time sudo rsync -avh /mnt/dotcms/* /data2/
sent 291.61G bytes received 2.85M bytes 51.75M bytes/sec
total size is 291.52G speedup is 1.00
df -h /data2
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/data2vg-data2lv 425G 272G 154G 64% /data2
The source is on a NAS and the new target is an XFS file system, so at first I thought it might be a block-size issue. But then I used the cp command and the copy came out at exactly the source size:
time sudo cp -av /mnt/dotcms/* /data
df -h /data2
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/data2vg-data2lv 425G 136G 290G 32% /data2
Why is rsync increasing the space used?
According to the documentation, dotcms makes use of hard links, so you need to give rsync the -H option to preserve them. Note that GNU's cp -av preserves hard links, so it doesn't have this problem.
Other rsync options you should consider using include:
-H, --hard-links : preserve hard links
-A, --acls : preserve ACLs (implies --perms)
-X, --xattrs : preserve extended attributes
-S, --sparse : turn sequences of nulls into sparse blocks
--delete : delete extraneous files from destination dirs
This assumes you are running as root and that the destination is supposed to have the same users/groups as the source. If the users and groups are not the same, then Cyrus's alternative command line using --numeric-ids may be more appropriate.
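For example, using the paths from the question, the transfer could look roughly like this (a sketch rather than the exact migration command; -H is the key addition):
sudo rsync -avhH /mnt/dotcms/ /data2/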
I am attempting to mount an Azure Storage container on a RHEL server that can be written to by a regular user account. I am not the most familiar with Linux, but the command seems simple:
mount -t cifs <account name> /mnt/disk -o umask=<umask>,uid=<uid>,username=<Containers master username>,password="<password>",vers=3.0
But this is throwing errors, and I'm assuming it's a syntax error. I have been searching all over, but I haven't found a good resource for this.
OK, so I read the error and noticed that it was pointing me to a manual page... It turns out the gid and umask options are not required; specifying the uid is enough.
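For reference, the simplified form, keeping the question's placeholders, would be something like:
mount -t cifs <account name> /mnt/disk -o uid=<uid>,username=<Containers master username>,password="<password>",vers=3.0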
I am configuring awscli
I run the following command:
[bharthan#pchirmpc007 ~]$ aws configure
AWS Access Key ID [None]: adfasdfadfasdfasdfasdf
AWS Secret Access Key [None]: adfasdfasdfasdfasdfasdfasd
Default region name [None]: us-east-1
Default output format [None]: json
It is giving me the following error:
[Errno 5] Input/output error
Any suggestions as to what may be the reason?
You may have some bad sectors on the target HDD.
To check sda1 volume for bad sectors in Linux run fsck -c /dev/sda1. For drive C: in Windows it should be chkdsk c: /f /r.
IMHO the chkdsk way will be more suitable, as it will remap bad blocks on the HDD, while Linux fsck simply marks such blocks as unusable in the current file system.
Quote from man fsck.ext2:
-c This option causes e2fsck to use the badblocks(8) program to do a read-only scan of the device in order to find any bad blocks. If any bad blocks are found, they are added to the bad block inode to prevent them from being allocated to a file or directory. If this option is specified twice, then the bad block scan will be done using a non-destructive read-write test.
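So, for the non-destructive read-write test mentioned above, the flag is simply given twice, e.g.:
e2fsck -cc /dev/sda1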
I want to store the rsync password in a file and use the --password-file option that comes with rsync. I don't want to use SSH public/private key authentication. I have tried this command:
rsync -avz --progress --password-file=pass.txt source destination
This says:
The --password-file option may only be used when accessing an rsync daemon.
So, I tried using:
rsync -avz --progress --password-file=pass.txt source destination rsyncd --daemon
But this returns various errors like unknown options. Is my syntax correct? How do I set up an rsync daemon on my Debian machine?
That is correct: --password-file is only applicable when connecting to an rsync daemon.
You probably haven't set the password in the daemon itself, though; the password you set there and the one you use during the call must match.
Edit /etc/rsyncd.secrets, set the owner/group of that file to root:root, and restrict its permissions (e.g. chmod 600); with rsync's default strict modes, a world-readable secrets file will be rejected.
#/etc/rsyncd.secrets
root:YourSecretestPassword
To connect to an rsync daemon, use a double colon (instead of the single colon used with SSH) followed by the module name, and then the file or folder to synchronize:
RSYNC_PASSWORD="YourSecretestPassword"; rsync -rtv user@remotehost::module/source/ destination/
NOTE:
this implies forgoing SSH encryption; though the password itself is not sent across the network in plain text, your data is ...
this is already insecure as is; never use the same password as any of your users' accounts.
For a better understanding of its inner workings (how to give specific IPs/processes the ability to upload to specified areas of the filesystem without the need for a user account): http://transamrit.net/docs/rsync/
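As a rough sketch of the daemon side (the module name and path below are placeholders, not values from the question), /etc/rsyncd.conf could contain:
#/etc/rsyncd.conf
[module]
    path = /srv/rsync/module
    auth users = root
    secrets file = /etc/rsyncd.secrets
    read only = false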
After trying for a while, I got this to work. Since I'm copying from my live server (and a router's data) to my local server on my laptop as a backup user, the unencrypted password is not a problem; it is secured over a wired connection on my laptop at home. First you need to install sshpass (on CentOS, with yum install sshpass), then create a backup user and assign it a temporary password. I listed the -p option in case your SSH port is different from the default.
sshpass -p 'password' rsync -vaurP -e 'ssh -p 2222' backup@???.your.ip.???:/somedir/public_data/temp/ /your/localdata/temp
Understand that SSH with RSA keys is the better permanent alternative and all that, but this is a quick way to back up and restore on the go. It works if you are not too concerned about security and more concerned about getting your data backed up locally, as in an emergency or data recovery. You can change the backup user's password once the backup is completed. It is a lot faster to set up when your servers change IPs and users and are under constant modification (routers changing config, non-static IPs, routers that are not local where you are backing up clients' servers and don't always have SSH access; some of my clients don't even have SSH installed and don't want the hassle of creating public keys, and on some servers you only have access on a temporary basis). By the way, if you want to do the restore, just reverse the roles: from the same command shell you can reverse the order of the target and source directories and create another backup user with the same temporary password on the target. After finishing, delete the backup user or change its password on the target and/or source servers. You can protect this even further, as I have done, by replacing the password with a one-line file using a bash script in a multi-server environment. An alternative is to use the -f option so the password does not show up in the bash history: -f "/path/to/passwordfile". Regards
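Roughly, the restore just swaps the source and target in the same command (same assumptions as above):
sshpass -p 'password' rsync -vaurP -e 'ssh -p 2222' /your/localdata/temp backup@???.your.ip.???:/somedir/public_data/temp/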
NOTE: If you want to update only modified files, then you should use these parameters: -h -v -r -P -t, as described here: https://unix.stackexchange.com/questions/67539/how-to-rsync-only-new-files
rsync -arv -e \
"sshpass -f '/your/pass.txt' ssh -o StrictHostKeyChecking=no" \
--progress /your/source id@IP:/your/destination
You may have to install sshpass if you don't already have it.
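On Debian or Ubuntu, for example:
sudo apt-get install sshpass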
I have a Windows shared folder named \\mymachine\sf and I want to mount it as a device on Ubuntu. I use the smbmount command as below:
smbmount //mymachine/sf /mnt/sf -o <username>
The output is like:
retrying with upper case share name
mount error(6): No such device or address
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I'm sure the device exists and mymachine responds to ping.
Any idea?
Double check that the share exists and is the name you expect with:
smbclient -L //mymachine -U <username>
Also double check that the directory your share points to (as mentioned in smb.conf) actually exists on the server/host. This is one situation where you will receive that error, despite smbclient -L //hostname giving reasonable output.
Make sure that the directory the samba share points to exists on the server side as well (might have been deleted or mount might have failed at boot). smbclient -L //mymachine -U <username> lists shares as available even though they're not available!
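Since the error message points at mount.cifs(8), it may also be worth trying the cifs mount directly, along these lines with the question's share and username:
mount -t cifs //mymachine/sf /mnt/sf -o username=<username>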