Yesterday I was very tired, and a friend of mine was being rather distracting. I had to put a Linux ISO on a USB drive.
I had temporarily stored all of my data on a single device, a 5 TB drive. This device was/is
encrypted using:
cryptsetup luksFormat ...
I mounted this device to /a and then ran dd of a Linux ISO with the 5 TB drive as the output device. The second I hit Enter I realized my mistake; I pressed Ctrl+C and pulled the plug on the drive.
After this I had a complete mental breakdown and started hyperventilating.
Anyhow, I have read that even if data is overwritten, there is still a chance of reading the data beneath the overwritten part.
I assume this is far more complicated than simply running testdisk (maybe testdisk can do this, I have no clue).
The entire process from yesterday can be seen below:
:/a/1master-targz/iso# ls
debian-8.1.0-amd64-xfce-CD-1.iso FreeBSD-11.0-RELEASE-amd64-disc1.iso
linuxmint-18.1-cinnamon-32bit.iso nonpaelubuntu.iso
:/a/1master-targz/iso# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4.6T 0 disk
└─sda 254:0 0 4.6T 0 crypt /a
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 648M 0 part
└─sdb2 8:18 1 416K 0 part
:/a/1master-targz/iso# dd if=linuxmint-18.1-
linuxmint-18.1-cinnamon-32bit.iso linuxmint-18.1-mate-32bit.iso
linuxmint-18.1-mate-64bit.iso
:/a/1master-targz/iso# dd if=linuxmint-18.1-cinnamon-32bit.iso 5~^C
:/a/1master-targz/iso# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4.6T 0 disk
└─sda 254:0 0 4.6T 0 crypt /a
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 648M 0 part
└─sdb2 8:18 1 416K 0 part
:/a/1master-targz/iso# dd if=linuxmint-18.1-cinnamon-32bit.iso of=/dev/sda bs=512k
10+1 records in
10+1 records out
5685920 bytes (5.7 MB, 5.4 MiB) copied, 0.81171 s, 7.0 MB/s
:/a/1master-targz/iso#
:/a/1master-targz/iso# ^C
:/a/1master-targz/iso# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 648M 0 part
└─sdb2 8:18 1 416K 0 part
sdc 8:32 0 4.6T 0 disk
└─sdc1 8:33 0 1.6G 0 part
mmcblk0 179:0 0 29.8G 0 disk
├─mmcblk0p1 179:1 0 41.8M 0 part /boot
└─mmcblk0p2 179:2 0 29.8G 0 part /
:/a/1master-targz/iso#
#somewhere here I got a panic attack
TestDisk 7.0, Data Recovery Utility, April 2015
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
Disk /dev/sdc - 5000 GB / 4657 GiB - CHS 4769307 64 32
Partition Start End Size in sectors
>P ext4 0 63 31 4769306 63 30 9767538688 [Ghost] # doubt this is right
fdisk -l /dev/sdb
Disk /dev/sdb: 4.6 TiB, 5000981077504 bytes, 9767541167 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x69e72c6a
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 0 3289087 3289088 1.6G 17 Hidden HPFS/NTFS
# head /dev/mapper/sda
head: error reading '/dev/mapper/sda': Input/output error
Summary
What I have:
1: An overwritten part of an encrypted LUKS device.
2: The device mapping from when cryptsetup opened the device (I have not touched my computer since then).
My hope:
- There may be a way of tricking my OS into seeing my encrypted drive as /dev/sda again, connecting the existing /dev/mapper/sda to it, and then remounting the device while ignoring the destroyed part of the drive? (I really have no clue...)
- I know which image I used to overwrite the data; maybe that helps in reading back what it destroyed.
- Any other ideas are very welcome.
"I have read that even if data is overwritten there is till a chance of reading the data below the overwritten part."
You will not be able to recover the over written data with your computer and drive as-is. In order to attempt to recover overwritten data it is necessary to be able to see the drive read head actual voltage so that the remnant of the over written value can be seen as a slight difference in the voltage. That means opening the HD and using specialized equipment. IOW: no.
You have learned the hard way to have continual backups, it seems the "hard way" is the only way that works and often takes multiple lessons, make this one count.
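For LUKS in particular, part of that lesson is backing up the header: the first few megabytes of the device (exactly where a stray dd lands first) hold the key material, without which the rest of the disk is unreadable. A minimal sketch, assuming a healthy volume on a hypothetical /dev/sdX; note that in this incident the overwrite went past the header, so a header backup alone would not have recovered everything:

# back up the LUKS header while the volume is still intact
cryptsetup luksHeaderBackup /dev/sdX --header-backup-file sdX-luks-header.img

# restore it later if only the header gets damaged
cryptsetup luksHeaderRestore /dev/sdX --header-backup-file sdX-luks-header.img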
When /usr/ucb/ps prints to the screen, its output shows a long list of environment variables and their values, but redirecting the output to a file yields a truncated line. How can this be explained?
bash-3.2$ /usr/ucb/ps -auxwee 6543 >/var/tmp/env-var 2>&1
bash-3.2$ cat /var/tmp/env-var
USER PID %CPU %MEM SZ RSS TT S START TIME COMMAND
alcrprun 6543 0.0 0.0 8752 5992 ? S 19:35:01 0:15 /usr/bin/python -Bu ./bin/dpfoservice.py CFG_HOME=/opt/apps/algo/algoS
bash-3.2$ /usr/ucb/ps -auxwee 6543
USER PID %CPU %MEM SZ RSS TT S START TIME COMMAND
alcrprun 6543 0.0 0.0 8752 5992 ? S 19:35:01 0:15 /usr/bin/python -Bu ./bin/dpfoservice.py CFG_HOME=/opt/apps/algo/algoSuite.current.solaris.alcr/cfg RISK_HOME=/opt/apps/algo/algoSuite.current.solaris.alcr/risk++ ALGO_HOME=/opt/apps/algo/algoSuite.current.solaris.alcr ARE_HOME=/opt/apps/algo/algoSuite.current.solaris.alcr/aggregation/are
...
...
It prints 132-column output to a file because that's what a single -w argument does. Per the ps.1b man page:
-w
Uses a wide output format (132 columns rather than 80); if repeated, that is, -ww, use arbitrarily wide output. This information is used to decide how much of long commands to print.
If you want arbitrary-width output, use -ww instead of -w.
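Applied to the command from the question (same option grouping and PID, just with the doubled w):

# -ww removes the 132-column cap, so the redirected file keeps the full command line
/usr/ucb/ps -auxwwee 6543 > /var/tmp/env-var 2>&1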
When using just the -w option with output going to a terminal, /usr/ucb/ps appears to detect the terminal width, at least on my copy of Solaris 11: changing the width of the window changes how much output is emitted. The documentation for that behavior is likely buried somewhere in a Solaris man page, given the historical nature of Solaris documentation.
# /usr/ucb/ps -aux | head
USER PID %CPU %MEM SZ RSS TT S START TIME COMMAND
orabudge 13285 4.1 51.18413180884116344 ? O 08:56:30 0:22 oraclebudget (LOCA
orabudge 11998 3.4 51.18413552884119456 ? O 08:51:53 1:49 oraclebudget (LOCA
root 732 3.1 0.0 0 0 ? S Feb 04 326:27 zpool-budgetdb_dat
orabudge 12030 2.8 51.18413296884116648 ? S 08:52:02 2:04 ora_p004_budget
orabudge 12034 2.8 51.18413284084116504 ? S 08:52:02 2:04 ora_p006_budget
orabudge 12032 2.8 51.18413290484116568 ? S 08:52:02 2:04 ora_p005_budget
orabudge 12036 2.7 51.18413296884117176 ? S 08:52:02 2:02 ora_p007_budget
orabudge 21339 1.0 51.18414346484127680 ? S 07:18:27 4:24 oraclebudget (LOCA
orabudge 4347 0.9 51.18414084084125256 ? S 08:19:23 1:10 oraclebudget (LOCA
[root@budgetdb:ravi]#
USER – User who started the process.
PID – Process ID number assigned by the system.
%CPU – Percentage of the CPU time used by the process.
%MEM – Percentage of the total RAM in your system in use by the
process (it will not add up to 100 percent as some RAM is shared
by several processes).
SZ – Amount of nonshared virtual memory, in Kbytes, allocated to
the process. It does not include the program’s text size which is
normally shared.
RSS – Resident set size of the process. It is the basis for
%MEM and is the amount of RAM in use by the process.
TT – Which "teletype" the user is logged in on.
S – The status of the process.
S – Sleeping
O – Using cpu (on CPU) or running
R – Running and waiting for a CPU to become free
Z – Terminating but has not died (zombie)
P – Waiting for page-in
D – Waiting for disk I/O
Check disk performance and memory usage if many
P or D statuses appear. This could indicate an overload of both subsystems.
START – Time the process started up
TIME – Total amount of CPU time the process has used so far
COMMAND – Command being executed
I have a binary file abc.bin that is 512 bytes long, and I need to generate a 1M (1024 x 1024 = 1048576) byte file by appending 0x00 bytes to abc.bin. How can I do that with the dd utility?
For example, if abc.bin contains 512 bytes of 0x01 ("11 ... 11"), I need a helloos.bin that is 1048576 bytes ("11 ... 11000 ... 000"); the 0 here is not the character '0' but the byte 0x00, and the number of 0x00 bytes is 1048576 - 512.
You can tell dd to seek to the 1M position in the file, which has the effect of making its size at least 1M:
dd if=/dev/null of=abc.bin obs=1M seek=1
If you want to ensure that dd only extends, never truncates the file, add conv=notrunc:
dd if=/dev/null of=abc.bin obs=1M seek=1 conv=notrunc
If you're on a system with GNU coreutils (like, just about any Linux system), you can use the truncate command instead of dd:
truncate --size=1M abc.bin
If you want to be sure the file is only extended, never truncated:
truncate --size=\>1M abc.bin
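Either way the file is extended sparsely: the trailing zeros read back as 0x00 but take no space on disk. One way to see the difference (GNU coreutils assumed):

# apparent size vs. blocks actually allocated on disk
du --block-size=1 --apparent-size abc.bin
du --block-size=1 abc.bin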
I'm assuming you actually mean to allocate 1M of zeroes on the disk, not just have a file whose reported length is 1MiB and reads as zeroes.
dd if=/dev/zero count=2047 bs=512 >> abc.bin
This method also works:
Create a 1M file of 0x00 bytes: dd if=/dev/zero of=helloos.bin bs=512 count=2048
Write abc.bin over the start of the created file: dd of=helloos.bin conv=notrunc if=abc.bin
Without the conv=notrunc option, I end up with only a 512-byte file. I can also use seek=N to control the start position by skipping N blocks.
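A quick sanity check for either approach (GNU tools assumed, file names as in the example):

# the size should be exactly 1048576 bytes
stat -c %s helloos.bin

# the first 512 bytes should still match the original
cmp -n 512 abc.bin helloos.bin && echo prefix intact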
I'm having a severe problem finding out where I'm running out of memory on my CentOS cPanel server. I'll try to provide all the details that I can. I previously ran the same server without cPanel and had no issues, so I'm thinking it's a cPanel problem.
I am currently running a cPanel server with CentOS 6.6 installed, with 8GB of RAM and 1.5TB of storage (keep in mind my previous server without cPanel only had 4GB of RAM, too).
Here's all my memory statistics.
$ free -m
total used free shared buffers cached
Mem: 8192 3647 4544 2560 0 1164
-/+ buffers/cache: 2483 5708
Swap: 0 0 0 0 0 0
$ cat /proc/user_beancounters
uid resource held maxheld barrier limit failcnt
16167: kmemsize 642249817 642637824 9223372036854775807 9223372036854775807 0
lockedpages 4610 4610 2097152 2097152 0
privvmpages 1971819 1972978 2097152 2097152 11579
shmpages 655390 655390 9223372036854775807 9223372036854775807 0
numproc 493 504 32567 32567 0
physpages 932503 933409 2097152 2097152 0
vmguarpages 0 0 2097152 2097152 0
oomguarpages 478382 478413 2097152 2097152 0
$ smem -u -t -k
User Count Swap USS PSS RSS
mailnull 1 0 996.0K 1001.0K 1.5M
dovecot 2 0 1.0M 1.1M 3.2M
memcached 2 0 1.1M 1.3M 2.3M
varnish 1 0 1.2M 1.5M 2.7M
apache 5 0 716.0K 2.1M 27.5M
lighttpd 6 0 2.6M 4.0M 30.1M
dovenull 4 0 3.4M 4.1M 13.6M
newrelic 2 0 4.6M 4.9M 6.5M
media 7 0 4.6M 5.8M 18.3M
redis 5 0 5.7M 6.1M 10.5M
ldap 1 0 7.6M 7.7M 8.4M
user 9 0 10.3M 12.6M 24.1M
postgres 7 0 11.8M 14.0M 25.8M
named 1 0 27.1M 27.1M 27.9M
ntop 1 0 30.2M 31.8M 35.0M
mongod 1 0 46.8M 46.8M 47.7M
elasticsearch 1 0 201.7M 205.0M 212.7M
graylog2 1 0 262.1M 265.5M 273.4M
nobody 20 0 434.3M 488.6M 789.4M
mysql 1 0 489.6M 489.8M 492.1M
root 58 0 628.2M 695.7M 847.9M
---------------------------------------------------
136 0 2.1G 2.3G 2.8G
$ vzubc -c
----------------------------------------------------------------
CT 16167 | HELD Bar% Lim%| MAXH Bar% Lim%| BAR | LIM | FAIL
-------------+---------------+---------------+-----+-----+------
lockedpages| 18M 0.2% 0.2%| 18M 0.2% 0.2%| 8G| 8G| -
privvmpages|7.51G 93% 93%|7.52G 94% 94%| 8G| 8G| 11.3K
numproc| 495 2% 2%| 504 2% 2%|31.8K|31.8K| -
physpages|3.55G 44% 44%|3.56G 44% 44%| 8G| 8G| -
vmguarpages| - - - | - - - | 8G| 8G| -
oomguarpages|1.82G 22% 22%|1.82G 22% 22%| 8G| 8G| -
numpty| 4 2% 2%| 4 2% 2%| 255 | 255 | -
numsiginfo| - - - | 12 1% 1%| 1K| 1K| -
----------------------------------------------------------------
Also in the administrative side of my server I can see
CPU USAGE - 13.33%
DISK SPACE USAGE - 2.54% / 1536GB
RAM USAGE - 28.64% / 8GB
The continuous errors I'm getting on the command line are:
Unable to Fork, Cannot allocate memory
Segmentation Fault
There is also continual failure of Tailwatchd, and a few other services fail here and there. I used Tweak Settings to change the memory limit from 512MB to 4096MB to unlimited to see if it changed anything, with no change. I also switched the Conserve Memory option both on and off, with no change either.
I also tried to check the vz container settings. I have one config for 0.conf but nothing for 16167.conf. I tried adjusting settings in 0.conf with no luck, then created 16167.conf and adjusted settings there, which still did not take effect after a server restart. I experimented with the different ve templates from 1G all the way to 4G, and again no improvement.
Any help or a pointer in the right direction would be greatly appreciated. I tried to make every correction and did all the research I could before asking the community, but I think at this point I need some help with it. Thanks in advance.
I would suggest tuning your MySQL to your system memory (roughly half of it), and then also tuning Apache through cPanel; most of the time it is the SQL part.
You also have Elasticsearch running, and your root memory usage is quite high.
But the current system shows 5.7GB of free memory. Are you sure you are out of memory?
It seems your provider may have oversold the memory (the only explanation I have for the segfaults you are seeing).
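As a rough illustration of the MySQL side of that advice (the path and values are assumptions, not a recommendation for this exact server; on a shared cPanel box the buffer pool should get well under half of RAM):

# /etc/my.cnf -- illustrative sizing only
[mysqld]
innodb_buffer_pool_size = 2G
key_buffer_size = 256M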
To resolve this issue, you will have to increase the privvmpages value for your VM. You can increase it from the main node with the following command:
vzctl set ${cid} --privvmpages 1024M:2048M --save
With the above command you get 1024MB guaranteed and 2048MB burstable memory. Adjust it to your requirements and check again.
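One way to confirm the change took effect, from inside the container (the question's beancounters already show failcnt 11579 for privvmpages, so the number to watch is whether it keeps growing):

# barrier/limit should show the new values; failcnt should stop increasing
grep privvmpages /proc/user_beancounters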
When I use the format command, the output is:
AVAILABLE DISK SELECTIONS:
0. c0d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@7,1/ide@0/cmdk@0,0
1. c2t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,1976@10/sd@0,0
But after searching in /dev/dsk and /dev/rdsk using ls, I found:
bash-3.00# ls
c0d0p0 c0d0s11 c0d0s5 c1t0d0p3 c1t0d0s14 c1t0d0s8 c2t0d0s1 c2t0d0s3
c0d0p1 c0d0s12 c0d0s6 c1t0d0p4 c1t0d0s15 c1t0d0s9 c2t0d0s10 c2t0d0s4
c0d0p2 c0d0s13 c0d0s7 c1t0d0s0 c1t0d0s2 c2t0d0p0 c2t0d0s11 c2t0d0s5
c0d0p3 c0d0s14 c0d0s8 c1t0d0s1 c1t0d0s3 c2t0d0p1 c2t0d0s12 c2t0d0s6
c0d0p4 c0d0s15 c0d0s9 c1t0d0s10 c1t0d0s4 c2t0d0p2 c2t0d0s13 c2t0d0s7
c0d0s0 c0d0s2 c1t0d0p0 c1t0d0s11 c1t0d0s5 c2t0d0p3 c2t0d0s14 c2t0d0s8
c0d0s1 c0d0s3 c1t0d0p1 c1t0d0s12 c1t0d0s6 c2t0d0p4 c2t0d0s15 c2t0d0s9
c0d0s10 c0d0s4 c1t0d0p2 c1t0d0s13 c1t0d0s7 c2t0d0s0 c2t0d0s2
Question 1
I know that c0d0p0 refers to fdisk partitions, because I'm on an x86 system, not SPARC, but I still don't understand why it appeared even though I never used fdisk.
Question 2
As you can see in the format output, I only have c0d0 [IDE] and c2t0d0 [SCSI], so where does c1t0d0 come from?! I even ran devfsadm -C and it still exists.
I ran format /dev/rdsk/c1t0d0s0 and it told me: No disk found!
I don't understand what this is exactly; ls -l certainly shows it pointing to a device file under /devices:
bash-3.00# ls -l c1t0d0s0
lrwxrwxrwx 1 root root 52 Nov 29 2012 c1t0d0s0 -> ../../devices/pci@0,0/pci-ide@7,1/ide@1/sd@0,0:a,raw
So can you please tell me what that is exactly, and how I can remove it?
1: There is no need to use fdisk to get c0d0p0; the OS provisions every possible entry (partition/slice) regardless of whether it actually exists or not.
2: This device is likely not handled by format; it might be a CD/DVD drive or a removable device (USB key, drive, ...).
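A quick way to test that theory on Solaris: list the removable media devices and see whether c1t0d0 appears there, then re-run the link cleanup.

# list removable/hotpluggable media devices (CD/DVD, USB)
rmformat -l

# rebuild /dev links and remove dangling ones, verbosely
devfsadm -Cv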
I have installed OpenStack on my local machine. I am able to perform every function, such as uploading images, creating and launching instances, associating floating IPs, etc. But I cannot create a volume of more than 2 GB. If I create any volume larger than 2 GB, its status shows as "error" on my dashboard. Volumes smaller than 2 GB are created fine.
Sounds like your VG might only be 2G in size? Try looking deeper into the volume logs, or run vgs/lvs and compare your available vs. used capacities.
If you are using a DevStack instance, the default volume backing file is 5GB. Check how much of your volume backing file is free/used by running 'pvs' or 'vgs' on the command line (as the root user).
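For DevStack, the backing-file size can also be raised before stacking; a minimal sketch, assuming a standard DevStack checkout where stack.sh reads this variable:

# local.conf -- make the Cinder volume backing file 20GB instead of the default
[[local|localrc]]
VOLUME_BACKING_FILE_SIZE=20480M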
I encountered the same issue. This is how I solved it.
Execute the "vgs" command to see the volume groups on the server. You will see something similar to the output below.
$vgs
VG #PV #LV #SN Attr VSize VFree
stack-volumes 3 1 0 wz--n- 10.00g 10.00g
stratos1 1 2 0 wz--n- 931.09g 48.00m
This should be the volume group (stack-volumes) that OpenStack uses to create volumes and volume snapshots. In order to create more, and larger, volumes you have to increase the capacity of the volume group; in this case the volume group to extend is "stack-volumes".
Let's create a 50GB backing file, attach it to a loop device, and partition it:
dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=50G
losetup /dev/loop3 cinder-volumes
fdisk /dev/loop3
And at the fdisk prompt, enter the following commands:
n      (new partition)
p      (primary)
1      (partition number 1)
ENTER  (accept default first sector)
ENTER  (accept default last sector)
t      (change the partition type)
8e     (type 8e = Linux LVM)
w      (write the table and exit)
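If you prefer to avoid the interactive prompt, the same partitioning can be scripted; a sketch using parted, which is commonly available alongside fdisk:

# label the loop device, create one full-size partition, mark it for LVM
parted -s /dev/loop3 mklabel msdos mkpart primary 0% 100% set 1 lvm on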
Create a physical volume
root@stratos1:~# pvcreate /dev/loop3
Physical volume "/dev/loop3" successfully created
Extend the volume group "stack-volumes".
root@stratos1:~# vgextend stack-volumes /dev/loop3
Volume group "stack-volumes" successfully extended
Let's look at the details of the available physical devices; you will see the new device listed.
root@stratos1:~# pvs
PV VG Fmt Attr PSize PFree
/dev/loop0 stack-volumes lvm2 a- 10.01g 10.01g
/dev/loop3 stack-volumes lvm2 a- 50.00g 50.00g
Now check the details of the volume groups by executing the vgdisplay command. You will see there is more free space in the volume group stack-volumes (60GB, since we added 50GB to the original 10GB).
root@stratos1:~# vgdisplay
--- Volume group ---
VG Name stack-volumes
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 303
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 3
Act PV 3
VG Size 60.00 GiB
PE Size 4.00 MiB
Total PE 23040
Alloc PE / Size 7680 / 30.00 GiB
Free PE / Size 15360 / 60.00 GiB
VG UUID bM4X5R-hC3V-zY5F-ZMVI-s7dz-Kpiu-tPQ2Zt