Appending 0's to make 1MB file with dd - unix

I have a binary file abc.bin that is 512 bytes long, and I need to generate a 1M (1024 x 1024 = 1048576) byte file by appending 0s (0x00) to abc.bin. How can I do that with the dd utility?
For example, if abc.bin contains 512 bytes of 0x01 ("11 ... 11"), I need a helloos.bin that is 1048576 bytes ("11 ... 11 00 ... 00"); the 0 here is not the character '0' but the byte 0x00, and the number of 0x00 bytes is 1048576 - 512.

You can tell dd to seek to the 1M position in the file, which has the effect of making its size at least 1M:
dd if=/dev/null of=abc.bin obs=1M seek=1
If you want to ensure that dd only extends, never truncates the file, add conv=notrunc:
dd if=/dev/null of=abc.bin obs=1M seek=1 conv=notrunc
If you're on a system with GNU coreutils (like just about any Linux system), you can use the truncate command instead of dd:
truncate --size=1M abc.bin
If you want to be sure the file is only extended, never truncated:
truncate --size=\>1M abc.bin

I'm assuming you actually mean to allocate 1 MiB of zeroes on the disk, not just have a file whose reported length is 1 MiB and reads back as zeroes. In that case, append 2047 more 512-byte blocks of zeroes; together with the existing 512-byte block that makes 2048 x 512 = 1048576 bytes:
dd if=/dev/zero count=2047 bs=512 >> abc.bin

This method also works:
Create a 1 MiB file full of 0x00 bytes: dd if=/dev/zero of=helloos.bin bs=512 count=2048
Write abc.bin over the start of the created file: dd of=helloos.bin conv=notrunc if=abc.bin
Without the conv=notrunc option, I end up with only a 512-byte file. I can also use seek=N to control the start position by skipping N blocks.
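If you would rather not rely on dd at all, the same padding is a few lines of Python (a sketch, assuming abc.bin already exists and is no larger than the target size):

TARGET = 1024 * 1024  # 1048576 bytes

with open("abc.bin", "r+b") as f:
    f.seek(0, 2)                    # move to the current end of the file
    missing = TARGET - f.tell()
    if missing > 0:
        f.write(b"\x00" * missing)  # append the required 0x00 bytes

Calling f.truncate(TARGET) instead of writing the zeroes explicitly would also extend the file and, much like the seek=1 trick above, typically produces a sparse file.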

Related

Open only part of an image (JPEG/TIFF etc.) in R

I am analysing very large images in R, in the order of tens of thousands of pixels square. Unfortunately, even with 64 GB RAM, these images sometimes fail to fit into memory, and when they do I can only open one at a time, precluding parallelisation.
My current strategy is to load them using the JPEG or TIFF packages. e.g.:
image <- readJPEG('image.jpg')
However, as I am only performing simple mathematical manipulations (summing, thresholding etc.) that could be performed piece-by-piece, is it possible to only open part of an image at a time by specifying the dimensions to load? If so, I could write a loop to open 1024 x 1024 sized tiles. The JPEG and TIFF packages do not offer an option to do this.
If you are working with very large images, libvips is probably your best bet. You can shell out to it from R using system().
Your question is not very specific, but let's make a 10,000x10,000 pixel black-to-white gradient TIFF with ImageMagick:
convert -size 10000x10000 gradient: -depth 8 a.tif
Now threshold that at 50% with vips and check memory required:
vips im_thresh a.tif b.tif 128 --vips-leak
memory: high-water mark 292.21 MB
Pretty frugal, no? By comparison, the equivalent ImageMagick command requires 1.6GB of RAM:
/usr/bin/time -l convert a.tif -threshold 50% b.tif
Sample Output
...
1603895296 maximum resident set size
...
How about adding 64 to every pixel using im_gadd which does:
usage: vips im_gadd a in1 b in2 c out
where:
a is of type "double"
in1 is of type "image"
b is of type "double"
in2 is of type "image"
c is of type "double"
out is of type "image"
calculate a*in1 + b*in2 + c = outfile
So we use:
vips im_gadd 1 a.tif 0 b.tif 64 c.tif --vips-leak
memory: high-water mark 584.41 MB
Need to do some statistics?
vips im_stats c.tif
band minimum maximum sum sum^2 mean deviation
all 64 319 1.915e+10 4.20922e+12 191.5 73.6206
1 64 319 1.915e+10 4.20922e+12 191.5 73.6206
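If you would rather stay inside a scripting language than shell out with system(), the same operations are also exposed through libvips' Python binding, pyvips (not mentioned above; this is only a sketch, reusing the file names from the example):

import pyvips

img = pyvips.Image.new_from_file("a.tif")

# Threshold at 128: the relational operator yields 255 where true, 0 elsewhere.
(img > 128).write_to_file("b.tif")

# im_gadd's a*in1 + b*in2 + c with a=1, b=0, c=64, i.e. add 64 to every pixel.
img.linear(1.0, 64).write_to_file("c.tif")

# A few of the figures im_stats reports.
print(img.min(), img.max(), img.avg(), img.deviate())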
As it turns out, there is an R package - RBioFormats - that lets you read just part of an image (though it is not available on CRAN). It can be installed from GitHub as follows:
source("https://bioconductor.org/biocLite.R")
biocLite("aoles/RBioFormats") # You might need to first run `install.packages("devtools")`
library(RBioFormats)
The dimensions of the image can be read from the metadata without having to open the image:
metadata <- read.metadata('image.tiff')
xdim <- metadata@.Data[[1]]$sizeX
ydim <- metadata@.Data[[1]]$sizeY
Suppose we want to load the top-left 512 x 512 pixels; we use the subset argument:
image <- read.image('image.tiff', subset = list(X = 1:512, Y = 1:512))
From this it is trivial to write a loop to process a whole large image iteratively. RBioFormats is an R interface to the Java BioFormats library and will open TIFFs, PNGs and JPEGs, as well as many proprietary imaging formats.
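To make that loop concrete, here is the tile-by-tile accumulation pattern sketched in Python with numpy; read_tile is only a placeholder for whatever partial-read call your library provides (for example read.image(..., subset = ...) above), not a real function:

import numpy as np

TILE = 1024

def read_tile(path, x0, y0, w, h):
    # Placeholder: substitute your library's region/subset read.
    raise NotImplementedError

def tiled_stats(path, xdim, ydim, threshold=128):
    total = 0.0   # running sum over all pixels
    above = 0     # count of pixels above the threshold
    for y0 in range(0, ydim, TILE):
        for x0 in range(0, xdim, TILE):
            w = min(TILE, xdim - x0)
            h = min(TILE, ydim - y0)
            tile = np.asarray(read_tile(path, x0, y0, w, h), dtype=np.float64)
            total += tile.sum()
            above += int((tile > threshold).sum())
    return total, above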

Get free space left on target from arm-none-eabi-size

I want to calculate space left on my embedded target.
The Arduino IDE shows this in the output window:
Sketch uses 9544 bytes (3%) of program storage space. Maximum is 262144 bytes.
avr-size has a -C option that shows usage as a percentage:
$ avr-size -C --mcu=atmega32u4 build/myproject.hex
AVR Memory Usage
----------------
Device: atmega32u4
Program: 8392 bytes (25.6% Full)
(.text + .data + .bootloader)
Data: 2196 bytes (85.8% Full)
(.data + .bss + .noinit)
However, I'm actually writing a CMake file to develop code for an Arduino board with an Arm Cortex M0 CPU, so I use arm-none-eabi-size, which shows the code size like this:
[100%] Built target hex
text data bss dec hex filename
8184 208 1988 10380 288c build/myproject
[100%] Built target size
*** Finished ***
Is there a way to calculate the program and data space left on the device? Or do I need to regex the output and calculate percent of a hard-coded value?
If you are using the arm-none-eabi toolchain, you can add the linker option -Wl,--print-memory-usage, which prints RAM and flash usage as percentages. The output looks like this:
Memory region Used Size Region Size %age Used
RAM: 8968 B 20 KB 43.79%
FLASH: 34604 B 128 KB 26.40%
I am using a Makefile generated by CubeMX; to enable this print I added the option at the end of the LDFLAGS line. For CMake, this thread might be useful.
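If you cannot pass extra linker flags and do end up with the fallback from the question (parse the size output and compare against hard-coded limits), a small post-build script is enough. A sketch only: the 262144-byte flash figure is the one printed by the Arduino IDE above, and the 32 KiB RAM figure is just a placeholder you must replace with your device's real SRAM size:

import re
import subprocess
import sys

FLASH_SIZE = 262144    # from the Arduino IDE message above; adjust for your MCU
RAM_SIZE = 32 * 1024   # placeholder: set this to your device's actual SRAM size

out = subprocess.check_output(["arm-none-eabi-size", sys.argv[1]], text=True)

# Berkeley format, second line: "   text    data     bss     dec     hex filename"
text, data, bss = (int(n) for n in re.findall(r"\d+", out.splitlines()[1])[:3])

flash_used = text + data   # code plus initialised data are stored in flash
ram_used = data + bss      # initialised plus zero-initialised data occupy RAM

print(f"Flash: {flash_used}/{FLASH_SIZE} bytes ({100 * flash_used / FLASH_SIZE:.1f}% full)")
print(f"RAM:   {ram_used}/{RAM_SIZE} bytes ({100 * ram_used / RAM_SIZE:.1f}% full)")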

rsync - why is it transferring whole file

I believe my rsync is transferring the entire file each time instead of using its algorithm and only transferring changes. Why is this?
I have a text file called rsynctest. Even if I delete only a single character from the text file on the server end, it appears to be transferring the entire file. The rsync stats show a total transferred file size of 2.55G, and the file size is 2.4G, so I believe it transferred the entire file.
Here is the output
/usr/bin/rsync -avrh --progress --compress --stats rsynctest x.x.x.x:/rsynctest
sending incremental file list
test
2.55G 100% 27.56MB/s 0:01:28 (xfer#1, to-check=0/1)
Number of files: 1
Number of files transferred: 1
Total file size: 2.55G bytes
Total transferred file size: 2.55G bytes
Literal data: 50.53K bytes
Matched data: 2.55G bytes
File list size: 39
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 693
Total bytes received: 404.38K
sent 693 bytes received 404.38K bytes 2.54K bytes/sec
total size is 2.55G speedup is 6305.64
It's not actually transferring the whole file. Look at the "Total bytes sent" and "Total bytes received" lines. It's only transferring the checksums of each block of N bytes (where N can vary), and when a given block's checksums turn out identical, that block isn't transferred. But since the file size has changed, it has to check the whole file for differences, and that's what you're seeing in the progress bar: rsync checking the whole file for differences. (Note also the reported speed of 27.56 MB/s; I doubt your Internet connection could actually maintain that kind of transfer rate. Though if it can, more power to you.)
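To see concretely why "Matched data" can be gigabytes while "Total bytes sent" is a few hundred, here is a toy sketch of the block-checksum idea in Python. It deliberately compares only fixed block positions; real rsync also slides a cheap rolling checksum over every byte offset, so an insertion or deletion only invalidates a block or two instead of everything after it. The file names are made up:

import hashlib

BLOCK = 128 * 1024  # rsync derives its block size from the file size; fixed here

def block_sums(path):
    # MD5 of each fixed-size block of the file.
    sums = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            sums.append(hashlib.md5(chunk).hexdigest())
    return sums

# The receiver sends the sums of the blocks it already has; the sender then
# only transmits blocks whose checksum is missing or different.
old = block_sums("rsynctest.receiver_copy")
new = block_sums("rsynctest")

matched = sum(1 for a, b in zip(old, new) if a == b)
print(f"{matched} of {len(new)} blocks already match; only the rest need to be sent")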

Decrypting a XOR encrypted file

I'm trying to decrypt an XOR-encrypted file. After running the key-length test using xortool, I got this key: "fallen".
# python xortool.py -c 00 /cygdrive/c/Users/Me/Desktop/ch3.bmp
The most probable key lengths:
1: 10.6%
3: 11.6%
6: 18.5%
9: 8.8%
12: 13.8%
15: 6.6%
18: 10.4%
24: 8.1%
30: 6.4%
36: 5.2%
Key-length can be 3*n
1 possible key(s) of length 6:
fallen
Anyway, is there a way to decipher the file (a BMP file) and recover the original one using tools like OpenSSL or GPG? Do they have an XOR operation?
Neither OpenSSL nor GPG has XOR functionality that I'm aware of; however, writing a program to do it yourself should be trivial.
Given that you know the file is a .bmp, you should be able to use this fact to decrypt it quite easily, especially since .bmp files have a well-defined structure. For example, the first two bytes when decrypted should be 0x42, 0x4D (that's ASCII "BM"), and the following 4 bytes are the (little-endian) size of the entire file in bytes, so you should be able to recover at least 6 bytes of the key immediately.
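For the write-it-yourself route, something like the following Python sketch both checks the header-based key recovery and undoes the XOR (key and file names taken from the question):

import itertools

# Known-plaintext check using the BMP header: XORing the first two ciphertext
# bytes with b"BM" should reproduce the first two key bytes ("fa").
with open("ch3.bmp", "rb") as enc:
    cipher = enc.read()
print(bytes(c ^ p for c, p in zip(cipher[:2], b"BM")))

# XOR is its own inverse, so decrypting is just XORing with the key again,
# cycled over the whole file.
KEY = b"fallen"
plain = bytes(c ^ k for c, k in zip(cipher, itertools.cycle(KEY)))

with open("decoded.bmp", "wb") as out:
    out.write(plain)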
Since you already have xortool, just use xortool-xor from the xortool distribution:
python xortool/xortool-xor -s fallen /cygdrive/c/Users/Me/Desktop/ch3.bmp > decoded.bmp
Also note that xortool itself saves the decoded output in the xortool_out folder, so after using xortool to find the key, you could just do:
mv xortool_out/0_fallen decoded.bmp

Convert 16-bit signed PCM to unsigned 16-bit

I have an audio wave file (*.wav) with the audio data formatted as signed 16-bit (from -32768 to +32767).
I want to convert it to unsigned 16-bit (from 0 to 65535).
Is there a way to do that using Audacity, SoX, or any other tool?
Even a C program is welcome.
Thanks
Add 32768 to each sample. Note that this is equivalent to inverting the MSB (most-significant bit) of each sample.
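If you end up doing it in code, here is a minimal Python sketch over raw little-endian 16-bit PCM; it does not parse the WAV header (extract or strip the data chunk first), and the file names are only placeholders:

import array

# Read raw signed 16-bit samples, add 32768 (equivalently, flip the MSB),
# and write them back out as unsigned 16-bit.
with open("audio_s16le.raw", "rb") as f:
    signed = array.array("h")   # 'h' = signed 16-bit, native byte order
    signed.frombytes(f.read())

unsigned = array.array("H", (s + 32768 for s in signed))  # 'H' = unsigned 16-bit

with open("audio_u16le.raw", "wb") as f:
    f.write(unsigned.tobytes())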
In Audacity, try using the export command (File - Export).
