I have an input file in EBCDIC format. I have no idea how many records there are in the file. In that case, if I want to convert it to ASCII, what should ibs, obs, and cbs be?
The block sizes are generally not crucial these days. If you were actually handling tapes (you know, the old 1/2 inch reels rocking back and forth type, or maybe QIC, quarter-inch cartridge), then block sizes mattered. When playing with disk, the block sizes don't matter very much.
So, for ibs and obs, either leave the defaults or just use bs=16k (or 256k, or any other number), which sets both ibs and obs to the same size. The cbs (conversion block size) could probably be the same size, too.
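A minimal sketch of the command, assuming GNU dd (the file names are hypothetical, and 80 is just a guess at a classic card-image record length):

dd if=input.ebcdic of=output.ascii ibs=16k obs=16k cbs=80 conv=ascii

In modern GNU dd, conv=ascii implies conv=unblock, so cbs should be set to the fixed record length; each cbs-byte record is translated and written out as a newline-terminated line. And since dd simply reads until end of input, the number of records never has to be known up front.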
I read how sounds are represented with numbers in a computer here.
And I figured out that the usual representation is that we get 44,100 numbers in the range [-32767, 32767] per second.
So in my imagination, there's got to be a big one-column matrix, right?
I'm an R user, so speaking in R, 3 seconds of sound data would be:
s <- 3
sound <- matrix(0, ncol = 1, nrow = 44100 * s)
nrow(sound)
#> [1] 132300
A one-column matrix with 132,300 rows.
Is this really the case?
I want an analogous picture in my head. Say, in the case of a 256 * 256 picture:
if we split that picture into RGB, we get 3 matrices, each 256 * 256.
And in the case of sound, we get one long, long column? Thinking about it again, it's not even a matrix after all. It's just a column.
Am I right? I can't find any similar dataset searching the Internet.
Any advice is welcome. Thanks.
The raw format created early in that linked question could look a lot like a one-dimensional array, and the signal sent to the speaker to make the sound could probably be represented similarly.
But you're unlikely to find a file on your computer that looks like that for several reasons:
Sound can be stored at different bit depths; that is, how many bits are used for each 'number'. CD audio tracks have a 16-bit depth, but you could have 8 or 32 bits, etc. In a straight stream of these numbers you need some way to know how far to read to get to the next number, so that information needs to be saved somewhere.
Sample rate can vary. If you've got a sequence of numbers representing an audio signal, then you need to know how long each number lasts.
Most sounds are more complex. Instead of a single source you have stereo, or 5-channel, or whatever, so the system needs to be able to store/decode multiple pieces of information for the sound you want to hear at a particular time.
Much of sound is repetitive, and so it can often benefit from compression.
So most sounds are stored in a compressed format that includes wrapper information about how to decode it. The wrapper information includes how to decode the different audio channels, what sort of compression was used etc.
The closest you're likely to find is a .wav file (Windows) or .aiff (Mac). But even these include some metadata (sample rate and bit depth, for a start).
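To make that concrete, here is a minimal base-R sketch: 3 seconds of a 440 Hz tone at 44,100 samples per second really is just one long numeric vector per channel, exactly the "long column" from the question; everything else in a .wav is header metadata.

sr <- 44100                                    # sample rate (samples per second)
s  <- 3                                        # duration in seconds
tm <- (0:(sr * s - 1)) / sr                    # time of each sample
sound <- round(32767 * sin(2 * pi * 440 * tm)) # a 440 Hz tone in 16-bit range
length(sound)                                  # one long vector, as guessed
#> [1] 132300

If you want to poke at real data, the CRAN package tuneR can read a .wav: readWave() returns a Wave object whose samples are plain vectors (slot @left, plus @right for stereo), with the header metadata in @samp.rate and @bit.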
I've been working with DICOM files for a few days now, using FO-DICOM.
I'm using a set of DICOM files for my tests, and I've been printing the "Photometric Interpretation" and "Samples Per Pixel" values to get a better understanding of what kind of images I'm working with.
The result was "MONOCHROME2" for the Photometric Interpretation, and "1" for the Samples Per Pixel.
What I understood by reading part 3 of the standard is that MONOCHROME2 represents a grayscale starting from black for its minimum values.
But what exactly is Samples Per Pixel? I thought it represented the number of bytes (not bits) per pixel (it would be logical to have 8 bits per pixel for a gray scale, right?).
But my problem here is that my images actually seem to have 32 bpp.
I'm working with 512*512-pixel images, and I converted them into byte arrays. So I was expecting arrays of 512*512 = 262144 bytes.
But I get arrays of 1048630 bytes (which is a bit more than 4*262144).
Does someone have an explanation?
EDIT:
Here is some of my data:
PhotometricInterpretation=MONOCHROME2
SamplePerPixel=1
BitsAllocated=16
BitsStored=12
HighBit=11
PixelRepresentation=0
NumberOfFrames=0
The attribute (0028,0002) SamplesPerPixel refers to color images only: it tells you the number of planes present in the image (e.g. 3 for RGB), so for a color image you would have
PhotometricInterpretation=RGB
SamplesPerPixel=3
with 8 bits per sample (I will revisit bits per pixel below). As long as you have PhotometricInterpretation = MONOCHROME1 or MONOCHROME2, you can expect SamplesPerPixel to be 1 and nothing else.
What you do have to take into consideration is the number of bits per pixel:
BitsAllocated (0028,0100)
BitsStored (0028,0101)
HighBit (0028,0102)
These tell you how many bits are used to encode a pixel value (BitsAllocated) and which of those bits actually contain grayscale information (BitsStored, HighBit). HighBit is zero-based and is usually, but not necessarily, BitsStored - 1.
An example to illustrate this: for CT images, it is very common to express gray values in Hounsfield units, which range from -1000 to +3000. These are represented by 12 bits, stored with 2-byte alignment, so
BitsAllocated (0028,0100) = 16
BitsStored (0028,0101) = 12
HighBit (0028,0102) = 11
Another degree of freedom is PixelRepresentation, which tells you whether the pixel data is encoded unsigned (0) or in two's complement (1). I have seen both for CT images; however, signed pixel data is rather unusual for image types other than CT.
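To tie these tags together, here is a tiny R sketch with a hypothetical raw 16-bit word; this is the masking a DICOM toolkit normally does for you, valid for unsigned data (PixelRepresentation = 0) with BitsStored = 12 and HighBit = 11:

to_gray <- function(raw) bitwAnd(raw, 0x0FFF)  # keep bits 0..11 only
to_gray(0xF2A3)                                # top 4 bits are padding, dropped
#> [1] 675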
In your example, I would assume that BitsAllocated == 32 or (not very likely) that you have a dataset containing multiple images ('frames'), so NumberOfFrames (0028,0008) > 1. If Number of Frames is absent, you can safely assume you have only one frame.
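For what it's worth, a quick size check in R against the values from your edit (a sketch of the arithmetic, not a diagnosis):

rows <- 512; cols <- 512
bits_allocated <- 16                        # from the edited dump
frame_bytes <- rows * cols * (bits_allocated / 8)
frame_bytes
#> [1] 524288
2 * frame_bytes                             # two such frames, or one at 32 bpp
#> [1] 1048576

That is 54 bytes short of the observed 1048630, so the remainder is presumably non-pixel bytes that ended up in the array.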
I have over-simplified a bit here, especially about color images, but I think this is complicated enough ;-). Basically, DICOM offers every conceivable degree of freedom in encoding pixel data and describing the encoding in the header.
I think I recommended having a look at DCMTK in a recent post. Its DicomImage class features a nice interface (getInterData()) which takes care of all that stuff and provides the pixel data read from a DICOM file in a normalized format.
[EDIT]: Feel free to post a DICOM dump of your dataset here, I would have a look at it and tell you how to interpret the pixel data.
I'm trying to create (or, if I've somehow missed it in my research, find) an algorithm to encode/decode a bmp image into/from a QR code format. I've been using a guide (Thonky) to try to understand the basics of QR codes and I'm still not sure how to go about this problem, specifically:
Should I encode the data as binary or would numeric be more reasonable (assuming each pixel will have a max. value of 255)?
I've searched for information on the structured append capabilities of QR codes but haven't found much detail beyond the fact that it's supported by QR codes -- how could I implement/utilize this functionality?
And, of course, if there are any tips/suggestions to better store an image as binary data, I'm very open to suggestions!
Thanks for your time,
Sean
I'm not sure you'll be able to achieve that, as the amount of information a QR Code can hold is quite limited.
First of all, you'll probably want to store your image as raw bytes, as the other formats (numeric and alphanumeric) are designed to hold text/numbers and would provide less space to store your image. Let's assume you choose the biggest possible QR Code (version 40), with the smallest level of error correction, which can hold up to 2953 bytes of binary information (see here).
First option, as you suggest, you store the image as a bitmap. This format allows no compression at all and requires (in the case of an RGB image without alpha channel) 3 bytes per pixel. If we take into account the file header size (14 to 54 bytes), and ignore the padding (each row of image data must be padded to a length being a multiple of 4), that allows you to store roughly 2900/3 = 966 pixels. If we consider a square image, this represents a 31x31 bitmap, which is small even for a thumbnail image (for example, my avatar at the end of this post is 32x32 pixels).
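Here is that arithmetic as a quick R sketch (2953 and the 54-byte header are the figures quoted above):

capacity <- 2953                        # version 40, level L: max binary bytes
header   <- 54                          # largest of the common BMP header sizes
pixels   <- (capacity - header) %/% 3   # 3 bytes per RGB pixel, padding ignored
pixels
#> [1] 966
floor(sqrt(pixels))                     # side of the largest square bitmap
#> [1] 31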
Second option, you use JPEG to encode your image. This format has the advantage of using a compression algorithm that can reduce the file size. This time there is no exact formula to get the size of an image fitting in 2.9kB, but I tried using a few square images and downsizing them until they fit in this size, keeping a good (93) quality factor: this gives an average of about 60x60 pixel images. (On such small images, it's normal not to see an incredible compression factor between jpeg and bmp, as the file header in a jpeg file is far larger than in a bmp file: about 500 bytes). This is better than bitmap, but remains quite small.
Finally, even if you succeed in encoding your image in this QR Code, you will encounter another problem: a QR Code this big is very, very hard to scan successfully. As a matter of fact, this QR Code will have a size of 177x177 modules (a "module" being a small white or black square). Assuming you scan it using a smartphone providing so-called "HD" frames (1280x720 pixels), each module will have a maximum size on the frame of about 4 pixels. If you take into account the camera noise, the aliasing, and the blur due to the fact that the user is never perfectly still when scanning, the quality of the input frames will make it very hard for any QR Code decoding algorithm to successfully read the QR Code (don't forget we set its error correction level to low at the beginning of this!).
Even though it's not very good news, I hope this helps you!
There is indeed a way to encode information across several (up to 16) QR Codes, using a special header in your QR Codes called "Structured Append". The best source of information you can use is the QR Code standard (ISO/IEC 18004:2006); it's possible (but not necessarily easy) to find it for free on the web.
The relevant part (section 9) of this standard says:
"Up to 16 QR Code symbols may be appended in a structured format. If a symbol is part of a Structured Append message, it is indicated by a header block in the first three symbol character positions.
The Structured Append Mode Indicator 0011 is placed in the four most significant bit positions in the first symbol character.
This is immediately followed by two Structured Append codewords, spread over the four least significant bits of the first symbol character, the second symbol character and the four most significant bits of the third symbol character. The first codeword is the symbol sequence indicator. The second codeword is the parity data and is identical in all symbols in the message, enabling it to be verified that all symbols read form part of the same Structured Append message. This header is immediately followed by the data codewords for the symbol commencing with the first Mode Indicator."
Nevertheless, I'm not sure most QR Code scanners can handle this, as it's quite an advanced feature.
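As a rough illustration of the header layout described in that quote, here is an R sketch assembling the 20 header bits; the helper and the sample values are mine, not from any QR library, and a real encoder also handles the data encoding and padding that follow:

to_bits <- function(x, width)
  paste(rev(as.integer(intToBits(x))[1:width]), collapse = "")

m <- 2; n <- 5                        # hypothetical: this symbol is 2 of 5
data <- c(0x48, 0x69)                 # hypothetical payload bytes of the whole message
header <- paste0(
  "0011",                             # Structured Append mode indicator
  to_bits(m - 1, 4),                  # sequence indicator: position (0-based)
  to_bits(n - 1, 4),                  # sequence indicator: total count (0-based)
  to_bits(Reduce(bitwXor, data), 8)   # parity codeword: XOR of all data bytes
)
header
#> [1] "00110001010000100001"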
You can define a fixed image size and strip the JPG header down to just the vital information; that way you can save up to 480 bytes of a ~500-byte normal header.
I used this method to store people's photos for a small club's ID cards; images of about 64x64 pixels were enough.
I have an incomplete QRCode (about 30%). Is it possible to decode just the fragment of it? I would really like a code snippet - the language doesn't matter.
If you mean, can you decode the entire contents of a QR code even if part of the code is obscured or changed, then yes you can -- sometimes.
QR codes can be encoded with varying levels of redundancy, which are known as levels L, M, Q and H, and correspond to about 7%, 15%, 25% and 30% redundancy. This means you can lose up to that much of the barcode and still decode it. The more you lose, the harder it is to decode, but it remains possible within those limits.
Note that certain regions of the QR code can't be lost. The finder patterns (the squares at the corners) must be findable; they can tolerate some distortion, but there's no error correction to help with that. Also, the regions around the finder patterns encode format and version information. These have a different redundancy (2x encoding using BCH, not Reed-Solomon), but if you lose too much of those tiny areas you won't be able to decode, regardless of the main error correction.
I've seen three ways of doing conversion from bytes to megabytes:
megabytes=bytes/1000000
megabytes=bytes/1024/1024
megabytes=bytes/1024/1000
Ok, I think #3 is totally wrong but I have seen it. I think #2 is right, but I am looking for some respected authority (like W3C, ISO, NIST, etc) to clarify which megabyte is a true megabyte. Can anyone cite a source that explicitly explains how this calculation is done?
Bonus question: if #2 is a megabyte what are #1 and #3 called?
BTW: Hard drive manufacturers don't count as authorities on this one!
Traditionally, by megabyte we mean your second option: 1 megabyte = 2^20 bytes. But strictly it is not correct, because mega means 1,000,000. There is a newer standard name for 2^20 bytes, mebibyte (http://en.wikipedia.org/wiki/Mebibyte), and it is gaining popularity.
There's an IEC standard that distinguishes the terms, e.g. mebibyte = 1024^2 bytes but megabyte = 1000^2 bytes (in order to be compatible with SI units like kilograms, where k/M/... mean 1000/1,000,000). Actually most people in the IT area prefer megabyte = 1024^2, and hard disk manufacturers prefer megabyte = 1000^2 (because hard disk sizes sound bigger that way).
As a matter of fact, most people are confused by the two meanings: the IEC/SI one (multiplier 1000) and the traditional one (multiplier 1024). In general you shouldn't make assumptions about what people mean. For example, 128 kbit/s for MP3s usually means 128,000 bits, because the multiplier 1000 is mostly used with the unit bits. But then people often call 2048 kbit/s equal to 2 Mbit/s; confusing, eh?
So as a general rule, don't trust bit/byte units at all ;)
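To see that mix-up concretely, an R sketch of the 2048 kbit/s example:

rate_bits <- 2048 * 1000   # kbit/s with the usual decimal kilo for bit rates
rate_bits / 1000^2         # decimal mega: 2.048 Mbit/s
#> [1] 2.048
2048 / 1024                # the binary step people apply to get "2 Mbit/s"
#> [1] 2

Calling it "2 Mbit/s" mixes a decimal kilo with a binary kbit-to-Mbit step, which is exactly the ambiguity being warned about.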
Divide by 2 to the power of 20: (1024*1024) bytes = 1 megabyte.
1024*1024 = 1,048,576
2^20 = 1,048,576
1,048,576 / 1,048,576 = 1
It is the same thing.
BTW: Hard drive manufacturers don't count as authorities on this one!
Oh, yes they do (and the definition they assume from the S.I. is the correct one). On a related issue, see this post on CodingHorror.
To convert bytes to megabytes (MB):
use totalbyte / 1000 / 1000
To convert bytes to mebibytes (MiB):
use totalbyte / 1024 / 1024
https://en.wikipedia.org/wiki/Byte#Multiple-byte_units
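A minimal R sketch of the two conversions (the byte count is arbitrary):

bytes <- 1536000
bytes / 1000^2             # megabytes (MB, decimal/SI)
#> [1] 1.536
bytes / 1024^2             # mebibytes (MiB, binary/IEC)
#> [1] 1.464844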
The answer is that #1 is technically correct based on the real meaning of the mega prefix. However (and in life there is always a however), the math for that doesn't come out so nicely in base 2, which is how computers count, so #2 is what people really use.
Megabyte means 2^20 bytes. I know that technically that doesn't mesh with the SI units, and that some folks have come up with a new terminology to mean 2^20. None of that matters. Efforts to change the language to "clarify" things are doomed to failure.
Hard-drive manufacturers use it to mean 1,000,000 bytes, because that's what it means in SI so they figure technically they aren't lying (while actually they are). That falls under lies, damn lies, and marketing.
Use the computation your users will most likely expect. Do your users care to know how many actual bytes are on a disk or in memory or whatever, or do they only care about usable space? The answer to that question will tell you which calculation makes the most sense.
This isn't a precision question as much as it is a usability question. Provide the calculation that is most useful to your users.
In general, it's wrong to use decimal SI prefixes (e.g. kilo, mega) when referring to binary data sizes (except in casual usage). It's ambiguous and causes confusion. To be precise you can use binary prefixes (e.g. 1 mebibyte = 1 MiB = 1024 kibibytes = 2^20 bytes). When someone else uses decimal SI prefixes for binary data you need to get more information before you can know what is meant.
Microsoft Windows Explorer shows file sizes in the "Properties" window. This is a conversion from the byte count using 2^20.