Load raw memory blob in DPX format with GraphicsMagick

I have a library that generates a Big Endian 10-bit DPX image in a memory buffer. It's just the raw 10-bit RGB data, though, with no headers. I'm trying to load this data into an instance of Magick::Image like this:
Magick::Blob blob(dataBuffer, dataBufferSize);
image.read(blob, Magick::Geometry(width, height), 10 /*bits*/, "DPX");
This throws the following exception, though: Magick: Improper image header ()
Is it possible to load a raw DPX into a Magick::Image?

I don't think that your answer is a good one; it is working by accident. Your blob data is likely to be in some format other than DPX. Specifying 'SDPX' (an unsupported format specification) allowed the file format detection to work automatically and select the correct format.
Using
Magick::Blob blob(dataBuffer, dataBufferSize);
image.read(blob);
should then be sufficient. Most image file formats do not require specifying the format or the depth.
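A minimal sketch of that approach, assuming the Magick++ bindings shipped with GraphicsMagick and assuming dataBuffer really holds a complete DPX file, header included (the helper name is made up for illustration):
#include <Magick++.h>
#include <cstddef>

// A rough sketch: wrap the library's buffer in a Blob and let GraphicsMagick
// detect the format from the DPX header itself.
Magick::Image loadDpxFromMemory(const void *dataBuffer, size_t dataBufferSize)
{
    Magick::Blob blob(dataBuffer, dataBufferSize);
    Magick::Image image;
    image.read(blob);   // no size, depth, or format hints needed for a headered file
    return image;
}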

Figured out my own answer here. I took a look at the DPX loading source and found that, for this case, this line:
image.read(blob, Magick::Geometry(width, height), 10 /*bits*/, "DPX");
should be:
image.read(blob, Magick::Geometry(width, height), 10 /*bits*/, "SDPX");


How can I tell if my DICOM files are compressed?

I have been working with DICOM files that are about 4 MB each, but I recently received some which are 280 KB each. I am not sure whether this is because they are from different CT scanners or because the new DICOM files were compressed before being given to me.
Is there a way to find out, and if they are compressed, is there a way to uncompress them to the original size?
This is a continuation of the other answer from kritzel_sw.
If you see any of the following UIDs in (0002,0010) Transfer Syntax UID element:
1.2.840.10008.1.2 Implicit VR Little Endian: Default Transfer Syntax for DICOM
1.2.840.10008.1.2.1 Explicit VR Little Endian
1.2.840.10008.1.2.2 Explicit VR Big Endian
then the Pixel Data (7FE0,0010) is uncompressed. You will generally observe a bigger file size here.
Not part of your question, but objects other than images (a PDF or a Structured Report, for example) can be encapsulated with the following Transfer Syntax:
1.2.840.10008.1.2.1.99 Deflated Explicit VR Little Endian
Other well-known values for the Transfer Syntax mean that the Pixel Data is compressed.
Note that private Transfer Syntax values are also possible for a data set. The implementation of those values is generally private to the respective manufacturer.
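If you would rather check the Transfer Syntax from code than with a dump tool, a rough sketch using DCMTK's C++ API could look like the following (the file name is only a placeholder):
#include "dcmtk/dcmdata/dctk.h"
#include <iostream>

int main()
{
    DcmFileFormat ff;
    if (ff.loadFile("input.dcm").good())        // placeholder file name
    {
        OFString ts;
        // (0002,0010) lives in the file meta information, not in the data set
        if (ff.getMetaInfo()->findAndGetOFString(DCM_TransferSyntaxUID, ts).good())
            std::cout << "Transfer Syntax UID: " << ts << std::endl;
    }
    return 0;
}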
Yes and yes.
I recommend the binary tools from the OFFIS DICOM toolkit (DCMTK), but you will be able to achieve the same results with other toolkits.
How to find out if your files are compressed:
dcmdump <filename>
Have a look at the metaheader, the attribute Transfer Syntax UID (0002,0010) in particular. Dcmdump "translates" the unique identifier to the human readable transfer syntax, e.g.
(0002,0010) UI =LittleEndianExplicit # 20, 1 TransferSyntaxUID
The Transfer Syntax tells you whether or not the pixel data in this DICOM file is compressed.
How to decompress compressed images:
dcmdjpeg <compressed DICOM file in> <uncompressed DICOM file out>
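If you prefer to decompress from code instead of calling dcmdjpeg, a rough sketch with DCMTK's dcmjpeg module could look like this; it only covers the JPEG-family transfer syntaxes (which is also what dcmdjpeg handles), and the file names are placeholders:
#include "dcmtk/dcmdata/dctk.h"
#include "dcmtk/dcmjpeg/djdecode.h"

int main()
{
    DJDecoderRegistration::registerCodecs();    // enable the JPEG decoders

    DcmFileFormat ff;
    if (ff.loadFile("compressed.dcm").good())   // placeholder input file
    {
        DcmDataset *ds = ff.getDataset();
        // ask DCMTK for an uncompressed representation of the pixel data
        ds->chooseRepresentation(EXS_LittleEndianExplicit, NULL);
        if (ds->canWriteXfer(EXS_LittleEndianExplicit))
            ff.saveFile("uncompressed.dcm", EXS_LittleEndianExplicit);
    }

    DJDecoderRegistration::cleanup();
    return 0;
}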

Save pcl::PointCloud<pcl::PointXYZRGB> in format compatible with Meshlab

Is there any function in the PCL library to save a pcl::PointCloud<pcl::PointXYZRGB> point cloud in an XYZRGB format that can be opened with MeshLab?
It seems pcl::io::savePCDFileASCII(filename, cloud); stores the RGB values in some specific way.
For me it works if I store it as a PLY file in binary format. It seems as if MeshLab occasionally has trouble with ASCII files. Here is what works for me.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr sceneCloud(new pcl::PointCloud<pcl::PointXYZRGB>);
// Fill the cloud somehow...
std::string writePath = "your/path";
pcl::io::savePLYFileBinary(writePath, *sceneCloud);
You can convert to .ply, .obj or any other supported format. Have a look at the pcd2ply demo in PCL, or just use pcl::PLYWriter, setting up the parameters depending on your needs:
pcl::PLYWriter writer;
writer.write(filename, cloud, Eigen::Vector4f::Zero(),
             Eigen::Quaternionf::Identity(), binary, use_camera);
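A small self-contained sketch along those lines, loading an existing PCD and writing a binary PLY that MeshLab can open (the file names are placeholders):
#include <pcl/io/pcd_io.h>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);

    // placeholder input file
    if (pcl::io::loadPCDFile<pcl::PointXYZRGB>("input.pcd", *cloud) < 0)
        return -1;

    // binary PLY keeps the packed RGB fields intact and opens fine in MeshLab
    pcl::io::savePLYFileBinary("output.ply", *cloud);
    return 0;
}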

Resize HDF5 dataset in Julia

Is there a way to resize a chunked dataset in HDF5 using Julia's HDF5.jl? I didn't see anything in the documentation. Looking through the source, all I found was set_dims!(), but that cannot extend a dataset (only shrink it). Does HDF5.jl have the ability to enlarge an existing (chunked) dataset? This is a very important feature for me, and I would rather not have to call into another language.
The docs have a brief mention of extendible dimensions in hdf5.md, excerpted below.
You can use extendible dimensions,
d = d_create(parent, name, dtype, (dims, max_dims), "chunk", (chunk_dims), [lcpl, dcpl, dapl])
set_dims!(d, new_dims)
where dims is a tuple of integers. For example
b = d_create(fid, "b", Int, ((1000,),(-1,)), "chunk", (100,)) #-1 is equivalent to typemax(Hsize)
set_dims!(b, (10000,))
b[1:10000] = [1:10000]
I believe I've got it figured out. The issue is that I forgot to give the dataspace a large enough max_dims. Doing that required digging into the lower-level API. The solution I found was:
dspace = HDF5.dataspace((6,20)::Dims, max_dims=(6,typemax(Int64)))
dtype = HDF5.datatype(Float64)
dset = HDF5.d_create(prt, "trajectory", dtype, dspace, "chunk", (6,10))
Once I created a dataset that can be resized appropriately, the set_dims! function resizes the dataset correctly.
I think I located a few minor issues with the API, which I had to work around or change in my local version. I will get in touch with the HDF5.jl owner regarding those. For those interested:
The constant H5S_UNLIMITED is of type Uint64, but the dataspace function will only accept tuples of Int64, which is why I used typemax(Int64) for my max_dims to imitate how H5S_UNLIMITED is derived.
The form of d_create which I used calls h5d_create incorrectly; it passes parent instead of checkvalid(parent).id (can be seen by comparison with other forms of d_create).
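For reference, this is roughly what HDF5.jl wraps, written against the plain HDF5 C API; it is only an illustration of max_dims, chunking and H5Dset_extent, not HDF5.jl code, and the dimension order is reversed relative to Julia because the C API is row-major:
#include <hdf5.h>

int main()
{
    hid_t file = H5Fcreate("traj.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    // current size 20x6, unlimited along the first (slowest-varying) dimension
    hsize_t dims[2]    = {20, 6};
    hsize_t maxdims[2] = {H5S_UNLIMITED, 6};
    hid_t space = H5Screate_simple(2, dims, maxdims);

    // an unlimited dimension requires a chunked layout
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    hsize_t chunk[2] = {10, 6};
    H5Pset_chunk(dcpl, 2, chunk);

    hid_t dset = H5Dcreate2(file, "trajectory", H5T_NATIVE_DOUBLE, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    // grow the dataset later; this is what set_dims! maps to
    hsize_t newdims[2] = {100, 6};
    H5Dset_extent(dset, newdims);

    H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space); H5Fclose(file);
    return 0;
}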

Why is a hex file used when programming a microcontroller?

Whenever we program a microcontroller, we convert the C file into a hex file and then we burn that into the controller.
My question is: why a hex file specifically? Is that hex file a hexadecimal version of the binary executable?
If yes, then why don't we use a binary file instead?
If you are talking about an "Intel HEX" file, the reason is that it is ASCII, which makes it easy to examine and parse. True, it is inefficient in one way, but compared to a raw binary it might be smaller. With a raw binary you have at most one address associated with it: the starting address, which is not embedded in the file. A hex file, or a Motorola S-record (a similar and often-used format), is basically lines of ASCII hex numbers that represent a record type, a starting address, a length, data, and a checksum. There are non-data lines in there, but much of the file will be data. So if your program has a few bytes at address 0x1000 and a few bytes at 0x80000000, a .bin file would be, at its smallest, 0x80000000 - 0x1000 plus a few bytes, but would typically be 0x80000000 plus a few bytes (right, 2 gigabytes), whereas an ihex or srec file would be in the dozens of bytes total. The ihex and srec formats also have built-in checksums to help protect against corrupt files; not perfect, of course, but better than nothing at all.
Since then, ELF, COFF and other formats have become popular. These are also based on blocks of data rather than a complete memory image. They are binary, not ASCII, formats, but they are not just a memory image: chunks of data with an address, type, and so on are provided.
Because the ihex and srec formats are so simple to create and parse, they will continue to be used for a long time. It does not take a lot of resources in a bootloader, for example, to handle receiving an ihex or srec file (the same goes for a binary, of course, but the binary may have a lot of fill data in it, costing a lot of unnecessary transmission time).
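To make the record structure concrete, here is a small sketch (not from the answer above) that verifies the checksum of one Intel HEX record; the checksum byte is the two's complement of the sum of all preceding record bytes, so all bytes of a valid record sum to zero modulo 256:
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <string>

// Parse one ':'-prefixed Intel HEX record and check that all of its bytes
// (length, address, type, data, checksum) sum to zero modulo 256.
static bool ihex_record_ok(const std::string &line)
{
    if (line.empty() || line[0] != ':' || (line.size() - 1) % 2 != 0)
        return false;

    uint8_t sum = 0;
    for (size_t i = 1; i + 1 < line.size(); i += 2)
        sum += (uint8_t)strtoul(line.substr(i, 2).c_str(), NULL, 16);

    return sum == 0;   // the checksum byte cancels the rest of the record
}

int main()
{
    // a commonly cited example record: 16 data bytes destined for address 0x0100
    printf("%s\n", ihex_record_ok(":10010000214601360121470136007EFE09D2190140")
                       ? "checksum ok" : "checksum bad");
    return 0;
}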

InputB vs. Get; code pages; slow reading on unix server

We have been using the usual code to read in a complete file into a string to then parse in VB6. The files are ANSI text but encoded using whatever code page the user was in at the time (we have Chinese and English users for example). This is the code
Open FileName For Binary As nFileUnit
sContents = StrConv(InputB(LOF(nFileUnit), nFileUnit), vbUnicode)
However, we have discovered this is VERY slow reading a file from a server running unix/linux, particularly when the ownership of the file is not the same as the process doing the reading.
I have rewritten the above using Get and discovered it is much faster and does not suffer from any issues with file ownership. I appreciate that the ownership issue might be solved by reconfiguring the server somehow, but since I've discovered that even without that issue the Get method is still much faster than InputB, I'd like to replace my existing code with Get.
I wonder if someone could tell me whether this will really do the same thing. In particular, is it correctly doing the ANSI to Unicode conversion, and will this always be true? My testing suggests the following replacement code does the same thing but faster:
Open FileName For Binary As nFileUnit
sContents = String(LOF(nFileUnit), " ")
Get #nFileUnit, , sContents
I also realise I could use a byte array, but again my tests suggest the above is simpler and works. So how does the buffer work correctly? (If you believe the online help for Get, it talks of characters returned; clearly that would cause problems when reading an ANSI file written on the Chinese code page with 2-byte Chinese characters in it.)
The following might be of interest, because the InputB approach is commonly given as the way to read a complete file, yet it is much slower. Examples:
Reading a 380 KB file across the network from the unix server
InputB (file owned) = 0.875 sec
InputB (not owned) = 72.8 sec
Get (either) = 0.0156 sec
Reading a 9 MB file across the network from the unix server
InputB (file owned) = 19.65 sec
Get (either) = 0.42 sec
Thanks
Jonathan
InputB() is CVar(InputB$()), and is known to be horribly slow. My suspicion is that InputB$() reads the bytes and converts them to Unicode using the current codepage via some stock logic for reading text from disk, then does another conversion back to ANSI using the current codepage.
You might be far ahead to use ADODB.Stream.LoadFromFile() to load complete ANSI text files. You can set the .Type = adTypeText and .Charset = the appropriate ANSI encoding as required to read Unicode back out of it via .ReadText(x) where x can be a number of bytes, or adReadAll or adReadLine. For line reading you can set .LineSeparator to adCR, adCRLF, or adLF as required.
Many Charset values are supported: KOI8 for Cyrillic, Big5 for Chinese, etc.
