Create image from palette indexes with ImageSharp - .net-core

I have an array of bytes that represents the palette indexes of the pixels of an image, and I'm trying to convert this to an image with ImageSharp so I can save it later as a PNG, but I can't seem to find out how. Can anyone give me an idea of where to look? The palette is not important; I just need N different colors.

ImageSharp images represent the expanded pixel buffer, not references to a color palette. If you have the original palette, use it to create an input buffer of actual pixel data; a byte array as described is not, in real terms, a fully decoded image.
Reducing a color set to a maximum count is a question of quantization. There are methods that allow you to do this, as well as encoder options for saving images in indexed formats.
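For illustration, here is a minimal sketch (assuming 8-bit indexes, a known width and height, and an arbitrary generated palette, since any N distinct colors will do) that expands the indexes into real pixel data and asks the PNG encoder for indexed output; the method name and the palette formula are made up for the example:
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Png;
using SixLabors.ImageSharp.PixelFormats;

static void SaveIndexesAsPng(byte[] indexes, int width, int height, string path)
{
    // Any N distinct colors will do; this formula is arbitrary.
    var palette = new Rgba32[256];
    for (int i = 0; i < palette.Length; i++)
        palette[i] = new Rgba32((byte)(i * 37), (byte)(i * 73), (byte)(i * 151));

    // Expand the index buffer into a buffer of actual pixel data.
    var pixels = new Rgba32[width * height];
    for (int i = 0; i < pixels.Length; i++)
        pixels[i] = palette[indexes[i]];

    using var image = Image.LoadPixelData<Rgba32>(pixels, width, height);
    // Ask the PNG encoder to quantize and store the result in indexed form.
    image.Save(path, new PngEncoder { ColorType = PngColorType.Palette });
}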

Related

How to use lcms2 to check for out-of-gamut colors?

I'm converting images with embedded ICC profiles to the sRGB color space using LittleCMS. I want to efficiently detect when this conversion is lossy because the image contains pixels that are out of gamut for sRGB (e.g. the image uses Display P3 and contains saturated colors).
Note that I don't want to merely check if the embedded profile has a wide gamut, because the image pixels may not be taking advantage of its full gamut, and still fit in sRGB.
I know there's cmsSetAlarmCodes, but this seems like it could cause false positives, because there's nothing stopping the image from actually containing the same color that I set as my alarm color. Does LCMS have some out-of-band signal for this?
Another approach that comes to my mind is applying conversion twice: to sRGB and then back to the original color space, and checking how lossy it was. But that more than doubles processing time. Is there a more efficient way?
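For what it's worth, that round-trip idea is straightforward to sketch with the LittleCMS C API; the profile handle, the 8-bit packed RGB buffers, and the per-channel tolerance below are all assumptions for illustration:
#include <lcms2.h>
#include <stdlib.h>

/* Round-trip sketch: source -> sRGB -> source, then compare per channel. */
int is_srgb_conversion_lossy(cmsHPROFILE src_profile,
                             const unsigned char *pixels,
                             cmsUInt32Number pixel_count,
                             int tolerance)
{
    cmsHPROFILE srgb = cmsCreate_sRGBProfile();
    cmsHTRANSFORM to_srgb = cmsCreateTransform(src_profile, TYPE_RGB_8,
                                               srgb, TYPE_RGB_8,
                                               INTENT_RELATIVE_COLORIMETRIC, 0);
    cmsHTRANSFORM back = cmsCreateTransform(srgb, TYPE_RGB_8,
                                            src_profile, TYPE_RGB_8,
                                            INTENT_RELATIVE_COLORIMETRIC, 0);
    size_t n = (size_t)pixel_count * 3;
    unsigned char *tmp = malloc(n);
    unsigned char *rt = malloc(n);
    cmsDoTransform(to_srgb, pixels, tmp, pixel_count);
    cmsDoTransform(back, tmp, rt, pixel_count);

    int lossy = 0;
    for (size_t i = 0; i < n && !lossy; i++)
        if (abs((int)pixels[i] - (int)rt[i]) > tolerance)
            lossy = 1;

    free(tmp); free(rt);
    cmsDeleteTransform(to_srgb);
    cmsDeleteTransform(back);
    cmsCloseProfile(srgb);
    return lossy;
}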

Dicom - normalization and standardization

I am new to the field of medical imaging and trying to solve this (potentially basic) problem. For machine learning purposes, I am trying to standardize and normalize a library of DICOM images, to ensure that all images have the same rotation and are at the same scale (e.g. in mm). I have been playing around with the Mango viewer, and I understand that one can create transformation matrices that might be helpful in this regard. I have, however, the following basic questions:
I would have thought that a scaling of the image would have changed the pixel spacing in the image header. Does this tag not provide the distance between pixels, and should this not change as a result of scaling?
What is the easiest way to standardize a library of images (ideally in Python)? Is it possible, and should one, extract a mean pixel spacing across all images and then scale all images to match that mean? Or is there a smarter way to ensure consistency in scaling and rotation?
Many thanks in advance, W
Does this tag not provide the distance between pixels, and should this not change as a result of scaling?
Think of the image voxels as fixed units of space which are sampling your image. When you apply your transform, you are translating/rotating/scaling your image around within these fixed units of space. That is, the size and shape of the voxels don't change; they just sample different parts of your image.
You can resample your image by making your voxels bigger or smaller or changing their shape (pixel spacing), but this can be independent of the transform you are applying to the image.
What is the easiest way to standardize a library of images (ideally in Python)?
One option is FSL-FLIRT, although it only accepts data in NIFTI format, so you'd have to convert your DICOMs to NIFTI. There is also a Python interface to FSL.
Is it possible, and should one, extract a mean pixel spacing across all images and then scale all images to match that mean? Or is there a smarter way to ensure consistency in scaling and rotation?
I think you'd just have to pick a reference image to register all your other images to. There's no right answer: picking the highest-resolution image/voxel dimensions, or an average, or resampling into some other set of dimensions all sound reasonable.
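As a rough sketch of the resampling half of that (registration aside), assuming pydicom and scipy are available and picking an arbitrary 1 mm target spacing:
import pydicom
from scipy.ndimage import zoom

TARGET_SPACING_MM = 1.0  # arbitrary common reference; pick whatever suits your library

def resample_to_spacing(path, target=TARGET_SPACING_MM):
    ds = pydicom.dcmread(path)
    row_mm, col_mm = (float(v) for v in ds.PixelSpacing)  # tag (0028,0030)
    factors = (row_mm / target, col_mm / target)
    return zoom(ds.pixel_array, factors, order=1)  # bilinear resampling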

Different sizes of JPEG image in R

I am reading a JPEG image (input.jpg) and writing the same image back to disk (as output.jpg). Why does its size change? Can I retain the same size?
library(jpeg)
img <- readJPEG('input.jpg')
file.info('input.jpg')$size
writeJPEG(img,'output.jpg',1)
file.info('output.jpg')$size
#is different from input.jpg size
Well, what you're doing is not reading and writing back the same file. readJPEG decodes the compressed (lossy) JPEG data into a raster array; writeJPEG encodes it back again. To get approximately the same size, you should (at least) set the quality parameter to an appropriate value. See ?writeJPEG:
writeJPEG(image, target = raw(), quality = 0.7, bg = "white", color.space)
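For example, re-encoding at a couple of quality settings makes the effect visible (the exact sizes still depend on the unknown settings of the original file):
library(jpeg)
img <- readJPEG('input.jpg')
writeJPEG(img, 'out_q70.jpg', quality = 0.7)
writeJPEG(img, 'out_q95.jpg', quality = 0.95)
# higher quality -> larger file; neither will match input.jpg exactly
file.info(c('input.jpg', 'out_q70.jpg', 'out_q95.jpg'))$size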
There are a number of factors that affect the compression:
Sampling Rates
Quantization table selection
Huffman tables (optimal or canned)
Progressive or Sequential
If progressive, the breakdown of the scans.
Metadata included
If you do not compress using the same settings as the input, you will get a different size.
You can also get rounding errors in the decompression process that could cause slight differences.

QPainter::drawImage prints different size than QImage::save and print from Photoshop

I'm scaling a QImage, currently like so (I understand there may be more elegant ways):
img.setDotsPerMeterX(img.dotsPerMeterX() * 2);
img.setDotsPerMeterY(img.dotsPerMeterY() * 2);
When I save:
img.save("c:\\users\\me\\desktop\\test.jpg");
and subsequently open and print the image from Photoshop, it is, as expected, half of the physical size of the same image without the scaling applied.
However, when I simply print the scaled QImage, directly from code:
myQPainter.drawImage(0,0,img);
the image prints at the original physical size - not scaled to half the physical size.
I'm using the same printer in each case; and, as far as I can tell, the settings are consistent between both print cases.
Am I misunderstanding something? The end goal is to successfully scale and print the scaled image directly from code.
If we look at the documentation for setDotsPerMeterX, it states:
Together with dotsPerMeterY(), this number defines the intended scale and aspect ratio of the image, and determines the scale at which QPainter will draw graphics on the image. It does not change the scale or aspect ratio of the image when it is rendered on other paint devices.
I expect that the reason the latter case prints at the original size is that the image content was already drawn before the calls to set the dots per meter.
In contrast, when saving, it appears that the file you save to copies the dots-per-meter values you have set on the image, so the intended scale is preserved and Photoshop honours it when printing.
I would expect that creating a second QImage, setting its dots per meter, and then copying from the original into that second image would achieve the result you're looking for. Alternatively, you may just be able to set the dots per meter on the original QImage before loading its content.
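A minimal (untested) sketch of that suggestion:
#include <QImage>
#include <QPainter>

QImage scaledCopy(const QImage &img)
{
    // Set the dots per meter on the target *before* anything is drawn to it.
    QImage copy(img.size(), img.format());
    copy.setDotsPerMeterX(img.dotsPerMeterX() * 2);
    copy.setDotsPerMeterY(img.dotsPerMeterY() * 2);

    QPainter painter(&copy);
    painter.drawImage(0, 0, img); // pixel-for-pixel copy of the content
    painter.end();
    return copy;
}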

Do objects drawn by Flash Graphics class exist as objects?

Internally Flash obviously keeps a list of the primitives drawn using Graphics, so I wondered: if you have many such primitives in a Sprite, can you reposition/remove/alter individual items rather than clearing and redrawing everything? Or is this deeper into the bowels of Flash than you're allowed (or recommended) to go?
Drawing primitives aren't accessible to user code once they've been committed to the graphics context, but if you need fast drawing objects you should use Shapes instead of Sprites. Sprites are containers that can hold other sprites as well as a graphics context; Shapes are objects with only a graphics context and are non-interactive.
Sprite -> DisplayObjectContainer -> InteractiveObject -> DisplayObject
Shape -> DisplayObject
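A minimal sketch of the Shape approach (assumed to run inside a DisplayObjectContainer, e.g. a frame script):
var box:Shape = new Shape();
box.graphics.beginFill(0xFF0000);
box.graphics.drawRect(0, 0, 100, 100);
box.graphics.endFill();
addChild(box);
box.x = 50; // the whole Shape can be repositioned without redrawing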
Unfortunately, it is impossible: Once the items are drawn, you can only modify the full shape, but not the drawing itself.
To give you more of an explanation, I googled how Flash actually calculates display objects. Unfortunately, I couldn't find anything specific.
But I found enough to make an educated guess: [EDIT]: I found a very interesting PDF on the Anatomy of a Flash. It explains the rendering tree and how graphics objects are treated internally.
I know for a fact that all shape tweens created in the IDE are compiled into shape sequences (each frame is stored as a separate image). And it makes sense to do it that way: each new frame of the movie must be calculated - all vector images are added to a tree, each is rendered as a bitmap, and the results are combined and drawn as one final bit plane in order to be displayed. So it is only logical to do every possible shape calculation at compile time, rather than at runtime.
Then again, a bitmap would store 32 bits of color information for every single pixel, while vectors are stored in simple values, storing x and y coordinates, line style, fill style, etc. Some vectors can be grouped, so that for more complex shapes, line and fill styles only have to be stored once, and only coordinates are necessary for the rest. Also, primitive shapes like circles and rectangles require less information than objects combined from many individual points and lines.
[EDIT]: The above mentioned PDF says this:
Both AS3 and AS3 DisplayObjects are converted to SObjects internally. SObjects have a character associated. Based on the character type it has different drawing methods, but it all resumes to drawing fills with different source of colors.
It would take a very, very complex vector shape to require more single pieces of information than its bitmap representation, provided it is larger than a few pixels in width and height. Therefore, keeping simple shapes as vector representations consumes considerably less memory than storing full bitmaps - and so it is logical not to do shape rendering at compile time, as well (except for complicated shapes - then the "cacheAsBitmap" property comes into play).
Consider what I've said about vectors, line style and fill style, etc. - sounds quite a lot like the sequence of commands we have to write when drawing in ActionScript, right? I would assume these commands are simply converted 1:1 into exactly the kind of vector representations I was talking about. This would make the compiler faster, the binaries smaller, and the handling of both the IDE shapes and the AS shapes exactly the same.
[EDIT]: Turns out I was not quite right on that:
Edge & Colors: the LSObjects tree is traversed and a list of edges is created; edges have colors associated; strokes are converted to edges. Colors are sources of display data, e.g. Bitmaps, Video, Solid fills, Gradients.
Rasterization: edges are sorted and a color is calculated for each pixel - pixels are touched only once.
Presentation: after the main rasterizer is done painting, the memory buffer is copied to the screen.
Now imagine all of those vectors were freely editable:
The sequence of commands would no longer be final! What if you were to add, move or erase one at runtime? For example: Having a rectangle inside of a filled rectangle subtracts the inner shape from the outer shape. What if you moved one of the corner points to the outside? The result would be a completely different shape! Or if you added one point? You could not store the shape as a rectangle any longer, requiring 5 point items to draw the same thing that once had been one rect item. In short: All the groupings and memory optimizations would no longer work. And it would also slow down runtime graphics considerably. That's why it is only allowed to add new elements to the shape, but not to modify them once they are drawn. And why you have to clear and redraw your graphics, if you want existing shapes to change.
[EDIT]: You can always do complex stuff by doing the calculations yourself. I still believe it was a good decision not to integrate those into basic graphics functionality.
With Flash CS5, and the XFL file format, this data is now accessible as XML.
For my example, you could make a tile map composed of 'Graphic' items from a MovieClip with various frames being various tiles. Instantly you come to the problem of needing to access those inaccessible frame indexes from 'Shape' objects.
If you put them into a symbol (even one that is not exported), you can find it in a file in your LIBRARY folder (after saving as 'xfl'). It mirrors the Library contents.
<DOMSymbolItem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://ns.adobe.com/xfl/2008/" name="Tileset_Level_Test" itemID="4e00fe7f-00000450" linkageExportForAS="true" linkageClassName="Tileset_Level_Test" sourceLibraryItemHRef="Symbol 1" lastModified="1308719656" lastUniqueIdentifier="3">
<timeline>
<DOMTimeline name="Tileset_Level_Test">
<layers>
<DOMLayer name="Layer 1" color="#4FFF4F" current="true" isSelected="true" autoNamed="false">
<frames>
<DOMFrame index="0" keyMode="9728">
<elements>
<DOMSymbolInstance libraryItemName="Tileset_Test" name="" symbolType="graphic" firstFrame="8" loop="play once">
<transformationPoint>
<Point/>
</transformationPoint>
</DOMSymbolInstance>
<DOMSymbolInstance libraryItemName="Tileset_Test" name="" symbolType="graphic" firstFrame="4" loop="play once">
<matrix>
<Matrix tx="48"/>
</matrix>
<transformationPoint>
<Point/>
</transformationPoint>
</DOMSymbolInstance>
... lots more...
</elements>
</DOMFrame>
</frames>
</DOMLayer>
</layers>
</DOMTimeline>
</timeline>
</DOMSymbolItem>
The XML looks quite complex, but you can process it down to something much simpler with the XML class and, for instance, construct a collision mask from a MovieClip mirroring those frame indexes, or identify spawn points and other special classes of things. Or you might process the data and draw the whole map yourself, having only needed a way to build it visually. All you might really care about are the tx,ty attributes in the Matrix (for where a tile is placed) and the 'firstFrame' attribute on the 'DOMSymbolInstance' (for which tile).
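As a hypothetical sketch, assuming the XFL data above has been loaded into an XML object named doc, E4X makes pulling out those two attributes fairly painless:
var xfl:Namespace = new Namespace("http://ns.adobe.com/xfl/2008/");
for each (var inst:XML in doc..xfl::DOMSymbolInstance) {
    var tile:int = int(inst.@firstFrame);                     // which tile
    var tx:Number = Number(inst.xfl::matrix.xfl::Matrix.@tx); // where it goes
    var ty:Number = Number(inst.xfl::matrix.xfl::Matrix.@ty); // missing attributes read as 0
    // ...record the tile placement, build a collision mask, etc....
}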
Anyways, you could preprocess it with an AIR applet to make just the data you want, and then either poop out a .as file to include in the project, or simplified XML, or whatever you like. Or use whatever other tools/languages you prefer, and add that processing step to your build scripting.
The xfl file format is also handy for tracking down and fixing all manner of things which Flash is too broken/buggy/AFU to fix, such as leftover font references in obscure parts of parts of parts.... You can either fix them in the library, or literally delete the file of the offending part, or edit the XML by hand. Grep and sed and find and xargs are all your friends for these tasks. Especially for things like snapping all coordinates to integer values, or proper cell boundaries, since all of Flash 'snapping' is horribly broken, too. Piping XML files through sed can be quite hazardous to files that you have not backed up, but quite rewarding for evil people who know what they're up to, and use version control.
Well, every DisplayObject has only one graphics reference. So if you want to move (or scale, etc.) several graphic objects in one Sprite, I suggest you use the display tree as it was intended.
Just add several children (Sprites, MovieClips, ...) to one Sprite, each being redrawn only when necessary.
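A quick sketch of that:
var container:Sprite = new Sprite();
for (var i:int = 0; i < 5; i++) {
    var tile:Shape = new Shape();
    tile.graphics.beginFill(0x00AAFF);
    tile.graphics.drawCircle(0, 0, 10);
    tile.graphics.endFill();
    tile.x = i * 25;
    container.addChild(tile);
}
addChild(container);
container.getChildAt(2).x += 40; // move one primitive; no clear()/redraw needed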
