Graphics.MeasureCharacterRanges giving wrong size calculations - gdi+

I'm trying to render some text into a specific part of an image in a Web Forms app. The text will be user entered, so I want to vary the font size to make sure it fits within the bounding box.
I have code that was doing this fine on my proof-of-concept implementation, but I'm now trying it against the assets from the designer, which are larger, and I'm getting some odd results.
I'm running the size calculation as follows:
StringFormat fmt = new StringFormat();
fmt.Alignment = StringAlignment.Center;
fmt.LineAlignment = StringAlignment.Near;
fmt.FormatFlags = StringFormatFlags.NoClip;
fmt.Trimming = StringTrimming.None;
int size = __startingSize;
Font font = __fonts.GetFontBySize(size);
while (GetStringBounds(text, font, fmt).IsLargerThan(__textBoundingBox))
{
    context.Trace.Write("MyHandler.ProcessRequest",
        "Decrementing font size to " + size + ", as size is "
        + GetStringBounds(text, font, fmt).Size()
        + " and limit is " + __textBoundingBox.Size());
    size--;
    if (size < __minimumSize)
    {
        break;
    }
    font = __fonts.GetFontBySize(size);
}
context.Trace.Write("MyHandler.ProcessRequest", "Writing " + text + " in "
    + font.FontFamily.Name + " at " + font.SizeInPoints + "pt, size is "
    + GetStringBounds(text, font, fmt).Size()
    + " and limit is " + __textBoundingBox.Size());
I then use the following line to render the text onto an image I'm pulling from the filesystem:
g.DrawString(text, font, __brush, __textBoundingBox, fmt);
where:
__fonts is a PrivateFontCollection,
PrivateFontCollection.GetFontBySize is an extension method that returns a FontFamily
RectangleF __textBoundingBox = new RectangleF(150, 110, 212, 64);
int __minimumSize = 8;
int __startingSize = 48;
Brush __brush = Brushes.White;
int size starts out at 48 and decrements within that loop
Graphics g has SmoothingMode.AntiAlias and TextRenderingHint.AntiAlias set
context is a System.Web.HttpContext (this is an excerpt from the ProcessRequest method of an IHttpHandler)
The other methods are:
private static RectangleF GetStringBounds(string text, Font font,
    StringFormat fmt)
{
    CharacterRange[] range = { new CharacterRange(0, text.Length) };
    StringFormat myFormat = fmt.Clone() as StringFormat;
    myFormat.SetMeasurableCharacterRanges(range);
    using (Graphics g = Graphics.FromImage(new Bitmap(
        (int) __textBoundingBox.Width - 1,
        (int) __textBoundingBox.Height - 1)))
    {
        g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
        g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
        Region[] regions = g.MeasureCharacterRanges(text, font,
            __textBoundingBox, myFormat);
        return regions[0].GetBounds(g);
    }
}
public static string Size(this RectangleF rect)
{
    return rect.Width + "×" + rect.Height;
}
public static bool IsLargerThan(this RectangleF a, RectangleF b)
{
    return (a.Width > b.Width) || (a.Height > b.Height);
}
Now I have two problems.
Firstly, the text sometimes insists on wrapping by inserting a line-break within a word, when it should just fail to fit and cause the while loop to decrement again. I can't see why it is that Graphics.MeasureCharacterRanges thinks that this fits within the box when it shouldn't be word-wrapping within a word. This behaviour is exhibited irrespective of the character set used (I get it in Latin alphabet words, as well as other parts of the Unicode range, like Cyrillic, Greek, Georgian and Armenian). Is there some setting I should be using to force Graphics.MeasureCharacterRanges only to be word-wrapping at whitespace characters (or hyphens)? This first problem is the same as post 2499067.
Secondly, in scaling up to the new image and font size, Graphics.MeasureCharacterRanges is giving me heights that are wildly off. The RectangleF I am drawing within corresponds to a visually apparent area of the image, so I can easily see when the text is being decremented more than is necessary. Yet when I pass it some text, the GetBounds call is giving me a height that is almost double what it's actually taking.
Using trial and error to set the __minimumSize to force an exit from the while loop, I can see that 24pt text fits within the bounding box, yet Graphics.MeasureCharacterRanges is reporting that the height of that text, once rendered to the image, is 122px (when the bounding box is 64px tall and it fits within that box). Indeed, without forcing the matter, the while loop iterates to 18pt, at which point Graphics.MeasureCharacterRanges returns a value that fits.
The trace log excerpt is as follows:
Decrementing font size to 24, as size is 193×122 and limit is 212×64
Decrementing font size to 23, as size is 191×117 and limit is 212×64
Decrementing font size to 22, as size is 200×75 and limit is 212×64
Decrementing font size to 21, as size is 192×71 and limit is 212×64
Decrementing font size to 20, as size is 198×68 and limit is 212×64
Decrementing font size to 19, as size is 185×65 and limit is 212×64
Writing VENNEGOOR of HESSELINK in DIN-Black at 18pt, size is 178×61 and limit is 212×64
So why is Graphics.MeasureCharacterRanges giving me a wrong result? I could understand it being, say, the line height of the font if the loop stopped around 21pt (which would visually fit, if I screenshot the results and measure it in Paint.Net), but it's going far further than it should be doing because, frankly, it's returning the wrong damn results.
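One idea that might help with the first problem, the mid-word breaks (a hedged suggestion, not something I've verified against these assets): GDI+ generally only breaks inside a word once that single word is wider than the layout rectangle, so the longest word could be measured with no layout constraint, and "longest word too wide" treated as "doesn't fit", forcing another decrement. The helper name below is made up.
private static bool LongestWordFits(Graphics g, string text, Font font, RectangleF box)
{
    foreach (string word in text.Split(' '))
    {
        // Measured with no layout constraint, so the word can never be wrapped
        SizeF wordSize = g.MeasureString(word, font);
        if (wordSize.Width > box.Width)
        {
            return false;   // DrawString would have to break this word mid-word
        }
    }
    return true;
}
Combined with the existing check, the while loop would then keep decrementing until both the measured bounds and the longest word fit.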

I have a similar problem. I want to know how big the text I'm drawing is going to be, and where it's going to appear, EXACTLY. I haven't had the line-break problem, so I don't think I can help you there. I had the same problems you had with all the various measuring techniques available, including ending up with MeasureCharacterRanges, which worked okay for the left and right, but not at all for the height and top. (Playing with the baseline can work well for some rare applications though.)
I've ended up with a very inelegant, inefficient, but working solution, at least for my use case. I draw the text on a bitmap, check the bits to see where they ended up, and that's my range. Since I'm mostly drawing small fonts and short strings, it's been fast enough for me (especially with the memoization I added). Maybe this won't be exactly what you need, but maybe it can lead you down the right track anyway.
Note that it currently requires compiling the project with unsafe code allowed (the /unsafe compiler switch, or "Allow unsafe code" in the project's build settings), as I'm trying to squeeze every bit of efficiency out of it, but that constraint could be removed if you wanted to. Also, it's not as thread-safe as it could be right now; you could easily add that if you needed it.
Dictionary<Tuple<string, Font, Brush>, Rectangle> cachedTextBounds = new Dictionary<Tuple<string, Font, Brush>, Rectangle>();

/// <summary>
/// Determines bounds of some text by actually drawing the text to a bitmap and
/// reading the bits to see where it ended up. Bounds assume you draw at 0, 0. If
/// drawing elsewhere, you can easily offset the resulting rectangle appropriately.
/// </summary>
/// <param name="text">The text to be drawn</param>
/// <param name="font">The font to use when drawing the text</param>
/// <param name="brush">The brush to be used when drawing the text</param>
/// <returns>The bounding rectangle of the rendered text</returns>
private unsafe Rectangle RenderedTextBounds(string text, Font font, Brush brush) {
    // First check memoization
    Tuple<string, Font, Brush> t = new Tuple<string, Font, Brush>(text, font, brush);
    try {
        return cachedTextBounds[t];
    }
    catch(KeyNotFoundException) {
        // not cached
    }

    // Draw the string on a bitmap
    Rectangle bounds = new Rectangle();
    Size approxSize = TextRenderer.MeasureText(text, font);
    using(Bitmap bitmap = new Bitmap((int)(approxSize.Width*1.5), (int)(approxSize.Height*1.5))) {
        using(Graphics g = Graphics.FromImage(bitmap))
            g.DrawString(text, font, brush, 0, 0);

        // Unsafe LockBits code takes a bit over 10% of the time of the safe GetPixel code
        BitmapData bd = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        byte* row = (byte*)bd.Scan0;

        // Find the left edge: the first column containing a pixel with a non-zero alpha channel, i.e. one that isn't fully transparent
        for(int x = 0; x < bitmap.Width; x++)
            for(int y = 0; y < bitmap.Height; y++)
                if(((byte*)bd.Scan0)[y*bd.Stride + 4*x + 3] != 0) {
                    bounds.X = x;
                    goto foundX;
                }
        foundX:

        // Right edge
        for(int x = bitmap.Width - 1; x >= 0; x--)
            for(int y = 0; y < bitmap.Height; y++)
                if(((byte*)bd.Scan0)[y*bd.Stride + 4*x + 3] != 0) {
                    bounds.Width = x - bounds.X + 1;
                    goto foundWidth;
                }
        foundWidth:

        // Top edge
        for(int y = 0; y < bitmap.Height; y++)
            for(int x = 0; x < bitmap.Width; x++)
                if(((byte*)bd.Scan0)[y*bd.Stride + 4*x + 3] != 0) {
                    bounds.Y = y;
                    goto foundY;
                }
        foundY:

        // Bottom edge
        for(int y = bitmap.Height - 1; y >= 0; y--)
            for(int x = 0; x < bitmap.Width; x++)
                if(((byte*)bd.Scan0)[y*bd.Stride + 4*x + 3] != 0) {
                    bounds.Height = y - bounds.Y + 1;
                    goto foundHeight;
                }
        foundHeight:

        bitmap.UnlockBits(bd);
    }
    cachedTextBounds[t] = bounds;
    return bounds;
}
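For what it's worth, feeding that into a shrink-to-fit loop like the one in the question might look roughly like this (a hypothetical sketch; the field names just mirror the question's):
int size = __startingSize;
Font font = __fonts.GetFontBySize(size);
Rectangle bounds = RenderedTextBounds(text, font, __brush);
while ((bounds.Width > __textBoundingBox.Width || bounds.Height > __textBoundingBox.Height)
       && size > __minimumSize)
{
    size--;
    font = __fonts.GetFontBySize(size);
    bounds = RenderedTextBounds(text, font, __brush);
}
// size now holds the largest size whose rendered pixels fit the box (or __minimumSize)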

Ok so 4 years late but this question EXACTLY matched my symptoms and I've actually worked out the cause.
There is most certainly a bug in MeasureString AND MeasureCharacterRanges.
The simple answer is:
Make sure you divide your width restriction (the int width in MeasureString, or the Size.Width of the bounding rect in MeasureCharacterRanges) by 0.72. When you get your results back, multiply each dimension by 0.72 to get the REAL result.
int measureWidth = Convert.ToInt32((float)width/0.72);
SizeF measureSize = gfx.MeasureString(text, font, measureWidth, format);
float actualHeight = measureSize.Height * (float)0.72;
or
float measureWidth = width/(float)0.72;
float measureHeight = height/(float)0.72;
// format must already have had SetMeasurableCharacterRanges called on it
Region[] regions = gfx.MeasureCharacterRanges(text, font,
    new RectangleF(0, 0, measureWidth, measureHeight), format);
float actualHeight = 0;
if (regions.Length > 0)
{
    actualHeight = regions[0].GetBounds(gfx).Size.Height * (float)0.72;
}
The explanation (as far as I can figure out) is that something to do with the context triggers an inch-to-point conversion (× 72/100) in the Measure methods that doesn't trigger in the DrawString method. When you pass in the ACTUAL width limitation, it adjusts this value, so the MEASURED width limitation is, in effect, shorter than it should be. Your text then wraps earlier than it is supposed to, and so you get a taller height result than expected. Unfortunately the conversion applies to the returned height as well, so it's a good idea to 'unconvert' that value too.
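If you want to keep the workaround in one place, it can be wrapped in a small helper along these lines (a sketch only; the 0.72 constant is the empirical factor described above, and the method name is made up):
private static SizeF MeasureStringCorrected(Graphics gfx, string text, Font font,
    int widthLimit, StringFormat format)
{
    const float factor = 0.72f;                                // 72/100, as described above
    int measureWidth = Convert.ToInt32(widthLimit / factor);   // widen the limit before measuring
    SizeF measured = gfx.MeasureString(text, font, measureWidth, format);
    return new SizeF(measured.Width * factor, measured.Height * factor);  // scale the result back down
}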

Could you try removing the following line?
fmt.FormatFlags = StringFormatFlags.NoClip;
The documentation for that flag says: "Overhanging parts of glyphs, and unwrapped text reaching outside the formatting rectangle are allowed to show. By default all text and glyph parts reaching outside the formatting rectangle are clipped."
That's the best I can come up with for this :(

I also had some problems with the MeasureCharacterRanges method. It was giving me inconsistent sizes for the same string and even the same Graphics object. Then I discovered that it depends on the value of the layoutRect parameter; I can't see why, and in my opinion it's a bug in the .NET code.
For example, if layoutRect was completely empty (all values set to zero), I got correct values for the string "a": the size was {Width=8.898438, Height=18.10938} using a 12pt MS Sans Serif font.
However, when I set the value of the 'X' property of the rectangle to a non-integer number (like 1.2), it gave me {Width=9, Height=19}.
So I really think there is a bug when you use a layout rectangle with a non-integer X coordinate.
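To reproduce the comparison above, something like this sketch should do (g is any Graphics, and font is the 12pt MS Sans Serif mentioned; the exact numbers will vary by machine):
string text = "a";
CharacterRange[] ranges = { new CharacterRange(0, text.Length) };
StringFormat fmt = new StringFormat();
fmt.SetMeasurableCharacterRanges(ranges);
// Layout rectangle with all values set to zero
RectangleF zeroRect = new RectangleF(0f, 0f, 0f, 0f);
SizeF a = g.MeasureCharacterRanges(text, font, zeroRect, fmt)[0].GetBounds(g).Size;
// The same call, but with a non-integer X coordinate
RectangleF fractionalRect = new RectangleF(1.2f, 0f, 0f, 0f);
SizeF b = g.MeasureCharacterRanges(text, font, fractionalRect, fmt)[0].GetBounds(g).Size;
// On the machine described above, a was {8.898438, 18.10938} and b was {9, 19}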

To convert from points to pixels (at your screen resolution) you need to divide by 72 and multiply by the DPI, for example:
graphics.DpiY * text.Width / 72
Red Nightengale was really close, because graphics.DpiY is usually 96 for screen resolutions.
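In helper form, the conversion both ways looks like this (a small sketch; the method names are made up, and 96 DPI is just the typical screen value):
static float PointsToPixels(Graphics graphics, float points)
{
    // 72 points per inch; DpiY pixels per inch on this Graphics (commonly 96 on screen)
    return points * graphics.DpiY / 72f;
}
static float PixelsToPoints(Graphics graphics, float pixels)
{
    return pixels * 72f / graphics.DpiY;
}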

Related

JavaFx ImageViewer from unsigned short array

I want to display an image received in a short[] of pixels from a server.
The server (C++) writes the image as an unsigned short[] of pixels (12-bit depth).
My Java application gets the image via a CORBA call to this server.
Since Java does not have an unsigned short type, the pixels are stored as (signed) short[].
This is the code I'm using to obtain a BufferedImage from the array:
private WritableImage loadImage(short[] pixels, int width, int height) {
    int[] intPixels = new int[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        intPixels[i] = (int) pixels[i];
    }
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, width, height, intPixels);
    return SwingFXUtils.toFXImage(image, null);
}
And later:
WritableImage orgImage = convertShortArrayToImage2(image.data, image.size_x, image.size_y);
//load it into the widget
Platform.runLater(() -> {
    imgViewer.setImage(orgImage);
});
I've checked that width=1280 and height=1024, and the pixels array is 1280x1024, which matches the raster height and width.
However I'm getting an array out of bounds error in the line:
raster.setPixels(0, 0, width, height, intPixels);
I have tried ALL the image types, and all of them produce the same error except for:
TYPE_USHORT_GRAY: which I thought would be the one, but it shows an all-black image
TYPE_BYTE_GRAY: which shows the image in negative(!) and with a lot of grain(?)
TYPE_BYTE_INDEXED: which looks like the above but colorized in a funny way
I have also tried masking the bits when converting from short to int, without any difference:
intPixels[i] = (int) pixels[i] & 0xffff;
So... I'm quite frustrated after spending days looking for a solution on the internet. Any help is very welcome.
Edit: The following is an example of the images received, converted to JPG on the server side. Not sure if it is useful, since I think it was produced with pixel rescaling (sqrt):
Well, finally I solved it.
Probably not the best solution, but it works and could help someone else.
Since the image is greyscale with 12-bit depth, I used a BufferedImage of type TYPE_BYTE_GRAY, but I had to downsample it to 8 bits, scaling the array of pixels from 0-4095 to 0-255.
I had an issue establishing the upper and lower limits of the scale. I tested with the average of the n highest/lowest values, which worked reasonably well, until someone sent me a link to a Java program translating the zscale algorithm (used in the DS9 tool, for example) for getting the limits of the range of greyscale values to be displayed:
find it here
From that point I modified the previous code and it worked like a charm:
//https://github.com/Caltech-IPAC/firefly/blob/dev/src/firefly/java/edu/caltech/ipac/visualize/plot/Zscale.java
Zscale.ZscaleRetval retval = Zscale.cdl_zscale(pixels, width, height,
        bitsVal, contrastVal, opt_sizeVal, len_stdlineVal, blankValueVal);
double Z1 = retval.getZ1();
double Z2 = retval.getZ2();
try {
    int[] ints = new int[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        if (pixels[i] < Z1) {
            pixels[i] = (short) Z1;
        } else if (pixels[i] > Z2) {
            pixels[i] = (short) Z2;
        }
        ints[i] = ((int) ((pixels[i] - Z1) * 255 / (Z2 - Z1)));
    }
    BufferedImage bImg
            = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    bImg.getRaster().setPixels(0, 0, width, height, ints);
    return SwingFXUtils.toFXImage(bImg, null);
} catch (Exception ex) {
    System.out.println(ex.getMessage());
}
return null;

Lines thickness changes spontaneously when scaling QGraphicsView

I am drawing lines in Qt using the Graphics View framework. Since I want my picture to take up the same portion of space when the window is resized, I override MainWindow::resizeEvent so that the graphics view is rescaled according to the resize event:
void MainWindow::resizeEvent(QResizeEvent *event) {
    int w = event->size().width(), h = event->size().height();
    int prev_w = event->oldSize().width(), prev_h = event->oldSize().height();
    if (prev_w != -1) {
        int s1 = std::min(prev_w, prev_h), s2 = std::min(w, h);
        qreal k = (qreal)s2 / s1;
        std::cerr << k << std::endl;
        ui->graphicsView->scale(k, k);
    }
}
However, doing so, my lines (which should have a thickness of 1 pixel) sometimes have a different thickness after a resize. As I understand it, this happens because the coordinates of the objects, once transformed into the GraphicsView, are real numbers, so lines are sometimes drawn with a different number of pixels. That is unacceptable! I want lines to have the same 1-pixel thickness all the time.
So, my question is: what is the usual solution for this problem? For now (based on my assumption above) I can only think of deleting all the objects and creating new ones with integer coordinates, rescaled manually.
You need to set your line drawing to "cosmetic" in the QPen. This makes the lines non-scalable. Otherwise, Qt scales the line widths along with the scaling of the view. Look up QPen::setCosmetic. By default, drawing lines is not cosmetic.

Arduino Adafruit NeoMatrix library

I am testing the Adafruit_NeoMatrix library with the attached example:
https://github.com/adafruit/Adafruit_NeoMatrix/blob/master/examples/tiletest/tiletest.pde
and I can't figure out the relation between the text length and the if statement:
if(--x < -36) {
A longer text means the number "36" has to increase, but I don't really see the relation.
The -36 defines the maximum displacement based on font width, screen width and text length. The standard font of Adafruit_GFX is 6px per character. You need this cursor value to render the font characters correctly.
Variables you need...
char exampleText[32] = "This is a test";
int pixelPerChar = 6;
int maxDisplacement;
Calculate the maximum displacement at the beginning ...
void setup()
{
    maxDisplacement = strlen(exampleText) * pixelPerChar + matrix.width();
    //...
}
In the loop function...
//...
if (--x < -maxDisplacement)
{
    x = matrix.width();
}
//...
It would be easier to look at this if you write the if statement in the following equivalent way:
x--;
if(x < -36) { ... }
x is the cursor location, which is the beginning of the string. At the start it is equal to the width of the matrix, which means the string is hidden just off the right edge of the screen. Each iteration the cursor moves one step to the left, until it reaches the coordinate -36. If the string is shorter than 36 pixels, it is by then hidden off the left edge of the screen. Then the whole routine is reinitialized with a different color.

Can I make QPainter fonts operate in the same units as everything else?

I started with this
void draw_text (QPainter & p, const QString & text, QRectF target)
{
    float scale = calculate_font_scale (p, text, target); // about 0.0005
    QFont f = p .font ();
    float old_size = f .pointSizeF ();
    f .setPointSizeF (old_size * scale);
    p .setFont (f);
    // this prints the new font size correctly
    qWarning ("old: %f, new: %f", old_size, p .font () .pointSizeF ());
    // but that doesn't seem to affect this at all
    p .drawText (position, text);
}
The QPainter's font size has been correctly updated, as the qWarning line indicates, but the text draws much, much too big. I think this is because the QPainter coordinate system has been zoomed in quite a lot, and it seems setPointSizeF only works with sizes of at least 1. By eye it seems that the font is one "unit" high, so I'll buy that explanation, although it's stupid.
I experimented with using setPixelSize instead, and although p.fontMetrics().boundingRect(text) yields a sane-looking answer, it is given in pixel units. One requirement for the above function is that the bounding rect of the text is horizontally and vertically centred with respect to the target argument, which is in coordinates of a vastly different scale, so the arithmetic is no longer valid and the text is drawn miles off-screen.
I want to be able to transform the coordinate system arbitrarily, and if, at that point, one "unit" is a thousand pixels high and I'm drawing text in a 0.03x0.03 unit box, then I want the font to be 30 pixels high, obviously, but I need all my geometry to be calculated in general units all the time, and I need fontMetrics::boundingRect to be in these same general units.
Is there any way out of this or do I have to dick around with pixel calculations to appease the font API?
You simply have to undo whatever "crazy" scaling there was on the painter.
// Save the state
p.save();
// Translate the center of `target` to 0,0.
p.translate(-target.center());
// Scale so that the target has a "reasonable" size
qreal dim = 256.0;
qreal sf = dim / qMin(target.height(), target.width());
p.scale(sf, sf);
// Draw your text at a point size that makes sense at that scale
// (QPainter has no setPointSize, so set it on the font)
QFont f = p.font();
f.setPointSize(48);
p.setFont(f);
p.drawText(QRectF(-dim / 2, -dim / 2, dim, dim), Qt::AlignCenter | Qt::WordWrap, text);
// Restore the state
p.restore();

Color value with alpha of zero shows up as black

I'm using .NET 4.0. I don't know if this is a framework bug or if it's a GDI+ thing. I just discovered it while writing an app to swap color channels.
Let me try to explain the problem. I'm reading pixels from one bitmap, swapping the channels, and writing them out to another bitmap. (Specifically, I'm setting the output image's RGB values equal to the input image's alpha, and output's alpha equal to the input's green channel… or, to put it succinctly, A => RGB and G => A.) The code is as follows:
for (int y = 0; y < input.Height; y++)
{
    for (int x = 0; x < input.Width; x++)
    {
        Color srcPixel = input.GetPixel(x, y);
        int alpha = srcPixel.A;
        int green = srcPixel.G;
        Color destPixel = Color.FromArgb(green, alpha, alpha, alpha);
        output.SetPixel(x, y, destPixel);
    }
}
Similarly, I've tried this:
int color = green << 24 | alpha << 16 | alpha << 8 | alpha;
Color destPixel = Color.FromArgb(color);
output.SetPixel(x, y, destPixel);
For the most part, it works.
The problem: regardless of what the RGB values are, when alpha is zero, the resultant RGB value is always pure black (R:0, G:0, B:0). I don't know if this is some sort of FromArgb() "optimization" — using .NET Reflector, I don't see FromArgb() doing anything strange — or if Bitmap.SetPixel is the culprit — more likely since it defers to native code and I can't look at it. Either way, when alpha is zero, the pixel is black. This is not the behavior I expected. I need to keep RGB channels intact.
At first I thought it was a pre-multiplied alpha issue, because I'm loading DDS files using my home-brewed DDS loader (which I built to spec and which has never given me any issues), but when I specify an explicit alpha of 255, like this:
Color destPixel = Color.FromArgb(255, alpha, alpha, alpha);
...the RGB channels show up correctly — i.e., none of them turns out black — so it's definitely something within GDI+ that erroneously assumes RGB values can be safely ignored if the alpha is zero… which, to me, seems like a pretty stupid assumption, but, whatever.
Further exacerbating the problem is that the Color type is immutable, which makes sense for a structure, but it means I can't create a color and then assign the alpha… which, if SetPixel() is the culprit, wouldn't matter anyway. (I've tested this by getting the pixel again immediately after setting it and seeing the same results: zero alpha = zero RGB.)
So, my question: has anyone dealt with this issue and come up with a relatively simple workaround? In an effort to keep my dependencies down, I am loath to import a third-party image library, but since GDI+ is making buggy assumptions about my color channels, I may not have a choice.
Thanks for your help.
EDIT: I solved this, but I can't post the answer for another seven hours. Awesome.
Sorry for the delay. Anyway, I should have worked on this a bit longer before posting, because I found a solution about five or ten minutes later. To be clear, I didn't find a solution to the stated GDI+ issue, but I found a suitable workaround. I thought about how, in other API's, I would lock a surface and transfer bytes directly to another surface, so I took that approach. After a little help from MSDN, here's my code (sans error handling):
Bitmap input = Bitmap.FromFile(filename) as Bitmap;
int byteCount = input.Width * input.Height * 4;
var inBytes = new byte[byteCount];
var outBytes = new byte[byteCount];
var inBmpData = input.LockBits(new Rectangle(0, 0, input.Width, input.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(inBmpData.Scan0, inBytes, 0, byteCount);
for (int y = 0; y < input.Height; y++)
{
    for (int x = 0; x < input.Width; x++)
    {
        int offset = (input.Width * y + x) * 4;
        // byte blue = inBytes[offset];
        byte green = inBytes[offset + 1];
        // byte red = inBytes[offset + 2];
        byte alpha = inBytes[offset + 3];
        outBytes[offset] = alpha;
        outBytes[offset + 1] = alpha;
        outBytes[offset + 2] = alpha;
        outBytes[offset + 3] = green;
    }
}
input.UnlockBits(inBmpData);
Bitmap output = new Bitmap(input.Width, input.Height, PixelFormat.Format32bppArgb);
var outBmpData = output.LockBits(new Rectangle(0, 0, output.Width, output.Height), ImageLockMode.WriteOnly, output.PixelFormat);
Marshal.Copy(outBytes, 0, outBmpData.Scan0, outBytes.Length);
output.UnlockBits(outBmpData);
Notes: Marshal is under System.Runtime.InteropServices; BitmapData (inBmpData, outBmpData), ImageLockMode, and PixelFormat are under System.Drawing.Imaging.
Not only does this work perfectly, but it is phenomenally faster. I'll be using this technique from now on for all my channel swapping needs. (I've already used it in another, similar app.)
Sorry for the needless post. I at least hope this solution helps someone else.
