PdfSharp: Why do I need to multiply positions and sizes by 1.25 to get the right result? - pdfsharp

I am evaluating PdfSharp to create PDF documents. While comparing it with MigraDoc I figured that I had to multiply each position (x, y) or size by 1.25 to get the intended result. For example if I set the page margins to 2 cm without the correction, I get margins of roughly 1.6 cm.
page.TrimMargins = new TrimMargins
{
    All = XUnit.FromCentimeter(2)
};
When I multiply 2 with 1.25 I get the intended 2 cm margins:
page.TrimMargins = new TrimMargins
{
    All = XUnit.FromCentimeter(2 * 1.25)
};
The same goes for font sizes: I have to multiply the size by 1.25 to get the same size that MigraDoc or even Word would print.
By the way, my system does not use custom display scaling or a text size other than 100% (my guess was that this could be the cause).
Can someone explain what's going on here?
Edit:
With the help of TomasH I found out that the sizing is perfect when printing without auto-scaling, so PdfSharp apparently creates pages that are too large. When doing the same with MigraDoc the PDF is also a bit too large, as I found out, but by a much smaller amount. The question that remains is why the document is too large and what MigraDoc does to correct the page size.
Here is my complete test code that only gives the correct positioning and sizing with the correction factor:
using (PdfDocument document = new PdfDocument())
{
    // Create an empty page size A4 with defined margins
    PdfPage page = CreatePage(document);
    using (XGraphics graphics = XGraphics.FromPdfPage(page))
    {
        const double sizeCorrectionFactor = 1.25;

        // Define the page margins. They must be multiplied by 1.25 to be correct!?
        page.TrimMargins = new TrimMargins
        {
            All = XUnit.FromCentimeter(2 * sizeCorrectionFactor)
        };

        // Draw a string. The font size needs to be multiplied by 1.25 to be correct!?
        double x = 0;
        double y = 0;
        graphics.DrawRectangle(XPens.Black, XBrushes.White, 0, 0, page.Width, page.Height);
        graphics.DrawString("PdfSharp Measure Demo", new XFont("Verdana", 20 * sizeCorrectionFactor), XBrushes.Navy, x, y, XStringFormats.TopLeft);

        // Draw a rectangle. Position and size must be multiplied by 1.25 to be correct!?
        x = XUnit.FromCentimeter(2 * sizeCorrectionFactor);
        y = XUnit.FromCentimeter(2 * sizeCorrectionFactor);
        double width = XUnit.FromCentimeter(5 * sizeCorrectionFactor);
        double height = XUnit.FromCentimeter(5 * sizeCorrectionFactor);
        graphics.DrawRectangle(XPens.Red, XBrushes.Silver, x, y, width, height);
    }

    string pdfFilePath = Path.GetTempFileName() + ".pdf";
    document.Save(pdfFilePath);
    Process.Start(pdfFilePath);
}

I found the answer: I got the meaning of the (not well documented) TrimMargins property wrong. Setting trim margins adds the size of the margins to the width and height of the page. This means that when trim margins are set, the page becomes too large and usually gets scaled down when it is displayed or printed. I set 2 cm trim margins, making the page 1.25 times too large. The solution is to leave all trim margins at 0 and account for any desired page margins in the drawing code instead.
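For reference, here is a minimal sketch of that approach. It reuses the CreatePage helper from the test code above, leaves TrimMargins untouched (i.e. zero), and treats the 2 cm margin purely as a drawing offset; no correction factor is needed:
using (PdfDocument document = new PdfDocument())
{
    PdfPage page = CreatePage(document);   // plain A4 page, trim margins stay at 0
    using (XGraphics graphics = XGraphics.FromPdfPage(page))
    {
        double margin = XUnit.FromCentimeter(2);   // desired page margin as a drawing offset

        // Text starts at the top-left corner of the content area.
        graphics.DrawString("PdfSharp Measure Demo", new XFont("Verdana", 20),
            XBrushes.Navy, margin, margin, XStringFormats.TopLeft);

        // A 5 x 5 cm rectangle positioned relative to the margin.
        double size = XUnit.FromCentimeter(5);
        graphics.DrawRectangle(XPens.Red, XBrushes.Silver, margin, margin + XUnit.FromCentimeter(1), size, size);
    }
    document.Save("measure-demo.pdf");
}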

Related

GDI - Unexpected result for the OffsetRgn() function

I'm using Embarcadero RAD Studio C++ builder XE7.
For a drawing function using the Windows GDI, I need to add a clip region to the device context of a canvas.
While testing my code, I noticed that the clipping region was sometimes smaller than the expected size. I searched for the cause and found a strange behavior of the OffsetRgn() function which left me a little puzzled.
To apply the clip region, I use a code similar to the following:
std::unique_ptr<TBitmap> pBitmap(new TBitmap());
pBitmap->PixelFormat = pf32bit;
pBitmap->AlphaFormat = afDefined;
pBitmap->SetSize(60, 7);
TCanvas* pCanvas = pBitmap->Canvas;
::SelectClipRgn(pCanvas->Handle, NULL);
const TRect sourceRect = pCanvas->ClipRect;
HRGN pClipRegion = ::CreateRectRgn(50, -2, 60, 8);
::SelectClipRgn(pCanvas->Handle, pClipRegion);
const TRect intermediateRect = pCanvas->ClipRect;
const int deltaX = pCanvas->ClipRect.Left - 50;
const int deltaY = pCanvas->ClipRect.Top - (-2);
::OffsetRgn(pClipRegion, -deltaX, -deltaY);
::SelectClipRgn(pCanvas->Handle, pClipRegion);
const TRect finalRect = pCanvas->ClipRect;
NOTE: written like this and out of its context, the above code does not really make sense, and I know it's illogical. Please do not judge its quality; that is not the purpose of my question. I gathered several excerpts and grouped them into executable code that demonstrates the problem.
The hardcoded values are an example of values I get in my application when the problem occurs. If I execute the above code, I measure:
left = 0, top = 0, right = 60, bottom = 7 in sourceRect value
left = 50, top = 0, right = 60, bottom = 7 in intermediateRect value
left = 50, top = 0, right = 60, bottom = 6 in finalRect
However, I expected the bottom value to also equal 7 in finalRect, which is the canvas limit, as I only moved the region and nothing else. So why does its value suddenly become smaller than expected?
I finally got to the bottom of the case. Based on this post:
Why does calling GetRgnBox on the result of GetClipRgn return a very different rect than GetClipRect?
The clip region is applied in logical units relative to the canvas origin, whereas the clipping rectangle I tried to apply was measured in pixels from a [0, 0] origin.
Because my code incorrectly assumed that the origin was always [0, 0] for both systems, the resulting region could be incorrect in several special cases, causing the strange shift I sometimes noticed between the clipping that was really applied and the one I expected.
Measuring the canvas origin with the GetWindowOrgEx() function highlighted the issue.
For the case shown above, however, the issue arose because the clip region was moved by an offset of -2: the region originally spanned -2 to 8 vertically, so after the offset its top became -4 and its bottom 6. When the region is selected it is clipped to the canvas bounds, resulting in a clip with a top of 0 and a bottom of 6.

Matter-js - How to get width and height of rectangle?

Matter-js - How to get width and height of a rectangle?
I need to know whether there is a distance method implemented in Matter-js.
// part.vertices[0] and part.vertices[1]
I want to integrate a tiles option.
This is what the critical part looks like (I override Render.bodies, which is the part of most interest to me):
for (let x = 0; x < this.tiles; x++) {
    c.drawImage(
        texture,
        texture.width * -part.render.sprite.xOffset * part.render.sprite.xScale,
        texture.height * -part.render.sprite.yOffset * part.render.sprite.yScale,
        texture.width * part.render.sprite.xScale,
        texture.height * part.render.sprite.yScale);
}
const { min, max } = part.bounds
It will contain what you need in { x, y }; just subtract max.x - min.x and max.y - min.y.
I went with a solution very similar to the following:
var width = 30;
var height = 30;
var rect = Bodies.rectangle(150, 100, width, height, {density:0.01, className:"brick", width:width, height:height});
console.log(rect.className, rect.width); // "brick", 30
I decided to carry the original width/height information along with other custom properties such as className.
The reason is that bounds is affected by the rotation of any non-perfectly-circular object, e.g. a rotated rectangle's bounds could be up to ~30% wider than its actual width.
There are two solutions that I've found.
1- Create a class to wrap the matter.js body, which will also hold onto the height and width. ie:
class rectWrapper {
    constructor(x, y, width, height, options) {
        this.width = width
        this.height = height
        this.body = Matter.Bodies.rectangle(x, y, width, height, options)
    }
}
2- Another way is to use the magic of math to determine the distance between two coordinate points, using Body.vertices[0] and Body.vertices[1] for the width, and Body.vertices[0] and Body.vertices[3] for height. This would also account for any rotation. This link explains it clearly, for 2 and 3 dimensions:
https://sciencing.com/calculate-distance-between-two-coordinates-6390158.html
I would recommend writing your own "utility function" to do this. A heavy handed example might look like this:
function distance(x1, y1, x2, y2) {
    var x = Math.abs(x1 - x2)
    var y = Math.abs(y1 - y2)
    return Math.sqrt((x * x) + (y * y))
}
So a call might look like:
var rect = Matter.Bodies.rectangle(0, 0, 10, 50)
var width = distance(rect.vertices[0].x, rect.vertices[0].y, rect.vertices[1].x, rect.vertices[1].y)
var height = distance(rect.vertices[0].x, rect.vertices[0].y, rect.vertices[3].x, rect.vertices[3].y)
Alternatively, if you happen to be using p5.js as your renderer, you can use p5.dist() which takes x1, y1, x2, y2 as arguments and returns the distance (basically the same as the function above):
https://p5js.org/reference/#/p5/dist
Note, this will only work for rectangles. If you're using different kinds of geometry, I would probably just make a wrapper class myself.

How to calculate the height of NSAttributedString, given width and number of lines?

I want to display 3 lines of NSAttributedString. Is there a way to figure out the needed height, based on width and number of lines?
And I don't want to create a UILabel to do the size calculation, since I want the calculation to be done on a background thread.
I wonder why this is still unanswered. Anyhow, here's the fastest method that works for me.
Make an NSAttributedString Category called "Height". This should generate two files titled "NSAttributedString+Height.{h,m}"
In the .h file:
@interface NSAttributedString (Height)
- (CGFloat)heightForWidth:(CGFloat)width;
@end
In the .m file:
@implementation NSAttributedString (Height)

- (CGFloat)heightForWidth:(CGFloat)width
{
    return ceilf(CGRectGetHeight([self boundingRectWithSize:CGSizeMake(width, CGFLOAT_MAX)
                                                    options:NSStringDrawingUsesLineFragmentOrigin | NSStringDrawingUsesFontLeading
                                                    context:nil])) + 1;
}

@end
Here's what's happening:
boundingRectWithSize:options:context: gets a rect constrained to the width you pass to the method. The NSStringDrawingUsesLineFragmentOrigin option tells it to expect a multiline string.
Then we fetch the height from that rect.
On iOS 7, this method returns fractional values, and we need a whole number; ceilf takes care of that.
We add an extra point to the returned value.
Here's how to use it
NSAttributedString *string = ...
CGFloat height = [string heightForWidth:320.0f];
You can use that height for your layout computations.
The answer by @dezinezync answers half of the question. You'll just have to calculate the maximum size allowed for your UILabel from the given width and number of lines.
First, get the height allowed based on number of lines:
let maxHeight = font.lineHeight * CGFloat(numberOfLines)
Then calculate the bounding rect of the text you set based on the criteria:
let labelStringSize = yourText.boundingRectWithSize(CGSizeMake(CGRectGetWidth(self.frame), maxHeight),
    options: NSStringDrawingOptions.UsesLineFragmentOrigin,
    attributes: [NSFontAttributeName: font],
    context: nil).size
There is a method in TTTAttributedLabel called
+ (CGSize)sizeThatFitsAttributedString:withConstraints:limitedToNumberOfLines:
Basically, this method uses some Core Text API to calculate the height; the key function is
CGSize CTFramesetterSuggestFrameSizeWithConstraints(
    CTFramesetterRef framesetter,
    CFRange stringRange,
    CFDictionaryRef __nullable frameAttributes,
    CGSize constraints,
    CFRange * __nullable fitRange)
which, I think, is also used by
- (CGRect)textRectForBounds:limitedToNumberOfLines:
This is a workaround and I think there are better ways...
static UILabel *label;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    label = [UILabel new];
});
label.attributedText = givenAttributedString;
CGRect rect = CGRectMake(0, 0, givenWidth, CGFLOAT_MAX);
CGFloat height = [label textRectForBounds:rect
                   limitedToNumberOfLines:2].size.height;

How to do Joint tracking in Kinect with a scaled Image

I am trying to do some joint tracking with Kinect (just putting an ellipse inside my right hand). Everything works fine for the default 640x480 image; I based my work on this channel9 video.
My code, updated to use the new CoordinateMapper class, is here:
...
CoordinateMapper cm = new CoordinateMapper(this.KinectSensorManager.KinectSensor);
ColorImagePoint handColorPoint = cm.MapSkeletonPointToColorPoint(atualSkeleton.Joints[JointType.HandRight].Position, ColorImageFormat.RgbResolution640x480Fps30);
Canvas.SetLeft(elipseHead, (handColorPoint.X) - (elipseHead.Width / 2)); // center of the ellipse in center of the joint
Canvas.SetTop(elipseHead, (handColorPoint.Y) - (elipseHead.Height / 2));
This works. The question is:
How to do joint tracking in a scaled image, 540x380 for example?
The solution for this is pretty simple; I figured it out.
What I need to do is find a factor to apply to the position.
This factor can be found by taking the resolution of the Kinect's current ColorImageFormat and dividing it by the desired size, for example:
Let's say I am working with the RgbResolution640x480Fps30 format and my image (ColorViewer) is 220x240. So, let's find the factor for X:
double factorX = 640.0 / 220; // the factor is 2.90909090...
And the factor for Y:
double factorY = 480.0 / 240; // the factor is 2
Now I adjust the position of the ellipse using these factors.
Canvas.SetLeft(elipseHead, (handColorPoint.X / factorX) - (elipseHead.Width / 2));
Canvas.SetTop(elipseHead, (handColorPoint.Y / factorY) - (elipseHead.Height / 2));
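The same idea can be wrapped in a small helper so the viewer size is not hardcoded at each call site. A minimal sketch (the helper name MapToViewer and the 220x240 viewer size are just illustrative; ColorImagePoint comes from the Kinect SDK, Point and Canvas from WPF):
// Maps a point in Kinect color-frame coordinates (e.g. 640x480) to a scaled viewer.
private static Point MapToViewer(ColorImagePoint colorPoint,
                                 double sourceWidth, double sourceHeight,
                                 double viewerWidth, double viewerHeight)
{
    double factorX = sourceWidth / viewerWidth;
    double factorY = sourceHeight / viewerHeight;
    return new Point(colorPoint.X / factorX, colorPoint.Y / factorY);
}

// Usage:
Point p = MapToViewer(handColorPoint, 640, 480, 220, 240);
Canvas.SetLeft(elipseHead, p.X - (elipseHead.Width / 2));
Canvas.SetTop(elipseHead, p.Y - (elipseHead.Height / 2));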
I've not used the CoordinateMapper yet, and am not in front of my Kinect at the moment, so I'll toss this out first. I'll see about an update when I get working with the Kinect again.
The Coding4Fun Kinect Toolkit has a ScaleTo extension as part of the library. This adds the ability to take a joint and scale it to any display resolution.
The scaling function looks like this:
private static float Scale(int maxPixel, float maxSkeleton, float position)
{
    float value = ((((maxPixel / maxSkeleton) / 2) * position) + (maxPixel / 2));
    if (value > maxPixel)
        return maxPixel;
    if (value < 0)
        return 0;
    return value;
}
maxPixel = the width or height, depending on which coordinate you're scaling.
maxSkeleton = set this to 1.
position = the X or Y coordinate of the joint you want to scale.
If you were to just include the above function you could call it like so:
Canvas.SetLeft(e, Scale(640, 1, joint.Position.X));
Canvas.SetTop(e, Scale(480, 1, -joint.Position.Y));
... replacing your 640 & 480 with a different scale.
If you include the Coding4Fun Kinect Toolkit, instead of re-writing code, you could just call it like so:
scaledJoint = rawJoint.ScaleTo(640, 480);
... then plug in what you need.

Set image size in powerpoint using open xml

I am generating a ppt file using this tutorial here.
Step 4 describes how to swap out the image placeholder.
My images have different dimensions, which makes some of them look a little funny.
Is there any way to resize the placeholder so the image can keep its dimensions?
Edit: OK, a better explanation: users can upload images of themselves. The images are stored on the server. I am generating a ppt file with one user per slide, and every slide will contain an image, if there is one. I can of course get the dimensions of every image, but how do I replace the placeholder with an image whose dimensions differ from the placeholder's?
Well, I can't tell you based on that tutorial, but I can tell you where it is done in Open XML (i.e. not the SDK).
Your picture will have an xfrm element with a set of values, like this:
<p:spPr>
  <a:xfrm>
    <a:off x="7048050" y="6248401"/>
    <a:ext cx="972000" cy="288000"/>
  </a:xfrm>
</p:spPr>
The values you want to change are the cx and cy of a:ext. Take your new picture's dimensions (h and w), for example from a System.Drawing.Image object, and multiply each value by 12700. So if the width of the picture is 400 pixels, the cx value of a:ext will be 400 x 12700 = 5080000. (These values are EMUs: 914,400 per inch, i.e. 12,700 per point, so multiplying pixels by 12700 assumes a 72 DPI image; for a 96 DPI image the factor would be 9525.)
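As a quick sanity check, the conversion can be written as a tiny helper (a sketch; PixelsToEmu is just an illustrative name, and the image's DPI is assumed to be known):
const int EmusPerInch = 914400;

static long PixelsToEmu(int pixels, double dpi)
{
    // e.g. PixelsToEmu(400, 96) = 3810000, PixelsToEmu(400, 72) = 5080000
    return (long)(pixels * (double)EmusPerInch / dpi);
}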
This is how I did it:
using DocumentFormat.OpenXml;            // Int64Value
using DocumentFormat.OpenXml.Drawing;    // Transform2D
using DocumentFormat.OpenXml.Packaging;
using System.IO;                         // FileStream
using System.Windows.Media.Imaging;      // BitmapImage
Let's assume you have your SlidePart.
In my case I wanted to check the alt title (description) of the pictures and replace them if it matched my key.
//find all image alt title (description) in the slide
List<DocumentFormat.OpenXml.Presentation.Picture> slidePictures = slidePart.Slide.Descendants<DocumentFormat.OpenXml.Presentation.Picture>()
.Where(a => a.NonVisualPictureProperties.NonVisualDrawingProperties.Description.HasValue).Distinct().ToList();
now we check all the images:
//check all images in the slide and replace them if it matches our parameter
foreach (DocumentFormat.OpenXml.Presentation.Picture imagePlaceHolder in slidePictures)
Now, inside the loop, we look for the Transform2D and modify it with our values:
Transform2D transform = imagePlaceHolder.Descendants<Transform2D>().First();
Tuple<Int64Value, Int64Value> aspectRatio = CorrectAspectRatio(param.Image.FullName, transform.Extents.Cx, transform.Extents.Cy);
transform.Extents.Cx = aspectRatio.Item1;
transform.Extents.Cy = aspectRatio.Item2;
And this function looks like this:
public static Tuple<Int64Value, Int64Value> CorrectAspectRatio(string fileName, Int64Value cx, Int64Value cy)
{
    // Read the pixel size and DPI of the image without keeping the file open.
    BitmapImage img = new();
    using (FileStream fs = new(fileName, FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        img.BeginInit();
        img.CacheOption = BitmapCacheOption.OnLoad; // decode now so the stream can be closed safely
        img.StreamSource = fs;
        img.EndInit();
    }
    int widthPx = img.PixelWidth;
    int heightPx = img.PixelHeight;

    // Convert the pixel size to EMUs using the image's own DPI.
    const int EMUsPerInch = 914400;
    Int64Value x = (Int64Value)(widthPx * EMUsPerInch / img.DpiX);
    Int64Value y = (Int64Value)(heightPx * EMUsPerInch / img.DpiY);

    // Scale down so the image fits inside the placeholder extents (cx, cy).
    if (x > cx)
    {
        decimal ratio = cx * 1.0m / x;
        x = cx;
        y = (Int64Value)(cy * ratio);
    }
    if (y > cy)
    {
        decimal ratio = cy * 1.0m / y;
        y = cy;
        x = (Int64Value)(cx * ratio);
    }
    return new Tuple<Int64Value, Int64Value>(x, y);
}
An important thing to note is that there are 914,400 EMUs per inch. In most cases you just need to divide by 96 (pixels per inch), but in some cases it is different, so it is best to divide by the image's DPI for x and y.
