I'm trying to implement a DICOM viewer, and I thought I was almost done, but some CT images look different from what MATLAB shows. So I checked the tags and found something.
Some of these images have two values for Window Center and Window Width. For a single-value image, e.g.
window center = [2000], window width = [8000]
I calculate yMin and yMax like this:
yMin = winCenter - 0.5 * winWidth;
yMax = winCenter + 0.5 * winWidth;
if (inPixel <= yMin)
    outPixel = 0;
else if (inPixel > yMax)
    outPixel = 255;
else
    outPixel = (((inPixel - (winCenter - 0.5)) / (winWidth - 1)) + 0.5) * 255;
But the problem is this case, where each tag has two values:
window center = [-600;40], window width = [400;1200]
How can I calculate the output with these values?
Does anyone know how I can implement this?
It's not uncommon for CT images to be viewed using multiple window settings in order to see different features of the image. For example, you would use one window setting to look at bones and another to look at soft tissue. This is likely the reason that the modality equipment sent the window center (0028, 1050) and window width (0028, 1051) with a value multiplicity greater than one. So, your window setting in this case (center, width) is (-600, 400) or (40, 1200) and you can display using either setting.
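For illustration, here is a minimal sketch of that idea in Python; the function name, the sample pixel values and the 8-bit output range are my own choices, mirroring the code in the question. Pick one of the (center, width) pairs and apply the usual linear windowing transform with it:

def apply_window(pixels, centers, widths, selected=0, out_min=0, out_max=255):
    # Window Center (0028,1050) and Window Width (0028,1051) may carry
    # several values; 'selected' chooses which pair to use, e.g. 0 for
    # (-600, 400) or 1 for (40, 1200).
    c = float(centers[selected])
    w = float(widths[selected])

    out = []
    for x in pixels:
        if x <= c - 0.5 - (w - 1) / 2:       # below the window -> minimum
            y = out_min
        elif x > c - 0.5 + (w - 1) / 2:      # above the window -> maximum
            y = out_max
        else:                                # linear ramp inside the window
            y = ((x - (c - 0.5)) / (w - 1) + 0.5) * (out_max - out_min) + out_min
        out.append(int(round(y)))
    return out

# The same pixels rendered with either window setting:
raw_pixels = [-1000, -600, -200, 40, 400, 1200]
first_setting = apply_window(raw_pixels, [-600, 40], [400, 1200], selected=0)
second_setting = apply_window(raw_pixels, [-600, 40], [400, 1200], selected=1)

A viewer would typically let the user switch between the two settings (and would precompute a lookup table per setting rather than branching on every pixel).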
I'm using Embarcadero RAD Studio C++ builder XE7.
For a drawing function using the Windows GDI, I need to add a clip region to the device context of a canvas.
By testing my code, I noticed that sometimes the clipping region was smaller than the expected size. I investigated why and found a strange behavior of the OffsetRgn() function that leaves me a little puzzled.
To apply the clip region, I use a code similar to the following:
// create a 60x7 32-bit bitmap and work on its canvas
std::unique_ptr<TBitmap> pBitmap(new TBitmap());
pBitmap->PixelFormat = pf32bit;
pBitmap->AlphaFormat = afDefined;
pBitmap->SetSize(60, 7);
TCanvas* pCanvas = pBitmap->Canvas;

// reset any previous clip region and read the default clip rect
::SelectClipRgn(pCanvas->Handle, NULL);
const TRect sourceRect = pCanvas->ClipRect;

// apply a clip region that sticks out of the canvas at the top and bottom
HRGN pClipRegion = ::CreateRectRgn(50, -2, 60, 8);
::SelectClipRgn(pCanvas->Handle, pClipRegion);
const TRect intermediateRect = pCanvas->ClipRect;

// measure how far the applied clip rect drifted from the requested one,
// shift the region back by that amount and apply it again
const int deltaX = pCanvas->ClipRect.Left - 50;
const int deltaY = pCanvas->ClipRect.Top - (-2);
::OffsetRgn(pClipRegion, -deltaX, -deltaY);
::SelectClipRgn(pCanvas->Handle, pClipRegion);
const TRect finalRect = pCanvas->ClipRect;
NOTE: written like this and out of its context, the above code does not really make sense, and I know it's illogical. Please do not judge its quality; that is not the purpose of my question. I gathered several excerpts and grouped them into an executable piece of code that puts the problem forward.
The hardcoded values are an example of values I get in my application when the problem occurs. If I execute the above code, I measure:
sourceRect: left = 0, top = 0, right = 60, bottom = 7
intermediateRect: left = 50, top = 0, right = 60, bottom = 7
finalRect: left = 50, top = 0, right = 60, bottom = 6
However, I expected the bottom value to also be 7 in finalRect, since that is the canvas limit and I only moved the region, nothing else. So why does its value suddenly become smaller than expected?
I finally found the substance of the case, based on this post:
Why does calling GetRgnBox on the result of GetClipRgn return a very different rect than GetClipRect?
The clip region is applied in logical units relative to the canvas origin, whereas the clipping rectangle I tried to apply was measured in pixels from a [0, 0] origin.
Because my code incorrectly assumed that the origin was always [0, 0] for both coordinate systems, the resulting region could be incorrect in several special cases, causing the strange shift I sometimes noticed between the clipping actually applied and the one I expected.
Measuring the canvas origin with the GetWindowOrgEx() function highlighted the issue.
For the case shown above, however, the issue arose because the clip region was moved by an offset of -2, giving it a top of -4 and a bottom of 6; when the region is applied it is clipped to the canvas bounds, resulting in a clip rect with a top of 0 and a bottom of 6.
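Just to make the arithmetic visible, here is a small Python sketch of that sequence; it is plain rectangle math mirroring the explanation above, not the actual GDI calls:

def offset(rect, dx, dy):
    # Shift a (left, top, right, bottom) rectangle, like OffsetRgn does.
    l, t, r, b = rect
    return (l + dx, t + dy, r + dx, b + dy)

def clip_to_canvas(rect, canvas):
    # Intersect the region with the canvas bounds when it is applied.
    l, t, r, b = rect
    cl, ct, cr, cb = canvas
    return (max(l, cl), max(t, ct), min(r, cr), min(b, cb))

canvas = (0, 0, 60, 7)          # the 60x7 bitmap
requested = (50, -2, 60, 8)     # CreateRectRgn(50, -2, 60, 8)

applied = clip_to_canvas(requested, canvas)
print(applied)                  # (50, 0, 60, 7)  -> intermediateRect

delta_y = applied[1] - (-2)     # 0 - (-2) = 2
moved = offset(requested, 0, -delta_y)
print(moved)                    # (50, -4, 60, 6)

print(clip_to_canvas(moved, canvas))   # (50, 0, 60, 6)  -> finalRect, bottom is now 6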
I want to display a message and close the window when the user clicks. This should happen when the circle reaches the bottom of the window. I'm not sure how to go about this; everything works fine until the circle passes the bottom of the window, then the closing message doesn't pop up and the window doesn't close on click. I'm using the graphics.py graphics library from Zelle for Python. I'm a beginner in Python, so my knowledge is very limited right now. My code is as follows:
from graphics import *

def q2a():
    win = GraphWin("window",400,400)
    win.setCoords(0,0,400,400)
    win.setBackground("light grey")
    #drawing circle
    circle = Circle(Point(200,100),30)
    circle.setFill("red")
    circle.draw(win)
    #text
    message = Text(Point(200,200),"Click Anywhere to Begin")
    message.draw(win)
    #clicking
    while True:
        click = win.checkMouse()
        if click:
            message.undraw()
            while circle.getCenter().getY() < 170:
                dy=1
                dx = 0
                dy *=-.01
                circle.move(dx,dy)
            if circle.getCenter()== 0:
                circle.undraw()
                gameover = Text(Point(200,200),"Game Over - Click to Close")
                gameover.draw(win)
                win.checkMouse()
                win.close()

q2a()
I believe the problem is simpler than you're making it. One problem is that this is an infinite loop:
while circle.getCenter().getY() < 170:
    dy=1
    dx = 0
    dy *=-.01
    circle.move(dx,dy)
Since the circle's Y center starts at 100 and decreases, it is always less than 170, so this loop never finishes and any code beyond this point is never executed. Let's compare against the circle's radius, 30, instead, so the loop stops when the circle sits on the bottom of the window.
Another issue is that I believe you're using checkMouse() when you really want getMouse(): checkMouse() returns immediately (with None if no click has happened yet), whereas getMouse() waits for a click. Read the documentation about the difference between these two methods.
Here's my rework of your code (with some style tweaks). I changed the -0.01 increment to -0.1 as I've no patience!
from graphics import *

RADIUS = 30
HEIGHT, WIDTH = 400, 400
CENTER = Point(HEIGHT / 2, WIDTH / 2)

def q2a():
    win = GraphWin("window", HEIGHT, WIDTH)
    win.setCoords(0, 0, HEIGHT, WIDTH)
    win.setBackground("light grey")

    # drawing circle
    circle = Circle(Point(WIDTH / 2, 100), RADIUS)
    circle.setFill("red")
    circle.draw(win)

    # text
    message = Text(CENTER, "Click Anywhere to Begin")
    message.draw(win)

    # moving
    win.getMouse()
    message.undraw()

    while circle.getCenter().getY() > RADIUS:
        circle.move(0, -0.1)

    # end game
    circle.undraw()
    gameover = Text(CENTER, "Game Over - Click to Close")
    gameover.draw(win)
    win.getMouse()
    win.close()

q2a()
I am evaluating PdfSharp to create PDF documents. While comparing it with MigraDoc, I noticed that I had to multiply each position (x, y) or size by 1.25 to get the intended result. For example, if I set the page margins to 2 cm without the correction, I get margins of roughly 1.6 cm.
page.TrimMargins = new TrimMargins
{
    All = XUnit.FromCentimeter(2)
};
When I multiply 2 with 1.25 I get the intended 2 cm margins:
page.TrimMargins = new TrimMargins
{
    All = XUnit.FromCentimeter(2 * 1.25)
};
The same applies to font sizes: I have to multiply the size by 1.25 to get the same size that MigraDoc or even Word would print.
By the way, my system does not use custom scaling or a text size other than 100% (my guess was that this could be the cause).
Can someone explain what's going on here?
Edit:
With the help of TomasH I found out that when printing without auto-scaling, the sizing was perfect. PdfSharp apparently creates PDF documents that are too large. As I found out, the same happens with MigraDoc, but on a much smaller scale. The question that remains is why the document is too large and what MigraDoc does to correct the PDF size.
Here is my complete test code that only gives the correct positioning and sizing with the correction factor:
using (PdfDocument document = new PdfDocument())
{
    // Create an empty page size A4 with defined margins
    PdfPage page = CreatePage(document);

    using (XGraphics graphics = XGraphics.FromPdfPage(page))
    {
        const double sizeCorrectionFactor = 1.25;

        // Define the page margins. They must be multiplied by 1.25 to be correct!?
        page.TrimMargins = new TrimMargins
        {
            All = XUnit.FromCentimeter(2 * sizeCorrectionFactor)
        };

        // Draw a string. The font size needs to be multiplied by 1.25 to be correct!?
        double x = 0;
        double y = 0;
        graphics.DrawRectangle(XPens.Black, XBrushes.White, 0, 0, page.Width, page.Height);
        graphics.DrawString("PdfSharp Measure Demo", new XFont("Verdana", 20 * sizeCorrectionFactor), XBrushes.Navy, x, y, XStringFormats.TopLeft);

        // Draw a rectangle. Position and size must be multiplied by 1.25 to be correct!?
        x = XUnit.FromCentimeter(2 * sizeCorrectionFactor);
        y = XUnit.FromCentimeter(2 * sizeCorrectionFactor);
        double width = XUnit.FromCentimeter(5 * sizeCorrectionFactor);
        double height = XUnit.FromCentimeter(5 * sizeCorrectionFactor);
        graphics.DrawRectangle(XPens.Red, XBrushes.Silver, x, y, width, height);
    }

    string pdfFilePath = Path.GetTempFileName() + ".pdf";
    document.Save(pdfFilePath);
    Process.Start(pdfFilePath);
}
I found the answer: I had misunderstood the (not well documented) TrimMargins property. Setting trim margins apparently adds the size of the margins to the width and height of the page. This means that when trim margins are set, the page becomes too large and usually gets scaled down when it is displayed or printed. I set 2 cm trim margins, making the page roughly 1.25 times too large. The solution is to leave all trim margins at 0 and account for any desired page margins in the drawing code instead.
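To make that fix concrete, here is a small Python sketch of the unit arithmetic only; it is not PdfSharp code (in C# the cm-to-point conversion is what XUnit.FromCentimeter does), just the idea of keeping the page at its plain A4 size and offsetting your drawing coordinates by the desired margin yourself:

# PDF positions are measured in points: 1 inch = 72 points = 2.54 cm.
CM_TO_PT = 72 / 2.54

A4_WIDTH_PT = 21.0 * CM_TO_PT    # ~595 pt
A4_HEIGHT_PT = 29.7 * CM_TO_PT   # ~842 pt

margin_pt = 2.0 * CM_TO_PT       # the desired 2 cm page margin

def to_page(x_cm, y_cm):
    # Map a position measured from the margin corner into page coordinates.
    return (margin_pt + x_cm * CM_TO_PT, margin_pt + y_cm * CM_TO_PT)

content_width = A4_WIDTH_PT - 2 * margin_pt
content_height = A4_HEIGHT_PT - 2 * margin_pt

print(to_page(0, 0))                    # top-left corner of the content area
print(content_width, content_height)    # drawable area inside the margins

The page itself stays 21 x 29.7 cm, so nothing gets scaled down when it is displayed or printed.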
I've seen lots of questions on how to zoom the camera so an object fills the screen, but I'm trying to move the object to fill the screen.
I've been trying something like this, using the original photo's pixel size (these objects have been scaled):
var dist = object.originalSize.height > $(window).height()
|| object.originalSize.width > $(window).width()
? ( $(window).height() / object.originalSize.height ) * 100
: 10;
var pLocal = new THREE.Vector3( 0, 0, -dist);
var target = pLocal.applyMatrix4( camera.matrixWorld );
var tweenMove = new TWEEN.Tween(object.position).to(target, 1500).easing(TWEEN.Easing.Cubic.InOut);
This was meant to come up with a vector to move the object to. However, I can't get the object to fill the screen. Any idea of the maths I need to calculate how far away an object needs to be so that it fills the screen?
The object is an Object3D with different children depending on its type.
I know the original photograph's dimensions (object.originalSize.height), and I know the geometry has been scaled up to power-of-two dimensions.
Any clue gratefully received on how to calculate the distance required from the camera to ensure the object fits inside the bounds of the screen.
I also know the bounding box of the item, i.e. from 1024 to 128.
This works; I'm not sure why:
var vFOV = camera.fov * Math.PI / 180;
var ratio = 2 * Math.tan( vFOV / 2 );
var screen = ratio * (window.innerWidth / window.innerHeight) ;
var size = getCompoundBoundingBox( object ).max.y;
var dist = (size/screen) / 4;
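For what it's worth, the underlying geometry is the standard pinhole-camera relation: at distance d, a perspective camera sees a height of 2 * d * tan(vFOV / 2), and a width of that times the aspect ratio. Here is that relation as a small Python sketch (plain math rather than three.js code; the function and parameter names are my own) giving the distance at which an object of a given world-space size just fits the frame:

import math

def distance_to_fit(fov_deg, aspect, obj_width, obj_height):
    # fov_deg is the vertical field of view in degrees (like camera.fov),
    # aspect is viewport width / height (window.innerWidth / innerHeight).
    vfov = math.radians(fov_deg)
    # Distance at which the object's height exactly fills the vertical FOV:
    dist_h = (obj_height / 2) / math.tan(vfov / 2)
    # Distance at which the object's width exactly fills the horizontal FOV:
    dist_w = (obj_width / 2) / (math.tan(vfov / 2) * aspect)
    # The larger distance is the one where the whole object fits on screen.
    return max(dist_h, dist_w)

# Example: 45 degree camera, 16:9 viewport, object 4 units wide and 3 high.
print(distance_to_fit(45, 16 / 9, 4, 3))

With a distance computed this way, placing the object at -dist along the camera's local Z axis (as the pLocal / matrixWorld code above does) should make it just fill the view; the object size here should be the world-space extent of the bounding box (max minus min) rather than just max.y.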
I'm trying to do some joint tracking with the Kinect (just putting an ellipse inside my right hand). Everything works fine for a default 640x480 image; I based my work on this Channel 9 video.
My code, updated to use the new CoordinateMapper class, is here:
...
CoordinateMapper cm = new CoordinateMapper(this.KinectSensorManager.KinectSensor);
ColorImagePoint handColorPoint = cm.MapSkeletonPointToColorPoint(atualSkeleton.Joints[JointType.HandRight].Position, ColorImageFormat.RgbResolution640x480Fps30);
Canvas.SetLeft(elipseHead, (handColorPoint.X) - (elipseHead.Width / 2)); // center of the ellipse in center of the joint
Canvas.SetTop(elipseHead, (handColorPoint.Y) - (elipseHead.Height / 2));
This works. The question is:
How do I do joint tracking in a scaled image, 540x380 for example?
The solution for this is pretty simple; I figured it out.
What I need to do is find a factor to apply to the position.
This factor can be found by taking the resolution of the Kinect's current ColorImageFormat and dividing it by the desired size. For example:
Let's say I am working with the RgbResolution640x480Fps30 format and my image (ColorViewer) is 220x240. So, let's find the factor for X:
double factorX = 640.0 / 220.0; // the factor is 2.90909090...
And the factor for Y:
double factorY = 480.0 / 240.0; // the factor is 2
Now I adjust the position of the ellipse using these factors:
Canvas.SetLeft(elipseHead, (handColorPoint.X / factorX) - (elipseHead.Width / 2));
Canvas.SetTop(elipseHead, (handColorPoint.Y / factorY) - (elipseHead.Height / 2));
I've not used the CoordinateMapper yet, and I'm not in front of my Kinect at the moment, so I'll toss this out first. I'll see about an update when I get working with the Kinect again.
The Coding4Fun Kinect Toolkit has a ScaleTo extension as part of the library. This adds the ability to take a joint and scale it to any display resolution.
The scaling function looks like this:
private static float Scale(int maxPixel, float maxSkeleton, float position)
{
    float value = ((((maxPixel / maxSkeleton) / 2) * position) + (maxPixel / 2));
    if (value > maxPixel)
        return maxPixel;
    if (value < 0)
        return 0;
    return value;
}
maxPixel = the width or height, depending on which coordinate you're scaling.
maxSkeleton = set this to 1.
position = the X or Y coordinate of the joint you want to scale.
If you were to just include the above function you could call it like so:
Canvas.SetLeft(e, Scale(640, 1, joint.Position.X));
Canvas.SetTop(e, Scale(480, 1, -joint.Position.Y));
... replacing your 640 & 480 with a different scale.
If you include the Coding4Fun Kinect Toolkit, instead of re-writing code, you could just call it like so:
var scaledJoint = rawJoint.ScaleTo(640, 480);
... then plug in what you need.
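If it helps to see what that scaling does numerically, here is the same arithmetic as a small Python sketch (not the Coding4Fun code, just its math): with maxSkeleton = 1, positions in the -1..1 range map linearly onto 0..maxPixel, and anything outside that range is clamped to the edges.

def scale(max_pixel, max_skeleton, position):
    # Mirror of the Scale() helper above: linear map plus clamping.
    value = ((max_pixel / max_skeleton) / 2) * position + (max_pixel / 2)
    return min(max(value, 0), max_pixel)

# With maxSkeleton = 1 on a 640-pixel-wide image:
print(scale(640, 1, 0.0))    # 320.0 -> a centred joint lands mid-screen
print(scale(640, 1, 0.5))    # 480.0
print(scale(640, 1, -1.0))   # 0.0   -> left edge
print(scale(640, 1, 1.5))    # 640   -> clamped to the right edge

The call for Y above negates joint.Position.Y because skeleton Y grows upward while screen Y grows downward.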