Morph circle to oval in openscad - polygon

I am trying to create a fan duct in openscad, flattening the duct from circular to oval. Is there a way to do this in openscad? If not, is there any other programmatic way to generate this type of 3d model?
Thanks
Dennis

Assuming by 'oval' you mean an ellipse, the following creates a solid tapering from a circle to an ellipse:
Delta = 0.01;

module connector(height, radius, eccentricity) {
    hull() {
        linear_extrude(height = Delta)
            circle(r = radius);
        translate([0, 0, height - Delta])
            linear_extrude(height = Delta)
                scale([1, eccentricity])
                    circle(r = radius);
    }
}

connector(20, 6, 0.6);
You could make the tube by subtracting a smaller version:
module tube(height, radius, eccentricity = 1, thickness) {
    difference() {
        connector(height, radius, eccentricity);
        translate([0, 0, -(Delta + thickness)])
            connector(height + 2 * (Delta + thickness), radius - thickness, eccentricity);
    }
}

tube(20, 8, 0.6, 2);
but the wall thickness will not be uniform. To make a uniform wall, use minkowski to add the wall:
module tube(height, radius, eccentricity = 1, thickness) {
    difference() {
        minkowski() {
            connector(height, radius, eccentricity);
            cylinder(height = height, r = thickness);
        }
        translate([0, 0, -(Delta + thickness)])
            connector(height + 2 * (Delta + thickness), radius, eccentricity);
    }
}

tube(20, 8, 0.6, 2);

There is another way, using the "scale" parameter of linear_extrude(). It "scales the 2D shape by this value over the height of the extrusion. Scale can be a scalar or a vector" (documentation). Using a vector with an x and a y scale factor, you get the modification you wanted:
d = 2;    // height of ellipsoid, diameter of bottom circle
t = 0.25; // wall thickness
w = 4;    // width of ellipsoid
l = 10;   // length of extrusion

module ellipsoid(diameter, width, height) {
    linear_extrude(height = height, scale = [width / diameter, 1])
        circle(d = diameter);
}

difference() {
    ellipsoid(d, w, l);
    ellipsoid(d - 2 * t, w - 2 * t, l);
}

I like Chris Wallace's answer, but there was a bug in the minkowski(): the cylinder should use h=Delta.
module tube(height, radius, eccentricity = 1, thickness) {
    difference() {
        minkowski() {
            connector(height, radius, eccentricity);
            cylinder(h = Delta, r = thickness);
        }
        translate([0, 0, -(Delta + thickness)])
            connector(height + 2 * (Delta + thickness), radius, eccentricity);
    }
}

tube(20, 8, 0.6, 2);

I don't know of a way to do it directly, but I can imagine approximating it with a series of stacked slices.
Start with a circle, and have a loop that changes the scale factor smoothly from circle to oval as you add slices to the stack. This will give you a stepped surface. If this is for a 3D printing application and you make your slice thickness the same as your layer height, you might not even notice the steps.
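A minimal sketch of that stacked-slice idea (the variable names and the slice count below are illustrative, not taken from any of the answers above):

// approximate the circle-to-ellipse transition with thin extruded slices
steps = 50;              // number of slices; for printing, pick height / layer height
height = 20;
radius = 6;
eccentricity = 0.6;      // y-scale of the ellipse at the top
slice = height / steps;  // thickness of one slice

for (i = [0 : steps - 1])
    translate([0, 0, i * slice])
        linear_extrude(height = slice)
            // interpolate the y-scale from 1 (circle) to eccentricity (ellipse)
            scale([1, 1 + (eccentricity - 1) * i / (steps - 1)])
                circle(r = radius);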

Related

D3 svg css - rotate line around centre using css

I have a plunker here - https://plnkr.co/edit/E2SJlCo141NhYatVEFMU?p=preview
I have three bars that are made with a start and finish position.
The start position can be higher or lower than the finish.
I want to draw arrows on the bars to illustrate whether the start is before or after the finish.
I'm drawing the arrows on the bars and adding classes based on whether the bar is going up or down.
I then wanted to use CSS to set the direction of the arrows by rotating them.
This is all working, but the arrow is not rotating around its center and so is positioned off the bar.
Is it possible to rotate the arrow and line around its center?
.arrow-up {
    transform: rotate(180deg);
    transform-origin: center center;
}
This looks like an issue with transform-origin support in SVG - "Keywords and percentages refer to the canvas instead of the object itself"; see the Browser compatibility section here. That comment was listed for Firefox, but I experience the same problem in Chrome too.
To demonstrate I forced all 3 arrows to use the arrow-up class and you can see that they seem to have rotated around the same point.
https://plnkr.co/edit/yQ4X18eb7VCItxXswMww?p=preview
So, you can use a rotate transform directly on the SVG line. The following plunker has the start of the code you need, but you'll need to calculate your centre x and y values from your data.
https://plnkr.co/edit/JyT9ORnnMCETgpMyCjm1?p=preview
Here's the bit of code you need, but as I say you'll need to replace 100 100 with your centre-of-rotation x and y values. You'll be able to dispense with the arrow-up and arrow-down classes too.
bar.enter()
    .append("line")
    .attr("x1", d => x(d.phase) + x.bandwidth() / 2)
    .attr("y1", (d, i) => {
        if (d.start < d.finish) {
            return y(d.finish) + 10;
        } else {
            return y(d.start) + 10;
        }
    })
    .attr("x2", d => x(d.phase) + x.bandwidth() / 2)
    .attr("y2", (d, i) => {
        if (d.finish < d.start) {
            return y(d.finish) - 15;
        } else {
            return y(d.start) - 15;
        }
    })
    .attr('class', (d, i) => {
        return d.start > d.finish ? 'arrow-up' : 'arrow-up'
    })
    .attr("stroke", "red")
    .attr("stroke-width", 2)
    .attr("marker-end", "url(#arrow)")
    .attr("transform", (d) => {
        console.log(d.start, d.finish)
        console.log(x(d.phase), x.bandwidth() / 2)
        return `rotate(180 100 100)`
    })
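As a rough sketch only (my own addition, not from the plunker), the centre of rotation could be derived from the same values used for x1, y1 and y2 above, for example:

.attr("transform", d => {
    // hypothetical centre calculation, mirroring the x1/y1/y2 logic above
    const cx = x(d.phase) + x.bandwidth() / 2;                         // band centre, same as x1/x2
    const yA = (d.start < d.finish ? y(d.finish) : y(d.start)) + 10;   // same as y1
    const yB = (d.finish < d.start ? y(d.finish) : y(d.start)) - 15;   // same as y2
    const cy = (yA + yB) / 2;                                          // midpoint of the line
    return `rotate(180 ${cx} ${cy})`;
})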

How to disable linear filtering for drawImage on canvas in javafx

I'm trying to draw a scaled image on a canvas in JavaFX using this code:
Image image = ...;
canvas.setWidth(scale * width);
canvas.setHeight(scale * height);
GraphicsContext gc = canvas.getGraphicsContext2D();
gc.drawImage(image, 0, 0, scale * width, scale * height);
// this gives same result
// gc.scale(scale, scale);
// gc.drawImage(editableImage, 0, 0, width, height);
It works really fast but produces a blurred image. This is not what I'd like to see; instead I want the scaled-up pixels to stay sharp, which can be achieved by manually setting each pixel color with code like this:
PixelReader reader = image.getPixelReader();
PixelWriter writer = gc.getPixelWriter();
for (int y = 0; y < scale * height; ++y) {
    for (int x = 0; x < scale * width; ++x) {
        writer.setArgb(x, y, reader.getArgb(x / scale, y / scale));
    }
}
But I cannot use this approach as it's too slow: it took a couple of seconds to draw a 1 Kb image scaled 8 times. So, is there any way to disable this blurry effect when drawing on a canvas?
UPD 10/07/2019:
Looks like the issue is fixed! GraphicsContext now has an "image smoothing" property controlling this behavior.
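A minimal sketch of using it (assuming a JavaFX version where GraphicsContext.setImageSmoothing(boolean) is available, i.e. JavaFX 12 or later):

GraphicsContext gc = canvas.getGraphicsContext2D();
gc.setImageSmoothing(false); // disable bilinear filtering; scaling then keeps hard pixel edges
gc.drawImage(image, 0, 0, scale * width, scale * height);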
INITIAL ANSWER
I guess I've found the answer to my question. As this issue says, there's no way to specify filtering options in the graphics context.
Description:
When drawing an image in a GraphicsContext using the drawImage() method to enlarge a small image to a larger canvas, the image is being interpolated (possibly using a bilinear or bicubic algorithm). But there are times like when rendering color maps (temperature, zooplancton, salinity, etc.) or some geographical data (population concentration, etc.) where we want to have no interpolation at all (ie: use the nearest neighbor algorithm instead) in order to represent accurate data and shapes.

In Java2D, this is possible by setting the appropriate RenderingHints.KEY_RENDERING on the Graphics2D at hand. Currently on JavaFX's GraphicsContext there is no such way to specify how the image is to be interpolated.

The same applies when shrinking images too.

This could be expanded to support a better form of smoothing for the "smooth" value that is available in both Image and ImageView and that does not seem to work very well currently (at least on Windows).
The issue was created in 2013 but is still untouched, so it is unlikely to be resolved soon.

What is the gradient orientation and gradient magnitude?

I am currently studying a module in computer vision called edge detection.
I am trying to understand the meaning of gradient orientation and gradient magnitude.
As explained by Dima in his answer, you should be familiar with the mathematical concept of gradient in order to better understand the gradient in the field of image processing.
My answer is based on the answer of mevatron to this question.
Here you find a simple initial image of a white disk on a black background:
You can compute an approximation of the gradient of this image. As Dima explained in his answer, the gradient has two components, a horizontal and a vertical one.
The following image shows the horizontal component:
It shows how much the gray levels in the image change in the horizontal direction (the direction of positive x, scanning the image from left to right). This change is "encoded" in the grey level of the horizontal-component image: the mean grey level means no change, bright levels mean a change from a dark value to a bright value, and dark levels mean a change from a bright value to a dark value. So in the above image you see a brighter value in the left part of the circle, because it is in the left part of the initial image that you have the black-to-white transition that gives you the left edge of the disk; similarly, you see a darker value in the right part of the circle, because that is where the white-to-black transition gives you the right edge of the disk. The inner part of the disk and the background are at the mean grey level because there is no change there.
We can make analogous observations for the vertical component: it shows how the image changes in the vertical direction, i.e. scanning the image from top to bottom:
You can now combine the two components in order to get the magnitude of the gradient and the orientation of the gradient.
The following image is the magnitude of the gradient:
Again, in the above image the change in the initial image is encoded in the gray level: white means a high change in the initial image, while black means no change at all.
So, when you look at the image of the magnitude of the gradient you can say: "if the image is bright, there is a big change in the initial image; if it is dark, there is no change or very little change".
The following image is the orientation of the gradient:
In the above image the orientation is again encoded as gray levels: you can think of the orientation as the angle of an arrow pointing from the dark part of the image to the bright part of the image; the angle is referred to an xy frame where x runs from left to right and y runs from top to bottom. In the above image you see all the grey levels from black (zero degrees) to white (360 degrees). We can also encode the information with color:
In the above image the information is encoded in this way:
red: the angle is between 0 and 90 degrees
cyan: the angle is between 90 and 180 degrees
green: the angle is between 180 and 270 degrees
yellow: the angle is between 270 and 360 degrees
Here is the C++ OpenCV code for producing the above images.
Pay attention to the fact that, for the computation of the orientation, I use the function cv::phase which, as explained in the docs, gives an angle of 0 when both the vertical and the horizontal component of the gradient are zero. That may be convenient, but from a mathematical point of view it is plainly wrong, because when both components are zero the orientation is not defined, and the only meaningful value for an orientation kept in a floating-point C++ type would be NaN.
It is plainly wrong because a 0-degree orientation, for example, already refers to a horizontal edge, so it cannot also be used to represent something else, such as a region with no edges where the orientation is meaningless.
// original code by https://stackoverflow.com/users/951860/mevatron
// see https://stackoverflow.com/a/11157426/15485
// https://stackoverflow.com/users/15485/uvts-cvs added the code for saving x and y gradient component
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

Mat mat2gray(const cv::Mat& src)
{
    Mat dst;
    normalize(src, dst, 0.0, 255.0, cv::NORM_MINMAX, CV_8U);
    return dst;
}

Mat orientationMap(const cv::Mat& mag, const cv::Mat& ori, double thresh = 1.0)
{
    Mat oriMap = Mat::zeros(ori.size(), CV_8UC3);
    Vec3b red(0, 0, 255);
    Vec3b cyan(255, 255, 0);
    Vec3b green(0, 255, 0);
    Vec3b yellow(0, 255, 255);
    for (int i = 0; i < mag.rows * mag.cols; i++)
    {
        float* magPixel = reinterpret_cast<float*>(mag.data + i * sizeof(float));
        if (*magPixel > thresh)
        {
            float* oriPixel = reinterpret_cast<float*>(ori.data + i * sizeof(float));
            Vec3b* mapPixel = reinterpret_cast<Vec3b*>(oriMap.data + i * 3 * sizeof(char));
            if (*oriPixel < 90.0)
                *mapPixel = red;
            else if (*oriPixel >= 90.0 && *oriPixel < 180.0)
                *mapPixel = cyan;
            else if (*oriPixel >= 180.0 && *oriPixel < 270.0)
                *mapPixel = green;
            else if (*oriPixel >= 270.0 && *oriPixel < 360.0)
                *mapPixel = yellow;
        }
    }
    return oriMap;
}

int main(int argc, char* argv[])
{
    Mat image = Mat::zeros(Size(320, 240), CV_8UC1);
    circle(image, Point(160, 120), 80, Scalar(255, 255, 255), -1, CV_AA);
    imshow("original", image);

    Mat Sx;
    Sobel(image, Sx, CV_32F, 1, 0, 3);

    Mat Sy;
    Sobel(image, Sy, CV_32F, 0, 1, 3);

    Mat mag, ori;
    magnitude(Sx, Sy, mag);
    phase(Sx, Sy, ori, true);

    Mat oriMap = orientationMap(mag, ori, 1.0);

    imshow("x", mat2gray(Sx));
    imshow("y", mat2gray(Sy));
    imwrite("hor.png", mat2gray(Sx));
    imwrite("ver.png", mat2gray(Sy));
    imshow("magnitude", mat2gray(mag));
    imshow("orientation", mat2gray(ori));
    imshow("orientation map", oriMap);
    waitKey();
    return 0;
}
The gradient of a function of two variables x, y is the vector of its partial derivatives in the x and y directions. So if your function is f(x,y), the gradient is the vector (f_x, f_y). An image is a discrete function of (x,y), so you can also talk about the gradient of an image.
The gradient of the image has two components: the x-derivative and the y-derivative. So you can think of it as a vector (f_x, f_y) defined at each pixel. This vector has a direction atan(f_y / f_x) and a magnitude sqrt(f_x^2 + f_y^2). So you can represent the gradient of an image either as an x-derivative image and a y-derivative image, or as a direction image and a magnitude image.
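Written out as formulas (the same content as the paragraph above; atan2 is used here because, unlike plain atan, it resolves the quadrant):

\nabla f(x, y) = (f_x, f_y), \qquad \lVert \nabla f \rVert = \sqrt{f_x^2 + f_y^2}, \qquad \theta = \operatorname{atan2}(f_y, f_x)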

Should I re-draw a SurfaceLayer on every frame?

I've created a simple example: a background surface layer and 10 small "dots" on it (10 surface layers, 10x10 px each, filled with color via fillRect()). The paint method simply moves the dots around periodically:
private SurfaceLayer background;
private List<Layer> dots = new ArrayList<Layer>();

@Override
public void init()
{
    background = graphics().createSurfaceLayer(graphics().width(), graphics().height());
    background.surface().setFillColor(Color.rgb(100, 100, 100));
    background.surface().fillRect(0, 0, graphics().width(), graphics().height());
    graphics().rootLayer().add(background);
    for (int i = 0; i < 10; i++)
    {
        SurfaceLayer dot = graphics().createSurfaceLayer(10, 10);
        dot.surface().clear();
        dot.surface().setFillColor(Color.rgb(250, 250, 250));
        dot.surface().fillRect(0, 0, 10, 10);
        dot.setDepth(1);
        dot.setTranslation(random() * graphics().width(), random() * graphics().height());
        dots.add(dot);
        graphics().rootLayer().add(dot);
    }
}

@Override
public void paint(float alpha)
{
    for (Layer dot : dots)
    {
        if (random() > 0.999)
        {
            dot.setTranslation(random() * graphics().width(), random() * graphics().height());
        }
    }
}
Somehow the Java version draws all the dots, while the HTML and Android versions draw only one.
The manual doesn't clearly say whether I should re-draw all these dots in every paint() call. As far as I understood, SurfaceLayer is meant for cases where you do not modify the layer on every frame (so the same buffer can be reused?), but this doesn't work.
So can you help me with correct SurfaceLayer usage? If I just filled a rectangle on a SurfaceLayer, would it remain on this layer forever, or should I fill it in each paint call? If so, how is this different from ImmediateLayer?
You don't need to redraw a surface layer on every call to paint. As you have shown, you draw it only when preparing it, and the texture into which you've drawn will be rendered every frame without further action on your part.
If the Android and HTML backends are not drawing all of your surface layers, there must be a bug. I'll try to reproduce your test and see if it works for me.
One note: creating a giant surface the size of the screen and drawing a solid color into it is a huge waste of texture memory. Just create an ImmediateLayer that calls fillRect() on every frame, which is far more efficient than creating a massive screen-covering texture.
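A minimal sketch of that suggestion, assuming the PlayN 1.x API (ImmediateLayer.Renderer with a render(Surface) callback):

// replace the screen-sized background SurfaceLayer with an ImmediateLayer
// whose renderer simply fills the screen each frame
Layer bg = graphics().createImmediateLayer(new ImmediateLayer.Renderer() {
    @Override
    public void render(Surface surface) {
        surface.setFillColor(Color.rgb(100, 100, 100));
        surface.fillRect(0, 0, graphics().width(), graphics().height());
    }
});
graphics().rootLayer().add(bg);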

Image Rotate 3D in Flex, but display another image on back of this image?

I want to rotate an image called img1 in 3D in Flex, around the y axis by 180 degrees. I can do this with the 3D effect already built into Flex, but I want something a bit different.
I want another image, called img2, to appear on the back of img1 during the rotation (by default, the image that appears on the back is img1), so that when the rotation finishes the visible image is img2.
How can I do this?
Thank you.
If you need no perspective effect, it's quite easy to do. A rough implementation (not tested!):
// Event.ENTER_FRAME event listener
private function on_enter_frame(event:Event):void
{
    // m_angle is a member of the class/Flex component where on_enter_frame is declared
    // ANGLE_DELTA is just a constant
    m_angle += ANGLE_DELTA;

    // Angle clamping to the range [0, PI * 2)
    m_angle %= Math.PI * 2;
    if (m_angle < 0)
        m_angle += Math.PI * 2;

    // If we currently look at the front side...
    if (m_angle < Math.PI)
    {
        img1.visible = true;
        img2.visible = false;
        img1.scaleX = Math.cos(m_angle);
    }
    else
    {
        img1.visible = false;
        img2.visible = true;
        // If you omit the negation, the back-side image will be mirrored
        img2.scaleX = -Math.cos(m_angle);
    }
}
So every frame we increase the rotation angle and clamp it to the range [0, PI * 2). Then, depending on the angle, we hide/show the pair of images and perform x-scaling on the visible one.
Thank you, now I've found a solution. Please check it here, it's very easy to do:
http://forums.adobe.com/thread/921258
