I made a semicircular 2D shape in Godot. Now I'm trying to give it proper collisions. So far, I've found no way to do this. The CollisionShape2D node only allows simple shapes like circles and rectangles, and the CollisionPolygon2D shape won't allow me to make the curved shape I require. Is there any way I can get the proper collisions?
I am not familiar with a built-in function that does this.
However, you can do two things:
Approximate the circular shape with a CollisionPolygon2D. This is simple and will work; however, it might not be very efficient.
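For the first option, here is a minimal sketch of how such a polygon could be generated at runtime. It assumes Godot 3.x and a CollisionPolygon2D child node; the segment count is an arbitrary trade-off between accuracy and cost.

# Sketch: approximate a semicircle with a polygon for CollisionPolygon2D.
# More segments give a smoother arc at the cost of more collision edges.
func make_semicircle_polygon(radius, segments = 16):
    var points = PoolVector2Array()
    for i in range(segments + 1):
        var angle = PI * i / segments  # 0..PI sweeps half a circle
        points.append(Vector2(cos(angle), sin(angle)) * radius)
    return points  # the flat edge closes automatically between the last and first point

func _ready():
    $CollisionPolygon2D.polygon = make_semicircle_polygon(32.0)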
Implement the collision check for your object yourself. It is also simple.
Say it is a semicircle facing in the direction head_dir. Then, in GDScript:
# Manual point-vs-semicircle test; head_dir and circ_radius are assumed to be
# members of the semicircle's node (head_dir should be a unit vector).
func overlaps_semicircle(other_pos):
    var to_other = other_pos - global_position
    # The point must lie on the curved side of the flat edge...
    if to_other.dot(head_dir) < 0:
        return false
    # ...and within the semicircle's radius.
    return to_other.length() <= circ_radius
The second method will give you the exact collision you are looking for. All you need to do is keep head_dir updated.
I am a noob in Godot. I have to use A* to move the player to the goal position, but I do not know how to start, please help! Basically I have just 2 tiles in the tilemap, and only one of them can be walked over. I guess I have to extract the allowed tiles, calculate the distance between the player position and the goal position to get the real distance, and then check cell by cell which has the lowest cost, but I do not know how to do that.
func get_player_init_pos():
    var pos = map_to_world(Vector2(54, 1))
    pos.y += half_cell_size.y
    return pos

func is_tile_vacant(pos, direction):
    var curr_tile = world_to_map(pos)
    var next_tile = get_cellv(curr_tile + direction)
    var next_tile_pos = Vector2()
    if next_tile == 0:
        next_tile_pos = map_to_world(curr_tile + direction)
    else:
        next_tile_pos = pos
    return next_tile_pos
I have this; the first part of the code locates the player on the map and the second checks for tile walls on the map.
You could roll your own pathfinding algorithm. However, there is little point in doing so, since Godot has an AStar class you can use. You probably don't even need that, though, because Godot has a navigation system, and that is what I'm going to suggest you use.
First of all, you can specify both navigation and collision polygons on your tiles. You need the navigation polygons, so go ahead and set them up.
Second, you want a Navigation2D node in the scene tree, with your TileMap as its child.
And third, you can ask the Navigation2D for a path with get_simple_path: pass the start and end positions as arguments and you get back an array of points that make up the path.
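A minimal sketch of that third step (the node path and the start/goal positions are placeholders):

# Sketch (Godot 3.x): query the Navigation2D node for a path.
onready var nav = $Navigation2D

func navigation_path(start_pos, goal_pos):
    # Returns a PoolVector2Array of points; empty if no path was found.
    return nav.get_simple_path(start_pos, goal_pos)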
Since you mention A*, I'll briefly explain how to use the AStar class too.
First, you need to add the cells with add_point. It requires IDs, and it is a good idea to be clever with them so you can compute the ID for a given position, for example x * width + y if you know the map size.
So you can iterate over the tiles in your TileMap and call add_point for each one (you don't need to add cells that are not passable).
Then you need to specify the connections with connect_points (it takes the IDs of the points as parameters).
And finally you can call get_point_path, passing the start and end IDs. Again, it gives you an array of points.
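Put together, a sketch of that approach could look like this. The tilemap variable, the map width and the assumption that tile 0 is the passable one are placeholders based on your snippet:

# Sketch of the AStar route (Godot 3.x); tilemap and width are placeholders.
var astar = AStar.new()

func cell_id(cell):
    return int(cell.x) + int(cell.y) * width  # one way to get a unique ID per cell

func build_graph():
    # Add a point for every passable cell (tile 0 in your setup).
    for cell in tilemap.get_used_cells():
        if tilemap.get_cellv(cell) == 0:
            astar.add_point(cell_id(cell), Vector3(cell.x, cell.y, 0))
    # Connect each cell to its right and bottom neighbours; connections are
    # bidirectional by default, so this covers all four directions.
    for cell in tilemap.get_used_cells():
        if not astar.has_point(cell_id(cell)):
            continue
        for offset in [Vector2(1, 0), Vector2(0, 1)]:
            var neighbour = cell + offset
            if astar.has_point(cell_id(neighbour)):
                astar.connect_points(cell_id(cell), cell_id(neighbour))

func find_path(start_cell, goal_cell):
    # Returns a PoolVector3Array of cell coordinates (the z component is unused).
    return astar.get_point_path(cell_id(start_cell), cell_id(goal_cell))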
I am using map bounds to display markers that fall within the current viewport. Upon zooming or panning the bounds are recalculated and the markers that fall within these bounds are redrawn, effectively hiding any not contained within the viewport.
I would like the markers to draw slightly outside the current viewport; however, this would involve extending the bounds equally on all sides rather than using bounds.extend(point). Is this possible?
//I would like to extend this value in order to draw features that are slightly off the viewport
var bounds = map.getBounds()
//This is how I am currently extending the bounds, it works but I am unsure if it is the correct way.
bounds.b.b = bounds.b.b - 0.5
bounds.b.f = bounds.b.f + 0.5
bounds.f.b = bounds.f.b - 0.5
bounds.f.f = bounds.f.f + 0.5
//Determining whether the feature lies within the current viewport
var result = bounds.contains(Featurecenter)
center = null
//If the feature lies within the viewport
if (result) {
Feature.setMap(map) //Making the feature visible on the map
}
Don't use b, f and the like; they are undocumented properties. Instead, use documented methods such as getNorthEast() and getSouthWest(), then lat() and lng(), when you need to extract coordinates from a bounds object.
I don't know of any other method to extend a bounds object than to pass new coordinates to it (see below).
The LatLngBounds object has an extend() method that you must use to achieve what you want.
We don't know by how much you need to extend the bounds; depending on that, and on the current zoom level at the moment you extend them, you will probably need to do some maths.
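For illustration, here is a sketch that pads the current bounds by a fixed offset in degrees using only documented methods. The 0.5 value and the function name are just examples; see the links below for zoom-aware maths.

// Sketch: build a padded copy of the current bounds with documented
// LatLngBounds methods instead of the undocumented b/f properties.
function padBounds(map, offsetDegrees) {
  var bounds = map.getBounds();
  var ne = bounds.getNorthEast();
  var sw = bounds.getSouthWest();
  var padded = new google.maps.LatLngBounds();
  padded.extend(new google.maps.LatLng(sw.lat() - offsetDegrees, sw.lng() - offsetDegrees));
  padded.extend(new google.maps.LatLng(ne.lat() + offsetDegrees, ne.lng() + offsetDegrees));
  return padded;
}

// Usage: replaces the manual tweaks from the question.
var bounds = padBounds(map, 0.5);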
If you need to extend your bounds by a given (and constant) distance, whatever zoom level is currently set, I suggest you read some other Q&As that explain how to do that, for example:
Google maps distance based on the zoom level
Google Maps V3 - How to calculate the zoom level for a given bounds
I am working on a 3D mesh manipulator using this: http://leapmotion.com. So far, I have been able to manipulate the points just fine by 'grabbing' and moving them; however, I now want to be able to rotate the mesh and work on the opposite face. What I have done is add an extra object called 'rotatable', as shown below:
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 1, 8000);
renderer = new THREE.WebGLRenderer({ clearColor: 0x000000, clearAlpha: 1, maxLights: 5 });

//This is the 'Mesh Scene'
rotatable = new THREE.Object3D();
scene.add(rotatable);

//Mesh we are altering
var material = new THREE.MeshNormalMaterial();
material.side = 2;
var geom = new THREE.SphereGeometry(200, 10, 10);
var sphere = new THREE.Mesh(geom, material);
rotatable.add(sphere);
I am then trying to change the vertices of this sphere, but to do so I need a 'collision test' to see whether a vertex is being 'grabbed'. This involves checking the vertex position to see whether it coincides with one of the finger positions (pseudocode below):
if(finger.x == vertex.x && finger.y == vertex.y && finger.z == vertex.z){
vertex.grabbed = true
}
This works fine when the rotatable's rotation is zero; however, once it starts to rotate, the collision test will still be testing against the unrotated vertex position (which makes sense). My question is how to find the vertex's position in 'scene/global' coordinates. The only way I can think of doing this so far is to compute the rotation of the 'rotatable' and use it to calculate the new vertex position.
I know nothing about math, so this may not be the way to go, and even if it is, I will struggle through it so hard that I won't ever know whether I'm just doing the math incorrectly or whether this isn't the way I should be calculating it. Obviously I'm willing to put in the work, but I just want to make sure this is the way to do it, rather than some other, simpler method.
If there are any other questions about the code, please let me know, and Thanks in advance for your time!
Isaac
To get the world position of a vertex specified in local coordinates, apply the object's world transform to the vertex like so:
vertex.applyMatrix4( object.matrixWorld );
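In the context of the question's code, that could look roughly like the following; i is a placeholder vertex index, and cloning keeps the geometry's local data intact:

// Make sure world matrices reflect the latest rotation of 'rotatable'.
rotatable.updateMatrixWorld(true);

// Clone so the geometry's local vertex is not overwritten.
var worldVertex = sphere.geometry.vertices[i].clone();
worldVertex.applyMatrix4(sphere.matrixWorld); // sphere's world matrix includes rotatable's rotation

// ...then run the grab test against worldVertex instead of the raw vertex.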
(I am not familiar with leapmotion, so hopefully it does not impact this answer.)
Tip: maxLights is no longer required. And it is best to avoid material.side = 2. Use material.side = THREE.DoubleSide instead.
You can find the constants here: https://github.com/mrdoob/three.js/blob/master/src/Three.js
three.js r.55
EDIT (for clarification):
I have a vector image with a simple contour, an irregular closed polygon.
I need to import it into Flash in a way that I can then programmatically access each of the segments that form the polygon.
Importing the vector image into the library as a MovieClip wasn't good because all I get is a shape from which I can take no geometry information at all.
My goal is being able to calculate the polygon's area and also calculating the intersection between the polygon and another polygon.
I guess I could write an Illustrator script that reads all the segments and writes a CSV file with their coordinates, but there has to be a simpler way. I mean, they're both vector formats; they should understand each other.
Thanks!
-- Old Post: --
I have a contour in vector graphics that I imported to the Flash library as a movieclip.
I instantiate the movieclip and it has a Shape child, which is the actual contour.
I need to be able to access the contour segments, i.e. the polygon's sides, so that I can get their starting and ending points. Is there a way?
The Graphics class only allows you to draw, but what you draw, as with the Shape class, is not a set of objects; it's not a polygon with sides or anything like that.
Am I being clear?
Thanks
There is no way to read the data of a Graphics object (which is essentially what contains the information you are after). This applies to any vector graphics object that has already been drawn, whether by the Graphics/drawing API itself, in Flash CS3/CS4, or by embedding it with the [Embed] meta-tag.
Your best bet if you need to calculate the algebraic area, or for some other reason retain the vectors in your algorithms, is definitely exporting an SVG or some single-purpose format (like a CSV of the points) from Illustrator, and parsing that in ActionScript.
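For example, if you export a plain CSV where each line is an "x,y" vertex, parsing it and computing the area in ActionScript is short. This is only a sketch: the CSV layout and function names are assumptions, and the area computation is the standard shoelace formula.

import flash.geom.Point;

// Parse a CSV where each line holds "x,y" coordinates (assumed export format).
function parsePoints(csv:String):Vector.<Point> {
    var points:Vector.<Point> = new Vector.<Point>();
    for each (var line:String in csv.split("\n")) {
        var parts:Array = line.split(",");
        if (parts.length == 2)
            points.push(new Point(Number(parts[0]), Number(parts[1])));
    }
    return points;
}

// Shoelace formula: area of a simple (non-self-intersecting) polygon.
function polygonArea(points:Vector.<Point>):Number {
    var sum:Number = 0;
    for (var i:int = 0; i < points.length; i++) {
        var j:int = (i + 1) % points.length;
        sum += points[i].x * points[j].y - points[j].x * points[i].y;
    }
    return Math.abs(sum) / 2;
}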
Another option is to use a BitmapData, draw the Shape object onto it, and then count the colored (opaque) pixels to numerically estimate its area.
var bmp : BitmapData = new BitmapData(myShape.width, myShape.height, true, 0);
bmp.draw(myShape);

var i : uint;
var area : uint = 0;
var num_pixels : uint = bmp.width * bmp.height;

for (i = 0; i < num_pixels; i++) {
    var px : uint = bmp.getPixel32(i % bmp.width, Math.floor(i / bmp.width));
    // Determine from the px color/alpha whether it's part of the shape or not.
    // This checks whether the alpha component (the top 8 bits of the px
    // integer, extracted with an unsigned shift) is greater than zero, i.e.
    // not fully transparent.
    if ((px >>> 24) > 0)
        area++;
}
trace('number of opaque pixels (area): ' + area);
Depending on your application, you might also be able to use the BitmapData.hitTest() method for your collision detection.
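If you go the hitTest() route, a sketch of the call could look like this; the shapeA/shapeB names and the 128 alpha threshold are example values:

import flash.display.BitmapData;
import flash.geom.Point;

// Pixel-level overlap test between two display objects via BitmapData.hitTest().
var bmpA:BitmapData = new BitmapData(shapeA.width, shapeA.height, true, 0);
bmpA.draw(shapeA);
var bmpB:BitmapData = new BitmapData(shapeB.width, shapeB.height, true, 0);
bmpB.draw(shapeB);

// The Point arguments are the top-left corners of each bitmap in a common
// coordinate space (here, the shapes' positions on the stage).
var overlapping:Boolean = bmpA.hitTest(
    new Point(shapeA.x, shapeA.y), 128,
    bmpB,
    new Point(shapeB.x, shapeB.y), 128);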
I believe the best you can do is retrieve a rectangular bounding box of the Shape object. Depending on how you imported it, you may or may not have direct access to the Shape object as an instance variable; however, if you do, you can call shapeVar.getBounds() or shapeVar.getRect(), both of which take a target coordinate space as a parameter (getBounds returns a rectangle inclusive of strokes on the shape, getRect does not).
I'm curious, so I'm doing a bit of research on alternate means of getting some pixel bounds. I'll edit this further if I find something useful.
See also: Why is my image rotation algorithm not working?
This question isn't language-specific; it is a math problem. I will, however, use some C++ code to explain what I need, as I'm not experienced with the mathematical equations needed to express the problem (but if you know about them, I'd be interested to learn).
Here's how the image is composed:
ImageMatrix image;
image[0][0][0] = 1;
image[0][1][0] = 2;
image[0][2][0] = 1;
image[1][0][0] = 0;
image[1][1][0] = 0;
image[1][2][0] = 0;
image[2][0][0] = -1;
image[2][1][0] = -2;
image[2][2][0] = -1;
Here's the prototype for the function I'm trying to create:
ImageMatrix rotateImage(ImageMatrix image, double angle);
I'd like to rotate only the first two indices (rows and columns) but not the channel.
The usual way to solve this is by doing it backwards. Instead of calculating where each pixel of the input image ends up in the output image, you calculate where each pixel of the output image comes from in the input image (by rotating by the same amount in the opposite direction). This way you can be sure that every pixel in the output image gets a value.
output = new Image(input.size())

for each pixel in output:
{
    p2 = rotate(pixel, -angle)
    value = interpolate(input, p2)
    output(pixel) = value
}
There are different ways to do the interpolation. For the rotation formula, I think you should check https://en.wikipedia.org/wiki/Rotation_matrix#In_two_dimensions
But just to be nice, here it is (rotation of the point (x, y) by angle, given in radians for the standard trig functions):
newX = cos(angle)*x - sin(angle)*y
newY = sin(angle)*x + cos(angle)*y
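Putting the two together, here is a sketch of what rotateImage might look like with nearest-neighbour interpolation, rotating about the image centre. The ImageMatrix accessors (rows(), cols(), channels()) and its constructor are assumptions, since the question doesn't show that interface.

#include <cmath>

ImageMatrix rotateImage(ImageMatrix image, double angle)
{
    const int rows = image.rows(), cols = image.cols(), chans = image.channels();
    const double cy = (rows - 1) / 2.0, cx = (cols - 1) / 2.0;  // rotate about the centre
    const double c = std::cos(angle), s = std::sin(angle);

    ImageMatrix output(rows, cols, chans);  // assumed constructor

    for (int r = 0; r < rows; ++r) {
        for (int col = 0; col < cols; ++col) {
            // Backward mapping: rotate the output coordinate by -angle
            // to find where it came from in the input image.
            const double y = r - cy, x = col - cx;
            const double srcX =  c * x + s * y + cx;  // cos(-a)*x - sin(-a)*y
            const double srcY = -s * x + c * y + cy;  // sin(-a)*x + cos(-a)*y
            const int sr = static_cast<int>(std::lround(srcY));  // nearest neighbour
            const int sc = static_cast<int>(std::lround(srcX));
            for (int ch = 0; ch < chans; ++ch) {
                if (sr >= 0 && sr < rows && sc >= 0 && sc < cols)
                    output[r][col][ch] = image[sr][sc][ch];
                else
                    output[r][col][ch] = 0;  // fell outside the source image
            }
        }
    }
    return output;
}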
To rotate an image, you create 3 points:
A----B
|
|
C
and rotate that around A. To get the new rotated image you do this:
rotate ABC around A in 2D, so this is a single Euler rotation
traverse in the rotated state from A to B. For every pixel you traverse, also step from left to right over the corresponding horizontal line in the original image. So if the image is 100 pixels wide and 50 high, you'll traverse from A to B in 100 steps and from A to C in 50 steps, drawing 50 lines of 100 pixels each in the area formed by ABC in its rotated state.
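A sketch of that forward traversal, under an assumed ImageMatrix interface (rows(), cols(), channels() and an output constructor); gaps can appear at some angles because several source pixels may map to the same output pixel:

#include <cmath>

ImageMatrix rotateForward(ImageMatrix image, double angle)
{
    const int rows = image.rows(), cols = image.cols(), chans = image.channels();
    const double cy = (rows - 1) / 2.0, cx = (cols - 1) / 2.0;
    const double c = std::cos(angle), s = std::sin(angle);

    ImageMatrix output(rows, cols, chans);  // assumed to start zero-filled

    // Walk the source image row by row (A->C) and pixel by pixel (A->B),
    // writing each source pixel at its rotated position in the output.
    for (int r = 0; r < rows; ++r) {
        for (int col = 0; col < cols; ++col) {
            const double y = r - cy, x = col - cx;
            const int outCol = static_cast<int>(std::lround(c * x - s * y + cx));
            const int outRow = static_cast<int>(std::lround(s * x + c * y + cy));
            if (outRow < 0 || outRow >= rows || outCol < 0 || outCol >= cols)
                continue;  // rotated position falls outside the output image
            for (int ch = 0; ch < chans; ++ch)
                output[outRow][outCol][ch] = image[r][col][ch];
        }
    }
    return output;
}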
This might sound complicated but it's not. Please see this C# code I wrote some time ago:
rotoZoomer by me
When drawing, I alter the source pointers a bit to get a rubber-like effect, but if you disable that, you'll see the code rotates the image without problems. Of course, at some angles you'll get an image that looks slightly distorted. The source code contains comments on what's going on, so you should be able to grasp the math/logic behind it easily.
If you like Java better, I also made a Java version once, 14 or so years ago ;) ->
http://www.xs4all.nl/~perseus/zoom/zoom.java
Note there's another solution apart from rotation matrices, one that doesn't lose image information through aliasing.
You can separate 2D image rotation into skews and scalings, which preserve the image quality.
Here's a simpler explanation
It seems like the example you've provided is some edge-detection kernel. So if what you want to do is detect edges at different angles, you'd be better off choosing some continuous function (which in your case might be a parametrized Gaussian of x1 multiplied by x2) and then rotating it according to the formulae provided by kigurai. As a result you would be able to produce a discrete kernel more efficiently and without aliasing.