Using Flex 4 Builder
Is it possible to draw two rectangular shapes, "Box A" and "Box B", place them some distance apart, and then add a magnetic (black) line between them that keeps them connected, without having to manually update the line's x/y position?
It depends on what you mean by manually. In practice, your black line should be drawn between two points defined by BoxA's and BoxB's coordinates; any time you move either of the boxes, you call a method that refreshes the line.
As long as your points are referenced to BoxA's and BoxB's positions, refreshing the line is only a matter of calling the method you used to draw it again.
//Pseudocode
define BoxA position
define BoxB position
define PointA = new Point( BoxA.centerX, BoxA.centerY )
define PointB = new Point( BoxB.centerX, BoxB.centerY )
define drawLine() // draws the line between PointA & PointB
drawLine();
move( BoxB );     // changes the value of PointB
drawLine();       // redraw: the line stays attached to both boxes
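For illustration, a minimal ActionScript sketch of that idea, assuming boxA, boxB and connector are hypothetical display objects in the same container (connector could be, for example, a spark.core.SpriteVisualElement whose graphics are used only for the line):
// Redraw the connector between the two box centres; call this from whatever
// handler moves boxA or boxB (e.g. at the end of a drag handler).
private function redrawConnector():void {
    connector.graphics.clear();
    connector.graphics.lineStyle(2, 0x000000);  // 2 px black line
    connector.graphics.moveTo(boxA.x + boxA.width / 2, boxA.y + boxA.height / 2);
    connector.graphics.lineTo(boxB.x + boxB.width / 2, boxB.y + boxB.height / 2);
}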
I have a 3D model in a coordinate system that is defined in metres. The coordinates have been transformed to have the centre of the bounding box of the model as the origin. A vertex with the coordinates (1, 0, 0) would thus lie 1 metre from the origin.
When trying to add the geometries to the map, with the actual latitude/longitude of the origin as the geoPosition, they are not placed at the exact location and they appear smaller than they really are. How could I solve this?
Thanks.
You can center the map at whatever point you would like in the world with this method:
map.setCameraGeolocationAndZoom(
    // Singapore coordinates and zoom level 16:
    new harp.GeoCoordinates(1.278676, 103.850216),
    16
);
You can specify the projection type in the MapView's constructor.
To implement a globe projection:
const map = new harp.MapView({
    canvas,
    theme: "https://unpkg.com/@here/harp-map-theme@latest/resources/berlin_tilezen_base_globe.json",
    projection: harp.sphereProjection,
    // For tile cache optimization:
    maxVisibleDataSourceTiles: 40,
    tileCacheSize: 100
});
//And set it to a view where you can see the whole world:
map.setCameraGeolocationAndZoom(new harp.GeoCoordinates(1.278676, 103.850216), 4);
Please refer to the documentation for more details:
https://developer.here.com/tutorials/harpgl/#modify-the-map
I am currently trying to make a group of arcs with text above them. This seems to work, but only in half of the cases; in the other half the text ends up below the arc node and is invisible.
I have tried using node.toFront(), toBack(), etc., but it does not change anything.
class pieslice extends Group {

    pieslice(double centerx, double centery, double segpart, double totalseg) {
        Text value = new Text(String.format("%.2f", worth));
        segment = totalseg;
        Arc innerArc = new Arc();
        Arc outerArc = new Arc();
        outerArc.setType(ArcType.ROUND);
        innerArc.setType(ArcType.ROUND);
        value.setFill(Color.WHITE);
        innerArc.setStrokeWidth(1);
        innerArc.setRadiusX(150.0f);
        innerArc.setRadiusY(150.0f);
        outerArc.setRadiusX(innerArc.getRadiusX() + 10);
        outerArc.setRadiusY(innerArc.getRadiusY() + 10);
        outerArc.setFill(Color.WHITE);
        innerArc.setStartAngle((360 / segment) * segpart);
        outerArc.setStartAngle((360 / segment) * segpart);
        innerArc.setCenterX(centerx);
        innerArc.setCenterY(centery);
        outerArc.setCenterX(centerx);
        outerArc.setCenterX(centery);
        innerArc.setLength(360 / segment);
        outerArc.setLength(360 / segment);
        innerArc.setStrokeWidth(1);
        innerArc.setStroke(Color.BLACK);
        innerArc.setFill(Color.color(Math.random(), Math.random(), Math.random()));
        value.setX(150);
        value.setFill(Color.BLACK);
        value.getTransforms().add(new Rotate((360 / segment) * segpart + ((360 / segment) / 2), 0, 0));
        System.out.println((360 / segment) * segpart + ((360 / segment) / 2));
        this.getChildren().add(outerArc);
        this.getChildren().get(0).setViewOrder(2);
        this.getChildren().add(innerArc);
        this.getChildren().add(value);
    }
}
I would expect that, since I am adding the two arcs (inner and outer, for aesthetic effect only) and then the text, the text would be rendered above the shapes, but that is not the case. Any ideas?
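For reference, the ordering rules in play here are: children of a Group are rendered in the order they were added (last added on top), and Node.setViewOrder overrides that order, with lower viewOrder values drawn in front. A minimal, self-contained sketch (unrelated to the pie-slice code above) illustrating both rules:
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.scene.shape.Rectangle;
import javafx.scene.text.Text;
import javafx.stage.Stage;

public class ViewOrderDemo extends Application {
    @Override
    public void start(Stage stage) {
        Rectangle rect = new Rectangle(0, 0, 200, 200);
        rect.setFill(Color.LIGHTGRAY);
        rect.setViewOrder(2);   // higher viewOrder -> pushed behind its siblings

        Text label = new Text(60, 100, "on top");
        // label keeps the default viewOrder of 0, so it is drawn in front of rect,
        // regardless of the order in which the children are added below.
        Group group = new Group(rect, label);

        stage.setScene(new Scene(group, 200, 200));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}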
I am making a 2D platformer in Godot 3.0 and I want the player to throw/shoot items using the mouse to aim (similar to bows and guns in Terraria). How would I go about doing this? I am using GDScript.
You can use the look_at() method (on the Node2D and Spatial classes) together with get_global_mouse_position():
func _process(delta):
    SomeNode2DGun.look_at(get_global_mouse_position())
Subtract the player position vector from the mouse position and you'll get a vector that points from the player to the mouse. You can then use the vector's angle() method to set the rotation of your projectiles, and normalize the vector and scale it to the desired length to get the velocity.
extends KinematicBody2D

var Projectile = preload('res://Projectile.tscn')

func _ready():
    set_process(true)

func _process(delta):
    # A vector that points from the player to the mouse position.
    var direction = get_viewport().get_mouse_position() - position
    if Input.is_action_just_pressed('ui_up'):
        var projectile = Projectile.instance()  # Create a projectile.
        # Set the position, rotation and velocity.
        projectile.position = position
        projectile.rotation = direction.angle()
        projectile.vel = direction.normalized() * 5  # Scale to length 5.
        get_parent().add_child(projectile)
I'm using a KinematicBody2D as the root of the Projectile.tscn scene in this example and move it with move_and_collide(vel), but you can use other node types as well. Also, adjust the collision layers and masks so that the projectiles don't collide with the player.
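For completeness, a minimal sketch of what the root script of Projectile.tscn might look like under those assumptions (the vel property name comes from the snippet above; the script name Projectile.gd and the free-on-impact behaviour are only illustrative):
# Projectile.gd -- attached to the KinematicBody2D root of Projectile.tscn.
extends KinematicBody2D

var vel = Vector2()  # per-frame displacement, set by the player when instancing

func _physics_process(delta):
    # Move by vel each physics frame; move_and_collide() returns collision
    # info if the projectile hits something.
    var collision = move_and_collide(vel)
    if collision:
        queue_free()  # remove the projectile on impact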
I would like to ask for help with using non-standard coordinates on a Leaflet map.
I want to use Leaflet to display custom maps with my own tile generator. Tiles are generated on the fly by a script, depending on where they are to be displayed (the {x}, {y}, {z} parameters in the URL request to the script).
The map will be zoomable (from 0 to 10), with a size of ~16000 * 16000 tiles at maximum zoom and 16 * 16 tiles at minimum, and it will display a variety of objects, each object in a separate tile.
Each tile of 64 * 64 pixels is one object on the map.
For each object (a square tile) I want to display its information on mouse click, by sending an AJAX request to the server. For optimization, I do not want to pre-load all information about all objects.
My main issue: I cannot understand how to correctly get the X, Y, Z coordinates of the tile on which the mouse was clicked.
Essentially, since each tile loaded from the server is bound to the {x}, {y}, {z} grid, I want to get these {x}, {y}, {z} values from clicks on the map and send them in a further AJAX request to fetch information about the object.
At the moment I can get the click point as LatLng coordinates, or as pixel coordinates relative to the upper-left corner of the map, neither of which I can relate to the tile grid.
I would also like to know whether it is possible to get the coordinates of the click relative to the tile. For example, if the tile is 64 * 64 pixels and the click was in the centre of the tile, how can I get the relative click coordinates [32, 32]?
Because if we know the {X}, {Y}, {Z} coordinates of the tile and the relative X and Y coordinates of the click inside the tile, we can build a "universal alternative coordinate grid".
Maybe this is not a problem and it can be solved easily, but I've never worked with any maps API before, so I would like to know the answer to this question.
Thanks in advance for your help!
Here is a working example of getting the Zoom, X, and Y coordinates of a clicked tile (using OpenStreetMap): http://jsfiddle.net/84P9r/
function getTileURL(lat, lon, zoom) {
    // Convert a latitude/longitude in degrees to slippy-map tile numbers.
    let latRad = lat * Math.PI / 180;
    let xtile = Math.floor((lon + 180) / 360 * (1 << zoom));
    let ytile = Math.floor((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * (1 << zoom));
    return zoom + "/" + xtile + "/" + ytile;
}
based on an answer https://stackoverflow.com/a/19197572
You can use the mouseEventToLayerPoint and mouseEventToContainerPoint methods in the Leaflet API to convert pixels onscreen to pixels relative to the top-left of the map, and then using a little math, you can derive the location within a tile.
This is what Leaflet does internally:
const tileSize = L.point(256, 256) // unscaleBy() expects an L.Point, not a plain array
let pixelPoint = map.project(e.latlng, map.getZoom()).floor()
let coords = pixelPoint.unscaleBy(tileSize).floor()
coords.z = map.getZoom() // e.g. { x: 212, y: 387, z: 10 }
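The offset of the click inside that tile (the second part of the question) can be derived from the same projected point; offsetInTile is just an illustrative name:
// Pixel position of the click relative to the top-left corner of its own tile:
let offsetInTile = pixelPoint.subtract(coords.scaleBy(tileSize))
// A click at the exact centre of a 256 x 256 tile would give { x: 128, y: 128 }.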
When setting a QGraphicsItem rotation, I get different behaviour with respect to the transformation origin point when using setRotation() versus using:
transform = QTransform()
transform.rotate(myAngle)
myItem.setTransform(transform)
In both portions of code, I set setTransformOriginPoint() to the same point.
The results are:
Using the setRotation() method, the item is rotated about its transformation origin point.
Using the QTransform object, the item is rotated about the item's own origin, that is, point (0, 0).
My code is more complex than that, but I think the same applies. The QGraphicsItem is in fact a QGraphicsItemGroup, and I can reproduce the issue by adding just one item and, in my rotation procedure, swapping the setRotation() method for the QTransform object. The latter ignores setTransformOriginPoint().
I have had this issue for a while, and I have dug through a lot of resources. I browsed the Qt C++ code, and I can see that the setRotation() method modifies a field called rotation (a real value) in the TransformData structure within QGraphicsItem. The origin point is stored as two real-valued fields in that structure, called xOrigin and yOrigin respectively. The transformation is stored in the transform field. All this information lives in a variable called transformData.
So, I don't get why the transformation set in the transformData->transform field ignores the values transformData->xOrigin and transformData->yOrigin when it is applied.
The relevant part of the code I used to test this issue is the following (I have a rotate item that receives mouse input and applies the rotation to the item itself):
# This method uses a QTransform object...
def mouseMoveEvent(self, event):
    if self.pressed:
        parent = self.parentItem()
        parentPos = parent.boundingRect().center()
        newPoint = event.scenePos()
        iNumber = (newPoint.x() - parentPos.x()) - ((newPoint.y() - parentPos.y())) * 1j
        angle = cmath.phase(iNumber) + 1.5 * math.pi
        self.appliedRotation = (360 - math.degrees(angle)) % 360 - self.angleOffset
        transform = QTransform()
        transform.rotate(self.appliedRotation)
        self.parentItem().setTransform(transform)
# ...Against this one using setRotation()
def mouseMoveEvent(self, event):
    if self.pressed:
        parent = self.parentItem()
        parentPos = parent.boundingRect().center()
        newPoint = event.scenePos()
        iNumber = (newPoint.x() - parentPos.x()) - ((newPoint.y() - parentPos.y())) * 1j
        angle = cmath.phase(iNumber) + 1.5 * math.pi
        self.appliedRotation = (360 - math.degrees(angle)) % 360 - self.angleOffset
        self.parentItem().setRotation(self.appliedRotation)
In both cases, setTransformOriginPoint() is set beforehand; it is not relevant to show that code, only to know that it is done.
I'm getting frustrated at not finding a solution to this. It seems so straightforward: why does setting a rotation transformation matrix not use the transformation origin point I have set, while using the setRotation() method works fine? That question took me to the source code, but now it is even more confusing, as the rotation is kept separate from the transformation that is applied...
I was solving the same problem. I found out that QGraphicsItem::setTransformOriginPoint() is honored only by QGraphicsItem::setRotation(); it is ignored by QGraphicsItem::setTransform().
I use this code to achieve the same behavior with QTransform():
transform = QtGui.QTransform()
centerX = item.boundingRect().width() / 2
centerY = item.boundingRect().height() / 2
transform.translate(centerX, centerY)    # move the pivot to the item's centre
transform.rotate(-rotation)              # rotate about that pivot
transform.translate(-centerX, -centerY)  # move the pivot back
item.setTransform(transform)
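As a sketch, the same translate-rotate-translate pattern could be dropped into the question's mouseMoveEvent (same imports, class context and angle calculation as the question's snippet; whether the rotation sign needs flipping, as in the snippet just above, depends on your coordinate conventions):
def mouseMoveEvent(self, event):
    if self.pressed:
        parent = self.parentItem()
        center = parent.boundingRect().center()
        newPoint = event.scenePos()
        iNumber = (newPoint.x() - center.x()) - (newPoint.y() - center.y()) * 1j
        angle = cmath.phase(iNumber) + 1.5 * math.pi
        self.appliedRotation = (360 - math.degrees(angle)) % 360 - self.angleOffset
        # Bake the pivot into the transform itself, since setTransform()
        # ignores setTransformOriginPoint():
        transform = QTransform()
        transform.translate(center.x(), center.y())
        transform.rotate(self.appliedRotation)
        transform.translate(-center.x(), -center.y())
        parent.setTransform(transform)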