Is there an algorithm for snapping to an isometric grid?
This is the one I came up with:
def Iso(argument0, argument1):
    a = round(pygame.mouse.get_pos()[1] / argument1 - pygame.mouse.get_pos()[0] / argument0)
    b = round(pygame.mouse.get_pos()[1] / argument1 + pygame.mouse.get_pos()[0] / argument0)
    x = (b - a) / 2 * argument0
    y = (b + a) / 2 * argument1
    return (x, y)
and it looks like this:
Anyone got any ideas??
Here is my code:
import pygame
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((640, 480))
curs = pygame.image.load('white-0.gif').convert()
curs.set_alpha(100)
g1 = pygame.image.load('green-0.gif').convert()
tiles = []

def Iso(argument0, argument1):
    # Snap the current mouse position to the isometric grid given the tile size.
    a = round(pygame.mouse.get_pos()[1] / argument1 - pygame.mouse.get_pos()[0] / argument0)
    b = round(pygame.mouse.get_pos()[1] / argument1 + pygame.mouse.get_pos()[0] / argument0)
    x = (b - a) / 2 * argument0
    y = (b + a) / 2 * argument1
    return (x, y)

class Tile(object):
    def __init__(self, spr, pos1, pos2):
        self.pos = (pos1, pos2)
        self.spr = spr

while True:
    screen.fill((90, 90, 0))
    mse = pygame.mouse.get_pos()
    for e in pygame.event.get():
        if e.type == QUIT:
            exit()
        if e.type == MOUSEBUTTONUP:
            if e.button == 1:
                pos = Iso(16, 16)
                tiles.append(Tile(g1, pos[0], pos[1]))
    # Draw the snapped cursor and all placed tiles every frame.
    pos = Iso(16, 16)
    screen.blit(curs, (pos[0], pos[1]))
    for t in tiles:
        screen.blit(t.spr, t.pos)
    pygame.display.update()
UPDATE:
Managed to get it to work like this:
Just having a few depth issues..
You are converting pixels to an isometric view. Presumably, you want to snap to (isometric) tiles instead.
Multiply your isometric x,y by (width/2, height/2), where width and height are your isometric tile dimensions. Since that radically changes the scale, you might want to divide both by a constant first; if you don't, only moving the mouse in the very top-left of your screen will make anything show up.
Apart from the isometric part, this is exactly what one would do for a top-down grid.
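For what it's worth, here is a minimal Python sketch of that idea; the 32x16 tile size and the function name are made up for illustration. It converts the mouse position to isometric grid coordinates, rounds to the nearest tile, and converts back to screen coordinates:
def snap_to_iso(mx, my, tile_w=32, tile_h=16):
    # Convert screen coordinates to (fractional) isometric grid coordinates.
    grid_x = my / tile_h + mx / tile_w
    grid_y = my / tile_h - mx / tile_w
    # Snap to the nearest whole tile.
    grid_x, grid_y = round(grid_x), round(grid_y)
    # Convert back to screen coordinates (the tile's anchor point).
    sx = (grid_x - grid_y) / 2 * tile_w
    sy = (grid_x + grid_y) / 2 * tile_h
    return sx, sy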
I'm creating a game that takes place on a map, and the player should be able to scroll around the map. I'm using real-world data from NASA as a 5400 by 2700 pixel image split into 4 smaller ones, each corresponding to a hemisphere:
How I split up the image:
The player will view the world through a camera, currently in a 4:3 aspect ratio, which can be moved around. Its width and height can be described as two variables x and y, currently 480 and 360 respectively.
Model of the camera:
In practice, the camera is "fixed" and instead the tiles move. The camera's center is described as two variables: xcam and ycam.
Currently, the 4 tiles move and hide flawlessly. The problem arises when the camera passes over the "edge" at 180 degrees longitude. What should happen is that the tiles on one side should show and move as if the world were a cylinder, without any noticeable gaps. I update xcam with this equation:
xcam = ((xcam + (2700 - x)) mod (5400 - x)) - (2700 - x)
And the tiles' centers update according to these equations (I will focus only on tiles 1 and 2 for simplicity):
tile1_x = xcam - 1350
tile1_y = ycam + 650
tile2_x = xcam + 1350
tile2_y = ycam + 650
Using this, whenever the camera moves past the leftmost edge of tile 1, it "skips": instead of tile 1 still being visible with tile 2 in view, it jumps far enough that tile 2's rightmost edge lands at the camera's rightmost edge.
Here's what happens in reality (first image), and here's what I want to happen (second image).
So, is there any way to update the equations I'm using (or even completely redo everything) so that I can get smooth wrapping?
I think you unnecessarily hard-code the number of tiles and their sizes, and thus bind your code to that data. In my opinion it would be better to store them in variables, so that they can be easily modified in one place if the data ever changes. This also lets us write more flexible code.
So, let's assume we have variables:
// logical size of the whole Earth's map,
// currently 2 and 2
int ncols, nrows;
// single tile's size, currently 2700 and 1350
int wtile, htile;
// the whole Earth map's size
// always ncols*wtile and nrows*htile
int wmap, hmap;
Tile tiles[nrows][ncols];
// viewport's center in map coordinates
int xcam, ycam;
// viewport's size in map coordinates, currently 480 and 360
int wcam, hcam;
Whenever we update the player's position, we need to make sure the position falls within the allowed range. But we need to establish the coordinate system first in order to define that range. For example, if x values span from 0 to wmap-1, increasing rightwards (towards East), and y values span from 0 to hmap-1, increasing downwards (towards South), then:
// player's displacement
int dx, dy;
xcam = (xcam + dx) mod wmap
ycam = (ycam + dy) mod hmap
assures the camera position is always within the map. (This assumes the mod operator always returns a non-negative value. Should it work like the C language % operator, which returns a negative result for a negative dividend, one needs to add the divisor first to make sure the first argument is non-negative: xcam = (xcam + dx + wmap) mod wmap, etc.)
If you'd rather like to have xcam,ycam = 0,0 at the center of a map (that is, at the Greenwich meridian and the equator), then the allowed range would be -wmap/2 through wmap/2-1 for x and -hmap/2 through hmap/2 - 1 for y. Then:
xcam = (xcam + dx + wmap/2) mod wmap - wmap/2
ycam = (ycam + dy + hmap/2) mod hmap - hmap/2
More generally, let x0, y0 denote the 'zero' position of the camera relative to the upper-left corner of the map. Then we can update the camera position by transforming it to the map's coordinates, then shifting and wrapping, and finally transforming back to the camera's coordinates:
xmap = xcam + x0
ymap = ycam + y0
xmap = (xmap + dx) mod wmap
ymap = (ymap + dy) mod hmap
xcam = xmap - x0
ycam = ymap - y0
or, more compactly:
xcam = (xcam + dx + x0) mod wmap - x0
ycam = (ycam + dy + y0) mod hmap - y0
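As a concrete illustration, here is a minimal Python sketch of that update (Python's % operator already returns a non-negative result for a positive divisor, so no extra shift is needed; the names mirror the variables above):
def wrap_camera(xcam, ycam, dx, dy, wmap, hmap, x0=0, y0=0):
    # Shift into map coordinates, apply the displacement, wrap, shift back.
    xcam = (xcam + dx + x0) % wmap - x0
    ycam = (ycam + dy + y0) % hmap - y0
    return xcam, ycam
For the centered convention above, call it with x0 = wmap/2 and y0 = hmap/2.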
Now that we know the position of the viewport (camera) relative to the map, we need to fill it with the map tiles. And a new decision must be made here.
When we travel from Anchorage, Alaska (western hemisphere) to the North, we eventually reach the North Pole and then find ourselves in the eastern hemisphere, heading South. If we proceed in the same direction, we'll get to Kuusamo, Norway, then Sankt Petersburg, Russia, then Kiev, Ukraine... But that would be a travel to the South! We usually do not describe it as the next part of the initial North route. Consequently, we do not show the part 'past the pole' as an upside-down extension of the map. Hence the map should never show tiles above row number 0 or below row number nrows-1.
On the other hand, when we travel along circles of latitude, we smoothly cross the 0 and 180 meridians and switch between the eastern and western hemispheres. So if the camera view covers an area on both sides of the left or right edge of the map, we need to continue filling the view with tiles from the other end of the tiles array. If we use a map scaled down so that it is smaller than the viewport, we may even need to iterate that more than once!
The left edge of the camera view corresponds to the 'longitude' xleft = xcam - wcam/2 and the right one to xrght = xcam + wcam/2. So we can step across the viewport by the tile's width to find the appropriate columns and show them:
x = xleft
repeat
show a column at x
x = x + wtile
until x >= xrght
The 'show a column at x' part requires finding the appropriate column, then iterating down the column to show the corresponding tiles. Let's find out which tiles fit the camera view:
ytop = ycam - hcam/2
ybot = ycam + hcam/2
y=ytop
repeat
show a tile at x,y
y = y + htile
until y >= ybot
To show a tile we need to locate the appropriate tile and then send it to the appropriate position in the camera view.
However, we treat the column number differently from the row number: columns wrap while rows do not:
row = y/htile
if (0 <= row) and (row < nrows) then
col = (x/wtile) mod ncols
xtile = x - (x mod wtile)
ytile = y - (y mod htile)
display tile[row][col] at xtile,ytile
endif
Of course xtile and ytile are our map-scale longitude and latitude, so the 'display tile at' routine must transform them to the camera view coordinates by subtracting the camera position from them:
xinview = xtile - xcam
yinview = ytile - ycam
and then apply the resulting values relative to the camera view's center at the displaying device (screen).
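Putting the column/row loops and the wrapping rule together, a rough Python sketch might look like this. It assumes xcam, ycam are already map coordinates (x0 = y0 = 0), starts each loop at a tile boundary so partially visible tiles on both edges are included, and uses draw_tile as a stand-in for whatever blitting routine the rendering layer provides:
def draw_visible_tiles(xcam, ycam, wcam, hcam, wtile, htile, ncols, nrows, tiles, draw_tile):
    xleft, xrght = xcam - wcam // 2, xcam + wcam // 2
    ytop, ybot = ycam - hcam // 2, ycam + hcam // 2
    y = ytop - (ytop % htile)            # snap to the tile boundary at or above ytop
    while y < ybot:
        row = y // htile
        if 0 <= row < nrows:             # rows never wrap past the poles
            x = xleft - (xleft % wtile)  # snap to the tile boundary at or left of xleft
            while x < xrght:
                col = (x // wtile) % ncols   # columns wrap across the 180th meridian
                # x, y are map-scale coordinates of the tile's corner; convert them
                # to view coordinates relative to the camera's position.
                draw_tile(tiles[row][col], x - xcam, y - ycam)
                x += wtile
        y += htile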
Another level of complication will appear if you want to implement zooming in and out the view, that is dynamic scaling of the map, but I'm sure you'll find out yourself which calculations will need applying the zoom factor for correct results. :)
So to give further context, let's say I have an image that is 200px by 200px with a rectangle on it; it's the red one below:
I know the height and width of the image, the coordinates of the rectangle and also the height and width of the red rectangle.
So what I need to know is: if I flip this whole image (including the rectangle), is there a way to work out the new coordinates of the red rectangle? I'd imagine there must be some kind of formula or algorithm I can apply to get these new coordinates.
This was already answered over here. Below is the function that worked best for my use case, which is very similar to yours.
import numpy as np

def rotate(point, origin, degrees):
    radians = np.deg2rad(degrees)
    x, y = point
    offset_x, offset_y = origin
    adjusted_x = (x - offset_x)
    adjusted_y = (y - offset_y)
    cos_rad = np.cos(radians)
    sin_rad = np.sin(radians)
    qx = offset_x + cos_rad * adjusted_x + sin_rad * adjusted_y
    qy = offset_y + -sin_rad * adjusted_x + cos_rad * adjusted_y
    return int(qx), int(qy)
In addition to this, sometimes when you rotate the points you get negative values (depending on the degrees of rotation); in cases like these you need to add the height and/or width of the image you are rotating to the value. In my case below the images were of a fixed size (416x416).
def cord_checker(pt1):
    # Wrap any negative coordinate back into the 416x416 image.
    for item in pt1:
        if item < 0:
            pt1[pt1.index(item)] = 416 + item
    return pt1
Finally, to get the coordinates of the rotated point:
pt1 = tuple(cord_checker(list(rotate((xmi, ymi), origin=(0, 0), degrees=90))))
where degrees can be 90, 180, etc.
If the image is centered around the origin (0,0), you can just flip the signs of the x-coordinates to do a horizontal flip or the y-coordinates to do a vertical flip while preserving the origin as your center.
You can also flip an image that is not centered at the origin by flipping the signs in the same way:
# Horizontal flip
new_x = -x
new_y = y
# Vertical flip
new_x = x
new_y = -y
but the center coordinates will not be the same. If you want the same center coordinates, you'd have to shift it back.
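For the 200x200 example in the question (top-left origin, rectangle given by its top-left corner plus width and height), that shift-back amounts to something like the following sketch; the function name and defaults are just for illustration:
def flip_rect(x, y, w, h, img_w=200, img_h=200, horizontal=True):
    # Mirror the rectangle inside the image while keeping a top-left origin.
    if horizontal:
        return img_w - x - w, y   # new top-left after a horizontal flip
    return x, img_h - y - h       # new top-left after a vertical flip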
In a web app, I would like to let the user select a region of interest in a plotted image using the nice box/lasso selection tools of bokeh. I would then like to receive the selected pixels for further operations in python.
For scatter plots, this is easy to do in analogy with the gallery,
import bokeh.plotting
import numpy as np

# data
X = np.linspace(0, 10, 20)
def f(x): return np.random.random(len(x))

# plot and add to document
fig = bokeh.plotting.figure(x_range=(0, 10), y_range=(0, 10),
                            tools="pan,wheel_zoom,box_select,lasso_select,reset")
plot = fig.scatter(X, f(X))
#plot = fig.image([np.random.random((10,10))*255], dw=[10], dh=[10])
bokeh.plotting.curdoc().add_root(fig)

# callback
def callback(attr, old, new):
    # easily access selected points:
    print(sorted(new['1d']['indices']))
    print(sorted(plot.data_source.selected['1d']['indices']))
    plot.data_source.data = {'x': X, 'y': f(X)}

plot.data_source.on_change('selected', callback)
however if I replace the scatter plot with
plot = fig.image([np.random.random((10,10))*255], dw=[10], dh=[10])
then using the selection tools on the image does not change anything in plot.data_source.selected.
I'm sure this is the intended behavior (and it makes sense too), but what if I want to select pixels of an image? I could of course put a grid of invisible scatter points on top of the image, but is there some more elegant way to accomplish this?
It sounds like the tool you're looking for is actually the BoxEditTool. Note that the BoxEditTool requires a list of glyphs (normally these will be Rect instances) that will render the ROIs, and that listening to changes should be set using:
rect_glyph_source.on_change('data', callback)
This will trigger the callback function any time you make any changes to your ROIs.
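For context, a rough sketch of the wiring (reusing fig and callback from the question; the Rect glyph's styling is arbitrary):
from bokeh.models import BoxEditTool, ColumnDataSource

# Empty source that the BoxEditTool fills with one entry per drawn ROI.
rect_glyph_source = ColumnDataSource(data=dict(x=[], y=[], width=[], height=[]))
rect_renderer = fig.rect('x', 'y', 'width', 'height',
                         source=rect_glyph_source, fill_alpha=0.3)
fig.add_tools(BoxEditTool(renderers=[rect_renderer]))
rect_glyph_source.on_change('data', callback)  # fires on every ROI change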
The relevant ColumnDataSource instance (rect_glyph_source in this example) will be updated so that the 'x' and 'y' keys list the center of each ROI in the image's coordinate space, and of course 'width' and 'height' describe its size. As far as I know there isn't currently a built-in method for extracting the data itself, so you will have to do something like:
rois = rect_glyph_source.data
roi_index = 0 # x, y, width and height are lists, and each ROI has its own index
x_center = rois['x'][roi_index]
width = rois['width'][roi_index]
y_center = rois['y'][roi_index]
height = rois['height'][roi_index]
x_start = int(x_center - 0.5 * width)
x_end = int(x_center + 0.5 * width)
y_start = int(y_center - 0.5 * height)
y_end = int(y_center + 0.5 * height)
roi_data = image_plot.source.data['image'][0][y_start:y_end, x_start:x_end]
IMPORTANT: In the current version of Bokeh (0.13.0) there is a problem with the synchronization of the BoxEditTool at the server and it isn't functional. This should be fixed in the next official Bokeh release. For more information and a temporary solution see this answer or this discussion.
I'm trying to guide the player to the last enemy on the field by having an icon on the edge of the screen pointing towards the last enemy outside the screen.
The screen is the rectangle, the player is the point within the rectangle, and the enemy is the point outside of the rectangle.
Example
The solutions I can think of are finding which side will be intersected (which I'm not sure how to do correctly, but I imagine it first involves finding whether the intersection is on a vertical or a horizontal side) and then using a linear equation to find the x or y. Or, you could use a line-line-intersection method on each side, but as the screen rect never changes, this seems a little overkill.
I've got my first solution working on paper for one test case, but can't get it working at all in Unity.
Does anyone have a solution or could push me in the right direction? Thanks a lot.
Let the rectangle edges have the equations
X = XLeft
Y = YBottom
X = XRight
Y = YTop
Player-enemy vector
D = (D.X, D.Y) = (E.X - P.X, E.Y - P.Y)
if D.X is positive, consider intersection with right edge (otherwise with left)
P.X + t * D.X = XRight
if D.Y is positive, consider intersection with top edge (otherwise with bottom)
P.Y + u * D.Y = YTop
Solve the equations for t and u. If u is less than t, the intersection with the horizontal edge comes first; find the X-coordinate of that intersection with the expression
XInt = P.X + u * D.X
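If it helps, here is a compact Python sketch of that test (coordinate conventions assume y grows upwards; P is the player inside the rectangle, E the enemy outside it, and the returned point lies on the rectangle's border):
def edge_indicator(px, py, ex, ey, xleft, ybottom, xright, ytop):
    dx, dy = ex - px, ey - py
    # Parameter along the ray at which we reach the vertical edge we are
    # heading towards (t) and the horizontal edge (u); inf if parallel.
    t = (xright - px) / dx if dx > 0 else (xleft - px) / dx if dx < 0 else float('inf')
    u = (ytop - py) / dy if dy > 0 else (ybottom - py) / dy if dy < 0 else float('inf')
    s = min(t, u)                      # whichever edge is hit first
    return px + s * dx, py + s * dy    # icon position on the screen border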
How can one calculate the camera distance from an object in 3D space (an image in this case) such that the image is at its original pixel width?
Am I right in assuming that this is possible given the aspect ratio of the camera, fov, and the original width/height of the image in pixels?
(In case it is relevant, I am using THREE.js in this particular instance).
Thanks to anyone who can help or lead me in the right direction!
Thanks everyone for all the input!
After doing some digging and then working out how this all fits into the exact problem I was trying to solve with THREE.js, this was the answer I came up with in JavaScript as the target Z distance for displaying things at their original scale:
var vFOV = this.camera.fov * (Math.PI / 180); // convert VERTICAL fov to radians
var targetZ = window.innerHeight / (2 * Math.tan(vFOV / 2));
I was trying to figure out which one to mark as the answer but I kind of combined all of them into this solution.
Trigonometrically:
A line segment of length l at a right angle to the view direction, at a perpendicular distance of n from the camera, will subtend an angle of arctan(l/n) at the camera. You can arrive at that result by simple trigonometry.
Hence if your field of view in the direction of the line is q, amounting to p pixels, you'll end up occupying p*arctan(l/n)/q pixels.
So, using y as the output number of pixels:
y = p*arctan(l/n)/q
y*q/p = arctan(l/n)
l/tan(y*q/p) = n
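As a quick sketch of that last rearrangement (illustrative names only; q is the field of view in radians spanning p pixels, l is the object's real size and y its desired on-screen size in pixels):
import math

def camera_distance(l, y, p, q):
    # Distance n at which a segment of length l covers y of the p pixels
    # spanning a field of view of q radians.
    return l / math.tan(y * q / p)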
Linear algebra:
In a camera with a field-of-view of 90 degrees and a viewport of 2w pixels wide, the projection into screen space is equivalent to:
x' = w - w*x/z
When the line is perpendicular to the view direction, its length on screen is the difference between two such x's, so the constant w cancels and by the normal associativity and commutativity rules:
l' = w*l/z
Hence:
z = w*l/l'
If your field of view is actually q degrees rather than 90 then you can use the cotangent of q/2 to scale appropriately.
In your original question you said that you're using css3D. I suggest that you do the following:
Set up an orthographic camera with fov = 1..179 degrees, where left = screenWidth / 2, right = screenWidth / - 2, top = screenHeight / 2, bottom = screenHeight / - 2. Near and far planes do not affect CSS3D rendering as far as I can tell from experience.
camera = new THREE.OrthographicCamera(left, right, top, bottom, near, far);
camera.fov = 75;
Now you need to calculate the distance between the camera and the object in such a way that, when the object is projected using the camera with the settings above, it has a 1:1 coordinate correspondence on screen. This can be done in the following way:
var camscale = Math.tan(( camera.fov / 2 ) / 180 * Math.PI);
var camfix = screenHeight / 2 / camscale;
place your div at position x, y, z
set the camera's position to 0, 0, z + camfix
This should give you a 1:1 coordinate correspondence between the rendered result and your pixel values in CSS / div styles. Remember that the origin is in the center and the object's position is the center of the object, so you need to make adjustments in order to specify coordinates from the top-left corner, for example:
object.x = ( screenWidth - objectWidth ) / 2 + positionLeft
object.y = ( screenHeight - objectHeight ) / 2 + positionTop
object.z = 0
I hope this helps. I was struggling with the same thing (exact control of the CSS3D scene) but managed to figure out that the orthographic camera plus a viewport-size-adjusted distance from the object did the trick. Don't alter the camera rotation or its x and y coordinates; just fiddle with the z and you're safe.