Right now I've got about a 10x10 grid of squares that the player can move 1 square at a time on.
When they hop to a square, I need an animation to play based on the sprite_index of the square they're jumping to and the one they just came from.
I've got the "jumping to" one sorted out.
In a collision event between the player and the square (other here being the square):
with (other) {
    if (sprite_index == sGreenH) {
        instance_create(x, y, oGreenPlayerAni);
    }
}
(Also, is there a better way to do the above? Instead of spawning the animation on top of what's there, can I delete/replace the square first and then put something down?)
So now I'm trying to get an animation to play on the square the character is leaving. I can do that in the player's collision event with the square:
xx = xprevious;
yy = yprevious;
instance_create(xx, yy, someAnimation);
The problem there is that I can't choose which animation plays. There are 4 possible animation colors for the 4 different colors of squares.
So I tried, in my square's collision event with the player, setting a variable like
if (sprite_index == sGreen) {
    global.previousColor = 1;
}
and so on for each of my colors. Then in my player's collision event with the square I again have
if (global.previousColor == 1) {
    instance_create(xx, yy, oGreenHollowAni);
}
And then I get an error when I move.
Code for the square colliding with the player (player on a square): http://puu.sh/n9zCY/2f226b6d3c.png
Code for the player colliding with the square: http://puu.sh/n9zK6/deac1a09f5.png
Error: http://puu.sh/n9zPj/ea84a9a943.png
I am not sure I understand your question correctly. As far as I can tell, you always create a new instance when the player moves?
If so, that is not good. When you create your 10x10 grid of squares, I guess you build an array that stores which color each square displays, e.g. array[x][y] = color.green. You could then create an enum and define green = 1, blue = 2, and so on.
This array would be global.
From the player object you then check which square you are currently on, and when you move you check which square you will land on. With this information you can pick the right animation.
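A rough GML sketch of that idea (the 32-pixel cell size, the COLOR enum and the oGreen... object names are assumptions based on the thread, so adjust them to your project):
// Define the colors once
enum COLOR { green = 1, blue = 2, red = 3, yellow = 4 }
// When you build the grid, record each square's color, e.g.
global.grid[3, 4] = COLOR.green; // the square at cell (3, 4) is green
// In the player, after a move, look up both cells and spawn the matching animations
var cs = 32; // cell size in pixels (assumption)
var from_color = global.grid[xprevious div cs, yprevious div cs];
var to_color = global.grid[x div cs, y div cs];
if (from_color == COLOR.green) instance_create(xprevious, yprevious, oGreenHollowAni);
if (to_color == COLOR.green) instance_create(x, y, oGreenPlayerAni);
// ...repeat the two ifs (or use a switch) for blue, red and yellow.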
Let me know if this is what you meant.
eric
I am a beginner in Godot. I have to use A* to move the player to the goal position, but I do not know how to start. Basically I have just 2 tiles in the TileMap, and only one of them is allowed to be walked on. I guess I have to extract the allowed tiles and calculate the distance between the player's position and the goal position to get the real distance, and then check cell by cell which one has the lowest cost, but I do not know how to do that.
func get_player_init_pos():
    var pos = map_to_world(Vector2(54, 1))
    pos.y += half_cell_size.y
    return pos

func is_tile_vacant(pos, direction):
    var curr_tile = world_to_map(pos)
    var next_tile = get_cellv(curr_tile + direction)
    var next_tile_pos = Vector2()
    if next_tile == 0:
        next_tile_pos = map_to_world(curr_tile + direction)
    else:
        next_tile_pos = pos
    return next_tile_pos
I have this; the first part of the code locates the player in the map, and the second part checks for tile walls in the map.
You could roll your own pathfinding algorithm. However, there is little point in doing so, since Godot has an AStar class you can use. You probably don't need that either, because Godot has a navigation system, and that is what I'm going to suggest you use.
First of all, you can specify both navigation and collision polygons on your tiles. You need the navigation polygons, so go ahead and define them.
Second, you want a Navigation2D node in the scene tree, with your TileMap as its child.
And third, you can ask the Navigation2D for a path with get_simple_path: you pass the start and end positions as arguments and you get back an array of points that make up the path.
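A minimal sketch of that call (it assumes the script can see the Navigation2D node and that the node is literally named "Navigation2D"; adjust the node path to your scene):
onready var nav = get_node("Navigation2D")

func get_path_to_goal(player_pos, goal_pos):
    # world-space start and end; returns an array of points to walk along
    return nav.get_simple_path(player_pos, goal_pos, true)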
Since you mention A*, I'll briefly explain using the AStar class too.
First, you need to add the cells with add_point. It requires ids, so it is a good idea to pick them so you can compute the id for a given position, for example x * width + y if you know the grid size.
So you iterate over the tiles in your TileMap and call add_point for each one (you don't need to add cells that are not passable).
Then you specify the connections with connect_points (it takes the ids of the two points as parameters).
And finally you can call get_point_path, passing the start and end ids. Again it gives you an array of points.
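Roughly, a sketch of the AStar route (this assumes the script sits on the TileMap, a 100x100 grid, and that tile 0 is the walkable one, as in your code; tweak those to match your project):
var astar = AStar.new()
var width = 100
var height = 100

func cell_id(x, y):
    return x * width + y

func build_astar():
    # add every passable cell as a point
    for x in range(width):
        for y in range(height):
            if get_cell(x, y) == 0:
                astar.add_point(cell_id(x, y), Vector3(x, y, 0))
    # connect horizontally and vertically adjacent passable cells
    for x in range(width):
        for y in range(height):
            if not astar.has_point(cell_id(x, y)):
                continue
            if x + 1 < width and astar.has_point(cell_id(x + 1, y)):
                astar.connect_points(cell_id(x, y), cell_id(x + 1, y))
            if y + 1 < height and astar.has_point(cell_id(x, y + 1)):
                astar.connect_points(cell_id(x, y), cell_id(x, y + 1))

func find_path(start_x, start_y, goal_x, goal_y):
    # returns an array of points in cell coordinates
    return astar.get_point_path(cell_id(start_x, start_y), cell_id(goal_x, goal_y))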
Guys, I want to do this without GML; I guess it's easy to do.
You can see in the image I have uploaded that there are three sprites, and I want them to enter the room randomly at specific locations.
I'm new to GameMaker and this is my first game.
I'm making a shooting game for Android. You can refer to the image for a clearer idea of the game.
Things I did: I made 3 sprites, red, yellow and green.
What I want to do is:
I want the red, yellow and green sprites moving vertically while the player shoots at them.
I want the sprites to be created automatically at random locations (at the top, I mean), move vertically, and not overlap each other.
How should I do it?
Well, there are a lot of different ways to do this. The simplest one is to create an object, assign it a sprite, spawn it at a random x at the top of the room, and give it a vspeed.
In the object's Create event:
sprite_index = your_sprite;
x = random(room_width);
y = 0;
vspeed = the_speed_you_want;
I suppose you want to be able to shoot at the sprites, so it is better to make an object than to simply draw them.
If you don't want them to overlap, you can define 3 possible starting positions and randomly pick one with the choose() function.
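For example, in a controller object's Alarm event (the object and sprite names, the three x positions and the speed below are just placeholders):
// pick one of three fixed columns so the enemies never overlap horizontally
var xx = choose(64, 160, 256);
var enemy = instance_create(xx, 0, oEnemy);
enemy.sprite_index = choose(sRed, sYellow, sGreen);
enemy.vspeed = 4;
alarm[0] = room_speed; // spawn the next one roughly a second later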
I would like to move a button's position (X & Y) within a specific area of a view object.
What I've done so far is, I've added a view object on to my main screen and I've also added a button inside the view, so far OK.
Now, I managed to get the bounds of the view object and I know the size of my button, say 50x50. What I want is that when I click on the button, it moves to another random location, but stays within the view object; it should not move outside the view object's boundaries. Right now what happens is that my button sometimes ends up partly outside the view object, because of the random X & Y.
Any help will be appreciated.
I managed to get it as well. Here is how I did it:
I created a function to which I pass the shape, and it returns the location I want the piece to move to, which is within the bounds of the view.
func randomizeLocation(shape: CGRect) -> CGPoint {
let randomX = arc4random_uniform(UInt32(gameBoard.frame.width - shape.width)) + UInt32(shape.width / 2)
let randomY = arc4random_uniform(UInt32(gameBoard.frame.height - shape.height)) + UInt32(shape.height / 2)
return CGPointMake(CGFloat(randomX), CGFloat(randomY))
}
Speaking just about the x coordinate (left/right):
The logic here is that arc4random_uniform returns a random number between 0 and the given number minus 1.
But you don't want a number starting at zero, because that would put the left half of the object off the edge of the view. So if you add half of the object's width to the result, you will never go off the left side.
To keep from going off the right side, you want a maximum of the view's width minus half the width of the object. To get this, you subtract the whole width of the shape from the number you pass to arc4random_uniform, because when you add the half-width back, you shift both the minimum and the maximum return value by half the object's width. For example, with a 300-point-wide view and a 50-point-wide shape, you get a random value in 0...249, add 25, and the center lands between 25 and 274, so the shape's edges stay between 0 and 299.
So to use this function, take your object, let's say a UIButton, and set its center to the function's result like this:
let myLabel = UIButton(type: .Custom)
myLabel.frame = CGRectMake(100, 100, 100, 100)
myLabel.center = randomizeLocation(myLabel.bounds)
And that's it. By the way, I did not account for the -1 in the return value because I figured one pixel was too insignificant to bother with.
Whenever I develop games that use mouse input, I get confused about calculating the mouse position, especially the z position.
These are the approaches I have seen people use:
mouse position z = mouse position y.
z = distance between the camera and the object.
z = difference between the object's z and the camera's z (this is what I am using; it doesn't work when the camera or the object is rotated).
z = some arbitrary value (many use 0, some use other values).
others.
Which method is correct? Is there any other method which is correct?
Please let me know.
The same answer as in your second mouse based question applies here too: Mouse based aiming Unity3d
TL;DR: use a raycast from the camera to intersect the plane that the action is on.
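A minimal sketch of that in Unity C# (it assumes the action happens on the world plane y = 0 and that Camera.main is the camera you render with; swap in your own plane and camera):
// cast the mouse ray from the camera and intersect it with the gameplay plane
Plane ground = new Plane(Vector3.up, Vector3.zero);
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
float enter;
if (ground.Raycast(ray, out enter))
{
    Vector3 worldPoint = ray.GetPoint(enter); // the mouse position in world space
}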
Vector3 pz = Camera.main.ScreenToWorldPoint(Input.mousePosition);
pz.z should be what you are asking for, if I'm understanding you right. Tell me if it works.
The mouse position is, in theory, external to the game world itself. Therefore the mouse position relates simply to the X and Y coordinates of the screen space in which you are interacting with your game (i.e. the width and height of your game window).
What you're asking seems to be more of "How do I model the mouse position in my game world?"
As noted by Mario, Camera.main.ScreenToWorldPoint(Input.mousePosition) will convert your mouse position to world coordinates. However, this assumes your main camera is the one you want to convert with. In reality, you want to call ScreenToWorldPoint on the camera that is rendering whatever space you want to interact with. For example, you may have your main game world at (0, 0, 0) but render your GUI on top with a separate camera that renders objects at (-5000, 0, 0).
To answer your question: to model the mouse z position, it should simply be the same z value as your camera. You can then perform calculations on that value to suit your particular needs.
I.e.:
1) mouse.position.z = mouse.position.y - these are entirely different things; now you're just using an arbitrary value.
2) Distance between camera and object - that's a calculation made from your original object.position.z and original mouse.position.z, not your actual z value.
3) See 2.
4) See 1.
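To make the z handling concrete: ScreenToWorldPoint treats the z you pass in as the distance in world units in front of the camera, so a hedged snippet along those lines (the 10f plane distance is just an example value) would be:
// convert the mouse to a world point that sits 10 units in front of the camera
Vector3 screen = Input.mousePosition;
screen.z = 10f; // distance from the camera to the plane you care about
Vector3 world = Camera.main.ScreenToWorldPoint(screen);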
I am implementing a pan tool in our software's 3D view which is supposed to work much like the grab tool of, say, Photoshop or Acrobat Reader. That is, the point the user grabs onto with the mouse (clicks and holds, then moves the mouse) stays under the mouse cursor as the mouse moves.
This is a common paradigm and one that's been asked about on SO before, the best answer being to this question about the technique in OpenGL. There is another that also has some hints, and I have been reading this very informative CodeProject article. (It doesn't explain many of its code examples' variables etc, but from reading the text I think I understand the technique.) But, I have some implementation issues because my 3D environment's navigation is set up quite differently to those articles, and I am seeking some guidance.
My technique - and this might be fundamentally flawed, so please say so - is:
The scene 'camera' is stored as two D3DXVECTOR3 points: the eye position and a look point. The view matrix is constructed using D3DXMatrixLookAtLH like so:
const D3DXVECTOR3 oUpVector(0.0f, 1.0f, 0.0f); // Keep up "up", always.
D3DXMatrixLookAtLH(&m_oViewMatrix, &m_oEyePos, &m_oLook, &oUpVector);
When the mouse button is pressed, shoot a ray through that pixel and find: the coordinate (in unprojected scene / world space) of the pixel that was clicked on; the intersection of that ray with the near plane; and the distance between the near-plane point and object, which is the length between those two points. Store this and the mouse position, and the original navigation (eye and look).
// Get the clicked-on point in unprojected (normal) world space
D3DXVECTOR3 o3DPos;
if (Get3DPositionAtMouse(roMousePos, o3DPos)) { // fails if nothing under the mouse
// Mouse location when panning started
m_oPanMouseStartPos = roMousePos;
// Intersection at near plane (z = 0) of the ray from camera to clicked spot
D3DXVECTOR3 oRayVector;
CalculateRayFromPixel(m_oPanMouseStartPos, m_oPanPlaneZ0StartPos, oRayVector);
// Store original eye and look points
m_oPanOriginalEyePos = m_oEyePos;
m_oPanOriginalLook = m_oLook;
// Store the distance between near plane and the object, and the object position
m_dPanPlaneZ0ObjectDist = fabs(D3DXVec3Length(&(o3DPos - m_oPanPlaneZ0StartPos)));
m_oPanOriginalObjectPos = o3DPos;
Get3DPositionAtMouse is a known-ok method which picks a 3D coordinate under the mouse. CalculateRayFromPixel is a known-ok method which takes in a screen-space mouse coordinate and casts a ray, and fills the other two parameters with the ray intersection at the near plane (Z = 0) and the normalised ray vector.
When the mouse moves, cast another ray at the new position, but using the old (original) view matrix. (Thanks to Nico below for pointing this out.) Calculate where the object should be by extending the ray from the near plane by the stored distance between the object and the near plane (this way, the original and new object points should lie in a plane parallel to the near plane). Move the eye and look coordinates by this much. Eye and Look are set from their original values (when panning started), with the difference coming from the original and new mouse positions. This is to reduce any precision loss from incrementing or decrementing by granular (integer) pixel movements as the mouse moves, i.e. it calculates the whole difference in navigation every time.
// Set navigation back to original (as it was when started panning) and cast a ray for the mouse
m_oEyePos = m_oPanOriginalEyePos;
m_oLook = m_oPanOriginalLook;
UpdateView();
D3DXVECTOR3 oRayVector;
D3DXVECTOR3 oNewPlaneZPos;
CalculateRayFromPixel(roMousePos, oNewPlaneZPos, oRayVector);
// Now intersect that ray (ray through the mouse pixel, using the original navigation)
// to hit the plane the object is in. Function uses a "line", so start at near plane
// and the line is of the length of the far plane away
D3DXVECTOR3 oNew3DPos;
D3DXPlaneIntersectLine(&oNew3DPos, &m_oPanObjectPlane, &oNewPlaneZPos, &(oRayVector * GetScene().GetFarPlane()));
// The eye/look difference /should/ be as simple as:
// const D3DXVECTOR3 oDiff = (m_oPanOriginalObjectPos - oNew3DPos);
// But that lags and is slow, ie the objects trail behind. I don't know why. What does
// work is to scale the from-to difference by the distance from the camera relative to
// the whole scene distance
const double dDist = D3DXVec3Length(&(oNew3DPos - m_oPanOriginalEyePos));
const double dTotalDist = GetScene().GetFarPlane() - GetScene().GetNearPlane();
const D3DXVECTOR3 oDiff = (m_oPanOriginalObjectPos - oNew3DPos) * (1.0 + (dDist / dTotalDist));
// Adjust the eye and look points by the same amount, so orthogonally changed
m_oEyePos = m_oPanOriginalEyePos + oDiff;
m_oLook = m_oPanOriginalLook + oDiff;
Diagram
This diagram is my working sketch for implementing this, and hopefully explains the above much more simply than the text. You can see a moving point, and where the camera has to move to keep that point at the same relative position. The clicked-on point (the ray from the camera to the object) is just to the right of the straight-ahead ray representing the center pixel.
The problem
But, as you've probably guessed, this doesn't work as I hope. What I wanted to see was the clicked-on object moving with the mouse cursor. What I actually see is that the object moves in the direction of the mouse, but not enough, i.e. it does not keep the clicked-on point under the cursor. Secondly, the movement flickers and jumps around, jittering by up to twenty or thirty pixels at times, then flickers back. If I replace oDiff with something constant, this does not occur.
Any ideas, or code samples showing how to implement this with DirectX (D3DX, DX matrix order, etc) will be gratefully read.
Edit
Commenter Nico below pointed out that when calculating the new position using the mouse cursor's moved position, I needed to use the original view matrix. Doing so helps a lot, and the objects stay near the mouse position. However, it's still not exact. What I've noticed is that at the center of the screen, it is exact; as the mouse moves further from the center, it gets out by more and more. This seemed to change based on how far away the object was, too. By pure 'I have no idea what I'm doing' guesswork, I scaled this by a factor of the near/far plane and how far away the object was, and this brings it very close to the mouse cursor, but still a few pixels away (1 to, say, 30 at the extreme edge of the screen, which is enough to make it feel wrong.)
Here's how I solve this problem.
float fieldOfView = 45.0f;
float halfFOV = (fieldOfView / 2.0f) * (DEGREES_TO_RADIANS);
float distanceToObject = // compute the world space distance from the camera to the object you want to pan
float projectionToWorldScale = distanceToObject * tan( halfFOV );
Vector mouseDeltaInScreenSpace = // the delta mouse in pixels that we want to pan
Vector mouseDeltaInProjectionSpace = Vector( mouseDeltaInScreenSpace.x * 2 / windowPixelSizeX, mouseDeltaInScreenSpace.y * 2 / windowPixelSizeY ); // ( the "*2" is because the projection space is from -1 to 1)
// go from normalized device coordinate space to world space (at origin)
Vector cameraDelta = -mouseDeltaInProjectionSpace * projectionToWorldScale;
// now translate your camera by "cameraDelta".
Note this works for a field-of-view aspect ratio of 1; I think you would have to break the scale up into separate x and y components if the vertical field of view were different from the horizontal field of view.
Also, you mentioned a "look at" vector. I'm not sure how my math would need to change for that, since my camera is always looking straight down the z-axis.
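A hedged guess at applying that cameraDelta to the eye/look camera in the question (cameraDelta is in camera-aligned axes here, so it has to be pushed through the camera's right and up vectors first; oUpVector is the same constant as in the question, the rest of the names are the question's):
// build the camera's right and up axes from the view direction (left-handed)
const D3DXVECTOR3 oUpVector(0.0f, 1.0f, 0.0f);
D3DXVECTOR3 oForward = m_oLook - m_oEyePos;
D3DXVec3Normalize(&oForward, &oForward);
D3DXVECTOR3 oRight, oUp;
D3DXVec3Cross(&oRight, &oUpVector, &oForward);
D3DXVec3Normalize(&oRight, &oRight);
D3DXVec3Cross(&oUp, &oForward, &oRight);
// move eye and look by the same world-space offset so the view direction is unchanged
D3DXVECTOR3 oWorldDelta = oRight * cameraDelta.x + oUp * cameraDelta.y;
m_oEyePos += oWorldDelta;
m_oLook += oWorldDelta;
UpdateView();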
One problem is your calculation of the new 3D position. I am not sure if this is the root cause, but you might try it. If it doesn't help, just post a comment.
The problem is that your offset vector is not parallel to the znear plane. This is because the two rays are not parallel. Therefore, if they have the same length beyond znear, the distances of their end points to the znear plane cannot be equal.
You can calculate the offset vector with the intercept theorem (theorem of intersecting lines). If zNearA and zNearB are the intersection points of the znear plane with ray A and ray B respectively, then the theorem states:
Length(original_position - cam_position) / Length(offset_vector) = Length(zNearA - cam_position) / Length(zNearB - zNearA)
And therefore
offset_vector = Length(original_position - cam_position) / Length(zNearA - cam_position) * (zNearB - zNearA)
Then you can be sure to move on a line that is parallel to the znear plane.
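In the question's D3DX variables that would look roughly like this (zNearA is m_oPanPlaneZ0StartPos, zNearB is oNewPlaneZPos; the final sign convention follows the oDiff in your code and is an assumption):
// intercept theorem: scale the near-plane displacement up to the object's depth
D3DXVECTOR3 oNearDelta = oNewPlaneZPos - m_oPanPlaneZ0StartPos; // zNearB - zNearA
D3DXVECTOR3 oToObject = m_oPanOriginalObjectPos - m_oPanOriginalEyePos;
D3DXVECTOR3 oToNearA = m_oPanPlaneZ0StartPos - m_oPanOriginalEyePos;
float fScale = D3DXVec3Length(&oToObject) / D3DXVec3Length(&oToNearA);
D3DXVECTOR3 oOffset = oNearDelta * fScale; // parallel to the znear plane
// pan by moving eye and look against the mouse movement
m_oEyePos = m_oPanOriginalEyePos - oOffset;
m_oLook = m_oPanOriginalLook - oOffset;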
Just try it out and see if it helps.