Game Maker - How do I create homing effect for a projectile? - game-maker

I'm trying to make a homing projectile for my bullet hell game, and I need to be able to calculate the angle between the target and the projectile relative to the projectile's angle (0 degrees would be the direction the projectile is pointing). Right now the angle calculation is absolute, done with point_direction, but the problem is that when the target is in the 4th sector the projectile starts steering the wrong way. Another issue is that if the projectile does a 180 degree turn while chasing the target (or moves down when fired by an enemy), the steering direction gets inverted. I have also tried the mp_potential_ functions, but their pathfinding is too "aggressive".
This is what my current code looks like:
if (instance_exists(obj_fighter1)) {
    var target;
    target = instance_nearest(x, y, obj_fighter1);
    if (target != noone) {
        var angle_to_target;
        angle_to_target = point_direction(x, y, target.x, target.y);
        if (angle_to_target < direction) {
            direction -= 2;
        }
        if (angle_to_target > direction) {
            direction += 2;
        }
    }
}
Hopefully this information is enough and is understandable.

Okay, a common Game Maker question. The routine I use is below. Looking at it, it could do with a bit of refactoring, but it does work.
var wantDir;
var currDir;
var directiondiff;
var maxTurn;

// want - this is your target direction
wantDir = argument0;
// max turn - this is the max number of degrees to turn
maxTurn = argument1;
// current - this is your current direction
currDir = direction;

if (wantDir >= (currDir + 180))
{
    currDir += 360;
}
else
{
    if (wantDir < (currDir - 180))
    {
        wantDir += 360;
    }
}

directiondiff = wantDir - currDir;

if (directiondiff < -maxTurn)
{
    directiondiff = -maxTurn;
}
if (directiondiff > maxTurn)
{
    directiondiff = maxTurn;
}

return directiondiff;
So you'd call this, and it'll return a value that you can add to your missile's direction. If you call the script scr_get_angle, your code might then look like this:
if (instance_exists(obj_fighter1)) {
    var target;
    target = instance_nearest(x, y, obj_fighter1);
    if (target != noone) {
        var angle_to_target;
        angle_to_target = point_direction(x, y, target.x, target.y);
        direction += scr_get_angle(angle_to_target, 2);
    }
}

Related

Realistic BOAT/SHIP movement and rotation (2d)

I want to move my boat to the position where I clicked, BUT with realistic movement and rotation:
(http://i.imgur.com/Pk8DOYP.gif)
Here is my code (attached to my boat GameObject):
Basically, when I click somewhere, it moves the boat until it reaches the point that I clicked in the first place (I simplified my code for you).
using UnityEngine;
using System.Collections;

public class BoatMovement : MonoBehaviour {

    private Vector3 targetPosition;
    private float speed = 10f;
    private bool isMoving;

    void Update() {
        if (!isMoving && Input.GetMouseButton(0)) {
            targetPosition = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            isMoving = true;
        }
        if (isMoving) {
            moveToPosition();
        }
    }

    void moveToPosition() {
        transform.position = Vector3.MoveTowards(transform.position, new Vector3(targetPosition.x, targetPosition.y, 0f), speed * Time.deltaTime);
        if (transform.position.x == targetPosition.x && transform.position.y == targetPosition.y) {
            isMoving = false;
        }
    }
}
After some research and attempts, I didn't find a way to do what I want.
Thanks for your help.
There are two parts to this problem, which should be kept completely separate from each other; at no time should the calculation of one interfere with the calculation of the other.
The first is the forward movement of the ship, which can be achieved in two ways:
If you are using a rigidbody
void Propel()
{
    float speed = 150f;
    Rigidbody rb = GetComponent<Rigidbody>();
    Transform transform = GetComponent<Transform>();
    rb.AddForce(transform.forward * speed * Time.deltaTime); // <--- I always forget if it's better to use transform.forward or Vector3.forward. Try both
}
If no rigidbody
void Propel()
{
    float speed = 150f;
    Transform transform = GetComponent<Transform>();
    transform.Translate(transform.forward * speed * Time.deltaTime, Space.World);
}
Now the second part is the turning of the ship, which can also be achieved in two ways:
With rigidbody
IEnumerator TurnShip(Vector3 endAngle)
{
    float threshold = float.Epsilon;
    float turnSpeed = 150f;
    Rigidbody rb = GetComponent<Rigidbody>();
    while (Vector3.Angle(transform.forward, endAngle) > threshold)
    {
        rb.AddTorque(transform.up * turnSpeed * Time.deltaTime);
        yield return null;
    }
}
Without rigidbody
IEnumerator TurnShip(Vector3 endAngle)
{
    float threshold = float.Epsilon;
    float turnSpeed = 150f;
    float step = turnSpeed * Time.deltaTime;
    while (Vector3.Angle(transform.forward, endAngle) > threshold)
    {
        Vector3 newDir = Vector3.RotateTowards(transform.forward, endAngle, step, 0.0f);
        transform.rotation = Quaternion.LookRotation(newDir);
        yield return null;
    }
}
Then of course, the IEnumerators are called like so:
StartCoroutine(TurnShip(new Vector3(12f, 1f, 23f)));
A couple of things to note:
This is pseudocode; I haven't tested any of it, so making it work is up to you. I'm only providing you with the correct path to follow.
All the variables at the beginnings of the methods are meant to be global variables, so declare them where you please.
I've used a Transform called target here, just replace it with the Vector from your mouse click.
public Transform target;

private void Update()
{
    transform.position += transform.forward * Time.deltaTime;
    Vector3 targetDir = target.position - transform.position;
    float step = Time.deltaTime / Mathf.PI;
    Vector3 newDir = Vector3.RotateTowards(transform.forward, targetDir, step, 0.0f);
    transform.rotation = Quaternion.LookRotation(newDir);
}
The key to this is the RotateTowards function and calculating a new vector to look towards on each frame based on the current position.
If your boats have some kind of rotation speed stat then modify the float called step to suit. If they have a speed stat then modify the transform.position += transform.forward * Time.deltaTime with some multiplier.
I guess you'll want to put some conditions in as to when to stop the movement and rotation, perhaps using a coroutine.
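For example, a rough, untested sketch that folds in hypothetical moveSpeed and rotateSpeed stats and a simple stop condition (the field names are placeholders, not anything from the question) might look like this:

using UnityEngine;

public class BoatController : MonoBehaviour // hypothetical component name
{
    public Transform target;        // replace with the Vector3 from your mouse click
    public float moveSpeed = 5f;    // hypothetical forward speed stat
    public float rotateSpeed = 2f;  // hypothetical turn speed stat (radians per second for RotateTowards)

    private void Update()
    {
        // Stop moving and rotating once we are close enough to the target.
        if (Vector3.Distance(transform.position, target.position) < 0.1f)
            return;

        // Forward movement scaled by the speed stat.
        transform.position += transform.forward * moveSpeed * Time.deltaTime;

        // Turn gradually toward the target, scaled by the rotation speed stat.
        Vector3 targetDir = target.position - transform.position;
        float step = rotateSpeed * Time.deltaTime;
        Vector3 newDir = Vector3.RotateTowards(transform.forward, targetDir, step, 0.0f);
        transform.rotation = Quaternion.LookRotation(newDir);
    }
}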
Declare
private Vector3 targetPosition;
private float targetDistance;
Then at the start of your move, do:
targetPosition = Camera.main.ScreenToWorldPoint(Input.mousePosition);
targetDistance = Vector3.Distance(targetPosition, transform.position);
While moving, call this (in your Update loop, for example):
turnSpeed = 0.062f * targetDistance;
moveSpeed = 35f * targetDistance;
// Important /!\ : you need to add linear drag on your rigidbody or it will keep adding force
GetComponent<Rigidbody>().AddForce(transform.up * moveSpeed * Time.deltaTime);
var newRotation = Quaternion.LookRotation(transform.position - targetPosition, Vector3.forward);
newRotation.x = 0f;
newRotation.y = 0f;
transform.rotation = Quaternion.Slerp(transform.rotation, newRotation, Time.deltaTime * turnSpeed);
Finally, add a linear drag value to your rigidbody (1 would be ok).
Thanks to maksymiuk who helped me and took a lot of his time trying to figure out a solution.

Unity - Making Camera Lock on To Enemy and stay behind player?

I am attempting to create a Camera that moves with the Player but locks onto an enemy when the player clicks the lock-on button. The behaviour is almost working as I want it: the camera locks onto the target, and when the player stands in front of the target it works fine. However, as soon as the player runs past the target, the camera behaves strangely. It still looks at the Enemy, but it does not stay behind the player. Here is the code that dictates the behaviour:
if (MouseLock.MouseLocked && !lockedOn) { // MOUSE CONTROL:
    Data.Azimuth += Input.GetAxis("Mouse X") * OrbitSpeed.x;
    Data.Zenith += Input.GetAxis("Mouse Y") * OrbitSpeed.y;
} else if (lockedOn) { // LOCKON BEHAVIOUR:
    FindClosestEnemy();
}
if (Target != null) {
    lookAt += Target.transform.position;
    base.Update();
    gameObject.transform.position += lookAt;
    if (!lockedOn) {
        gameObject.transform.LookAt(lookAt);
    } else if (enemyTarget != null) {
        Vector3 pos1 = Target.transform.position;
        Vector3 pos2 = enemyTarget.transform.position;
        Vector3 dir = (pos2 - pos1).normalized;
        Vector3 perpDir = Vector3.Cross(dir, Vector3.right);
        Vector3 midPoint = (pos1 + pos2) / 2f;
        gameObject.transform.LookAt(midPoint);
    }
}
And the code for finding the nearest enemy:
void FindClosestEnemy() {
    int numEnemies = 0;
    var hitColliders = Physics.OverlapSphere(transform.position, lockOnRange);
    foreach (var hit in hitColliders) {
        if (!hit || hit.gameObject == this.gameObject || hit.gameObject.tag == this.gameObject.tag) {
            continue;
        }
        if (hit.tag != "Enemy") // IF NOT AN ENEMY: DONT LOCK ON
            continue;
        var relativePoint = Camera.main.transform.InverseTransformPoint(hit.transform.position);
        if (relativePoint.z < 0) {
            continue;
        }
        numEnemies += 1;
        if (enemyTarget == null) {
            print("TARGET FOUND");
            enemyTarget = hit;
        }
    }
    if (numEnemies < 1) {
        lockedOn = false;
        enemyTarget = null;
    }
}
As I said, the behaviour almost works as expected; however, I need the camera to stay behind the player whilst locked on, and it must face the enemy/midPoint between the enemy and the player. How can this be done? Thank you for your time.
To clarify your intent: you want to lock the position relative to the target (player), whilst setting the camera rotation to look at either the target or a secondary target (enemy)? And your current code performs the rotation correctly but the positioning is buggy?
The easiest way to fix the camera relative to another object is to parent it in the scene. In your case you could add the camera as a child under the Player game object.
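For instance, a minimal, untested sketch of doing that parenting from code (the player field is just a placeholder for however you reference the Player; you can equally drag the camera under the Player in the Hierarchy):

using UnityEngine;

public class CameraParenting : MonoBehaviour // hypothetical helper component
{
    public Transform player; // assign the Player in the Inspector

    void Start()
    {
        // Make the camera a child of the player; worldPositionStays = true keeps
        // the camera's current offset, so position it behind the player first.
        transform.SetParent(player, true);
    }
}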
If you would rather not do this then look at your positioning code again:
lookAt += Target.transform.position;
base.Update ();
gameObject.transform.position += lookAt;
I don't know where lookAt comes from originally but to me this looks all wrong. Something called lookAt should have nothing to do with position and I doubt you want to += anything in the positioning code given that you want a fixed relative position. Try this instead:
public float followDistance; // class instance variable = distance back from target
public float followHeight;   // class instance variable = camera height
...
if (Target != null) {
    Vector3 newPos = Target.transform.position + (-Target.transform.forward * followDistance);
    newPos.y += followHeight;
    transform.position = newPos;
}
This should fix the positioning. Set the followDistance and followHeight to whatever you desire. Assuming your rotation code works this should fix the problem.
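If you also want the camera to face the enemy (or the midPoint from your original code) while sitting behind the player, an untested way to combine the two, reusing your Target, enemyTarget and lockedOn fields, could be something like this in place of the block above:

if (Target != null) {
    // Position: fixed offset behind the player, as above.
    Vector3 newPos = Target.transform.position + (-Target.transform.forward * followDistance);
    newPos.y += followHeight;
    transform.position = newPos;

    // Rotation: look at the midpoint between player and enemy while locked on,
    // otherwise look at the player.
    if (lockedOn && enemyTarget != null) {
        Vector3 midPoint = (Target.transform.position + enemyTarget.transform.position) / 2f;
        transform.LookAt(midPoint);
    } else {
        transform.LookAt(Target.transform);
    }
}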

Unity3D - 2D object rotation based on touch moved (diff between touches)

I'm a newbie in Unity. I want to rotate my 2D object based on touch movement (a finger moving on the screen). I have this code:
void Update()
{
    if (Input.touches.Length > 0) {
        t = Input.GetTouch(0);
        if (t.phase == TouchPhase.Moved) {
            Vector3 movePos = new Vector3(t.position.x, t.position.y, 0);
            var objectPos = Camera.main.WorldToScreenPoint(transform.position);
            var dir = movePos - objectPos;
            transform.rotation = Quaternion.Euler(new Vector3(0f, 0f, Mathf.Atan2(dir.y, dir.x) * Mathf.Rad2Deg));
        }
    }
}
This code rotates the object based on the touch, but when I touch the screen again in another position and then move, it first rotates the whole object toward the new touch position and only then does the correct rotation based on the touch movement.
I don't want to rotate the object based on the touch position; I want to rotate it only based on the touch movement. Do you understand me? Can you help me? How should I rewrite my code?
If I understand you correctly, try the code below:
private float turnSpeed = 5f;
private Vector2 movement;

void Update()
{
    Vector2 currentPosition = transform.position;
    if (Input.touchCount > 0)
    {
        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Moved)
        {
            Vector2 moveTowards = Camera.main.ScreenToWorldPoint(touch.position);
            movement = moveTowards - currentPosition;
            movement.Normalize();
        }
    }
    float targetAngle = Mathf.Atan2(movement.y, movement.x) * Mathf.Rad2Deg;
    transform.rotation = Quaternion.Slerp(transform.rotation, Quaternion.Euler(0, 0, targetAngle), turnSpeed * Time.deltaTime);
}
Let me know if this is what you want. Also, there is a complete sample here: https://github.com/joaokucera/unity-2d-object-rotation
Look into using deltaPosition instead of position on your touch. That should get you in the right direction.
var movedVector = t.deltaPosition;
Edit:
Here is a possible integration with your existing code. I don't have Unity on this PC so this is entirely untested. The main idea is you are getting a change in the finger position between frames. You then scale that change by move speed, and of course, the change in time between frame renders (delta time).
How the object rotates relative to that information is up to you. I just inserted the logic into your existing code.
float moveSpeed = 2.0f;

void Update()
{
    if (Input.touches.Length > 0) {
        t = Input.GetTouch(0);
        if (t.phase == TouchPhase.Moved) {
            var delta = t.deltaPosition * moveSpeed * Time.deltaTime;
            transform.rotation = Quaternion.Euler(new Vector3(0f, 0f, Mathf.Atan2(delta.y, delta.x) * Mathf.Rad2Deg));
        }
    }
}
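If you want the rotation to be purely relative instead (the object rotates by an amount proportional to how far the finger moved, rather than turning to face the swipe direction), an untested variation on the same deltaPosition idea is to feed the horizontal delta into transform.Rotate; the rotateSpeed value is just a placeholder to tune:

using UnityEngine;

public class TouchSpinner : MonoBehaviour // hypothetical component name
{
    public float rotateSpeed = 0.2f; // degrees per pixel of finger movement; tune to taste

    void Update()
    {
        if (Input.touchCount > 0) {
            Touch t = Input.GetTouch(0);
            if (t.phase == TouchPhase.Moved) {
                // Rotate around Z by an amount proportional to the horizontal finger movement.
                transform.Rotate(0f, 0f, -t.deltaPosition.x * rotateSpeed);
            }
        }
    }
}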

SKNode scale from the touched point

I have added a UIPinchGestureRecognizer to my scene.view to scale my content. I actually scale the parent node where all my visible content resides. But I have a problem with the scaling point. The thing is, the node scales from the lower-left corner. That's definitely not what I want. Do I have to write lots of code to be able to scale from the point where the pinch occurs? Could you please give some hints as to which way to follow?
I have been working on the same problem and my solution is shown below. Not sure if it is the best way to do it, but so far it seems to work. I'm using this code to zoom in and out of an SKNode that has several SKSpriteNode children. The children all move and scale with the SKNode as desired. The anchor point for the scaling is the location of the pinch gesture. The parent SKScene and other SKNodes in the scene are not affected. All of the work takes place during recognizer.state == UIGestureRecognizerStateChanged.
// instance variables of MyScene.
SKNode *_mySkNode;
UIPinchGestureRecognizer *_pinchGestureRecognizer;

- (void)didMoveToView:(SKView *)view
{
    _pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handleZoomFrom:)];
    [[self view] addGestureRecognizer:_pinchGestureRecognizer];
}

// Method that is called by my UIPinchGestureRecognizer.
- (void)handleZoomFrom:(UIPinchGestureRecognizer *)recognizer
{
    CGPoint anchorPoint = [recognizer locationInView:recognizer.view];
    anchorPoint = [self convertPointFromView:anchorPoint];
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        // No code needed for zooming...
    } else if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint anchorPointInMySkNode = [_mySkNode convertPoint:anchorPoint fromNode:self];
        [_mySkNode setScale:(_mySkNode.xScale * recognizer.scale)];
        CGPoint mySkNodeAnchorPointInScene = [self convertPoint:anchorPointInMySkNode fromNode:_mySkNode];
        CGPoint translationOfAnchorInScene = CGPointSubtract(anchorPoint, mySkNodeAnchorPointInScene);
        _mySkNode.position = CGPointAdd(_mySkNode.position, translationOfAnchorInScene);
        recognizer.scale = 1.0;
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
        // No code needed here for zooming...
    }
}
The following are helper functions that were used above. They are from the Ray Wenderlich book on Sprite Kit.
SKT_INLINE CGPoint CGPointAdd(CGPoint point1, CGPoint point2) {
    return CGPointMake(point1.x + point2.x, point1.y + point2.y);
}

SKT_INLINE CGPoint CGPointSubtract(CGPoint point1, CGPoint point2) {
    return CGPointMake(point1.x - point2.x, point1.y - point2.y);
}

SKT_INLINE GLKVector2 GLKVector2FromCGPoint(CGPoint point) {
    return GLKVector2Make(point.x, point.y);
}

SKT_INLINE CGPoint CGPointFromGLKVector2(GLKVector2 vector) {
    return CGPointMake(vector.x, vector.y);
}

SKT_INLINE CGPoint CGPointMultiplyScalar(CGPoint point, CGFloat value) {
    return CGPointFromGLKVector2(GLKVector2MultiplyScalar(GLKVector2FromCGPoint(point), value));
}
I have translated ninefifteen's solution for Swift and Pinch Gestures. I spent a couple days trying to get this to work on my own. Thank goodness for ninefifteen's Obj-C post! Here is the Swift version that appears to be working for me.
func scaleExperiment(_ sender: UIPinchGestureRecognizer) {
    var anchorPoint = sender.location(in: sender.view)
    anchorPoint = self.convertPoint(fromView: anchorPoint)
    let anchorPointInMySkNode = _mySkNode.convert(anchorPoint, from: self)
    _mySkNode.setScale(_mySkNode.xScale * sender.scale)
    let mySkNodeAnchorPointInScene = self.convert(anchorPointInMySkNode, from: _mySkNode)
    let translationOfAnchorInScene = (x: anchorPoint.x - mySkNodeAnchorPointInScene.x, y: anchorPoint.y - mySkNodeAnchorPointInScene.y)
    _mySkNode.position = CGPoint(x: _mySkNode.position.x + translationOfAnchorInScene.x, y: _mySkNode.position.y + translationOfAnchorInScene.y)
    sender.scale = 1.0
}
I can't zoom and I don't know why, but the main problem is those SKT_INLINE functions. I've googled them and didn't find anything about them... The problem is that when I copy/paste them into my project, the compiler tells me I have to add a ";" right after them. I wonder if that's the reason I can't zoom.
In Swift 4, my SKScene adds the UIPinchGestureRecognizer to the view but passes handling of the pinch gesture off to one of its SKNode children, which is created in the scene's init(), for reasons not relevant here. Anyhow, this is ninefifteen's answer from the perspective of what s/he calls _mySkNode. It also includes a little code to limit the zoom and does not use the convenience functions listed at the bottom of his post. The @objc part of the declaration allows the function to be used in #selector().
Here is what is in my SKScene:
override func didMove(to view: SKView) {
    let pinchRecognizer: UIPinchGestureRecognizer = UIPinchGestureRecognizer(target: self.grid, action: #selector(self.grid.pinchZoomGrid))
    self.view!.addGestureRecognizer(pinchRecognizer)
}
And this is the relevant section in my SKNode:
// Pinch Management
@objc func pinchZoomGrid(_ recognizer: UIPinchGestureRecognizer) {
    var anchorPoint: CGPoint = recognizer.location(in: recognizer.view)
    anchorPoint = self.scene!.convertPoint(fromView: anchorPoint)
    if recognizer.state == .began {
        // No zoom code
    } else if recognizer.state == .changed {
        let anchorPointInGrid = self.convert(anchorPoint, from: self.scene!)
        // Start section that limits the zoom
        if recognizer.scale < 1.0 {
            if self.xScale * recognizer.scale < 0.6 {
                self.setScale(0.6)
            } else {
                self.setScale(self.xScale * recognizer.scale)
            }
        } else if recognizer.scale > 1.0 {
            if self.xScale * recognizer.scale > 1.5 {
                self.setScale(1.5)
            } else {
                self.setScale(self.xScale * recognizer.scale)
            }
        }
        // End section that limits the zoom
        let gridAnchorPointInScene = self.scene!.convert(anchorPointInGrid, from: self)
        let translationOfAnchorPointInScene = CGPoint(x: anchorPoint.x - gridAnchorPointInScene.x,
                                                      y: anchorPoint.y - gridAnchorPointInScene.y)
        self.position = CGPoint(x: self.position.x + translationOfAnchorPointInScene.x,
                                y: self.position.y + translationOfAnchorPointInScene.y)
        recognizer.scale = 1.0
    } else if recognizer.state == .ended {
        // No zoom code
    }
}

d3js: Use view port center for zooming focal point

We are trying to implement zoom buttons on top of a map created in D3 - essentially as it works on Google maps. The zoom event can be dispatched programmatically using
d3ZoomBehavior.scale(myNewScale);
d3ZoomBehavior.event(myContainer);
and the map will zoom using the current translation for the view. When using zoom buttons, the focal point (zoom center) is no longer the translation but the center of the view port. For zooming with the scroll wheel we have the option of using zoom.center, but this apparently has no effect when dispatching your own event.
I'm confused as to how to calculate the next translation, taking the new scaling factor and the view port center into account.
Given that I know the current scale, the next scale, the current translation and the dimensions of the map view port, how do I calculate the next translation so that the center of the view port does not change?
I've recently had to do the same thing, and I've got a working example up here http://bl.ocks.org/linssen/7352810. Essentially it uses a tween to smoothly zoom to the desired target scale as well as translating across by calculating the required difference after zooming to centre.
I've included the gist of it below, but it's probably worth looking at the working example to get the full effect.
html
<button id="zoom_in">+</button>
<button id="zoom_out">-</button>
js
var zoom = d3.behavior.zoom().scaleExtent([1, 8]).on("zoom", zoomed);

function zoomed() {
    svg.attr("transform",
        "translate(" + zoom.translate() + ")" +
        "scale(" + zoom.scale() + ")"
    );
}

function interpolateZoom (translate, scale) {
    var self = this;
    return d3.transition().duration(350).tween("zoom", function () {
        var iTranslate = d3.interpolate(zoom.translate(), translate),
            iScale = d3.interpolate(zoom.scale(), scale);
        return function (t) {
            zoom
                .scale(iScale(t))
                .translate(iTranslate(t));
            zoomed();
        };
    });
}

function zoomClick() {
    var clicked = d3.event.target,
        direction = 1,
        factor = 0.2,
        target_zoom = 1,
        center = [width / 2, height / 2],
        extent = zoom.scaleExtent(),
        translate = zoom.translate(),
        translate0 = [],
        l = [],
        view = {x: translate[0], y: translate[1], k: zoom.scale()};

    d3.event.preventDefault();
    direction = (this.id === 'zoom_in') ? 1 : -1;
    target_zoom = zoom.scale() * (1 + factor * direction);

    if (target_zoom < extent[0] || target_zoom > extent[1]) { return false; }

    translate0 = [(center[0] - view.x) / view.k, (center[1] - view.y) / view.k];
    view.k = target_zoom;
    l = [translate0[0] * view.k + view.x, translate0[1] * view.k + view.y];

    view.x += center[0] - l[0];
    view.y += center[1] - l[1];

    interpolateZoom([view.x, view.y], view.k);
}

d3.selectAll('button').on('click', zoomClick);
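To unpack the arithmetic in zoomClick: translate0 is the map-space point currently sitting under the view port center, l is where that point would land after rescaling with the old translation, and the final adjustment shifts the view so the point lands back on the center. Collapsed into one step, the new translation is translate' = center + (translate - center) * (newScale / oldScale), which is exactly the relation that keeps the view port center fixed while the scale changes.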
A more succinct version of Wil's solution:
var vis = d3.select('.vis');
var zoom = d3.behavior.zoom()...
var width = .., height = ..;

function zoomByFactor(factor) {
    var scale = zoom.scale();
    var extent = zoom.scaleExtent();
    var newScale = scale * factor;
    if (extent[0] <= newScale && newScale <= extent[1]) {
        var t = zoom.translate();
        var c = [width / 2, height / 2];
        zoom
            .scale(newScale)
            .translate(
                [c[0] + (t[0] - c[0]) / scale * newScale,
                 c[1] + (t[1] - c[1]) / scale * newScale])
            .event(vis.transition().duration(350));
    }
};

function zoomIn() { zoomByFactor(1.2); }
function zoomOut() { zoomByFactor(0.8); }
function zoomIn() { zoomByFactor(1.2); }
function zoomOut() { zoomByFactor(0.8); }
I've found this to be quite difficult to do in practice. The approach I've taken here is to simply create a mouse event that triggers the zoom when the zoom buttons are used. This event is created at the center of the map.
Here's the relevant code:
.on("click", function() {
var evt = document.createEvent("MouseEvents");
evt.initMouseEvent(
'dblclick', // in DOMString typeArg,
true, // in boolean canBubbleArg,
true, // in boolean cancelableArg,
window,// in views::AbstractView viewArg,
120, // in long detailArg,
width/2, // in long screenXArg,
height/2, // in long screenYArg,
width/2, // in long clientXArg,
height/2, // in long clientYArg,
0, // in boolean ctrlKeyArg,
0, // in boolean altKeyArg,
(by > 0 ? 0 : 1), // in boolean shiftKeyArg,
0, // in boolean metaKeyArg,
0, // in unsigned short buttonArg,
null // in EventTarget relatedTargetArg
);
this.dispatchEvent(evt);
});
The whole thing is a bit of a hack, but it works in practice and I've found this much easier than to calculate the correct center for every offset/zoom.

Resources