SKNode scaling from the touched point

I have added a UIPinchGestureRecognizer to my scene.view to scale my content. I actually scale the parent node where all my visible content resides. But I have a problem with the scaling point: the node scales from the lower-left corner. That's definitely not what I want. Do I have to write lots of code to be able to scale from the point where the pinch occurs? Could you please give some hints as to which way to follow?

I have been working on the same problem and my solution is shown below. Not sure if it is the best way to do it, but so far it seems to work. I'm using this code to zoom in and out of an SKNode that has several SKSpriteNode children. The children all move and scale with the SKNode as desired. The anchor point for the scaling is the location of the pinch gesture. The parent SKScene and other SKNodes in the scene are not affected. All of the work takes place during recognizer.state == UIGestureRecognizerStateChanged.
// instance variables of MyScene.
SKNode *_mySkNode;
UIPinchGestureRecognizer *_pinchGestureRecognizer;
- (void)didMoveToView:(SKView *)view
{
    _pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handleZoomFrom:)];
    [[self view] addGestureRecognizer:_pinchGestureRecognizer];
}
// Method that is called by my UIPinchGestureRecognizer.
- (void)handleZoomFrom:(UIPinchGestureRecognizer *)recognizer
{
    CGPoint anchorPoint = [recognizer locationInView:recognizer.view];
    anchorPoint = [self convertPointFromView:anchorPoint];

    if (recognizer.state == UIGestureRecognizerStateBegan) {
        // No code needed for zooming...
    } else if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint anchorPointInMySkNode = [_mySkNode convertPoint:anchorPoint fromNode:self];

        [_mySkNode setScale:(_mySkNode.xScale * recognizer.scale)];

        CGPoint mySkNodeAnchorPointInScene = [self convertPoint:anchorPointInMySkNode fromNode:_mySkNode];
        CGPoint translationOfAnchorInScene = CGPointSubtract(anchorPoint, mySkNodeAnchorPointInScene);

        _mySkNode.position = CGPointAdd(_mySkNode.position, translationOfAnchorInScene);

        recognizer.scale = 1.0;
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
        // No code needed here for zooming...
    }
}
The following are helper functions that were used above. They are from the Ray Wenderlich book on Sprite Kit.
SKT_INLINE CGPoint CGPointAdd(CGPoint point1, CGPoint point2) {
    return CGPointMake(point1.x + point2.x, point1.y + point2.y);
}

SKT_INLINE CGPoint CGPointSubtract(CGPoint point1, CGPoint point2) {
    return CGPointMake(point1.x - point2.x, point1.y - point2.y);
}

SKT_INLINE GLKVector2 GLKVector2FromCGPoint(CGPoint point) {
    return GLKVector2Make(point.x, point.y);
}

SKT_INLINE CGPoint CGPointFromGLKVector2(GLKVector2 vector) {
    return CGPointMake(vector.x, vector.y);
}

SKT_INLINE CGPoint CGPointMultiplyScalar(CGPoint point, CGFloat value) {
    return CGPointFromGLKVector2(GLKVector2MultiplyScalar(GLKVector2FromCGPoint(point), value));
}

I have translated ninefifteen's solution to Swift for pinch gestures. I spent a couple of days trying to get this to work on my own. Thank goodness for ninefifteen's Obj-C post! Here is the Swift version that appears to be working for me.
func scaleExperiment(_ sender: UIPinchGestureRecognizer) {
    var anchorPoint = sender.location(in: sender.view)
    anchorPoint = self.convertPoint(fromView: anchorPoint)
    let anchorPointInMySkNode = _mySkNode.convert(anchorPoint, from: self)
    _mySkNode.setScale(_mySkNode.xScale * sender.scale)
    let mySkNodeAnchorPointInScene = self.convert(anchorPointInMySkNode, from: _mySkNode)
    let translationOfAnchorInScene = (x: anchorPoint.x - mySkNodeAnchorPointInScene.x, y: anchorPoint.y - mySkNodeAnchorPointInScene.y)
    _mySkNode.position = CGPoint(x: _mySkNode.position.x + translationOfAnchorInScene.x, y: _mySkNode.position.y + translationOfAnchorInScene.y)
    sender.scale = 1.0
}
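The snippet above is only the handler; the recognizer still has to be added to the view. A minimal sketch of the wiring, assuming the scene owns _mySkNode as in the Obj-C answer (in Swift 4 and later the handler also needs to be marked @objc so it can be referenced from #selector):

// Hypothetical wiring in the SKScene subclass; scaleExperiment(_:) is assumed to be marked @objc.
override func didMove(to view: SKView) {
    let pinchRecognizer = UIPinchGestureRecognizer(target: self,
                                                   action: #selector(scaleExperiment(_:)))
    view.addGestureRecognizer(pinchRecognizer)
}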

Can't zoom. I don't know why, but the main problem is those SKT_INLINE functions. I've googled them and didn't find anything about them... The problem is that when I copy/paste them into my project the compiler tells me I have to add a ";" right after them. I wonder if that's the reason that I can't zoom.

In Swift 4, my SKScene adds the UIPinchGestureRecognizer to the view but, for reasons not relevant here, passes handling of the pinch gesture off to one of its SKNode children that is created in the scene's init(). Anyhow, this is ninefifteen's answer from the perspective of what s/he calls _mySkNode. It also includes a little code to limit the zoom, and it does not use the convenience functions listed at the bottom of his post. The @objc part of the declaration allows the function to be used in #selector().
Here is what is in my SKScene:
override func didMove(to view: SKView) {
    let pinchRecognizer: UIPinchGestureRecognizer = UIPinchGestureRecognizer(target: self.grid, action: #selector(self.grid.pinchZoomGrid))
    self.view!.addGestureRecognizer(pinchRecognizer)
}
And this is the relevant section in my SKNode:
// Pinch Management
@objc func pinchZoomGrid(_ recognizer: UIPinchGestureRecognizer) {
    var anchorPoint: CGPoint = recognizer.location(in: recognizer.view)
    anchorPoint = self.scene!.convertPoint(fromView: anchorPoint)
    if recognizer.state == .began {
        // No zoom code
    } else if recognizer.state == .changed {
        let anchorPointInGrid = self.convert(anchorPoint, from: self.scene!)
        // Start section that limits the zoom
        if recognizer.scale < 1.0 {
            if self.xScale * recognizer.scale < 0.6 {
                self.setScale(0.6)
            } else {
                self.setScale(self.xScale * recognizer.scale)
            }
        } else if recognizer.scale > 1.0 {
            if self.xScale * recognizer.scale > 1.5 {
                self.setScale(1.5)
            } else {
                self.setScale(self.xScale * recognizer.scale)
            }
        }
        // End section that limits the zoom
        let gridAnchorPointInScene = self.scene!.convert(anchorPointInGrid, from: self)
        let translationOfAnchorPointInScene = CGPoint(x: anchorPoint.x - gridAnchorPointInScene.x,
                                                      y: anchorPoint.y - gridAnchorPointInScene.y)
        self.position = CGPoint(x: self.position.x + translationOfAnchorPointInScene.x,
                                y: self.position.y + translationOfAnchorPointInScene.y)
        recognizer.scale = 1.0
    } else if recognizer.state == .ended {
        // No zoom code
    }
}
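As a side note (not part of the original answer), the zoom-limiting section can be collapsed into a single clamp. A sketch using the same 0.6/1.5 limits, written as a small helper that could live on the node:

// Clamp the proposed scale to the same limits used above (0.6...1.5).
func clampedScale(for gestureScale: CGFloat) -> CGFloat {
    return min(max(xScale * gestureScale, 0.6), 1.5)
}
// Inside the .changed branch this would replace the whole limiting section:
// setScale(clampedScale(for: recognizer.scale))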

Related

What CALayer draw(in ctx:) coordinates should be used to draw at the correct point with the correct scale?

I have an NSView which uses CALayers to create content and I want to generate PDF output in vector format. Firstly, is this possible? Secondly, what is the logical coordinate system that Apple refers to in their documentation? And lastly, how are you supposed to get CALayer's default background and border properties to draw? If I use these properties then they are not drawn into the context, and if I draw the border myself then this duplicates the property settings. It's a bit confusing as to when you should or shouldn't use these properties, or whether one should use CALayers at all when creating vector output, e.g. a PDF document.
The view has the following layers:
The view's layer, which is larger than the drawing area
A drawing layer which contains an image and sublayers
Sublayers which have some vector drawing including text
The NSView class and CALayer subclasses are listed below.
The view's layer and the drawing layer are drawn in the correct locations, but all the drawing layer's sublayers are drawn in the wrong place and are the wrong size.
I assume this is because the drawing layer has a transform applied to it and the drawing below is not taking that into account. Is this the issue, and how would I apply the layers' transforms to the CGContext - if that is the solution - to get things in the right place?
class DrawingView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        super.draw(dirtyRect)
        guard let context = NSGraphicsContext.current?.cgContext else {
            return
        }
        // In theory should generate vector graphics
        self.layer?.draw(in: context)
        // Or
        // Generates bitmap image
        //self.layer?.render(in: context)
    }
}
class DrawingLayer: CALayer {
    override func draw(in ctx: CGContext) {
        drawBorder(in: ctx)
        if let subs = self.sublayers {
            for sub in subs {
                sub.draw(in: ctx)
            }
        }
    }

    func drawBorder(in ctx: CGContext) {
        let rect = self.frame
        if let background = self.backgroundColor {
            ctx.setFillColor(background)
            ctx.fill(rect)
        }
        if let borders = self.borderColor {
            ctx.setStrokeColor(borders)
            ctx.setLineWidth(self.borderWidth)
            ctx.stroke(rect)
        }
    }
}
Here is my solution - using a different function to draw into the PDF. I am still not sure I understand why the system doesn't behave consistently when drawing to PDF and to the screen.
EDIT: Part of the problem, it seems, is that I was using the layer's frame for calculating size/position in the draw() functions, but when a layer gets transformed its frame changes size and shape to fit the original rotated rectangle! So tip: save your original rectangle elsewhere, or use bounds, and then your drawings won't suddenly go weird when you apply a rotation!
EDIT 2: So render(in:) works, "kind of" - some 3D transforms get ignored, such as CATransform3DMakeTranslation(). The rotation transform fortunately seems to work, so just make sure you set the correct origin to avoid the need for the translate transform.
When drawing into the PDF, the NSView's draw(dirtyRect:) function is called but the CALayers' draw(in:) functions don't automatically get called, so it seems you have to call them all the way up the hierarchy yourself, remembering to translate the coordinate system for each sublayer if they draw in their local coordinate systems. And be aware that this does not guarantee vector output in the PDF - it seems some things like text get rendered as bitmaps. If anyone has an explanation for how things behave and why, please explain.
In any event I have just fallen back to using the CALayer.render(in:) function, which causes the full layer hierarchy to be rendered - and still with some vector and some bitmapped elements!
class DrawingView: NSView {
    var drawingLayer: DrawingLayer? {
        return self.layer?.sublayers?[0] as? DrawingLayer
    }

    override func draw(_ dirtyRect: NSRect) {
        //super.draw(dirtyRect)
        guard let context = NSGraphicsContext.current?.cgContext else {
            return
        }
        // In theory should generate vector graphics
        self.drawingLayer?.drawPdf(in: context)
        // Or
        // Generates bitmap image
        //self.layer?.render(in: context)
    }
}
class DrawingLayer: CALayer {
    var fillColor: CGColor?
    var lineColor: CGColor?
    var lineWidth: CGFloat = 0.5
    var scale: CGFloat = 1.0

    // No need for this since we don't need to call draw for each sublayer -
    // that gets done for us!
    override func draw(in ctx: CGContext) {
        print("drawing DrawingLayer \(name ?? "")")
        // if let subs = self.sublayers {
        //     for sub in subs {
        //         sub.draw(in: ctx)
        //     }
        // }
    }

    func drawPdf(in ctx: CGContext) {
        ctx.saveGState()
        ctx.translateBy(x: self.frame.minX, y: self.frame.minY)
        if let subs = self.sublayers {
            for sub in subs {
                (sub as! NormalLayer).drawPdf(in: ctx)
            }
        }
        ctx.restoreGState()
    }
}
class NormalLayer: CALayer {
    var fillColor: CGColor?
    var lineColor: CGColor?
    var lineWidth: CGFloat = 0.5
    var scale: CGFloat = 1.0

    override func draw(in ctx: CGContext) {
        drawBorder(in: ctx)
    }

    func drawBorder(in ctx: CGContext) {
        let bds = self.bounds
        let rect = CGRect(x: bds.minX * scale, y: bds.minY * scale, width: bds.width * scale, height: bds.height * scale)
        print("drawing NormalLayer \(name ?? "") \(rect)")
        if let background = self.fillColor {
            ctx.setFillColor(background)
            ctx.fill(rect)
        }
        if let borders = self.lineColor {
            ctx.setStrokeColor(borders)
            ctx.setLineWidth(self.lineWidth)
            ctx.stroke(rect)
        }
    }

    func drawPdf(in ctx: CGContext) {
        ctx.saveGState()
        ctx.translateBy(x: self.frame.minX, y: self.frame.minY)
        drawBorder(in: ctx)
        ctx.restoreGState()
    }
}
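For completeness, one common way to drive the draw(_:) path above with a PDF context is NSView's dataWithPDF(inside:). A minimal sketch (the drawingView instance and the output path are assumptions, not part of the original code):

// Ask the view for PDF data; this calls draw(_:) with a PDF-backed CGContext.
let pdfData = drawingView.dataWithPDF(inside: drawingView.bounds)
try? pdfData.write(to: URL(fileURLWithPath: "/tmp/drawing.pdf"))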

Corner Radius Not Working?

I'm trying to set up a circular image view, and when I set the corner radius it does absolutely nothing. I've looked at various threads and solutions; none worked.
import UIKit

class AlterProfileViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        view?.backgroundColor = UIColor.white
        navigationItem.title = "Profile Settings"

        view.addSubview(selectProfileImage)

        // Constraints for all views will go here
        _ = selectProfileImage.anchor(view.centerYAnchor, left: view.leftAnchor, bottom: nil, right: nil, topConstant: -275, leftConstant: 135, bottomConstant: 0, rightConstant: 0, widthConstant: 100, heightConstant: 100)
        // selectProfileImage.layer.cornerRadius = selectProfileImage.frame.size.width/2

        // Do any additional setup after loading the view.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    // Where all buttons and labels will be added
    // Will just be a nice looking image view to be next to the profile settings button
    lazy var selectProfileImage: UIImageView = {
        let selectPicture = UIImageView()
        // self.selectProfileImage.layer.cornerRadius = self.selectProfileImage.frame.size.width / 2
        selectPicture.image = UIImage(named: "Paris")
        // selectPicture.layer.cornerRadius = selectPicture.frame.size.width / 2
        selectPicture.clipsToBounds = true
        selectPicture.translatesAutoresizingMaskIntoConstraints = false
        selectPicture.contentMode = .scaleAspectFill
        selectPicture.layer.shouldRasterize = true
        selectPicture.layer.masksToBounds = true
        return selectPicture
    }()
}
None of the methods seem to work; I'm actually kind of stumped right now.
Given that you lay out with Auto Layout, I would suspect the image view simply doesn't have the correct size yet when you calculate the radius. The image view is initialized with a size of 0,0, and thus the calculated radius will be 0 as well. Instead, move the radius calculation into viewDidLayoutSubviews, after calling super:
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    selectProfileImage.layer.cornerRadius = selectProfileImage.frame.size.width / 2
    selectProfileImage.layer.masksToBounds = true
}
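An alternative sketch (not from the original answer): move the radius calculation into the image view itself, so it stays circular no matter what size Auto Layout resolves.

// Hypothetical UIImageView subclass that keeps itself circular after every layout pass.
class CircularImageView: UIImageView {
    override func layoutSubviews() {
        super.layoutSubviews()
        layer.cornerRadius = bounds.width / 2
        layer.masksToBounds = true
    }
}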

How can I draw a mesh topology in SpriteKit using Swift?

I am trying to draw a mesh topology (graph) in SpriteKit using Swift. I am new to SpriteKit. Please give any suggestions or sample code.
So as I mentioned in the comments, I don't know how to make a perfect topology algorithm, however I did come up with something that can replicate your picture.
Basically you add a bunch of plots as SKNodes, then use their .position properties as the start and end points for a line drawn with CGPath. From that path, you can create an SKShapeNode(path: CGPath).
I also added a custom button in here that uses delegation, but it is completely separate from the actual "guts" of the topology. It's just a button.
// Overly complex way of creating a custom button in SpriteKit:
protocol DrawLinesDelegate: class { func drawLines() }
// Clickable UI element that will draw our lines:
class DrawLinesButton: SKLabelNode {

    weak var drawLinesDelegate: DrawLinesDelegate?

    init(text: String, drawLinesDelegate: DrawLinesDelegate) {
        super.init(fontNamed: "Chalkduster")
        self.drawLinesDelegate = drawLinesDelegate
        self.text = text
        isUserInteractionEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        print(drawLinesDelegate)
        drawLinesDelegate?.drawLines()
    }

    required init?(coder aDecoder: NSCoder) { fatalError("") }
    override init() { super.init() }
}
class GameScene: SKScene, DrawLinesDelegate {

    var plots = [SKShapeNode]()
    // var lines = [SKShapeNode]() // This may be useful in a better algorithm.
    var nodesDrawnFrom = [SKShapeNode]()

    var drawLinesButton: DrawLinesButton?

    func drawLine(from p1: CGPoint, to p2: CGPoint) {
        let linePath = CGMutablePath()
        linePath.move(to: p1)
        linePath.addLine(to: p2)

        let line = SKShapeNode(path: linePath)
        line.strokeColor = .red
        line.lineWidth = 5
        // lines.append(line) // Again, may be useful in a better algo.
        addChild(line)
    }

    func drawLines() {
        // Remove all lines: // Again again, may be useful in a better algorithm.
        /*
        for line in lines {
            line.removeFromParent()
            lines = []
        }
        */

        // The plot that we will draw from:
        var indexNode = SKShapeNode()

        // Find indexNode then draw from it:
        for plot in plots {

            // Find a new node to draw from (the indexNode):
            if nodesDrawnFrom.contains(plot) {
                continue
            } else {
                indexNode = plot
            }

            // Draw lines to every other node (from the indexNode):
            for plot in plots {
                if plot === indexNode {
                    continue
                } else {
                    drawLine(from: indexNode.position, to: plot.position)
                    nodesDrawnFrom.append(indexNode)
                }
            }
        }
    }

    func addNode(at location: CGPoint) {
        let plot = SKShapeNode(circleOfRadius: 50)
        plot.name = String(describing: UUID().uuid)
        plot.zPosition += 1
        plot.position = location
        plot.fillColor = .blue
        plots.append(plot)
        addChild(plot)
    }

    override func didMove(to view: SKView) {
        drawLinesButton = DrawLinesButton(text: "Draw Lines", drawLinesDelegate: self)
        drawLinesButton!.position.y = frame.minY + (drawLinesButton!.frame.size.height / 2)
        addChild(drawLinesButton!)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let location = touches.first!.location(in: self)
        addNode(at: location)
    }
}
Imperfect algo:
Here you can see that there are multiple lines being drawn from one node to another when there are mid-points (that is, something blocking a straight line):
You would need to add another entire section to the algorithm to check for this.
Another important thing to note is that SKShapeNode() is not very performant, and it would be best to transform all of these into SKSpriteNodes, or to bit-blit the entire scene onto a static texture.
However, having them all as shape nodes gives you the most flexibility, and is easiest to explain here. A rough sketch of the sprite conversion follows below.
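As a rough sketch of that conversion idea (assuming the code runs somewhere the SKView is reachable, e.g. inside the scene), each shape node can be rendered once into a texture and swapped for a sprite:

// Replace an already-drawn SKShapeNode with a textured SKSpriteNode for better performance.
// flattenToSprite(_:) is a hypothetical helper, not part of the answer above.
func flattenToSprite(_ shape: SKShapeNode) {
    guard let view = self.view, let texture = view.texture(from: shape) else { return }
    let sprite = SKSpriteNode(texture: texture)
    sprite.position = shape.position
    sprite.zPosition = shape.zPosition
    shape.removeFromParent()
    addChild(sprite)
}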

Unity - Making Camera Lock on To Enemy and stay behind player?

I am attempting to create a camera that moves with the Player but locks onto an enemy when the player clicks the lock-on button. The behaviour is almost working as I want it: the camera locks onto the target, and when the player stands in front of the target it works fine. However, as soon as the player runs past the target, the camera behaves strangely. It still looks at the Enemy, but it does not stay behind the player. Here is the code that dictates the behaviour:
if (MouseLock.MouseLocked && !lockedOn) { // MOUSE CONTROL:
    Data.Azimuth += Input.GetAxis("Mouse X") * OrbitSpeed.x;
    Data.Zenith += Input.GetAxis("Mouse Y") * OrbitSpeed.y;
} else if (lockedOn) { // LOCKON BEHAVIOUR:
    FindClosestEnemy();
}
if (Target != null) {
    lookAt += Target.transform.position;
    base.Update();
    gameObject.transform.position += lookAt;
    if (!lockedOn) {
        gameObject.transform.LookAt(lookAt);
    } else if (enemyTarget != null) {
        Vector3 pos1 = Target.transform.position;
        Vector3 pos2 = enemyTarget.transform.position;
        Vector3 dir = (pos2 - pos1).normalized;
        Vector3 perpDir = Vector3.Cross(dir, Vector3.right);
        Vector3 midPoint = (pos1 + pos2) / 2f;
        gameObject.transform.LookAt(midPoint);
    }
}
And the Code for Finding the nearest Enemy:
void FindClosestEnemy() {
    int numEnemies = 0;
    var hitColliders = Physics.OverlapSphere(transform.position, lockOnRange);
    foreach (var hit in hitColliders) {
        if (!hit || hit.gameObject == this.gameObject || hit.gameObject.tag == this.gameObject.tag) {
            continue;
        }
        if (hit.tag != "Enemy") // IF NOT AN ENEMY: DONT LOCK ON
            continue;
        var relativePoint = Camera.main.transform.InverseTransformPoint(hit.transform.position);
        if (relativePoint.z < 0) {
            continue;
        }
        numEnemies += 1;
        if (enemyTarget == null) {
            print("TARGET FOUND");
            enemyTarget = hit;
        }
    }
    if (numEnemies < 1) {
        lockedOn = false;
        enemyTarget = null;
    }
}
As I said, the behaviour almost works as expected, however I need the camera to stay behind the player whilst locked on, and it must face the enemy/midPoint between the enemy and player. How can this be done? Thank you for your time.
To clarify your intent: you want to lock the position relative to the target (player), whilst setting the camera rotation to look at either the target or a secondary target (enemy)? And your current code performs the rotation correctly but the positioning is buggy?
The easiest way to fix the camera relative to another object is to parent it in the scene. In your case you could add the camera as a child under the Player game object.
If you would rather not do this then look at your positioning code again:
lookAt += Target.transform.position;
base.Update ();
gameObject.transform.position += lookAt;
I don't know where lookAt comes from originally but to me this looks all wrong. Something called lookAt should have nothing to do with position and I doubt you want to += anything in the positioning code given that you want a fixed relative position. Try this instead:
public float followDistance; // class instance variable = distance back from target
public float followHeight;   // class instance variable = camera height
...
if (Target != null) {
    Vector3 newPos = Target.transform.position + (-Target.transform.forward * followDistance);
    newPos.y += followHeight;
    transform.position = newPos;
}
This should fix the positioning. Set the followDistance and followHeight to whatever you desire. Assuming your rotation code works this should fix the problem.

Unity3D - 2D object rotation based on touch moved (diff between touches)

I'm a newbie in Unity. I want to rotate my 2D object based on touch movement (moving a finger on the screen). I have this code:
void Update()
{
    if (Input.touches.Length > 0) {
        t = Input.GetTouch(0);
        if (t.phase == TouchPhase.Moved) {
            Vector3 movePos = new Vector3(t.position.x, t.position.y, 0);
            var objectPos = Camera.main.WorldToScreenPoint(transform.position);
            var dir = movePos - objectPos;
            transform.rotation = Quaternion.Euler(new Vector3(0f, 0f, Mathf.Atan2(dir.y, dir.x) * Mathf.Rad2Deg));
        }
    }
}
This code rotates the object based on the user's touch, but when I touch the screen again in another position and then move, it first rotates the whole object toward the new touch position and only then does the correct rotation based on the touch movement.
And I don't want to rotate the whole object based on the touch position, but only based on the touch movement. Do you understand me? Can you help me? How should I rewrite my code?
If I understand you correctly, try the code below:
private float turnSpeed = 5f;
private Vector2 movement;

void Update()
{
    Vector2 currentPosition = transform.position;
    if (Input.touchCount > 0)
    {
        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Moved)
        {
            Vector2 moveTowards = Camera.main.ScreenToWorldPoint(touch.position);
            movement = moveTowards - currentPosition;
            movement.Normalize();
        }
    }
    float targetAngle = Mathf.Atan2(movement.y, movement.x) * Mathf.Rad2Deg;
    transform.rotation = Quaternion.Slerp(transform.rotation, Quaternion.Euler(0, 0, targetAngle), turnSpeed * Time.deltaTime);
}
Let me know if this is what you want. Also, there is a complete sample here: https://github.com/joaokucera/unity-2d-object-rotation
Look into using deltaPosition instead of position on your touch. That should get you in the right direction.
var movedVector = t.deltaPosition;
Edit:
Here is a possible integration with your existing code. I don't have Unity on this PC so this is entirely untested. The main idea is you are getting a change in the finger position between frames. You then scale that change by move speed, and of course, the change in time between frame renders (delta time).
How the object rotates relative to that information is up to you. I just inserted the logic into your existing code.
float moveSpeed = 2.0f;

void Update()
{
    if (Input.touches.Length > 0) {
        t = Input.GetTouch(0);
        if (t.phase == TouchPhase.Moved) {
            var delta = t.deltaPosition * moveSpeed * Time.deltaTime;
            transform.rotation = Quaternion.Euler(new Vector3(0f, 0f, Mathf.Atan2(delta.y, delta.x) * Mathf.Rad2Deg));
        }
    }
}
