Looping ground tile as the character moves in Unity 2D

I am trying to create a ground tile that, once it is off screen, moves to the position in front of the other ground tile currently on screen.
I have barely tried anything in code that would be even slightly useful to add to this question. Could someone point me in the right direction so I can get this done without having to duplicate hundreds of tiles?
Thanks!

Attach this script to your Tile object.
using UnityEngine;

public class SpawnTiles : MonoBehaviour {
    private Material currentTile;
    public float speed;
    private float offset;

    // Use this for initialization
    void Start () {
        currentTile = GetComponent<Renderer>().material;
    }

    // Update is called once per frame
    void Update () {
        // Note: incrementing a fixed amount per frame makes the scroll speed frame-rate dependent.
        offset += 0.001f;
        currentTile.SetTextureOffset ("_MainTex", new Vector2 (offset * speed, 0));
    }
}
Edit
Is this what you want?
If yes, follow these steps.
Create > 3D Object > Quad (resize it as you want).
Select your tile sprite:
Texture Type = Texture
Wrap Mode = Repeat
Now create a new Material: Create > Material.
Select Shader = Unlit/Texture.
For the Texture, select your previous texture.
Drag and drop this Material onto the Quad object.
Adjust the tiling in the Inspector.
Result
Create a new script SpawnTiles and attach it to the Quad object.
using UnityEngine;

public class SpawnTiles : MonoBehaviour {
    public float speed;

    // Update is called once per frame
    void Update () {
        // Scroll the main texture offset over time; the speed is set in the Inspector.
        GetComponent<Renderer>().material.mainTextureOffset = new Vector2 (Time.time * speed, 0f);
    }
}
Adjust the tile movement speed in the Inspector.
Finally, rename the Quad object as you want.

If you only want to move the back tile to the front, you probably don't need object pooling, but I still highly suggest looking into it.
To detect whether the camera can see the tile, use Renderer.isVisible.
To move the tile, if all the tiles have the same dimensions, you can just increase its position.x by the amount you need; a rough sketch of this idea follows.
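The script below is only an illustration of that idea, not code from the original answer: the TileRecycler name and the tileCount field are hypothetical, and it assumes identical tiles laid out left to right along the x axis.
using UnityEngine;

// Attach one instance to each ground tile. When a tile has scrolled off screen
// behind the camera, it jumps forward by the combined width of all tiles,
// so a small set of tiles loops forever.
public class TileRecycler : MonoBehaviour {
    public int tileCount = 3; // how many tiles are in the loop (hypothetical value)

    private Renderer rend;
    private float tileWidth;

    void Start () {
        rend = GetComponent<Renderer>();
        tileWidth = rend.bounds.size.x;
    }

    void Update () {
        // Renderer.isVisible is true while any camera can see the renderer.
        // The position check makes sure only tiles behind the camera are recycled.
        if (!rend.isVisible && transform.position.x < Camera.main.transform.position.x) {
            transform.position += new Vector3(tileWidth * tileCount, 0f, 0f);
        }
    }
}
Note that isVisible also counts the Scene view camera in the editor, so this behaviour is easiest to verify in a standalone build or a maximized Game view.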

I have found one solution using Camera.WorldToViewportPoint().
Here is my script:
using UnityEngine;
using System.Collections;

public class groundTilingScript : MonoBehaviour {
    Transform ground;              // Reference to the transform
    Camera cam;                    // Reference to the main camera
    float groundWidth;             // Width of the ground tile, used to calculate the current right edge and the next placement x position
    private float nextXPos = 0.0f; // Store the next x position in a variable for easier reading

    // Use this for initialization
    void Start () {
        // Set up references
        ground = transform;
        cam = Camera.main;

        // Store the ground width (width of the ground tile)
        groundWidth = ground.GetComponent<Renderer> ().bounds.size.x;
    }

    // Update is called once per frame
    void Update () {
        // Use the right edge of the tile in WorldToViewportPoint so it doesn't use the middle of the ground as reference
        Vector3 boxRightPos = new Vector3 (ground.position.x + groundWidth / 2, ground.position.y, ground.position.z);

        // Store the viewport position of the ground
        Vector3 viewPos = cam.WorldToViewportPoint (boxRightPos);

        // If the ground tile is left of the camera viewport
        if (viewPos.x < 0) {
            // The gameObject is off screen: destroy it and re-instantiate it at the new x position
            float currentRightX = ground.position.x + groundWidth;
            nextXPos = currentRightX + groundWidth;
            Instantiate (gameObject, new Vector3 (nextXPos, ground.position.y, ground.position.z), ground.rotation);
            Destroy (gameObject);
        }
    }
}
NOTE: I had to set up two ground tiles initially to give room for the camera to go further than the ground tile (one tile on screen, one off).
Again, this is just my attempt. I am completely open to criticism.

Related

Unity offset parallax fails on web builds

Thanks in advance for helping me out.
I'm wrapping up work on a 2D platformer that has a background parallax effect using a texture offset on five different layers. The two cloud layers just have their offset scrolling over time, and the three ground layers scroll based on the camera position. This all works perfectly in Unity and in the PC build and looks great.
However, on the Unity Web Player and WebGL builds, instead of the texture wrapping, it looks like this:
(I'm new here, so it seems I can't embed images yet.)
I have no idea what is causing the problem, but instead of wrapping around and acting like a conveyor belt, the texture just streaks like that. Here is my parallax code:
using UnityEngine;
using System.Collections;

public class sceneryParallax : MonoBehaviour {
    [SerializeField]
    private Transform cameraTransform;
    [SerializeField]
    private Renderer renderBGFar;
    [SerializeField]
    private Renderer renderBGMid;
    [SerializeField]
    private Renderer renderBGNear;

    public float offsetFar;
    public float offsetMid;
    public float offsetNear;

    void Update () {
        float xOffset = cameraTransform.position.x;

        Vector2 farOffset = new Vector2((xOffset / offsetFar) % 1.0f, 0.0f);
        Vector2 midOffset = new Vector2((xOffset / offsetMid) % 1.0f, 0.0f);
        Vector2 nearOffset = new Vector2((xOffset / offsetNear) % 1.0f, 0.0f);

        renderBGFar.material.mainTextureOffset = farOffset;
        renderBGMid.material.mainTextureOffset = midOffset;
        renderBGNear.material.mainTextureOffset = nearOffset;
    }
}
Also, here is a screen of my texture setup:
Link
Sorry if there are any formatting problems! I really appreciate any help you can offer. If you need more information, let me know!
Thanks!
If your texture wrap mode is set to Clamp, it will reuse the last available pixel. It sounds like the Repeat wrap mode is what you need, as opposed to clamping; the documentation for Clamp describes exactly the streaking you are seeing:
This is useful for preventing wrapping artifacts when mapping an image
onto an object and you don't want the texture to tile. UV coordinates
will be clamped to the range 0...1. When UVs are larger than 1 or
smaller than 0, the last pixel at the border will be used. See Also:
Texture.wrapMode, texture assets.
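If the import settings already show Repeat and the web build still streaks, one thing you could try (purely a sketch; the ForceRepeatWrap name is hypothetical and forcing the setting at runtime is my suggestion, not a confirmed fix) is to set the wrap mode from code when the scene starts:
using UnityEngine;

// Forces the wrap mode of a renderer's main texture to Repeat at runtime,
// in case the build ends up with a clamped texture.
public class ForceRepeatWrap : MonoBehaviour {
    [SerializeField]
    private Renderer targetRenderer; // e.g. the same renderer as renderBGFar

    void Start () {
        // mainTexture is the texture bound to "_MainTex" on the material.
        targetRenderer.material.mainTexture.wrapMode = TextureWrapMode.Repeat;
    }
}
It may also be worth checking whether a per-platform override in the texture import settings is reimporting the background textures with different settings for the web targets.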

How to scale the contents of a QGraphicsView using the QPinchGesture?

I'm implementing an image viewer on an embedded platform. The hardware is a sort of tablet and has a touch screen as input device. The Qt version I'm using is 5.4.3.
The QGraphicsView is used to display a QGraphicsScene which contains a QGraphicsPixmapItem. The QGraphicsPixmapItem contains the pixmap to display.
The relevant part of the code is the following:
void MyGraphicsView::pinchTriggered(QPinchGesture *gesture)
{
    QPinchGesture::ChangeFlags changeFlags = gesture->changeFlags();
    if (changeFlags & QPinchGesture::ScaleFactorChanged) {
        currentStepScaleFactor = gesture->totalScaleFactor();
    }
    if (gesture->state() == Qt::GestureFinished) {
        scaleFactor *= currentStepScaleFactor;
        currentStepScaleFactor = 1;
        return;
    }

    // Compute the scale factor based on the current pinch level
    qreal sxy = scaleFactor * currentStepScaleFactor;

    // Get the pointer to the currently displayed picture
    QList<QGraphicsItem *> listOfItems = items();
    QGraphicsItem *item = listOfItems.at(0);

    // Scale the picture
    item->setScale(sxy);

    // Adapt the scene to the scaled picture
    setSceneRect(scene()->itemsBoundingRect());
}
As a result of the pinch, the pixmap is scaled starting from the top-left corner of the view.
How can I scale the pixmap with respect to the center of the QPinchGesture?
From The Docs
The item is scaled around its transform origin point, which by default is (0, 0). You can select a different transformation origin by calling setTransformOriginPoint().
That function takes a QPointF, so you would need to find your centre point first and then set the origin point.
void QGraphicsItem::setTransformOriginPoint(const QPointF & origin)

(Qt) Rendering scene, different items in the same relative positions

I have a QSqlTableModel model that contains my data.
I have made a QGraphicsScene scene and a QGraphicsView view so the user can move some myQGraphicsTextItem text items into the desired position.
Something like this:
myQWidget::myQWidget()
{
    // These are members of my class
    chequeScene = new QGraphicsScene();
    chequeView = new QGraphicsView();
    model = new QSqlTableModel();

    // Populate the model, initialize things here...

    // Add predefined items to the scene
    setScene();
}
There's a button to show the view and move the text items of the scene. It works well.
There's another button that calls the slot print, which belongs to the class. It configures a QPrinter and then calls the paint method myQWidget::paint() below; after that, scene->render() is called.
The purpose of the method below is to print data on paper that is configured to have the same size as the scene, while printing the data in the same relative positions the text items had on the scene. I can't do it with QList because it doesn't order the items in the same way I added them to the scene.
Here is my code; it prints with some fields overlapping due to QList ordering items as they appear on the scene.
void myQWidget::paint()
{
    qreal dx = 0;
    qreal dy = 0;
    QList<QGraphicsItem*> L = chequeScene->items();

    for (int j = 0; j < model->columnCount(); j++) {
        if (!L.isEmpty())
        {
            // Save the position in dx, dy
            dx = L.first()->scenePos().x();
            dy = L.first()->scenePos().y();
            chequeScene->removeItem( L.first() );
            delete L.first();
            L.removeFirst();
        }

        QString txt("");

        // Select the printing format for each column
        switch (j)
        {
        case COLUMNADEFECHA:
            txt = QDate::fromString(model->data(model->index(chequenum, j)).toString(), "yyyy/MM/dd").toString("dd/MM/yyyy");
            break;
        case COLUMNADECHEQUES:
            break;
        default:
            txt = model->data(model->index(chequenum, j)).toString();
            break;
        }

        // Filter out unimportant columns
        if (j != COLUMNADECHEQUES)
        {
            // Supposedly the item with the desired information is added to the scene
            // at the same position it had before. Not working.
            GraphicsTextItem *item = new GraphicsTextItem();
            item->setPlainText(txt);
            item->setPos(dx, dy);
            chequeScene->addItem(item);
        }
    }
}
Any idea on how to get this working?
I think the issue is that you are getting the scenePos in dx and dy but setting it using the setPos function.
Also, since you are using your own GraphicsTextItem and not QGraphicsTextItem, a look at your paint method might help in understanding the problem.
Try using item->mapFromScene(dx, dy) and then use those coordinates to set the item position with item->setPos(..).
Hope this helps.

Displaying image as background of QGraphicsScene

I am trying to use QGraphicsView to display a map with some QGraphicsItem subclasses showing the region centers of the map. Conceptually, I organize the map as follows:
QGraphicsView
    QGraphicsScene
        QGraphicsPixmapItem : background image, fixed until the next call of loadSetting
        QGraphicsRectItem : legend, position relative to the background is fixed throughout the app
        QGraphicsEllipseItem : region centers
I want the map to behave as follows:
No scrollbars are displayed, and the background image fills all of the visible area of the view/scene.
When the widget is resized, the QGraphics*Items resize themselves accordingly (as if the view were zoomed).
The relative positions of the QGraphicsEllipseItems remain fixed until the next call of loadSetting().
Right now I have a problem getting the background image displayed properly.
Constructor [I'm adding this view to a QTabWidget directly: myTab->addTab(my_view_, "name"); ]
MyView::MyView(QWidget *parent) : QGraphicsView(parent) {
    bg_pixmap_ = new QGraphicsPixmapItem();
    legend_ = new MapLegend();

    setScene(new QGraphicsScene(this));
    scene()->addItem(bg_pixmap_);
    scene()->addItem(legend_);
}
Load map setting (during program execution, this method may be invoked multiple times)
void MyView::loadSetting(Config* cfg) {
    if (!cfg) return;

    /* (a) */
    scene()->clearFocus();
    scene()->clearSelection();
    for (int i = 0; i < symbols_.size(); i++)
        scene()->removeItem(symbols_[i]);
    qDeleteAll(symbols_);
    symbols_.clear();
    /* (a) */

    /* (b) */
    background_ = QPixmap(QString::fromStdString(cfg->district_map));
    bg_pixmap_->setPixmap(background_);
    for (size_t i = 0; i < cfg->centers.size(); i++) {
        qreal x = cfg->centers[i].first * background_.width();
        qreal y = cfg->centers[i].second * background_.height();
        MapSymbol* item = new MapSymbol(x, y, 10);
        symbols_.append(item);
        scene()->addItem(item);
    }
    /* (b) */

    update();
}
Questions
Now all items except bg_pixmap_ get displayed, and I have checked that the background_ variable loads the image correctly. Is there anything I missed?
How do I implement the resizeEvent of MyView to cope with the desired 'resize-strategy'?

flex: Drag and drop- object centering

In a drag-and-drop situation using Flex, I am trying to get the object centered on the drop point. Somehow, irrespective of the adjustments to height and width, it always positions the drop point at the top left of the object.
Here is the code:
imageX = SkinnableContainer(event.currentTarget).mouseX;
imageY = SkinnableContainer(event.currentTarget).mouseY;

// Error checks if imageX/imageY don't satisfy certain conditions - move to a default position
// img.width and img.height are both defined and traced to be 10 - the idea is to center the image on the drop point
Image(event.dragInitiator).x = imageX - (img.width) / 2;
Image(event.dragInitiator).y = imageY - (img.height) / 2;
The last two lines don't seem to have any effect. Any ideas why? It must be something straightforward that I am missing...
You can use the following snippet:
private function on_drag_start(event:MouseEvent):void
{
    var drag_source:DragSource = new DragSource();
    var drag_initiator:UIComponent = event.currentTarget as UIComponent;

    var thumbnail:Image = new Image();
    // Thumbnail initialization code goes here

    var offset:Point = this.localToGlobal(new Point(0, 0));
    offset.x -= event.stageX;
    offset.y -= event.stageY;

    DragManager.doDrag(drag_initiator, drag_source, event, thumbnail,
                       offset.x + thumbnail.width / 2, offset.y + thumbnail.height / 2, 1.0);
}
Here is one important detail: the snippet uses the stage coordinate system.
If you use event.localX and event.localY, this approach will fail in some cases. For example, suppose you click and drag a movie clip. If you use localX and localY instead of stage coordinates, they will give coordinates within the currently clicked part of the movie clip, not within the whole movie clip.
Use the xOffset and yOffset parameters of the doDrag method of DragManager.
Look here for an example.
