I'm kind of new to Unreal and I'm trying to make a moving platform. I tried the Level Sequencer, just like in a 3D project, but here it doesn't work. It's a 2D side-scrolling platformer and I'm using Blueprints. Any help?
There are basically two approaches to this kind of scenario. The first one is more old-school: sprites.
Unreal has Paper2D, which is a decent way to use sprites and sprite sheets in a project. The rest of the code is basically the same as in any 3D project, since this is just a 2D plane cycling through frames.
The second approach is stronger and yields better results quality-wise, but it requires basic 3D and rigging knowledge: treat your 3D game as a 2D one by using a lateral camera and a 3D skeleton consisting of planes.
If you're doing pixel art, though, the first one should suffice.
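If the Level Sequencer keeps fighting you, a common alternative is to drive the platform from its own actor logic, e.g. a Timeline or a Tick-based Lerp in Blueprints. As a rough sketch of that idea (not taken from the answer above), here is what the equivalent looks like in Unreal C++; the class and property names are placeholders, and the same Lerp-over-time logic maps directly onto a Blueprint Timeline node.

```cpp
// MovingPlatform.h -- minimal sketch of a back-and-forth platform actor.
// Names are illustrative; the same logic can be rebuilt in Blueprints.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MovingPlatform.generated.h"

UCLASS()
class AMovingPlatform : public AActor
{
    GENERATED_BODY()

public:
    AMovingPlatform()
    {
        PrimaryActorTick.bCanEverTick = true;
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        StartLocation = GetActorLocation();
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        // Ping-pong an alpha between 0 and 1 over time, then lerp the position.
        Elapsed += DeltaSeconds;
        const float Alpha = 0.5f * (1.0f + FMath::Sin(Elapsed * Speed));
        SetActorLocation(FMath::Lerp(StartLocation, StartLocation + Offset, Alpha));
    }

    // How far the platform travels (e.g. along X for a side scroller).
    UPROPERTY(EditAnywhere)
    FVector Offset = FVector(500.f, 0.f, 0.f);

    // Oscillation speed in radians per second.
    UPROPERTY(EditAnywhere)
    float Speed = 1.0f;

private:
    FVector StartLocation;
    float Elapsed = 0.f;
};
```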
Related
I'm trying to generate my own procedural map in UDK to create an organic ooze material.
I have searched the Epic docs and couldn't find anything covering how to create movement/transitions within the map. Is there a way with UnrealScript to code the variations in the surface over time?
I was assuming I could transition between 3-4 images but I can't find the solution.
I was confused by "procedural" at first. Procedural in games means generating a level or the world from a (randomly generated) seed number.
For your question: it is possible to fade/blend from one texture to another in a material in UDK, but UnrealScript is not the best place to start.
Use the Lerp node in the material editor for that. Plug texture 1 into A and texture 2 into B. Calculate a value between 0 and 1 to blend between the two and plug it into the Alpha input of the Lerp node.
You can chain multiple Lerp nodes together by plugging one Lerp node into the B input of another, but I suggest getting it working with just two textures first so you understand how it works.
You can make the blending value/Alpha depend on time by using the Time node.
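For reference, the Lerp node is just a linear blend, and running the Time node through a sine keeps the alpha in the 0-1 range. A tiny C++ sketch of the same math (only to illustrate what the nodes compute; it is not UDK code):

```cpp
#include <cmath>

// Linear interpolation: what the material Lerp node computes per channel.
float Lerp(float a, float b, float alpha)
{
    return a * (1.0f - alpha) + b * alpha;
}

// Map the Time node's output onto [0, 1] so it can drive the Lerp alpha.
// time is in seconds; speed controls how fast the textures cross-fade.
float BlendAlpha(float time, float speed)
{
    return 0.5f * (std::sin(time * speed) + 1.0f);
}
```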
As you are probably not familiar with materials in UDK, I suggest you watch this excellent tutorial video about materials from 3DBuzz. It will give you a basic understanding of how materials in UDK work, and afterwards you will know exactly what I was trying to explain in this post. One hour may seem a bit long, but it is easy to understand and follow, and I watched the whole thing too when I started with UDK.
Is there a way to create and use a simple 3d model on the Unreal Engine?
Your best bet would be to create the initial 3D asset in a third party tool and import it into the IDE. From there you can change the texture map, and manipulate the aesthetics in one way or another, but the initial 3D model should be in an external 3D format, and then placed as a prefab into your world.
Creating an object dynamically in UDK is cumbersome, requires lots of tweaking, and won't save much in terms of resource cost, especially if you want it to look like more than just 3D meshes thrown together. It is possible, but almost not worth it, especially if you have 3ds Max, Maya, Cinema 4D, MotionBuilder, or one of the other hundred tools available to do the grunt work for you.
Most 3D engines (e.g. Unity, UDK, Torque, CryEngine and now Havok) support many formats, especially the universal FBX. You could even use Google SketchUp and export to DAE or FBX format to get the model into your engine. Granted, you lose a lot of the elements, but the basic 3D mesh stays relatively intact.
I'm currently evaluating the possibilities for implementing a navigable 3D scene that renders multiple 2D layers. To be a bit more precise, I would like to display multiple graphs in a 3D environment in order to pinpoint similarities and differences between those graphs. As an example, there could be two graphs (one black, one grey) which are equivalent; for differing graphs, deviating nodes might, for example, be highlighted in red.
I am working with Qt's Graphics View Framework and have built an editable graph editor using QGraphicsScene and several QGraphicsItems, which I developed separately from this project.
Qt provides OpenGL support, e.g. the QGLWidget, and I have had a look at the provided examples. Given that I have not worked with OpenGL (I did some work with Java3D, though), I would love it if people could share their experience.
Several solutions came to my mind:
Render every QGraphicsView to a QPixmap and display them in 3D, which would make the graphs navigable but would prohibit any picking of elements etc.
Create an equivalent 3D element for every 2D graph element and "transform" every QGraphicsView into a 3D representation. I guess this would be quite some work (especially as I have not worked with OpenGL).
Maybe there is an easy way to "place" the QGraphicsScenes, the views, or just the QGraphicsItems in a QGLWidget without many adaptations and still receive the usual mouse click events etc.
For a first implementation, a plain navigable "viewer" that displays multiple graphs in different layers would be sufficient. But I would like to keep it extendable in order to add, e.g., picking later on.
The Qt3D project provides a class called QGraphicsEmbedScene which does exactly what you are asking for.
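If the Qt3D add-on is not available, the first idea from the question (render each QGraphicsScene into an image and use it as a texture on a quad) can be sketched roughly as below; the function and variable names are illustrative and picking is not handled:

```cpp
#include <QGraphicsScene>
#include <QImage>
#include <QPainter>
#include <QGLWidget>

// Render a QGraphicsScene into an OpenGL texture that can be mapped onto a
// quad inside a QGLWidget-based 3D view. Sketch only: error handling,
// sizing and texture reuse are omitted.
GLuint sceneToTexture(QGraphicsScene* scene, QGLWidget* glWidget, const QSize& size)
{
    QImage image(size, QImage::Format_ARGB32_Premultiplied);
    image.fill(0);  // fully transparent background

    QPainter painter(&image);
    painter.setRenderHint(QPainter::Antialiasing);
    scene->render(&painter);  // draw the 2D graph into the image
    painter.end();

    // QGLWidget handles the format conversion and texture upload.
    return glWidget->bindTexture(image, GL_TEXTURE_2D);
}
```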
Is there a way to correctly tween/animate meshes in Flash authoring tool?
Shape tweens don't recognise movement of specific vertices, don't preserve connections and generally mess things up. Shape hints are too few for any non-trivial mesh, and too much manual labor anyway.
I am trying to accomplish smooth animation between two mesh shapes, but with all the points and vertices preserved, and no new points/vertices added.
The meshes in question are strictly 2D, but I won't mind if the solution calls for ActionScript/Papervision3D assistance, although the authoring of keyframe mesh states needs to be done interactively in the Flash authoring tool (the shapes/movements are too complex to code by hand).
Ideas?
Depending on the animation, bones might help. Though you need Flash CS4 to use them.
They're available from the standard toolbar.
What's the best way to detect collisions between 2D game sprites? I am currently working with Allegro and G++.
There are a plethora of ways to handle collision detection. The methods you use will differ slightly depending on whether you're using a 2D or 3D environment. Also remember, when building a collision detection system, to take into account any physics you may want to implement in the game (needed for most decent 3D games) in order to enhance its realism.
The short version is to use bounding boxes. In other words, make each entity in the world a box, then check whether the boxes overlap along each axis.
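A minimal sketch of such an axis-aligned bounding box (AABB) test in C++, with illustrative field names:

```cpp
// Axis-aligned bounding box for a sprite: top-left corner plus size,
// matching how most 2D sprites are stored.
struct AABB
{
    float x, y, w, h;
};

// Two boxes collide if they overlap on both the X and the Y axis.
bool Overlaps(const AABB& a, const AABB& b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}
```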
With large numbers of entities to test for collisions, you may want to look into an octree (or its 2D analogue, the quadtree). You would simply divide the world into sectors, then only check for collisions between objects in the same sector.
For more resources, you can go to SourceForge and search for the Bullet dynamics engine, which is an open-source collision detection and physics engine, or you could check out http://www.gamedev.net, which has plenty of resources on many game development topics.
Any decent 2D graphics library will either provide its own collision detection functions for everything from aligned sprites to polygons to pixels, or have one or more good third party libraries to perform those functions. Your choice of engine/library/framework should dictate your collision detection choices, as they are likely far more optimized than what you could produce alone.
For Allegro there is Collegro. For SDL there is SDL_Collide.h or SDL-Collide. You can use I_COLLIDE with OpenGL. DarkBASIC has a built in collision system, and DarkPhysics for very accurate interactions including collisions.
Use a library; I recommend Box2D.
This question is pretty general. There are many ways to go about collision detection in a 2d game. It would help to know what you are trying to do.
As a starting point though, there are pretty simple methods that allow for detection between circles, rectangles, etc. I'm not a huge fan of gamedev.net, but there are some good resources there about this type of detection. One such article is here. It covers some basic material that might help you get started.
Basic 2D games can use rectangles or circles to "enclose" an object on the screen. Detecting when rectangles overlap or when circles overlap is fairly straightforward math. If you need something more complicated (such as arbitrary convex polygons), then the solution is more involved. Again, gamedev.net might be of some help here.
But really, to answer your question, we need to know what you are trying to do. What type of game? What types of objects are you trying to collide? Are you trying to collide with screen boundaries, etc.?
Checking for collision between two balls in 2D is easy. You can google it, but basically you check whether the sum of the two balls' radii is greater than or equal to the distance between their centers.
Then you can find the collision point by taking the unit vector from the center of one ball toward the other and multiplying it by that ball's radius.
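In code, that test and the contact-point calculation might look like this sketch (plain C++, illustrative names, and it assumes the two centers are not identical):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

struct Ball
{
    Vec2  center;
    float radius;
};

// Two balls collide if the distance between their centers is at most the
// sum of their radii. Comparing squared values avoids the sqrt.
bool BallsCollide(const Ball& a, const Ball& b)
{
    const float dx = b.center.x - a.center.x;
    const float dy = b.center.y - a.center.y;
    const float r  = a.radius + b.radius;
    return dx * dx + dy * dy <= r * r;
}

// Contact point: walk from a's center along the unit vector toward b's
// center by a's radius.
Vec2 ContactPoint(const Ball& a, const Ball& b)
{
    const float dx   = b.center.x - a.center.x;
    const float dy   = b.center.y - a.center.y;
    const float dist = std::sqrt(dx * dx + dy * dy);
    return { a.center.x + dx / dist * a.radius,
             a.center.y + dy / dist * a.radius };
}
```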
Implementation of a collision detection system is a complicated matter, but you want to consider three points.
Space partitioning of the world of objects
If you do a collision check of every 2D sprite in your world against everything else, you'll have a very slow program! You need to prioritize; you need to partition the space. You can use an orthogonal grid system and slice your world up into a 2D grid, or you could use a BSP tree, using lines as the separating function.
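A rough sketch of the orthogonal-grid idea in C++ (bucket sprites by cell so you only test pairs sharing a cell; all names are illustrative, and checking neighbouring cells is left out for brevity):

```cpp
#include <cmath>
#include <unordered_map>
#include <vector>

struct Sprite { float x, y; /* ... */ };

// Uniform grid: hash each sprite into the cell containing its position, so
// collision tests only need to look at sprites in the same cell (a full
// implementation would also check the neighbouring cells).
class SpatialGrid
{
public:
    explicit SpatialGrid(float cellSize) : cellSize_(cellSize) {}

    void Insert(Sprite* s)
    {
        cells_[KeyFor(s->x, s->y)].push_back(s);
    }

    const std::vector<Sprite*>& CellOf(float x, float y) const
    {
        static const std::vector<Sprite*> empty;
        const auto it = cells_.find(KeyFor(x, y));
        return it != cells_.end() ? it->second : empty;
    }

private:
    long long KeyFor(float x, float y) const
    {
        // Pack the 2D cell coordinates into a single hashable key.
        const long long cx = static_cast<long long>(std::floor(x / cellSize_));
        const long long cy = static_cast<long long>(std::floor(y / cellSize_));
        return (cx << 32) ^ (cy & 0xffffffffLL);
    }

    float cellSize_;
    std::unordered_map<long long, std::vector<Sprite*>> cells_;
};
```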
Broad phase collision detection
This uses bounding volumes such as cylinders or ellipses (whichever approximates the shape of your sprites best) to determine whether or not objects are worth comparing in more detail. The math for this is easy; learn your 2D matrix transformations. And for 2D intersection, you can even use high-powered video cards to do a lot of the work!
Narrow phase collision detection
Now that you've determined that two or more objects are worth comparing, you step into your fine-tuned section. The goal of this phase is to determine the collision result: penetration depth, volume encompassed, etc. This information will be fed into whatever physics engine you have planned. In 3D this is the realm of GJK distance algorithms and other neat algorithms that we all love so much!
You can implement all of this generically and specify the broad and narrow resolutions polymorphically, or provide a hook if you're working in a lower-level language.
Collisions between what? It depends on whether you use sprites, concave polygons, convex polygons, rectangles, squares, circles, points...