How do I make sure that the player has to be near a shelf to search it in my game? - game-maker

I have been making a game in GameMaker 1.4 and can't figure out how to require the player to be in collision with a specific object, such as a shelf or desk, before they can search the items in it. I'm trying to implement an inventory system, and I'm using GameMaker Language (GML) to program the game. My player can already walk left and right to interact with objects. If anyone has any info about how I could check for the player's position/collision from my object, please help.

You should first check for collision with objects in front of the player; once that works well, you can reuse that check to see whether the collision is with a specific object.
If you use wall objects for your collision checks, you can make an "interactable object" that uses the wall object as a parent.
Collision handling by itself is complicated, though; there are many ways you can go with it.
I use collision_point() myself to check whether objects are colliding in front of (and beside/behind) the character: https://manual.yoyogames.com/GameMaker_Language/GML_Reference/Movement_And_Collisions/Collisions/collision_point.htm
But that's just the start; you may want to look up collision handling for the kind of game you're making to handle it more smoothly.
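For example, here's a minimal sketch of that collision_point() check in GML (1.4). The parent object name obj_interactable, the "E" key binding, and the use of User Event 0 are assumptions for illustration, not requirements:

    // obj_player, Step event. Shelves, desks, etc. all have
    // obj_interactable as their parent object.
    var dir = 1;
    if (image_xscale < 0) dir = -1;                      // which way the player is facing
    var probe_x = x + dir * (abs(sprite_width) / 2 + 4); // a point just past the player's edge
    var target = collision_point(probe_x, y, obj_interactable, false, true);

    if (target != noone && keyboard_check_pressed(ord("E")))
    {
        // assumed: each interactable opens its search/inventory UI in User Event 0
        with (target) event_user(0);
    }

Because the check returns the specific instance it hit, the shelf and the desk can each run their own search logic in their own User Event 0.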

Related

Sketchup API for navigating around a model (eventually to integrate with Leap Motion)

I'm trying to use the SketchUp API to navigate around 3D models (zoom, pan, rotate, etc.). My ultimate aim is to integrate it with a Leap Motion app.
However, right now I think my first step is to figure out how to control the basic navigation gestures via the SketchUp API. After a bit of research, I see that there are the 'Camera' and the 'Animation' interfaces, but I think they are more suited to 'hardcoded' paths and motions within the script.
Therefore I was wondering: does anyone know how I can write a plugin that accepts inputs from another program (my eventual Leap Motion app in this case) and translates them into specific navigation commands using the SketchUp API (like pan, zoom, etc.)? Can this be done using the 'Camera' and the 'Animation' interfaces (in some sort of step increments), or are there other interfaces I should be looking at?
As usual, any examples would be most helpful.
Thanks!
View, Camera and the Animation class are what you're looking for. Maybe you don't even need the Animation class; you might be fine just using timers in the UI module. It depends on the details of what you will be doing.
You can set the camera directly like so:
Sketchup.active_model.active_view.camera.set(ORIGIN, Z_AXIS, Y_AXIS)
or you can use View.camera=, which also accepts a transition time argument if you find that useful.
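For the step-increment navigation you're after, a minimal sketch of nudging the camera directly might look like this (the step size and axis are arbitrary assumptions for illustration):

    # Pan the active camera one step sideways; 1.m is an arbitrary step size.
    model = Sketchup.active_model
    view  = model.active_view
    cam   = view.camera

    step = Geom::Vector3d.new(1.m, 0, 0)
    new_cam = Sketchup::Camera.new(cam.eye.offset(step),
                                   cam.target.offset(step),
                                   cam.up,
                                   cam.perspective?,
                                   cam.fov)
    view.camera = new_cam # or set it with a transition time, as noted above

An external input source (your Leap Motion app) would just map each gesture to repeated calls like this.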
For bridging input you could always create a Ruby C Extension that takes care of the communication between the applications. There are some quirks in getting C Extensions to work for SketchUp Ruby as opposed to regular Ruby, though, depending on how you compile it.
I wrote a hello world example a couple of years ago: https://bitbucket.org/thomthom/sketchup-ruby-c-extension
Though note that I have since found a better solution for Windows, using the Development Kit from Ruby Installers: http://rubyinstaller.org/
This answer is related to my comment above regarding the view seemingly 'jumping' when I assign a new camera to the current view using camera=, but not if I use camera.set.
I figured out this was happening because the camera FOV for the original camera was different, and the new camera was defaulting to an FOV of 30. Explicitly creating the camera with the optional perspective and FOV arguments taken from the initial camera solves this problem:
new_camera = Sketchup::Camera.new new_eye, new_target, curr_camera.up, curr_camera.perspective?, curr_camera.fov
Hope people find this useful!

Get the next direction, given a GPS location

I am working on a proof-of-concept project: an augmented reality based navigation system. Basically, it is supposed to show users a video of the real scene with additional navigational signs that are created using the GPS location.
What I need to build is not a complete, useful system but rather a proof of concept. There are mainly two tasks I have to do. First, the real image of the scene is processed to determine the road in it; then, using GPS, I will place a navigational sign like "turn right" or "go straight" on the road. The challenge I have is how I can fetch the required navigation information. I considered the Google Maps API, but it requires you to have your own Google host. I also did some research on how to get directions from it, but it seems the results are not well structured for my purposes. I mean, I need a fixed, rigorously defined set of outputs (e.g. turn right, turn left, etc.), but its output is very unstructured. How do you think I can achieve this? Thanks for any help.
This link might help with Google Maps location extraction:
https://developers.google.com/maps/documentation/javascript/streetview
You can use Layar, or other available AR SDKs (Metaio and Qualcomm also provide them), for AR-related overlays.
Good luck!

Is it possible to trace references between objects in Flash, in the same way as the Flash Builder profiler?

I'm trying to find memory leaks in a fairly large Flex application and I'm tired of using the paltry tools Flash Builder makes available.
Specifically, I want to analyse the relationships of objects in memory, using the same information Flash Builder's tools appear to have access to. I.e. which objects are in memory, and which objects they have references to, and have references to them. Given that information, I want to construct a directed graph whose nodes are live objects and whose edges are references from one object to another. From there I want to search for dominators, which should be a good indication of which objects are leaking.
I believe Eclipse does something similar for Java.
Unfortunately, Flash Builder only allows the export of its captured profiling data in a binary form that's only parsable by Flash Builder. Rather than try to reverse engineer their output I decided to try to capture the data myself, since they make their profiling API available in the flash.sampler.* package.
So far I've managed to collect the objects that are currently live in memory, get their allocation traces, and references to the objects that I can inspect, which is most of what I need. But I can't figure out how the FB profiler traces back-references to the GC root. The only way I can see to do it is to inspect every object in memory, and for each object inspect each of its properties, and so on, until I find a chain to an object classified as "root" level. But since I can only follow references on publicly visible properties, it's entirely possible I'll miss lots of references that prevent garbage collection.
How does the Flash Builder profiler do it?
My suspicion is that it doesn't just use the sampler.* API to capture information, but supplements that with queries performed through a debugger connection, which is probably out of scope for my work. But in the absence of any way to verify that, I'm hoping it's possible using only the sampler API.
Actually, IMHO, if Flash (Flex) Builder's memory/performance tooling seems paltry, then you aren't using it right. The key is understanding the tooling you have available - it has been available since the 4.0 SDK, and I've been using it on every project where I've been assigned as the 'runtime-analysis guy'.
Live View:
We all know about this one; it gives you a "live" view of what's currently allocated. While the current instance count is useful, what's even more useful is the cumulative count. This helps track down the errant methods which create way too many objects.
Loitering Object View:
You probably aren't using this one, but trust me, once you do, you won't go back. With this it helps to have clearly defined, small screen/application states (e.g. 1: a starting point; 2: the ability to create a dialog; 3: a closing point that is the same state as 1). In your application, get to the place you want to test. Then click the memory snapshot function - the "colored lines" icon. Now, in your application, run through steps 2 and 3. Go back to the profiler and click the snapshot function again. Here you can now either terminate or pause your application. Select both memory profiles and click the loitering object function - the "green icon." In theory this list will be empty, but it won't be. This shows you which objects have been marked for [sweep] but never [reap]'ed.
Double-click any object and this gives you a detail view with a list of every reference that still holds onto the object. I'll give you a hint right now: if you haven't created a deconstruction process in your application (e.g. an IDestroyable interface), stop right now, go back, and do this. You must assign null to every member that is not a primitive. This means every class instance, array, vector, event listener, and so on.
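A minimal sketch of that deconstruction pattern (the IDestroyable interface and the member names here are illustrative assumptions, and each declaration would live in its own .as file):

    package
    {
        public interface IDestroyable
        {
            function destroy():void;
        }
    }

    package
    {
        import flash.display.Sprite;
        import flash.events.MouseEvent;

        public class ShelfView extends Sprite implements IDestroyable
        {
            private var items:Array = [];

            public function ShelfView()
            {
                addEventListener(MouseEvent.CLICK, onClick);
            }

            private function onClick(e:MouseEvent):void { }

            // Called by the owner before releasing its last reference.
            public function destroy():void
            {
                removeEventListener(MouseEvent.CLICK, onClick);
                items = null; // null out every non-primitive member
            }
        }
    }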
As far as I'm aware, the sampler package is the only thing the tool uses - after all, the tool doesn't inject any code into your application at the time of invocation. It compares all objects via NewObjectSample and DeleteObjectSample, and looks at getSavedThis() going back up the prototype chain (this should return an object on which you can call getSavedThis() again, and so on).
http://help.adobe.com/en_US/flashbuilder/using/WS6f97d7caa66ef6eb1e63e3d11b6c4d0d21-7edf.html
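A minimal sketch of that NewObjectSample/DeleteObjectSample comparison, using only the sampler API (it assumes the SWF is running in the debugger player with sampling enabled; this is a debug-only frame script, not production code):

    import flash.sampler.*;
    import flash.utils.Dictionary;

    startSampling();
    // ... exercise the application here ...
    pauseSampling();

    var live:Dictionary = new Dictionary();        // id -> NewObjectSample
    for each (var s:Sample in getSamples())
    {
        if (s is NewObjectSample)
            live[NewObjectSample(s).id] = s;       // object allocated
        else if (s is DeleteObjectSample)
            delete live[DeleteObjectSample(s).id]; // object was collected
    }
    // Whatever remains in "live" was allocated but never collected.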
Hope this helps.

Strategy for sharing OpenGL resources

I'm creating a CAD-like app (Qt-based). It will be a multiple-document interface, and each document will contain about 5 viewports (derived from QGLWidget). As such I need my flat shader to be shared across the entire application, and the 3D assets (models stored as VBOs) to be shared across each document, i.e. its 5 viewports.
I thought that as long as I shared around the shader program and VBO GLuint ids, everything would automagically work - it doesn't. I think that's because each viewport/context has its own address space on the graphics card; if anyone knows better, please inform!
I would like to have the shader compiled at application start, but this is proving difficult, as I need a valid QGLWidget to get OpenGL into a valid state beforehand. But since I need to share the QGLWidgets (through their constructor) to have them share resources, one needs to be created and shown before the others can be instantiated. This is highly impractical, as multiple views need to be shown to the user at once.
This must be easier than I'm making out because it's hardly groundbreaking stuff, but I am really struggling - can anyone point me in the right direction?
Thanks, Cam
Here's what CAD/MDI applications usually do:
they create a shared context that serves for, well, sharing resources.
they use wglShareLists when creating a new OpenGL rendering context to give it access to the resource ids of the shared context.
wglShareLists can be used for sharing VBOs, textures, shaders, etc., not only display lists (sharing DLs is the legacy usage, hence the function name).
I don't remember if you need to create resources with the shared context or if you can create them on any context.
If you're not on Windows, see glXCreateContext. That should put you on track.
Edit:
I've looked at Qt, it looks like it's abstracted with member QGLContext::create.
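On the Qt side, a minimal sketch of the sharing setup, assuming Qt 4's QGLWidget share constructor (the hidden "master" widget is an assumption of this sketch, not the only way to structure it):

    #include <QApplication>
    #include <QGLWidget>

    class Viewport : public QGLWidget
    {
    public:
        explicit Viewport(QWidget *parent = 0, const QGLWidget *shareWith = 0)
            : QGLWidget(parent, shareWith) {}
    };

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        Viewport master;             // created but never shown; owns the shared resources
        // master.makeCurrent();     // compile shaders / upload VBOs here at startup

        Viewport view1(0, &master);  // shares master's shaders, VBOs, textures
        Viewport view2(0, &master);
        Q_ASSERT(view1.isSharing() && view2.isSharing());

        view1.show();
        view2.show();
        return app.exec();
    }

The hidden master solves the "one widget must exist first" problem: it gets OpenGL into a valid state at application start without ever being shown to the user.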

Flex: Testing UI components at the click level?

I've been working on a Flex component and I'd like to write some automated tests for it. The trouble is, the UI testing tools I've looked at (FlexMonkey and Selenium Flex API) don't simulate "enough":
Most of the bugs which have come up so far relate to the way Flex deals with dragging and dropping, which these libraries can't simulate accurately enough. For example, I need to test a case where there is a "drop" event which occurs in the bottom half of a component – neither FlexMonkey nor Selenium Flex API can do that (they may simulate a mouse event, but they won't include coordinates).
So, is there any "good" way to automate that sort of testing?
Edit: After much research, it looks like the only piece of software that can do this is iMacros, which is Windows-only, and its interface is... lacking. So I'm going to be writing my own. Basically, it will put an HTTP interface on java.awt.Robot so code (in any language) can simulate mouse/keyboard events. If you're interested, PM me and I'll keep you updated.
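The core of the idea is small. Here's a minimal sketch of an HTTP wrapper around java.awt.Robot (the endpoint name and parameters are invented for illustration and are not Blunderbuss's actual protocol):

    import com.sun.net.httpserver.HttpServer;
    import java.awt.Robot;
    import java.awt.event.InputEvent;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class RobotServer {
        public static void main(String[] args) throws Exception {
            Robot robot = new Robot();
            HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
            // e.g. GET http://localhost:8000/click?x=120&y=340
            server.createContext("/click", exchange -> {
                String[] params = exchange.getRequestURI().getQuery().split("&");
                int x = Integer.parseInt(params[0].split("=")[1]);
                int y = Integer.parseInt(params[1].split("=")[1]);
                robot.mouseMove(x, y);                          // real OS-level mouse move
                robot.mousePress(InputEvent.BUTTON1_DOWN_MASK); // real click, with coordinates
                robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
                byte[] ok = "ok".getBytes();
                exchange.sendResponseHeaders(200, ok.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(ok); }
            });
            server.start();
        }
    }

Because the events come from the OS rather than from inside the Flash player, drag-and-drop behaves exactly as it does for a real user.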
Edit 2: I have published the first version of the framework I wrote, Blunderbuss, over at BitBucket: http://bitbucket.org/wolever/blunderbuss/. You'll need Jython to run it (http://www.jython.org/), but after that the flex-client example should work.
Videos of Blunderbuss live over at Vimeo:
Automating Flex testing with Blunderbuss
Blunderbuss test suite running
At the moment this remains a proof-of-concept, as I haven't had the cycles to clean it up and make it more useable… But maybe enough people bothering me would give me that time :)
I've used Eggplant to test Flash and AIR apps without having to add any hooks into the code. It's a great tool but it's quite expensive. It simulates a real user by VNC-ing into a system and uses image recognition - among other things - to interact with the app.
I am definitely interested in your custom Java class, and (though I am not the best at Java (yet...)), I would be willing to help out if you're thinking of making this collaborative.
As to Flash MouseEvents. Unfortunately, there really isn't an accurate way to simulate the drag/drop experience in Flash. MouseEvents, when generated by the mouse, are handled in a very different way than regular events and while you could simulate actions by passing events into the handling functions, or by making the dispatcher fire a new DragEvent( DragEvent.DRAG_DROP..., it will not be the same as having the user interact with it. And for some functionality (like gaining access to the clipboard), nothing inside Flash will accomplish your goals.
To be honest, you're probably headed in the right direction -- using something which is not written in Flash to drive faked mouse events is probably your best bet.
I've never had to use it in Flex, but I recently stumbled across some info on the automation packages in the MS Surface SDK... After looking into it, those classes automate user behavior for testing, i.e. move a fake mouse to this point, perform this action. Since you're using Flex, look at the mx.automation packages and classes. My guess (and hope) is that you'd be able to achieve what you want using these classes.
You could also try AutoHotkey - it is similarly a macro-scripting program, but it has proven to be very efficient, and you can write scripts and set it up very easily.
