I've encountered a strange problem with JavaFX.
The use case: I want to zoom a swatch out with the scroll wheel.
I implemented a scroll event handler like this:
private void handleSwatchScrollAction(ScrollEvent se) {
    if (se.getEventType().equals(ScrollEvent.SCROLL)) {
        System.out.println("Enter into handleSwatchScrollAction ");
        // The following two lines calculate a ratio to scale by
        double diff = (se.getDeltaY() / 40) * 2;
        double diffFactor = diff / 2 / this.fxDial.getRadius() + 1;
        System.out.println("diffFactor is: " + diffFactor);
        // The following lines set the scale
        this.fxSwatch.setScaleX(diffFactor);
        this.fxSwatch.setScaleY(diffFactor);
    }
    System.out.println("finish handleSwatchScrollAction");
}
The problem: the first zoom-out operation (scroll up) works fine, but from the second scroll onward there is no visible change.
The log looks right, yet the display stays exactly the same as before.
For example, the log always looks like this when I repeatedly zoom out (scroll up), whether it is the first time or any time after:
Enter into handleSwatchScrollAction
(deltaX, deltaY) = (0.0, 40.0)
diffFactor is: 1.005
finish handleSwatchScrollAction
So why is there no visible change from the second zoom-out (scroll up) operation onward?
Thank you so much in advance!
You are just setting the scale values to the same value (1.005) every time you scroll. The first time, you will see a zoom of 0.5% or so; but on subsequent scroll events you are not changing the value.
You need something like:
double diffFactor = ... // as before
double scale = this.fxSwatch.getScaleX() * diffFactor;
this.fxSwatch.setScaleX(scale);
this.fxSwatch.setScaleY(scale);
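Putting that together, here is a minimal sketch of the corrected handler. The clamp bounds are illustrative assumptions, just to keep repeated scrolling from shrinking the node to nothing or growing it without limit:

private void handleSwatchScrollAction(ScrollEvent se) {
    if (se.getEventType().equals(ScrollEvent.SCROLL)) {
        double diff = (se.getDeltaY() / 40) * 2;
        double diffFactor = diff / 2 / this.fxDial.getRadius() + 1;

        // Accumulate: multiply the node's current scale by this event's factor.
        double scale = this.fxSwatch.getScaleX() * diffFactor;

        // Illustrative clamp (bounds chosen arbitrarily for this sketch).
        scale = Math.max(0.1, Math.min(scale, 10.0));

        this.fxSwatch.setScaleX(scale);
        this.fxSwatch.setScaleY(scale);
    }
}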
Okay, so I am making one of those scrolling-shooter, Galaga-type games using Game Maker Studio. I created the first enemy and set up a spawner for them; they are supposed to just fly downwards towards your ship, and that worked fine. But when I made the second enemy, I wanted it to move more slowly and side to side, and to bounce off the edges of the screen. It just won't work. I can't figure out what the hell the problem is, and it is driving me insane. If anyone has any ideas, please share them with me; if you need any more info on the game I can provide it. Here is the code for the step event of the second enemy:
// Control the enemy
if (y > room_height + 16)
{
    instance_destroy();
}
// Die code
if (armor <= 0)
{
    instance_create(x, y, o_explosion_center);
    instance_destroy();
}
// Bounce off edges
if (x >= room_width - 16)
{
    hspeed = -1;
}
if (x < 16)
{
    hspeed = 1;
}
First of all, you didn't say exactly what isn't working. The code you posted looks correct; everything depends on the result you expect.
One issue I can see is if this code is shared by the two enemies: you want them to have different speeds, but once they bounce, their horizontal speed will always be ±1 because you set hspeed to 1 and -1. When you create them, you should set a move_speed variable, and for the bouncing, write in the step event:
hspeed = -1 * move_speed // instead of hspeed = -1
and
hspeed = move_speed // instead of hspeed = 1
This way, they will keep their initial speeds.
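For example, if the second enemy is created with move_speed = 0.5, it will bounce between hspeed = -0.5 and hspeed = 0.5, instead of snapping back to ±1 on every bounce.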
For more help, could you please explain what doesn't work and post the creation code?
How can I make water pour from a flask in Game Maker? I have obj_flask, obj_water, and obj_container.
I want obj_water to pour from obj_flask into obj_container.
This depends hugely on how you want to achieve the effect. You could, for example, have an animated sprite stretching from the flask to the container. Or you could create water-droplet instances at a given rate and let them be affected by gravity. Or you could use a particle system, but that usually gives you less control if you want to check whether the water actually hit the container.
I can show you how to implement the second idea to get you started.
obj_jug
Step Event:
execute code:
x = mouse_x;
y = mouse_y;
if (mouse_check_button(mb_left))
{
    instance_create(x + 32, y + 8, obj_droplet);
}
obj_droplet
Create Event:
execute code:
a = 1;
v = 0;
Step Event:
execute code:
v += a;
y += v;
if (y >= window_get_height())
{
    instance_destroy();
}
Collision Event with object obj_container:
destroy the instance
This will not give a great effect, but it will do what is being asked.
I have a 2D array structure representing a grid of tiles that is part of the game I'm making. One aspect of the game is that the grid is filled in a somewhat random fashion, based on analysis of a text file. Right from the outset, though, I realised that leaving generation essentially random, without some kind of validity check or prevention mechanism to stop really badly configured grids from forming, would not work out. The main problem I want to avoid is too many untraversable tiles clustered close together, potentially severing chunks of the grid from the rest.
The idea I came up with to avoid really bad grids is to check, while assigning a tile value to each grid square during generation, with logic like this:
if (tileBeingInserted.isTraversable()) {
    // all is well
    return true;
} else {
    // we may have a problem: are there too many untraversables nearby?
    // Proceed to check all squares "around" the current one.
}
To be clear, checking around the current square means checking the immediately adjacent square in each of the 8 surrounding directions. Now, my problem is that I am trying to work out how to code this so that it will never throw a RangeError, or at least catch it and recover if it must. As an example, a corner square is clearly the worst case, in the sense that only 2 of the squares the algorithm wants to check are within the array's bounds. Naturally, if a RangeError happens for this reason I just want the program to carry on without issue, so the structure
try {
    // check 1
    // check 2 ... check 8
} on RangeError catch (e) {
}
is unacceptable, because as soon as a single out-of-range square is tested the code falls out of the check block. An alternative I thought of, but do not like because of its messiness, would be to wrap each check individually in a try-catch. Yes, that would work, but it is some horrid-looking code... so can anyone help me out here? Is there perhaps a different angle from which to approach this problem of avoiding the RangeErrors that I am not seeing?
So my code for testing whether another untraversable tile should be placed has shaped up like this:
bool _tileFitsWell(int tileTypeInt, int row, int col)
{
    //...initialise some things, set stuff up
    ...
    if (tile.traversable == true) {
        // In this case a new traversable tile is being put in, so no problems.
        return true;
    } else {
        // Begin testing what tiles are around the current tile.
        // Test NW adjacent
        if (row > 0 && col > 0) {
            temp = tileAt(row - 1, col - 1);
            if (!temp.traversable) {
                strikeCount++;
            }
        }
        // Test N adjacent
        if (row > 0) {
            temp = tileAt(row - 1, col);
            if (!temp.traversable) {
                strikeCount++;
            }
        }
        // Test NE adjacent
        if (row > 0 && col < _grid[0].length - 1) {
            temp = tileAt(row - 1, col + 1);
            if (!temp.traversable) {
                strikeCount++;
            }
        }
        // Test W adjacent
        if (col > 0) {
            temp = tileAt(row, col - 1);
            if (!temp.traversable) {
                strikeCount++;
            }
        }
    }
    return strikeCount < 2;
}
The code inside each "initial" if statement (the ones that check row and col) is a bit pseudocode-ish for simplicity's sake. As I explained in a previous comment, the reason I don't need to check tiles in the other 4 directions is that these checks are done while the map is being filled, so tiles in those positions will always be either uninitialised or simply out of bounds, depending on which tile the function is called to check at a given time.
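For reference, the same guard-before-access idea can be written as a single loop over the neighbour offsets, which avoids both the try-catch and the copy-pasted checks. A minimal sketch, written in Java purely for illustration (the original is Dart; Tile, tileAt and the grid dimension parameters are stand-ins for the asker's own helpers):

boolean tileFitsWell(Tile tile, int row, int col, int rows, int cols) {
    if (tile.isTraversable()) {
        return true;  // a traversable tile can never cause a problem
    }
    int strikeCount = 0;
    // NW, N, NE, W: the only neighbours that can already be initialised
    // while the grid is being filled top-to-bottom, left-to-right.
    int[][] offsets = { {-1, -1}, {-1, 0}, {-1, 1}, {0, -1} };
    for (int[] off : offsets) {
        int r = row + off[0];
        int c = col + off[1];
        if (r < 0 || r >= rows || c < 0 || c >= cols) {
            continue;  // out of bounds: just skip it, no exception handling needed
        }
        if (!tileAt(r, c).isTraversable()) {
            strikeCount++;
        }
    }
    return strikeCount < 2;
}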
I'm seeing rendering differences between the first time a page is loaded and subsequent reloads (caused by pressing the browser's refresh button). On reloads, features (such as placemarks or polygons) are often rendered with altitudes based on whatever terrain elevation data is available at render time; that data is later refined as more granular elevation data arrives, but the features' rendering/altitudes are never updated to match.
This seems to happen on reloads, not on the initial page load. The effect is most obvious in rough terrain, where the elevation data that arrives later can be quite different from the early data.
Here's a demonstration: http://jsfiddle.net/x4PEM/1/ Note the shape of the polygon on the initial load, then press 'run' to cause a refresh. Note rendering differences.
If a feature happens to get an altitude that is below the final terrain elevation, it may not be rendered at all (until the user causes a re-rendering, e.g. by tilting).
Is there something wrong with my code (which is based on Google samples)? Is there a way to prevent this from happening (maybe an event I need to catch so I can delay feature rendering until the elevation data has been received)? Is it a bug that should be reported to Google?
(I noticed this because I've been writing code in a text editor and then pressing refresh in a browser to see the results, similar to pressing 'run' on jsFiddle. End users wouldn't refresh this intensively, but they may still press refresh.)
Here's the code:
var ge;
google.load("earth", "1");

function init() {
    google.earth.createInstance('map3d', initCB, failureCB);
}

function initCB(instance) {
    ge = instance;
    ge.getWindow().setVisibility(true);

    var lat = 37.204193;
    var lon = -112.934429;
    var dlat = 0.005;
    var dlon = 0.005;
    var alt = 100;

    var la = ge.createLookAt('');
    la.set(lat, lon, 0, ge.ALTITUDE_RELATIVE_TO_GROUND, 0, 45, 2000);
    ge.getView().setAbstractView(la);

    var polygonPlacemark = ge.createPlacemark('');
    var polygon = ge.createPolygon('');
    polygon.setAltitudeMode(ge.ALTITUDE_RELATIVE_TO_GROUND);
    polygonPlacemark.setGeometry(polygon);

    var outer = ge.createLinearRing('');
    outer.setAltitudeMode(ge.ALTITUDE_RELATIVE_TO_GROUND);
    outer.getCoordinates().pushLatLngAlt(lat + dlat, lon - dlon, alt);
    outer.getCoordinates().pushLatLngAlt(lat + dlat, lon + dlon, alt);
    outer.getCoordinates().pushLatLngAlt(lat - dlat, lon + dlon, alt);
    outer.getCoordinates().pushLatLngAlt(lat - dlat, lon - dlon, alt);
    polygon.setOuterBoundary(outer);

    polygonPlacemark.setStyleSelector(ge.createStyle(''));
    polygonPlacemark.getStyleSelector().getPolyStyle().getColor().set('ff008000');

    ge.getFeatures().appendChild(polygonPlacemark);
}

function failureCB(errorCode) {
    alert("GE init failed");
}

google.setOnLoadCallback(init);
Is there something wrong with my code?
No, the code is correct.
In your example you are setting the polygon ALTITUDE_RELATIVE_TO_GROUND at 100m, but sometimes the ground altitude data hasn't finished streaming when the element is drawn. This is what accounts for the differing rendering you are seeing.
Is it a bug that should be reported to Google?
Yes, I think it is a bug with how relatively positioned elements work with streamed elevation data.
I have seen a similar issue with flying to a model's relative location, see this report: https://code.google.com/p/earth-api-samples/issues/detail?id=263
I would file a new bug report and reference that one in it.
Is there a way to prevent this from happening?
I think the work-around given in that bug-report could be applicable here too.
Simply set the polygon's altitudes to known values and its altitude mode to ALTITUDE_ABSOLUTE.
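(In your example, that would mean calling polygon.setAltitudeMode(ge.ALTITUDE_ABSOLUTE) and pushing coordinates whose altitude values already include the terrain height at that point, rather than a height above ground.)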
Failing that, you can also force the terrain to redraw by calling something like the following:
function redrawTerrain() {
    var current = ge.getOptions().getTerrainExaggeration();
    ge.getOptions().setTerrainExaggeration(0);
    ge.getOptions().setTerrainExaggeration(current);
}
I'm trying to do some joint tracking with the Kinect (just putting an ellipse over my right hand). Everything works fine with the default 640x480 image; I based my work on this Channel 9 video.
My code, updated to use the new CoordinateMapper class, is here:
...
CoordinateMapper cm = new CoordinateMapper(this.KinectSensorManager.KinectSensor);
ColorImagePoint handColorPoint = cm.MapSkeletonPointToColorPoint(atualSkeleton.Joints[JointType.HandRight].Position, ColorImageFormat.RgbResolution640x480Fps30);
Canvas.SetLeft(elipseHead, (handColorPoint.X) - (elipseHead.Width / 2)); // center the ellipse on the joint
Canvas.SetTop(elipseHead, (handColorPoint.Y) - (elipseHead.Height / 2));
This works. The question is:
How to do joint tracking in a scaled image, 540x380 for example?
The solution is pretty simple; I figured it out.
What I needed to do was find a factor to apply to the position.
The factor can be found by taking the current ColorImageFormat resolution of the Kinect and dividing it by the desired size. For example:
Let's say I am working with the RgbResolution640x480Fps30 format and my Image (ColorViewer) is 220x240. First, find the factor for X:
double factorX = 640.0 / 220; // the factor is 2.90909090... (note 640.0, not 640, to avoid integer division)
And the factor for Y:
double factorY = 480.0 / 240; // the factor is 2
Now I adjust the position of the ellipse using these factors:
Canvas.SetLeft(elipseHead, (handColorPoint.X / factorX) - (elipseHead.Width / 2));
Canvas.SetTop(elipseHead, (handColorPoint.Y / factorY) - (elipseHead.Height / 2));
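As a quick sanity check with illustrative numbers: a joint mapped to X = 320, the centre of the 640-pixel-wide colour frame, becomes 320 / 2.909 ≈ 110, which is the centre of the 220-pixel-wide ColorViewer.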
I've not used the CoordinateMapper yet, and I'm not in front of my Kinect at the moment, so I'll toss this out first and see about an update when I'm working with the Kinect again.
The Coding4Fun Kinect Toolkit has a ScaleTo extension as part of the library. This adds the ability to take a joint and scale it to any display resolution.
The scaling function looks like this:
private static float Scale(int maxPixel, float maxSkeleton, float position)
{
    float value = ((((maxPixel / maxSkeleton) / 2) * position) + (maxPixel / 2));
    if (value > maxPixel)
        return maxPixel;
    if (value < 0)
        return 0;
    return value;
}
maxPixel = the width or height, depending on which coordinate you're scaling.
maxSkeleton = set this to 1.
position = the X or Y coordinate of the joint you want to scale.
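For example, with maxPixel = 640, maxSkeleton = 1, and a joint at position 0.5, the result is ((640 / 1) / 2) * 0.5 + 640 / 2 = 160 + 320 = 480; skeleton coordinates in roughly [-1, 1] are mapped linearly onto [0, 640] and clamped at the edges.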
If you were to just include the above function you could call it like so:
Canvas.SetLeft(e, Scale(640, 1, joint.Position.X));
Canvas.SetTop(e, Scale(480, 1, -joint.Position.Y));
... replacing your 640 & 480 with a different scale.
If you include the Coding4Fun Kinect Toolkit, instead of re-writing code, you could just call it like so:
scaledJoint = rawJoint.ScaleTo(640, 480);
... then plug in what you need.