Collision detection is not working correctly - JavaFX

I have to submit a Breakout clone and I'm struggling with the collision detection between the ball and the bricks. Basically, the collision detection works, but the ball destroys the brick about 10 pixels away from the visual object. I'm checking the bounds of both objects, but I guess the problem is that the ball is a moving object and the brick is a static one.
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        brick = brickArray[i][j];
        if (brick == null)
            continue;
        areBricksLeft = true;

        Bounds brickBounds = brick.getBoundsInParent();
        Bounds ballBounds = ball.getBoundsInParent();
        if (brickBounds.intersects(ballBounds)) {
            brick.removeBrickAt(i, j, brick, brickArray, brickPane);
            didHitBrick = true;
        }
    }
}

Thanks for the hint, I found the mistake. I replaced my condition with this:
double ballX = ball.getLayoutX() + ball.getRadius();
double ballY = ball.getLayoutY() + ball.getRadius();
if ((ballX <= brickBounds.getMaxX() - 10 && ballX >= brickBounds.getMinX() - 10) &&
        (ballY <= brickBounds.getMaxY() - 10 && ballY >= brickBounds.getMinY() - 10)) {
    brick.removeBrickAt(i, j, brick, brickArray, brickPane);
    didHitBrick = true;
}
Now it is possible to adjust the collision by subtracting and adding values to the bounds.
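If you would rather avoid fixed pixel offsets, one alternative is a circle-versus-rectangle test against the ball's centre and radius. A minimal sketch, assuming ball is a javafx.scene.shape.Circle, circleHitsRect is a hypothetical helper, and ballX/ballY are the centre coordinates expressed in the brick pane's coordinate space (convert with localToScene/sceneToLocal if the ball and the bricks have different parents):

// Returns true if a circle (centre cx, cy, radius r) overlaps an axis-aligned
// rectangle given by its Bounds: clamp the centre into the rectangle and
// compare the remaining distance with the radius.
private static boolean circleHitsRect(double cx, double cy, double r, Bounds rect) {
    double nearestX = Math.max(rect.getMinX(), Math.min(cx, rect.getMaxX()));
    double nearestY = Math.max(rect.getMinY(), Math.min(cy, rect.getMaxY()));
    double dx = cx - nearestX;
    double dy = cy - nearestY;
    return dx * dx + dy * dy <= r * r;
}

// Usage inside the brick loop:
if (circleHitsRect(ballX, ballY, ball.getRadius(), brickBounds)) {
    brick.removeBrickAt(i, j, brick, brickArray, brickPane);
    didHitBrick = true;
}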

Related

Game Of Life ends quickly (Java)

I've created a basic version of the Game Of Life: each turn, the board is simulated by a 2D array of 1's and 0's, after which another class creates a drawing of it for me using the 2D array.
I've read all the other questions here regarding this game, but no answer seems to work for me... sorry if I'm beating a dead horse here.
I think I have a problem with my algorithm: maybe the board gets filled with the wrong number of dead and alive cells, and thus it ends rather quickly (5-10 turns).
I found an algorithm here to scan all the neighbors and even added a count = -1 in case a cell in the grid counts itself as its own neighbor, but I think I'm still missing something.
public static void repaint(board game, int size, int[][] alive, int[][] newGeneration)
{
    int MIN_X = 0, MIN_Y = 0, MAX_X = 9, MAX_Y = 9, count;
    for (int i = 0; i < size; i++)
    {
        for (int j = 0; j < size; j++) // here we check each matrix cell's neighbors to see if they are alive or dead
        {
            count = 0;
            if (alive[i][j] == 1)
                count = -1;
            int startPosX = (i - 1 < MIN_X) ? i : i - 1;
            int startPosY = (j - 1 < MIN_Y) ? j : j - 1;
            int endPosX = (i + 1 > MAX_X) ? i : i + 1;
            int endPosY = (j + 1 > MAX_Y) ? j : j + 1;
            for (int rowNum = startPosX; rowNum <= endPosX; rowNum++)
            {
                for (int colNum = startPosY; colNum <= endPosY; colNum++)
                {
                    if (alive[rowNum][colNum] == 1)
                        count++;
                }
            }
            if (alive[i][j] == 0 && count == 3) // conditions of the game of life
                newGeneration[i][j] = 1;        // filling the new array for the next life
            if (alive[i][j] == 1 && count < 2)
                newGeneration[i][j] = 0;
            if (alive[i][j] == 1 && count >= 4)
                newGeneration[i][j] = 0;
            if (alive[i][j] == 1 && count == 3)
                newGeneration[i][j] = 1;
        }
    }
    game.setAlive(newGeneration); // we created a new matrix with the new lives, now we set it
    SetupGUI(game, size);         // redrawing the panel
}
What am I doing wrong? Thanks for the help.
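One thing to note: the code above never writes newGeneration[i][j] for a live cell with exactly two neighbors, nor for a dead cell that stays dead, so stale values from a previous generation can leak into the next one. For comparison, a minimal sketch of the standard rules that writes every cell on every pass; countNeighbours is a hypothetical helper doing the same bounded neighbor scan as the inner loops above:

// Standard Conway update: every cell of newGeneration is assigned each generation.
for (int i = 0; i < size; i++) {
    for (int j = 0; j < size; j++) {
        int neighbours = countNeighbours(alive, i, j); // hypothetical helper, same scan as above
        if (alive[i][j] == 1) {
            // A live cell survives only with 2 or 3 neighbors.
            newGeneration[i][j] = (neighbours == 2 || neighbours == 3) ? 1 : 0;
        } else {
            // A dead cell comes to life only with exactly 3 neighbors.
            newGeneration[i][j] = (neighbours == 3) ? 1 : 0;
        }
    }
}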

How to keep my QMainWindow always inside of the desktop?

I want to keep my QMainWindow always inside of the desktop, so I added an implementation of QMainWindow::moveEvent:
void MainWindow::moveEvent(QMoveEvent *ev)
{
    if (ev->pos().x() < 0)
        setGeometry(0, ev->oldPos().y(), width(), height());
}
But when I move the window beyond the left edge of the desktop, the app crashes.
What is wrong with this code? Why does it crash? Is my solution correct?
Update:
I tried this:
int newx = ev->pos().x(),
    newy = ev->pos().y();
if (ev->pos().x() < 0) newx = 0;
if (ev->pos().y() < 0) newy = 0;
move(newx, newy);
It works without crashing, but I'm not satisfied because the movement is not smooth.
This should smoothly handle the upper-left corner, but you'll need to add some more conditions to get it working for all four sides.
posX and posY are member variables.
void MainWindow::moveStep() { // [SLOT]
    int movX = 0, movY = 0;
    if (posX < 0) movX = 1;
    if (posY < 0) movY = 1;
    move(posX + movX, posY + movY);
}

void MainWindow::moveEvent(QMoveEvent *ev) {
    if (ev->pos().x() < 0 || ev->pos().y() < 0) {
        posX = ev->pos().x();
        posY = ev->pos().y();
        QTimer::singleShot(10, this, SLOT(moveStep()));
    }
}
To do this even more elegantly, consider using QVariantAnimation with a QRect and setGeometry().
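A rough sketch of that idea, assuming a QPointer<QVariantAnimation> m_clampAnimation member, Qt 5 connect syntax, and an illustrative 150 ms duration (the <QVariantAnimation> and <QPointer> includes are needed); like the code above, it only handles the top and left edges:

void MainWindow::moveEvent(QMoveEvent *ev)
{
    QMainWindow::moveEvent(ev);

    // Nothing to do while the window is inside the desktop, or while we are
    // already animating it back (setGeometry() itself triggers move events).
    if ((ev->pos().x() >= 0 && ev->pos().y() >= 0) ||
        (m_clampAnimation && m_clampAnimation->state() == QAbstractAnimation::Running))
        return;

    QRect target = geometry();
    target.moveLeft(qMax(target.left(), 0));
    target.moveTop(qMax(target.top(), 0));

    m_clampAnimation = new QVariantAnimation(this);
    m_clampAnimation->setDuration(150);            // illustrative duration
    m_clampAnimation->setStartValue(geometry());
    m_clampAnimation->setEndValue(target);
    connect(m_clampAnimation, &QVariantAnimation::valueChanged,
            this, [this](const QVariant &value) { setGeometry(value.toRect()); });
    m_clampAnimation->start(QAbstractAnimation::DeleteWhenStopped);
}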

How to make a map in XNA 4 with a matrix from a text file

I am trying to make a map by reading a text file line by line (because I can't find how to do it word by word). So I made a map00.txt that looks like "33000000111" (every number is on its own row; the first two rows are the number of columns and rows, so the matrix I load it into looks like
000
000
111
). Now I am supposed to draw 3 tiles at the bottom (1 = draw tile). I do so by drawing each tile at its position in the matrix * window height (width) / number of matrix rows (columns).
PROBLEM: I can't get the right values for the current window width and height.
Code for loading tiles:
public int[,] LoadMatrix(string path)
{
    StreamReader sr = new StreamReader(path);
    int[,] a = new int[int.Parse(sr.ReadLine().ToString()),
                       int.Parse(sr.ReadLine().ToString())];
    for (int i = 0; i < a.GetLength(0); i++)
        for (int j = 0; j < a.GetLength(1); j++)
        {
            a[i, j] = int.Parse(sr.ReadLine().ToString());
        }
    sr.Close();
    return a;
}
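As an aside, if you ever switch to one text line per map row (e.g. "000", "000", "111") instead of one number per line, a sketch of such a loader could look like this; LoadMatrixFromRows is only an illustrative name, and it still assumes the first two lines hold the dimensions (needs using System.IO;):

public int[,] LoadMatrixFromRows(string path)
{
    string[] lines = File.ReadAllLines(path);
    int rows = int.Parse(lines[0]);
    int cols = int.Parse(lines[1]);
    int[,] a = new int[rows, cols];
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            a[i, j] = lines[i + 2][j] - '0'; // digit character -> int
    return a;
}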
Code for drawing tiles:
public void DrawTiles(SpriteBatch sp, GraphicsDeviceManager gdm)
{
    for (int i = 0; i < matrix.GetLength(0); i++)
        for (int j = 0; j < matrix.GetLength(1); j++)
        {
            if (i == 1)
            {
                sp.Draw(tile,
                        new Rectangle(j * (gdm.PreferredBackBufferWidth / 3),  // matrix.GetLength(1),
                                      i * (gdm.PreferredBackBufferWidth / 3),  // matrix.GetLength(0),
                                      gdm.PreferredBackBufferWidth / matrix.GetLength(1),
                                      gdm.PreferredBackBufferHeight / matrix.GetLength(0)),
                        Color.White);
            }
        }
}
But the result is that they are drawn about 40 pixels above the bottom of the screen!
I tried GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Height (and Width), but I get the same result. And when I hard-code the numbers that should (in theory) be width/columns and height/rows, I get what I want. Any suggestions would be VERY appreciated, because I have been stuck on this for a long time on Google and Stack Overflow.
Here is a reworked version of your Draw code, which should work:
public void DrawTiles(SpriteBatch sp, GraphicsDeviceManager gdm)
{
    // You would typically pre-compute these in a load function
    int tileWidth = gdm.PreferredBackBufferWidth / matrix.GetLength(1);   // columns
    int tileHeight = gdm.PreferredBackBufferHeight / matrix.GetLength(0); // rows

    // Loop through all tiles
    for (int i = 0; i < matrix.GetLength(0); i++)
    {
        for (int j = 0; j < matrix.GetLength(1); j++)
        {
            // If the tile value is not 0
            if (matrix[i, j] != 0)
            {
                // j is the column (x), i is the row (y)
                sp.Draw(tile, new Rectangle(j * tileWidth, i * tileHeight, tileWidth, tileHeight), Color.White);
            }
        }
    }
}
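A hypothetical call site, just to show how this is typically invoked from the game's Draw method (the map and graphics field names are only examples; spriteBatch is the usual XNA template field):

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin();
    map.DrawTiles(spriteBatch, graphics); // 'map' holds the tile class, 'graphics' is the GraphicsDeviceManager
    spriteBatch.End();

    base.Draw(gameTime);
}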

QImage OpenCV - setPixel with only green and black colors

I made a simple graphical user interface with Qt, and I use OpenCV to process the webcam stream, e.g. Canny edge detection.
I'm trying to implement a switch between two displays of the webcam:
1) "normal mode": a grayscale display where the webcam shows the edge-detection video in grayscale
2) "greenMode": a green and black display where the webcam shows the same edge-detected video, but in green and black.
The first one (grayscale) works.
Now I have problems with the second one. Here's the part of the code where I can't find a solution:
// Init capture
capture = cvCaptureFromCAM(0);
first_image = cvQueryFrame(capture);

// Init current qimage
current_qimage = QImage(QSize(first_image->width, first_image->height), QImage::Format_RGB32);

IplImage* frame = cvQueryFrame(capture);
int w = frame->width;
int h = frame->height;

if (greenMode) // greenMode: black and green result
{
    current_image = cvCreateImage(cvGetSize(frame), 8, 3);
    cvCvtColor(frame, current_image, CV_BGR2RGB);
    for (int j = 0; j < h; j++)
    {
        for (int i = 0; i < w; i++)
        {
            current_qimage.setPixel(i, j, qRgb(current_image->imageData[i+j*w+1],
                                               current_image->imageData[i+j*w+1],
                                               current_image->imageData[i+j*w+1]));
        }
    }
}
else // normal mode: grayscale result WHICH WORKS
{
    current_image = cvCreateImage(cvGetSize(frame), 8, 1);
    cvCvtColor(frame, current_image, CV_BGR2GRAY);
    for (int j = 0; j < h; j++)
    {
        for (int i = 0; i < w; i++)
        {
            current_qimage.setPixel(i, j, qRgb(current_image->imageData[i+j*w+1],
                                               current_image->imageData[i+j*w+1],
                                               current_image->imageData[i+j*w+1]));
        }
    }
}

gaussianfilter(webcam_off);
border_detect(webcam_off);
cvReleaseImage(&current_image);
repaint();
The "greenMode" doesn't seem to put good pixels with this "setPixel" (I take the middle rgb value : current_image->imageData[i+j*w+1]) :
current_image = cvCreateImage(cvGetSize(frame), 8, 3);
cvCvtColor(frame, current_image, CV_BGR2RGB);
for (int j = 0; j < h; j++)
{
    for (int i = 0; i < w; i++)
    {
        current_qimage.setPixel(i, j, qRgb(current_image->imageData[i+j*w+1],
                                           current_image->imageData[i+j*w+1],
                                           current_image->imageData[i+j*w+1]));
    }
}
Here's what I get: firstly, the output is not green and black, and secondly, it's zoomed compared to the grayscale image.
Do you have any clues for getting the greenMode to work?
qRgb(current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1])
You're using an identical value for all three RGB color components. R == G == B will always result in grey.
To convert an RGB value to green/black, you could for example convert to greyscale (using the luminosity method) and then tint it green:
const int v = qRound( 0.21 * pixel.red() + 0.71 * pixel.green() + 0.07 * pixel.blue() );
setPixel( i, j, qRgb( 0, v, 0 ) ); // only the green channel is non-zero
(There are probably more sophisticated methods for the tinting).
For the scaling, I assume the error occurs when calculating the index into current_image. You're using the same (i+j*w+1) for both images, but the grayscale image has 1 channel and the second one has 3 (the third cvCreateImage argument), so the latter has two more bytes per pixel.
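For illustration, here is a sketch of what the greenMode loop could look like with proper 3-channel indexing, assuming current_image is the RGB copy created with cvCreateImage(cvGetSize(frame), 8, 3) as above; rows are addressed through widthStep because IplImage rows may be padded:

for (int j = 0; j < h; j++)
{
    // Each row starts at imageData + j * widthStep; each pixel is 3 bytes (R, G, B after CV_BGR2RGB).
    const unsigned char *row =
        reinterpret_cast<const unsigned char *>(current_image->imageData) + j * current_image->widthStep;
    for (int i = 0; i < w; i++)
    {
        const int r = row[i * 3 + 0];
        const int g = row[i * 3 + 1];
        const int b = row[i * 3 + 2];
        const int v = qRound(0.21 * r + 0.71 * g + 0.07 * b); // luminosity, as suggested above
        current_qimage.setPixel(i, j, qRgb(0, v, 0));         // only the green channel -> green/black
    }
}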

Vector iterator not dereferenceable...?

void calc_distance(vector<CvPoint> fingerTips, CvPoint palmCenter, IplImage *source)
{
    double distance = 0;
    vector<CvPoint>::iterator p;
    if (fingerTips.size() != NULL && fingerTips.size() <= 5 && fingerTips.size() >= 1)
    {
        if ((fingerTips.size() > 1) || (fingerTips.size() <= 5))
        {
            distance = 0;
            p = fingerTips.begin();
            CvPoint forefinger = *p;
            CvPoint secondfinger;
            for ( ; p != fingerTips.end(); )
            {
                p++;
                secondfinger = *p;
                distance += sqrt(double((forefinger.x - secondfinger.x) * (forefinger.x - secondfinger.x) +
                                        (forefinger.y - secondfinger.y) * (forefinger.y - secondfinger.y)));
                cvLine(source, forefinger, secondfinger, cvScalar(1.0, 1.0, 1.0), 3, 8);
                forefinger = secondfinger;
            }
        }
    }
}
As parameters I passed the vector of fingertip coordinates and the center of the palm, along with the image source, but I'm still getting the error "vector iterator not dereferencable"...
The error occurs in the 2nd iteration, at the "secondfinger = *p;" line.
Here I am trying to get the distance between each pair of fingers and summing them up to get the final distance.
Please help me.
You check p != end, then p++, then dereference. You should just use a standard for loop:
for ( ; p != fingerTips.end(); p++)
{
    // p++;  // This is gone now. It's up in the for loop.
    ...
}
instead of having p++ inside the loop body.
You need to dereference p before you increment it; otherwise, you end up trying to dereference fingerTips.end() when you get to the end of the collection, which cannot be dereferenced.
for ( ; p != fingerTips.end(); ++p)
{
    secondfinger = *p;
    distance += sqrt(double((forefinger.x - secondfinger.x) * (forefinger.x - secondfinger.x) +
                            (forefinger.y - secondfinger.y) * (forefinger.y - secondfinger.y)));
    cvLine(source, forefinger, secondfinger, cvScalar(1.0, 1.0, 1.0), 3, 8);
    forefinger = secondfinger;
}
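If you prefer to sidestep iterator arithmetic entirely, an index-based sketch of the same summation (same assumptions about the fingerTips vector and the source image as above):

// Walk consecutive fingertip pairs by index; requires at least two points.
double distance = 0;
for (size_t k = 1; k < fingerTips.size(); ++k)
{
    const CvPoint a = fingerTips[k - 1];
    const CvPoint b = fingerTips[k];
    distance += sqrt(double((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)));
    cvLine(source, a, b, cvScalar(1.0, 1.0, 1.0), 3, 8);
}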
