Move prediction in a chess game?

Is it possible to learn a chess player's patterns and predict their most likely next move?
Is there an algorithm that can solve this problem? Can you suggest any references where I could find such an algorithm?

You could start with http://en.wikipedia.org/wiki/Computer_chess#Leaf_evaluation and http://en.wikipedia.org/wiki/Evaluation_function, and also take a look at http://en.wikipedia.org/wiki/Deep_Thought_%28chess_computer%29
Maybe that helps...

Programmer Puzzle: Encoding a chess board state throughout a game
Chess game in JavaScript
Is there a perfect algorithm for chess?
These could help; technically, there is no computer with enough power to solve chess perfectly.
Search Stack Overflow for more perspectives.

To a degree. An easy means of prediction in AI is the use of case-based reasoning agents.
Assuming your chess pattern detector has been trained on a fairly large number of games, it will indeed be able to guess an opponent's moves based on the current board state and previous moves. The correctness of its predictions is, of course, dependent on how many games it has been trained on, as well as on the content of those games.
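A minimal sketch of that idea in Python, assuming games are available as lists of (position, move) pairs; the data format and function names are invented for illustration, and a real case-based agent would fall back to the most similar known position instead of requiring an exact match:

```python
from collections import Counter, defaultdict

def train(games):
    """Build a lookup from board positions (e.g. FEN strings) to the moves
    a player has made from them in past games."""
    cases = defaultdict(Counter)
    for game in games:                      # each game: list of (position, move)
        for position, move in game:
            cases[position][move] += 1
    return cases

def predict(cases, position):
    """Return the most frequently played move for this position, or None if
    the position has never been seen before."""
    if position in cases:
        return cases[position].most_common(1)[0][0]
    return None

# Usage: two toy "games" recorded as (position, move) pairs.
games = [
    [("start", "e2e4"), ("after_e4_e5", "g1f3")],
    [("start", "e2e4"), ("after_e4_c5", "g1f3")],
]
cases = train(games)
print(predict(cases, "start"))   # -> "e2e4"
```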

Related

What's the best method to implement multiplayer on a Unity Billiard game?

I'm making an online billiard game. I've finished all the mechanics for single player, the online account system, the online inventory system, etc. Everything is fine, but I've now reached the hardest part: the multiplayer. I tried syncing the position of each ball every frame, but the movement wasn't smooth at all; the balls would move back and forth and it looked bad in general. Does anyone have a solution for this? How do other billiard games, like the one on Miniclip, do it? I'm honestly stuck and frustrated, as it took me a while to learn Photon networking only to find out it's not that good at handling physics synchronization.
Would uNet be a better choice here?
I appreciate any help you give me. Thank you!
This is done with PUN already: https://www.assetstore.unity3d.com/en/#!/content/15802
You can try to play with the synchronization settings or implement a custom OnPhotonSerializeView (see DemoSynchronization in the PUN package). Make sure that physics simulation is disabled on the synchronized clients. See DemoBoxes for a physics simulation sample.
Or, if the balls can only move along lines, do not send all positions every frame. Send positions and velocities only when balls collide, and do simple velocity simulation in between. This can work even with more complex physics, but the general rule is the same: synchronize at key points. Of course, this is not as simple as automatic synchronization.
Also note that classic billiards is a turn-based game, so you do not have all the complexity of player interaction. In the worst case you can 'record' the simulation on the current player's client and 'play it back' on the others.
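A rough sketch of the 'synchronize at key points' idea, with placeholder send/receive functions standing in for the Photon layer (this is not the PUN API, just the shape of the logic both clients would run):

```python
import time

# Hypothetical network layer; stands in for PUN RPCs or OnPhotonSerializeView.
def send_snapshot(snapshot): ...
def receive_snapshot(): ...   # returns a snapshot dict or None

DT = 1.0 / 60.0
FRICTION = 0.99

def step(balls, dt):
    """Simple deterministic integration that both sides run between key points."""
    for ball in balls:
        ball["x"] += ball["vx"] * dt
        ball["y"] += ball["vy"] * dt
        ball["vx"] *= FRICTION
        ball["vy"] *= FRICTION

# --- Shooting player's client ---
def owner_update(balls, collision_happened):
    step(balls, DT)
    if collision_happened:
        # Key point: send full positions and velocities only on collisions.
        send_snapshot({"t": time.time(), "balls": [dict(b) for b in balls]})

# --- Other clients ---
def remote_update(balls):
    snapshot = receive_snapshot()
    if snapshot is not None:
        # Snap to the authoritative state at the key point...
        for ball, auth in zip(balls, snapshot["balls"]):
            ball.update(auth)
    # ...and run the same cheap simulation in between.
    step(balls, DT)
```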

Improve a puzzle game AI using machine learning

My motivation for asking this question is that I have found an interesting problem using machine learning on a graph data set. There are papers out there on this subject. For example, "Learning from Labeled and Unlabeled Data on a Directed Graph" (Zhou, Huang, Scholkopf). However I do not have a background in artificial intelligence or machine learning so I would like to write a smaller program for a more general audience before working on anything scientific.
Several years ago I wrote a game called Solumns. It is an evil variant of the classic Sega game Columns. Inspired by bastet, it brute-forces colour combinations that are disadvantageous to the player. It is hard.
I would like to improve upon its AI. I figure the game space (grid of coloured blocks, column positions, column colours) fits a graph structure better than a list of attributes. If that is the case, then this problem is similar to my research problem.
I am considering using the following high-level plan to solve this problem:
1. What would be useful is if the AI opponent could assign a fitness rating to a possible move based on more data than just the number of existing squares on the board after the move. I'm thinking of using a categoriser: train on the move and all past moves, using the course of the rest of the game as a measure of success.
2. Also develop a player bot that can beat the standard AI opponent. This could be useful when generating data for 1.
3. Use a sample of the player bot's games to build an AI that beats the strategic player. Maybe use this data for 1, too.
4. Write a fun AI that delegates to a possible combination of 1, 3, and the original AI, when appropriate, which I will determine through experimentation to find heuristic fudge factors.
To build the player bot, I figured I could use brute force to compute the sample space. Then use machine learning techniques such as those used in building Random Forests to create some kind of decision maker.
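As a sketch of what that decision maker might look like, here is a random-forest move rater using scikit-learn; the board encoding, move encoding, and outcome labels below are invented placeholders, not anything from the actual game:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def board_features(board, move):
    """Flatten a (board, move) pair into a feature vector. The board is
    assumed to be a small 2D array of colour indices and the move a
    (column_index, rotation) pair; a real encoding would need more care."""
    return np.concatenate([np.ravel(board), np.asarray(move)])

# Toy training data standing in for recorded games: each sample is a
# (board, move) pair labelled 1 if the rest of that game went well for
# the player, 0 otherwise.
rng = np.random.default_rng(0)
boards = rng.integers(0, 4, size=(200, 6, 5))   # 200 random 6x5 boards
moves = rng.integers(0, 5, size=(200, 2))       # 200 random (column, rotation) moves
labels = rng.integers(0, 2, size=200)           # placeholder outcomes

X = np.array([board_features(b, m) for b, m in zip(boards, moves)])
y = labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

def rate_move(board, move):
    """Fitness rating for a candidate move: predicted probability of a good
    outcome for the player. An 'evil' opponent would choose the column that
    minimises the player's best achievable rating."""
    return model.predict_proba([board_features(board, move)])[0, 1]

print(rate_move(boards[0], moves[0]))
```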
Building the AI opponent is where I am most perplexed.
Specific questions then:
1. Rating moves sounds like the kind of thing people do with chess, and although I'll admit my approach may be ignorant, there is a lot about this in the literature and I can learn from it. The question is: should the player bot and the AI opponent create the data sample? It sounds like I'm getting confused between different sample sets, which sounds like a recipe for bad training. Should I just play the game a bunch?
2. What kind of algorithm should I consider for training the player bot against the current AI?
3. What kind of algorithm should I consider for training an AI opponent against the player bot?
Extra info:
I am deliberately not asking if this strategy is a good fit for programming a game AI. Sure, you might be able to write a great AI more simply (after all it is already difficult to play). I want to learn about machine learning by doing something fun.
The original game was written in a mixture of racket and C. I am porting it to jruby for various reasons, likely with extensions or RPC calls to another faster language. I am not so interested in existing language-specific solutions here. I want to develop skills in this area and am not afraid to implement an algorithm for myself.
You can get the source for the original game here
I would not go for machine learning here. Look at game-playing AIs.
You have an adversarial game (like Go) with two asymmetric players:
the user, who places the pieces,
and the computer, which chooses the pieces (instead of choosing them at random).
I would probably start with Monte Carlo Tree Search.
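For a flavour of what that looks like, here is a bare-bones MCTS sketch over a toy game; the game interface and the Nim stand-in are my own assumptions, not anything specific to Solumns:

```python
import math
import random

class Nim:
    """Toy game so the sketch runs: players alternately take 1-3 stones,
    and whoever takes the last stone wins."""
    def __init__(self, stones=15, player_to_move=1):
        self.stones = stones
        self.player_to_move = player_to_move
    def legal_moves(self):
        return list(range(1, min(3, self.stones) + 1))
    def apply(self, move):
        return Nim(self.stones - move, -self.player_to_move)
    def is_terminal(self):
        return self.stones == 0
    def winner(self):
        return -self.player_to_move   # the player who just moved took the last stone

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.just_moved = -state.player_to_move   # player who made `move`
        self.children = []
        self.untried = state.legal_moves()
        self.visits = 0
        self.wins = 0.0
    def uct_child(self, c=1.4):
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.apply(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_moves()))
        winner = state.winner()
        # 4. Backpropagation: credit each node from its own player's view.
        while node is not None:
            node.visits += 1
            node.wins += 1.0 if winner == node.just_moved else 0.0
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(Nim(stones=15)))   # should converge on taking 3 stones
```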

Locoroco game collision detection

Can anyone explain the math/physics behind the popular PSP game Locoroco? My understanding of the game's collision mechanism is that the whole world is made of Bezier curves. If so, my question is how to build such a huge, seemingly endless level. As for the game's character, I guess it's blob physics?
Is the level tile-based? Please help me figure out where to start researching this topic.
http://www.gotoandplay.it/_articles/2003/12/bezierCollision.php
Vertex-level physics with some optimized triangulation algorithm would be my approach.
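As a sketch of what vertex-level blob physics can look like (an illustration of the idea only, not how Locoroco actually does it): model the character as a ring of point masses joined by springs, with an outward pressure force keeping it inflated and a simple floor collision.

```python
import math

class Blob:
    """Soft-body blob: N point masses on a ring, joined to their neighbours
    by springs, with an outward 'pressure' force that keeps it inflated."""
    def __init__(self, cx, cy, radius, n=16):
        self.px = [cx + radius * math.cos(2 * math.pi * i / n) for i in range(n)]
        self.py = [cy + radius * math.sin(2 * math.pi * i / n) for i in range(n)]
        self.vx = [0.0] * n
        self.vy = [0.0] * n
        self.rest = 2 * radius * math.sin(math.pi / n)   # rest length of each edge
        self.n = n

    def step(self, dt, stiffness=300.0, pressure=8.0, gravity=9.8, damping=0.98):
        cx = sum(self.px) / self.n
        cy = sum(self.py) / self.n
        for i in range(self.n):
            j = (i + 1) % self.n
            # Spring force between neighbouring vertices.
            dx, dy = self.px[j] - self.px[i], self.py[j] - self.py[i]
            dist = math.hypot(dx, dy) or 1e-9
            f = stiffness * (dist - self.rest)
            fx, fy = f * dx / dist, f * dy / dist
            self.vx[i] += fx * dt; self.vy[i] += fy * dt
            self.vx[j] -= fx * dt; self.vy[j] -= fy * dt
            # Outward pressure from the blob's centre keeps it roughly round.
            ox, oy = self.px[i] - cx, self.py[i] - cy
            od = math.hypot(ox, oy) or 1e-9
            self.vx[i] += pressure * ox / od * dt
            self.vy[i] += pressure * oy / od * dt
            # Gravity (positive y is down here) and damping.
            self.vy[i] += gravity * dt
            self.vx[i] *= damping; self.vy[i] *= damping
        for i in range(self.n):
            self.px[i] += self.vx[i] * dt
            self.py[i] += self.vy[i] * dt
            # Collision with a flat floor at y = 0.
            if self.py[i] > 0.0:
                self.py[i] = 0.0
                self.vy[i] *= -0.3   # crude bounce

blob = Blob(cx=0.0, cy=-2.0, radius=1.0)
for _ in range(300):
    blob.step(1 / 60)
print(round(max(blob.py), 2))   # lowest vertex should have come to rest near the floor
```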
If you are looking into making a game similar to Locoroco, I'd use Google to get answers. A quick search gave me this:
http://www.allegro.cc/forums/thread/587860

Hidden Markov Models instead of FSM in a first person shooter game

I have been working on a course project in which we implemented an FPS using FSMs, showing a top-down 2D view of the game and representing the bots and players as circles. The behaviour of the bots was deterministic. For example, if a bot's health drops below a threshold and the player is visible, the bot flees; otherwise it looks for health packs.
However, I feel that in this case the bot isn't showing much intelligence, as most of the decisions it makes are based on rules we decided in advance.
What other techniques could I use to implement some real intelligence in the bot? I've been looking at HMMs, and I feel they might help introduce more uncertainty into the bot, so that it becomes more autonomous in its decisions rather than depending on predefined rules.
What do you guys think? Any advice would be appreciated.
I don't think using a hidden Markov model would really be more autonomous. It would just be following the more opaque rules of the model rather than the explicit rules of the state machine. It's still deterministic. The only uncertainty they bring is to the observer, who doesn't have a simple ruleset to base predictions on.
That's not to say they can't be used effectively - if I recall correctly, several bots for FPS games used this sort of system to learn from players and develop their own AI.
But this does depend exactly on what you want to model with the process. AI is not really about algorithms, but about representation. If all you do is pick the same states that your current FSM has and observe an existing player's transitions, you're not likely to get a better system than having an expert input carefully tweaked rules for an FSM.
Given that you're not going to manage to implement "some real intelligence" as that is currently considered beyond modern science, what is it you want to be able to create? Is it a system that learns from its own experiments? A system that learns by observing human subjects? One that deliberately introduces unusual choices in order to make it harder for an opponent to predict?
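If the "learn from players" route mentioned above is what you're after, here is a minimal sketch of a bot whose behaviour transitions are estimated from observed play. This is a plain Markov chain rather than a full HMM, and the state names and logs are invented:

```python
import random
from collections import Counter, defaultdict

STATES = ["attack", "flee", "seek_health", "patrol"]

def estimate_transitions(observed_sequences):
    """Estimate P(next state | current state) from observed state sequences,
    e.g. logged from a human player or an existing bot."""
    counts = defaultdict(Counter)
    for seq in observed_sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return {s: {t: c / sum(counts[s].values()) for t, c in counts[s].items()}
            for s in counts}

def next_state(transitions, current):
    """Sample the bot's next behaviour instead of picking it by fixed rules."""
    options = transitions.get(current, {s: 1 / len(STATES) for s in STATES})
    return random.choices(list(options), weights=list(options.values()))[0]

# Toy observed data standing in for logged games.
logs = [
    ["patrol", "attack", "flee", "seek_health", "patrol"],
    ["patrol", "attack", "attack", "flee", "seek_health"],
]
transitions = estimate_transitions(logs)

state = "patrol"
for _ in range(5):
    state = next_state(transitions, state)
    print(state)
```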

Intersection of a line

I am creating a game in which I want to determine when a single line intersects itself. For example, if I draw a circle on the screen, I want to determine when I have closed the circle and figure out which points lie within the enclosed area.
Edit: OK, to clarify: I am creating a lasso in a game and trying to figure out how I can tell when the lasso's loop is closed. Is there a nice algorithm for doing this? I've heard there is one, but I haven't found any references searching on my own.
Edit: Adding more detail
I am working with an array of points. These points happen to wrap around and close. I am trying to figure out a good way of testing for this.
Thanks for the help.
Thoughts?
Your question has been addressed many times in the game development literature. It falls under the broad category of "collision detection." If you are interested in understanding the underlying algorithms, the field of computational geometry is what you want.
Bounding rectangle collision detection in Java
Collision detection on Stack Overflow
Circle collision detection in C#
Collision detection algorithms
Detailed explanation of collision detection algorithms
Game development books will also describe collision detection algorithms. One book of this sort is Game Physics by Eberly.
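For the lasso case specifically, the core primitive those algorithms rely on is a segment-segment intersection test applied to the recorded trail of points. A minimal sketch (not taken from the references above):

```python
def ccw(a, b, c):
    """Positive if a->b->c turns counter-clockwise, negative if clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 crosses segment p3-p4 (ignoring collinear overlap)."""
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def loop_closed(points):
    """Check whether the newest segment of the lasso crosses any earlier one.
    `points` is the ordered trail of positions recorded as the lasso is drawn."""
    if len(points) < 4:
        return False
    new_a, new_b = points[-2], points[-1]
    # Skip the segment that shares an endpoint with the new one.
    for i in range(len(points) - 3):
        if segments_intersect(points[i], points[i + 1], new_a, new_b):
            return True
    return False

# Usage: a trail that loops back over itself.
trail = [(0, 0), (4, 0), (4, 4), (0, 4), (2, -1)]
print(loop_closed(trail))   # True: the last segment crosses the first one
```

Once the crossing segment is found, the points from that crossing onward form the closed loop, and a standard point-in-polygon test can tell you what lies inside it.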
