Here are some interesting posts made recently.

Game developers are increasingly keen to try ML techniques, but it does take some know-how and experience.

- The Secret to Building Game AI that Learns Realistically
- Alternatives to Online Learning for Actor Behaviors
- Game AI: Beauty and the Beast

Planners have proven themselves very effective in recent games, and hierarchical planners are the next logical step.

`AiGameDev.com` also has summaries of recent events, very useful for catching up with what went on…

- AIIDE ‘07 Conference Coverage
- Pushing the Limits of Game AI Technology
- Apply AI 2007 Roundtable Report

Be sure to check out the site/blog if you’re interested in game AI!

You can subscribe to articles like these by signing up for the combined artificial intelligence feed. Be sure to let me know (using the form below) if you have a topic close to your heart, so we can bump it up the list of things to write about.

On a related note, I’m exploring ways to liven up this kind of content and drag it kicking and screaming into the world of Web 2.0. Any feedback or comments on the subject are welcome.

Assume your program has data samples and needs to draw certain conclusions about each of them. This is the case when mining for information in large databases, or when trying to find an appropriate behaviour for a particular situation. You can apply decision trees to this problem as long as:

- **The data samples each have attributes, with discrete or continuous values.** For example, a symbol indicating whether it was overcast or not, and the humidity level.
- **The predictions must also be discrete or continuous values.** For example, the estimated temperature, or a prediction of whether it will rain or not.

Decision trees assume that it’s possible to check the attributes of the data samples multiple times, and use this information to refine the predicted value.

A decision tree is based on a hierarchical data-structure, where each branch in the tree represents a condition. This condition typically evaluates to a boolean, with true/false each corresponding to a child node. However, it’s also possible to have more than two child nodes — for example if ranges are used as conditions. The leaves in the tree store the predictions for the target value(s).

A simple algorithm is used to traverse the tree, evaluating each decision and then selecting the appropriate sub-tree to enter. The algorithm is applied recursively until a leaf node is reached. This process is very efficient: little memory is required to store a tree, and only simple conditional checks need to be evaluated while predicting.
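As a minimal sketch of this structure and traversal, here is one possible representation in Python. The `Branch`, `Leaf`, and `predict` names, and the toy weather tree, are illustrative assumptions rather than a standard API:

```python
class Leaf:
    """Stores the predicted target value."""
    def __init__(self, prediction):
        self.prediction = prediction

class Branch:
    """A condition that maps a sample to one of several child nodes."""
    def __init__(self, test, children):
        self.test = test          # callable: sample -> child index
        self.children = children

def predict(node, sample):
    # Walk down the tree, evaluating conditions until a leaf is reached.
    while isinstance(node, Branch):
        node = node.children[node.test(sample)]
    return node.prediction

# Toy weather tree: check the outlook first, then the humidity level.
tree = Branch(
    lambda s: 0 if s["overcast"] else 1,
    [
        Leaf("rain"),
        Branch(lambda s: 0 if s["humidity"] > 70 else 1,
               [Leaf("rain"), Leaf("no rain")]),
    ],
)

print(predict(tree, {"overcast": False, "humidity": 40}))  # -> no rain
```

Note how the traversal itself needs no recursion or extra memory; it simply follows one path from the root to a leaf.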

Decision trees can be used in almost any classification problem (to predict discrete symbols) or regression problem (to predict continuous values). They are particularly useful in data-mining because of their efficiency. However, decision trees cannot solve all types of problems, because of the assumptions of the hierarchical model.

In practice, to apply the traversal algorithm successfully, a good decision tree is needed. This can be obtained in multiple ways:

- The decision tree may be edited by an expert.
- The decision tree can be induced from data samples.

Since expert solutions may take time to develop and tune, supervised learning is often used.
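One way to induce such a tree from data is to greedily split on the attribute that yields the highest information gain, in the style of ID3. The sketch below assumes discrete attributes and labels; the `entropy` and `induce` names and the dict-based tree encoding are illustrative choices, not a fixed algorithm from the text:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of discrete labels."""
    total = len(labels)
    return -sum(c / total * log2(c / total) for c in Counter(labels).values())

def induce(samples, labels, attributes):
    # Leaf: all samples agree, or there is nothing left to split on.
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]

    # Information gain of splitting on a given attribute.
    def gain(attr):
        remainder = 0.0
        for v in {s[attr] for s in samples}:
            sub = [l for s, l in zip(samples, labels) if s[attr] == v]
            remainder += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - remainder

    best = max(attributes, key=gain)
    rest = [a for a in attributes if a != best]
    # Branch: one sub-tree per observed value of the chosen attribute.
    return {best: {
        v: induce([s for s in samples if s[best] == v],
                  [l for s, l in zip(samples, labels) if s[best] == v],
                  rest)
        for v in {s[best] for s in samples}
    }}

samples = [
    {"outlook": "sunny", "windy": True},
    {"outlook": "sunny", "windy": False},
    {"outlook": "rainy", "windy": True},
    {"outlook": "rainy", "windy": False},
]
labels = ["play", "play", "stay", "stay"]

tree = induce(samples, labels, ["outlook", "windy"])
print(tree)  # the split lands on "outlook", which perfectly separates the labels
```

On this toy data the `windy` attribute carries no information, so the induced tree consists of a single split on `outlook`.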

- See papers about the decision tree in the resources section on AI Depot.

- **The current state is (at least partly) available to the program.** For example, the program knows the position of the pieces on the game board.
- **It’s possible to search through successor states in the future.** The program knows how the pieces end up when a move is made.
- **There’s a way to evaluate the utility of certain states.** For instance, knowing the winning/losing conditions.

Since this is a competitive setting, you can assume that the other players pick actions resulting in the worst outcome for your program. Minimax then works by taking the predictions of future states and working backwards to determine the utility of predecessor states. Once your program has estimated the utility of the next states, it can easily select the best one.

Minimax is a policy for selecting actions with optimal utility, assuming the worst from other programs. Typically, minimax is combined with a depth-first search algorithm that traverses the game tree, predicting what the opponents are likely to do and what your program should do. As such, there are two different levels in the search tree: 1) for the opponents and 2) for team-members.

- The level in the game tree dealing with opponents is known as `MIN`, since they will typically **minimize** the utility of the outcome state.
- Conversely, the level dealing with team-members is known as `MAX`, since they will typically **maximize** the utility of the outcome state.

The minimax search, in effect, minimizes the maximum possible loss, or looking at it the other way, maximizes the minimum gain.

Minimax has been applied to many games with great success, especially when the branching factor is small (i.e. there are few options each turn), the situation is deterministic, and the state is fully observable.

However, in practice there are significant challenges in applying minimax to a logic game. Notably, the game tree is usually too large to search to the end, so good estimator functions are required for any state (not just terminal states). This is not a trivial problem, and it significantly affects the quality of the results. Most state-estimation functions are hand-written for each problem by experts.

The pseudo-code for a minimax search looks like this:

```python
def minimax(node, depth):
    # Stop at terminal states, or when the search depth is exhausted.
    if node.is_terminal() or depth == 0:
        return node.calculate_heuristic()

    if node.is_opponent():
        # MIN level: opponents choose the child with the lowest utility.
        a = float("inf")
        for child in node.get_children():
            a = min(a, minimax(child, depth - 1))
    else:
        # MAX level: your program chooses the child with the highest utility.
        a = float("-inf")
        for child in node.get_children():
            a = max(a, minimax(child, depth - 1))
    return a
```

On the bright side, alpha-beta pruning can be implemented easily to improve the efficiency of the search.
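As a sketch of how the pruning fits into the search, here is one possible self-contained version. The `Node` class is a hypothetical stand-in for whatever game-tree interface your program uses; the cut-off conditions are the standard ones:

```python
class Node:
    """Hypothetical game-tree node used to illustrate the search."""
    def __init__(self, value=None, children=(), opponent=False):
        self.value = value            # heuristic value (leaves only)
        self.children = list(children)
        self.opponent = opponent      # True on MIN levels

    def is_terminal(self):
        return not self.children

    def calculate_heuristic(self):
        return self.value

def alphabeta(node, depth, alpha=float("-inf"), beta=float("inf")):
    if node.is_terminal() or depth == 0:
        return node.calculate_heuristic()
    if node.opponent:
        # MIN level: tighten beta; stop if MAX already has a better option.
        value = float("inf")
        for child in node.children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cut-off
        return value
    else:
        # MAX level: tighten alpha; stop if MIN already has a better option.
        value = float("-inf")
        for child in node.children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cut-off
        return value

# Toy game tree: two MIN nodes under a MAX root.
root = Node(children=[
    Node(opponent=True, children=[Node(value=3), Node(value=5)]),
    Node(opponent=True, children=[Node(value=2), Node(value=9)]),
])
print(alphabeta(root, depth=2))  # -> 3
```

In this toy tree, once the second `MIN` node finds the value 2 (already worse than the 3 guaranteed to `MAX`), the remaining child with value 9 is never visited.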

In classical statistical decision theory, an estimator δ is used to estimate a parameter θ. Also assume a risk function R(θ, δ), usually specified as the integral of a loss function. Given this framework, δ is called “*minimax*” if it satisfies:

    sup_θ R(θ, δ) = inf_δ′ sup_θ R(θ, δ′)

- See papers on minimax search in the resources section on
**AI Depot**.

As artificial intelligence enthusiasts and developers, we’re interested in hearing what you have to say. There are lots of ideas and content in the pipeline for the **AI Depot**, but we’re particularly interested in what you want from us.

Those of you who fill in the questionnaire will get exclusive access to the new content before anyone else! You also get the satisfaction of knowing you helped out in shaping the future direction of this site.

We’re looking forward to hearing from you, but be sure to subscribe to our new artificial intelligence feed to hear from us daily!
