How to Develop an iPad Board Game App: Part 2/2

In this second and final part of the series, you will keep score, add some nice animations, and add a computer opponent. In the end, you’ll have a complete and fun game. By Colin Eberhardt.


Robot Redux - Making the Computer a Little Smarter

The current technique that the computer opponent employs is not a bad one. However, it fails to take into consideration what its opponent might do on the next turn, which a good Reversi player would always do. A high-scoring move by the computer might still be countered by an even better move from its opponent on the following turn.

To improve the computer's chance of success (and to make the human player sweat a little more!), you will implement a multi-level look-ahead model instead of the current model, which only looks one turn ahead to determine the optimum move.

The computer currently looks for the next move that provides the maximum score for the computer. As an example, if the computer determines that there are three different moves that it can make, then it will pick the one with the highest score as illustrated below:

However, if you look at the next turn, the opponent (i.e. you!) will be trying to minimize the exact same score that the computer is trying to maximize. In the example below, if you look at what happens on the next turn, the situation will be a bit different:

With the one-step look-ahead model, the computer would have picked the move that resulted in a score of ‘5’, i.e. the move marked ‘A’. However, on the next turn, a skilled human opponent would pick the move that gives the minimum score for the computer, and thus would select move ‘B’.

You can see that by looking ahead to the next round of moves, it's easy to determine that ‘C’ is the move that will result in the best score, with the expectation that the next move after that will be ‘D’.

But why stop there? The computer, in its infinite wisdom and computing power, can continue to analyze the tree of potential moves and game states, even all the way to the end of the game! In comparison, a human competitor will struggle to look more than a few steps ahead.

The approach described here is a minimax algorithm where one player seeks to maximize their game score, while their opponent seeks to minimize this same score.

Time to put all this theory into practice and improve the computer's chances of winning!

Open up SHCComputerOpponent.h and change the existing init method to add another argument:

- (id) initWithBoard:(SHCReversiBoard*)board
               color:(BoardCellState)computerColor
            maxDepth:(NSInteger)depth;

Looking ahead to the very end of the game will take a lot of processing time, as there are many, many possible game states to analyze. Therefore, the depth argument will be used to specify how many steps to evaluate when looking for the optimum position.

Open up SHCComputerOpponent.m and add an instance variable that stores the given depth:

NSInteger _maxDepth;

Replace the init method with the following code to use the new depth parameter:

- (id)initWithBoard:(SHCReversiBoard *)board color:(BoardCellState)computerColor maxDepth:(NSInteger)depth
{
    if (self = [super init])
    {
        _board = board;
        _computerColor = computerColor;
        _maxDepth = depth;
        [_board.reversiBoardDelegate addDelegate:self];
    }
    return self;
}

Before diving into the minimax logic, you'll need to add some utility methods. At the top of SHCComputerOpponent.m add the definition for the following struct immediately under the #import lines:

typedef struct
{
    NSInteger column;
    NSInteger row;
} Move;

This will make it easier to pass moves between methods.

Next, add the following method to SHCComputerOpponent.m in the implementation section:

// returns an array of valid next moves for the given board
- (NSArray*) validMovesForBoard:(SHCReversiBoard*) board
{
    NSMutableArray* moves = [[NSMutableArray alloc] init];
    
    for (NSUInteger row = 0; row < 8; row++)
    {
        for (NSUInteger col = 0; col < 8; col++)
        {
            if ([board isValidMoveToColumn:col andRow:row])
            {
                Move move = {.column = col, .row = row};
                [moves addObject:[NSValue valueWithBytes:&move objCType:@encode(Move)]];
            }
        }
    } 
    
    return moves;
}

This method returns an array of all the possible next moves for the given board state. Notice that because Move is a struct, it needs to be boxed within an NSValue object since NSArrays and NSMutableArrays can only store objects.

Replace the previous simplistic implementation of makeNextMove with the following:

// Determines which move to make
- (void)makeNextMove
{
    Move bestMove = {.column = -1, .row = -1};
    NSInteger bestScore = NSIntegerMin;
    
    // check every valid move for this particular board, then select the one with the best 'score'
    NSArray* moves = [self validMovesForBoard:_board];
    for(NSValue* moveValue in moves)
    {
        Move nextMove;
        [moveValue getValue:&nextMove];
        
        // clone the current board and make this move
        SHCReversiBoard* testBoard = [_board copyWithZone:nil];
        [testBoard makeMoveToColumn:nextMove.column andRow:nextMove.row];
        
        // determine the score for this move
        NSInteger scoreForMove = [self scoreForBoard:testBoard depth:1];
        
        // pick the best
        if (scoreForMove > bestScore || bestScore == NSIntegerMin)
        {
            bestScore = scoreForMove;
            bestMove.row = nextMove.row;
            bestMove.column = nextMove.column;
        }
    }
    
    if (bestMove.column != -1 && bestMove.row != -1)
    {
        [_board makeMoveToColumn:bestMove.column andRow:bestMove.row];
    }
}

The improved makeNextMove method iterates over all of the possible next moves, determines the score for each with scoreForBoard, then picks the best move.

Still working in SHCComputerOpponent.m, add the following scoreForBoard:depth: method, where everything comes together:

// Computes the score for the given board
- (NSInteger)scoreForBoard:(SHCReversiBoard*)board depth:(NSInteger)depth
{
    // if we have reached the maximum search depth, then just compute the
    // score of the current board state
    if (depth >= _maxDepth)
    {
        return _computerColor == BoardCellStateWhitePiece ?
            board.whiteScore - board.blackScore :
            board.blackScore - board.whiteScore;
    }
    
    NSInteger minMax = NSIntegerMin;
    
    // check every valid next move for this particular board
    NSArray* moves = [self validMovesForBoard:board];
    for(NSValue* moveValue in moves)
    {
        // Extract the Move struct from the NSValue box
        Move nextMove;
        [moveValue getValue:&nextMove];
        
        // clone the current board and make the move
        SHCReversiBoard* testBoard = [board copyWithZone:nil];
        [testBoard makeMoveToColumn:nextMove.column andRow:nextMove.row];
        
        // compute the score for this board with a recursive call
        NSInteger score = [self scoreForBoard:testBoard depth:depth + 1];
        
        // pick the best score
        if (depth % 2 == 0)
        {
            if (score > minMax || minMax == NSIntegerMin)
            {
                minMax = score;
            }
        }
        else
        {
            if (score < minMax || minMax == NSIntegerMin)
            {
                minMax = score;
            }
        }
    }
    
    return minMax;
}

The scoreForBoard:depth: method uses a recursive minimax algorithm to determine the score for a given board. The method is called from makeNextMove with a depth of 1 to start off.

If the maximum search depth has been reached (depth >= _maxDepth), the score is simply the difference between the computer's piece count and its opponent's for the current game state.

If the search depth has not been reached, then all potential next moves are considered. The highest score is selected at even depths, where it is the computer's turn to move, and the lowest at odd depths, where the opponent moves and will try to minimize the computer's score.

The recursion can be seen in the code just after the board has been cloned and a move has been made, where the score is calculated by calling scoreForBoard on itself and passing a depth of depth+1.

Through this recursive approach, the minimax value is ‘bubbled up’ for each move, and the selected move is the one that gives the best result for the given search depth.

There's only one final step required to put this new and improved computer player into action!

Open up SHCViewController.m, locate the line in the viewDidLoad method where the computer player is created, and replace that line with the following:

_computer = [[SHCComputerOpponent alloc] initWithBoard:_board
                                                 color:BoardCellStateWhitePiece
                                              maxDepth:5];

This creates a computer player that searches the tree to a depth of 5.

Build and run your app!

See if you can still beat the computer at this level! Many have tried and failed — are you "the one"? :]
