Abstract: In this post, we start writing a solution to a variation of the hand game “Chopsticks”. We create a graph that maps all of the possible moves and states, and we implement a version of minimax called negamax.

Chopsticks is a popular hand game for two or more players. In Chopsticks, players extend a number of fingers on each hand and, by “attacking” an opponent’s hands, add fingers to them. The game has a wide array of variations, many of which are popular, but for the sake of this solution we will be using the “Cutoff” and “Even Split” variations.
Here are the full rules for our version; they are based on the rules found on the Chopsticks Wikipedia page (a short code sketch of the attack and split mechanics follows the list):
- Each player begins with one finger raised on each hand. After the first player, turns proceed clockwise.
- On a player’s turn, they must either attack or split, but not both.
- To attack, a player uses one of their live hands to strike at an opponent’s live hand. The number of fingers on the opponent’s struck hand increases by the number of fingers on the hand used to strike.
- If any hand of any player would have five or more fingers, then that hand is considered “dead”. To represent this, it is traditional to make a fist, that is, to have no fingers raised.
- To split, a player must possess one dead hand and a second hand that has either two or four fingers raised. The player strikes their own two hands together and transfers half of the live hand’s fingers to the dead hand, returning it to the game.
- A player wins once all opponents are eliminated (an opponent is eliminated when both of their hands are dead at once).
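To make the rules concrete, here is a rough sketch of the attack and split mechanics, assuming each pair of hands is stored as a simple tuple of finger counts. This is an illustration only, not the exact representation used later in the post:
def attack(attackerHand, defenderHand):
    # Cutoff rule: reaching five or more fingers kills the hand outright.
    total = attackerHand + defenderHand
    return 0 if total >= 5 else total

def canSplit(hands):
    # Even Split rule: one hand must be dead and the other must hold 2 or 4 fingers.
    return (hands[0] == 0 and hands[1] in (2, 4)) or (hands[1] == 0 and hands[0] in (2, 4))

def split(hands):
    # Transfer half of the live hand's fingers to the dead hand.
    live = max(hands)
    return (live // 2, live // 2)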
Before we can solve this game, we must first create a graph of all possible states and all possible moves at each state. To do this, I have created two classes: a main GameState class, which serves as our game state node, and an ExploredState class. While there is nothing interesting about the implementation of GameState, having ExploredState is critical for not revisiting states.
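In sketch form, the two classes boil down to something like this (simplified; the real classes carry a bit more bookkeeping, but the method names below match the ones used in the rest of the code):
class GameState:
    # One node in the game graph: a board state plus whose turn it is.
    def __init__(self, state, player):
        self.state = state      # e.g. ((1, 1), (1, 1)) for the two players' hands
        self.player = player    # boolean flag for which player is to move
        self.winner = None
        self.children = []
        self.parent = None
        self.evaluatedValue = None

    def getState(self):
        return self.state

    def isPlayer(self):
        return self.player

    def setWinner(self, winner):
        self.winner = winner

    def setParent(self, parent):
        self.parent = parent

    def addChildren(self, child):
        self.children.append(child)

    def getChildren(self):
        return self.children

    def setEvaluatedValue(self, value):
        self.evaluatedValue = value

class ExploredState:
    # Lightweight marker that points back at a GameState we have already generated.
    def __init__(self, state):
        self.state = state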
One of the main challenges of writing a solution for Chopsticks is that the game can revert to previously visited states, creating cycles in our graph. This is a major problem, because minimax/negamax are depth-first search algorithms that will never finish if a cycle exists.
To get around this, we use the second class, ExploredState. When generating child nodes, if we produce a state that we have already generated before, we create an ExploredState node that points to the previously generated GameState instance. Then, while building the full game graph, we never expand ExploredState nodes, only the GameState version of the state. With that in mind, our full play algorithm looks like this:
from queue import Queue

def play(inputState, startingPlayer):
    # Build the full game graph breadth-first, starting from the initial state.
    startNode = GameState(inputState, startingPlayer)
    alreadySolvedStates = {}
    stateQueue = Queue()
    stateQueue.put(startNode)
    while not stateQueue.empty():
        currentState = stateQueue.get()
        # ExploredState nodes point back at an existing GameState, so we never expand them.
        if isinstance(currentState, ExploredState):
            continue
        # Terminal state: the player who just moved has won.
        if checkForWin(currentState.getState()):
            currentState.setWinner(not currentState.isPlayer())
            continue
        possibleStates = getFollowingMoves(currentState)
        for state in possibleStates:
            if (state, not currentState.isPlayer()) in alreadySolvedStates:
                # Already generated: link back with an ExploredState to avoid a cycle.
                currentState.addChildren(ExploredState(alreadySolvedStates[(state, not currentState.isPlayer())]))
            else:
                newNodeState = GameState(state, not currentState.isPlayer())
                newNodeState.setParent(currentState)  # parent is the node we expanded from
                currentState.addChildren(newNodeState)
                alreadySolvedStates[(state, not currentState.isPlayer())] = newNodeState
                stateQueue.put(newNodeState)
    return startNode
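The play function relies on two helpers, checkForWin and getFollowingMoves, that I won’t cover in detail. In sketch form, reusing the attack, canSplit, and split helpers from the rules section, they could look something like this (the state layout, ((p1_left, p1_right), (p2_left, p2_right)) with player True owning the first pair, is an assumption of the sketch rather than my exact code):
def checkForWin(state):
    # The game is over when either player has two dead hands.
    return state[0] == (0, 0) or state[1] == (0, 0)

def getFollowingMoves(node):
    # Enumerate every state reachable in one move by the player to move.
    state = node.getState()
    me, them = (0, 1) if node.isPlayer() else (1, 0)
    myHands, theirHands = state[me], state[them]
    results = set()
    # Attacks: any of my live hands may strike any of the opponent's live hands.
    for i, mine in enumerate(myHands):
        for j, theirs in enumerate(theirHands):
            if mine == 0 or theirs == 0:
                continue
            newTheirs = list(theirHands)
            newTheirs[j] = attack(mine, theirs)
            newState = [None, None]
            newState[me] = myHands
            newState[them] = tuple(newTheirs)
            results.add(tuple(newState))
    # Splits: allowed only when one hand is dead and the other holds 2 or 4 fingers.
    if canSplit(myHands):
        newState = [None, None]
        newState[me] = split(myHands)
        newState[them] = theirHands
        results.add(tuple(newState))
    return results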
With the entire game now mapped, we can move on to the much more challenging part: creating a solution for the game.
As I mentioned before, the game graph has cycles in it, meaning that running minimax naively would end up in an infinite loop. This is obviously an issue, and it can be fixed by adding a “depth” counter. The depth counter tracks how many recursive levels we have gone down; when it reaches a certain depth, we stop recursively calling minimax. Traditionally, one would then run a heuristic to estimate the likely score of the board state. However, in our case we simply treat it as a draw, largely because this project is still unfinished.
I also decided to use a more compact version of minimax called negamax. Negamax is the same algorithm, just simplified in terms of code length: since the game is zero-sum, max(a, b) = -min(-a, -b), so a single routine that negates the value returned by each recursive call can replace the separate maximizing and minimizing branches. For a full explanation, see one of these two articles – Wikipedia or Chessprogramming.
With all that in mind, here is what our negamax algorithm looks like:
def negamax(node, depth, player):
    # Depth cutoff: treat anything deeper than this as a draw.
    if depth > 22:
        return 0
    # ExploredState nodes are stand-ins; follow them back to the real GameState.
    if isinstance(node, ExploredState):
        node = node.state
    # Terminal node: faster wins (and slower losses) score better.
    if node.winner != None:
        if node.winner == player:
            node.setEvaluatedValue(20 - depth)
            return 20 - depth
        node.setEvaluatedValue(-20 + depth)
        return -20 + depth
    value = -1000
    bestMove = None
    for child in node.getChildren():
        # Negamax: the opponent's best value, negated, is our value for this child.
        checkValue = -negamax(child, depth + 1, not player)
        if checkValue > value:
            bestMove = child
            value = checkValue
    node.setEvaluatedValue(value)
    return value
To search more deeply (at the cost of a longer running time), one can simply raise the depth cutoff at the top. At this time, the deepest search I have run is to depth 21, which resulted in a draw. However, I am confident that there is a better way to find the solution, as the game has a very limited number of distinct states. When searching, negamax spends a lot of its time recalculating states it has already evaluated. This can be avoided by using transposition tables, something I will be doing when I revisit this project.
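As a rough sketch of what that could look like, here is a memoised variant of the negamax above, keyed on the state, the player to move, and the depth (keying on depth keeps the cache consistent with the depth-dependent scores above, at the cost of fewer cache hits; the details here are a sketch, not the final design):
def negamaxTT(node, depth, player, table=None):
    # Transposition-table variant: identical to negamax, but cached results are reused.
    if table is None:
        table = {}
    if depth > 22:
        return 0
    if isinstance(node, ExploredState):
        node = node.state
    key = (node.getState(), player, depth)
    if key in table:
        return table[key]
    if node.winner != None:
        value = 20 - depth if node.winner == player else -20 + depth
    else:
        value = -1000
        for child in node.getChildren():
            value = max(value, -negamaxTT(child, depth + 1, not player, table))
    node.setEvaluatedValue(value)
    table[key] = value
    return value
Keying only on the state and player would reuse far more entries, but then the depth-dependent scoring would need to be rethought so cached values stay valid.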
Final Thoughts, Conclusions, and Commentary:
For starters, I must apologize for quite a late post this week. A number of important events took place this week, and I found myself with less time to focus on this blog. Additionally, I seem to have found myself in new territory: since the rules for this game differ somewhat from the usual versions of Chopsticks, no solution seems to exist online. At the same time, the many cycles in the graph have produced a multitude of problems, including issues with tracing back the negamax results.
I have spent at least 30 hours attempting to get a full solution for the game finished by the end of the week, but to no avail. Articles on the topic are limited, as are scholarly papers. I found one published in 1990 by USSR scientists (WOW!), although it proved to be of little help. Still, I am confident a solution to the game exists, and given enough time I should be able to come up with it. The full code will be released once I do.
As for next week, we will probably look at either a game that is already solved or something lighter. My first full game AI will probably be for Ticket to Ride, although I would also very much like to take a look at Reiner Knizia’s Lost Cities. Both of those are probably a little ways off, but expect a deeper look at one of them before the end of the month!
Finally, thanks to maxrothman, whose code gave me the idea of using a separate class for already generated states.
Update: Since this post, my hard drive has failed, and I have lost all of the code for this project, as well as some other works in progress. As such, I may not return to this project. Although the issue of cycles in game graphs is something I will return to, it will likely be in the context of a different game. Finally, the next post will be the start of a project on the co-operative trick-taking game – The Crew.