(mdp) One-Player Stochastic Games (MDP Games)

ASWinReach

class ggsolver.mdp.ASWinReach(graph, final=None, player=1, **kwargs)[source]
graph()

Returns the input game graph.

is_solved()

Returns whether the game has been solved.

reset()

Resets the solver.

solution()

Returns the solved game graph. The graph contains two special properties:

  • node_winner (node property): Maps each node to the id of the player (1 or 2) who wins from that node.

  • edge_winner (edge property): Maps each edge to the id of the player (1 or 2) who wins using that edge.

solve()[source]

Implements Alg. 45 from Baier and Katoen, Principles of Model Checking, using the same variable names as in the book.
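ggsolver's own graph data structures are not reproduced here, but the qualitative fixpoint behind Alg. 45 can be sketched self-containedly. The sketch below operates on plain dicts (the names `enabled`, `post`, etc. are illustrative, not ggsolver's API): only transition supports matter, since almost-sure reachability is a qualitative property.

```python
def almost_sure_reach(states, enabled, post, final):
    """Sketch of qualitative almost-sure reachability for an MDP.

    states  -- iterable of states
    enabled -- dict: state -> list of actions enabled in that state
    post    -- dict: (state, action) -> set of possible successors
               (support of the transition distribution; the exact
               probabilities are irrelevant for qualitative analysis)
    final   -- set of target states

    Returns the set of states from which `final` is reached with
    probability 1 under some memoryless strategy.
    """
    c = set(states)
    while True:
        # States that can reach `final` with positive probability while
        # using only actions whose entire support stays inside `c`
        # (i.e., without ever risking an exit from the candidate set).
        reach = set(final) & c
        grew = True
        while grew:
            grew = False
            for s in c - reach:
                if any(post[(s, a)] <= c and post[(s, a)] & reach
                       for a in enabled[s]):
                    reach.add(s)
                    grew = True
        if reach == c:
            return c  # fixpoint: every state in c wins almost surely
        c = reach     # shrink the candidate set and repeat
```

For example, a state whose only action reaches the target with positive probability but may also fall into a trap is removed in the second iteration of the outer loop: it wins with positive probability but not almost surely.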

state2node(state)

Helper function to get the node id associated with given state.

win_acts(state)

Returns the list of winning actions from the given state.

win_region(player)

Returns the winning region for the player.

winner(state)

Returns the player who wins from the given state.

PWinReach

class ggsolver.mdp.PWinReach(graph, final=None, player=1, **kwargs)[source]
graph()

Returns the input game graph.

is_solved()

Returns whether the game has been solved.

reset()

Resets the solver.

solution()

Returns the solved game graph. The graph contains two special properties:

  • node_winner (node property): Maps each node to the id of the player (1 or 2) who wins from that node.

  • edge_winner (edge property): Maps each edge to the id of the player (1 or 2) who wins using that edge.

solve()[source]

Abstract method.
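The docstring above reads "Abstract method"; judging by the class name, PWinReach presumably computes the positive-probability (Pr > 0) winning region, which for reachability reduces to backward reachability over transition supports. A minimal self-contained sketch under that assumption (plain dicts, illustrative names, not ggsolver's API):

```python
def positive_reach(states, enabled, post, final):
    """Sketch of positive-probability reachability for an MDP.

    A state wins with positive probability iff some target state is
    graph-reachable from it in the support graph, so a backward
    breadth-first saturation suffices; probabilities are irrelevant.

    enabled -- dict: state -> list of enabled actions
    post    -- dict: (state, action) -> set of possible successors
    """
    win = set(final)
    grew = True
    while grew:
        grew = False
        for s in set(states) - win:
            # Any action whose support touches the winning set gives
            # a positive-probability step toward the target.
            if any(post[(s, a)] & win for a in enabled[s]):
                win.add(s)
                grew = True
    return win
```

Note the contrast with almost-sure reachability: here an action may also lead to a losing trap, as long as it reaches the winning set with some positive probability.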

state2node(state)

Helper function to get the node id associated with given state.

win_acts(state)

Returns the list of winning actions from the given state.

win_region(player)

Returns the winning region for the player.

winner(state)

Returns the player who wins from the given state.