Artificial Intelligence A Modern Approach (3rd Edition): Figure 4.24, page 152.
function LRTA*-AGENT(s') returns an action
  inputs: s', a percept that identifies the current state
  persistent: result, a table, indexed by state and action, initially empty
              H, a table of cost estimates indexed by state, initially empty
              s, a, the previous state and action, initially null

  if GOAL-TEST(s') then return stop
  if s' is a new state (not in H) then H[s'] <- h(s')
  if s is not null then
      result[s, a] <- s'
      H[s] <- min LRTA*-COST(s, b, result[s, b], H)
              b in ACTIONS(s)
  a <- an action b in ACTIONS(s') that minimizes LRTA*-COST(s', b, result[s', b], H)
  s <- s'
  return a

function LRTA*-COST(s, a, s', H) returns a cost estimate
  if s' is undefined then return h(s)
  else return c(s, a, s') + H[s']
Figure 4.24 LRTA*-AGENT selects an action according to the values of neighboring states, which are updated as the agent moves about the state space.
Note: This algorithm fails to terminate if the goal does not exist (e.g. A<->B, Goal=X); this could be an issue with the implementation. Comments welcome.
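
For readers who want to see the control flow concretely, here is a minimal Java sketch of the pseudocode above. The OnlineProblem interface, the String-based state and action types, and the class and method names are illustrative assumptions for this sketch, not the aima-java API.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal sketch of LRTA*-AGENT, assuming a hypothetical OnlineProblem
// interface supplied by the caller. States and actions are plain Strings.
public class LrtaStarAgentSketch {

    // Hypothetical online search problem: successors are only learned by
    // acting, so the agent records them in the 'result' table as it goes.
    public interface OnlineProblem {
        List<String> actions(String state);
        boolean goalTest(String state);
        double h(String state);                      // heuristic estimate h(s)
        double c(String s, String a, String sPrime); // step cost c(s, a, s')
    }

    private final OnlineProblem problem;
    private final Map<String, Map<String, String>> result = new HashMap<>(); // result[s][a] = s'
    private final Map<String, Double> H = new HashMap<>();                   // cost estimates H[s]
    private String s = null; // previous state
    private String a = null; // previous action

    public LrtaStarAgentSketch(OnlineProblem problem) {
        this.problem = problem;
    }

    // Returns the next action for percept sPrime, or null to signal "stop".
    public String nextAction(String sPrime) {
        if (problem.goalTest(sPrime)) {
            return null;                              // if GOAL-TEST(s') then return stop
        }
        H.putIfAbsent(sPrime, problem.h(sPrime));     // if s' is new then H[s'] <- h(s')
        if (s != null) {
            result.computeIfAbsent(s, k -> new HashMap<>()).put(a, sPrime); // result[s, a] <- s'
            // H[s] <- min over b in ACTIONS(s) of LRTA*-COST(s, b, result[s, b], H)
            double best = Double.POSITIVE_INFINITY;
            for (String b : problem.actions(s)) {
                best = Math.min(best, lrtaCost(s, b, result.get(s).get(b)));
            }
            H.put(s, best);
        }
        // a <- an action b in ACTIONS(s') that minimizes LRTA*-COST(s', b, result[s', b], H)
        String bestAction = null;
        double bestCost = Double.POSITIVE_INFINITY;
        for (String b : problem.actions(sPrime)) {
            String succ = result.getOrDefault(sPrime, Map.of()).get(b);
            double cost = lrtaCost(sPrime, b, succ);
            if (cost < bestCost) {
                bestCost = cost;
                bestAction = b;
            }
        }
        a = bestAction;
        s = sPrime;
        return a;
    }

    // LRTA*-COST(s, a, s', H)
    private double lrtaCost(String s, String a, String sPrime) {
        if (sPrime == null) {
            return problem.h(s);                      // untried action: optimistic estimate
        }
        return problem.c(s, a, sPrime) + H.getOrDefault(sPrime, problem.h(sPrime));
    }
}

Under these assumptions, if no goal is reachable (the A<->B, Goal=X case in the note above), nextAction never returns null and the calling loop runs forever, so a step limit in the driver loop is a reasonable safeguard.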
@author Ciaran O'Reilly
@author Mike Stampone