Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Anne M.

Pages: [1]
Sure! You may know them as Markov decision processes - essentially a decision-support framework for games (or anything else) that has both a random element and a decision element. An objective function is created to optimize the score at the end of the game. In Lost Cities, the objective function would likely be simple: maximize the differential between the bot's and the player's end-game scores. The strict form of MDPs can rely on explicit enumeration of the entire decision space, but this is obviously not ideal for large problems. As a way around explicitly enumerating the entire decision space, there are approximate methods (approximate dynamic programming, ADP) that sacrifice optimal solutions for processing time, and I initially thought this might be what the bots are using.

Thinking it through now, though, I suspect it would be hard to use MDPs here because you have to adhere to the "memoryless" (Markov) property - i.e. the probability of something happening can depend only on the current state, not on anything that happened before. Clearly it would take a lot to force Lost Cities into that structure.
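To make the "memoryless" point concrete, here is a minimal sketch of how an MDP's transition model is usually encoded. Everything here is hypothetical (made-up states, actions, and numbers - not the actual bot logic): the key observation is that transitions are keyed only by the current state and action, so history can never enter into it.

```python
import random

# Hypothetical toy MDP. The Markov (memoryless) property shows up in
# the data structure itself: P[state][action] is a list of
# (probability, next_state, reward) triples -- no history anywhere.
P = {
    "low": {
        "draw":    [(0.7, "low", 0), (0.3, "high", 2)],
        "discard": [(1.0, "low", 1)],
    },
    "high": {
        "draw":    [(0.5, "high", 3), (0.5, "low", 0)],
        "discard": [(1.0, "low", 2)],
    },
}

def step(state, action, rng=random):
    """Sample one transition. Note that past states are never consulted."""
    outcomes = P[state][action]
    r = rng.random()
    cumulative = 0.0
    for prob, next_state, reward in outcomes:
        cumulative += prob
        if r < cumulative:
            return next_state, reward
    # Guard against floating-point rounding on the last outcome.
    return outcomes[-1][1], outcomes[-1][2]
```

A game like Lost Cities resists this encoding because the "state" you would need (every card seen so far, the full discard history) is effectively the whole game history, which is exactly what the Markov property is supposed to let you avoid carrying around.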

Typically we see MDPs used where a policy can be set based on the current state alone. For example, given the current rainfall and weather pattern, how many crops should be planted?
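That crop-planting idea can be worked into a full toy example. Below is a sketch of value iteration followed by greedy policy extraction - the standard way to solve a small MDP exactly. The states, actions, rewards, and probabilities are invented purely for illustration:

```python
# Hypothetical two-state MDP: states are weather conditions, actions
# are planting levels. P[state][action] = [(prob, next_state, reward)].
P = {
    "dry": {
        "plant_few":  [(0.8, "dry", 1),  (0.2, "wet", 2)],
        "plant_many": [(0.8, "dry", -1), (0.2, "wet", 5)],
    },
    "wet": {
        "plant_few":  [(0.5, "dry", 2), (0.5, "wet", 2)],
        "plant_many": [(0.5, "dry", 1), (0.5, "wet", 6)],
    },
}
gamma = 0.9  # discount factor on future reward

def value_iteration(P, gamma, tol=1e-8):
    """Iterate the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(P, V, gamma):
    """Map each state to the action with the best expected value."""
    return {
        s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                       for p, s2, r in P[s][a]))
        for s in P
    }
```

With these particular numbers the extracted policy plants conservatively when it is dry and aggressively when it is wet - the "set a policy from the current state" pattern in miniature. The catch the post raises is exactly the loop over `P[s]`: it only works if you can enumerate the states, which blows up for a card game.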

I just wanted to say thank you for such a great response! I've only been playing for a few weeks, but this gave me a great idea of how the bots work without giving too much away.  I've done some work with stochastic decision processes and I couldn't figure out how the bots were computing so quickly.  Thanks for the great read!
