At the Level Up conference’s dinner, Lisbeth Klastrup taught a number of us a very simple game that I think was called either the river game or the bridge game. It’s a guessing game with extremely simple rules. One person thinks of… something. The other players ask questions in the form of “Can I cross the river if…” and if the condition in the question has some connection to the thing being thought of, the player is told “yes.” For example, if I’m thinking of a rotisserie chicken and a player asks “Can I cross the river if I speak Pali?” I answer no. There are a number of borderline cases, of course – if the player asks “Can I cross the river if I’m from Delaware?” – well, there are (in theory) chickens in Delaware, and probably rotisserie chickens even in that benighted realm, so a “yes” or “maybe” answer would be acceptable.

The game is a child’s game, like 20 questions or I-spy – among the first games any child learns to play, and probably almost impossible to implement in a computer game today. It’s analogous to the chastened observation made by many AI/robotics researchers: chess, from an AI perspective, is a very easy problem. Walking is a very hard problem (although, as I’ve noted earlier, at least they’ve got sumo down).

What would it take to get a computer to be able to play this game, both as guesser and answerer? A very rich ontology and a well-functioning inferential engine, for starters.
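To make the difficulty concrete, here is a toy sketch of the answerer’s role. Everything in it – the concept names, the matching rule, the hand-built fact table – is my own illustrative assumption, not anything from the game itself. The point it demonstrates is the problem, not a solution: the computer can only answer “yes” when its ontology happens to contain a link between the condition and the secret, so the table would have to encode essentially all of common-sense knowledge.

```python
# Toy sketch of the river-game answerer, using a tiny hand-coded ontology.
# All concepts and links below are illustrative placeholders; a real player
# draws on open-ended world knowledge that no small table can capture.

SECRET = "rotisserie chicken"

# Crude ontology: each concept maps to a set of related concepts.
ONTOLOGY = {
    "rotisserie chicken": {"chicken", "food", "cooked", "delaware"},
    "speaks pali":        {"language", "pali", "person"},
    "from delaware":      {"delaware", "person", "place"},
}

def answer(condition: str) -> str:
    """Say 'yes' only if the condition shares a concept with the secret."""
    related = ONTOLOGY.get(condition, set())
    return "yes" if related & ONTOLOGY[SECRET] else "no"

print(answer("speaks pali"))    # no connection to the chicken
print(answer("from delaware"))  # chickens exist in Delaware, so: yes
```

Note what the sketch cannot do: it only matches conditions it has already been told about, and the borderline “maybe” cases – where the inference runs through a chain of facts rather than a direct link – are exactly where a rich ontology plus an inference engine would be needed.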