This is quite a good discussion of an assertion made in an earlier paper that reinforcement learning is enough for general AI (my coverage here). "The researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language." Is this true? The idea has some empirical support, but as this discussion suggests, it breaks down in practice: "There's a tradeoff between environment complexity, reward design, and agent design."