Review of the book "Why Greatness Cannot Be Planned"

Cross-post: https://engineeringideas.substack.com/p/review-of-why-greatness-cannot-be #сиОдО


Kenneth Stanley and Joel Lehman's book Why Greatness Cannot Be Planned: The Myth of the Objective has two components: a theoretical one, describing something about reality, and an ethical (practical, political) one, prescribing what people and other agents should do.

The theoretical part

The open-ended discovery

On the theoretical side, the authors show that the ideas from the quality diversity field of science (I'm hesitant to say "computer science" because computation and algorithms are now ubiquitous across all disciplines of science; Stephen Wolfram even calls multicomputation, which is closely related to quality diversity, a new paradigm for theoretical science) apply to many domains of human and agentic activity: choosing projects (business or scientific) to join (or deciding what project to start), looking for a partner, and deciding what to do next with one's life and career.

The authors show that in these domains, setting too-far-reaching goals (objectives) and trying to purposefully reach them is often counterproductive. An objective is "too far-reaching" when there is no clear path towards it in sight (no recipe, no step-by-step algorithm of predictable efficacy), i.e., when it is not in the adjacent possible (in terms of knowledge, technology, social readiness, etc.). The immediate reason why attempting to reach such an objective is often counterproductive, i.e., why greatness cannot be planned, is that the structure of human progress is complex: the only possible paths to many discoveries (technologies, achievements, desired states of the system) are counterintuitive and cannot be planned ahead. In another paper, which I refer to below, the authors vividly describe the structure of progress as "circuitous webs of stepping stones".

The authors posit that to keep the pace of civilisational progress unrestricted, people and agents need to stop attempting to purposefully reach objectives beyond the adjacent possible and instead explore things, ideas, projects, and decisions more freely and open-endedly, guided only by their interestingness and respecting only the constraints of physical reality.
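To make the contrast with objective-driven search concrete, here is a minimal sketch of novelty search, the algorithm behind much of Stanley and Lehman's research. The function names, the scalar behaviour descriptor, and all parameters are my own simplifications; real implementations use richer behaviour characterisations and more careful archive policies.

```python
import random

def novelty(candidate, archive, behaviour, k=5):
    """Novelty score: mean distance to the k nearest neighbours in behaviour space."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behaviour(candidate) - behaviour(other)) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(init, mutate, behaviour, generations=100, pop_size=20, threshold=1.0):
    """Evolve a population rewarded for behavioural novelty alone;
    no objective is ever consulted."""
    population = [init() for _ in range(pop_size)]
    archive = []  # the "stepping stones" collected so far
    for _ in range(generations):
        ranked = sorted(population, key=lambda c: novelty(c, archive, behaviour),
                        reverse=True)
        for c in ranked:  # archive candidates that are novel enough
            if novelty(c, archive, behaviour) > threshold:
                archive.append(c)
        parents = ranked[: pop_size // 2]  # breed the most novel half
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return archive

# Toy usage: candidates are numbers and the behaviour is the number itself.
# The archive spreads across the number line instead of climbing to any target.
stones = novelty_search(init=lambda: 0.0,
                        mutate=lambda x: x + random.gauss(0, 1),
                        behaviour=lambda x: x)
print(f"{len(stones)} stepping stones, spanning {min(stones):.1f} to {max(stones):.1f}")
```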

The authors also show that the same principle applies to collaborative work practices and to the societal practices of education and of choosing which scientific projects to fund. In these three areas, progress is stifled by the idea that collaborators, educators, or funding bodies should reach some consensus. Here, consensus is the counterpart of the objective from the domains of personal and agentic activity mentioned above. However, since these collective and societal activities are just facets of general civilisational progress, the arrangement that maximises the pace of progress is, again, increasing the diversity of thought and letting collaborators, educators, and people who fund science make their own decisions and chart their own paths, guided by their intuition, experience, and desire to explore. The only additional ingredient is ensuring that these agents review each other's work and exchange ideas (i.e., "cross-pollinate").

Another good, succinct exposition of the idea of open-ended algorithms can be found in the "Introduction" section of the paper "Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions", co-authored by both authors of the book.


I generally agree with the theoretical argument of the book.

However, I think the idea that a pure novelty-seeking (or interestingness-seeking) algorithm maximises the probability of reaching some particular desirable objective (and minimises the expected time until this happens) is at best over-simplified and at worst no longer correct. The book was published in 2015, and there has since been a lot of progress in the fields of evolutionary algorithms, machine learning, and intricate combinations thereof, so this might not be a fault of the authors.

Even for a layman's exposition, I think it would be valuable to provide more detail about how and when exactly agents should move from one project (idea, decision, job, etc.) to another, and how they should choose these projects, i.e., what exactly interestingness should mean. Otherwise, the book is no more insightful than any self-help book that advises readers to "follow their gut feeling".

The authors give just one heuristic for finding interesting projects (ideas): the project generates controversy, i.e., there is no consensus around it: some people (colleagues, experts, evaluators) think it's good (interesting, smart, important), while others think it's bad (uninteresting, stupid, unimportant).
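One way to make this heuristic concrete (my own toy illustration, not something from the book) is to score projects by the spread of evaluator ratings rather than by their mean:

```python
from statistics import mean, pstdev

def controversy(ratings):
    """Score a project by evaluator disagreement: a wide spread of ratings
    means 'some think it's great, others think it's junk'."""
    return pstdev(ratings)

projects = {
    "safe incremental tweak": [6, 6, 7, 6, 6],  # consensus: mildly good
    "weird risky idea":       [9, 1, 8, 2, 9],  # no consensus: interesting?
}
for name, ratings in projects.items():
    print(f"{name}: mean={mean(ratings):.1f}, controversy={controversy(ratings):.1f}")
```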

However, there should be more to choosing interesting (important) projects than controversy alone. Perhaps people and agents should learn from biological and civilisational history to devise non-trivial heuristics about when to switch from project to project and how to evaluate new projects. As Nassim Taleb puts it: "To understand the future, have respect to the past, curiosity to historical record, grasp of the notion of heuristics."

What to look for in history books and biographies, and what to try to learn from them, is also unclear. Currently, the best strategy available to a layman is to read some books (how much time to devote to such reading is also unclear) and to hope that their "wet" neural net will subconsciously capture just what it should from these books, thus improving the person's intuition, their "taste for interesting/important projects". I think intelligence research could provide more concrete advice, if it has not done so already. See, for example, section 3.2.2, "Improved transfer strategy", in the paper linked above.

At some point, the authors say that human intuition is the best guide for interestingness. Writing in 2021, I'm not sure they themselves would still agree with this.

Parallels with Frankl and Watts

The general idea that open-endedness works better than setting far-reaching objectives was stated many times before Why Greatness Cannot Be Planned, though probably never in such detail, nor with science as its basis. For example, Viktor Frankl:

Don’t aim at success—the more you aim at it and make it a target, the more you are going to miss it. For success, like happiness, cannot be pursued; it must ensue, [...] Happiness must happen, and the same holds for success: you have to let it happen by not caring about it. I want you to listen to what your conscience commands you to do and go on to carry it out to the best of your knowledge. Then you will live to see that in the long-run—in the long-run, I say!—success will follow you precisely because you had forgotten to think about it.

Alan Watts (in this talk: https://www.youtube-nocookie.com/embed/byQrdnq7_H0):

The whole process of nature is an integrated process of immense complexity, and it's really impossible to tell whether anything that happens in it is good or bad because you will never know what will be the consequences of a misfortune, or you never know what will be the consequences of a good fortune.

Parallels with Kauffman

Stuart Kauffman apparently comes even closer (I say "apparently" because I haven't read Kauffman's original writing, only this post): his "theory of the adjacent possible" from At Home in the Universe is essentially the equivalent of Stanley and Lehman's idea of open-ended exploration and is also rooted in science (Kauffman is a biologist).

Parallels and differences with Hamming

It's interesting to contrast the idea of open-ended discovery with the earlier writing of Richard Hamming (The Art of Doing Science and Engineering):

The main difference between those who go far and those who do not is some people have a vision and the others do not and therefore can only react to the current events as they happen.

The accuracy of the vision matters less than you might suppose, getting anywhere is better than drifting, there are potentially many paths to greatness for you, and just which path you go on, so long as it takes you to greatness, is none of my business. You must, as in the case of forging your personal style, find your vision of your future career, and then follow it as best you can.

What it takes to be great in one age is not what is required in the next one. Thus you, in preparing yourself for future greatness (and the possibility of greatness is more common and easy to achieve than you think, since it is not common to recognise greatness when it happens under one’s nose) you have to think of the nature of the future you will live in. The past is a partial guide, and about the only one you have besides history is the constant use of your own imagination. Again, a random walk of random decisions will not get you anywhere near as far as those taken with your own vision of what your future should be.

There are parallels between Stanley & Lehman and Hamming ("there are many paths"; achievement is often serendipitous: both Stanley & Lehman and Hamming cite Louis Pasteur: "chance favours only the prepared mind") as well as differences: Hamming talks about some great vision and about attempts to predict the future. I think even if we interpret Hamming's words very flexibly, we cannot square them with the framework proposed by Stanley and Lehman, so I think Hamming didn't derive all the right lessons from his own experience and observations.

Parallels with Caulfield

The authors' description of how education and collaboration should be changed to be more distributed, diverse, and open-ended is very close to Mike Caulfield's "Federated Education: New Directions in Digital Collaboration". Federated Wiki is a concrete implementation of Stanley and Lehman's ideas about what collaboration should look like.

Parallels with Gall

The idea that setting far-reaching objectives (attempting to plan for greatness) is counterproductive is related to Gall's law:

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.

Parallels and differences with Taleb

The idea of open-ended search is closely related to the flâneur strategy described by Nassim Taleb in Antifragile:

[A flaneur is] someone who, unlike a tourist, makes a decision opportunistically at every step to revise his schedule (or his destination) so he can imbibe things based on new information obtained.

Taleb’s flâneur is someone who seeks out optionality. Because you can’t predict what’s going to happen, he argues, you stand to gain more by positioning yourself in such a way that you always have options (and preferably ones with great upside and little to no downside). That way you can evaluate once you have all the necessary information and make the most rational decision. His flâneur is an experimenter, a master of trial and error. He’s a self-learner who is never the prisoner of a plan. The rational flâneur merely needs to a) avoid things that hurt him, b) keep trying new things, and c) be able to recognize when he achieves a favorable outcome. In this way, he achieves freedom through opportunism.

Taleb also looks at novelty-seeking as a way to gain robustness, an idea that Stanley and Lehman don't mention in their book:

Someone who made many different mistakes (but never repeated) is more reliable than somebody who hasn't ever done one.

Taleb offers a simple insight into why open-ended search is the strategy that maximises the overall rate of discovery. Because the returns of experiments (projects, startups) are non-linear (convex), trying everything (each for a small cost) maximises the total expected return: it minimises the chance of missing the single biggest breakthrough (the unicorn). That's why, Taleb posits, Y Combinator has the best venture investing strategy: it enters as many startups as possible for the lowest possible amount of money.
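This convexity argument is easy to check with a toy Monte Carlo simulation (entirely my own sketch; the Pareto tail, the unicorn cutoff, and the fixed pool of candidate startups are arbitrary assumptions):

```python
import random

random.seed(0)

def payoff():
    """Heavy-tailed startup payoff per entry: almost all are near-worthless,
    a rare 'unicorn' is worth orders of magnitude more."""
    return random.paretovariate(1.1)

def p_unicorn_caught(n_bets, n_candidates=1000, cutoff=100.0, trials=2000):
    """Probability that a portfolio of n_bets startups (chosen blindly out of
    n_candidates) contains at least one unicorn-sized payoff."""
    hits = 0
    for _ in range(trials):
        payoffs = [payoff() for _ in range(n_candidates)]
        picked = random.sample(range(n_candidates), n_bets)
        hits += any(payoffs[i] >= cutoff for i in picked)
    return hits / trials

# The budget is fixed: more bets means a smaller cheque per startup.
for n_bets in (10, 100, 500):
    print(f"{n_bets:>3} bets: P(unicorn in portfolio) = {p_unicorn_caught(n_bets):.2f}")
```

With these numbers, the chance of having the unicorn in the portfolio grows from a few percent with 10 bets to near-certainty with 500, even though the total budget stays the same.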

Taleb also shares Stanley and Lehman's view on collaboration:

Collaboration is the drive. Since you cannot force and predict collaborations, you cannot see where the world is going.

On the other hand, I think there is also a serious difference between the philosophies of Stanley & Lehman and Taleb. Although Stanley and Lehman mention risk in a few places in the book, they don't discuss it in any depth, nor its relationship to the maximisation of the rate of progress via unrestricted, open-ended search. Yet I agree with Taleb, who points out that it is crucially important to discuss progress and risk together, not separately:

It's completely pointless to count benefits without considering the probability of failure.

I assume that Stanley and Lehman omitted the discussion of risk from the book deliberately rather than out of ignorance, and that this is part of the political position they convey in the book. I discuss this side of the book below.

No algorithm (even the open-ended search) can guarantee always reaching a specific objective

This idea from the book looks like a restatement of the fact from computability theory that uncomputable problems exist, for example, the halting problem.
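For readers who haven't seen the result, the standard diagonalisation argument fits in a few lines of deliberately paradoxical Python (the `halts` oracle is hypothetical, which is exactly the point):

```python
def halts(program) -> bool:
    """Hypothetical oracle: returns True iff program() would eventually halt."""
    ...  # assume, for contradiction, that this is total and always correct

def contrarian():
    """Does the opposite of whatever the oracle predicts about it."""
    if halts(contrarian):
        while True:  # predicted to halt -> loop forever
            pass
    # predicted to loop forever -> halt immediately

# Whatever halts(contrarian) answers is wrong, so no such oracle can exist;
# likewise, no search procedure can come with a guarantee of reaching every
# specifiable objective.
```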

A single metric rarely captures the essence of what you really care about

This idea from the book is closely related to Goodhart's law ("when a measure becomes a target, it ceases to be a good measure") and to Eliezer Yudkowsky's idea of the complexity (fragility) of value.
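A toy simulation (my own, with arbitrary noise parameters) shows the mechanism: when the measurement error of a proxy metric has heavier tails than the true value, mild selection on the proxy works, but extreme selection mostly harvests noise:

```python
import random

random.seed(1)

def candidate():
    """A true value plus a proxy measurement whose error is usually small
    but occasionally huge (heavy-tailed)."""
    true_value = random.gauss(0, 1)
    error = random.gauss(0, 0.3) if random.random() > 0.005 else random.gauss(0, 20)
    return true_value, true_value + error

# Rank 100,000 candidates by the proxy (the "single metric").
pool = sorted((candidate() for _ in range(100_000)), key=lambda c: c[1], reverse=True)

def avg_true(cands):
    return sum(c[0] for c in cands) / len(cands)

# Mild selection on the proxy still finds genuinely good candidates;
# extreme selection mostly finds measurement error (Goodhart in the tails).
print(f"avg true value of top 10% by proxy: {avg_true(pool[:10_000]):.2f}")
print(f"avg true value of top 10  by proxy: {avg_true(pool[:10]):.2f}")
```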

Random evolution moves from simple to more complex behaviours

I think this is over-simplified. Here's what Wikipedia says about the evolution of biological complexity:

Although there has been an increase in the maximum level of complexity over the history of life, there has always been a large majority of small and simple organisms and the most common level of complexity appears to have remained relatively constant.

The free energy principle might be a more holistic statement about what evolution does and does not do. Stanley and Lehman also write something that seems to hint at the free energy principle ("Eventually, doing something novel always requires learning something about the world.") but don't explain this phrase.

The ethical and political part

Laissez-faire exploration to maximise the rate of progress

In the book, the authors never mention that the open-ended search (discovery, exploration) should be restricted by anything apart from the constraints of physical reality, not even by the freedoms and rights of other people and agents. This seems to be deliberate and is part of the ethical and political position that the authors convey in the book, albeit without admitting it explicitly: on the contrary, they take an ostensibly "non-judgmental" position, which, of course, is an ethical/political position itself:

Letting go of objectives is also difficult because it means letting go of the idea that there is a right path. It's tempting to think of progress as a set of projects, some of them on the wrong path and some of them on the right. [...] When there is no destination there can't be a right path. Instead of judging every activity for its potential to succeed, we should judge our projects for their potential to spawn more projects. [...] the only important thing about a stepping stone is that it leads to more stepping stones, period.

Judgmentalism is the natural habitat of the objective seeker, always worried about where everyone else will end up.

Disagreement and divergence are virtues that deserve to be protected. What is the real danger of someone disagreeing with your path aside from ending up in a different location? [...] If you don't have a clear objective, you can't be wrong, because wherever you end up is okay.

So, the authors seem to be far-right libertarians who take as a premise that the rate of progress should be maximised.

Parallels with Deutsch

This position of Stanley and Lehman is similar to that of David Deutsch. In The Beginning of Infinity, Deutsch writes:

Strategies to prevent foreseeable catastrophes are bound to fail eventually, and cannot prevent the unforeseen. To prevent those, we need rapid progress in science and as much wealth as possible.

Deutsch is also critical of the sustainability movement; he connects it to stasis (the second meaning of the verb "to sustain").

Contra: Meadows, Taleb, Ord, Bostrom

I disagree with Stanley & Lehman's and Deutsch's view that maximising the rate of progress is desirable.

Donella Meadows writes in "Leverage Points: Places to Intervene in a System":

Asked by the Club of Rome to show how major global problems — poverty and hunger, environmental destruction, resource depletion, urban deterioration, unemployment — are related and how they might be solved, Forrester made a computer model and came out with a clear leverage point: Growth. Not only population growth, but economic growth. Growth has costs as well as benefits, and we typically don’t count the costs — among which are poverty and hunger, environmental destruction, etc. — the whole list of problems we are trying to solve with growth! What is needed is much slower growth, much different kinds of growth, and in some cases no growth or negative growth.

This is in accord with Nassim Taleb's idea that slowing down is part of the flâneur's credo. As I already quoted Taleb above:

It's completely pointless to count benefits without considering the probability of failure.

Also: "You are defined by your worst day, not your best".

To me, it seems that maximising the rate of progress in AI, rather than in AI ethics and AI safety, increases the overall civilisational (existential) risk (see Toby Ord's The Precipice), which is unacceptable.

In The Precipice, Toby Ord supports Nick Bostrom's idea of differential (technological) development, which is the hastening of risk-reducing progress and the delaying of risk-increasing progress.

"Negative" objectives (limitations, constraints)?

Inspired by Nassim Taleb's via negativa ("the study of what not to do"; "it's easier to say what God is not than what it is"), I think it might be possible (and beneficial) to introduce far-reaching negative personal, organisational, national, and civilisational objectives by defining limitations (constraints, laws) on what humans and agents are permitted to explore.

An obvious and (I hope) non-controversial limitation, which I mentioned above, is that the projects (experiments) of people and agents shouldn't endanger the freedoms and lives of other people and agents.

Also, although I agree with Stanley and Lehman that we cannot predict what the future will (and should) look like in 2050, I believe it is not difficult to conclude that this future should not include burning a lot of fossil fuels or anthropogenic species extinction at today's rate. Therefore, I think a global limitation on the projects that people can start should be that they must not directly increase the per-capita carbon footprint, nor lead to such an increase through second- and third-order consequences, accounting for effects such as the Jevons paradox.

Such a limitation could even be dressed up as respecting the constraints of physical reality (more specifically, the carrying capacity of the Earth), which Stanley and Lehman themselves call on us to respect.

Advocates of unrestricted open-ended search may object: "What if an innovation that leads to a temporary increase in carbon footprint (for instance; or doesn't conform to some other 'negative' limitation I suggested imposing) serendipitously leads to a massive benefit several stepping stones down the road of progress?" To this I would reply that "negative" limitations don't imply stifling progress: since the structure of progress is complex and there are many possible paths to the same outcome, there should almost always be paths passing only through stepping stones that satisfy a certain (reasonable) limitation or constraint. Moreover, limitations foster creativity, and thus could even turn out to positively affect the aggregate rate of progress (especially of "healthy" progress, as per the doctrine of differential development). This "many paths" reply can be illustrated with the toy sketch below.
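Here is a toy stepping-stone graph (entirely my own construction, not from the book): ban a random fifth of the stones, as a stand-in for a "negative" constraint, and check how often the goal is still reachable through the remaining web of paths.

```python
import random
from collections import deque

random.seed(2)

def reachable(edges, banned, start, goal):
    """Breadth-first search over the stepping-stone graph, skipping banned stones."""
    frontier, seen = deque([start]), {start}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in edges[node]:
            if nxt not in seen and nxt not in banned:
                seen.add(nxt)
                frontier.append(nxt)
    return False

n = 200
# A sparse random web: each stepping stone leads on to four others.
edges = {i: random.sample(range(n), 4) for i in range(n)}
print("goal reachable with no stones banned:", reachable(edges, set(), 0, n - 1))

trials = 1000
kept = sum(
    reachable(edges, set(random.sample(range(1, n - 1), n // 5)), 0, n - 1)
    for _ in range(trials)
)
print(f"goal still reachable after banning 20% of stones: {kept / trials:.0%} of trials")
```

If the web of stepping stones is redundant enough, a reasonable constraint removes particular paths, not reachability itself.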

Why does the system (including the people within it) tend to maximise the rate of progress?

Stuart Kauffman proposes a theory about this (as recounted in this post):

It’s possible that the biosphere as a whole, or the network of information under-girding the interacting components that make up any complex system— whether enzymes, gazelles, or Miley Cyrus—actually maximizes the rate of exploration of the adjacent possible in order to increase the diversity of what can happen next, with the caveat that it does so at a rate that it can get away with being boisterously flamboyant and not self-implode.

That a subconscious yet ultimately directed effort to build risk and unpredictably into the system itself to ensure its own survival might sound like something out of a science fiction novel is not lost on me, and at the prospect of anthropomorphizing the Earth and everything on it, a major scientific faux-pas for sure, it appears that Gaia does have plans for us yet, even if its “purpose” is in fact a completely natural, emergent property of our co-efforts to share in the construction of the worlds we live in.

This reminds me of Sidney Dekker's note in Drift into Failure:

A complex system operates far from equilibrium. Components need to get inputs constantly to keep functioning. Without it, a complex system won’t survive in a changing environment. The performance of a complex system is typically optimised at the edge of chaos, just before the system’s behaviour can become unrecognisably turbulent.
