Reviews for
The Outer Limits of Reason:
What Science, Mathematics, and Logic Cannot Tell Us
By Noson S. Yanofsky
Published by MIT Press
September 4, 2013:
Zócalo Public Square (also appeared in Slate)
The Nutshell:
Brooklyn College and City University
of New York computer scientist Yanofsky shows how investigating the
impossible—paradoxes of language, problems even the fastest computers cannot
solve, and mathematical equations without answers—can help us better understand
the possible.
Literary Lovechild Of:
Roy Sorensen’s A Brief History of the Paradox:
Philosophy and the Labyrinths of the Mind and Deborah J. Bennett’s Logic
Made Easy: How to Know When Language Deceives You.
You'll Find It On Your Bookshelf If:
When you can’t sleep, you factor six-digit numbers in
your head.
Cocktail Party Fodder:
All adjectives that describe
themselves—“English,” “polysyllabic”—are autological or homological. All
adjectives that do not describe themselves—“French,” “monosyllabic”—are heterological.
But is “heterological” indeed heterological? (The question is a paradox.)
For Optimal Benefit:
Suspend your faith in reason and settle down for a
deeply focused read that will clear things up by leaving them unclear. Hey, we
told you to suspend your faith in reason.
Snap Judgment:
Yanofsky makes problems and questions that should make
your head ache delightful to read and think about.
===================================================================
September 1, 2013
A starred review!
Rather than write about what he knows, Yanofsky (Quantum Computing for Computer Scientists) prefers to explore the topic of what he doesn't know—or rather what we as humans cannot know. In this refreshingly original and ambitious philosophical inquiry, he attempts to map the limitations of human reason by examining the established conundrums, paradoxes, and impossibilities within science and technology. Divided by subject area (including language, philosophy, science, mathematics, and computing), each chapter lays out an array of paradoxes and unsolvable problems, clearly and concisely guiding readers into and around the worlds of reason. The examples range in complexity, and some may be more familiar than others, such as his explanation of the Liar's Paradox, "this sentence is false." The more complicated contradictions, such as Georg Cantor's proof that the infinite set of numbers between 0 and 1 is vastly larger than the infinite set of natural numbers (1, 2, 3, 4 . . .), the author unpacks succinctly within the framework of modern life. He writes, "It would be foolhardy to cross a modern suspension bridge if you knew that the engineer did not believe in Cantor's work." Yanofsky takes on this mindboggling subject with confidence and impressive clarity. He eases the reader into the subject matter, ending each chapter with further readings. His book is a fascinating resource for anyone who seeks a better understanding of the world through the strangeness of its own limitations and a must-read for anyone studying information science. Illus. (Sept.)
===================================================================
October 10, 2013
I was intrigued by a book advertisement I saw on the Boston MBTA: "An Exploration of The Scientific Limits of Knowledge That Challenges Our Deep-Seated Beliefs About Our Universe, Our Rationality, and Ourselves." At first I was a bit skeptical of such a bombastic line. But how can one resist such a come-on?
This is a popular science book about what is beyond the ability of reason to know. By "reason" Yanofsky means anything using exact thought, like math, logic, computers, physics, and even a little philosophy. The question of what human beings can know is part of a branch of philosophy called epistemology. Such philosophers usually talk about the theories of Locke, Berkeley, Hume, Kant, etc. In this book, the field is updated and scientific epistemology is discussed. There are many modern results that show that there are objects that cannot exist, calculations that cannot be performed, and problems that cannot be solved. The book weaves these results into a beautiful tapestry and shows that many of these limitations in different fields are of the same form.
As a computer professional, I was naturally most interested in the two chapters about limitations of computers. Chapter 5 is about problems that can theoretically be solved but, in fact, for any reasonably sized input will not be solved for trillions of centuries. The core of the chapter is the idea of NP-Complete problems such as Satisfiability (SAT) or the Traveling Salesperson Problem (TSP). One usually thinks of TSP as a hard computer problem that you can explain to any child, but Yanofsky stresses TSP as a limitation of human knowledge. He explains why most people believe there does not exist a simple algorithm for such problems. Yanofsky finishes this chapter off with a discussion of approximation algorithms and problems that are even harder than NP-Complete problems.
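As a rough illustration of why brute force is hopeless here (a minimal Python sketch of my own, not taken from the book or the review), the naive TSP algorithm below inspects every ordering of the cities, so each added city multiplies the running time:

from itertools import permutations

def tsp_brute_force(dist):
    # dist[i][j] is the distance between city i and city j (symmetric matrix).
    n = len(dist)
    best_length, best_tour = float("inf"), None
    # Fix city 0 as the starting point and try every ordering of the rest.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_length:
            best_length, best_tour = length, tour
    return best_length, best_tour

# (n - 1)! tours are examined: ten cities is instant, but a few dozen cities
# already puts this approach out of reach for any real computer.

No known exact algorithm avoids this kind of exponential blow-up on every instance, which is exactly the limitation the chapter dwells on.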
Chapter 6 is not about hard computer problems (complexity theory) but impossible computer problems (computability theory). The classic example is Turing's Halting Problem: there does not exist a program that can tell, for any given program and any given input, whether the program with that input will halt or go into an infinite loop. The chapter also discusses some other unsolvable computer problems and shows how they are connected. There is a discussion of Turing's oracle idea and how it classifies all unsolvable problems. The chapter ends with a short (too short) and inconclusive discussion of whether humans — as opposed to computers — can solve such problems (is AI possible?).
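Turing's argument can be paraphrased in a few lines of Python (my own sketch, assuming a hypothetical, impossible function halts(prog, arg) that always answers correctly):

def halts(prog, arg):
    # Hypothetical: returns True if prog(arg) eventually stops, False if it
    # runs forever. Turing showed no such total, always-correct function exists.
    raise NotImplementedError

def contrary(prog):
    # Do the opposite of whatever halts() predicts prog does on itself.
    if halts(prog, prog):
        while True:       # predicted to halt, so loop forever
            pass
    else:
        return            # predicted to loop, so halt immediately

# Now consider contrary(contrary): if halts(contrary, contrary) is True, then
# contrary(contrary) loops forever; if it is False, contrary(contrary) halts.
# Either way halts() gave the wrong answer, so no correct halts() can exist.

The same diagonal trick reappears throughout the book, which is part of Yanofsky's point.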
The rest of the book deals with other types of limitations. Chapter 2 discusses limitations of language. Chapter 3 mentions some classical philosophical issues like Zeno's paradoxes and the topic of vagueness. Chapter 4 discusses the counterintuitive notions of infinity and the fact that there are different levels of infinity. Chapter 7 is about three fields of physics: chaos theory, quantum theory, and relativity theory. Chapter 8 deals with philosophy of science issues. Chapter 9 talks about some limitations of mathematics, including some basic math problems that a computer (and a human?) can never solve. The chapters of the book are for the most part independent, which makes it easier for the reader to read the topics that interest her. Chapter 10 summarizes the whole book.
The section on quantum theory deserves special mention. Yanofsky spends 38 pages describing the world of quantum mechanics. But rather than telling the life stories of the founders of quantum theory (too easy, too boring) or trying to teach the math behind quantum theory (too hard), Yanofsky goes through seven or eight experiments in quantum theory and tells us what the results of the experiments show about the universe and our knowledge of the universe. Included in this is the mysterious topic of entanglement and Bell's famous inequality. I had to read that part twice but I can proudly say that I understand it now.
After reading the whole book, my favorite part is the last chapter. Here everything magically comes together in an amazing way. In the first part of the chapter, Yanofsky gives a four-part classification of all the limitations discussed in the book. Within this classification he makes fascinating links between various limitations in different areas. He connects NP-Complete problems and the butterfly effect; the Halting Problem and the barber paradox; language paradoxes and mathematical limitations. From this "high" point of view, all the different limitations fit together perfectly and one can clearly see the whole beautiful landscape.
One of the central ideas in the book is the concept of a self-referential paradox. This is a paradox that comes about from a system that can talk about itself. In chapter 2, the liar paradox is shown to come about because English sentences can talk about English sentences. In chapter 3, there is a discussion of time-travel paradoxes (as in the Back to the Future movies), which come about because a time traveler can cause events that affect the time traveler's own past. Turing's Halting Problem is shown to come about from the fact that programs deal with programs (as operating systems do). And Gödel's famous Incompleteness Theorems come from the fact that mathematics can talk about itself. These are just a few of the more famous self-referential paradoxes mentioned in the book. They form a thread showing that the same scheme of reasoning plays a role in many different areas.
There are some philosophical parts of the book that I cannot truly judge. In chapter 3 there is a discussion of the problem of identity (to what extent is something the same even when it changes) and the problem of personal identity (to what extent is someone the same even when they change? What makes a human being a human being?). In chapter 8 there is a discussion of the problem of induction, the "unreasonable effectiveness of mathematics," and the anthropic principle. In these philosophical parts of the book, Yanofsky makes some cogent arguments about different issues. Not all his arguments are totally convincing. I like the more scientific and technical parts of the book, where it is easy to tell who is right and who is wrong.
I must mention Yanofsky's style. The writing is crisp and totally clear. Although I learned about the Halting Problem when I was in school, I never truly understood Turing's proof of why no computer can ever solve the Halting Problem till I read Yanofsky's proof. There are also some very helpful charts and diagrams. There are a few Venn diagrams that show different classifications. The book is also very funny. There is a sprinkling of some very clever lines that make it a pleasure to read. The footnotes are also full of such hilarious gems.
A coworker told me that this did not surprise him, since he had read a textbook coauthored by Yanofsky titled Quantum Computing for Computer Scientists. My coworker said that it was the only book on quantum computing that one can read without a PhD in physics. He said that book was also very clear.
This book is not hard to
read. It is beautifully written in a very understandable way. The book is
fascinating because it covers so many diverse topics. And yet,
Yanofsky manages to connect all the topics. This book will get you to think
about what we can know about the universe in a totally new and exciting way.
===================================================================
October 14, 2013
Finished reading: The Outer Limits of Reason: What Science, Mathematics, and Logic
Cannot Tell Us by Noson S. Yanofsky. Pretty
good book. I read a draft of it a few years ago, and must say that the
final version came together very nicely. Highly recommend! While large chunks
of it are interesting, I particularly liked the non-scientific (more
philosophical) topics found towards the end, like in chapter 8. Discussion of
problems of inference could've been a bit longer... that's like the core of our
knowledge limitations---we learn by inference, which means most things we learn
(knowledge outside of deduction) are potentially iffy. That's the limitation on
the scientific method itself! One would think that `science' (if done right) leads
us towards truth of some sort---but that's not guaranteed! For all we know,
there may not even be a `truth' to move towards. While I should've enjoyed the
computer science bits, it seems those ideas have hit my brain too many times
over the years; there's only so much halting or np-completeness that my brain
can take.
=====================================================================
October 21, 2013
I finished reading The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us by Noson S. Yanofsky. Yanofsky focuses on reason as a way to seek out facts and avoid falsehoods. I found many of the ideas stimulating. It would be easy to take each chapter in this book and turn it into a full-length nonfiction book. The content is very dense with ideas. There are diagrams, but not long, complex diagrams with lots of math. The book is also indexed and has notes on each chapter. Right now, I am thinking about the idea that observing things changes what is being observed. I am also thinking about how science is limited to what we can observe. I don't think we will run out of new things to observe any time soon. There is still plenty of outer space that has not been explored, and parts of the deep ocean where people have not been. I enjoyed reading this book; it stretched both my imagination and intellect. People asked about it while I was on the train to work. They wanted to know if it was about physics. For me it covered philosophy, math, language, physics, quantum physics, computer science, and history of science.
=====================================================================
October 21, 2013
If you’re looking for a
refreshing break from the usual investing/trading fare, let me recommend Noson
S. Yanofsky’s The Outer Limits of Reason: What Science, Mathematics, and
Logic Cannot Tell Us (MIT Press, 2013). Written for the layman, it explores
the realm of the unsolvable, unprovable, and unknowable. Some (perhaps even
most) of the material will be familiar, but Yanofsky offers a compelling
synthesis of various “outer limits” problems.
Some computing challenges are staggering—that is, practically unsolvable, even
if not theoretically impossible. For instance, Euler may have solved the seven
bridges of Königsberg problem with sheer brain power,
but solving the traveling salesman problem for 100 cities—assuming that our
computer can check a million routes in a second—would take 2.9 × 10^142
centuries! Splitting a hundred numbers into two sets to see if the sum of one
part equals half the sum of all the elements would take a mere
401,969,368,413,314 centuries. And trying to predict with any degree of
precision an event in a chaotic system, or a complex adaptive system, is a
pretty hopeless undertaking; it’s not even a question of time or computer
power. (Just ask any trader.)
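For the curious, the quoted figures are easy to reproduce (a back-of-the-envelope Python check, assuming the route count is 100!, a rate of one million candidates per second, and 365-day years):

from math import factorial

RATE = 1_000_000                             # candidates checked per second
SECONDS_PER_CENTURY = 100 * 365 * 24 * 3600  # 100 years of 365 days

tsp = factorial(100) / RATE / SECONDS_PER_CENTURY
print(f"TSP, 100 cities: {tsp:.1e} centuries")               # about 3.0e+142

partition = 2**100 / RATE / SECONDS_PER_CENTURY
print(f"Partition, 100 numbers: {partition:.1e} centuries")  # about 4.0e+14

Both land in the same ballpark as the figures above, which is the point: it is the arithmetic, not the hardware, that defeats us.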
Many problems, such as the halting problem or the tiling problem, are
undecidable—at least by computer logic. Often these problems suffer from some
form of self-referential limitation. Think of time-travel paradoxes, Russell’s
paradox, Gödel’s first incompleteness theorem or my personal
stroll-down-memory-lane favorite (I guess because, when I was first introduced
to logic, I learned a new word in the process) the heterological paradox. In
fact, Yanofsky writes, “the universe is the ultimate self-referential system;
the universe uses scientists to study itself.” (p. 343)
Other problems stem from the chasm between the describable and the
indescribable—the former countably infinite, the
latter (presumably) uncountably infinite. That is,
“there is no longest word or longest novel, because there is no limit to the
longest formula, and so on. This makes language infinite. However, it can be
alphabetized or counted, which makes language countably
infinite. … It is plausible to say that there is an uncountably
infinite number of phenomena that can occur. This is
stated without proof because I cannot quantify all phenomena. To quantify them,
I would have to describe them and I cannot do that without language.” (p. 175)
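The "countably infinite" claim about language is easy to make concrete (a small Python sketch, not from the book or the review): list strings by length and then alphabetically, and every possible finite text turns up at some finite position.

from itertools import count, product
from string import ascii_lowercase

def all_strings():
    # Enumerate every finite string over a-z in shortlex order (by length,
    # then alphabetically). Every possible word, sentence, or novel appears
    # exactly once at some finite position: that is countability.
    for length in count(0):
        for letters in product(ascii_lowercase, repeat=length):
            yield "".join(letters)

gen = all_strings()
print([next(gen) for _ in range(30)])   # '', 'a', 'b', ..., 'z', 'aa', 'ab', 'ac'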
Yanofsky doesn’t break new ground in this book, but he offers a “one-stop”
emporium for those who enjoy pondering the limits of reason. I had a grand time
reading it.
====================================================================
October 31, 2013
"The Outer Limits of Reason" -- Noson Yanofsky …I will have much more to say about this book in near future (am reading it now) -- it is quite simply THE BEST math-related book I've ever read, pulling together, as it does, all the sorts of issues I'm most interested in: self-reference, paradox, infinity, logic, uncertainty, epistemology, physics… I hope this volume reaches a much wider audience.
====================================================================
November 4, 2013
There are inherent limits to logic that can't be resolved, and they bedevil our minds too, says Noson Yanofsky in The Outer Limits of Reason
"THIS sentence is false." This sentence is also where the problems start. If true, it is false; if false, it is true. Extracting its true truth is like ironing out a Möbius strip.
Things in the world we experience, however, tend to be distinctly one thing or the other. Language is a messy, human construct, so perhaps we shouldn't worry too much if it doesn't always map one-to-one with reality. But in The Outer Limits of Reason, Noson Yanofsky, an information scientist at the City University of New York, shows that our problems with reasoning about the world go much deeper than that.
Mathematics is pure reason in symbolic form. Set theory, the underpinning of all modern mathematics, has an equivalent to that unreasonable sentence above in the form of Bertrand Russell's famous paradox: consider a set containing all sets that do not contain themselves. Does that set contain itself? Such logical limitations are systemic. Kurt Gödel and others showed that no consistent set of fundamental mathematical axioms can be used to prove its own consistency. The logical axioms that underlie everyday things like arithmetic depend on us accepting as reasonable the notion that infinity comes in several different sizes.
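Russell's construction can even be mimicked mechanically (a toy Python sketch, not from the review or the book, with predicates standing in for sets and membership "x in S" read as S(x)); asking the paradoxical question simply never terminates:

def contains_itself(s):
    # Here a "set" is a predicate; membership "x in s" is written s(x).
    return s(s)

def russell(s):
    # Russell's "set": all sets that do NOT contain themselves.
    return not contains_itself(s)

try:
    print(russell(russell))   # does Russell's set contain itself?
except RecursionError:
    print("no consistent answer: the definition chases its own tail forever")

The crash is only an analogy for the genuine logical contradiction, of course, but it makes the self-reference vivid.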
Reason is even good enough to tell us there are things reason can't tell us. In the notoriously "hard" travelling salesman problem, there is always a shortest route connecting very many cities – but even the remorseless logic of a computer the size of the universe is never going to be able to crunch through the possibilities to tell us what it is. It is a problem logistics firms wrestle with every day.
Uncomputability isn't the half of it. Three-quarters of a century ago, Alan Turing asked if an idealised computer, given any algorithm and its input, would be able to predict whether that algorithm will eventually halt with an output, or go into a never-ending loop. The answer to this "halting problem" is no: computer self-analysis is logically, fundamentally undecidable. Next time you are inclined to scream at Microsoft's blue screen of death, be charitable to Bill Gates.
Yanofsky provides an entertaining and informative whirlwind trip through limits on reason in language, formal logic, mathematics – and in science, the culmination of humankind's attempts to reason about the world. Themes emerge, such as the consistent sticking point of self-reference. The sentence that doesn't know whether it is true or not, Russell's set that doesn't know whether it contains itself or not, or the computer that doesn't know whether it is about to loop the eternal loop: these are all entities asked to decide logically something about themselves.
The same stumbling block might mean we can only take science so far. Quantum mechanics is our most successful theory of reality, bar none, and yet we find its predictions of particles that are in two places at once, or cats that are both dead and alive, "unreasonable". It is a challenge to our classically schooled logic.
But we cannot observe these predictions directly because, in quantum experiments, our act of observing something seems to change what's observed – we are ourselves part of the experiment. Is this the ultimate problem of self-reference, one that suggests a limit to how much we can ever reason about the world?
The problem of human consciousness looms large, not just in the quantum problem. In thinking about thinking we have to use thought. Our brains are computational machines like any other, and so presumably subject to the same fundamental limits on their ability to reason. So what allows the human mind to establish that there are limits beyond which it cannot think?
Yanofsky wisely and humbly declines to speculate on the answer. But a reader of this book will more readily understand what the question is.
And that sentence is true.
===================================================================
In 1979, like a lot of people, I picked up a book entitled "Gödel,
Escher, Bach" by an unknown author named
Douglas Hofstadter... and was blown away. The book was easily one of the most
creative, thought-provoking books I'd ever encountered; over the years it became
internationally famous (often simply referred to as "GEB") as
did its author, who won a Pulitzer Prize and a National Book Award his first time
out of the gate. (Interestingly, years later, Hofstadter would write that
almost all reviewers and discussers of GEB misconstrued
his own goals with the book -- people often read into it whatever they wanted
-- but nonetheless it remains a keen treatise on human cognition, and Hofstadter has written several volumes since. [p.s. -- I
recently discovered a great 1982 read (pdf) from Hofstadter on self-reference
and Gödel theory HERE.]
34 years later I've finally been bowled over by another book… not as
creative, ground-breaking, or Rorschach-like as Hofstadter's effort, but still
a vitally important, rich read, and the author, not surprisingly, is also a
Hofstadter fan.
A brief look at:
"The Outer Limits
of Reason" by Noson Yanofsky
Before picking up this book I'd not heard of "Noson Yanofsky," so I
was astounded that this is the best, most lucidly-written volume for lay
readers I've ever encountered on the underlying or foundational topics I most
enjoy, related to mathematics; including issues that cross the boundaries of
math, logic, philosophy, physics, and computer science.
In terse summary:
After an introductory chapter, Chapter 2 delves into "language
paradoxes" and self-reference (a topic that runs throughout the volume),
including the Berry Paradox, Richard's Paradox, and the 'interesting-number
paradox,' in addition to even more common ones. Chapter 3 moves on to
"philosophical conundrums," followed by "infinity" in
Chapter 4. Chapters 5 and 6 delve into a range of computer science issues.
Chapter 7, the longest and perhaps most difficult one (50+ pages), covers
"scientific limitations," including quantum mechanics and multiverse
ideas. This is followed, in turn, by chapters on "metascientific
perplexities," "mathematical obstructions," and a final wrap-up
chapter on "reason" and its limits.
More specifically, all the following topics (and more) are brought into focus
in this volume:
paradoxes
self-reference
infinity
epistemology
logic
sets
axioms
algorithms
uncertainty
P vs. NP
quantum mechanics
relativity
entanglement
Halting problem
Galois theory
Mandelbrot set
chaos
Anthropic principle
Platonism
scientific induction
Thomas Kuhn/paradigm shifts
Hume/Hempel/Karl Popper/falsifiability
Gödel incompleteness
The writing is clear, interesting, and comprehensible, covering a lot of ground,
without proceeding to such advanced elements as to throttle the reader along
the way (the editor has done a fantastic job!). The book ends with 15 pages of
excellent "notes" to the individual chapters, and a dozen pages of
good bibliographical references (each chapter ends with suggestions for further
reading as well).
One Amazon reviewer wrote "Reading
this book could be a mini education!" and that captures my own
feeling as well.
Having said all this I should note that the typical mathematician won't learn
any new math here; a typical physicist won't learn any new physics, and a
philosopher won't find new philosophy here... Rather, what is wonderful and
well-crafted (and rare) is the weaving together of all these (and more) areas
into a single tapestry on the nature of human rationality across such fields --
something I believe all students should have exposure to. To some degree most
of the chapters are self-contained units that can almost be read in any order
and be enjoyed, but reading from beginning to end is likely the best way to
appreciate Yanofsky's progression of thought and complexity, as he puts it
"from concrete to abstract."
The book's subtitle is, "What Science,
Mathematics, and Logic Cannot Tell Us" and that is the
central, important theme of this offering: that despite the success
science, math, and logic have in providing us with information, there exist
"truths" or information which are not only very difficult to gather,
but which are inherently beyond our capacity to attain.
"Certainty," and the hubris that often follows it, is one of the most
perilous dispositions humans can have… especially so in politics, religion, and
other arenas of culture… but even within science, where empirical evidence
reigns supreme, there are real limits to certainty and knowledge that need to
be recognized -- I know of no more important science lesson a book can pass
along, and I know of no book that does it as well as this one!
New Scientist reviewed Yanofsky's book here:
http://tinyurl.com/n5ld6yv
And in the blog-post just prior to this one I interviewed Dr. Yanofsky.
Choice magazine, May 2014.
This fascinating account describes the limitations of reasoning. Yanofsky (Brooklyn College; coauthor with M. Mannucci, Quantum Computing for Computer Scientists, CH, Apr'09, 46-4494) includes numerous examples from many different fields, providing convincing arguments that there are limitations to understanding the …
http://www.cro3.org/content/51/09/51-4991.extract
On knowing what we cannot know.
By Nickerson, Raymond S.
PsycCRITIQUES, Vol 59(17), 2014, No Pagination Specified.
Abstract
Reviews the book, The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us by Noson S. Yanofsky (see record 2013-37681-000). Yanofsky provides a rollicking journey of the mind that takes him and his readers through a myriad of intriguing—sometimes baffling—concepts, paradoxes, conundrums, conjectures, and proofs. This book is an immensely interesting read—an impressive and thought-provoking book. It is liberally illustrated with helpful diagrams and is extensively footnoted, containing many pointers to other sources of information on the topics discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved)