Why Siri is not going to “Wake Up” anytime soon. Cancer, Evolution, and the Nature of Nature: Part 3

By Perry Marshall

June 24, 2021

In the year 2000, Silicon Valley titan Bill Joy delivered a bombshell in Wired magazine: “Why the Future Doesn’t Need Us.”  He warned: Future machines will have no need of human beings, and we will have to compete with them for resources.

Not only is this NOT going to happen, it’s a handy distraction from the real problem.

All those sci-fi “computers taking over the world” stories like HAL 9000 and the Terminator are impossible given the current definition of a computer.

As are the predictions of mass unemployment. As is the much-ballyhooed “singularity,” in which we humans upload ourselves into the cloud and become immortal.

Such scenarios require something vastly different than anything we are building today. 

Moore’s law says the number of transistors on a chip doubles roughly every two years, so computers keep getting faster and cheaper. That still doesn’t change anything. A pocket calculator was as dumb as a box of rocks in 1975, and Siri is as dumb as a box of rocks today. Speed is not the issue. To solve the problem of intelligence, we need brand new “laws of physics.”

Erwin Schrödinger said so in his 1944 book What Is Life?, and now mathematics itself proves it. This is gigantic, and it still awaits our discovery.


When your six-year-old talks to Alexa, it only takes her 60 seconds to figure out: no one is home. Everyone instinctively knows the difference between a machine and a dog or a goldfish. Life is so special, non-life can’t even fool a six-year-old.

The difference between computers and life is:

Computers obey instructions. 

Life creates instructions. 

The difference cannot be overstated.

I’ve developed a mathematical proof of this. Please don’t assume only mathematicians can understand this. The proof is simple and most people can understand it, as I shall explain.

Alan Turing, the British mathematician who famously cracked the German Enigma code in WWII, invented the concept of the computer in 1936. His “Turing Machine” is a black box with an input, an output, memory, and a program running 1s and 0s. Turing is now pictured on the UK’s £50 note.

Everyone is familiar with Turing Machines because we carry them around in our pockets every day. You’re reading this blog post on one right now.

Turing’s Insight #1 was: A computer can crunch any math formula. 

Turing’s Insight #2 was: In general, it’s impossible to know whether it will find the answer without running the program first.

The implications run for miles. The simplicity of these statements is what makes these so profound. They establish clear boundaries for what computers can and cannot do. When combined with a third insight below, they define a sharp boundary between computers and life itself.

Insight #1, computers do math, is why computers crunch data far faster than we can. It’s why any language, text, audio or video can be converted to 1s and 0s and transmitted. Anything that can be reduced to formulas can be computed.

Insight #2, that you have to run the program before you know whether it will finish, is why computers crash. It’s far less appreciated than Insight #1, but it proves what computers cannot do. Computers can’t solve a math problem that can’t be reduced to a formula. And as any math professor can assure you, there are many problems in mathematics that can’t be reduced to formulas.
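Here’s a concrete taste of Insight #2 (my illustration, not Turing’s): the famous Collatz process. Nobody has found a formula that predicts how many steps it takes to finish — in fact, nobody has even proved it always finishes. The only way to find out is to run it.

```python
def collatz_steps(n):
    """Count the steps until n reaches 1 under the Collatz rule:
    halve it if even, triple it and add 1 if odd. No known formula
    predicts this count -- you have to run the program to find out."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))   # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1, so 8
print(collatz_steps(27))  # a surprisingly long run -- over a hundred steps
```

Plug in 6 and it finishes in 8 steps; plug in 27 and it wanders for over a hundred. No formula tells you that in advance.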


Why? Because computers, by definition, only do deductive reasoning. But humans, animals, plants and cells do inductive reasoning.

Deductive reasoning is straightforward logic. “A is true therefore B is true.” “All men are mortal, therefore Socrates is mortal.” “3+9=12.” Working from general principles to specific truths. 

Inductive reasoning is solving riddles: “B is true, so maybe A is true as well.” “Everyone I know of who got old eventually died. So will everyone die?” That’s a riddle, and the answer is: probably. But you can’t prove it. Induction (also known as “inference”) works from specific truths to general principles. There’s no way to know an exception won’t be found. Every swan you’ve seen may be white, but you cannot guarantee that there are no black swans.

Computers cannot make inferences. In fact, even when they appear to infer, they are only making pre-programmed deductions. This comes from the very definition of computing and math functions. In your high school algebra course, they told you: “A function is when there is ONE Y for a given X. If there are two possible answers, it’s not a function.” In the formula Y = X² there is only one output for any input.
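To make the “one Y per X” point concrete, here’s a tiny Python sketch (my illustration): running a function forward is deterministic; running it backward — inferring the input from the output — leaves you with more than one candidate, and nothing inside the math picks one.

```python
def f(x):
    # A mathematical function: exactly one output for any input.
    return x * x

assert f(3) == f(3)  # same input, same output, every single time

# Inferring x from f(x) = 9 is NOT a function: two candidates fit,
# and the math itself contains no rule for choosing between them.
candidates = [x for x in range(-10, 11) if f(x) == 9]
print(candidates)  # [-3, 3]
```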

Computers cannot make choices. They obey rules. 

This is where things get interesting! Because how do you “trick” a computer into making a choice?

Easy. You insert a random number into the program. Suddenly the computer has options because it is no longer 100% predictable. It spits out a list and you pick an option. The GPS offers you two or three routes rather than one. Maybe it even chooses from the three options itself. But… how does it choose?

“Drive the route that takes the least amount of time.” It still has to be given a set of rules in order to decide. 

There’s no way around this – the computer needs rules to get anything done. It can’t make up its own. 
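Here’s the GPS example as a Python sketch (the route names and travel times are made up for illustration): randomness generates the options, but a rule a human wrote still has to make the pick.

```python
import random

rng = random.Random(42)  # fixed seed so the "random" routes are reproducible

# Randomness generates the options...
routes = [{"name": f"route {i}", "minutes": rng.randint(20, 60)}
          for i in range(3)]

# ...but the "choice" is a rule a human programmed in:
# "drive the route that takes the least amount of time."
best = min(routes, key=lambda r: r["minutes"])
print(best["name"], best["minutes"])
```

Delete the `min(...)` rule and the computer is left holding a list, with no way to prefer one entry over another.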

It doesn’t matter whether your computer is twelve transistors strung together on your kitchen table, or the latest Pentium with billions of transistors running neural network software; this basic fact never changes. You can add inputs and more outputs, but none of that alters its essential nature. Computers run on pure logic. Period.

Which brings us to the problem:

All logic requires choices. Logic cannot create itself. 

In mathematics, choices are called axioms. Examples of axioms:

“In geometry, a line segment can be extended infinitely in both directions.”

“Our numbering system has ten symbols, 0 through 9. When we want a number bigger than 9, we add digits.” 

The decision to do geometry with straight lines is a choice. Base 10 is a choice. Our number system could just as well be based on 2 or 8 or 37 symbols. In DNA it’s 4.
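You can watch this axiom at work in a few lines of Python (my sketch): the number never changes; only the chosen symbol set does.

```python
def to_base(n, base, digits="0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    """Write n using an arbitrary number of symbols. The base is an
    axiom -- a choice made before any computing starts."""
    if n == 0:
        return digits[0]
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(100, 10))  # '100'     -- the familiar choice
print(to_base(100, 2))   # '1100100' -- same number, two symbols
print(to_base(100, 4))   # '1210'    -- four symbols, like DNA's alphabet
```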

So… where does choice come from?

The knee-jerk answer some people give is “It’s random.” While randomness can generate all the options you could ever want, someone still has to choose where to insert the randomness and what the criteria for success are.

Nobody battles randomness more than communications engineers. It’s called noise. It comes from the sun, arc welders, cosmic rays, thunderstorms and spark plugs. Noise is any interference you don’t want. In 1948 Claude Shannon called this “information entropy.” He showed that the equations for it are identical in form to those for heat entropy.

Heat entropy says: Once the toast pops out of your toaster, it gets cold, not hot, EVERY SINGLE TIME.

Information entropy says: Once you add noise to a signal, it gets worse, not better, EVERY SINGLE TIME. 

One way ticket.

In a computer, you can add noise anywhere you want and it will generate options for you. But without further choices being made by external agents, all the noise will get you is degradation.
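Here’s a little simulation of that one-way ticket (my sketch, with made-up numbers: an 800-bit message and a 5% flip rate). Each pass through a noisy channel flips some bits; no pass ever restores the original on its own.

```python
import random

def add_noise(bits, flip_prob, rng):
    """Flip each bit independently with probability flip_prob."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

rng = random.Random(0)
message = [1, 0, 1, 1, 0, 0, 1, 0] * 100  # an 800-bit "signal"

noisy = message
for _ in range(5):  # five passes through the noisy channel
    noisy = add_noise(noisy, 0.05, rng)

errors = sum(a != b for a, b in zip(message, noisy))
print(errors, "of", len(message), "bits corrupted")
```

The noise happily generated “options” — hundreds of altered bits — but without an outside agent choosing which to keep, every one of them is degradation.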

A random letter generator would take longer than the age of the universe to write the sentence you’re reading right now. You can have all the natural selection you want… but agents have to make choices in order to generate information.
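The back-of-envelope math is easy to check (my assumptions, for illustration: 27 symbols, a 40-character target sentence, and a generous billion guesses per second):

```python
symbols = 27                        # a-z plus space
length = 40                         # characters in the target sentence
guesses_per_second = 1_000_000_000  # a generous billion guesses per second

expected_guesses = symbols ** length
seconds_needed = expected_guesses / guesses_per_second

age_of_universe_seconds = 13.8e9 * 365.25 * 24 * 3600  # roughly 4.4e17 s

print(f"{seconds_needed:.1e} s needed vs {age_of_universe_seconds:.1e} s available")
# The generator needs ~30 orders of magnitude more time than the universe has had.
```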

So there are three levels of information:

  1. Choice
  2. Computation
  3. Chance

They always run in that order. Choice creates rules. Rules are the rails that computers run on. Chance is the opposite of choice. Chance erases rules. In my paper I label these Cognition, Codes, and Chemicals. Three distinct levels of causation. “Chemicals” only generate chance interactions, not information. I offer a $10 million prize for anyone who can name an exception.

Chance is noise and noise destroys. But agents harness chance. We do this in games by rolling dice. Chance plays an essential but limited role in any game, especially good games. In poker, the order of the cards is random and that’s why we shuffle the deck. But the number of cards isn’t random; the cards themselves aren’t random; and the rules aren’t random. 

Computation doesn’t create life. 

Life creates computation.

It’s that simple. In my paper Biology transcends the limits of computation I phrase these statements in formal mathematical language, but as I said, most people can understand these concepts pretty easily.

So no matter how sophisticated your machine learning / AI / expert system / robot / neural network is, it still runs on choice first (made by living things, usually humans), computation second, chance third. No exceptions. 

I said at the beginning that the hype around AI is really just a distraction from the real problem.

What is the real problem?

The real problem is: Someone owns every AI and machine learning platform. 100% of what these platforms do results from choices that human beings made. 

I’m the guy who wrote the book on Google and Facebook advertising. My 20 years as a consultant have revolved around “the Big Four” – Google, Facebook, Apple and Amazon. The ad platforms are heavily dependent on AI that cost billions to build, and now earns hundreds of billions of dollars.

As I say in the paper:

The author has written bestselling books on both these platforms and educated hundreds of thousands of advertisers in the use of these technologies. These platforms do not exhibit any form of cognition. If the capabilities of Google or Facebook rivaled even that of a tiny colony of bacteria, the market caps of these companies would skyrocket overnight.

When Elon Musk warns us AI is going to take over the world, it’s about spiking Tesla’s stock price, not future reality. AI is incredibly useful… but over-hyped.

AI is all “A” and no “I”

All the starry-eyed musings about AI becoming smarter than humans are a convenient distraction from this fact:

All machine learning algorithms are controlled by humans who deliberately sway public sentiment, politics, elections, medical and health advice, pandemic policies, news stories, media content, financial markets, government policies and mass movements to a degree most people cannot fathom.

1% of the people steer 99% of the conversation.

Welcome to the Matrix. You’re living in it now.

Most people think the problem is “greedy advertisers stealing my personal information.” But advertisers themselves only get the information you give them. It’s Google, Facebook, Apple and Amazon who have your information. For the most part, all advertisers care about is: Not showing an ad to you if you’re not likely to buy. 

The real question you should be asking is: What are the agendas of the owners and managers of these platforms? 

It’s not like their agendas are a big secret – it’s easy enough to sketch that out. Just watch what they do, what they censor and who they ban. This affects everything you see on your computer, smartphone and “news feed.” Most people are oblivious. 

Also, dear reader, please notice that every computer needs a caretaker. Despite decades of warnings that computers are going to create massive unemployment, the computer industry continues to employ more and more people.

This will never change – not until someone invents a completely different kind of computer that is not a computer. That New Thing, whatever it is, will be something entirely different.

In Part 4 I’ll explore what that New Thing might look like.


Perry Marshall

NASA Jet Propulsion Labs uses his 80/20 Curve as a productivity tool. His reinvention of the Pareto Principle is published in Harvard Business Review. His Ultimate Guide to Google Ads is the best-selling book on internet advertising. A business strategist and electrical engineer, Perry founded the largest science research award in history. The $10 million Evolution 2.0 Prize will be judged by scientists from Harvard, Oxford, and MIT. Seeing that existing financial incentives favor prolonging cancer rather than curing it; and realizing the medical profession has incorrectly defined the disease in the first place... he chose to apply entrepreneurial thinking to the problem.

