Can a Computer Devise a Theory of Everything?

Not precisely to do with books, but pickings are slim during the Thanksgiving holiday and following weekend.

From The New York Times:

Once upon a time, Albert Einstein described scientific theories as “free inventions of the human mind.” But in 1980, Stephen Hawking, the renowned Cambridge University cosmologist, had another thought. In a lecture that year, he argued that the so-called Theory of Everything might be achievable, but that the final touches on it were likely to be done by computers.

“The end might not be in sight for theoretical physics,” he said. “But it might be in sight for theoretical physicists.”

The Theory of Everything is still not in sight, but with computers taking over many of the chores in life — translating languages, recognizing faces, driving cars, recommending whom to date — it is not so crazy to imagine them taking over from the Hawkings and the Einsteins of the world.

Computer programs like DeepMind’s AlphaGo keep discovering new ways to beat humans at games like Go and chess, which have been studied and played for centuries. Why couldn’t one of these marvelous learning machines, let loose on an enormous astronomical catalog or the petabytes of data compiled by the Large Hadron Collider, discern a set of new fundamental particles or discover a wormhole to another galaxy in the outer solar system, like the one in the movie “Interstellar”?

At least that’s the dream. To think otherwise is to engage in what the physicist Max Tegmark calls “carbon chauvinism.”

. . . .

“Ultimately, I want to have machines that can think like a physicist.”

. . . .

Their tool in this endeavor is a brand of artificial intelligence known as neural networking. Unlike so-called expert systems like IBM’s Watson, which are loaded with human and scientific knowledge, neural networks are designed to learn as they go, similarly to the way human brains do. By analyzing vast amounts of data for hidden patterns, they swiftly learn to distinguish dogs from cats, recognize faces, replicate human speech, flag financial misbehavior and more.

“We’re hoping to discover all kinds of new laws of physics,” Dr. Tegmark said. “We’ve already shown that it can rediscover laws of physics.”

Last year, in what amounted to a sort of proof of principle, Dr. Tegmark and a student, Silviu-Marian Udrescu, took 100 physics equations from a famous textbook — “The Feynman Lectures on Physics” by Richard Feynman, Robert Leighton and Matthew Sands — and used them to generate data that was then fed to a neural network. The system sifted the data for patterns and regularities — and recovered all 100 formulas.

“Like a human scientist, it tries many different strategies (modules) in turn,” the researchers wrote in a paper published last year in Science Advances. “And if it cannot solve the full problem in one fell swoop, it tries to transform it and divide it into simpler pieces that can be tackled separately, recursively relaunching the full algorithm on each piece.”
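The strategy-trying search described in that quote can be sketched in miniature. The following is a rough illustration only, not the authors' code: the data, the candidate-formula library, and the names are all invented for the example, and the real system is far more elaborate (recursive problem decomposition, neural-network pattern detection, and so on).

```python
# Rough sketch, not the paper's method: brute-force search over a tiny,
# invented library of candidate formulas, standing in for the recursive
# strategy-trying loop the quote describes.

# Generate data from a "hidden" textbook formula: kinetic energy E = m*v**2/2
data = [(m, v, 0.5 * m * v ** 2)
        for m in (1.0, 2.0, 3.5) for v in (0.5, 1.0, 2.0, 4.0)]

# Candidate expressions, tried "in turn" like the strategies in the quote
candidates = {
    "m + v":        lambda m, v: m + v,
    "m * v":        lambda m, v: m * v,
    "m * v**2":     lambda m, v: m * v ** 2,
    "m * v**2 / 2": lambda m, v: m * v ** 2 / 2,
    "m**2 * v":     lambda m, v: m ** 2 * v,
}

def mean_sq_error(f):
    """Average squared mismatch between a candidate and the generated data."""
    return sum((f(m, v) - E) ** 2 for m, v, E in data) / len(data)

best = min(candidates, key=lambda name: mean_sq_error(candidates[name]))
print(best)  # the exact formula is recovered: m * v**2 / 2
```

Only the correct formula drives the error to zero, which is the sense in which data generated from a known equation lets the search "recover" it.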

In another more challenging experiment, Dr. Tegmark and his colleagues showed the network a video of rockets flying around and asked it to predict what would happen from one frame to the next. Never mind the palm trees in the background. “At the end, the computer was able to discover the essential equations of motion,” he said.
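The core idea of that experiment, that equations of motion can fall out of frame-to-frame position data, can be illustrated with a toy calculation. The frame rate and trajectory below are invented for the sketch; this is not the group's method, just the simplest version of the principle.

```python
# Toy illustration (not the actual experiment): recover a constant of
# motion from per-frame positions alone, using finite differences.

dt = 1 / 30          # assumed 30 fps video
g = 9.81             # the "hidden" physics to be rediscovered

# Vertical position of a projectile sampled once per frame:
# y(t) = y0 + v0*t - g*t**2/2
frames = [100.0 + 5.0 * (k * dt) - 0.5 * g * (k * dt) ** 2 for k in range(60)]

# For a quadratic trajectory, the second finite difference of position
# equals the acceleration times dt**2 at every frame.
second_diffs = [frames[k + 1] - 2 * frames[k] + frames[k - 1]
                for k in range(1, len(frames) - 1)]
g_est = -sum(second_diffs) / len(second_diffs) / dt ** 2

print(round(g_est, 2))  # → 9.81
```

The palm trees in the background correspond to everything this calculation ignores: the essential dynamics live entirely in how the positions change between frames.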

Link to the rest at The New York Times

7 thoughts on “Can a Computer Devise a Theory of Everything?”

  1. Given that the headline is a question, the answer is presumably “no”.

    I’m not sure whether or not to be impressed by the examples given, as they sound perilously like sophisticated curve fitting – which is part of the pattern matching at which neural networks excel, and which has little to do with what humans normally conceive of as intelligence – but this may be unfair. However, a much bigger problem is that the very existence of a TOE is no more than a hopeful hypothesis for which there is no supporting evidence.

    • Gödel’s incompleteness theorem pretty much proves that a Theory Of Everything can never be found. If complete, it is wrong, and if correct, it is incomplete. This applies to any logical system powerful enough to express basic arithmetic, and the universe (since we can deal with it, or parts of it, mathematically) is certainly such a system.

      • The proof of Gödel’s theorem applies to systems of well-formed formulae. How much of the world we deal with can be captured in well-formed formulae? It’s a leap of faith. I love the elegance of mathematics, but I fear it is a special case.

    • 1- The more recent “AI” pattern detection systems have become useful for being able to identify patterns out of incomplete data.

      2- The TOE believers operate under the assumption that we know enough about the universe to be able to (somehow) infer the parts we don’t know. To their credit, their Standard Model has demonstrated some predictive power in previously unknown regimes. So do the various String and M-theory models. One problem in telling them apart is that there is a seemingly infinite number of variations of the latter, and they’ve yet to predict anything different from the Standard Model.

      3- The Standard Model itself has been decried as “sophisticated curve fitting”, usually by proponents of the “clean and elegant” String theories which arise from first principles. The latter are decried as “mathematical doodling” by the other camp. Neither camp has succeeded in demonstrating supremacy and conclusively explaining both gravity and quantum effects. The debate continues.

      4- Neither camp even tries to explain “why” physics works the way it does, only “how”. Without the why, neither side will truly have a Theory of Everything, a label that BTW comes from outside the field. The insiders, starting with Einstein, have limited themselves to trying to unify relativity and quantum mechanics while acknowledging that some things would still remain outside a successful theory, which, properly speaking, would only model the universe as we see it. Few actually think we know anywhere near “everything”. “Theory of Everything” is media hype.

      5- Thus, the headline question can be safely answered with a big NO, yet inference software may yet prove a useful tool in directing researchers’ attention in the “right” direction. It is proving useful in other “lesser” disciplines wrestling with the problem of incomplete data. The most notable result so far has been identifying a fourth (fifth, depending on how one sees the “Hobbits”) human lineage out of limited DNA data of the various Homo subspecies, one that is neither Neanderthal, Denisovan, nor Cro-Magnon. The genes survive but no skeletal remains have been found. It may yet be proven a phantom artifact of the incomplete data on the extinct subspecies, but for now it remains unfalsified.

      The stories remain unfinished…

  2. I agree with a lot of what you say, but my reading suggests that claims of “string theory predictions” are pretty much just hype and fall apart when closely examined, so your second paragraph could just end “they’ve yet to predict anything.”

    If someone comes up with a string compactification that actually drops the standard model out as a prediction I will be thoroughly impressed (and very, very surprised!), but as you imply these models seem not only unable to do this but radically underdetermined. As for M-theory, I don’t think it really exists, other than as a hope that the various versions of string theory can be pulled together when you add one more dimension (but then I’m a “show me the equations” kind of guy, so I’m much more impressed by GR’s field equations, or by QM, than by anything we’ve seen since). It is also my impression that most people supposedly working on string theory have given up on a TOE (I may be wrong about this of course).

    I don’t think it fair to call the standard model “sophisticated curve fitting”, as there is some very sophisticated mathematics underlying it. Any “fitting” really just involves adding in the empirically derived constants (particle masses and the like), and within its own limited domain it has great predictive success (though it would be much better if those constants could be derived from first principles, or at least from a smaller number of values).

    If the SM and GR are to be combined, and dark matter and the totally misnamed “dark energy” explained, then the result will come close to what we might hope for from a TOE. It will not be for “everything”, of course, and this fact means that we don’t really need to be too concerned about Gödel (not that physicists would care; they generally are not ones for mathematical rigour). What use it will be – other than fun for scientists – is another matter. My guess is not very much, as we already know that reductionism really does not work (at least in the sense of starting with our most basic theory and deriving everything else from it), though maybe SF aficionados can dream of a warp drive.

    • Actually, I’m skeptical of both dark matter and dark energy.
      I expect both will go away with better data. They smack of epicycles.
      Quantised Inertia, for one, does away with both. Other, better answers likely exist.
      And the tension in measurements of the Hubble constant is another open issue.

      The Standard Model depends a wee bit too much on measured data for my tastes. It really does smack of curve fitting, however sophisticated the math might be. Plus the menagerie of particles? Too big. A theory of the elemental nature of things should itself be elemental.

      As far as I’m concerned, both camps can believe whatever they want but I’m sure there is a more complete and less baroque theory to be found in the future. All it takes is another Rutherford moment to break the curve fitting.
