Published in Pioneers

Michael Nielsen on visualizations, biological systems, and making a new science

By Devon Zuegel


72 min read

Michael Nielsen is a quantum physicist, science writer, computer programming researcher, and modern polymath working on tools to expand human capacity to think and create. He’s previously authored pioneering quantum computing books, propelled forward the open science movement, and published research on artificial intelligence. He now researches meta-science at the Astera Institute, while writing about his many interests online.

DEVON: Hello, and welcome to Tools and Craft. I'm your host, Devon Zuegel, and today I'm talking with Michael Nielsen. Michael is a scientist who explores ideas and tools that help people think and create, both individually and collectively. Michael helped pioneer quantum computing and the modern open science movement, and he also has the honor of being the person who originally introduced me to the ideas of tools for thought almost a decade ago. So really, we can thank him for the existence of this entire podcast.

Michael defies field boundaries more than anyone I can think of. He's leapt around from physics to metascience, to educational tools, to programming and beyond - and he's made tremendous contributions to each field he bumps into along the way. So Michael, thank you so much for taking the time for this conversation. 

MICHAEL: Thanks for having me on, Devon.

DEVON: One thing that you said before is that you believe far better social processes are possible in science, and that these could activate great latent potential for discovery. How have these social processes changed since you first started doing science?

MICHAEL: There's a lot more emphasis on accountability. I think that sounds like a good thing, and in many ways it is a good thing. In particular, a lot of science funders around the world do more exercises where they try to assess just how well things are going. Probably the best known example of this is in the UK, where they run what used to be called the Research Assessment Exercise - it has since been renamed the Research Excellence Framework. They assess all the universities and university departments to try and determine the quality of their research, and that determines the way they get funded. When I was an academic in Australia, there was a similar exercise taking place.

It's maybe a little bit mixed. People end up gaming the system, or trying to game the system. They worry so much about the impact this has on funding that you essentially end up with Goodhart's Law, where what gets measured gradually becomes a target. People start to target these kinds of exercises, and it can actually sometimes distort what it is they do. That's not always a good thing. But it's a really big change.

DEVON: And how do they measure quality?

MICHAEL: It varies from place to place. There are two common approaches. One is bibliometrics, where they count the number of papers and citations. Those are not terrible things to be looking at, but when they become targets, they become really quite strange things to be looking at. There's an interesting paper I was just looking at earlier today, which suggests that the use of those kinds of measures in Italy since 2011 has perhaps led to the creation of citation rings and lots of self-citation and things like that. It's not certain that that's what's going on, but it's at least suggestive that this is not very good for the process of science. So that's one approach - a very data-driven approach, where you're not really looking at what's actually being done at all. You're just using this very gross sort of metadata.

You need to understand [science] in a way that is not immediately quantitative or scalable, before you can develop good quantitative measures.

The other approach tends to be more panels, where individuals, usually pretty well known senior scientists, will try and assess research outputs from different people. They might ask that every person in the department has to submit their three best, five best, or however many best papers from the last three years, five years, or however many years. And they'll try and assess how important that work was, and develop some kind of aggregate sense of a department or a university. That approach is also pretty common with lots of variations between different funders. It's certainly a little bit harder to game, but there are still ways that it can be gamed.

DEVON: Let's assume that a scientist does not want to commit fraud, but they do want to craft their research agenda to get as high a score on this as possible - maybe maximize the number of citations they can get. What are the characteristics of research agendas or lines of inquiry that will tend to get more citations and tend to score more highly?

MICHAEL: There are at least two things. Joining very large collaborations is often a way of increasing this, for a couple of reasons. One is that you may end up as an author on a very large number of different papers, where you're just sort of part of the collaboration that is producing them. Maybe you're working on a detector in a particle physics experiment, and that detector is used in a lot of different experiments. As a result, you end up as an author on a lot of different papers. What's more, there will be, quite legitimately, a lot of self-citation amongst those papers.

The other thing it will certainly tend to do is drive people to work on things which are extremely fashionable, which have a lot of people already working on them, where there's some potential for you to have a high impact in this kind of way. It's significantly more difficult to carve out a career doing something where you're the only person, or where there's only one or two other people in the world currently interested in that topic - you're just not going to look so good on a bunch of these measures. When people are very aware of that kind of thing, it tends to suppress such work.

DEVON: Hmm. That's really interesting. And with a collaboration that produces many papers, maybe each of those papers is very good and of very high quality, but they're closer to each other in idea space than papers on completely different topics would be. So it seems a little odd for those to be counted as the same number of citations, even if it's sort of the same cluster of concepts.

One of the reasons why we regard an idea as tremendously important is because it was, a priori, so unlikely… so there's some funny, interesting tension between the importance of an idea and how a priori plausible it is

MICHAEL: Yeah, absolutely. It's just sort of a funny thing. It's using citations for a purpose they're not designed for at all. The person who started the Science Citation Index, and sort of all interest in citation counting, Eugene Garfield, he wasn't thinking of it as a method for impact assessment at all. Well, that's a little too strong. He was very slightly interested in it, but that wasn't his main interest. He was interested in tracking the lineage of ideas. In particular, he was really interested in enabling people to track down errors in the scientific record, so quite a different sort of purpose. Citations themselves, they're not meant to be intrinsically a measure of importance. It's just saying, we were influenced in this way by that paper. 

DEVON: It also might have been a more effective measure initially, before people caught on that it was going to be used that way. The idea of taking a snapshot of a system is very different from using a metric for a system that is going to be an input to the system itself later on, because now you change the system by observing it. 

If you had to design a quantitative metric for the quality of science, how would you do it? Or would you reject the concept entirely?

MICHAEL: I think what I'd probably do, if you put a gun to my head and said I had to do it, is recruit a lot of different teams, made up of very different types of people. There'd be a lot of historians of science, sociologists of science, philosophers of science - people who are interested in the qualitative evolution of ideas and the details, to try and understand things in that very qualitative way. You need to understand it in a way that is not immediately quantitative or scalable, before you can develop good quantitative measures. Scientists are not making their judgments individually just by citation counting, or something like that. You try to understand the evolution of ideas in your own area and figure out what is actually intrinsically important: what should you be working on? What kinds of ideas are actually important for the future of the field? And there's a fair amount of heterogeneity - different people have different opinions - and it's not necessarily clear that you can actually entirely eliminate that heterogeneity. I believe it is probably true that those differences of opinion are actually very generative and necessary. If you take that point of view seriously, then no metric is ever going to capture all of that. So the attempt to say what is most important is actually answering the wrong question. What you should be trying to do instead is generate a portfolio in different directions, some of which may be, in some sense, inconsistent.

Let me give you an example. Quantum gravity is a really famous example in physics: there's this long-standing problem of resolving an inconsistency between quantum mechanics and the theory of general relativity, our best theory of gravity. There are a number of different schools of thought about how best to do this. If you insist on picking out just one as the best, you will then concentrate all your attention on one particular approach. If instead you just allow, for at least some period of time, different approaches to flourish, and don't try to determine which of the two, three, five, or 10 different approaches is best - I think, usually, that's a significantly better way.

It’s something I like about Silicon Valley, actually. If you have two competitors in a space - let's say Netflix and Blockbuster, two classic competitors - the fact is that for a long time, although they were competing for the same market in some respects, they had their own internal infrastructure, capital, and momentum. And you don't have a situation where employees at Blockbuster are conducting the performance evaluations for people at Netflix, and determining who gets promoted, and these kinds of things. Yet the situation in science often resembles that a little bit, where somebody in one school of thought is actually doing the peer review for somebody in a completely different school of thought. I think it's actually good to be able to have those silos exist independently for some extended period of time. Ultimately, you do want to be able to make a resolution of which is the better approach. But in the meantime, giving one veto power over the other just seems kind of hopeless to me. Certainly, I'm quite certain that the CEO of Blockbuster would have given very negative performance reviews to the people at Netflix for a long time.

DEVON: As soon as you made that analogy, I was like, oh my goodness, that is exactly what's going on. The imagery that pops to mind for the silos is the Galapagos Islands. On those islands, different species were able to evolve independently of each other, and you end up with much greater diversity.

Computers would begin to be based on evolutionary processes and other ideas from biology. We would accept this because we get so much power from them, but we give up some of our control.

How do you keep those silos, while also allowing some of the cross pollination of ideas that is also so important?

MICHAEL: I mean, the cross pollination of ideas is independent of the mutual evaluation of the ideas. And to be clear, some mutual evaluation is great, but you don't want that to be sort of the only factor that determines people's continued existence.

One of the ideas that people have talked about and explored, and that is becoming a popular area of investigation, is the idea of randomizing grant funding. Basically, somebody applies with a project idea. Instead of having peer review where you essentially try to rank all the proposals and fund only the “best”, there's a basic sanity-check round of peer review, so that crank proposals are rejected. Then after that, there's simply a random selection made, and whoever's lottery numbers come up gets funded. The benefit of doing this kind of thing is that you're not suppressing ideas - you're not putting the Blockbuster employees in charge of the performance evaluation of the Netflix employees. You're just relying on individual scientists to make the best judgment about what they think is the best possible idea, subject to that basic sanity check. I think the first place in the world that did this was the New Zealand Health Research Council, I think it's called, and a few other grant agencies have done small pilot trials since then, but it's certainly not a widely used approach.
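
[A minimal sketch of that two-stage mechanism, with every name and number below invented for illustration; the actual details of the New Zealand scheme differ.]

```python
import random

def lottery_fund(proposals, n_grants, passes_sanity_check):
    """Toy two-stage funding lottery: screen out crank proposals,
    then fund a random subset of whatever survives the screen."""
    eligible = [p for p in proposals if passes_sanity_check(p)]
    random.shuffle(eligible)
    return eligible[:n_grants]  # fund as many as the budget allows

# Hypothetical usage: 100 proposals, budget for 20 grants,
# with a stand-in screening function that accepts everything.
proposals = [f"proposal-{i}" for i in range(100)]
funded = lottery_fund(proposals, n_grants=20, passes_sanity_check=lambda p: True)
```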

DEVON: I suppose if you, as a funding agency, knew the relative importance of different research ahead of time, you would have already done the science that solves the problem.

MICHAEL: Yeah, I mean, it's such a funny thing, where one of the reasons why we regard an idea as tremendously important is because it was, a priori, so unlikely. One of my favorite examples: one of my heroes, the biologist Lynn Margulis, is the person who proposed the incredible, almost ludicrous idea of endosymbiosis. She was opposed for a long time, but to my understanding this is now accepted by biologists. So there's some funny, interesting tension between the importance of an idea and how a priori plausible it is. She's one of these people who just has tremendous force and a willingness to persist, even when everybody else is telling her over and over that she's wrong.

Something also really interesting about biology is that we regard both single-celled systems and multicellular organisms as forms of life. It's incredibly interesting that we unite both of those. Why don't we regard an animal with, you know, 1000 cells as 1000 separate living beings? It's interesting that we don't do that - what the boundary is between what we consider an organism and a cell is, at least to me, not entirely obvious.

I always feel like JavaScript seems a little bit more, almost more biological. 

DEVON: I wonder now if we're missing things, because when we put things in categories, it's often so that we can think about them and make generalizations about the things in the category. So now it makes me wonder, what are we missing by blobbing them all together? And what might we learn if we were to split them apart?

MICHAEL: Yeah, one of my favorite little facts is that apparently in linguistics, the question of what counts as a word is somewhat controversial. There are different schools of thought on what should be considered a word or not. As a physicist, you get really used to having very clean categories, and some notion of correct categories. At first, it's really frustrating that there don't seem to be those clean conceptual cleavages. It's really interesting.

DEVON: I often feel that way, moving into different realms from programming. Programming is maybe a little messier than physics, but it still has a lot of things that are literally binary, in the sense that they're a boolean value. And when things are digital, there's a sense that this code produces the results I'm looking for, or it does not. Whereas moving into other realms, more like design or politics, often I'll ask someone a question that I think must have a simple answer, and it's always like, well, it depends. And sometimes it might even depend on what the person involved had for lunch that day. So there's no clean answer.

MICHAEL: I just feel like JavaScript is kind of a little bit of an exception in some ways. It's so surprising that I often can't remember how stuff is done, because there's all these weird exceptions and things are done in strange ways. If you want to delete a DOM element, you need to find the parent and then delete the child. That's crazy as a design, whereas something like Python seems much more like they tried to make a language that was pretty consistent. I always feel like JavaScript seems a little bit more, almost more biological. There's no real uniformity or consistency; it evolved to be whatever it is.

DEVON: That is a great way to put it. And I see that as its greatest strength and its greatest weakness.

MICHAEL: I mean, it's a beautiful language in some really interesting ways. I don't know if you've ever read Kevin Kelly's book, Out of Control.

DEVON: No, I haven't.

MICHAEL: He wrote this great book in the early 90s, and at the time I thought, “Oh, this is good”, but it wasn't making a really deep impression on me. The thesis is that human beings have gotten really good at designing and building systems which they understand really well, and where they try to get very close, tight control of all the different elements. Sort of like Le Corbusier, the modernist city planner - lots of straight lines and this kind of stuff.

Kelly's thesis was that we were going to move our machines to much more biological kinds of systems - we would essentially trade off power against the ability to control. Computers would begin to be based on evolutionary processes and other ideas from biology, and we would accept this because we would get so much power from them. But we would give up some of our control. You're sort of seeing this now - these systems that formerly were under control are getting out of control.

DEVON: It reminds me of conversations I've had with a lot of friends who previously ran fairly small companies and then scaled them up, or who are deciding whether or not they want to become a larger organization. When it's just you and two or three other employees, it's very easy to see what's going on, to have visibility and control over the situation. But there's only so much that four people can get done. Whereas if you're running a 10,000 person organization, you have very little control over what's going on - but you get a lot more power.

It also reminds me of a lot of artificial intelligence discourse, where so much of being able to understand what an AI is doing comes down to interpretability. It's such a key first step to being able to control it - something that I think a lot of people working on AI are concerned about, because the capabilities are far beyond what we understand at the moment.

MICHAEL: Just to go back to your friend who is a CEO, the comment was that it is often harder for them to know what's going on inside the company than it is for almost anybody else. I think when you're in a very senior position, although it gives you this bird's eye view, there are certain things that the person who is the underling might be uncomfortable sharing with their boss, or their boss's boss, or their boss's boss's boss's boss's boss. Just for natural reasons - you don't want to complain that you don't like the lunch menu, or that you worry a lot about your relationship to that person. So it creates a really interesting opacity for the person at the top of the organization. In some sense, the CEO has less visibility into some parts of the organization than just a random new hire, who can go and find out what people really think about certain issues.

DEVON: It's that observability effect again. It's similar to the citations: as soon as someone's looking at something, the behavior will change, or at least mask itself, so that it doesn't get read in a way that is unfavorable or out of control for that particular person.

MICHAEL: Yeah, that seems exactly right.

I've noticed over and over again, that many sorts of guilty pleasures like that, ultimately end up becoming, five years or 10 years later, the creative projects that I work on.

DEVON: You'd said something about biology being one of the fields that you aren't so familiar with - I would go so far as to say it's one of the few fields you're not familiar with, considering how many different places in science you've worked. As you've gone from field to field, what are the signs you read that help you see that you can maybe contribute something to a field you hadn't previously studied?

MICHAEL: There are two things that go on. One, sometimes I just get curious about the field, and, yeah, there's no particular opportunity to contribute. I might just pay attention to it for some period of time, talk with people about it, read about it. So I'm not working in the field at that point. What I want is just one insight that seems worth building on, and like it might actually potentially be important. It's a very local kind of thing - it's not anything more than just having one little creative thing that feels like it should be done. Often, actually, it feels like a tiny creative project that should take a week, and then it takes three months, six months, or even two years. It's sort of unfolding - I do a lot of things like that. I've never seriously worked on AI, but every once in a while I'll just take a week or something, because there'll be some little question I have that I can't find the answer to in any paper. So I'll just play around and try to get a little bit of insight into it, and maybe write up some notes. Yeah, you just repeat that often enough. Sometimes things start to seem very interesting in some particular direction, and maybe I actually start to develop some expertise. I don't know, I don't have a conscious theory of how this happens.

DEVON: When was the last time that you had a little project like that, that ended up expanding to take more of your headspace than you anticipated? 

MICHAEL: Yeah - a project with Kanjun Qiu, an essay that we thought might take a couple of months to write. Well, actually, we didn't even know what it was about - what I now think it's about. We had no idea that it was going to be about that; at first, it was just going to be some thoughts on science funding and how it might be done better. And it's turned into something completely different. It's taken 18 months and many thousands of hours to do. It started with some simple ideas and turned out to be about the extent to which the scientific system - the discovery system that we have - is able to learn and change and update itself. That system does a really great job of updating its ideas; lots of people can contribute to that. But if you're talking about the social processes and institutions of power by which it operates, those only change relatively slowly. That's the question that, ultimately, we got interested in and started to think about.

DEVON: I remember you talking about some of these questions many, many, many years ago. So it sounds like something you've been thinking about and chewing on in the back of your head for many years. Is that typical of the types of problems that end up grabbing your attention for long periods of time - where you've had something working in the back of your head for a while? Or does it sometimes come out of the blue, something you've never really thought about before?

MICHAEL: Both happen. In this particular case, about 15 years ago I took a few years to work as an open science advocate, and to develop a bunch of ideas around open science. Those are all very concrete instantiations of this problem: how do you change and update the social processes of science? I didn't conceive of them in that kind of abstract form then, but later I came back and realized, “oh, I've seen a lot of problems in these very specific forms - they're actually instances of a much more general pattern”. So in this particular case, it was kind of revisiting something from a long time ago.

One pattern I really like - actually, maybe this is just an excuse for self-indulgence. I think most people have, you know, various guilty pleasures, things that they enjoy doing or reading or whatnot, that they feel that they really shouldn't - they should be doing hard work on the actual serious projects. I've noticed over and over again that many guilty pleasures like that ultimately end up becoming, five years or 10 years later, the creative projects that I work on. I was a quantum physicist; I wasn't supposed to be thinking about open science. But I just got really interested in the way Linux kernel development was being done, and the way Wikipedia was being constructed, and then a whole bunch of other strange questions. That sounds very highfalutin - there were also many other less highfalutin things - but that's what ultimately turned into the interest in open science. This pattern just happens over and over again.

We get interested in things, for reasons that often we can't articulate. In fact, it's the fact that we can't articulate them that gets us interested... There's some interesting new structure in the world that is actually grabbing our attention.

DEVON: Yeah, that's been a pattern in my own work as well, although it's quite different types of work. The kinds of things that I find myself working on and thinking about today are the types of things that I was blogging about, thinking about, and researching in my off hours three to six years ago - things I always thought, well, I could never work on that.

MICHAEL: Paul Buchheit had this really nice heuristic he followed when he started investing. He would sometimes turn founders down - they would pitch and he would think, “that's obviously not going to work”. And then he would keep thinking about their proposal over and over in the days after. Eventually, he realized that whenever that was the case, he should go back and invest. His unconscious mind, I guess, was trying to tell him something about the proposal. That's a very general pattern. We get interested in things for reasons that often we can't articulate. In fact, it's the fact that we can't articulate them that gets us interested. It's the sense that, you know, there's some interesting new structure in the world that we haven't seen before that is actually grabbing our attention. But of course, once you start to master that, it becomes a tool that you can use in other pursuits. Anyway, that's my way of justifying many wasted hours on YouTube and other places.

DEVON: I think it's very compelling. I think I'm gonna be thinking about this over the next few days, in fact.

MICHAEL: Yeah - urbanism, one of your great interests, is very much that for me. We both know and love the work of Jane Jacobs, and many people like that. I had no work-related reason for reading Jane Jacobs, that's for sure, but she's one of the people who influenced me the most. Just the way she thinks about building complex systems, what it means to do it well, and how we should do it. That influences absolutely everything else I do, but I wasn't thinking that when I read her work - it was just that this was really interesting, for some reason I didn't entirely understand.

DEVON: Now that you say that, I see a lot of resonance between the way you think about science as an ecosystem, and the way she talks about cities as ecosystems, and how they can remain healthy, strong, and vibrant.

MICHAEL: Yeah. It's another example of the use of decentralized knowledge to improve society. That very much influences the way I think about science funding. You have the "eyes on the street" in Jane Jacobs, which provide safety, security, and so many other things. In science, you can either have very centralized funders making all the decisions, or you can try to devolve a lot of trust out to individual scientists, and trust that individual scientists may actually have a much better idea of how to spend their talents than some centralized kind of vetoing power. There's a structural similarity between those arguments. There are many, many ways that that kind of thing has influenced my thinking about science funding.

DEVON: The beauty of a truly powerful analogy is that it can predict things that you hadn't considered yet. Instead of being a one-to-one mapping, it's a many-to-many mapping. Once you observe something about one system, you realize, “oh, maybe this is true about the other one as well”. So have you had that experience comparing cities to science?

MICHAEL: Well, actually, I'm just having it right now. In fact, I've never really made this particular type of analogy before. I'm just wondering, you know, what the analog of safety and eyes on the street is in science. Maybe there's some analogy to be made to the process by which we keep scientific results reliable.

Jane Jacobs makes this lovely point that you want city block sizes to be very small. The smaller they are, the more rapidly you can get mixing, because you have a cross street, so people from other streets can come in, and instead of having a one-dimensional system, you actually start to get access to a full two-dimensional grid. I'm wondering what the analog of small block sizes is in science. Actually, maybe it's got something to do with the size of the organizational units you use to do science. Maybe if they're relatively small and they're put in serious contact with each other on a relatively frequent basis - there's kind of an interdisciplinary confab where two or three of these units get together and talk to each other.

DEVON: On the safety one, it makes me think about policing, and how the communities with the highest rates of safety and the lowest rates of crime typically have them not because there are tons of police on the streets. It tends to be because there are social and cultural norms that keep people from harming each other. Also lots of other factors, like wealth and opportunities, and whether people have options besides robbing each other, and so on.

I have never been a scientist, but I could imagine that it's much more motivating for people to do great science and have honest results if it's because they want to impress their fellow scientists and legitimately find an interesting contribution. Whereas if there's a national body that makes sure that your science is good, that's probably a lot less personally motivating.

I certainly had the experience of starting to really understand mathematics as a teenager, and really giving up video games as a result… Why would I be trying to play video games, when I could be doing something that was 1000 times more rewarding?

MICHAEL: That's certainly true. But I think there's an even more fundamental thing, which is just the enjoyment of doing good work and understanding. 

I certainly had the experience of starting to really understand mathematics as a teenager, and really giving up video games as a result, because mathematics was so much more interesting. Then a little bit later, science was so much more interesting. Like, why would I be trying to play video games, when I could be doing something that was 1000 times more rewarding? Just intrinsically, I didn't need anybody else. Mathematics really has this incredible clarity in that way.

I remember first understanding Euclid's proof that there are infinitely many primes, and the idea that the square root of two is irrational - it's just the most incredibly beautiful thing. Then you start to be given little problems of your own, and you start to have the same kind of shock of understanding. That's just so rewarding. So I think there's a lot of that as well, which is independent of impressing your peers - which is not to say that's not also important. I think for many people, discovery is already such a motivating factor. I meet a surprising number of people who have heard about the replication crisis in psychology, or things like this, and they think that scientists need to be policed, or that somehow the incentives aren't right. Sometimes they don't understand what an enormous intrinsic incentive there is.
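
[For readers who haven't seen it, a minimal sketch of the Euclid argument being referred to:]

```latex
\textbf{Claim.} There are infinitely many primes.

\textbf{Sketch.} Suppose the only primes were $p_1, p_2, \ldots, p_n$, and consider
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$.
But $N > 1$ must have some prime factor, and that factor cannot be on the list,
contradicting the assumption that the list was complete. $\square$
```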

DEVON: On the point of intrinsic motivation, how widespread do you think that is in the population of scientists? If you were to pluck a random scientist out of a hat, do you think that would apply to them, more likely than not?

MICHAEL: Certainly, for the scientists I've met in my life - I think for the great majority of them, if they had wanted to become wealthy or powerful or high status, there were many other things they could have done that would have afforded them greater opportunity to do that. I think most of them just absolutely adored science. That's not always true - actually, it tends to be that they either absolutely adored science as kids, or else there was a little bit of luck or chance: they were in the right lab at the right time, and all of a sudden they realized, “Oh, this is great”. So they forwent those other possibilities.

With that said, it is difficult to escape some type of careerism. People like to eat. Most people, all other things being equal, prefer to have a higher salary. There are these very sad cases of people committing fraud or something very dubious. Very often, what's going on is people who just become too invested in some false notion of success. They're disconnected from what they're really supposed to be doing. It's sort of sad individually, and of course, pretty terrible for our society.

DEVON: I could even imagine a situation where that's not inconsistent with some intrinsic motivation. I can reflect on times where, let's say, I'm having a conversation with somebody about a topic I'm deeply interested in. I actually want to learn from them really, really badly. But then sometimes I'll have a moment where I think, “If I ask this question, I will look stupid; it will lower my status in their eyes, because they will think 'she doesn't know that already?'”. I always try very hard to push through that feeling because I value the information, but I can feel the tug of “maybe I should hold back, maybe I shouldn't ask this question". I don't think that is inconsistent with me wanting to know the answer. It's just another feeling that is layered on top. Those two things, while they end up having conflicting results, can certainly coexist.

In something you wrote, you quoted Paul Dirac. He said that the 1920s, when quantum mechanics was discovered, was a period in which it was very easy for any second-rate physicist to do first-rate work. And you said that history suggests the early days of new scientific fields are often golden ages, with fundamental questions about the world being answered quickly and easily. This feels to me like one of those "you know it when you see it" things, but I'm really curious: what, to you, does it mean to create a new field?

MICHAEL: Oh, yeah, that's a really interesting question. What is a field? That is a really interesting question, and really complicated, and as far as I know there's no settled answer. Okay, so one possible set of associations you might have is to some deep set of related ideas. For example, the Maxwell-Lorentz equations are used to describe electromagnetic phenomena - this is an incredibly deep set of ideas. You can spend all of your days just studying the consequences of these equations, and understanding all kinds of electrical and magnetic phenomena. So in that kind of approach to thinking about what a field is, a field is somehow a social structure that grows up around a particular deep set of ideas.

Now, something like physics - it's not a single deep set of ideas. It's actually a whole lot of loosely related ones: there's quantum mechanics, there's condensed matter physics, there's astrophysics, there's hydrodynamics. It's a loose agglomeration. So I've started with deep ideas, but then you also have these kinds of political structures that sit on top of them - you know, is it a field? Is it something which has a named department at universities? Then you've made it entirely a political thing.

But actually, the time at which you start to get program officers and program managers at major funders is often a very important time of transition in a field - before that happens, there's not really much opportunity for a set of ideas to start to grow. I was involved a fair amount in the relatively early days of quantum computing. It was really interesting to watch as people started to get the ability to do things like run conferences, or submit to journals. It sounds crazy, but it was actually quite difficult to know where to publish, because there wasn't really a publication home. There are different transitions that take place in the life of a field, but I do think, at the core, if there's not a deep set of ideas, there can't possibly be a field.

The reason I've expounded at such enormous length here is that there are two interesting things that can happen. I like to think of Potemkin fields - I won't try to name any particular examples. It's when, you know, money and interest try to declare that something is a field, but there's no deep set of ideas. You can spend as much money as you'd like: if no deep set of ideas is discovered, you'll have plenty of activity - you might have journals, you might have all kinds of things happening - but there will never really be any substance to it.

This affects, or can affect, interdisciplinary work to some extent, where maybe you have two fields, both of which are grounded in deep sets of ideas. But just declaring something to be at the intersection doesn't mean that there are interesting, deep ideas to be discovered there. Sometimes there are - I think quantum computing basically occurred by people mashing up quantum mechanics with computer science. But sometimes interdisciplinary work can flounder a bit, because there'll be people trying to work at the intersection of two fields without a deep set of ideas there to be found.

DEVON: These Potemkin fields - I like that phrase a lot.

MICHAEL: You might be able to make the case that computing in the 19th century was an example of a Potemkin field. People like Charles Babbage attempted to invent the field of computer science. They actually got a fair amount of support for it, but the time wasn't quite right. They were asking great fundamental questions, and were able to make a little bit of progress, but the time wasn't right for the field to exist. So they weren't really able to get to the incredible set of ideas that von Neumann and others had in the 1930s and 1940s, with the invention of the transistor and so on.

DEVON: What would you consider the field-iest field - like, the archetypal field?

MICHAEL: Well, you know, physics was very successful early, particularly classical mechanics - Newtonian mechanics. So it's very tempting to say, “oh, this is the prototype for a field”. But part of what's exciting is when things work in very different ways. I don't know much about AI, but when I hear some of the criticisms that are made by skeptics, I maybe hear a little bit that they're trying to force it into an old mold. They want systems that they can really understand - like, you know, how they operate. If they're talking about things like language models, they want to be able to understand the relationship of the model to things like the different parts of speech - nouns, verbs, grammar, and so on. And we just don't have that. We don't have a principled way of understanding the way any of the big foundational models are operating. Maybe some of the old standards actually shouldn't be applied.

There's usually some back-and-forth interplay between tinkering - fooling around with stuff - and detailed scientific understanding of what's going on; the tinkering can sometimes be improved by that understanding. Maybe in the case of AI, you don't actually need the deep understanding step quite as much, because we can do the experiments so quickly. I think the skeptic says, “You're never going to have successful artificial intelligence if you don't stop and understand what these systems are doing. Look at the history of the way we've improved technologies in the past - we always needed to understand how they operate”. The counterpoint to that is: well, yeah, that's true. But actually, we're in a different situation. Today, our ability to test our tinkering is just much greater than it has been in the past; we can try many, many, many more possibilities automatically. So we don't need really strong theoretical explanations to rule out incorrect lines of investigation. We can instead just try a trillion, or a trillion trillion trillion, different things and rely on our ability to recognize when something is working, rather than derive from first principles why it's working.
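
[A toy sketch of that generate-and-test style of search; everything below is invented purely for illustration. The point is only that nothing in the loop explains why a candidate works - it just keeps whatever scores well.]

```python
import random

def random_candidate():
    # Stand-in for "one of a trillion things you might try"
    return [random.uniform(-1, 1) for _ in range(8)]

def score(candidate):
    # Stand-in for "recognizing when something is working"
    return -sum(x * x for x in candidate)

# High-throughput search: generate lots of candidates and keep the best,
# with no first-principles account of why the winner works.
best = max((random_candidate() for _ in range(100_000)), key=score)
print(score(best))
```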

DEVON: It sounds like you're describing a sort of feeling towards understanding as opposed to thinking towards understanding.

MICHAEL: A little bit, yeah. I mean, very much this intuitive approach, where you don't necessarily systematically understand why things are improving. Feeling is not a bad way of putting it. But you can't go from, like, the steam engine to a modern Tesla without having a lot of scientific explanation somewhere in between. In some sense, we're trying to do that on the AI side without that sort of detailed understanding in the in-between time. So it's easy to see why you might be skeptical. But you just couldn't test a trillion trillion trillion intermediate technologies in the past.

DEVON: I think what I'm imagining is that the technologies we could create form a tree, and you have all these different choices that you could make, similar to the choices you could make in chess. The human mind has to build models that make it more clear which paths to go down, because we just cannot compute all of the different possible paths. But with an AI, with a computer, we can run down all the paths and see, “is there anything interesting at the end?” It can come back and tell us, so we won't need as much of a model of understanding about why that path is the right one, because we've tried them all. We just pick the one that has the result we want.

MICHAEL: Yeah - you've just said what I was trying to say much more clearly than I said it.

DEVON: No, it makes a lot of sense. I had never considered that before. I don't follow AI nearly as closely as you do. Is this something that you're seeing in the behavior of how AI researchers are doing their work?

MICHAEL: It's just that when I talk to AI researchers versus some of the critics of the AI researchers, I see that dichotomy a lot. It is possible I'm unfairly characterizing some or all of these people. But when I see people like David, or to some extent Gary Marcus, certainly Noam Chomsky, critiquing approaches to AI, it seems to me that a big part of what they're saying is that we should have detailed understanding telling us - in your way of describing it - which of those branches not to go down, which ones are the correct ones to go down. And they're not very happy with a situation where people essentially say, let's just go down all of them and rely on the ability to recognize at the end. Whereas the thrust of the AI researchers seems to be that they're much happier just trying a very large number of different things.

There's a really interesting question and answer with the AI researcher Ian Goodfellow. Somebody asks a question along the lines of, “Do you feel upset when one of your experiments doesn't work out?” And he says, “Oh, no, not at all. I adopt what I consider a high-throughput approach to experimentation. In fact, while doing this question and answer session, I've got a whole bunch of different experiments running, and I'm seeing the results”. So for him, it's just a very cheap thing, where he's able to keep asking questions and keep getting answers quickly enough that he can do it while actually participating in the Q&A. I really liked that terminology - a high-throughput approach to experimentation. It seems to me like, in many ways, a fundamental shift in how we think about science.

Friends who work in AI labs - commercial, industrial AI labs - now tell me that one difference they have noticed is that there is much more emphasis put on having access to really first-rate engineering resources. In the past, some poor grad student had to write all the code and, you know, arrange all the infrastructure. Now there will be a team that just specializes in making sure all the infrastructure is working extremely well, being able to rapidly scale up experiments and do these kinds of things.

For certain problems, they actually had to flip pieces of paper over the monitor to cut the kids off. Instead, they actually had to stop and think - sometimes speed is your enemy.

DEVON: How do you think physics experiments would be different - or physics in general - if physicists had more of this mindset of “let's just throw as many experiments at the wall as possible and see what sticks"?

MICHAEL: There are often intrinsic timescales in physical systems that make that hard. In my one published experiment - I'm a theoretical physicist, but I have one published experiment - we were using a particular molecule to do an NMR experiment. And we needed to wait roughly three minutes at the end of every cycle, simply for the molecule to relax back to its original ground state. When we were just setting up, we would typically wait like 50, 60, 70 seconds. So it wasn't really being reset properly, but it was mostly being reset - that was just for us to do some initial rough calibration. But later on, when we were actually doing the real experiment, there was just this really annoying three-minute wait. Everything would have been so much easier if we could have eliminated that. If we'd had multiple machines in parallel, that also would have been very profound for us. But I can remember that the cost of the machine was on the order of $10 million, so having multiple machines was out.

I don't know what the consequences would have been. I instinctively feel that it would have completely transformed the process for us - not always, and not entirely for the better. We did a lot of good thinking in those three-minute intervals. Basically, when doing an experimental run, you'd see the results, and in the three minutes while we were waiting, we'd write up a few lines about what we had just seen. Sometimes, of course, we'd spend much more time if something very important had just happened. But most of the time, it was just a few minutes of writing. There were a lot of good thoughts jotted down there. Experimentalists make comments to the effect that more thinking and less work is sometimes the right approach to problems, but it's not always the easiest thing to do.

There's another interesting example. I can't remember who I heard this from - possibly from Alan Kay, possibly from somebody who was at his school in Los Angeles in the 70s or 80s. They had a bunch of kids who were some of the first users of computers. For certain problems, they actually had to flip pieces of paper over the monitor to cut the kids off from the screen, so the kids couldn't just keep randomly trying calculations to solve some problem. Instead, they actually had to stop and think. Sometimes speed is your enemy. Actually, that's a fun question to think about: cases where really dramatically speeding something up - like the ability to iterate - has actually resulted in a sort of secular slowdown, where the overall process has maybe been harmed by it. Can you think of any examples like that?

DEVON: I would say debugging often has this effect in programming, where if I get into a rhythm on debugging, I'm just trying all sorts of different things. This is something many, many people experience: as soon as you step away from the computer to go to the restroom, or grab a coffee, or go to sleep, that's when the solution hits your brain. I think the fact that you have so many ways to test something at your fingertips can often stop you from stepping away and just letting the problem sink into your subconscious.

MICHAEL: Going for a walk is often the best debugger.

DEVON: Right, right. Luckily, there are certain basic human needs that you have to fulfill that sort of force this on you - like, “Oh, I haven't eaten in 12 hours, maybe I should stop trying to find this bug that is making me so frustrated”. And then you go have a sandwich and you think, “Oh, I solved that”. On the other hand, those hours of debugging, poking at the system, getting lots of different inputs into your brain, are helping you build a model. That subconscious process would not be able to do what you want if you hadn't done a lot of that.

As good as slowing things down can be, sometimes I think that there are plenty of processes in the world that slow things down already, and it's much harder to speed things up. So I'll tend to bias towards tools that will make me faster. There are just plenty of things that are going to make you stop and, you know, give you time to think, but not so many that help you get interesting, relevant inputs more quickly.

MICHAEL: I wonder - the use of libraries in computing seems closely related, where some particular libraries become so canonical. I'm thinking of the LINPACK library in science, which is used to do linear algebra. Maybe there are people who like debugging all the time, but I also wonder if important parts of the code may just end up being like the sort of civilizational infrastructure we neglect. They become infrastructure that everybody uses, but nobody ever thinks about.

It is a really interesting long-term trend. I think in our civilization, some group or person spends an enormous amount of effort developing [the infrastructure]. If they're really good, then they just sort of sit there, and nobody really has to think about them for a long time. And we're gradually building more and more layers of that. So it's really good from an immediate speed perspective - you just want to be able to import NumPy or whatever. But over the long term, it creates an interesting potential for catastrophic failure: if there's a bug in one of those things in some important way, it can potentially have interesting system-wide effects.

DEVON: This is the stuff of my nightmares. I’m not going to sleep tonight. Thanks, Michael.

MICHAEL: I don't know. I'm sorry. I flew over the main distribution center of, I guess, supermarkets in Toronto once. And it was just interesting, kind of looking down from the plane and thinking: when you've got a centralized system like that, that is so important - like an essential function of a city - what happens when that starts to fail? Just how brittle are these systems? I wonder about it.

I gave the example of LINPACK. I'm sure that LINPACK is used in things like the discovery by the Large Hadron Collider of the Higgs boson, or the discovery of gravitational waves by LIGO, and probably in a zillion other important discoveries. It's just a sort of fun game to think about: how would you detect subtle software bugs in the output from those experiments? I'm quite confident those teams know a lot about what they're doing. But at least as a science-fictional sort of scenario, it's fun to think about the possibility of bugs very deep inside software actually causing us to come to erroneous conclusions about the way the world works. Or maybe about the way in which food gets distributed, or the way the electrical grid is stabilized, or something like that.

DEVON: There are all those stories of biology labs using Excel, where gene names get parsed as dates by Excel instead of as genes, and then all these results end up way off. I hear stories like this pop up pretty frequently, and those are just the ones that we hear about - I'm sure that there are many, many more that we are not aware of.
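
[A toy sketch of that failure mode. The heuristic below is invented for illustration and is not how Excel actually works, but it shows how a loader that "helpfully" coerces month-plus-number strings into dates will silently mangle gene symbols like SEPT2 or MARCH1 while leaving others alone.]

```python
import re

# Invented month table and coercion rule, purely to illustrate the failure mode.
MONTHS = {"JAN": 1, "FEB": 2, "MAR": 3, "MARCH": 3, "APR": 4, "JUN": 6,
          "SEP": 9, "SEPT": 9, "OCT": 10, "DEC": 12}

def naive_coerce(cell: str) -> str:
    """If a cell looks like a month abbreviation followed by a number,
    'helpfully' rewrite it as a day-month date; otherwise leave it alone."""
    m = re.fullmatch(r"([A-Za-z]+)(\d+)", cell)
    if m and m.group(1).upper() in MONTHS:
        return f"{int(m.group(2)):02d}-{MONTHS[m.group(1).upper()]:02d}"
    return cell

for gene in ["SEPT2", "MARCH1", "TP53"]:
    print(gene, "->", naive_coerce(gene))
# SEPT2 -> 02-09, MARCH1 -> 01-03, TP53 -> TP53
```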

I think this is where viewing problems in different ways, through different lenses, becomes so important, so that you can sanity check the results. I mean, in math class in elementary school, I remember they said to solve the problem two different ways. That way, if you made a mistake on one of the paths, it will show up and you'll see that there's a different result. I would imagine that with some of these problems in physics, it's probably pretty difficult to show something one way, let alone two ways.

MICHAEL: So actually, for the Higgs boson, there are at least two detectors at the LHC. So you can, in principle, run them not entirely independently - there's still going to be a lot of common infrastructure; they'll still all be using whatever, you know, Intel or AMD. So it's not like they're completely uncorrelated, but there's quite a bit of independence. For LIGO, they had a team whose job was essentially to inject fake signals - to act as a sort of Chaos Monkey in the system, which I think is just great. Having that kind of adversarial group built into the design of the experiment is so cool. It also must have been so much fun. I mean, imagine you're part of a scientific group and you're hired, essentially, to sneak into the lab at night and do nasty things. That's the role they have, and you want to be resilient against that kind of behavior.

DEVON: It sounds like white-hat penetration testing in software systems, but for intellectual systems. That's fun - I would definitely watch a heist movie with that.

MICHAEL: That's such a great idea. That's such a great idea.

DEVON: Or at least like a sci-fi novel heist movie - I think that would be fun.

MICHAEL: I'll bet some actually have. A great thing about science fiction is that it seems like any idea you or I could possibly have, some sci-fi author has at least written a short story about it. They're so imaginative.

DEVON: We'll have to dig one up and link it in the show notes if we can find one. That'd be fun. Besides intellectual chaos monkeys, what are some of the most successful tools for thought that you've seen in physics?

MICHAEL: Language, mathematics, symbols? Yeah, these are all things we take for granted, but of course they're just amazingly important. Things like Arabic numerals are super easy to take for granted - even just the idea that place matters for numerals is incredibly deep. The fact that if I consider the number 21, the 2 has a very different meaning in 21 than when it's just 2 alone. We don't think about that, but it's actually quite a different symbol; in some sense, the context has changed its meaning. So I think you're probably asking about much more recent things, though - is that right?
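
[Spelled out, the place-value point is just that the same digit contributes a different quantity depending on its position:]

```latex
21 = 2 \times 10 + 1 \times 1,
\qquad
12 = 1 \times 10 + 2 \times 1 .
```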

DEVON: That is true, but that was also a good answer. What are some recent tools for thought developed in physics, say, in the last 100 years?

MICHAEL: Just narrowing it down to a century? Yeah. Not even giving me 100,000 years?

DEVON: Yeah.

MICHAEL: Certainly, Mathematica has had an enormous impact on physics. So too have things like MATLAB and NumPy, and things like that. I have used those systems, but I'm not a master in the way some people are. I had a student, Henry Haselgrove - what Henry could do with MATLAB was astonishing. We would have a conversation about what I thought of as a very conceptual, theoretical kind of question. Then he would just spend a few minutes in MATLAB, and even though he was doing computations with particular matrices, he was able to get conceptual insight about abstract mathematics, which I thought was very interesting. It was much more rapid than I could possibly manage, restricted as I was to a completely different mode of thinking.

I thought it was really interesting to work with people who have that kind of capacity - to do very concrete calculations with specific numbers, but draw interesting conclusions about very abstract conceptual questions. Physics has certainly changed, to the point where computational and numerical methods have really become a third way of understanding so many systems. You just can't analytically solve a lot of problems. You know, if you want to understand what the gravitational wave signature of colliding black holes is going to look like, you need to do some pretty heavy-duty numerical calculation. That will then tell you what sort of smoking-gun signatures to look for.
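
[A minimal sketch, in NumPy, of that mode of working. The specific computation below is invented for illustration, not anything Henry actually did: quick numerical experiments on particular matrices hinting at an abstract fact - here, that Hermitian matrices always have real eigenvalues.]

```python
import numpy as np

# A toy example of the "concrete calculation -> abstract insight" style
# (invented for illustration): poke at random Hermitian matrices and notice
# that their eigenvalues always come out real, a hint at a general theorem.
rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    h = (a + a.conj().T) / 2                 # force the matrix to be Hermitian
    assert np.allclose(np.linalg.eigvals(h).imag, 0, atol=1e-9)
print("eigenvalues of 1000 random Hermitian matrices were all (numerically) real")
```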

In some sense, just that ability to do simulation has transformed not just physics, but all of science. You're able to take a system specification, which is very broad, and then answer questions about specific behaviors under specific circumstances which, absent simulation, would be completely and utterly inaccessible to you. You can't solve the problem of figuring out what kind of gravitational wave signature will be produced by colliding black holes analytically - the problem is too complicated. You could do it experimentally, if you knew that you had two black holes nearby. But we don't know that we have two black holes nearby; we're trying to figure out whether what we're seeing is, in fact, two black holes. So it's important to have this extra method, the method of simulation, which is able to say, "Oh, two black holes colliding would look like this in the gravitational wave signature". Then you study a whole bunch of events like that in your simulations - black holes and how they behave - and that can be used as input to other experiments. I don't know if that answers the question you asked, with that particular example.

How is it transforming science in that particular instance? It's enabling you to make inferences about systems where, in the experiment, you're not actually sure what the constituent systems are; you're instead inferring that from the outcome of some theory plus numerical simulation. So that's a new ability in science, and a pretty significant one. I'm thinking in real time here.

You can certainly do the thing where you just try simulating lots and lots of different possible systems that you think might be out there somewhere - neutron stars colliding with neutron stars, a neutron star colliding with a black hole, many other possibilities. You can simulate all of them, and then you can look to see whether those signatures show up in the data. That's been the process, I do know that. They did early simulations of some of these important, plausible classes of astrophysical phenomena, and then they just had to wait and see what actually showed up in the data. There was no guarantee that they were going to see black holes colliding.
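As a picture of what "simulate the templates, then look for their signatures in the data" means, here is a toy sketch in Python. It is nothing like a real gravitational-wave search - the waveform is made up, there is no detector noise model, and the injection is deliberately loud - but it shows the bare logic of sliding a simulated template across noisy data and looking for where it matches.

```python
# Toy matched-filter sketch: bury a made-up chirp-like template in noise,
# then slide the template across the data to recover where it was hidden.
import numpy as np

rng = np.random.default_rng(1)

fs = 1024                                    # samples per second (arbitrary)
t = np.arange(0, 1.0, 1 / fs)

# Made-up "chirp": amplitude and frequency both rise toward the end.
template = (t ** 2) * np.sin(2 * np.pi * (30 * t ** 2 + 80 * t ** 3))
template /= np.linalg.norm(template)

# Synthetic data: Gaussian noise with the template injected at a hidden offset.
data = rng.normal(size=5 * fs)
true_offset = 2 * fs + 137
data[true_offset:true_offset + len(template)] += 8.0 * template  # loud, for the toy

# Matched filter: correlate the template against every offset in the data.
overlap = np.correlate(data, template, mode="valid")
recovered = int(np.argmax(np.abs(overlap)))

print(f"injected at sample {true_offset}, recovered at sample {recovered}")
```

A real search has to handle colored detector noise, calibrated template banks, and far weaker signals, but the underlying idea - compare simulated signatures against the data stream - is the same.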

DEVON: What are some areas of physics where you think that we could benefit from better tools for thought where our thinking is kind of hazy because we don't have maps that help us really understand what's going on?

MICHAEL: It is certainly true, and I have played a lot with tools which are just meant to represent particular individual systems. As an example, I'll use my friend Grant Sanderson's YouTube channel, 3Blue1Brown. It's not a tool, but it at least makes the point clear. He does these beautiful animated 3D graphics which illustrate phenomena, sometimes from physics, sometimes from other areas, but most often from mathematics. You're able to build intuition, often relatively easily, by seeing these kinds of animated drawings in a way that you wouldn't in any other way. It's extremely striking. It's a very simple example, and super common, but in many ways I feel like it's a pity that it's not more routine. There's quite a startup cost to doing it. To the extent that we can make it easier and easier to do those kinds of things, it's certainly a good intuition generator.


DEVON: It's really difficult to create the quality of graphics that Grant does for his videos, but I do think there are relatively lower bars that people still don't make it over. Even drawing a very simple line graph can often be so elucidating. I can't even count the number of times at a company that I've spent an hour or two drawing a visual that explained my thinking, all the while thinking, "This won't help that much. This is kind of a waste of time. I should be doing real work". And then I share it with other people on my team and they go, "Oh, now I understand. This makes sense". Sometimes an entire team will form around that diagram and solve a problem. It's always so valuable to do it. I always then kick myself and think, why don't I spend more of my time doing this? Because suddenly all of these people understand something that they previously didn't, and my own understanding is better, because once I drew it out, I realized, "oh, this thing doesn't quite add up" or "this thing in my head actually doesn't map to paper very well". That either means there's something wrong, or it means that the way I'm representing it doesn't quite capture what matters. These things are always so valuable, and yet it seems that most people underrate them, myself included. Why is that?

MICHAEL: Myself included as well - it's funny like that. The question is, why do different representations of the same ideas help so much? It's pretty clear that lots of people have that experience. I enjoy Venkatesh Rao, the writer. He loves to draw two-by-twos for absolutely everything. They make you think about things in a slightly different way. There's a cheap answer, and I think there's a better answer. The cheap answer is that when you arrange something in a new representation, it sometimes makes it easier to see connections which had formerly escaped you - you see, "oh, this might be related to this other thing in such and such a way". But I think there's a better answer.

Some years ago, I developed a little prototype. It was for studying the motion of physical systems in one dimension. The thing that I did in the prototype was to find a way of representing the conservation of energy, so that you could just see conservation of energy directly in the visual representation provided by the prototype. Even though I've been doing physics for most of my life - 30-odd years - and had applied conservation of energy I don't know how many thousands or tens of thousands of times before, I found that being able to see it directly in this visual representation of the system completely changed the way I related to it. It was no longer an algebraic manipulation that I was doing. Instead, I could just see what to expect. That simple change really made a difference to how I thought about it.

One thing, of course, is just that the visual processing system is so powerful. It operates in parallel, whereas the symbolic manipulation I was doing before, to think about conservation of energy, is very serialized, so it's much harder to get a global view. But the key thing was to design into the interface of this prototype a direct representation of an important, deep result about the system - in my particular case, the conservation of energy. I found a few other examples, and built a little prototype to illustrate some ideas about complex analysis. Again, I found that being able to see directly, rather than having to do the symbolic manipulations, certainly helped me.

It changed my experience; whether it would have changed other people's experience, I don't know. It was just a little sketched-out prototype, not a system that was ever shipped for wide distribution.
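The prototype itself was never released, but the underlying idea is easy to sketch. The following is a minimal, hypothetical Python illustration - not Michael's prototype - of putting conservation of energy directly into the picture: simulate a one-dimensional mass on a spring and plot kinetic, potential, and total energy, so that conservation shows up as a visibly flat line rather than an algebraic identity.

```python
# Minimal sketch (not the prototype described above): make conservation of
# energy visible for a 1D mass on a spring by plotting the energies over time.
import numpy as np
import matplotlib.pyplot as plt

m, k = 1.0, 4.0                   # mass and spring constant (arbitrary)
dt, steps = 0.001, 20000

x, v = 1.0, 0.0                   # initial displacement and velocity
xs, vs = [], []

for _ in range(steps):
    # Semi-implicit Euler: simple, and keeps the energy error bounded.
    v += (-k * x / m) * dt
    x += v * dt
    xs.append(x)
    vs.append(v)

xs, vs = np.array(xs), np.array(vs)
t = np.arange(steps) * dt
kinetic = 0.5 * m * vs ** 2
potential = 0.5 * k * xs ** 2

plt.plot(t, kinetic, label="kinetic")
plt.plot(t, potential, label="potential")
plt.plot(t, kinetic + potential, label="total (visibly flat = conserved)")
plt.xlabel("time")
plt.ylabel("energy")
plt.legend()
plt.show()
```

The point of the plot is exactly the shift described above: energy conservation stops being a line of algebra and becomes something you can see at a glance.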

DEVON: One thing that I've noticed in our general social group and beyond, in the last five or so years, is the rise of the independent researcher. It seems to me like more and more people are choosing settings outside of traditional academia to pursue lines of inquiry that previously would have found a home in the universities. So there are really two questions. One is just: does that match your observations? Because you've been much more embedded in science, for much longer, than I have. And if it does match your observations, what is driving that shift?

MICHAEL: I mean, intuitively it does. I don't have any real data to support it, just sort of noticing more and more people, which might just be that I'm getting older. 

There are at least two really good structural reasons for it, maybe three. One really good structural reason is that it's just getting a lot easier to access papers and other sorts of serious materials, so you can participate in the conversation. In that sense, the academy has kind of opened up a little bit. The second, which is very closely related, is that those communities of practice are no longer as closed as they used to be. It's kind of shocking to me, looking back on my experience of quantum computing in the late 1990s and early 2000s - my experience, at least, was quite insular. I think it's much easier now to just sit on the boundary. I still track at least a little bit of stuff about quantum computing; a lot of it is just catching up with old friends and gossiping about stuff, but it's made a lot easier by social media, there's no doubt about that. It's also hugely, hugely easier to sit on a lot of those boundaries, not just with one field, but with 2, 3, 10, 30 fields. Somebody who wants to be an independent researcher in some field can, to some extent, just embed themselves.

In AI, it's been interesting to watch certain people who don't have PhDs, or in some cases even undergraduate degrees, become significant parts of the AI community. I'm thinking about people like Chris Olah and Alec Radford and others who haven't necessarily done the PhD and all that kind of thing, but have nonetheless become very important parts of the community. I think that's been helped a little bit by that sort of permeability.

The third thing is more speculative, though I can point to loads of very specific examples where I've witnessed it happening: sources of capital to support this work. People support themselves on Substack, or Patreon, or through patronage in some way. Maybe there were precedents in previous generations. We were talking before about Jane Jacobs - she wasn't an urban planner in the traditional mold. She was able to arrange a certain amount of independent funding, and I think in some ways she'd have to be regarded as an independent researcher - one who became one of, if not the, most influential people in her field ever. So maybe, unbeknownst to me, there were lots of people like that back then, and I'm simply picking on the person who was a particularly outstanding example. I don't know whether the sources of capital have increased, or what happened. It's a good question. Actually, it's not a bad research question.

DEVON: Hopefully someone listening to this will prick up their ears. And hopefully they'll also write that sci-fi heist novel - I'm hoping we can plant a few seeds of ideas.

MICHAEL: Or, actually, I like the idea that some academic Marie Kondo should write a sequel about the Life-Changing Magic of Finding New Representations. She might not be the right person to write it herself.

DEVON: Jane Jacobs is an interesting example, because I think the fact that she was independent was so important to the results she ended up discovering and seeing. A lot of the dynamics she noticed came from her being a mother who spent a lot of time walking around cities and just noticing things over the years, piling up to give her opinions, views, and perspectives that other people didn't have. Had she been sitting in a traditional urban planning department in an office, or in an academic setting studying land use economics, she would have had a very different viewpoint - one much more colored by what people say is true than by what she actually saw on the ground. I think those modes of thinking can definitely solve a lot of important problems. But there's that inductive approach of just: what do I see? What do I think is happening?

MICHAEL: You've changed my opinion just now about something, actually. David Keith, who's a very well known proponent of geoengineering, has written books about how to increase the reflectivity of the Earth's atmosphere as a way of ameliorating some, but not all, of the effects of global warming. In his book, he makes a comment very similar to what you just said. He worries that too many of the people making decisions about climate and related things are doing so in air conditioned conference rooms. He makes a plea, basically, for people just to go out into the environment and spend some time seeing the world. I must admit, when I read that I sort of dismissed it as misplaced romanticism. Your example just now of Jane Jacobs has made me change my mind. Then I thought, he's kind of making a plea just for diversity of experience. And insofar as it's hard to say which experiences are relevant, maybe it's a good point. You're certainly right about Jacobs.

DEVON: Yeah, with climate it feels more wrong to me, but I need to think about why. My immediate thought is, "oh, that's so wrong", because when you go out, there's only so much of the climate that you can experience - the planet is huge. If you're in California and you experience wildfires, you might think, yes, climate change is a big issue. But actually, maybe the fires are caused by mismanagement over decades, and it's kind of uncorrelated with or unrelated to climate change - perhaps; I'm not making that claim. That's my immediate reaction. But then I think, when you go out, you still might see things that are just totally contradictory to your model, and that might completely shock you.

I've spent a lot of time in Argentina over the last few years, and I had read a lot about monetary policy and inflation before that, in theoretical terms. There's something really different about seeing it on the ground, where I had the realization that I really didn't understand the phenomenon at all until I was there. There are just so many other social effects, whole outlooks on life, that just hadn't registered because I hadn't been there. And I haven't spoken with every Argentinian - I think there are like 40 million of them, so I'm missing a lot of that experience. But the handful that I have gotten to know really well have made me realize that I was missing a big part of the picture. So I could totally see something like that happening with the climate. You're not going to see every square inch of Earth at every moment in time, but you probably don't need that to correct some of the biggest errors in your thinking.

MICHAEL: I think in the case of somebody like Jane Jacobs, I would a priori not have been terribly sympathetic to the argument that it would help that much to spend that much time engaged, when it's not even field work, it's just randomly walking around. And yet I think it's pretty clear that it did generate that level of insight. You think about great explorers - Jacques Cousteau and Robert Ballard and people like that. They would just get a tremendous amount of insight. Actually, I briefly shared an office with Thomas Lovejoy, who was the father of conservation biology - I think that's the term he coined. He had spent, at that point, 40 or 50-odd years in the Amazon, and he'd been tremendously involved in hundreds, possibly thousands, of different cases of trying to save different parts of the Amazon rainforest. He had so much varied on-the-ground experience of all the different local conditions - a lot of very contingent knowledge. He made the comment that he'd really changed his thinking over the years, from a global view to a much more local, negotiated point of view.

I've gotten a little off track here. I guess it's an obsession for me, thinking about this kind of local knowledge versus the abstract global view. Very early in my career, I was very fond of broad, abstract arguments. I've since become much more interested in - and just enjoy - having a multitude of very specific instances in mind and trying to reason from the collisions and inconsistencies between them.

DEVON: I was very enamored with theory when I was younger as well. Every year I get older, the more I appreciate specificity - that's where all the interesting stuff is. Anyone can memorize the theory and try to apply it willy-nilly; that's not that hard. It's actually finding evidence and integrating it into something broader that explains all that evidence - that's much harder and much more interesting, I think.

MICHAEL: Yup! Certainly seems so to me.

DEVON: So my last question is based on someone that you quoted in one of your essays, you said that the physicist John Wheeler once stated a useful principle to guide research: “In any field you should find the strangest thing and then explore it”. So my question is, what's something strange that's captured your interest recently?

MICHAEL: I recently read Kazuo Ishiguro's The Buried Giant. He's very well known for writing The Remains of the Day and Never Let Me Go, which are books I love, particularly The Remains of the Day. The Buried Giant is basically a fairy tale set in Arthurian times, and it doesn't entirely work. The strange thing, though, and the bit that kept me reading, was that it had the quality of being a fairy tale. There are many things that seem like fairy tales but that are really just stories about people - they don't have a fairy-tale feeling to them.


I read Neil Gaiman's book Stardust years ago, and it feels like a fairy tale. It feels like it was written 1,000 years ago. I don't know why. That's the strange thing I want to understand: what is that sense of being fae, of being out of time? There is some sense of strangeness in Stardust and The Buried Giant that I also find in Beowulf, that I find in The Lord of the Rings, that I find in Lord Dunsany, but that I don't find in almost all fantasy. Somehow it seems internally consistent, but very different, and I don't understand why.

There's an essay I love by Tolkien which is about process - really about the process of creativity. He's talking about what he calls sub-creation, meaning the creation of a complete, internally consistent world that is nonetheless different from ours. That's what he was trying to do, and where he was getting the force of myth from, in The Lord of the Rings, The Silmarillion, and some of his other works. There isn't the same sense of sub-creation in The Buried Giant or Stardust, but there is still some strange sense of depth. In the case of The Buried Giant, it's embedded within the Arthurian legends, and maybe you get something from that. Arthur is so sort of throughout our culture - it's in everything, in some ways. Not as much as, say, the Bible is in everything, but it really strongly influences our culture. Ideas about chivalry, ideas about the way the genders treat each other - a lot of things like that are influenced by Arthur in different ways. The Buried Giant has some of that, and maybe that's where it gets its sense of depth from.

Appropriately, I can't give a coherent answer, because it's strange. I'm trying to understand what those authors are doing, how they're sourcing it. I don't particularly love Gaiman, and I don't generally find that his light work - or his supposedly light work - is the stuff that grabs me at all. I like Stardust, I like Coraline. I love his book with Terry Pratchett, Good Omens - neither of those authors on their own do I particularly like. But Coraline and Stardust, somehow, are the ones that actually seem deep to me, even while they're apparently little children's stories. Do you know what I mean by this sense of strangeness?

DEVON: There are two other types of experiences that I think might be similar - tell me if they resonate. One is the experience of traveling to another place. The other is the experience of spending time with people who are religious when you yourself are not.

MICHAEL: Yeah, that's great. 

DEVON: Yeah, like I recently went to a family friend's Seder. Their family is not Orthodox Jewish, but they have a lot of friends who are more traditional. And they had all of these rituals and things at the dinner that I could just tell had so much meaning to them, and to their ancestors, and to so many different people. I understood a few surface-level things, and they explained a few things to me. But the sense was: I clearly don't subscribe to this myself, and yet I can tell that it actually really does matter. I was kind of befuddled, but also sort of awed by it at the same time. There's this whole narrative about what they're doing with each other that I can't quite tap into, but I can tell that it's happening.

MICHAEL: There's something there, I think, about layers and layers of meaning. You see this in cities as well. Some cities are very designed - there's one reason why something is there. And in some cities there are just so many layers by which they've been laid down; there are 27 different reasons why that thing is there. You can tell the thing that has been placed there by some bureaucrat, and then there's something else that is there because the origin of the streets was determined by some accident 2,000 years ago. Somehow, there's something similar in social rituals and stories like that.

Maybe that's what a very clever writer like Gaiman is doing. He's studied enough and internalized enough that he's reproducing some of those strange choices - you can sort of feel the layers of the past. That would be one possible theory, one possible explanation, of what's going on, of what's giving it these interesting and strange resonances.

Barbara Tversky makes this nice point that with language, if you think about it in user interface terms, there are a lot of speakers - hundreds of millions of people contributing constantly to user testing. They're turning over all the features; they get to design the features. And they get to do it over centuries or longer. In that sense, it's got a lot more contributors. Myth and fairy tales have the same kind of quality - they're retold over and over and over again. One theory would be to say, "Oh, they end up very watered down because of that, a lot of the rough edges are hewn off". But I also wonder to what extent some very deep elements get preserved - elements where we don't entirely understand why they're there, but they are there for very good reasons.

Richard Feynman, the physicist, was very interested in stories and fairy tales and tried to write some. He said that he discovered his fairy tales were just completely boring. He couldn't understand it - they would always just be boring recapitulations of elements he'd already seen, and he would feel that nothing else was possible. Then he would talk to a friend in the English department, and they would say, "oh, no, no, here's another example", and it would have the same kind of mythic force again, the same kind of originality. Maybe that's actually what I'm responding to in the two examples I gave: there is some kind of mythic force whose origin I don't understand. And I would love to.

DEVON: Yeah, Tolkien famously wrote books' worth of material before he ever wrote the real books. He wrote songs, he created a language, he created this very rich tapestry. He had a whole universe that existed, and then he built the story from there. That, I think, really contributes to what you're saying.

You had said something about the other author - that your model was maybe he had seen so much, had been so attuned to real cultures with complexity and nuance in them, that he was able to then generate it. It's interesting you say that, because my intuition is that it works very differently - that you have to create those layers over time, and it just takes an incredible amount of time to build it up and let it mutate, and so on. That sounds more like what Tolkien did, where he created this mythology over time that layered on itself. Where does your intuition for the other approach come from?

MICHAEL: I guess it's just from reading interviews with Gaiman and some of his non-fiction writing, where he's clearly incredibly observant about stuff. And with Ishiguro, the book of his that I probably most love is The Remains of the Day. That book is essentially the story of a failed love affair, centered on a butler running an English house, and it's just so carefully observed - that's what makes it beautiful. He has this enormous eye for detail, and I'm sure you don't appreciate all of the details. I've reread that book several times, and I've watched the movie many times, and I see new things each time. I don't even know that it's necessarily an accurate representation of the milieu it's purportedly about, but his eye for detail in human beings just seems astonishing to me. It reminds me a bit of Jane Austen, who also has that kind of incredible eye for people.

DEVON: Well, that is all the questions I'm going to ask. Thanks, Michael. This was a really fun conversation. I really enjoyed the excuse to dive deeper into your work.

MICHAEL: Thank you so much, Devon. It was absolutely lovely.
