
Idea Machines


Jan 18, 2022

Professor Michael Strevens discusses the line between scientific knowledge and everything else, the contrast between what scientists as people do and the formalized process of science, why Kuhn and Popper are both right and both wrong, and more.

Michael is a professor of philosophy at New York University, where he studies the philosophy of science and the philosophical implications of cognitive science. He's the author of the outstanding book "The Knowledge Machine," which is the focus of most of our conversation.

Two ideas from the book that we touch on:

1. “The iron rule of science”.

The iron rule "directs scientists to resolve their differences of opinion by conducting empirical tests rather than by shouting or fighting or philosophizing or moralizing or marrying or calling on a higher power." In the book, Michael makes a strong argument that scientists following the iron rule is what makes science work.

2. “The Tychonic principle.”

Named after the astronomer Tycho Brahe, who was one of the first to realize that very sensitive measurements can unlock new knowledge about the world, this is the idea that the secrets of the universe lie in minute details that can discriminate between two competing theories. The classic example here is how the amount of change in star positions during an eclipse dictated whether Einstein or Newton was more correct about the nature of gravity.
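For a sense of just how minute the discriminating detail was in that example, compare the two predicted deflections for starlight grazing the Sun (a standard back-of-the-envelope comparison, not a figure from the book):

$$\theta_{\text{Newton}} = \frac{2GM_\odot}{c^{2}R_\odot} \approx 0.87'' \qquad \theta_{\text{Einstein}} = \frac{4GM_\odot}{c^{2}R_\odot} \approx 1.75''$$

Eddington's 1919 eclipse measurements had to resolve a difference of less than one second of arc to tell the two theories apart.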

Links

 

Automated Transcript

[00:00:35]

In this conversation, Professor Michael Strevens and I talk about the line between scientific knowledge and everything else, the contrast between what scientists as people do and the formalized process of science, why Kuhn and Popper are both right and both wrong, and more. Michael is a professor of philosophy at New York University, where he studies the philosophy of science and the philosophical implications [00:01:35] of cognitive science.

He's the author of the outstanding book The Knowledge Machine, which is the focus of most of our conversation. A quick warning: this is a very Tyler Cowen-esque episode. In other words, it's the conversation I wanted to have with Michael, not necessarily the one that you want to hear. That being said, I want to briefly introduce two ideas from the book, which we focus on pretty heavily.

First is what Michael calls the iron rule of science. A direct quote from the book: the iron rule directs scientists to resolve their differences of opinion by conducting empirical tests, rather than by shouting or fighting or philosophizing or moralizing or marrying or calling on a higher power.

In the book, Michael makes a strong argument that scientists following the iron rule is what makes science work. The other idea from the book is what Michael calls the Tychonic principle, named after the astronomer Tycho Brahe, who was one of the first to realize that very sensitive measurements can unlock new [00:02:35] knowledge about the world.

This is the idea that the secrets of the universe lie in minute details that can discriminate between two competing theories. The classic example here is the amount of change in a star's position during an eclipse dictating whether Einstein or Newton was more correct about the nature of gravity.

So with that background, here's my conversation with Professor Michael Strevens.

[00:02:58] Ben: Where did this idea, the sort of conceptual framework that you came up with, come from? Like, what's almost the story behind the story here?

[00:03:10] Michael: Well, there is an interesting origin story, or at least it's interesting in a nerdy kind of way. I was interested in teaching what philosophers call the logic of confirmation: how evidence supports or undermines theories.

And I was interested in getting across some ideas from the 1940s and fifties that philosophers of science these days [00:03:35] look back on and think of as being a little bit naive and clueless. At some point, in trying to make this stuff appealing in the right sort of way to my students, so that they would see it's really worth paying attention to and not just completely superseded, I had a bit of a gear shift looking at it. I realized that, in some sense, what this old theory was a theory of wasn't the thing that we talk about now, but a different thing. It wasn't so much about how to assess how much a piece of evidence supports a theory or undermines it.

It was more a theory of just what counts as evidence in the first place. And that got me thinking that this question alone could be an important one to think about. Now, I ended up, as you know, in my book The Knowledge Machine, putting my finger on that as the most important thing in all of science.

I can't say that at that point I had yet had that idea, but it was [00:04:35] kind of puzzling me why there would be this very objective standard for something counting as evidence that nevertheless offered you more or less no help in deciding what the evidence was actually telling you.

Why would this be so important? At first I thought maybe it was just the sheer objectivity of it that's important, and I still think there's something to that, but the objectivity alone didn't seem to be doing enough. And then I connected it with this idea in Thomas Kuhn's book The Structure of Scientific Revolutions that science is a really difficult pursuit.

Of course it's wonderful some of the time, but a lot of it requires perseverance in the face of sometimes very discouraging results. I got the idea that this very objective standard for evidence could be playing the same role that Kuhn thought was played by what he called the paradigm: providing a very objective framework, which is also a kind of safe framework, [00:05:35] like a game where everyone agrees on the rules and where people could feel more comfortable about the validity and importance of what they were doing.

Not necessarily because they would be convinced it would lead to the truth, but just because they felt secure in playing a certain kind of game. So it was a long process that began with the sense that something didn't seem right, that these ideas from the 1940s and fifties could be so wrong as answers to the question philosophers of my generation were answering.

[00:06:11] Ben: I love that. I feel like, in a way, what you did is, step one, sort of synthesize Kuhn and Popper, and then go one step beyond them. There's this concept that whenever you have two theories that seem equally right but are [00:06:35] contradictory, that is a place where, you know, you need more theory, right? Because you look at Popper and it's like, oh yeah, that seems right. But then you look at Kuhn and you're like, oh, that seems right. And then you're like, wait a minute, because they sort of can't both live in the room without adding something.

[00:06:56] Michael: Although there is actually something, I think, Popperian about Kuhn's ideas.

There are lots of things that are very un-Popperian, but you know, Popper's basic idea is that science proceeds through refutation, and Kuhn's picture of science is a little bit like a very large-scale version of that. Unlike in Popper's story, where scientists are all desperately trying to undermine theories, you know, in that great critical, negative spirit,

in Kuhn's picture they just assume that the prevailing way of doing things, the paradigm, is going to work out okay. But in presuming that, they push it to its breaking point. And [00:07:35] that process, if you take a few steps back, has the look of Popperian science, in the sense that scientists, but now unwittingly rather than with their critical faculties fully engaged, are taking the theory to a point where it just cannot be sustained anymore in the face of the evidence.

And progress is made because the theory becomes untenable and some other theory needs to be found. So at the largest scale there's this process of successive refutation of theories. Now, for Kuhn, refutation is not quite the right word; that sounds too orderly and logical to capture what's going on. But theories are nevertheless being annihilated by facts, in a way that's actually quite Popperian. I think that's interesting.

[00:08:20] Ben: So you could almost phrase Kuhn as systemic Popperianism, right? No individual scientist is trying to do refutation, but then the system eventually [00:08:35] refutes, and that is what the paradigm shift is.

[00:08:37] Michael: That's exactly right.

[00:08:39] Ben: Oh, that's fascinating.

Another thing that I wanted to ask before we dig into the actual meat of the book, and this is almost a very selfish question, is: why should people care about this? I really care about it, and by this I mean theories of how science works, right? But I know many scientists who don't care. I've tried talking to them about it, and they're just like, you know, I just do what I do.

[00:09:12] Michael: You know, in a way that's completely fine. People drive cars without knowing how the engine works, and in fact the best drivers may not have very much mechanical understanding at all. And it's fine for scientists to be a part of the system and do what the system requires of them without really grasping how it works most of the time. One way it becomes important is when people start wondering whether [00:09:35] science might not be improved in some ways.

There's always a little bit of that going on at the margins. Some string theorists now want to relax the standards for what counts as an acceptable scientific argument, so that the elegance or economy of an explanation can officially count in favor of a theory, as well as the empirical evidence in the old-fashioned sense.

Or there's quite a bit of momentum for reform of the publishing system in science, coming out of things like the replicability crisis: the idea that, you know, we were talking about science as a game, but science has been gamified to the point where it's being gamed.

And so a certain kind of ambitious individual goes into science, not necessarily one who has no interest in knowledge, but once they see what the rules are, they cannot resist playing those rules to the limit. And what you get is what scientists sometimes call the least publishable unit: tiny little [00:10:35] results that are designed more to be published and cited, and to advance a scientist's career, than to be the most useful summary of research. And then, even worse, you get scientists choosing their research directions less out of curiosity, or the sense that they can really do something valuable for the world at large, than because they see a narrower and shorter-term opportunity to make their own name.

Now, that's not always a bad thing, but no system of rules is perfect, and as people exploit the rules more and more, the direction of science as a whole can start to veer a little bit off course. It's a complicated issue, because if you change the rules, you may lose a lot of what's good about the system. It may all look very noble and so on, but you can still lose some of what's good about the system as well as fixing what's bad. So I think it's really important to understand how the whole thing works before just charging in and making a whole series of reforms.

[00:11:34] Ben: Yeah, okay, that makes a lot of sense. It's like, what are the actual core pieces that drive the engine?

[00:11:42] Michael: So that's the practical side of the answer to your question of why people should care. I also think it's a fascinating story. I mean, I love these kinds of stories, like the Kuhn story, where everything turns out to be working in a completely different way from the way it seems to be working, where the ideology turns out to be not such a great guide to the actual mechanics of the thing.

[00:12:03] Ben: Yeah, I like that there are some people who just think it's fascinating. My bias is also toward how it weaves through history, right? You have to really look at all of these fascinating case studies and ask, oh, what's actually going on there? So, to build on two things you just said: could you make the argument that with the replicability crisis and [00:12:35] this idea of p-hacking, you're actually seeing the mechanisms that you described in the book in play? It used to be that having a good p-value was considered sufficient evidence, but we now see that having that p-value isn't actually predictive.

And so now everybody is starting to say, well, maybe using the p-value as evidence is no longer sufficient. Because the observations didn't match, what is considered evidence is evolving. Is that basically a case of that?

[00:13:29] Michael: Exactly, that's exactly right. Significance testing is a [00:13:35] particular instantiation of the broader set of rules, this whole rule-based approach to science where you set things up so that it's very clear what counts as publishable evidence: you have to have a statistically significant result, and p-value testing is the most widespread way of thinking about statistical significance.

So it's all very straightforward; you know exactly what you have to do. I think a lot of great scientific research has been done under that banner. Having the rules be so clear and straightforward, rather than just a matter of the referees who referee for journals making up their own minds about whether a result looks good or not, has really helped science move forward and given scientists the security they need to set up the research programs that they've set up. It's all been good. But because it sets up this very specific rule, it's possible for the right kind of Machiavellian mind to [00:14:35] look at those rules and say, well, let me see.

At least in some domains of research, where there's plentiful data or it's fairly easy to generate, I see ways that I can officially follow the rules, so that technically speaking what I'm doing is publishing something that's statistically significant. And yet take a step back, and what happens is you may end up with a result that isn't true. John Ioannidis, one of the big commentators on this stuff, has "most published research findings are false" in the title of one of his most famous papers. So you need to step back and say, okay, well, the game was working for a while.

The game aligned people's behavior with what was good for all of us. But once certain people started taking advantage of it, in certain fields at least, it started not working so well. We want to hang on to the value we get out of having [00:15:35] very clear objective rules, objective in the sense that anyone can make a fair judgment about whether the rules are being followed or not, but somehow get the alignment back.
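As a concrete illustration of the dynamic Michael describes (my sketch, not his, assuming a crude two-sample t-test and purely illustrative numbers): if a nonexistent effect is studied over and over and only the "significant" results get published, the published record looks like evidence even though every underlying effect is null.

```python
# A minimal sketch of selective publication: many small, noisy studies
# of a *null* effect, with only the "significant" ones surviving.
import random
import statistics

random.seed(0)

def one_study(n=20):
    """Compare two groups drawn from the same distribution (no real
    effect) with a crude two-sample t statistic."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

trials = 1000
# |t| > 2.02 is roughly the two-sided 5% cutoff at ~38 degrees of freedom
hits = sum(abs(one_study()) > 2.02 for _ in range(trials))
print(f"{hits / trials:.1%} of null studies come out 'significant'")
# About 5% of studies of a nonexistent effect clear the bar. A literature
# built only from those hits follows the rule perfectly and still fills
# up with false findings.
```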

[00:15:46] Ben: Yeah. So that game went out of whack, but then there's the broader metagame, and that's the consistent thing.

Also, you mentioned string theory earlier, and as I was reading the book, I don't think you call this out explicitly, but I feel like there are a number of domains that people would think of as science now but that by the iron rule would not count. String theory is one of them, where we've sort of reached the limit of observation, at least until we have better equipment. Another [00:16:35] one that came to mind was a lot of evolutionary arguments, because they're based on something in the past and there's no way to gather additional evidence. Would you say that you actually have a fairly strict bound on what counts as science?

[00:16:59] Michael: It is strict, but it's not in any way my formulation; this is the way science really is. The point of science is to develop theories and models and so on, and then to empirically test them. And a part of that activity is just developing the theories and models.

So it's completely fine for scientists to develop models in string theory and so on, and to develop evolutionary models, that run way ahead of the evidence, where it's practically very difficult to come up with evidence. That in itself is not [00:17:35] unscientific. But then the question immediately comes up: okay, so now what do we do with these models? The iron rule says there's only one way to assess them, which is to look for evidence. So what happens when you're in a position, with string theory, or with some models in evolutionary psychology in particular, where there just is no evidence right now? There's a temptation to find other ways to advance those theories.

So the string theorists would like to argue for string theory on the grounds of its unifying power, for example. The evolutionary psychologists, I think, rely on a kind of intuitive appeal, or just a sense that there's something about the model that feels right, that it really captures the experience of being a human being and being, I don't know, sexually jealous or something like that.

And that's just not science, and it's not the sort of thing that in general gets published in scientific journals. But the [00:18:35] question has come up: well, maybe we are being too strict. Maybe we would encourage the creation of more useful, interesting, illuminating, explanatorily powerful models and theories if we allowed them to get some prestige and scientific momentum in ways other than the very evidence-focused way. Or maybe it would just open the gates to a bunch of idle speculation that would weigh science down and distract scientists from doing the stuff that has actually resulted in 300 years or so of scientific progress.

[00:19:12] Ben: And your argument would be that, for the latter, that is...

[00:19:21] Michael: Well, don't rush in, I would say. You know, think carefully before you do it.

[00:19:25] Ben: No, I mean, I find that fair. Another place where I felt like there was some friction with your framework, I'm not quite sure what the right word is, [00:19:35] was with the Tychonic principle of needing to find very minute differences between what the theory would predict and reality. In areas you might call complex systems or emergent behavior, just because you can explain how the building blocks of a system work does not actually help you make predictions about that system. Do you have a sense of how you expect that to work out with the iron rule? Because when there are so many parameters, you could argue either way: well, we predicted it, or we didn't predict it.

[00:20:34] Michael: Right. [00:20:35] So sometimes the predictions are so important that people will do the work necessary to really crank through the model. Weather forecasting is the best example of that: to get a weather forecast for five days' time, you just spend a lot of money gathering data and running simulations on extremely expensive computers. But in almost all of science, there just isn't the funding for that.

There just isn't the funding for that. And so you'd never going to be able to make, or it's never going to be practically possible to make those kinds of predictions. But I think these models are capable of making other kinds of predictions. So I mean, even in the case of, of the weather models, you can, without, without, without being able to predict 10 days in advance, as long as you relax your demands and just want a general sense of say whether that climate is going to get warmer, you can make, do with a lot with, with, with many fewer parameters.

In a way that's not the greatest example, because the climate is so complicated that [00:21:35] even to make these much less specific predictions you still need a lot of information and computing power. But I think most science of complex systems hinges on relaxing the demands for specificity of the prediction while still demanding some kind of prediction or explanation. And sometimes what you do is say, well, never mind prediction, just give me a retrodiction: see if we can explain what actually happened, where the explanation has to be anchored in observable values of things. Evolutionary models are a good example of this.

Once we've built the model after the fact, we can dig up lots of bits and pieces that will show us the course of things. Say we never could have predicted that evolutionary change would move in a certain direction; by getting the right fossil evidence and so on, we can see that it actually did [00:22:35] move in that direction and conforms to the model.

But what we're often doing is actually getting the parameters in the model from the observation of what actually happened. So these are all ways that complex systems science can be tested empirically, one way or another.

[00:22:52] Ben: Yeah. The thing that I'm hung up on is, if you relax the specificity of the predictions that you demand, it makes it harder to compare theories, right?

Newton and Einstein had drastically different models of the world, but the reality was that you needed very, very specific predictions to compare between them. And so if the whole thing is that in order [00:23:35] to get evidence you need to relax specificity, it then makes it harder to compare theories.

[00:23:41] Michael: No, that's very true. If all you demand is that theories explain why things fall to the floor when dropped, then Einstein and Aristotle look exactly the same. And one reason physics has been able to make so much progress is that the models are simple enough that we can make these very precise predictions that distinguish among theories.

The thing is that in the complex systems sciences there's often a fair amount of agreement on the underlying processes. With Newton versus Einstein, what you have is a difference in the fundamental picture of space and time and force and so on. But if you're doing something like economics or population ecology, looking at ecosystems, animals eating one another and so on, [00:24:35] the underlying processes are in some sense fairly uncontroversial. The hard part is finding the right kind of model to put them together, in a way that is much simpler than they're actually put together in reality, but that still captures enough of those underlying processes to make good predictions.

So that problem is a little bit different. The situation is less a matter of distinguishing between really different fundamental theories and more a case of refining models to see what needs to be included, or what can be left out, to make the right kinds of predictions in particular situations. You still need a certain amount of specificity. Obviously, if you really just say, I'm not going to care about anything beyond the fact that things fall downwards rather than up, then you're not going to be able to refine your models very far before you run out of evidence to give you any further guidance.

That's [00:25:35] very true. But typically complex systems models are rather more specific than that. Usually they're too specific: they say something very precise that doesn't actually happen, and what you're doing is trying to bring that prediction closer to what really happens. That gives you something to work towards, bringing the prediction towards the reality, while at the same time not demanding of the model that it already make a completely accurate prediction.

[00:26:10] Ben: Yeah, that makes sense. So, on another track: what do you think about theory-free predictions? The extreme exam question would be, could a very large neural net do science? If you had no theory at all but [00:26:35] incredibly accurate predictions, how does that square with the iron rule in your mind?

[00:26:41] Michael: That's a great question. When I formulate the iron rule, I build the notion of explanation into it.

And I think that's functioned in an important way in the history of science, especially in fields where explanation is actually much easier than prediction, like evolutionary modeling, as I was just saying. Now, if your model is, in effect, a neural net that just makes these predictions, it looks like it's not really providing you with an explanatory theory. The model is not in any way articulating, let's say, the causal principles according to which the things it's predicting actually happen. And you might think for that reason it's not science. I mean, of course the thing could always be an aid; almost anything can have a place in science as a tool, as a stepping stone.

But could you quickly [00:27:35] say, okay, we've now finished doing the science of economics, because we've found out how to build these neural networks that predict the economy even though we have no idea how they work? I don't think so. I don't think that's really satisfying, because it's not providing us with the kind of knowledge that science is working towards. But I can imagine someone saying, well, maybe that's all we're ever going to get, and what we need is a broader conception of empirical inquiry that doesn't put so much emphasis on explanation. I mean, what do you want: to be blindsided by the economy every single time because you insist on an explanatory theory? Or do you want to actually have some ability to predict what's going to happen, to make the world a better place?

Well, of course we want to make the world a better place. So I think we've focused on building these explanatory theories; we've put a lot of emphasis, I would say, on getting explanations right. But [00:28:35] scientists have always played around with theories that seem to get the right answer for reasons they don't fully comprehend. And you know, one possible future for science, or empirical inquiry more broadly speaking, is that that kind of activity comes to predominate, rather than just being, as I said earlier, a stepping stone on the way to truly explanatory theories.

[00:29:00] Ben: I sort of think of it in terms of compression, almost, where the thing that is great about explanatory theories is that they take all the evidence and drastically reduce its dimension. And so, just thinking this through, a world in which non-explanatory prediction is fully admissible leads to some exponential [00:29:35] explosion of, I don't know, whatever is doing the explaining, right? Because there's never a compression from the evidence down to a theory.

[00:29:47] Michael: Although it may be, with these very complicated systems, that even an explanatory model is incredibly uncompressed. Yeah, exactly: inflated. I think it's kind of amazing, this is one of my other interests, the degree to which it's possible to build simple models of complicated systems and still get something out of them. Not precise predictions about what's going to happen to particular components in the system, whether this particular rabbit is going to get eaten tomorrow or the next day, but more general predictions about how, say, increasing the number of predators will have certain effects on the dynamics of the system. The kind of thing population ecologists do with these models is answer questions like that.

This is a bit of an example of what I was saying earlier [00:30:35] about making predictions that are real predictions, but a bit more qualitative. One of the very first uses of these models was to answer the question of whether generally killing a lot of the animals in an ecosystem will lead the prey populations to increase, relatively speaking, or decrease.

It turns out that in general they increase. This was in the wake of the First World War, in Italy. During the war there was less fishing, with the sailors and the naval warfare, I guess, maybe not so much in the Mediterranean, but in any case there was less fishing. So it was sort of the opposite of killing off a lot of animals in the ecosystem. And the idea was to explain why certain patterns of increase and decrease in the populations of predator and prey were observed. Some of the first population ecology models were developed to explain that.

And these models are tiny. [00:31:35] I mean, here you are modeling this ocean that's full of many, many different species of fish, and yet you just have a few differential equations, which look complicated, but the amount of compression is unbelievable. The fact that you get anything sensible out of it at all is truly amazing. So we've kind of been lucky so far. Maybe we've just been picking the low-hanging fruit, but there's a lot of that fruit to be had. Eventually, though, maybe we're just going to have to, thankfully there are supercomputers, do science that way.
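A minimal sketch of the compression Michael is describing, using the classic Lotka-Volterra predator-prey equations with a proportional harvesting (fishing) term added to both species. The parameter values are illustrative, not fitted to any real fishery:

```python
# Harvested Lotka-Volterra model:
#   prey:     dx/dt = a*x - b*x*y - h*x
#   predator: dy/dt = d*x*y - c*y - h*y
# The nontrivial equilibrium can be written down directly, which is
# enough to show the wartime effect: less fishing, relatively more
# predators.

def equilibrium(a, b, c, d, h):
    prey = (c + h) / d        # prey equilibrium rises with harvesting
    predator = (a - h) / b    # predator equilibrium falls with harvesting
    return prey, predator

a, b, c, d = 1.0, 0.1, 0.5, 0.02   # made-up growth/interaction rates

for h in (0.3, 0.0):               # peacetime fishing vs. wartime lull
    prey, pred = equilibrium(a, b, c, d, h)
    print(f"harvest {h:.1f}: prey={prey:.0f}, predators={pred:.0f}, "
          f"predator share={pred / (prey + pred):.0%}")
# harvest 0.3 gives a predator share near 15%; harvest 0.0 gives ~29%.
# Two equations compress a whole ecosystem down to the qualitative
# prediction that reduced fishing favors the predators.
```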

[00:32:06] Ben: Or develop an entirely different way of attacking those kinds of systems. I feel like our science has been very good at going after compressible systems, or I'm not even sure how to describe it. I feel like we're starting to run into all of these systems that aren't as amenable [00:32:35] to the Tychonic approach of going down to more and more detail.

And so I always speculate whether we actually need new philosophical machinery to grapple with that.

[00:32:51] Michael: First of all, there might be new modeling machinery and new kinds of mathematics that make it possible to compress things that were previously incompressible. But it may just be, I mean, you look at a complicated system, like an ecosystem or the weather or something like that, and you can see that small differences in the way things start out can have big effects down the line. What seems to happen in the cases where we can have a lot of compression is that the various effects of small variations in initial conditions kind of cancel out.

It may be that you change things [00:33:35] around and it's different fish being eaten, but still the overall number of each species being eaten is about the same; it kind of all evens out in the end, and that's what makes the compression possible. But if that's not the case, if these small changes make differences to the kinds of things we're trying to predict, people of course associate this with the metaphor of the butterfly effect, then I don't know if compression is even possible. If you really want to predict whether there's going to be an increase or a decrease in inflation in a year's time, and that really does hinge on the buying decisions of some single parent somewhere in Ohio, then you just need to figure out what the buying decisions of every single person in the economy are and build them in. And yet, at the same time, everyone loves the butterfly effect, [00:34:35] but the idea that the rate of inflation is going to depend on this decision by somebody walking down the aisles of a supermarket in Ohio just doesn't seem right. It does seem that things kind of cancel out, that these small effects mostly just get drowned out, or they kind of shift things around without changing the high-level qualitative patterns.
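A minimal sketch of the sensitivity Michael is describing, using the logistic map, a standard toy chaotic system (my illustration, not anything from the conversation): two trajectories that start a millionth apart end up unrelated.

```python
# Butterfly effect in the logistic map x -> r*x*(1-x) at r = 4,
# a parameter value where the map is chaotic.
r = 4.0
x, y = 0.400000, 0.400001   # initial conditions differing by 1e-6

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: gap = {abs(x - y):.6f}")
# The gap roughly doubles each step: by step 30 or so it is of order 1,
# and the two histories no longer resemble each other. When the quantity
# you care about hinges on details like this, compression fails.
```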

[00:34:56] Ben: Yeah. I mean, this is a diversion, but I feel like that touches right on, do you believe in the forces theory of history or the great man theory of history, right? People make arguments both ways, and I think we just haven't figured that out.

Actually, speaking of the great man theory of history: an amazing thing about your book is that I feel like it's very humanistic, in the sense of, oh, scientists are people, they do lots of things, they're [00:35:35] not just science machines. And you have this beautiful analogy of a coral reef, where scientists are like the living polyps: they build up these artifacts of work, and then they go away, and new scientists continue to build on that.

And I was wondering, do you see that being at odds with the fact that there's so much tacit knowledge in science? In the sense that, for most fields, I've found you probably could not reconstruct them based only on the papers, right? You have to talk to the people who have done the experiments. Do you see any tension there?

[00:36:23] Michael: Well, it's true that the metaphor of the coral reef doesn't capture that aspect of science. On the one hand, what is captured by the metaphor is the idea that [00:36:35] what science leaves behind in terms of evidence is interpreted anew every generation.

Each new generation of scientists comes along and looks at the accumulated facts. I mean, this makes it sound a little bit fanciful, but in some sense that's what's going on: they look at the facts and say, well, okay, what are these really telling me? And they bring their own kind of human preconceptions or biases, and those preconceptions and biases are not necessarily bad things. They look at it in the light of their own minds and they reinterpret things. And so the scientific literature is always just a kind of starting point for this thought, which really changes from generation to generation.

On the other hand, at the same time, as you just pointed out, scientists are being handed certain kinds of knowledge [00:37:35] which are not for them to create anew, but rather just to learn: how to use various instruments, how to use various statistical techniques. So there's this continuity to the knowledge that is, as I say, not captured at all by the reef metaphor. Both of those things are going on. There's the research culture, which, maybe one way to put it, both changes and stays the same. It's important that it stays the same, in the sense that people retain the know-how they have for using these instruments, until eventually the instrument becomes obsolete and then that culture is completely lost, but it's okay.

Most of the time it's okay if it's completely lost. But on the other hand, there is always this fresh reinterpretation of the evidence, simply because the interpretation of evidence is a rather subjective business. And what the preceding generations are handing on should be seen more as a kind of [00:38:35] data trove than as a kind of body of established knowledge.

[00:38:43] Ben: But then I think the question is, if what counts as evidence changes, and all you are getting is this data trove of things that people previously thought counted as evidence, you know, all the things that were thrown out and not included in the paper, doesn't that make it harder to reinterpret?

[00:39:12] Michael: Well, the standards for what counts as evidence I think of as being unchanging, and that's an important part of the story here. So what's being passed on is supposed to be evidence. Now, of course, some of it will turn out to be the result of faulty measurements, some of it is suspicious, some of it even outright fraud, perhaps. To some extent that's [00:39:35] why you wouldn't want to just take it for granted, and that side of things is not really captured by the reef metaphor either. But I think the important thing that is captured by the metaphor is this idea that the thing that really is the heritage of science, in terms of theory and evidence, is the evidence itself.

It's not so much a body of knowledge. It's not that everyone has to start from scratch every generation, but it's this incredibly valuable information, which may be a little bit complicated in some corners, that's true, but still has been generated according to, or tended to by, the same rules that we're trying to satisfy today. And so it's just as [00:40:35] trustworthy or untrustworthy as the evidence we're getting today. And there it is, just recorded in the annals of science.

[00:40:41] Ben: So it's much more like the thing that's important is the process and the filtering mechanism than the specific artifacts that come out.

[00:40:55] Michael: Well, part of what I'm getting at with that metaphor is that scientists produce the evidence, and they have their interpretation of that evidence, but then they retire.

They die, and that interpretation doesn't need to be important anymore, and isn't important anymore. Of course, they may persuade some of their graduate students to go along with their interpretation, they may be very politically powerful and their interpretation may last for a few generations, but ultimately that influence typically wanes, and what really matters is the data trove. I mean, as you said, it's not perfect; we have to regard it with a [00:41:35] somewhat skeptical eye, but not too skeptical. And that's the real treasure house of science.

[00:41:43] Ben: Something that I was wondering: you have a sentence where you say "a non-event such as science's non-arrival happens, so to speak, almost everywhere." And I would add, it happens almost everywhere all the time. This is wildly speculative, but do you think there would have been any way to predict that science would happen, or to know there was something missing? Could we now, [00:42:35] looking at the fact that science consistently failed to arrive, ask: is there something else, some other kind of intellectual machinery, that has not arrived? Is it possible to look for that?

[00:42:51] Michael: Oh, you mean now?

[00:42:52] Ben: Yeah. Or could someone have predicted science in the past?

[00:42:57] Michael: In the past? Okay. Clearly there were a lot of highly motivated, insightful thinkers who, I assume, would have loved to settle the question of, say, the configuration of the solar system, with these various models floating around for thousands of years.

I'm not sure everyone knows this, but by the time of the Roman Empire, say, the model with the sun at the center was well known, the model with the earth at the center was of course well known, and the model where the earth is at the center but the [00:43:35] sun rotates around the earth and the inner planets rotate around the sun was also well known. In fact, and this always surprises me, that was if anything the predominant model in the early Middle Ages in Western Europe; it had been received from late antiquity, from the writers at the end of the Roman Empire, and it was thought to be the going story.

There are many historical complications, of course, but I take it that someone like Aristotle would have loved to really settle that question and figure it out for good. He had his own ideas. He thought the earth had to be at the center because that fit with his theory of gravity, for example, and having the sun at the center just wouldn't have worked, and for various other reasons. So it would have been great to have invented this technique for actually generating evidence that in time would be seen by everyone as deciding in favor of one of these theories over the others. They must have really wanted it. [00:44:35] Did they themselves think that something was missing, or did they think they had what they needed?

I think maybe Aristotle thought he had what was needed. He had philosophical arguments based on establishing a kind of coherence between his many amazing theories of different phenomena: falling bodies, the solar system, as of course he would not have called it, the planets and so on. And it all fit together so well, and it was so much better than anything anyone else came up with, that he may have thought: this is how you establish the truth of the geocentric system, with the earth at the center. So I don't need anything like science, there doesn't need to be anything like science, and I'm not even thinking about the possibility of something like science.

To some extent that explains why someone like Aristotle, who seemed to be capable of having almost any idea that could be had, nevertheless did [00:45:35] not seem to see a gap, to see the need, for example, for precise quantitative experiments, or even the point of doing them. That's the most I can say. Looking back in history, I don't see that people felt there was a gap, and yet at the same time they were very much aware that these questions were not being settled.

[00:46:04] Ben: It just makes me wonder whether at some period in the future, people will look back at us the way we look at, I don't know, the Mayans, right? Like, how could you not have figured out that method? I just find it thought-provoking to think, you know, how do you see your blind spots?

[00:46:32] Michael: Yeah. Well, I'm a philosopher, and in [00:46:35] philosophy it's still much like it was with Aristotle. We have all these conflicting theories of, say, justice: what really makes a society just, what makes an act right, or even what makes one thing the cause of another thing. And we don't know how to resolve those disputes in a way that will establish any kind of consensus. We also feel very pleased with ourselves, as I take it Aristotle did.

We have these really great arguments for the views we believe in, and we're still perhaps more optimistic than we ought to be that we'll be able to convince everyone else we're right. In fact, what we really need, and philosophers do have this thought from time to time, is some new way of distinguishing between philosophical theories. This was one of the great movements of early twentieth-century philosophy: logical positivism. You can look at it as an attempt to build a methodology where it would be possible to use, [00:47:35] in effect, scientific techniques to adjudicate among philosophical theories, mainly by throwing away most of the theories as meaningless and insufficiently connected to empirical facts.

So it was a brutal method, but it was an idea. The idea was that there was a new method to be had that would do for philosophy what science did for natural philosophy, for physics and biology and so on. That's an intriguing thought. Maybe that's what I should be spending my time thinking about.

[00:48:12] Ben: I do want to be respectful of your time, so one last thing I'd love to ask about, and you talked about this a bit in the book: do you think that the way that we communicate science has become almost too sterile? One of my ongoing concerns [00:48:35] is the way in which everybody has become super specialized. Once a debate is settled, creating these very sterile artifacts is useful and powerful, but as you pointed out, as a mechanism for actually communicating knowledge they're not necessarily the best. And because we've held up these sterile papers as the most important thing, it's made it hard for people in one specialization to actually understand what's going on in another. So do you think that we've over-sterilized it? We talked earlier about people who want to change the rules, and I'm very much with you that we should be skeptical about that, but at the same time you see this going [00:49:35] on.

[00:49:35] Michael: Yeah. Well, I think there's a real problem here regardless, whatever the rules: the problem of communicating something as complicated as scientific knowledge, or really, I should say, the state of scientific play. Because often what needs to be communicated is not just something that's now been established beyond any doubt, but here's what people are doing right now, here's the kind of research they're doing, here are the kinds of obstacles they're running into. To put that in a form where somebody can just come along and digest it all easily is, I think, incredibly difficult no matter what the rules are. It's probably not the best use of most scientists' time to try to present their work in that way.

It's better for them just to go to the rock face and start chipping away at their own little local area. So what you need is for scientists to take time out from time to time. And there exist these publications, review [00:50:35] publications, which try to do this job for people in related fields (typically, related fields means a PhD in the same subject; they're usually for the nearest neighbors to see what's going on), but often they're written in ways that are pretty accessible, I find. So you create a publication that simply has a different set of rules.

The point there is not in any way to evaluate the evidence, but simply to give a sense of the state of play. To reach further afield, you have science journalists, though what's going on with newspapers and magazines right now is not very good for serious science journalism. And then you have scientists and people like me who, for whatever reason, take some time out from what they usually do to work on a kind of self-standing project to explain what's going on. Those activities all, to some extent, take place outside the narrow purview of the [00:51:35] iron rule.

And I think it's going okay, given the difficulty of the task. It seems to me that the information is being communicated in a somewhat effective, accessible way. If anything, the real barrier to some kinds of fruitful interdisciplinary thinking is just that it's hard for one mind to take on all the stuff that needs to be taken on, no matter how effectively, even brilliantly, it's communicated. The world is just this very complicated place.

You know, one thing I'm interested in historically, I just find fascinating, is the fruitfulness of certain kinds of research programs that came out of fighting serious wars, in particular the Second World War. You threw a bunch of people together and they had to solve some problem, like [00:52:35] building an atom bomb (it's usually something horrendous), or a device for the guns on bombers. Rather than having to aim very skillfully, I forget the word for it, you know, you have to point your gun ahead of where the enemy fighter is, so that by the time your bullets get there the plane arrives at the same time, they built these really sophisticated analog computers that basically would do the job, so the gunner, some nineteen-year-old, could just point at the plane. And a lot of problems to do with logistics and weather forecasting and so on. The need to have that done threw together people from very different areas in engineering and science, and it resulted in this amazing explosion of knowledge.

[00:53:35] It's a very attractive period in the history of human thought. When you go back and look at some of the things people were writing in the late forties and fifties, about computers, how the mind works, and so on, I think some of that is coming out of this almost scrambling process that happened when these very specific military engineering problems were solved by throwing together people who never normally would have talked to one another. Maybe we need a little bit of that. Not the war, but...

[00:54:08] Ben: I have a friend who describes this as a serious context of use. And I mean, I'm incredibly biased towards looking at that period.

[00:54:20] Michael: I guess it's connected to what you're doing.

[00:54:23] Ben: Absolutely. Do you know who... yeah. So he actually wrote a series of memoirs, and they're just reprinting it; I wrote the foreword to it. So [00:54:35] I agree with you very strongly, and I always find that fascinating because there's this paradigm that got implemented after World War II, where you think, oh, theory leads to applied science leads to technology. But you actually see all these places where trying to do a thing makes you realize a new theory.

Right. And you see a similar thing with the steam engine, right? That's how we get thermodynamics. So that absolutely plays to my biases: not doing interdisciplinary things for their own sake, not just being like, let's get these people in a room, but having very serious contexts of use that can drive people together.

[00:55:32] Michael: Having a problem to solve. It's not just a case [00:55:35] of enjoying chatting about what you each do and then just going back to the thing you were doing before, feeling enriched but otherwise unchanged.

[00:55:46] Ben: It's interesting, though, because the incentives in that situation now fall outside of the iron rule, right? It's like you don't care about... I guess to some extent you could argue the thing needs to work, and so if it works, that is evidence that your theory is correct.

[00:56:09] Michael: That's true. But I think, as you were about to say, engineering is not science; the iron rule is not overseeing engineering. Engineering is about making things that work, and producing evidence for or against various ideas is just a kind of side effect.

[00:56:27] Ben: But then I guess it can spark those ideas that people then take up. I mean, [00:56:35] in my head, I think of what I'd call phenomena-based cycles, where there's this big cyclical movement: you discover a phenomenon, then you theorize it, and you use that theory to, I dunno, build better microscopes, which then let you make new observations, which let you discover new phenomena.

[00:57:00] Michael: It's really difficult to tell where things are going. I think the discovery of plate tectonics is another good example of this. You see all of these scientists doing things, certainly not looking into the possible mechanisms for continental drift, but instead getting interested, for their own personal reasons, in doing things that don't sound very exciting, like measuring the ways that the orientation of the magnetic field has changed over past history, by basically digging up bits of rock and looking at the orientations of the [00:57:35] iron molecules or whatever in the rock. It's not completely uninteresting, but in itself it sounds like a respectable but probably fairly dull sideline in geology.

And then things like that, or developing the ability to make very precise measurements of the gravitational field, turn out to be key to understanding this amazing fact about the way the whole planet works. But nobody could have understood in advance that they would play that role. What you needed was a whole bunch of, it's not exactly chaos, but a kind of diversity that might look rather wasteful from a very practical perspective, to blossom.

[00:58:29] Ben: I truly do think that moving knowledge forward involves being almost [00:58:35] irresponsible, right? If you had to make a decision, should we fund these people who are going and measuring magnetic fields just for funsies? From a purely rational standpoint, it's like, no.

[00:58:51] Michael: Yeah, the reason that sort of thing happens is because a bunch of people decide they're interested in it, and they persuade their students to do it too, whether or not they could explain it to the rest of the world. Actually, there was also a military angle on that. I don't know if you know this, but some of the mapping of the ocean floors that was also crucial to the discovery of plate tectonics in the fifties and sixties was done during the war by people with the first sonar systems, who were supposed to be, you know, finding submarines or whatever, but decided, hey, it would be kind of interesting just to turn the thing on and leave it on and see what's down there. And that's what they did, and that's how some of those first maps started being put together.

[00:59:36] Ben: That's actually one of my concerns about trying to do science with neural networks: how many times do you see someone just go, huh, that's funny? So far, computers can only sort of find what they're setting out to find; they almost have a very narrow window of what is considered evidence. And perhaps, in your framework, the thought of huh, that's funny is someone's brain all of a sudden taking something as evidence that wasn't normally supposed to be evidence. You're doing one set of experiments and then you notice this completely different thing, and you're like, oh, maybe that's actually a piece of evidence for something completely different. And then it opens up a rabbit hole.

[01:00:31] Michael: Yeah, this is another one of those cases, though, that calls for [01:00:35] a kind of creative balance. I do think it's incredibly important that scientists not get distracted by things like this. On the other hand, it would be terrible if scientists never got distracted by things like this. And I guess one way I see the iron rule is as a kind of social device for making scientists less distracted, while not putting on the kind of mental fetters that would make it impossible for them ever to become distracted.

[01:01:05] Ben: And perhaps the distraction, the saying oh, that's funny, is the natural state of human affairs.

[01:01:12] Michael: Well, I think so. I think otherwise we would all be like Aristotle, and it turns out it was better for science for us to be actually a little bit less curious and interesting and variable than we naturally are.

And it's interesting and variable and we had actually our, so

[01:01:24] Ben: So one could almost say, would you say it's accurate that the iron rule is absolute, but so [01:01:35] is breaking it? In the sense that if you could somehow enforce that every single person obeyed it all the time, science would lose the serendipitous discoveries we actually make. In order to make those, you need to break the rule, but you can't have everybody running around breaking the rule all the time.

[01:01:57] Michael: I'd put it a little bit differently, because I see the rule as not so much a rule for life and for thinking as a rule for publishing activity. So you're not technically breaking the rule when you think huh, that's funny and go off and start thinking your thoughts. You may not be moving towards the kind of scientific publication that satisfies the rule, but nor are you breaking it. But if all scientists, as it were, lived the iron rule, not just when they took themselves to be playing the game, but in every way they thought about [01:02:35] the point of their lives as investigators of nature,

well, people are just not like that; it's hard to imagine that would ever really happen, although to some extent I think our science education system does encourage it. But if that really happened, it would probably be disastrous. It's like the pinch of salt: you only want a pinch, but without it, it's not good.

[01:03:06] Ben: That seems like an excellent place to end.

Thank you so much for being part of Idea Machines.
