If atheism were true, science would be impossible


We value scientific knowledge as the best and highest form of knowledge. A recent article written for National Geographic explained that “in this bewildering world we have to decide what to believe and how to act on that. In principle that’s what science is for.” But is science reliable?

It may seem a laughable question at first, because everyone takes it for granted that, yes, science is obviously reliable when it’s done in accordance with standard procedures. But the subject at hand is not the way we do science; it is a more fundamental question than that.

Science is high on the scale of cognitive development. Babies are curious about the world, and they go about satisfying their curiosity in mostly haphazard ways. From their earliest development they begin learning some basic principles about the world. They learn about gravity in the sense of recognizing a pattern: things, quite simply, fall. They put things in their mouths. They touch and feel and grab and squeeze. But this is a rather primitive mode of cognitive function. Science is much higher than this and a sign of a highly developed mind. Science is a systematized way of gaining knowledge, a disciplined approach to investigating the world.

All of science rests upon what we call inductive inference. That’s because modern science is practically synonymous with the way we go about gaining knowledge, and we go about gaining knowledge by applying the scientific method. Wikipedia explains:

Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe….In modern usage however, “science” most often refers to a way of pursuing knowledge, not only the knowledge itself. It is also often restricted to those branches of study that seek to explain the phenomena of the material universe….Over the course of the 19th century, the word “science” became increasingly associated with the scientific method itself, as a disciplined way to study the natural world, including physics, chemistry, geology and biology.

The National Geographic article quoted the editor of an academic journal on the close link between “science” and the method we use to develop it:

“Science is not a body of facts,” says geophysicist Marcia McNutt, who once headed the U.S. Geological Survey and is now editor of Science, the prestigious journal. “Science is a method for deciding whether what we choose to believe has a basis in the laws of nature or not.”

“The scientific method leads us to truths that are less than self-evident, often mind-blowing, and sometimes hard to swallow,” the article’s author explains. The scientific method applies inductive reasoning in a systematic way, and these hard-to-swallow “truths” are derived from the application of the scientific method. When we make a valid inductive inference, we are taking past experience and making a guess about what the future will be like based on that past experience.

But this very process itself brings us face-to-face with some hard-to-swallow truths, truths which are certainly mind-blowing. They are so mind-blowing, in fact, that they raise serious problems for modern science. Most people, especially scientists, would rather stumble around in darkness in defiance of the stated goals of the scientific profession than confront the issue head on. Neil deGrasse Tyson issued the profession’s mission statement in the opening episode of Cosmos:

“This adventure is made possible by generations of searchers strictly adherent to a simple set of rules. Test ideas by experiments and observations. Build on those ideas that pass the test. Reject the ones that fail. Follow the evidence wherever it leads, and question everything. Accept these terms, and the cosmos is yours.”

But science doesn’t question everything. We may be quite eager to answer the question “Is science reliable?” in the affirmative, but do we have a justification for that answer?

In regard to this particular issue, scientists selectively turn the light of science away, allowing certain truths to remain forever in the shadows.


Consider the safety aspect of taking out your hammer to do some woodwork around the house. You may think back to an earlier time in your life when, not being as careful as you are today, you whacked your thumb with one. You remember how badly it hurt (very badly). Not wanting to repeat that mistake, you corrected your technique when wielding a hammer to minimize the chances of it happening again.

We assume some basic things about the nature of reality, though, when we think and act like that (even if we aren’t consciously aware of it). When you hit your thumb, a pain sensation in the form of a biochemical signal shot, in an instant, from your finger to your brain. The pain stung so strongly that it may have even caused you to scream.

If you hit your thumb once before and it hurt terribly, and if you set up another experiment to hit your thumb again, what would you predict would happen? You wouldn’t immediately think to ask yourself, “The next time I hit my thumb with a hammer, I wonder if it will feel good?” You also wouldn’t predict that, by hitting your thumb with a hammer, the chair next to you will turn into a gopher. No, you wouldn’t do the experiment at all. It’s ridiculous.

The universe just doesn’t work in such a way as to prompt us to ask questions like this, at least not usually. Things don’t generally happen randomly. If they did, the universe and reality would be unpredictable. If it were unpredictable, we would have no reason to assume that, just because it hurt when we whacked our thumb with a hammer last time, it will hurt the next time, too. That means, practically speaking, we couldn’t rely on our memory of past events to provide any useful information. The next time we hit our thumb, our bank accounts might explode with millions of dollars in cash.

If the universe really did work that way, it means we couldn’t reliably predict anything.


This simple truth extends to the sciences: if certain procedures performed in the lab under controlled conditions on one day didn’t behave the same way under the same controlled conditions in the lab on a different day, then it would do us no good to go to the lab and try to learn anything about whatever it is we are studying. Whatever we learned from studying chemicals or physical laws on Tuesday would not hold true on Saturday. Trying to combine hydrogen and oxygen into water molecules would be fruitless because, if one procedure worked one day, it couldn’t be replicated on another.

Randomness would rule the universe. Chance would be our hope and our fate.

We could develop no basis for knowledge or truth, then, since what appeared to be the truth on Tuesday would be different from what we consider to be the truth on Saturday. Knowing how atoms behave on Thursday does you no good when they behave differently on Friday, or when they behave differently from one minute or hour to the next. Things would be truly random.

As it is, nature is predictable. Reality and the way the universe operates are predictable. This property of reality is called the uniformity of nature, or uniformitarianism. The use of inductive inference — forming general theories by projecting into the future the same results gained from conducting and observing limited test cases in the past — is an application of the uniformity principle.
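The logic of inductive inference can be caricatured in a few lines of code. The sketch below is my own toy illustration, not anything from the text: the hypothetical `inductive_prediction` function simply projects the most common past outcome onto the next trial, a move that is only sensible if nature is uniform.

```python
# Toy illustration (not from the text): induction projects past outcomes
# onto the future, which presupposes that nature behaves uniformly.

def inductive_prediction(observations):
    """Predict the next outcome by assuming the future resembles the past."""
    if not observations:
        raise ValueError("induction needs at least one past observation")
    # Pick the most frequently observed outcome so far.
    return max(set(observations), key=observations.count)

# Four trials of hammer-meets-thumb, all with the same result:
past_trials = ["hurt", "hurt", "hurt", "hurt"]
print(inductive_prediction(past_trials))  # -> hurt
```

Nothing in the function guarantees the prediction; the warrant for trusting it is precisely the uniformity principle under discussion.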

As a foundational principle, then, it can be seen how the principle of uniformity applies to all science, but the term “uniformitarianism” was perhaps most popularized by the movement in the geological sciences that began with James Hutton in the late 1700s and culminated in its systematization by Charles Lyell’s Principles of Geology, published around 1830. Hutton proposed that the same processes that occur today have been occurring in the past, and at the same rate (slowly). His conclusion was that it would take millions or billions of years for the earth’s geological features to take shape when acted upon by the slow-moving forces of erosion and deposition that we observe today. His famous phrase is that, by assuming the uniformity principle and using it as a window to inquire into the past, “we find no vestige of a beginning, no prospect of an end.”

Hutton wasn’t a very readable author, which may be one reason his theory didn’t catch on immediately. Lyell, however, saw genius in Hutton’s work, and Lyell was a much better writer, able to communicate Hutton’s ideas to a much broader audience. Charles Darwin was a member of that audience. Darwin read Lyell’s books during his voyage on the Beagle in the 1830s and, from the vast deepness of time presented in them, gained inspiration for his theory of evolution as first described in his 1859 book, On the Origin of Species. The term “uniformitarianism” isn’t commonly used these days, perhaps in part because of Stephen Jay Gould’s influence, and partly because its definition has changed since Lyell’s day.

Stephen Jay Gould was one of the most widely read and popular science writers of the 20th century before his death in 2002. He was an influential evolutionary biologist and historian of science. One of the main reasons for his popularity is that he, like Lyell in his day, was a good writer who presented his thoughts clearly.

In a paper Gould published in 1965, he wrote that it was time to retire the term “methodological uniformitarianism,” the term used to describe “the invariability of natural laws in space and time.” Having won the battle of removing God, providence, and supernatural intervention (i.e., miracles) from the picture of geological and, thus, world history, he said “we need no longer take special pains to affirm the scientific nature of our discipline.” In his paper, he claimed that allowing the possibility of miracles and divine intervention into the study of earth’s history made geology unscientific because it would eliminate the validity of inductive reasoning. When explaining the debate over the place of supernatural involvement in science, he wrote in part that “if God intervenes, then laws are not invariant and induction becomes invalid.”

Gould’s assessment of the importance of assuming the principle of the uniformity of nature is right-on: “Methodological uniformitarianism as a statement of scientific procedure remains vital to geologic inquiry. However, the assumption of spatial and temporal invariance of natural laws is by no means unique to geology since it amounts to a warrant for inductive inference which, as Bacon showed nearly four hundred years ago, is the basic mode of reasoning in empirical science.”

Assuming the uniformity principle forms the critical foundation for all fields of science.


It’s generally accepted that knowledge must be justified, a concept referred to as “justified true belief” (usually attributed to Plato). It’s not enough for someone to simply say “I believe the sun is yellow.” It’s considered one thing to simply believe something, but another altogether to have actual knowledge of something, to truly know it. If you tell a friend that you just “know” your favorite baseball team is going to win the World Series, and then they do, did you really know it? No, but you did believe it, and as it turned out your hunch proved to be true. You didn’t know it, because knowing would have required foreknowledge of their victory, and you had no actual evidence to justify such foreknowledge.

If you did have such evidence to justify your foreknowledge, that would be akin to having the gift of predictive prophecy, of being able to see the future before it happens. If you saw a vision of the future, then it could be said you knew who would win even though others may not believe you. Objective evidence in the form of a smart-phone video recording of that prophetic vision that you could show to others, in advance of the game being played, might convince others that you really did know your team would win. They could then be sound in their belief that you knew who would win because they would have evidence to justify that belief (a video of a prophetic vision).

Otherwise, holding a belief without having evidence to back it up isn’t considered true knowledge. Making decisions based on such a belief is usually considered irrational; acting against real knowledge, a justified true belief, is usually considered extremely irrational, or insane, if no counter-evidence is produced to justify those actions. Someone who understands the consequences and physical effects to his body that will be produced by jumping off a cliff (splat), but who insists that he will survive the fall in spite of that knowledge, without providing proof that there would be some other outcome, is considered insane.


Empirical science is the approach to gaining knowledge which is based on experimentation and observation. Empiricism is a philosophical school that believes true knowledge only comes from that which we can experience through our senses; in other words, “seeing is believing.” To put it a slightly different way, if you haven’t seen it, then you shouldn’t believe it. Empiricism emphasizes the role of experience-based evidence that is derived from experimentation and observation in forming and expanding our base of true knowledge.

Empiricism is contrasted with rationalism, which is a school of thought that emphasizes the importance of theoretical knowledge and the role of reason as our primary tool in discovering true knowledge; you might think of rationalism as embodied by the “arm-chair philosopher” who sits and thinks about the world and how things are, as opposed to the “I must see it to believe it” empiricist who is always testing his theories in the lab.

As philosophical schools of thought, they both describe different theories of epistemology, which is the study of the nature of knowledge. In layman’s terms, epistemology tells us what we can know and how we can know it. It includes the study of the limits of our knowledge and understanding. It tries to answer questions like “What is truth, and how can we discover it?”

Empiricism says we discover truth by observing it through our senses, while rationalism says we discover it primarily through the application of reason. These different schools have historically produced different consequences. Aristotle was an empiricist, and his writings are full of discussions of experiments investigating different aspects of reality, such as animal dissections to investigate biology. Plato, being a rationalist, didn’t produce this kind of work; he saw less value in investigating the details of material reality, believing it more important to understand the realm and nature of the Idea, or Form, which stood apart from the material reality we experience in our day-to-day lives.

That’s not to say that empiricism is irrational while rationalism is rational. Both are rational in the sense that both attempt to develop theories of knowledge, and methods of attaining new knowledge, based on facts or reason. Both schools recognize that it is rational to have good reasons for believing what we do, and that our actions ought to be consistent with our beliefs. Both schools would acknowledge that it is irrational to believe that being hit by a train will kill you while, knowing you desire to live a long life, you insist on stepping in front of one without trying to avoid it.

Both schools of thought believe humans ought to be rational, so both schools think people should have solid reasons for holding their beliefs, and that people should act consistently with those beliefs. Those beliefs should be justified on the basis of reliable evidence.

The main difference between empiricism and rationalism lies in the weight each school places on the type of evidence: whether it is experimentally derived or reasoned out through thought experiments and logical syllogisms.

What both schools seek to do is avoid being irrational.


Irrationality is succinctly explained by Wikipedia as “an action or opinion given through inadequate use of reason, emotional distress, or cognitive deficiency.” Someone who makes an illogical decision is thought of as being irrational. For example, if it’s very hot outside on a summer day, and someone shows up to an afternoon baseball game dressed in long pants and a long-sleeve shirt, they will probably be thought of as irrational by nearby observers, for obvious reasons.

Another reason people are said to be irrational is because they make decisions based on feelings and emotion instead of logic. They are thought to react emotionally instead of responding logically. This opinion is probably a leftover influence from the ancient Greeks (or a symptom of modern culture’s fondness for ancient Greek culture) who are said to have separated rationality from emotion and sensuality because the latter two were thought to be the sources of false assumptions and statements.

However, the fellow at the baseball game might know something the others don’t. Suppose that, when asked by a neighboring fan why he wore long clothes despite the heat, he replied that, around here, the mosquitoes are so bad on these hot afternoons that the only way to comfortably enjoy the game is to wear long clothes to protect yourself from their constant biting. In light of that information, he would have rescued himself in the eyes of his neighbors. Far from being irrational, he might be, on the contrary, one of the most rational people out there that day.

The baseball fan wearing the long-sleeve shirt is said to have provided a warrant, which is a proper justification for holding a particular belief. His justification, in this case, would have come from his past experience, probably suffering through one or more summer afternoon baseball games while being eaten up by mosquitoes. In a very informal sense, he applied the scientific method to gain new knowledge. If, when the bugs come out, they leave him alone because he’s wearing long clothes, then his hypothesis (“it’s better to wear long-sleeve shirts at summertime baseball games than to be eaten by the bugs”) has been tested and found to be plausible.


The principle of uniformity is vital in the development of scientific knowledge. Inductive inference is only a valid reasoning tool if the principle of the uniformity of nature is true; that’s what Gould means by uniformity being a “warrant” for inductive reasoning. The baseball fan who wore the long-sleeve shirt provided a warrant for his attire when he explained the threat and inconvenience of summertime mosquitoes. Similarly, the scientist provides a warrant for using inductive reasoning when he invokes the principle of the uniformity of nature.

Gould refers to uniformity as “spatial and temporal invariance of natural laws,” which is a fancy way of saying that the laws of physics that operate in our little corner of the universe here and now 1) operate the same at the far reaches of the cosmos (“spatially”) and 2) have always operated in the same way throughout time (“temporally”), even, and especially, in the areas of the universe we can’t actually observe.

If the principle of uniformity isn’t true, then we have no basis for assuming that the way the world operated in the past is generally the way it will operate in the future. If we couldn’t make that assumption, then it would be pointless to develop hypotheses and repeatedly test them. The results from a set of experiments one day would be different when conducted on a different day. It might be true that, today, the speed of light is 299,792,458 meters per second, but it might only be 146,294,000 meters per second tomorrow. Such a conclusion would be disastrous to our understanding of the world and its operation. If the world truly operated this way, the scientific method would be meaningless. Its theoretical basis (uniformity) would be at odds with the actual operation of reality (randomness), and so any conclusions drawn from its application would be useless.

In other words, we would have no justifiable reason for using inductive reasoning. If our goal was to gain true knowledge, but we knew that the physical laws (among other things) change daily, then it would be irrational to apply knowledge-gathering theories that assume that they didn’t.

That would be like assuming you are Superman and standing in front of a speeding bullet, even though you aren’t. The consequences would be disastrous.

In the case of the scientific method, it is deemed reliable based on its application of inductive reasoning. Inductive reasoning is considered to be a valid and reliable mode of reason. Inductive reasoning, in turn, is considered to be a valid mode of reasoning because it is justified on the basis of the principle of the uniformity of nature:

  1. The scientific method is founded upon inductive reasoning.
  2. Inductive reasoning is founded upon the principle of the uniformity of nature.

It’s now time to start homing in on our problem.


Remember, it is considered irrational to act on a belief that you don’t have actual knowledge of, and even worse to act contrary to the facts. If you don’t have evidence for a belief, then you don’t have true knowledge of it. In accordance with the dictates of modern science, we gain knowledge by forming logical conclusions based on evidence gained through testing and observation. People are derisively labeled science deniers if they disagree that climate change is man-made, that all vaccines are safe, that the big bang theory is true, or that the earth is 4.5 billion years old, all “facts” said to have been derived from science.

So, that being said, what is the rock-solid evidence that we have to justify our belief in the principle of the uniformity of nature?

Here, at long last, we have arrived at a serious problem that has been swept under the rug for over two hundred years. If the principle of the uniformity of nature is our warrant for inductive reasoning, then what is our warrant for assuming the principle of the uniformity of nature?

In a nutshell, there isn’t one, at least none offered by modern science. Modern science asks us not to pursue one, but rather simply to assume it without supplying any reasonable evidence to justify it. We are brought up in school to uncritically accept this belief. It’s not raised as a question at all unless you go to college and begin taking philosophy classes.

Unless pressed, the issue is rarely brought up. The problem is that answering the question “How do we justify our belief in the principle of uniformity?” requires circular reasoning, and for that reason the whole issue is ignored; it would be an embarrassment if word got out that the foundation of modern science is irrational.

Circular reasoning is a logical fallacy. It’s fallacious because the circular argument requires us to assume something that hasn’t yet been proven in order to prove a point. The original conclusion “A” proposed by the argument remains unanswered because it depends on reason “B,” which in turn depends on conclusion A for its own validity.

A popular example of circular reasoning is given in the fifth edition of Engel’s logic textbook, With Good Reason. It goes like this:

  1. God exists.
  2. How do you know?
  3. The Bible says so.
  4. How do I know that what the Bible says is true?
  5. Because the Bible is the word of God.

Engel explains that this particular argument is circular because it begs the question, asserting the premiss as the conclusion. We can easily retool this example to make it relevant to our special case:

  1. Inductive reasoning is valid.
  2. How do you know?
  3. The principle of uniformity is true.
  4. How do I know that the principle of uniformity is true?
  5. Because it is confirmed through the application of inductive reasoning, which we call the scientific method.

In other words, we can show that the principle of uniformity is true through experimentation carried out using the scientific method, but the scientific method requires an assumption that the principle of uniformity is true in order for it to be valid. It’s like saying uniformity is true if uniformity is true.
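The circle can be made explicit with a small sketch of my own (the belief labels are just strings, nothing formal): each belief maps to the belief offered as its justification, and following the chain of “because…” eventually returns to where it started.

```python
# Toy sketch (my own labels): each belief maps to the belief offered
# as its justification. Following the chain exposes the circle.

justified_by = {
    "the scientific method is reliable": "inductive reasoning is valid",
    "inductive reasoning is valid": "the uniformity of nature is true",
    "the uniformity of nature is true": "the scientific method is reliable",
}

def justification_chain(belief, seen=None):
    """Follow justifications until a belief repeats (circular) or
    no further justification is offered (grounded)."""
    seen = seen or []
    if belief in seen:
        return seen + [belief], True    # we came back around: circular
    if belief not in justified_by:
        return seen + [belief], False   # chain terminates: grounded
    return justification_chain(justified_by[belief], seen + [belief])

chain, circular = justification_chain("the scientific method is reliable")
print(" -> ".join(chain))
print("circular:", circular)  # -> circular: True
```

A chain that terminates in an unjustified starting point would come back `False`; the point of the passage above is that modern science has no such terminating point to offer.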


Simply put, assuming uniformity without a good reason for doing so is an arbitrary act, which is also a logical fallacy. Merriam-Webster defines arbitrary as “not based on reason or evidence.” To assume the principle arbitrarily is to be irrational, since being arbitrary is contrary to reason.

Ultimately, then, the official yet unspoken foundation of modern science is arbitrary, fallacious, and irrational. We have an arbitrary assumption (the principle of uniformity) that can only be justified after the fact through the application of a logical fallacy (circular reasoning). Modern academia and education, by refusing to even ask the question, are arbitrarily sealing off realms of investigation and intellectual inquiry. They have to do this because, like the man behind the curtain, government accreditation, billions of dollars in taxpayer-funded grants, and peer-reviewed scientific journals present dazzling spectacles of smoke and fire meant to distract us from the wiles of, as Wikipedia puts it, ordinary conmen who use elaborate magic tricks and props to make themselves seem “great and powerful.”

Just as modern science says we should simply assume uniformity, we could just as easily, and just as arbitrarily, insist instead on making a different assumption: the principle of uniformity is not true. Or, similarly, we could insist upon saying that the principle of uniformity applies for the past 729 years of history, but no further back than that.

Most of us aren’t logical all the time, but we all strive to be logical most of the time. Arbitrariness strikes at the heart of our intuitive ability to identify bad arguments. If any of those other assumptions I just proposed were actually true, then any so-called “knowledge” gained by use of our former assumption of long-term uniformity would have to be reevaluated in light of these other arbitrary assumptions. This is the problem with granting the use of arbitrary assumptions without challenging them: there is no reasonable barrier restricting the entrance of other such assumptions.

In an arbitrary world, the evidence obtained by applying the scientific method to reality becomes one set of possibilities among a sea of infinite other possibilities, limited only by the number of creative assumptions mankind can devise. How do we know that we shouldn’t try out any of these other assumptions about how reality operates? To be dedicated to one assumption among a multitude of others would normally be derisively labeled dogmatic. Christians who believe the Bible’s account of special creation in six 24-hour days, in the face of the modern scientific consensus that the cosmos was created over a period of billions of years as a consequence of the big bang, are said to be dogmatic. They are criticized for their faith, something said to be contrary to reason. They cling to their beliefs with tenacity even in light of evidence supplied by science that supposedly contradicts the Bible’s account, and this angers atheists. As mentioned earlier, these Christians are labeled science deniers because they aren’t willing to give up their faith in Scripture’s testimony to the creation account (among other things).

In case you think that, given the seeming absurdity of this situation, I may be misrepresenting the officially held position of modern science on this quandary, I’ll let Gould explain it:

Without assuming this spatial and temporal invariance [the principle of uniformity], we have no basis for extrapolating from the known to the unknown and, therefore, no way of reaching general conclusions from a finite number of observations. (Since the assumption is itself vindicated by induction, it can in no way “prove” the validity of induction — an endeavor virtually abandoned after Hume demonstrated its futility two centuries ago.) [Emphasis added.]

The problem is so difficult that practically all attempts at solving it have been abandoned. As Gould said, David Hume explored this problem in great detail in the mid-1700s. He concluded the pursuit was hopeless. Instead of dedicating more time to investigating the matter, he simply gave up: “I dine, I play a game of backgammon, I converse, and am merry with my friends. And when, after three or four hours’ amusement, I would return to these speculations, they appear so cold, and strained, and ridiculous, that I cannot find in my heart to enter into them any farther.”

Bertrand Russell was a master logician and philosopher. In 1912, in his survey of philosophy, The Problems of Philosophy, his assessment followed Hume’s. He wrote that “we must either accept the inductive principle on the ground of its intrinsic evidence, or forgo all justification of our expectations about the future.”

By “intrinsic evidence,” Russell is appealing to some universal ability of all humans to mystically intuit that the inductive principle is true. To pursue this problem further raises more, and more difficult, questions that most scientists simply don’t want to pursue. Questions like “How do I know if a belief I think is self-evident on the basis of its intrinsic evidence is actually so?” and “How do you know that recognizing intrinsic evidence is a skill universal to all people?” By Russell’s own admission, “the existence and justification of such beliefs — for the inductive principle, as we shall see, is not the only example — raises some of the most difficult and most debated problems of philosophy.”

By the very standard modern science uses to test evidence and judge theories, Russell’s appeal fails. By the very mantra that modern science, represented by Neil deGrasse Tyson, espouses — question everything — it arbitrarily closes the door to the question of how to justify the principle of uniformity. Perhaps this is because applying the scientific method consistently when predicting the future would require uncomfortable premisses. Since testing and observation are a scientist’s means of gaining knowledge, for a scientist to legitimately have any true knowledge about the future, on the basis of this empirical epistemology, would require that he be able to observe the future in advance.

In other words, to predict the future accurately and in a way consistent with the application of the scientific method, science would have to admit predictive prophecy as a legitimate possibility. Needless to say this is not an idea most scientists find palatable.

So, what we are left with is a presupposition, a belief foundational to human thought that is incapable of being tested by science because science itself rests upon its validity. In order for science to be a useful tool, this presupposition must be true.

And yet, modern science and the materialistic atheists who promote it are incapable of providing any rational justification whatsoever for this presupposition. It is such an elementary assumption in our thinking that to abandon it would throw our world into disarray. You would have no reason to think that, just because Jim was killed when he walked in front of a train, you would be, too. To find out what would happen, you would have to be willing to try it.

The presupposition of uniformity, a question once at the forefront of intellectual inquiry during the Enlightenment, has gone from one that found the scientific leaders of the day openly admitting total defeat to being dropped down the memory hole altogether. These days, it’s better not to ask the question at all, whether out of personal curiosity or in defiance of the status quo, since self-inflicted ignorance and blatant hypocrisy have become official policy.


The materialistic atheist faces more hurdles when it comes to deductive reasoning, which is the use of laws of logic to construct valid arguments and draw conclusions based on premisses. The classic example of deductive reasoning is the following:

  1. All men are mortal.
  2. Socrates was a man.
  3. Therefore, Socrates was mortal.

The first two statements are premisses. If both are true, then the third statement must also be true. This follows a basic pattern of valid inference that flows from the laws of logic: if p, then q; p; therefore q.
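As an illustration (not from the original essay), the Socrates syllogism can be sketched in Python using sets: the first premiss says the class “men” is contained in the class “mortals,” and the conclusion then follows for any particular member. The names in the sets are hypothetical examples.

```python
# Premiss 1: all men are mortal -- the class "men" is a subset of "mortals".
men = {"Socrates", "Plato"}
mortals = men | {"Bucephalus"}  # every man, plus other mortal creatures

assert men <= mortals           # premiss 1: "men" is contained in "mortals"
assert "Socrates" in men        # premiss 2: Socrates was a man
assert "Socrates" in mortals    # conclusion: therefore, Socrates was mortal
```

The conclusion costs nothing extra to reach: once the subset relation and the membership claim are granted, the final assertion cannot fail.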

A law of logic is a universal concept. It is a principle that holds true regardless of the person who is using it, the day or year it is invoked, or whether there is anyone around to think about it or apply it at all. Similarly, concepts in general are universal ideas that resonate with people across time and space, and sometimes even across cultures.

In the example given above, “men” is a class concept. “Socrates” is an individual instance of the class of “men.” Socrates was a material thing. “Men” is not. As a class concept, it exists only in our mind.

We commonly speak of things “in general” or things “in particular.” Concepts are general ideas that tie together individual particulars. “Whaleness” is a general concept. When we see a pod of whales in the ocean, we recognize in each of them certain characteristics that embody their “whaleness.” When we see three marine mammals with a certain set of characteristics, we recognize that they belong to the class “whale.” Because of that, we can accurately assume basic information about the creatures we are observing.

If concepts did not exist, then, again as before, randomness would take over. We would see three creatures swimming in the ocean, but we would know nothing about them. They would be “particulars” only.

We could only legitimately assume that each animal would be different from the others. If we captured, dissected, and studied one of those three whales, we would have only gained knowledge about that one particular whale (assuming the uniformity principle were true, granting us the use of science). Whatever we learned about that one whale, we would be unable to justify applying it to the other two. We could not study one whale and then make generalizations about other whales because there would be no such thing as a class concept to link them together in a logical and accurate fashion.

In fact, there would be no “logic” to speak of in such a world because logic is a concept.
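The relation between a class concept and its particulars is, incidentally, mirrored in how programming languages treat classes and instances. A minimal sketch (the `Whale` class, its attributes, and the pod are hypothetical, chosen only to echo the essay’s example):

```python
from dataclasses import dataclass

@dataclass
class Whale:                    # the general concept -- "whaleness"
    name: str
    length_m: float

    def breathes_air(self) -> bool:
        # Knowledge attached to the class holds for every member of it.
        return True

# Three particular whales -- individual instances of the one class.
pod = [Whale("first", 14.0), Whale("second", 12.5), Whale("third", 15.2)]

# Because each particular falls under the class, what we know of the class
# transfers to every instance without examining each one separately.
assert all(whale.breathes_air() for whale in pod)
```

Strip away the class and only the three unrelated particulars remain; nothing learned about one would transfer to the others.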

Another example is the number five. “Five” is a universal concept. If you write a number five on a piece of paper and then erase it, you have erased an instantiation of the concept of “fiveness.” You have not obliterated fiveness itself. Fiveness is an immaterial concept that cannot be eradicated. Mathematics depends on fiveness, as well as twoness, oneness, and many other abstract concepts. These concepts are fixed across time and space. Can 2 + 2 = the square root of a tree, or possibly an Oldsmobile, or parmesan cheese, just because a different culture or person arbitrarily defines it that way?


Not if they want to have any hope of using their math to gain greater knowledge about our world and harness it to achieve progress. 2 + 2 had better equal 4 in their system of math, or else they won’t be able to derive any accurate information from it at all.
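The point can be made concrete with a small sketch (illustrative only; the `arbitrary_add` function is a hypothetical stand-in for a culture that “defines it that way”): results built on the fixed operation stay accurate, while an arbitrary redefinition fails at the very first step.

```python
def add(a: int, b: int) -> int:
    return a + b        # the fixed, invariant operation

def arbitrary_add(a: int, b: int) -> int:
    return a * b + 1    # an arbitrary redefinition of "addition"

# Invariant arithmetic lets derived results remain accurate:
assert add(2, 2) == 4
assert add(add(2, 2), add(2, 2)) == 8   # two groups of four make eight

# The arbitrary version cannot even reproduce the first result:
assert arbitrary_add(2, 2) != 4
```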

Concepts are immaterial. They are not physical things that you can touch, smell, see, hear, or taste. How can a materialistic atheist account for such things?

If one person imagines a concept, can another person imagine the same concept? What about millions of people across time, space, and cultures? If so, that would imply that concepts are not only universal but invariant. “Whaleness” as conceived in one culture would consist of the same attributes as “whaleness” in another, not of what either would call “treeness.” The cultures may have different words for the concept, but, when translated, a common set of ideas would remain.

If there is no higher power ensuring that all of the particular people on earth and in history are able to think of the same general class concepts, then all that we are left with are particular individuals each with a brain behaving in some biochemical way.

If concepts are merely products of our brain chemistry, then what mechanism ensures that one person’s brain chemistry behaves similarly to another’s when sensing the same or a similar set of stimuli? Without such a mechanism, there is no way to predict how one person’s brain chemistry will behave compared to another’s. When I’m talking about whales, you might imagine I’m talking about trees. Whales are creatures that swim in the ocean, and trees are plants that grow on the surface of the earth. How could people communicate if this were how our brains functioned?

As I said just a moment ago, this same conundrum applies to the laws of logic. Laws of logic are universal concepts. At least, that’s how we think of them. But if they are particular to every individual who imagines them, then what’s stopping us from making up our own laws of logic?

In a material universe where things like souls and abstract ideas don’t actually exist, where all that exists at bottom is atomic matter, what, I ask you, is a “concept”?


Finally, the two issues we have been discussing all rest upon an even more fundamental presupposition: that we are capable of thinking real thoughts and systemizing knowledge about our experiences.

We traditionally draw distinctions between the terms “mind” and “brain.” We understand that it’s ridiculous to speak of the mind as a material object; as Dr. Greg Bahnsen has said, it’s absurd to say something like “My mind is currently positioned at five feet, 11 inches above the ground, directly above my feet.” It is correct to say that about our brain, the physical gray matter that is contained in our craniums and held in place securely between our ears.

When we say “mind,” we mean that part of us that is intangible. Humans have thoughts. They have imaginations. We can imagine how music sounds, and when we close our eyes we can imagine visual objects without using our organs of vision.

When we say “brain,” we mean the electrochemical processes that operate inside the billions of cells that make up our brain tissue. We are implicitly referring to things like neurons passing signals across their plasma membranes, and to the synapses that make this neurological communication possible. We are understood to be talking about the physiological command center that exerts control over the rest of the body. The physical brain is like a complex system that receives inputs, processes them, and generates outputs.

We can measure these processes using analog and digital instruments, and in doing so we can quantify them in terms of signal content, frequencies, and amplitudes. We can record this data and retrieve it later. But what we can’t do is convert them into thoughts and emotions. We can’t monitor someone’s brain activity and report back to them exactly what they were thinking. We can’t stimulate a portion of someone’s brain, capture the resulting electrochemical reaction using our precision instruments, and then read back to them their memories. We can’t use a projector and some electrodes to project their thoughts on a wall like a movie.

This is the boundary between the two concepts we generally think of as mind and brain.

In some ways, the two are linked. I can think a command with my mind, and my brain will convert that thought into action, such as when I decide to grab a glass and bring it to my mouth to drink the water inside. But in other ways, the two seem completely divorced from one another. My brain commands my heart to beat and regulates its action without my needing to actively think those thoughts. In fact, I can’t even think the thought “Heart, stop pumping,” and make my heart stop pumping. It beats independently of my thoughts or desires on the matter (short of killing myself).

The question for the materialistic atheist, then, is this: does mind reduce to brain? That is, is man nothing more than an animal, a complex of electrochemical reactions and physiological factors that govern his actions? Do thoughts actually exist? Are feelings and emotions like love real?

Or are they mere consequences of the reactions in our brain? Is a person’s feeling of personality and individuality merely an illusion brought on by uncontrollable chemical reactions firing in their brain? Am I the only one in the world who has these thoughts? Do you even understand the words I’m writing?

If it truly is the case that mind reduces to brain, then the atheist’s argument collapses in upon itself. If we can’t control our neurological reactions, then holding debates and conducting scientific inquiry makes no sense. It is pointless to debate, pointless to argue, pointless to do science. Any materialistic atheist who does any of these things contradicts himself and plays the hypocrite.


There are three critical presuppositions that we must hold to if we are to have legitimate hope that science truly helps us to gain new knowledge about our universe:

  1. Mankind has a mind and a brain, two distinguishable concepts that work together. Our thoughts are unique and are produced by some faculty that is separate from the biochemical and electrical processes operating at the cellular level alone. Our thoughts are not merely the products of those biological processes.
  2. Universal, invariant, immaterial concepts exist. They exist across cultures, individuals, space, and time. The laws of logic are one such concept, and classes are another. Things in general and things in particular are legitimate distinctions. Twoness is related to two, and the concept of twoness, along with all other concepts, exists independently of the person or people thinking it.
  3. Nature is uniform so that the processes that govern the physical universe tomorrow will be similar to those that govern it today, and both are similar to those which have governed it in the past. By way of this presupposition, in conjunction with the others, we can apply the scientific method.

If these presuppositions weren’t true, science would be impossible and the pursuit of any knowledge at all would be meaningless. The question we must ask is, What worldview makes sense of these presuppositions?

Materialistic atheism is incapable, as a coherent worldview, of supplying the justifications required for people to legitimately hold to these three presuppositions. If the world actually operated as the materialistic atheist claims it does, then we would be incapable of even knowing it.

Materialistic atheism calls for a world made of only material things, atomic particles bumping into each other. Concepts and thoughts can’t exist because they are immaterial things. “Thoughts” could only be described as biological processes which have no meaning. Human dignity, as a concept, would have no meaning. Nor would love, nor kindness, empathy, or charity. Furthermore, in a universe governed by chaotic forces, what reason can be offered to justify that processes tomorrow will operate similarly as they have today?

Darwinian evolution, if it were true, would mean we live in a world that we cannot comprehend. We simply wouldn’t comprehend anything at all.

Hume was right. Apart from God, it is futile to prove the validity of inductive reasoning.

Only the Christian worldview offers the precondition of intelligibility that justifies holding these presuppositions. Christianity offers a world governed by a loving, just, holy, and righteous God. He has promised to rule creation with regularity so that we may gain increasing dominion over it. But his rule of regularity includes miraculous interventions where He sees fit, such as resurrecting dead men to life, parting the Red Sea, and calling into existence all of creation from nothing at all.

God’s thought processes are what we characterize as “logical,” and since he has made us in his own image we, too, are designed to think God’s thoughts after him. We were given minds that could grasp His creation. We are all given a soul; he personally knows every individual ever to have been born, or who has yet to be born, so we are more than mere biological processes. We are more than just animals or sacks of meat. In fact, we are greater than all the animals, for He has set humans above them in his creation hierarchy. And it is his righteous law by which we are to live, a law which requires we honor and recognize the sanctity of life.

The image of God in us is our source of dignity, and it is preserved by societies which govern themselves in accordance with God’s Bible-revealed law.

Finally, it is on the basis of mankind being made in God’s image, and upon God’s promise that he will rule his creation with regularity, that we can legitimately hold to the three presuppositions presented in this essay that make science possible.

God has revealed these truths to us in the Holy Scriptures. But not all will receive them. Because of Adam’s sin in the garden of Eden, all mankind covenantally inherited Adam’s ethical failings. We must be reconciled to God before we are willing to embrace biblical truth. We can only do that through the atoning blood of Jesus Christ, by confessing our sins with our mouth and declaring Jesus to be our Lord and Savior. Only then can we truly start taking captive all of our thoughts to the obedience of Christ.

Unbelievers are ethical rebels. They refuse to bend their knee to King Jesus. Because of this, they suppress the knowledge of God’s existence in unrighteousness. They attempt to veil the image of God within them, by which knowledge of God is made known to them, by burying it under piles of moral pollution. They refuse to acknowledge God as the ruler and creator. But the proof of their knowledge, and thus their guilt before him, is that they conduct their lives in accordance with the Christian worldview. They hold presuppositions that they have no business holding unless they understand God to be who the Bible says he is.

They hold to a non-Christian worldview in principle, but in practice they conduct their lives as if reality were as Christianity reveals it to be. As Paul wrote, “when they knew God, they glorified him not as God, neither were thankful” (Romans 1:21).

The proof of Christianity is that, if it weren’t true, you couldn’t prove anything at all. Atheistic scientists should, on the basis of their own worldview, be incapable of harnessing science. The fact that they do is proof of their knowledge of God and, therefore, their guilt. Their ability to aid in the development of progress, however, is also proof of God’s common grace, by which “He maketh His sun to rise on the evil and on the good, and sendeth rain on the just and on the unjust” (Mat. 5:45).

So, even though unbelievers often gain control of the levers of cultural influence, through God’s common grace their evil is restrained. He has them under a bridle so that even their evil deeds work to his good and glory. His kingdom in history expands despite even self-conscious attempts to subvert it. Christians are the beneficiaries. The wealth of the wicked is laid up for the righteous (Prov. 13:22), and the meek shall inherit the earth (Mat. 5:5).

