Why Evolution is Wrong, post 7 of 10: Radiometric Dating

Most people reading this series have undoubtedly read in their school textbooks that modern dating techniques can be used to prove that fossils are millions of years old. For many of them it probably seemed unconscionable that I would have the audacity to fundamentally reject the evolutionary timetable for the geologic column the way I did in my last post, as if radiometric dating never even existed. But the patient reader will realize that each of the subjects I am discussing is very extensive, and I cannot always allow myself to be diverted from the particular point at hand, because I may never have a chance to come back to it. Now, though, I think it's time for me to cover my bases and address this very important topic.

I believe that the confidence scientists have in most methods of radiometric dating is misplaced. In order to date a rock accurately by radiometric means, one must know precisely the ratio of parent isotope to daughter isotope at the time the rock formed. This is hypothetically possible in some rocks, where we believe the radiometric "clock" was reset at the time of formation. In other rocks, however, one has to assume that the rock started out with no daughter isotope present, and that assumption may turn out to be completely false.
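To make the dependence on that starting assumption concrete, here is a minimal sketch in Python of the standard decay-age formula, t = (1/λ)·ln(1 + D/P), where P is the surviving parent isotope, D is the daughter attributed to in-place decay, and λ = ln(2)/half-life. The measurements and the half-life below are hypothetical numbers chosen only for illustration; the point is simply how far the computed age swings when one changes the assumed amount of daughter isotope present at formation.

```python
import math

def decay_constant(half_life_years):
    """lambda = ln(2) / half-life."""
    return math.log(2) / half_life_years

def age_years(parent, daughter_now, daughter_initial, half_life_years):
    """Standard decay-age formula: t = (1/lambda) * ln(1 + D/P),
    where D counts only the daughter attributed to in-place decay."""
    d_from_decay = daughter_now - daughter_initial
    lam = decay_constant(half_life_years)
    return math.log(1 + d_from_decay / parent) / lam

# Hypothetical measurements (illustrative only): a parent isotope
# with a 1.25-billion-year half-life.
HALF_LIFE = 1.25e9
parent, daughter = 100.0, 10.0   # arbitrary units

# Assume every bit of daughter came from decay...
t0 = age_years(parent, daughter, 0.0, HALF_LIFE)
# ...versus assuming 20% of it was already present at formation.
t1 = age_years(parent, daughter, 2.0, HALF_LIFE)

print(f"no initial daughter:   {t0 / 1e6:6.1f} Myr")
print(f"some initial daughter: {t1 / 1e6:6.1f} Myr")
```

With these made-up numbers, the two assumptions produce ages tens of millions of years apart, which is exactly the sensitivity described above.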

Another potential problem for radiometric dating is the possibility of contamination by natural processes. Dr. David Plaisted, a computer science professor at the University of North Carolina at Chapel Hill, has written a detailed article on this and other concerns with radiometric dating.* The questions he raises are far too numerous for me to cover here, but I will attempt to deal quickly with the most important one.

The contamination problem stems from the fact that evolutionists rely on elements with very long half-lives to test the age of rocks. The reason is obvious: they are testing their theory that these rocks are millions of years old, and only parent isotopes with enormous half-lives would survive long enough to be tested if that were the case. Unfortunately, this means that the rate of decay is very slow, and that tiny levels of contamination by daughter elements (that is, material introduced from outside sources) could easily throw off the analysis by millions of years or even more.

The main radiometric method that scientists use to date rocks is potassium-argon dating. Potassium-40, the parent isotope, has a half-life of over a billion years, so the daughter element, argon-40, forms extremely slowly. Tiny amounts of argon infiltrating the rocks from a different source can easily throw the results off, yielding an artificially old date of formation. Possible sources of contamination include radiation, the atmosphere, and argon trapped during the rapid cooling of magma. Argon will not bond to rock minerals in large quantities because it is a chemically inert noble gas; however, only tiny amounts of contamination are necessary to call the results into question.
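The sensitivity argument in the last two paragraphs can be put in rough numbers. The sketch below uses a deliberately simplified version of the potassium-argon age equation that treats all radiogenic decay as producing argon (the real method must also correct for the large fraction of potassium-40 that decays to calcium-40 instead); the isotope quantities are hypothetical and chosen only for illustration. It shows how adding a small amount of "excess" argon from an outside source shifts the computed age by millions of years.

```python
import math

HALF_LIFE_K40 = 1.25e9  # years, approximate half-life of potassium-40
LAM = math.log(2) / HALF_LIFE_K40

def kar_age(ar40, k40):
    """Simplified K-Ar age: t = (1/lambda) * ln(1 + Ar/K).
    (Illustrative only -- ignores the branch of K-40 decay to Ca-40.)"""
    return math.log(1 + ar40 / k40) / LAM

k40 = 1000.0            # arbitrary units of potassium-40 measured today
ar40_radiogenic = 50.0  # argon actually produced by decay in the rock

true_age = kar_age(ar40_radiogenic, k40)
# Add a small amount of excess argon from an outside source --
# here just 2% of the radiogenic amount.
contaminated_age = kar_age(ar40_radiogenic + 1.0, k40)

print(f"uncontaminated: {true_age / 1e6:.1f} Myr")
print(f"contaminated:   {contaminated_age / 1e6:.1f} Myr")
print(f"shift:          {(contaminated_age - true_age) / 1e6:.1f} Myr")
```

In this toy scenario a 2% excess of argon moves the computed date by well over a million years, which is the scale of error being claimed here for tiny contamination.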

Scientists are aware of these problems, of course, and try to adjust their calculations accordingly. This makes it necessary to alter some of the numbers in order to achieve sensible results. But again, because of the small amount of daughter material involved, some of these adjustments may have to be very large. It is perhaps this practice that gives radiometric dating some of its supposedly consistent results. I am not accusing anyone of lying; they obviously believe in what they are doing, and it is necessary to try to correct for potential problems. But the many obstacles that must be overcome make the results extremely questionable. I admire the attempt to tackle these challenging problems, but I do not believe scientists have yet demonstrated that their results tell us much about the actual age of the rocks. In order for the results to make sense, they would have to know precisely how much contamination to adjust for over the course of half a billion years. That is effectively impossible, since they cannot know all the conditions the rocks were exposed to over this immense period of time, and even the slightest miscalculation (and I mean a really, really tiny one) will completely throw off the results.

But how then, you might ask, is it that rocks farther down in the crust tend to contain higher levels of argon than the ones closer to the surface? Is this extra argon not the result of millions more years of radioactive decay? My answer is: not necessarily. Argon leaching out from volcanic material below the rocks could have the same result. There are other answers to this problem as well, but I do not claim to have a firm grasp on them. Anyone wishing to begin their own exploration of the subject would be well advised to start with Dr. Plaisted's article, which I referenced earlier. Although I cannot vouch that all his material is 100% accurate (there is a lot of it!), it is a good starting point for many potential lines of inquiry.

* http://www.cs.unc.edu/~plaisted/ce/dating.html

