Popper's Falsifiability as a Criterion of Demarcation

The problem of demarcation has long preoccupied philosophers of science who wished to differentiate science from pseudo-science. Many solutions have been attempted, but it is still, in my opinion, Popper's falsifiability which addresses the demarcation problem most effectively. This paper will therefore argue for a revised use of falsifiability as a criterion of demarcation. To that end, it will first offer a clear explanation of Popper's falsifiability criterion, then examine the criticisms falsifiability has received, specifically the Duhem-Quine problem and Kuhn's problem of incommensurability. The paper will conclude with a discussion of ad hoc modifications and ultimately demonstrate that falsifiability can convincingly demarcate science from pseudo-science.

Early on in his book Conjectures and Refutations: The Growth of Scientific Knowledge, Popper notes that the Logical Positivists differentiated science from pseudo-science by its empirical method; in other words, they believed that science relied on induction from experience while non-scientific disciplines did not. This, according to Popper, was untrue, since fields such as astrology, a pseudo-science, also used induction from observation to justify their claims, relying on horoscopes, biographies, and the like. Unsatisfied, Popper notes that although some pseudo-scientific claims might be just as truthful as scientific ones, the problem of demarcation needed to be solved so that philosophers, scientists and the public alike could distinguish scientific theories from those which merely pretended to be scientific.

Verifiability was seen as a solution to the problem of demarcation by philosophers such as Wittgenstein, but not by Popper, who argued that pseudo-scientists relied heavily on verifiability in order to convince their peers of the scientific status of their theories. This point is illustrated in Popper's anecdote in which Alfred Adler supports his theory of inferiority feelings by his "thousand-fold experience". This personal experience convinced Popper that the very ability of pseudo-scientific theories, such as Marxism and Freudianism, to incessantly confirm their predictions, in other words with overwhelming verifiability, was in fact the strongest argument against them. Verifiability, therefore, could not be an adequate criterion of demarcation.

Before further exploring Popper's explanation of falsifiability as a criterion of demarcation, it is important to draw a distinction. While Popper uses the terms falsifiability and testability interchangeably, this paper will not. Falsifiability, in this paper, will mean that a claim can be falsified at least in principle, whether or not in practice, while "testable" will be restricted to claims that can be falsified in practice. This distinction is important because it entails that, if falsifiability is to be used as a criterion of demarcation, theories which can only be falsified in principle, such as Newton's first law, can in fact reach scientific status. Indeed, although there is no place in the universe in which no forces are exerted on a body, Newton's first law remains falsifiable (though not testable) and can therefore still be viewed as scientific. Testability alone would be too restrictive as a criterion of demarcation.

Popper explains that the value of falsifiability lies in its risk. If a theory is falsified, it is subsequently refuted by the scientific community. Pseudo-sciences, it is argued, attempt to avoid falsifiability either by providing unfalsifiable predictions or by destroying their falsifiability through ad hoc modifications, a procedure he calls a "conventionalist twist". The first case, providing unfalsifiable predictions, is exemplified in Popper's view of astrology. Astrology makes its predictions and prophecies in so vague a manner that it is impossible to falsify them. For example, predicting that today Libras will encounter an emotional block in one of their long-term goals is not falsifiable: practically any event can be interpreted as an emotional block in a long-term goal. By escaping falsifiability, astrology has in fact prevented itself from reaching scientific status.

Popper's second remark on pseudo-sciences, that some escape falsifiability through ad hoc modifications, has been much more controversial, inspiring much criticism from other philosophers of science. Before addressing the issue of ad hoc modification, however, this paper will address the criticisms of falsifiability known as the Duhem-Quine problem and Kuhn's problem of incommensurability, in order to motivate a much-needed revision of Popper's falsifiability.

The Duhem-Quine problem is a strong criticism of Popper's falsifiability. It was first proposed in Pierre Duhem's The Aim and Structure of Physical Theory. The problem revolves around the idea of holism, which holds that any given system, such as a proposed scientific theory, relies heavily on its components' ability to work together as a group. Duhem proposes that the theories of physics cannot be tested in isolation, as testing them itself requires the use of "auxiliary hypotheses", a stance known today as confirmation holism. This argument can effectively be extrapolated to all the sciences, entailing that the testing of scientific theories relies on materials and methods which themselves rely on other theories. For example, when testing a theory that predicts the position of certain stars, one uses a telescope, a tool built on the assumption that our theories of electromagnetic radiation are both correct and accurate. The Duhem-Quine problem thus proposes that the testing of isolated theories is impossible, a proposition which can be seen as an attack on the use of falsifiability as a criterion of demarcation between scientific and pseudo-scientific theories.

The act of falsifying can be understood as comparing a theory's predictions to the results of experimentation. If the predictions differ from the experimental results, the theory is falsified. This is problematic for subscribers to confirmation holism, who hold that falsifying a theory can only establish that there is an error somewhere in the theory or in our background assumptions, not where, or even what, the error is. Therefore, if the testing of any theory relies on many different background theories, all theories could escape falsification by simply transferring the error to their background theories. Referring back to the telescope example, if a theory inaccurately predicted the position of Pluto, it could escape falsification simply by claiming that the error lies not in its prediction but within the theory of electromagnetic radiation. This is problematic for Popper's use of falsifiability as a criterion of demarcation, as the falsification of an isolated scientific theory would be impossible. It would mean that all tested theories, scientific and pseudo-scientific alike, inherently possess a way of escaping falsification, making falsifiability unworkable as a criterion of demarcation.

To answer the Duhem-Quine problem, Popper's use of falsifiability as a criterion of demarcation must be revised. It must be conceded that testing a scientific theory in isolation is unfeasible, as our methods of testing themselves rely on background assumptions. Yet this does not make falsifiability obsolete as a criterion of demarcation, only more demanding. Contrary to what Popper suggested, it is not sufficient for a theory to be falsifiable for it to be scientific: any isolated theory, scientific or pseudo-scientific, can attempt to escape falsification by pinning the source of error on the background assumptions of testing. Nor is it adequate to propose that all background assumptions upon which the testing of a theory rests must themselves be falsifiable in order for that theory to be scientific, as this would be too restrictive. Every theory is built upon an effectively infinite number of assumptions, a problem analogous to underdetermination, and inevitably all theories would come out pseudo-scientific. For example, the testing of Newton's laws of motion rests on the unfalsifiable assumption that human observation of motion is accurate. It is for this reason that I believe scientific theories must not be viewed as isolated propositions, but rather as parts of a "scientific system" which must provide at least one falsifiable method of testing. This is a criterion which the pseudo-science of astrology, for example, fails to meet, as astrology provides no falsifiable method of testing its predictions, while Newton's laws provide falsifiable equations (e.g., F = ma) as a method of testing theirs. It is thus concluded that only "scientific systems" are falsifiable.
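The contrast between a falsifiable testing method and an unfalsifiable one can be made concrete with a minimal sketch (the masses, measurements and error margin below are hypothetical, chosen purely for illustration): a "scientific system" supplies at least one test whose outcome can, in principle, contradict the prediction.

```python
# Illustrative sketch of a falsifiable test method for F = ma.
# A prediction is compared against a measured value; if the discrepancy
# exceeds the experiment's error margin, the prediction is falsified.
def is_falsified(mass, acceleration, measured_force, tolerance):
    predicted_force = mass * acceleration  # Newton's second law, F = ma
    return abs(predicted_force - measured_force) > tolerance

# A measurement agreeing with the prediction does not falsify it...
print(is_falsified(2.0, 9.8, 19.6, 0.5))  # False: prediction survives
# ...but a sufficiently discrepant measurement would.
print(is_falsified(2.0, 9.8, 25.0, 0.5))  # True: prediction is falsified
```

An astrological prediction such as "an emotional block in a long-term goal" admits no analogous function: there is no observation one could feed in that would force the test to return a refutation, which is precisely the asymmetry the criterion exploits.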

Another criticism of Popper's falsifiability has been the argument that falsification does not produce an accurate picture of science: that falsificationist methodologies incorrectly depict science as a sort of "pyramid of knowledge", where scientific knowledge is accumulated over time (brick by brick) to provide an ever-progressing image of how the universe works (the pyramid itself). This view of science, heavily endorsed by Karl Popper, is the subject of criticism in Thomas Kuhn's book The Structure of Scientific Revolutions, where the "problem of incommensurability" is introduced.

Thomas Kuhn argues that science, as a historical discipline, is in reality not an accumulation of knowledge, but rather a succession of periods of "normal science" and "scientific revolutions". In order to fully appreciate Kuhn's argument, one must first understand what Kuhn meant by "paradigm". For Kuhn, a paradigm "stands for the entire constellation of beliefs, values, techniques and so on shared by the members of a given community" (Kuhn 175) – in this case, the scientific community. Kuhn defines normal science as a period in which scientists' methodologies and goals are unified within a paradigm; Aristotelian physics, for example, would be a period of normal science in which scientists agreed on science's goal and methodology. Having established this, Kuhn proceeds to label science-as-accumulation a myth. He argues that different periods of normal science are incommensurable: they cannot understand each other's methodologies, goals, taxonomies, and so on, and as such science cannot be seen as a progressive discipline, since its history is simply a collection of different methods, goals and values which have changed non-rationally over time. "[Scientists] neither test nor seek to confirm the guiding theories of their paradigm" (Bjorhusda) but simply adhere to the rules of science within their paradigm. If this view is accepted, it must be concluded that falsification cannot demarcate science from other disciplines, such as the pseudo-sciences, as science is seen not as a discipline requiring falsifiability, but rather as one which merely adheres to ever-changing regulations, goals and methodologies.

This incommensurability across paradigms poses a serious problem for Popper's use of falsifiability as a criterion of demarcation, though it might not be apparent at first. If it is accepted that the goals, regulations and methods of science are ever-changing, falsifiability cannot be viewed as a fixed requirement of science, much less as a criterion of demarcation. After all, how could falsifiability provide an accurate picture of science if scientific theories have not always and unchangingly aspired to be falsifiable? Once again, a revision of Popper's use of falsificationism as a criterion of demarcation is needed. Although I recognize that the history of science is, to a certain degree, a collection of incommensurable paradigms, I do not believe that the history of science is a correct representation of science as a discipline. I would argue that science is in reality a normative concept, more of a goal than a historical accumulation of theories.

Philosophers of science such as Karl Popper, Thomas Kuhn and even Imre Lakatos have all, in my view, mistakenly treated the history of science and science itself as identical concepts, although the history of science is most accurately described by Lakatos. Lakatos argued, much like Kuhn, that scientists did not produce single, isolated theories over time, but rather worked within research programmes (a concept very similar to Kuhn's paradigms). In an attempt to reconcile Popper's falsificationist approach with Kuhn's incommensurability, Lakatos argued that the history of science was actually the process of falsifying research programmes. In this view, the problem of incommensurability is rendered insignificant, as research programmes (which are substantially equivalent to paradigms) are not required to be commensurable, each being falsified along the way. This provides a vision of the history of science as an accumulation of falsifiable knowledge. Nevertheless, Lakatos observed ad hoc modifications to be a part of the history of science, and inadvertently attributed them to science itself.

Although Lakatos' history-of-science approach is eloquent, it errs in assuming that because ad hoc modifications are present in the history of science, they must be a part of science itself. Ad hoc modifications are undoubtedly a part of the history of science, but they are not part of science as a discipline, as they do not conform to science's normative goals. To illustrate this point, Einstein's formulation of the cosmological constant may be used as an example. In order to justify his theory of general relativity, Einstein required "a static universe — one that [would] stand still and [...] not collapse under the force of gravity in a big crunch" (Texas A&M University). To support this claim, Einstein proposed an ad hoc modification, his cosmological constant, a move he later recalled as his "greatest blunder". It is here that the distinction between the history of science and science as a discipline can be seen. In truth, over the course of history, scientists like Albert Einstein have practiced science in many different ways. They have used ad hoc modifications to support their theories, a "mistake" committed by scientists and pseudo-scientists alike. But science as a discipline is separate from its history: it is a normative goal which employs "scientific systems", that is, falsifiable theories and testing methods, to gain valuable inductive knowledge about the universe around us, something the pseudo-sciences have not done.

To conclude, Popper's falsifiability, although convincing, requires considerable revision in order to be used as a criterion of demarcation. Science should be understood as a normative discipline in which falsifiability is required and planned modifications take precedence over ad hoc ones, unlike pseudo-science, which contents itself with confirming its predictions. It should also be understood that this paper does not provide a complete description of science, as many questions remain. Perhaps the most glaring, not discussed here for reasons of length, is the problem of how to falsify statements such as "all metals conduct electricity", a problem posed by Carl Hempel. Finally, although falsifiability is a requirement of science, it is only one criterion in a whole set that distinguishes the discipline of science from pseudo-science in a normative attempt to create knowledge through falsifiable "scientific systems".