## Computer Science in Epistemology

2008-01-08

I had my first epistemology class today, and while we didn't talk about anything in detail, I did have a thought.

The professor spent some time on skepticism, to emphasize why epistemology is deeper than it might seem. The conclusion of skepticism is that we know close to nothing about anything at all, because all our knowledge depends on other pieces of knowledge, and each of these can in turn be questioned as to its validity. In other words, the quest for validity regresses forever.

I thought about that for a bit, and decided that infinite regression actually does not apply to all knowledge. There are certain domains which have basic, undeniable truths, and everything else can be reasoned deductively from these first principles. This allows the knowledge of that field to be valid and consistent - provided that the first principles themselves are accepted.

Specifically, I'm talking about the field of mathematics.

Mathematics, as a study of numbers, can be thought of as not entirely real, in the sense that numbers are not physical entities but properties of objects, like color. Unlike color, however, numbers do not correspond to a physical property, such as the frequency of light reflecting off an object. According to Wikipedia, this view is similar to Plato's theory of forms, in contrast with other understandings of what numbers are. Other fields of mathematics study similar abstract mental constructs.

Let's focus on number theory for a moment, since I know it best. Processes like addition, multiplication, and exponentiation, and concepts such as prime numbers, exist as long as the concept of numbers exists. These processes and concepts do not describe the physical world, but only abstract numbers. To prove their validity, therefore, we only need to prove that numbers exist.

That sounds like an absurd goal, since numbers are abstract entities, and clearly do not exist the way the screen this text appears on exists. There are, however, restrictions on what numbers are, and in fact there have been definitions of numbers. The Peano axioms try to do just that. Of particular note is that the axioms start by defining 0 as a number, and then a successor function S(n) which takes one number as an argument and outputs another number. Under this definition of mathematics, skepticism only works to reduce the field to a pair of axioms already taken to be true, and so fails to put all of mathematics into doubt.
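To make the construction concrete, here is a minimal sketch of the Peano idea in Python. The representation (nested tuples) and the function names are my own illustrative choices, not part of the axioms themselves; the point is that every natural number, and even addition, can be built from nothing but 0 and the successor function S.

```python
# A minimal sketch of the Peano construction: zero is a primitive,
# and every other natural number is S applied some number of times.
# The nested-tuple encoding is an arbitrary illustrative choice.

ZERO = ()

def S(n):
    """Successor: wrap n one level deeper."""
    return (n,)

def to_int(n):
    """Count how many times S was applied (for display only)."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

def add(a, b):
    """Addition defined purely from the axioms:
    a + 0 = a, and a + S(b) = S(a + b)."""
    if b == ZERO:
        return a
    return S(add(a, b[0]))

two = S(S(ZERO))
three = S(S(S(ZERO)))
print(to_int(add(two, three)))  # 5
```

Nothing here appeals to the physical world: the numbers exist only as structures built from the two assumed starting points.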

It is certainly possible to "cheat" in the same way with all knowledge, asserting that certain things be taken for granted, but it is extremely difficult to say exactly what needs to be assumed, and what can form the basis for all other knowledge. In my opinion, it is also interesting to note that, if the assumptions "God exists" and "God is all-powerful" are taken to be true, then religion is also exempt from the perils of skepticism. I think the difference between the two lies in how practical they are; that is, how they can influence reality, which unfortunately requires a study of how we know what is real (metaphysics). For the curious: I would classify religion (or at the very least, God) under intuitionism, one of the other understandings of numbers.

All this, by the way, could be said in far fewer words.

I've written a lot already, and I still haven't mentioned computer science. I was thinking about the infallibility of mathematics today, and wondering whether computer science can be reduced to similar axioms as well. My first thought was that computer science could be confined to similarly artificial objects and processes, as a study of Turing machines. While some aspects of computer science rely heavily on mathematics, however, a large part of the field also rests on reality, which cannot simply be defined into first principles. I will give this idea more thought.
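To illustrate what "a study of Turing machines" might look like as a formal starting point, here is a toy one-tape Turing machine simulator. The particular machine (its states, symbols, and rules) is a made-up example that appends one mark to a unary number; the definition of the machine itself is small enough to play the role of an axiom.

```python
# A toy one-tape Turing machine: the whole notion of "computation"
# is pinned down by a tape, a head, a state, and a rule table.
# The example machine below is invented for illustration.

def run(tape, rules, state="start", head=0):
    """Step the machine until it enters the 'halt' state."""
    tape = dict(enumerate(tape))
    while state != "halt":
        symbol = tape.get(head, "_")          # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rules: scan right over 1s, write one more 1 at the first blank, halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run("111", rules))  # 1111
```

Everything the machine "knows" follows deductively from the rule table, which is exactly the kind of self-contained first principle the mathematical fields enjoy.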