BeRx: Bertrand Russell on Axioms
This project addresses an open question in the interpretation of Bertrand Russell's view of science, relying crucially on the computational analysis of a large corpus of Russell's English writings via AutoSearch on the CLARIAH infrastructure.
The British philosopher Bertrand Russell (1872-1970) followed the axiomatic ideal of science, as testified by his landmark work in formal logic co-authored with Alfred N. Whitehead, Principia Mathematica (Whitehead & Russell 1910-1913). According to the axiomatic ideal of science, a science worthy of the name must be organized like Euclid's geometry, starting from axioms and proceeding to theorems by means of logical proofs. One known issue with axiomatic conceptions like Russell's concerns the 'foundational' status of the axioms: we know that theorems are true because they are proven from axioms, but how do we know that the axioms themselves are true?
Traditional views take axioms to be 'self-evident' truths. Russell, interpreters argue, instead sees axioms as truths that possess certain special system-organizing powers: axioms are true because they let us prove certain true, desirable consequences (cf. Shapiro 2009, Mayo-Wilson 2011, Patton 2017). Yet in Russell's oeuvre we find clear mention of axioms as 'self-evident truths' (Russell 1912: XIII). So, does Russell see axioms as self-evident truths or not? A different line of work (de Jong & Betti 2010) suggests that whether Russell deems axioms self-evident truths depends on which science he is discussing. If this suggestion is confirmed, Russell's position on the status of the axioms turns out to be considerably more traditional than so far suspected.
To what extent can we use corpus analysis of Russell's writings to check whether, for Russell, axioms count as self-evident truths depending on the science he considers?
Although our research question mainly concerns concepts, we address it by means of a string-based lexical strategy. This is justified by the fact that humans take the lexicon as a proxy for concepts. We have previously articulated a complex string-based strategy for a similar concept-based question on a similar paragraph-segmented corpus by another author (Betti et al. 2020), which we will reuse and refine in this research. Our strategy was originally motivated by the need to filter for relevance the sheer number of results yielded by simple single-keyword queries in large corpora.
Our strategy is a mixed (qualitative, computational and data-driven) application of the so-called 'model approach' to concepts (Betti & van den Berg 2014), and relies on representing concepts as clusters of lexical entries (n-grams) actually occurring in the corpus (expert lists) to enable retrieval of relevant fragments of text.
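As a rough illustration of what such an expert-list strategy could look like in code (not the project's actual pipeline, and independent of AutoSearch's own query interface), the following Python sketch represents a concept as a hypothetical list of n-grams and retrieves the paragraphs in which any of them occur; the n-grams and paragraphs are invented placeholders.

```python
# Minimal sketch: match an "expert list" of n-grams against a paragraph-
# segmented corpus and keep the paragraphs that contain at least one n-gram,
# so that a human expert can then judge their relevance.
import re

# Hypothetical expert list for a concept such as SELF-EVIDENCE (illustrative only).
expert_list = [
    "self-evident truth",
    "self-evidence",
    "intuitively evident",
]

# In practice the paragraphs would be loaded from the corpus; two toy
# paragraphs keep the sketch self-contained.
paragraphs = [
    "The axioms of geometry were long held to be self-evident truths.",
    "Logical proofs lead from premises to theorems.",
]

def matches(paragraph: str, ngrams: list[str]) -> list[str]:
    """Return the n-grams from the expert list that occur in the paragraph."""
    found = []
    for ngram in ngrams:
        # Whole-phrase, case-insensitive match on word boundaries.
        if re.search(r"\b" + re.escape(ngram) + r"\b", paragraph, re.IGNORECASE):
            found.append(ngram)
    return found

# Retain paragraphs with at least one hit, together with the matching n-grams.
relevant = [(p, hits) for p in paragraphs if (hits := matches(p, expert_list))]
for paragraph, hits in relevant:
    print(hits, "->", paragraph)
```

In the actual research the expert lists are curated qualitatively by domain experts, and the retrieved fragments are read and assessed by hand; the code above only shows the retrieval step in its simplest form.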
Researchers
Professor of Philosophy of Language, University of Amsterdam