Mr. Davide Leonessi successfully defended his dissertation for the Master of Science degree in Mathematics and Foundations of Computer Science, entitled “Transfinite game values in infinite games,” on 15 September 2021. Davide earned a distinction for his thesis, an outstanding result.

Abstract. The objects of this study are countably infinite games with perfect information that allow players to choose among arbitrarily many moves in a turn; in particular, we focus on infinite generalisations of the finite board games of Hex and Draughts.

In Chapter 1 we develop the theory of transfinite ordinal game values for open infinite games, following [Evans-Hamkins 2014], and we focus on the properties of the omega one, that is, the supremum of the possible game values, for classes of open games; moreover, we introduce the class of climbing-through-$T$ games as a tool for studying the omega one of a given class of games.

The original contributions of this research are presented in the following two chapters.

In Chapter 2 we prove classical results about finite Hex and present Infinite Hex, a well-defined infinite generalisation of Hex.

We then introduce the class of stone-placing games, which captures the key features of Infinite Hex and further generalises the class of positional games already studied in the literature within the finite setting of Combinatorial Game Theory.

The main result of this research is the characterisation of open stone-placing games in terms of the property of essential locality, which leads to the conclusion that the omega one of any class of open stone-placing games is at most $\omega$. In particular, we obtain that the class of open games of Infinite Hex has the smallest infinite omega one, that is $\omega_1^{\rm Hex}=\omega$.

In Chapter 3 we show a dual result; we define the class of games of Infinite Draughts and explicitly construct open games of arbitrarily high game value with the tools of Chapter 1, concluding that the omega one of the class of open games of Infinite Draughts is as high as possible, that is $\omega_1^{\rm Draughts}=\omega_1$.

This will be a talk for the Oxford Seminar in the Philosophy of Mathematics, 1 November, 4:30-6:30 GMT. The talk will be held on Zoom (contact the seminar organizers for the Zoom link). There is a possibility of it also being held in-person in The Ryle Room, Faculty of Philosophy, Oxford, and I shall update with further information as the date approaches.

Abstract. The standard treatment of sets and classes in Zermelo-Fraenkel set theory instantiates in many respects the Fregean foundational distinction between objects and concepts, for in set theory we commonly take the sets as objects to be considered under the guise of diverse concepts, the definable classes, each serving as a predicate on that domain of individuals. Although it is often asserted that there can be no association of classes with objects in a way that fulfills Frege’s Basic Law V, nevertheless, in the ZF framework I have described, it turns out that Basic Law V does hold, and provably so, along with various other Fregean abstraction principles. These principles are consequences of Zermelo-Fraenkel ZF set theory in the context of all its definable classes. Namely, there is an injective mapping from classes to objects, definable in senses I shall explain, associating every first-order parametrically definable class $F$ with a set object $\varepsilon F$, in such a way that Basic Law V is fulfilled: $$\varepsilon F =\varepsilon G\iff\forall x\ (Fx\leftrightarrow Gx).$$ Russell’s elementary refutation of the general comprehension axiom is therefore improperly described as a refutation of Basic Law V itself; rather, it refutes Basic Law V only when that law is augmented with powerful class comprehension principles going strictly beyond ZF. The main result also leads to a proof of Tarski’s theorem on the nondefinability of truth as a corollary to Russell’s argument.

Philosophy of Mathematics, Exam Paper 122, Oxford University

Wednesdays 12-1 during term, Radcliffe Humanities Lecture Room

Joel David Hamkins, Professor of Logic

This series of self-contained lectures on the philosophy of mathematics is intended for students preparing for Oxford Philosophy exam paper 122. All interested parties from the Oxford University community, however, are welcome to attend, whether or not they intend to sit the exam. The lectures will be organized loosely around mathematical themes, in such a way that brings various philosophical issues naturally to light. Lectures will loosely follow the instructor’s book Lectures on the Philosophy of Mathematics (MIT Press 2021), with supplemental suggested readings each week.

Previously recorded lectures from last year are available on the lecturer’s YouTube channel, below.

Since the earlier lectures are available online, the plan this year is to feel somewhat freer to focus occasionally on narrower topics, and also at times to adopt a discussion format. So kindly bring questions and well-thought-out opinions to the lecture.

The lectures this term will be held in person. The lecturer requests that students be vaccinated, wear masks, and observe social distancing as practicable. If this proves impossible or unsustainable, we shall regrettably revert to online lectures on short notice.

Lecture 1. Numbers

Numbers are perhaps the essential mathematical idea, but what are numbers? There are many kinds of numbers—natural numbers, integers, rational numbers, real numbers, complex numbers, hyperreal numbers, surreal numbers, ordinal numbers, and more—and these number systems provide a fruitful background for classical arguments on incommensurability and transcendentality, while setting the stage for discussions of platonism, logicism, the nature of abstraction, the significance of categoricity, and structuralism.

Lecture 2. Rigour

Let us consider the problem of mathematical rigour in the development of the calculus. Informal continuity concepts and the use of infinitesimals ultimately gave way to the epsilon-delta limit concept, which secured a more rigorous foundation while also enlarging our conceptual vocabulary, enabling us to express more refined notions, such as uniform continuity, equicontinuity, and uniform convergence. Nonstandard analysis resurrected the infinitesimals on a more secure foundation, providing a parallel development of the subject. Meanwhile, increasing abstraction emerged in the function concept, which we shall illustrate with the Devil’s staircase, space-filling curves, and the Conway base 13 function. Finally, does the indispensability of mathematics for science ground mathematical truth? Fictionalism puts this in question.

Lecture 3. Infinity

We shall follow the allegory of Hilbert’s hotel and the paradox of Galileo to the equinumerosity relation and the notion of countability. Cantor’s diagonal arguments, meanwhile, reveal uncountability and a vast hierarchy of different orders of infinity; some arguments give rise to the distinction between constructive and nonconstructive proof. Zeno’s paradox highlights classical ideas on potential versus actual infinity. Furthermore, we shall count into the transfinite ordinals.

Lecture 4. Geometry

Classical Euclidean geometry is the archetype of a mathematical deductive process. Yet the impossibility of certain constructions by straightedge and compass, such as doubling the cube, trisecting the angle, or squaring the circle, hints at geometric realms beyond Euclid. The rise of non-Euclidean geometry, especially in light of scientific theories and observations suggesting that physical reality is not Euclidean, challenges previous accounts of what geometry is about. New formalizations, such as those of David Hilbert and Alfred Tarski, replace the old axiomatizations, augmenting and correcting Euclid with axioms on completeness and betweenness. Ultimately, Tarski’s decision procedure points to a tantalizing possibility of automation in geometrical reasoning.

Lecture 5. Proof

What is proof? What is the relation between proof and truth? Is every mathematical truth true for a reason? After clarifying the distinction between syntax and semantics and discussing various views on the nature of proof, including proof-as-dialogue, we shall consider the nature of formal proof. We shall highlight the importance of soundness, completeness, and verifiability in any formal proof system, outlining the central ideas used in proving the completeness theorem. The compactness property distills the finiteness of proofs into an independent, purely semantic consequence. Computer-verified proof promises increasing significance; its role is well illustrated by the history of the four-color theorem. Nonclassical logics, such as intuitionistic logic, arise naturally from formal systems by weakening the logical rules.

Lecture 6. Computability

What is computability? Kurt Gödel defined a robust class of computable functions, the primitive recursive functions, and yet he gave reasons to despair of a fully satisfactory answer. Nevertheless, Alan Turing’s machine concept of computability, growing out of a careful philosophical analysis of the nature of human computability, proved robust and laid a foundation for the contemporary computer era; the widely accepted Church-Turing thesis asserts that Turing had the right notion. The distinction between computable decidability and computable enumerability, highlighted by the undecidability of the halting problem, shows that not all mathematical problems can be solved by machine, and a vast hierarchy looms in the Turing degrees, an infinitary information theory. Complexity theory refocuses the subject on the realm of feasible computation, with the still-unsolved P versus NP problem standing in the background of nearly every serious issue in theoretical computer science.

Lecture 7. Incompleteness

David Hilbert sought to secure the consistency of higher mathematics by finitary reasoning about the formalism underlying it, but his program was dashed by Gödel’s incompleteness theorems, which show that no consistent formal system can prove even its own consistency, let alone the consistency of a higher system. We shall describe several proofs of the first incompleteness theorem, via the halting problem, self-reference, and definability, showing senses in which we cannot complete mathematics. After this, we shall discuss the second incompleteness theorem, the Rosser variation, and Tarski’s theorem on the nondefinability of truth. Ultimately, one is led to the inherent hierarchy of consistency strength rising above every foundational mathematical theory.

Lecture 8. Set Theory

We shall discuss the emergence of set theory as a foundation of mathematics. Cantor founded the subject with key set-theoretic insights, but Frege’s formal theory was naive, refuted by the Russell paradox. Zermelo’s set theory, in contrast, grew ultimately into the successful contemporary theory, founded upon a cumulative conception of the set-theoretic universe. Set theory was simultaneously a new mathematical subject, with its own motivating questions and tools, but it also was a new foundational theory with a capacity to represent essentially arbitrary abstract mathematical structure. Sophisticated technical developments, including in particular, the forcing method and discoveries in the large cardinal hierarchy, led to a necessary engagement with deep philosophical concerns, such as the criteria by which one adopts new mathematical axioms and set-theoretic pluralism.

Consider this fascinating vision of recursive chess. The etching below was created by Django Pinter, a tutorial student of mine who has just completed his degree in the PPL course here at Oxford; he gave it to me as a parting gift at the conclusion of his studies. Django’s idea was to play chess, but in order for a capture to occur successfully on the board, as here with the black Queen attempting to capture the opposing white knight, the two pieces would themselves sit down for their own game of (recursive) chess. The capture would be successful only in the event that the subgame was won. Notice in the image that not only is there a smaller recursive game of chess, but also a still tinier subrecursive game below that (if you inspect closely), while at the same time larger pieces loom in the background, indicating that the main board itself is already several levels deep in the recursion.

The recursive chess idea may seem clear enough initially, and intriguing. But upon further reflection, we might wonder: how exactly does it work? What precisely is the game of recursive chess? This is my question here.

My goal with this post is to open a discussion about what ultimately should be the precise rules and operations of recursive chess. I’m not completely sure what the best rule set will be, although I do offer several proposals here. I welcome further proposals, commentary, suggestions, and criticism about how to proceed. Once we settle upon a best or most natural version of the game, I shall hope perhaps to prove things about it. Will you help me decide what is the game of recursive chess?

Let me describe several natural proposals:

Naïve recursion. Although this seems initially to be a simple, sound proposal, ultimately I find it problematic. The naïve idea is that when one piece wants to capture another in the game at hand, then the two pieces themselves play a game of chess, starting from the usual chess starting position. I would find it natural that the attacking piece should play as white in this game, going first, and if that player wins the subgame, then the capture in the current game is successful. If the subgame is a loss, then the capture is unsuccessful.

There seem, however, to be a variety of ways to handle the losing subgame outcome, and since these will apply with several of the other proposals, let me record them here:

Failed-capture. If the subgame is lost, then the capture in the current game simply does not occur. Both pieces remain on their original squares, and the turn of play passes to the opponent. Notice that this will have serious effects in certain chess situations involving zugzwang, a position in which a player has no good move: if one of the available moves is a capture, then the player can aim to play badly in the subgame and thereby legally pass the turn of play to the opponent without having made any move. This situation will in effect cause the subgame players to attempt to lose, rather than win, which seems undesirable.

Failed-capture-with-penalty. If the subgame is lost, then the capture does not occur, but furthermore, the attacking piece is itself lost, removed from the board, and the turn of play passes to the opponent. In effect, under this rule, every attempt at capture is putting the life of the capturing piece at risk, which makes a certain amount of sense from a military point of view. I think perhaps this is a good rule.

Failed-capture-with-retry. If the subgame is lost, then the capture does not occur, but both pieces remain on their original squares, and the attacking player is allowed to proceed with another (different) move. Attempting the same attack from the same board position multiple times is subject to the three-fold repetition rule. This interpretation amounts in effect to the game play searching a large part of the game tree, exploring the possible capturing moves, but with the first successful option fixed as official. It invites manipulation by the opponent, who might play badly against a misguided capture attempt, causing it to be fixed as the official move.

Drawn subgame. A further complication arises from the fact that the subgame can itself be drawn, rather than won. Is this sufficient to cause the penalty or the retry? Or does this count as a failed capture?
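To make the space of options concrete, the three failed-capture variants, together with the open question of how to score a drawn subgame, can be sketched as a simple dispatch. This is only an illustration; the names here (`FailedCaptureRule`, `resolve_capture`, the `draw_counts_as_loss` flag) are hypothetical, not part of any settled rule set, and the playing of the subgame itself is abstracted into two booleans.

```python
from enum import Enum, auto

class FailedCaptureRule(Enum):
    """The three proposed ways to resolve a lost capture subgame."""
    FAILED_CAPTURE = auto()               # capture fails; turn passes
    FAILED_CAPTURE_WITH_PENALTY = auto()  # capture fails; attacker removed
    FAILED_CAPTURE_WITH_RETRY = auto()    # capture fails; attacker moves again

def resolve_capture(subgame_won: bool, subgame_drawn: bool,
                    rule: FailedCaptureRule,
                    draw_counts_as_loss: bool = True):
    """Return (capture_succeeds, attacker_removed, turn_passes).

    A sketch only: subgame_won/subgame_drawn stand in for actually
    playing the recursive subgame, and draw_counts_as_loss reflects
    the open question of how a drawn subgame should be scored.
    """
    if subgame_won:
        return (True, False, True)        # capture occurs as usual
    if subgame_drawn and not draw_counts_as_loss:
        return (True, False, True)        # a draw treated as success
    if rule is FailedCaptureRule.FAILED_CAPTURE:
        return (False, False, True)       # nothing happens; turn passes
    if rule is FailedCaptureRule.FAILED_CAPTURE_WITH_PENALTY:
        return (False, True, True)        # the attacking piece is lost
    return (False, False, False)          # retry: attacker chooses again
```

Under the retry rule the third component is False, reflecting that the attacking player retains the move, which is exactly what makes that rule amount to a search of the game tree.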

As I see it, however, the principal problem with the naïve recursion rule is that it seems to be ill-founded. After all, we can have a game with a capture, which leads to a subgame with a capture, which leads to a deeper subgame with a capture, and so on descending forever. How is the outcome determined in this infinitely descending situation? None of the subgames is ever resolved with a definite conclusion until all of them are, and there seems no coherent way to assign resolutions to them. All infinitely many subgames are simply left hanging mid-play, and indeed mid-move. For this reason, the naïve recursion idea seems ultimately incoherent to me as a game rule.

What we would seem to need instead is a well-founded recursion, one which would ultimately bottom out in a base case. With such a recursion, every outcome of the game would be well defined. Such a well-founded recursion would be achieved, for example, if every subgame had strictly fewer pieces. Eventually, the subgames would reduce to king versus king, a drawn game, and then the drawn-subgame rule would be invoked, to whatever effect it causes. But the recursion would definitely terminate. And perhaps most recursions would terminate because the stronger player was ultimately mating in all his attacks, without requiring any invocation of the drawn-subgame rule.

We can easily describe several natural subgame positions with one fewer piece. For example, when one piece attacks another, we may naturally consider the positions that would result if we performed the capture, or if we removed the attacking piece; and we might further consider swapping the roles of the players in these positions. Such cases would constitute a well-founded recursion, because the subgame position would have fewer pieces than the main position. In this way, we arrive at several natural recursion rules for recursive chess.
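The well-foundedness argument can be miniaturized: if every capture subgame begins from a position with strictly fewer pieces, then the piece count serves as a decreasing rank, bounding the depth of the recursion. A minimal sketch, in which positions and play are abstracted away entirely and only the piece count is modelled (the function name `subgame_depth` is of course hypothetical):

```python
def subgame_depth(pieces: int) -> int:
    """Worst-case nesting depth of the recursion, assuming every move
    is a contested capture and each capture subgame begins from a
    position with strictly fewer pieces.

    The piece count is the decreasing rank in the well-foundedness
    argument: from king versus king (two pieces) no further capture
    subgames can arise, so the recursion must bottom out.
    """
    if pieces <= 2:                  # king versus king: the drawn base case
        return 0
    return 1 + subgame_depth(pieces - 1)
```

Even from the full 32-piece starting position, the nesting is bounded by 30 levels, in contrast with the naïve recursion, which admits infinitely descending chains of subgames.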

Proof-of-sufficiency recursion. The motivating idea of this recursion rule is that in order for an attack to be successful, the attacking player must prove that the attack would suffice as a means to win. So, when piece A attacks piece B in the game, a subgame is set up from the position that would result if A were successfully to capture B, and the players proceed from that position, in which the attack has occurred. This is the same as proceeding in the main game under the assumption that the attack was successful. If the attacking player wins this subgame, this shows, in a sense, the sufficiency of the attack as a means to win, and so on the proof-of-sufficiency idea, we allow it as a reason for the attack to have been successful.

One might object that this recursion seems at first to amount to just playing chess as usual, since the subgame is the same as the original game with the attack having succeeded. But there is a subtle difference. Against a misguided attack, the attacked player can play suboptimally in the subgame, intentionally losing that game, and then play differently in the main game. There is, of course, no obligation that the players respond the same way in the higher-level games as in the lower games, and this is all part of their strategic calculation.

Proof-of-necessity recursion. The motivating idea of this recursion rule, in contrast, is that in order for an attack to be successful, the attacking player must prove that it is necessary that the attack take place. When piece A attacks piece B in the main game, a subgame is set up in which the attack has not succeeded; instead, the attacking piece is lost, and the colour sides of the players are swapped. If a black Queen attacks a white knight, for example, then in the subgame position the queen is removed, and the players proceed from that position, but with the opposite colours. By winning this subgame, the attacking player is in effect demonstrating that if the attack were to fail, it would be devastating for them. In other words, they are demonstrating the necessity of the success of the attack.
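The two subgame-position constructions can be stated quite concretely. In the following sketch a position is simply a mapping from squares to pieces, and the function names are merely illustrative. Note that both constructions remove exactly one piece, which is what places them within the well-founded recursion described earlier.

```python
def sufficiency_subgame(board: dict, attacker_sq: str, target_sq: str):
    """Proof-of-sufficiency rule: the subgame begins from the position
    in which the capture has already taken place, with the same player
    colours as in the main game."""
    sub = dict(board)
    sub[target_sq] = sub.pop(attacker_sq)   # attacker replaces the target
    return sub, False                        # colours not swapped

def necessity_subgame(board: dict, attacker_sq: str, target_sq: str):
    """Proof-of-necessity rule: the capture is deemed to have failed,
    the attacking piece is removed, and the players swap colours."""
    sub = dict(board)
    del sub[attacker_sq]                     # the attacking piece is lost
    return sub, True                         # colours swapped

# Example: a black queen on d8 attacks a white knight on d2
# (squares and piece labels are illustrative only).
board = {"d8": "black queen", "d2": "white knight",
         "e1": "white king", "e8": "black king"}
```

In both cases the subgame position has one piece fewer than the main position, so the recursion strictly descends and must terminate.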

For both the proof-of-sufficiency and the proof-of-necessity versions of the recursion, it seems to me that any of the three failed-capture rules would be sensible. And so we have quite a few different interpretations here of what is the game of recursive chess.

What is your proposal? Please let me know in the comments. I am interested to hear any comments or criticism.

This brief unpublished note (11 pages) contains an overview of the Gödel fixed-point lemma, along with several generalizations and applications, written for use in the Week 3 lecture of the Graduate Philosophy of Logic seminar that I co-taught with Volker Halbach at Oxford in Hilary term 2021. The theme of the seminar was self-reference, truth, and consistency strengths, and in this lecture we discussed the nature of Gödel’s fixed-point lemma and generalizations, with various applications in logic.

Gödel’s fixed-point lemma
- An application to the Gödel incompleteness theorem

Finite self-referential schemes
- An application to nonindependent disjunctions of independent sentences

Gödel-Carnap fixed-point lemma
- Deriving the double fixed-point lemma as a consequence
- An application to the provability version of Yablo’s paradox

Kleene recursion theorem
- An application involving computable numbers
- An application involving the universal algorithm
- An application to Quine programs and Ouroboros chains

This is a graduate seminar in the Philosophy of Logic at the University of Oxford, run jointly by myself and Volker Halbach in Hilary Term 2021.

The theme will be self-reference, truth, and the hierarchy of consistency strength.

A detailed schedule, including the list of topics and readings, is available on Volker’s web site.

The seminar will be held Fridays 9-11 am during term, online via Zoom at 812 2300 3837.

The final two sessions of term will be specifically on the hierarchy of consistency strength, based on my current article in progress concerning the possibility of natural instances of incomparability and ill-foundedness in the hierarchy of large cardinal consistency strength.

This series of self-contained lectures on the philosophy of mathematics, offered for Oxford Michaelmas Term 2020, is intended for students preparing for philosophy exam paper 122, although all interested parties are welcome to join. The lectures will be organized loosely around mathematical themes, in such a way that brings various philosophical issues naturally to light.

Lectures will follow my new book Lectures on the Philosophy of Mathematics (MIT Press), with supplemental readings suggested each week for further tutorial work. The book is available for pre-order, to be released 2 February 2021.

Lectures will be held online via Zoom every Wednesday 11 am to 12 noon during term at the following Zoom coordinates:

All lectures will be recorded and made available at a later date.

Lecture 1. Numbers

Numbers are perhaps the essential mathematical idea, but what are numbers? There are many kinds of numbers—natural numbers, integers, rational numbers, real numbers, complex numbers, hyperreal numbers, surreal numbers, ordinal numbers, and more—and these number systems provide a fruitful background for classical arguments on incommensurability and transcendentality, while setting the stage for discussions of platonism, logicism, the nature of abstraction, the significance of categoricity, and structuralism.

Lecture 2. Rigour

Let us consider the problem of mathematical rigour in the development of the calculus. Informal continuity concepts and the use of infinitesimals ultimately gave way to the epsilon-delta limit concept, which secured a more rigorous foundation while also enlarging our conceptual vocabulary, enabling us to express more refined notions, such as uniform continuity, equicontinuity, and uniform convergence. Nonstandard analysis resurrected the infinitesimals on a more secure foundation, providing a parallel development of the subject. Meanwhile, increasing abstraction emerged in the function concept, which we shall illustrate with the Devil’s staircase, space-filling curves, and the Conway base 13 function. Finally, does the indispensability of mathematics for science ground mathematical truth? Fictionalism puts this in question.

Lecture 3. Infinity

We shall follow the allegory of Hilbert’s hotel and the paradox of Galileo to the equinumerosity relation and the notion of countability. Cantor’s diagonal arguments, meanwhile, reveal uncountability and a vast hierarchy of different orders of infinity; some arguments give rise to the distinction between constructive and nonconstructive proof. Zeno’s paradox highlights classical ideas on potential versus actual infinity. Furthermore, we shall count into the transfinite ordinals.

Lecture 4. Geometry

Classical Euclidean geometry is the archetype of a mathematical deductive process. Yet the impossibility of certain constructions by straightedge and compass, such as doubling the cube, trisecting the angle, or squaring the circle, hints at geometric realms beyond Euclid. The rise of non-Euclidean geometry, especially in light of scientific theories and observations suggesting that physical reality is not Euclidean, challenges previous accounts of what geometry is about. New formalizations, such as those of David Hilbert and Alfred Tarski, replace the old axiomatizations, augmenting and correcting Euclid with axioms on completeness and betweenness. Ultimately, Tarski’s decision procedure points to a tantalizing possibility of automation in geometrical reasoning.

Lecture 5. Proof

What is proof? What is the relation between proof and truth? Is every mathematical truth true for a reason? After clarifying the distinction between syntax and semantics and discussing various views on the nature of proof, including proof-as-dialogue, we shall consider the nature of formal proof. We shall highlight the importance of soundness, completeness, and verifiability in any formal proof system, outlining the central ideas used in proving the completeness theorem. The compactness property distills the finiteness of proofs into an independent, purely semantic consequence. Computer-verified proof promises increasing significance; its role is well illustrated by the history of the four-color theorem. Nonclassical logics, such as intuitionistic logic, arise naturally from formal systems by weakening the logical rules.

Lecture 6. Computability

What is computability? Kurt Gödel defined a robust class of computable functions, the primitive recursive functions, and yet he gave reasons to despair of a fully satisfactory answer. Nevertheless, Alan Turing’s machine concept of computability, growing out of a careful philosophical analysis of the nature of human computability, proved robust and laid a foundation for the contemporary computer era; the widely accepted Church-Turing thesis asserts that Turing had the right notion. The distinction between computable decidability and computable enumerability, highlighted by the undecidability of the halting problem, shows that not all mathematical problems can be solved by machine, and a vast hierarchy looms in the Turing degrees, an infinitary information theory. Complexity theory refocuses the subject on the realm of feasible computation, with the still-unsolved P versus NP problem standing in the background of nearly every serious issue in theoretical computer science.

Lecture 7. Incompleteness

David Hilbert sought to secure the consistency of higher mathematics by finitary reasoning about the formalism underlying it, but his program was dashed by Gödel’s incompleteness theorems, which show that no consistent formal system can prove even its own consistency, let alone the consistency of a higher system. We shall describe several proofs of the first incompleteness theorem, via the halting problem, self-reference, and definability, showing senses in which we cannot complete mathematics. After this, we shall discuss the second incompleteness theorem, the Rosser variation, and Tarski’s theorem on the nondefinability of truth. Ultimately, one is led to the inherent hierarchy of consistency strength rising above every foundational mathematical theory.

Lecture 8. Set Theory

We shall discuss the emergence of set theory as a foundation of mathematics. Cantor founded the subject with key set-theoretic insights, but Frege’s formal theory was naive, refuted by the Russell paradox. Zermelo’s set theory, in contrast, grew ultimately into the successful contemporary theory, founded upon a cumulative conception of the set-theoretic universe. Set theory was simultaneously a new mathematical subject, with its own motivating questions and tools, but it also was a new foundational theory with a capacity to represent essentially arbitrary abstract mathematical structure. Sophisticated technical developments, including in particular, the forcing method and discoveries in the large cardinal hierarchy, led to a necessary engagement with deep philosophical concerns, such as the criteria by which one adopts new mathematical axioms and set-theoretic pluralism.

This will be a graduate-level lecture seminar on the Philosophy of Mathematics held during Trinity term 2020 here at the University of Oxford, co-taught by Dr. Wesley Wrigley and myself.

The broad theme for the seminar will be incompleteness, referring both to the incompleteness of our mathematical theories, as exhibited in Gödel’s incompleteness theorems, and also to the incompleteness of our mathematical domains, as exhibited in mathematical potentialism.

All sessions will be held online using the Zoom meeting platform. Please contact Professor Wrigley for access to the seminar (wesley.wrigley@philosophy.ox.ac.uk). The Zoom meetings will not be recorded or posted online.

The basic plan will be that the first four sessions, in weeks 1-4, will be led by Dr. Wrigley and concentrate on his current research on the incompleteness of mathematics and the philosophy of Kurt Gödel, while weeks 5-8 will be led by Professor Hamkins, who will concentrate on topics in potentialism.

Weeks 1 & 2 (28 April, 5 May) Kurt Gödel “Some basic theorems on the foundations of mathematics and their implications (*1951)”, in: Feferman, S. et al. (eds) Kurt Gödel: Collected Works Volume III, pp.304-323. OUP (1995). And Wrigley “Gödel’s Disjunctive Argument”. (Also available on Canvas).

Week 4 (19th May) Bertrand Russell “The Regressive Method of Discovering the Premises of Mathematics (1907)”, in: Moore, G. (ed) The Collected Papers of Bertrand Russell, Volume 5, pp.571-580. Routledge (2014). And Wrigley “Quasi-Scientific Methods of Justification in Set Theory.”

Week 5 (26th May) Øystein Linnebo & Stewart Shapiro, “Actual and potential infinity”, Noûs 53:1 (2019), 160-191, https://doi.org/10.1111/nous.12208. And Øystein Linnebo. “Putnam on Mathematics as Modal Logic,” In: Hellman G., Cook R. (eds) Hilary Putnam on Logic and Mathematics. Outstanding Contributions to Logic, vol 9. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96274-0_14

Week 6 (2nd June) The topic this week is: tools for analyzing the modal logic of a potentialist system. This seminar will be based around the slides for my talk “Potentialism and implicit actualism in the foundations of mathematics,” given for the Jowett Society in Oxford last year. The slides are available at: http://jdh.hamkins.org/potentialism-and-implicit-actualism-in-the-foundations-of-mathematics-jowett-society-oxford-february-2019. Interested readers may also wish to consult the more extensive slides for the three-lecture workshop I gave on potentialism at the Hejnice Winter School in 2018; the slides are available at http://jdh.hamkins.org/set-theoretic-potentialism-ws2018. My intent is to concentrate on the nature and significance of control statements, such as buttons, switches, ratchets and railyards, for determining the modal logic of a potentialist system.

Week 7 (9th June) Joel David Hamkins and Øystein Linnebo. “The modal logic of set-theoretic potentialism and the potentialist maximality principles”. Review of Symbolic Logic (2019). https://doi.org/10.1017/S1755020318000242. arXiv:1708.01644. http://wp.me/p5M0LV-1zC. This week, we shall see how the control statements allow us to analyze precisely the modal logic of various conceptions of set-theoretic potentialism.

Week 8 (16th June) Joel David Hamkins, “Arithmetic potentialism and the universal algorithm,” arXiv:1801.04599, available at http://jdh.hamkins.org/arithmetic-potentialism-and-the-universal-algorithm. Please feel free to skip over the more technical parts of this paper. In the seminar discussion, we shall concentrate on the basic idea of arithmetic potentialism, including a full account of the universal algorithm and its significance for potentialism, as well as the remarks of the final section of the paper.

This will be a fun talk for the Philosophy Plus Science Taster Day, a day of events for prospective students in the joint philosophy degrees, whether Mathematics & Philosophy, Physics & Philosophy or Computer Science & Philosophy. The talk will be Friday 10th January in the Andrew Wiles building.

Abstract. In this talk, we shall pose and solve various fun puzzles in epistemic logic, which is to say, puzzles involving reasoning about knowledge, including one’s own knowledge or the knowledge of other people, including especially knowledge of knowledge or knowledge of the lack of knowledge. We’ll discuss several classic puzzles of common knowledge, such as the two-generals problem, Cheryl’s birthday problem, and the blue-eyed islanders, as well as several new puzzles. Please come and enjoy!
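By way of a small taste of the kind of reasoning involved (this sketch is my own illustration, not material from the talk, and the function names are merely illustrative), the blue-eyed islanders puzzle can be modelled with a possible-worlds simulation: each world assigns eye colors to the islanders, the public announcement that someone has blue eyes eliminates the all-brown world, and each night that passes without departures publicly prunes the model further.

```python
from itertools import product

def knows_own_color(i, w, worlds):
    """Agent i knows their own eye color in world w iff every world
    consistent with what i sees (agreement on everyone else's eyes)
    also agrees on i's own eyes."""
    n = len(w)
    return all(v[i] == w[i] for v in worlds
               if all(v[j] == w[j] for j in range(n) if j != i))

def simulate(actual):
    """Night-by-night simulation; returns (night, leavers) for the first
    night on which some islanders deduce their own eye color and leave.
    `actual` is a tuple of booleans, True meaning blue eyes; it is
    assumed that at least one islander actually has blue eyes."""
    n = len(actual)
    # The Kripke model: all color assignments except the all-brown world,
    # which the public announcement "someone has blue eyes" rules out.
    worlds = [w for w in product([False, True], repeat=n) if any(w)]
    night = 0
    while worlds:
        night += 1
        def leavers(w):
            return frozenset(i for i in range(n)
                             if knows_own_color(i, w, worlds))
        if leavers(actual):
            return night, sorted(leavers(actual))
        # That no one left tonight is public information: eliminate
        # every world in which someone would have left.
        worlds = [w for w in worlds if not leavers(w)]
```

With three blue-eyed islanders, all three leave on the third night, exactly as the classical inductive argument predicts; a brown-eyed bystander never leaves.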

This will be a fun start-of-term Philosophy Undergraduate Welcome Lecture for philosophy students at Oxford in the Mathematics & Philosophy, Physics & Philosophy, Computer Science & Philosophy, and Philosophy & Linguistics degrees. New students are especially encouraged, but everyone is welcome! The talk is open to all. The talk will be Wednesday 16th October, 5-6 pm in the Mathematical Institute, with wine and nibbles afterwards.

Abstract. In this talk, we shall pose and solve various fun puzzles in epistemic logic, which is to say, puzzles involving reasoning about knowledge, including one’s own knowledge or the knowledge of other people, including especially knowledge of knowledge or knowledge of the lack of knowledge. We’ll discuss several classic puzzles of common knowledge, such as the two-generals problem, Cheryl’s birthday problem, and the blue-eyed islanders, as well as several new puzzles. Please come and enjoy!

This will be a series of self-contained lectures on the philosophy of mathematics, given at Oxford University in Michaelmas term 2019. We will be meeting in the Radcliffe Humanities Lecture Room at the Faculty of Philosophy every Friday 12-1 during term.

All interested parties are welcome. The lectures are intended principally for students preparing for philosophy exam paper 122 at the University of Oxford.

The lectures will be organized loosely around mathematical themes, in a way that I hope brings various philosophical issues naturally to light. The lectures will be based on my new book, forthcoming with MIT Press.

There are tentative plans to make the lectures available by video. I shall post further details concerning this later.

Lecture 1. Numbers. Numbers are perhaps the essential mathematical idea, but what are numbers? We have many kinds of numbers—natural numbers, integers, rational numbers, real numbers, complex numbers, hyperreal numbers, surreal numbers, ordinal numbers, and more—and these number systems provide a fruitful background for classical arguments on incommensurability and transcendentality, while setting the stage for discussions of platonism, logicism, the nature of abstraction, the significance of categoricity, and structuralism.

Lecture 2. Rigour. Let us consider the problem of mathematical rigour in the development of the calculus. Informal continuity concepts and the use of infinitesimals ultimately gave way to formal epsilon-delta limit concepts, which provided a capacity for refined notions, such as uniform continuity, equicontinuity and uniform convergence. Nonstandard analysis resurrected the infinitesimal concept on a more secure foundation, providing a parallel development of the subject, which can be understood from various sweeping perspectives. Meanwhile, increasing abstraction emerged in the function concept, which we shall illustrate with the Devil’s staircase, space-filling curves and the Conway base 13 function. The view known as fictionalism calls into question whether the indispensability of mathematics for science grounds mathematical truth.
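As a reminder of the formal limit concept at issue here, continuity of a function $f$ at a point $c$ is expressed by the familiar epsilon-delta quantifier pattern, and uniform continuity arises from a simple but consequential quantifier swap:

```latex
% Continuity at a point c:
\forall \epsilon > 0\ \exists \delta > 0\ \forall x\
  \bigl(\,|x - c| < \delta \implies |f(x) - f(c)| < \epsilon\,\bigr)
% Uniform continuity: one \delta must now work for every point c at once,
% since the quantifier \forall c has moved inside past \exists \delta:
\forall \epsilon > 0\ \exists \delta > 0\ \forall c\ \forall x\
  \bigl(\,|x - c| < \delta \implies |f(x) - f(c)| < \epsilon\,\bigr)
```

The difference between the two is precisely the kind of refined distinction that the informal infinitesimal treatments had difficulty expressing.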

Lecture 3. Infinity. We shall follow the allegory of Hilbert’s hotel and the paradox of Galileo to the equinumerosity relation and the notion of countability. Cantor’s diagonal arguments, meanwhile, reveal uncountability and a vast hierarchy of different orders of infinity; some arguments give rise to the distinction between constructive and non-constructive proof. Zeno’s paradox highlights classical ideas on potential versus actual infinity. Time permitting, we shall count into the transfinite ordinals.
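To give a flavor of two of these arguments: Hilbert’s hotel accommodates a new guest by a simple shift, while Cantor’s diagonal method shows that no enumeration of the real numbers can be complete.

```latex
% Hilbert's hotel: every guest in room n moves to room n+1, freeing room 0.
n \mapsto n + 1
% Cantor's diagonal argument: given any list r_0, r_1, r_2, \ldots of reals,
% define a real d whose n-th decimal digit differs from that of r_n:
d_n =
  \begin{cases}
    5 & \text{if the $n$-th digit of } r_n \neq 5,\\
    4 & \text{otherwise.}
  \end{cases}
% Then d differs from every r_n, so the list omits a real number.
```

The diagonal construction is explicit, which is part of why it serves so well in the discussion of constructive versus non-constructive proof.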

Lecture 4. Geometry. Classical Euclidean geometry, accompanied by its ideal of straightedge and compass construction and the Euclidean concept of proof, is an ageless paragon of deductive mathematical reasoning. Yet, the impossibility of certain constructions, such as doubling the cube, trisecting the angle or squaring the circle, hints at geometric realms beyond Euclid, and leads one to the concept of constructible and non-constructible numbers. The rise of non-Euclidean geometry, especially in light of scientific observations and theories suggesting that physical reality may not be Euclidean, challenges previous accounts of what geometry is about and changes our understanding of the nature of geometric and indeed mathematical ontology. New formalizations, such as those of Hilbert and Tarski, replace the old axiomatizations, augmenting and correcting Euclid with axioms on completeness and betweenness. Ultimately, Tarski’s decision procedure hints at the tantalizing possibility of automation in our geometrical reasoning.
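For example, the impossibility of doubling the cube can be seen through the lens of constructible numbers: a length is straightedge-and-compass constructible only if it lies in a tower of quadratic field extensions of $\mathbb{Q}$, so its degree over $\mathbb{Q}$ must be a power of two.

```latex
% Doubling the unit cube requires constructing a segment of length
\sqrt[3]{2}, \qquad\text{yet}\qquad
\bigl[\mathbb{Q}(\sqrt[3]{2}) : \mathbb{Q}\bigr] = 3,
% and 3 is not a power of 2, so no such construction exists.
```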

Lecture 5. Proof. What is proof? What is the relation between proof and truth? Is every mathematical truth, true for a reason? After clarifying the distinction between syntax and semantics, we shall discuss new views on the dialogical nature of proof. With formal proof systems, we shall highlight the importance of soundness, completeness and verifiability in any such system, outlining the central ideas used in proving the completeness theorem. The compactness theorem distills the finiteness of proofs into an independent purely semantic consequence. Computer-verified proof promises increasing significance; its role is well illustrated by the history of the four-color theorem. Nonclassical logics, such as intuitionistic logic, arise naturally from formal systems by weakenings of the logical rules.

Lecture 6. Computability. What is computability? Gödel defined the primitive recursive functions, a robust class of computable functions, yet he gave reasons to despair of a fully satisfactory answer. Nevertheless, Turing’s machine concept, growing out of a careful philosophical analysis of computability, laid a foundation for the contemporary computer era; the widely accepted Church-Turing thesis asserts that Turing has the right notion. The distinction between computable decidability and computable enumerability, highlighted by the undecidability of the halting problem, shows that not all mathematical problems can be solved by machine, and a vast hierarchy looms in the Turing degrees, an infinitary information theory. Complexity theory refocuses the subject on the realm of feasible computation, with the still-unsolved P vs. NP problem standing in the background of nearly every serious issue in theoretical computer science.

Lecture 7. Incompleteness. The Hilbert program, seeking to secure the consistency of higher mathematics by finitary reasoning about the formal system underlying it, was dashed by Gödel’s incompleteness theorems, which show that no consistent formal system can prove even its own consistency, let alone the consistency of a higher system. We shall describe several proofs of the first incompleteness theorem, via the halting problem, via self-reference, and via definability. After this, we’ll discuss the second incompleteness theorem, the Rosser variation, and Tarski on the non-definability of truth. Ultimately, one is led to the inherent hierarchy of consistency strength underlying all mathematical theories.

Lecture 8. Set theory. We shall discuss the emergence of set theory as a foundation of mathematics. Cantor founded the subject with key set-theoretic insights, but Frege’s formal theory was naive, refuted by the Russell paradox. Zermelo’s set theory, in contrast, grew ultimately into the successful contemporary theory, founded upon the cumulative conception. Set theory was at once a new mathematical subject, with its own motivating questions and tools, and also a new foundational theory, with a capacity to represent essentially arbitrary abstract mathematical structure. Sophisticated technical developments, including especially the forcing method and discoveries in the large cardinal hierarchy, led to a necessary engagement with deep philosophical concerns, such as the criteria by which one adopts new mathematical axioms and set-theoretic pluralism.

This will be a graduate-level lecture seminar on the Philosophy of Mathematics, run jointly by Professor Timothy Williamson and myself, held during Trinity term 2019 at Oxford University. We shall meet every Tuesday 2-4 pm during term in the Ryle Room at the Radcliffe Humanities building.

We shall discuss a selection of topics in the philosophy of mathematics, based on the readings set for each week, as set out below. Discussion will be led each week either by Professor Williamson or myself.

In the classes led by Williamson, we shall discuss issues concerning the ontology of mathematics and what is involved in its application. In the classes led by me, we shall focus on the philosophy of set theory, covering set theory as a foundation of mathematics; determinateness in set theory; the status of the continuum hypothesis; and set-theoretic pluralism.

Week 1 (30 April) Discussion led by Williamson. Reading: Robert Brandom, ‘The significance of complex numbers for Frege’s philosophy of mathematics’, Proceedings of the Aristotelian Society (1996): 293-315 https://www.jstor.org/stable/pdf/4545241.pdf

Week 2 (7 May) Discussion led by Hamkins. Reading: Penelope Maddy, Defending the Axioms: On the Philosophical Foundations of Set Theory, OUP (2011), 150 pp.

The Oxford Graduate Philosophy Conference will be held at the Faculty of Philosophy November 10-11, 2018, with graduate students from all over the world speaking on their papers, with responses and commentary by Oxford faculty.

I shall be the faculty respondent to the delightful paper, “Paradoxical Desires,” by Ethan Jerzak of the University of California at Berkeley, offered under the following abstract.

Ethan Jerzak (UC Berkeley): Paradoxical Desires
I present a paradoxical combination of desires. I show why it’s paradoxical, and consider ways of responding to it. The paradox saddles us with an unappealing disjunction: either we reject the possibility of the case by placing surprising restrictions on what we can desire, or we revise some bit of classical logic. I argue that denying the possibility of the case is unmotivated on any reasonable way of thinking about propositional attitudes. So the best response is a non-classical one, according to which certain desires are neither determinately satisfied nor determinately not satisfied. Thus, theorizing about paradoxical propositional attitudes helps constrain the space of possibilities for adequate solutions to semantic paradoxes more generally.

The conference starts with coffee at 9:00 am. This session runs 11 am to 1:30 pm on Saturday 10 November in the Lecture Room.

This will be a talk for the Philosophy of Mathematics Seminar in Oxford, October 29, 2018, 4:30-6:30 in the Ryle Room of the Philosophy Centre.

Abstract. In light of the comparative success of membership-based set theory in the foundations of mathematics, since the time of Cantor, Zermelo and Hilbert, it is natural to wonder whether one might find a similar success for set-theoretic mereology, based upon the set-theoretic inclusion relation $\subseteq$ rather than the element-of relation $\in$. How well does set-theoretic mereology serve as a foundation of mathematics? Can we faithfully interpret the rest of mathematics in terms of the subset relation to the same extent that set theorists have argued (with whatever degree of success) that we may find faithful representations in terms of the membership relation? Basically, can we get by with merely $\subseteq$ in place of $\in$? Ultimately, I shall identify grounds supporting generally negative answers to these questions, concluding that set-theoretic mereology by itself cannot serve adequately as a foundational theory.
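The two relations are of course intimately connected in one direction, which is part of what makes the question subtle: membership defines inclusion outright.

```latex
x \subseteq y \iff \forall z\,(z \in x \implies z \in y)
% The converse fails: inclusion alone cannot recover membership.
% Indeed, a central observation of the work with Kikuchi is that the
% structure (V, \subseteq) is an atomic unbounded relatively complemented
% distributive lattice, whose theory is decidable, whereas the theory
% of (V, \in) is of course undecidable.
```

So while $\in$ carries the full complexity of set theory, $\subseteq$ by itself is too tame to interpret it.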

This is joint work with Makoto Kikuchi, and the talk is based on our joint articles:

J. D. Hamkins and M. Kikuchi, Set-theoretic mereology, Logic and Logical Philosophy, special issue “Mereology and beyond, part II”, pp. 1-24, 2016.

This will be a talk for the Logic Seminar in Oxford at the Mathematics Institute in the Andrew Wiles Building on October 9, 2018, at 4:00 pm, with tea at 3:30.

Abstract. The universal algorithm is a Turing machine program $e$ that can in principle enumerate any finite sequence of numbers, if run in the right model of PA, and furthermore, can always enumerate any desired extension of that sequence in a suitable end-extension of that model. The universal finite set is a set-theoretic analogue, a locally verifiable definition that can in principle define any finite set, in the right model of set theory, and can always define any desired finite extension of that set in a suitable top-extension of that model. Recent work has uncovered a $\Sigma_1$-definable version that works with respect to end-extensions. I shall give an account of all three results, which have a parallel form, and describe applications to the model theory of arithmetic and set theory.