The Online Logic Seminar meets weekly on Thursdays at 1pm US Central Time (currently UTC-5) on Zoom. You can connect to the live seminar by clicking here, or by joining the Zoom meeting with Meeting ID 122 323 340.
In cases where I have slides, or a link to them, I have a link from the title of the talk.
The earliest examples of linear orders not Medvedev above some of their endpointed intervals (due to the speaker) were well-orders: roughly speaking, any two distinct "sufficiently $\omega_1$-like" well-orders are Medvedev-incomparable, and we can always view the smaller as an endpointed initial segment of the larger. Annoyingly, these examples are quite large, and the nontrivial relationships between even relatively concrete well-orders (e.g. the $\omega_n^{CK}$s for finite $n$) are still open. Later, Julia Knight and Alexandra Soskova gave a much simpler example, low in the arithmetical hierarchy, of a linear order having an endpointed interval which it does not uniformly compute.
I will present two additional examples, intended to flesh out this picture:
If time remains, I will present some separate work motivated by trying to find simpler Medvedev-incomparable well-orders.
The pursuit of improvements to the Pila--Wilkie Theorem remains an active area of research, in several different directions. These are motivated by potential diophantine applications, but are centred on the inherent nature of the definable sets involved. Time permitting, we will discuss various aspects of this pursuit, related to geometric smooth parameterization, effectivity, and the importance of certain key systems of differential equations.
I will then describe how we can leverage SAT solving to accomplish other automated-reasoning tasks. Sampling satisfying truth assignments of a given Boolean formula uniformly at random, and counting the number of such assignments, are both fundamental computational problems in computer science, with applications in software testing, software synthesis, machine learning, personalized learning, and more. While the theory of these problems has been thoroughly investigated since the 1980s, approximation algorithms developed by theoreticians do not scale up to industrial-sized instances. Algorithms used in industry offer better scalability, but sacrifice correctness guarantees to achieve it. We describe a novel approach, based on universal hashing and Satisfiability Modulo Theories, that scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
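As a rough illustration of the hashing idea (not the speaker's algorithm, which queries a SAT/SMT solver rather than enumerating), the following sketch intersects the solution set of a small CNF formula with random XOR constraints and scales the survivor count back up:

```python
import itertools, random

def satisfying_assignments(clauses, n):
    """Enumerate satisfying assignments of a CNF formula (clauses are lists
    of signed 1-based variable indices) by brute force. This is a stand-in
    oracle; the approach in the talk uses a SAT/SMT solver instead."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            yield bits

def approx_count(clauses, n, m, trials=31, seed=0):
    """Estimate the model count: intersect the solution set with m random
    XOR (parity) constraints, count the survivors, scale back up by 2^m,
    and take the median over independent trials."""
    rng = random.Random(seed)
    models = list(satisfying_assignments(clauses, n))
    estimates = []
    for _ in range(trials):
        xors = [([i for i in range(n) if rng.random() < 0.5],
                 rng.randrange(2)) for _ in range(m)]
        survivors = sum(1 for bits in models
                        if all(sum(bits[i] for i in idx) % 2 == parity
                               for idx, parity in xors))
        estimates.append(survivors * 2 ** m)
    return sorted(estimates)[trials // 2]
```

For example, (x1 ∨ x2) ∧ (¬x1 ∨ x3) has exactly 4 models, and with m = 0 (no hashing) the estimator returns the exact count.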
In this talk we will study the class of pseudo real closed fields (PRC-fields) from a model theoretical point of view and we will explain some of the main results obtained. We know that the complete theory of a bounded PRC field (i.e., with finitely many algebraic extensions of degree m, for each m > 1) is NTP_2 and we have a good description of forking.
Also, in joint work with Alf Onshuus and Pierre Simon, we describe the definable groups in the case that they have f-generic types.
At the end of the talk we will explain some results obtained with Silvain Rideau, in which we generalize the notion of PRC fields to a more general class of fields. In particular, this class includes fields that admit orders and valuations at the same time.
Their algebraic semantics are based on residuated lattices. The class of these ordered algebraic structures is large and hard to study, but it contains some well-known proper subclasses, such as Boolean algebras, Heyting algebras, and MV-algebras. In this talk we will see different constructions of new residuated lattices based on better-known algebras.
In previous work, I proposed a model-theoretic framework of "semantic constraints," where there is no strict distinction between logical and nonlogical vocabulary. The form of sentences in a formal language is determined rather by a set of constraints on models. In the talk, I will show how this framework can also be used in the process of formalization, where the semantic constraints are conceived of as commitments made with respect to the language. I will extend the framework to include "formalization constraints" on functions taking arguments from a source language to a target language, and I will consider various meta-constraints on both the process of formalization and its end result.
In spite of the great diversity of those features, there are some eerie similarities between them. These were observed and made more precise in the case of graph homomorphisms by Brightwell and Winkler, who showed that dismantlability of the target graph, connectedness of the set of homomorphisms, and good mixing properties of the corresponding spin system are all equivalent. In this talk we go a step further and demonstrate similar connections for arbitrary CSPs. This requires a much deeper understanding of dismantling and the structure of the solution space in the case of relational structures, and also new refined concepts of mixing. In addition, we develop properties related to the study of valid extensions of a given partially defined homomorphism, an approach that turns out to be novel even in the graph case. We also add to the mix the combinatorial property of finite duality and its logic counterpart, FO-definability, studied by Larose, Loten, and Tardif. This is joint work with Andrei Bulatov, Víctor Dalmau, and Benoît Larose.
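As a toy illustration of one of the properties mentioned above (connectedness of the set of homomorphisms), the sketch below builds the reconfiguration graph on Hom(G, H) for small graphs, where two homomorphisms are adjacent when they differ at a single vertex; the graph names and encoding are my own, not from the talk:

```python
from itertools import product
from collections import deque

def is_hom(f, g_edges, h_edges):
    """f assigns an H-vertex to each G-vertex (G-vertices are 0..n-1);
    check that every edge of G lands on an edge of H."""
    return all((f[u], f[v]) in h_edges for (u, v) in g_edges)

def hom_space_connected(n_g, g_edges, h_vertices, h_edges):
    """Return True iff the single-vertex reconfiguration graph on
    Hom(G, H) is connected (vacuously True when there are no homs)."""
    h_edges = {e for e in h_edges} | {(v, u) for (u, v) in h_edges}
    homs = [f for f in product(h_vertices, repeat=n_g)
            if is_hom(f, g_edges, h_edges)]
    if not homs:
        return True
    hom_set, seen, queue = set(homs), {homs[0]}, deque([homs[0]])
    while queue:
        f = queue.popleft()
        for i in range(n_g):
            for w in h_vertices:          # try changing f at one vertex
                g = f[:i] + (w,) + f[i + 1:]
                if g in hom_set and g not in seen:
                    seen.add(g)
                    queue.append(g)
    return len(seen) == len(hom_set)
```

For instance, the homomorphisms from a single edge into a triangle form a connected space, while the six homomorphisms from a triangle to itself are all "frozen" (no single vertex can move), giving a disconnected space.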
I will explain how to extend these results to the broader class of henselian valued fields of equicharacteristic zero with algebraically closed residue field and poly-regular value group. This includes many interesting mathematical structures, such as the field of Laurent series over the complex numbers, but more importantly it extends the results to valued fields with finitely many definable convex subgroups.
Then, letting K be a model of T_{δ}^{*} and M a |K|^{+}-saturated elementary extension of K, we first associate with an L_{δ}(K)-definable group G in M a pro-L-definable set G^{**}_{∞} in which the differential prolongations G^{∇_∞} of elements of G are dense, using the L-open core property of T_{δ}^{*}. Following the same ideas as in the group configuration theorem in o-minimal structures as developed by K. Peterzil, we construct a type L-definable topological group H_{∞} ⊂ G^{**}_{∞}, acting on a K-infinitesimal neighbourhood of a generic element of G^{**}_{∞} in a faithful, continuous and transitive way. Further, H_{∞} ∩ G^{∇_∞} is dense in H_{∞}, and the action of H_{∞} ∩ G^{∇_∞} coincides with the one induced by the initial L_{δ}-group action.
The first part of this work is joint with Pablo Cubidès Kovacsics.
In this talk, I will:
Proving the optimality of these bounds, however, was non-trivial. Since the standard presentation of [0,1] with the Euclidean metric is computably compact, we were forced to work on the natural numbers with the discrete metric (in some sense, the "simplest" non-compact metric space). Along the way, we also proved some surprising combinatorial results. McNicholl and I then continued our study of computable infinitary continuous logic and found that for any nonzero computable ordinal α and any right Π^{c}_{α} (or Σ^{c}_{α}) real number, there is a Π^{c}_{α} (or Σ^{c}_{α}) sentence which is universally interpreted as that value.
Here, we focus on the complexity of another meta-problem: learning to solve problems such as Boolean satisfiability. There is a range of ways to define "solving problems": at one extreme, the uniform setting, is the existence of a fast algorithm (potentially randomized); at the other is a potentially non-computable family of small Boolean circuits, one for each problem size. The complexity of learning can be recast as the complexity of finding a procedure to generate Boolean circuits solving the problem of a given size, if it (and such a family of circuits) exists.
First, inspired by the KPT witnessing theorem, a special case of Herbrand's theorem in bounded arithmetic, we develop an intermediate notion of uniformity that we call LEARN-uniformity. While non-uniform lower bounds are notoriously difficult, we can prove several unconditional lower bounds for this weaker notion of uniformity. Then, returning to the world of bounded arithmetic and using that notion of uniformity as a tool, we show unprovability of several complexity upper bounds for both deterministic and randomized complexity classes, in particular giving simpler proofs that the theory of polynomial-time reasoning PV does not prove that all of P is computable by circuits of a specific polynomial size, and that the theory V^{1}, a second-order counterpart to Buss's classic theory S^{1}_{2}, does not prove the same statement with NP in place of P.
Finally, we leverage these ideas to show that bounded arithmetic "has trouble differentiating" between uniform and non-uniform frameworks: more specifically, we show that theories for polynomial-time and randomized polynomial-time reasoning cannot prove both a uniform lower bound and a non-uniform upper bound for NP. In particular, while it is possible that NP != P yet all of NP is computable by families of polynomial-size circuits, this cannot be proven feasibly.
Removing the inverse operation from any lattice-ordered group (l-group), such as the ordered additive group of integers, produces a distributive lattice-ordered monoid (l-monoid), but it is not the case that every distributive l-monoid admits a group structure. In particular, every l-group embeds into an l-group of automorphisms of some chain and is either trivial or infinite, whereas every distributive l-monoid embeds into a possibly finite l-monoid of endomorphisms of some chain.
In this talk, we will see that inverse-free abelian l-groups generate only a proper (infinitely based) subvariety of the variety of commutative distributive l-monoids, but inverse-free l-groups generate the whole variety of distributive l-monoids. We will also see that the validity of an l-group equation can be reduced to the validity of a (constructible) finite set of l-monoid equations, yielding --- since the variety of distributive l-monoids has the finite model property --- an alternative proof of the decidability of the equational theory of l-groups.
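To make the motivating example concrete, here is a minimal sketch (my own illustration, not from the talk) that spot-checks the distributive l-monoid equations in the inverse-free reduct of the ordered additive group of integers, (ℤ, max, min, +, 0):

```python
def check_distributive_l_monoid(join, meet, op, unit, samples):
    """Spot-check the distributive l-monoid equations on the given samples:
    lattice distributivity, the monoid operation distributing over join
    and meet on both sides, and the unit laws."""
    for x in samples:
        for y in samples:
            for z in samples:
                # distributive lattice law
                assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
                # multiplication distributes over join and meet
                assert op(x, join(y, z)) == join(op(x, y), op(x, z))
                assert op(join(y, z), x) == join(op(y, x), op(z, x))
                assert op(x, meet(y, z)) == meet(op(x, y), op(x, z))
                assert op(meet(y, z), x) == meet(op(y, x), op(z, x))
    for x in samples:
        assert op(x, unit) == x == op(unit, x)
    return True

# The inverse-free reduct of the ordered additive group of integers:
assert check_distributive_l_monoid(max, min, lambda a, b: a + b, 0,
                                   range(-5, 6))
```

Of course a finite spot-check proves nothing in general; it is only meant to make the equational signature of distributive l-monoids tangible.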
In both cases, considerable care is needed from a theoretical standpoint. Negation should be restricted to avoid paradoxical scenarios; we learned logic programs with stratified negation (in the style of Datalog). Anti-unification (i.e., generalization) of arbitrary higher-order terms is not unique; we learned second-order logic programs that are generalizations of first-order programs.
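For contrast with the higher-order case mentioned above, first-order anti-unification does have a unique answer. The following sketch (an illustration of Plotkin's least general generalization, not code from the talk) computes it for terms encoded as strings and tuples:

```python
def lgg(t1, t2, subst=None, counter=None):
    """Plotkin's least general generalization of two first-order terms.
    Terms are strings (constants) or tuples (functor, arg1, arg2, ...).
    Each distinct mismatched pair of subterms is replaced by the same
    fresh variable wherever it recurs, which makes the result unique."""
    if subst is None:
        subst, counter = {}, [0]
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # same functor and arity: generalize argument-wise
        return (t1[0],) + tuple(lgg(a, b, subst, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in subst:           # same mismatch -> same variable
        subst[(t1, t2)] = f"X{counter[0]}"
        counter[0] += 1
    return subst[(t1, t2)]
```

For example, generalizing f(a, g(a)) and f(b, g(b)) yields f(X0, g(X0)): the repeated mismatch a/b is captured by a single variable.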
The Correct Computational Law project tackles this inequity by formalizing and capturing computational law using formal methods. Whether it is the French Tax Code, French family benefits or Washington State's Legal Financial Obligations, we formalize, re-implement and find bugs in the law. Doing so, we make it possible for ordinary citizens to prevail over the complexity of the law, rather than falling prey to it.
We will first describe our research agenda and ongoing efforts spanning France and the US. Then, we will focus on a case study: the complexity of federal civil procedure in the US, and how the Lean proof assistant can always find, with mathematical certainty, a path through the pleading phase that fulfills all major procedural requirements.
Proving whether such a definition exists is still out of reach. However, we will show that only "very few" algebraic extensions of the rationals have the property that their rings of integers are existentially or universally definable. Equipping the set of all algebraic extensions of the rationals with a natural topology, we show that only a meager subset has this property. An important tool is a new normal form theorem for existential definitions in such extensions. As a corollary, we construct countably many distinct computable algebraic extensions whose rings of integers are neither existentially nor universally definable. Joint work with Russell Miller, Caleb Springer, and Linda Westrick.
One property of a statistical procedure is "admissibility": roughly, a procedure is admissible if there is no other procedure that does better under all circumstances ("better" in a sense specified by the decision-theoretic framework, i.e., with respect to a fixed loss function). This is certainly a necessary condition for optimality.
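A textbook-style example (my own illustration, not from the talk) of why dominance comparisons are delicate: for X ~ N(θ, 1) under squared-error loss, the estimator δ(x) = x has constant risk 1, while the shrunk estimator δ(x) = x/2 has risk θ²/4 + 1/4, so it wins near θ = 0 but loses for large |θ|; neither dominates the other.

```python
def risk_linear(a, theta):
    """Squared-error risk of the estimator delta(x) = a*x when X ~ N(theta, 1):
    R(theta, delta) = a^2 * Var(X) + (a*theta - theta)^2
                    = a^2 + theta^2 * (a - 1)^2  (computed in closed form)."""
    return a ** 2 + (theta ** 2) * (a - 1) ** 2

thetas = [t / 10 for t in range(-30, 31)]
# delta(x) = x/2 beats delta(x) = x near theta = 0 ...
better_near_zero = all(risk_linear(0.5, t) < risk_linear(1.0, t)
                       for t in thetas if abs(t) < 1)
# ... but loses for large |theta|, so neither dominates the other.
worse_far_out = risk_linear(0.5, 5.0) > risk_linear(1.0, 5.0)
```

This is exactly the kind of pointwise risk comparison that a characterization of admissibility must somehow quantify over all competing procedures at once.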
Admissibility is notoriously hard to characterize. In particular, establishing a characterization in Bayesian terms has been an ongoing pursuit for decades in statistical decision theory. Recently we have found a characterization of admissibility in Bayesian terms, by using prior probability distributions which can take on infinitesimal values. We are also able to draw connections to classical methods establishing admissibility, such as Blyth's method and Stein's characterization of admissibility (which does partially characterize admissibility, but only under additional, technical hypotheses). Finally, our method has applications in concrete problems such as the problem of establishing the admissibility of the Graybill-Deal estimator.
The talk will not presuppose any knowledge of statistics or nonstandard analysis. Everything is joint work with D. Roy and H. Duanmu.
[1] Wolfgang Merkle, Joseph S. Miller, André Nies, Jan Reimann, and Frank Stephan, Kolmogorov--Loveland randomness and stochasticity, Annals of Pure and Applied Logic, vol. 138 (2006), no. 1--3, pp. 183--210.
As an application, we will present a Borel version of Katok's representation theorem for multidimensional Borel flows. One-dimensional flows are closely connected to actions of ℤ via the so-called "flow under a function" construction. This appealing geometric picture does not generalize to higher dimensions. Within the ergodic theoretical framework, Katok introduced the concept of a special flow as a way to connect multidimensional ℝ^{d} and ℤ^{d} actions. We will show that similar connections continue to hold in Borel dynamics.
Another illustration of the partial-action techniques that we intend to touch on is the following result: a Borel equivalence relation generated by a free ℝ-flow can also be generated by a free action of any non-discrete, non-compact Polish group. This is in contrast with the situation for discrete groups, where amenability distinguishes groups that can and cannot generate free finite measure-preserving hyperfinite actions.
The talk will be based on a forthcoming paper coauthored with Andrew Bacon, available here.
1. If John's stealing, he ought to be stealing.
While this might seem like a problem specifically for the restrictor analysis of conditionals, the issue is far more general. For any account must predict that modals in the consequent sometimes receive obligatorily unrestricted interpretation, as in (1), but sometimes appear restricted, as in (2):
2. If John's speeding, he ought to pay the fine.
And the problem runs deeper, for there are non-conditional variants of the problematic data. Thus, the solution cannot lie in adopting a particular analysis of conditionals, nor a specific account of the interaction between conditionals and modals. Indeed, with minimal assumptions, the standard account of modality will render a massive number of claims about what one ought to, must, or may, do trivially true. Worse, the problem extends to a wide range of non-deontic modalities, including metaphysical modality. But the disaster has a remedy. I argue that the source of the problem lies in the standard account's failure to capture an inferential evidence constraint encoded in the meaning of a wide range of modal constructions. I offer a semantic account that captures this constraint, and show it provides a general and independently motivated solution to the problem, avoiding unwanted validities.
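The contrast between (1) and (2) can be made concrete in a toy ordering semantics (my own illustration; the encoding of worlds and the ideality ranking are assumptions, not the account defended in the talk): "ought p" is true iff p holds at every deontically best world of the (possibly restricted) modal base.

```python
def ought(p, modal_base, best):
    """'Ought p' is true iff p holds at every deontically best world of
    the modal base; `best` selects the most ideal worlds of a set."""
    return all(p(w) for w in best(modal_base))

# Toy worlds: triples (stealing, speeding, paying_fine) of 0/1 flags.
worlds = [(s, v, f) for s in (0, 1) for v in (0, 1) for f in (0, 1)]

def best(ws):
    """Fewer violations is better: stealing is bad, and speeding
    without paying the fine is bad."""
    rank = lambda w: (w[0], w[1] and not w[2])
    m = min(rank(w) for w in ws)
    return [w for w in ws if rank(w) == m]

# (2): restricting the modal base to speeding-worlds makes
# "he ought to pay the fine" come out true, as desired.
speeding = [w for w in worlds if w[1]]
assert ought(lambda w: w[2], speeding, best)

# (1): but the same restriction wrongly validates "if John's stealing,
# he ought to be stealing" -- there, the unrestricted reading is needed.
stealing = [w for w in worlds if w[0]]
assert not ought(lambda w: w[0], worlds, best)   # unrestricted: false
assert ought(lambda w: w[0], stealing, best)     # restricted: (wrongly) true
```

The sketch only reproduces the puzzle: uniform restriction validates (1), uniform non-restriction loses (2), which is the tension the talk's evidence constraint is meant to resolve.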
The well-known Big Five phenomenon of RM is the observation that a large number of theorems from ordinary mathematics are either provable in the base theory or equivalent to one of only four systems; these five systems together are called the `Big Five' of RM. The aim of this paper is to greatly extend the Big Five phenomenon, working in Kohlenbach's higher-order RM ([1]).
In particular, we have established numerous equivalences involving the second-order Big Five systems on one hand, and well-known third-order theorems from analysis about (possibly) discontinuous functions on the other hand. We both study relatively tame notions, like cadlag or Baire 1, and potentially wild ones, like quasi-continuity. We also show that slight generalisations and variations (involving e.g. the notions Baire 2 and cliquishness) of the aforementioned third-order theorems fall far outside of the Big Five. In particular, these slight generalisations and variations imply the principle NIN from [2], i.e. there is no injection from [0, 1] to N. We discuss a possible explanation for this phenomenon.
REFERENCES.
[1] Ulrich Kohlenbach, Higher order reverse mathematics, Reverse mathematics 2001, Lect. Notes Log., vol. 21, ASL, 2005, pp. 281–295.
[2] Dag Normann and Sam Sanders, On the uncountability of R, Journal of Symbolic Logic (2022), doi:10.1017/jsl.2022.27, 43 pp.
[3] Dag Normann and Sam Sanders, The Biggest Five of Reverse Mathematics, Submitted, arXiv.
Based on joint work with Corentin Barloy & Charles Paperman (U. Lille, France) and Thomas Zeume (Bochum U., Germany).
The concept of measurable structures was defined by Macpherson and Steinhorn in [2] as a method to study infinite structures with strong conditions of finiteness and definability for the sizes of definable sets. The most notable examples are the ultraproducts of asymptotic classes of finite structures (e.g., the class of finite fields or the class of finite cyclic groups). Measurable structures are supersimple of finite SU-rank, but recent generalizations of this concept are more flexible and allow the presence of structures whose SU-rank is possibly infinite.
The everywhere infinite forest is the theory of an acyclic graph G in which every vertex has infinite degree. It is a well-known example of an omega-stable theory of infinite rank. In this talk we will take this structure as a motivating example to introduce all the concepts mentioned above, showing that it is pseudofinite and giving a precise description of the sizes of its definable sets. In particular, these results provide a description of forking and U-rank for the everywhere infinite forest in terms of certain pseudofinite dimensions, and also show that it is a generalized measurable structure that can be presented as the ultraproduct of a multidimensional exact class of finite graphs. These results are joint work with Melissa Robles, and can be found in [1].
References:
[1] Darío García and Melissa Robles, Pseudofiniteness and measurability of the everywhere infinite forest. Available at arXiv.
[2] Dugald Macpherson and Charles Steinhorn, One-dimensional asymptotic classes of finite structures, Transactions of the American Mathematical Society, vol. 360 (2008).
Here, I will show that we can extend the system one step further, and induce a three-level logical pluralism. The first and second levels remain as suggested by Ferrari and Orlandelli (2019), but we can allow for multiple notions of uniqueness in the definition of Belnap-harmony, or multiple notions of harmony writ large. Either of these options generates a pluralism at the level of our admissibility conditions. This generates a pluralism at three levels: validity, connective meanings, and admissibility conditions. But it still preserves the spirit of the Ferrari and Orlandelli (2019) solution: harmony remains as the admissibility constraint across the board, and so the original worries of Kouri (2016) are still put to rest and the original Beall and Restall (2006) criteria for admissible logics are still met.
Enayat asked whether the full strength of theories like ZF or full second-order arithmetic is necessary for the tightness results and conjectured that this property can be used to give a characterization of these theories. Phrased in the contrapositive: must it be that any strict subtheory of these theories admits distinct, bi-interpretable extensions? Alfredo Roque Freire and I investigated this question for subsystems of second-order arithmetic, providing some evidence for Enayat’s conjecture. We showed that if you restrict the comprehension axiom to formulae of a bounded complexity then you can find two distinct yet bi-interpretable extensions of the theory. The main idea of the construction, not uncommon for work in logic, goes back to an old observation by Mostowski. Namely, while truth is not arithmetically definable, it is definable over the arithmetical sets.
In this talk I'll explore what there is to be said about ``abstract model theory in the computable universe." One logic we'll pay particular attention to is gotten by mixing classical computable infinitary logic with the notion of realizability coming from intuitionistic arithmetic. This is work in progress, so this talk will have lots of questions as well as results. No prior knowledge of intuitionistic logic will be assumed.
When F is countable, Gal(F) usually has the cardinality of the continuum. However, it can be presented as the set of all paths through an F-computable finite-branching tree, built by a procedure uniform in F. We will first consider the basic properties of this tree, which depend in some part on F. Then we will address questions about the subgroup consisting of the computable paths through this tree, along with other subgroups similarly defined by Turing ideals. One naturally asks to what extent these are elementary subgroups of Gal(F) (or at least elementarily equivalent to Gal(F)). This question is connected to the computability of Skolem functions for Gal(F), and also to the arithmetic complexity of definable subsets of Gal(F).
Some of the results that will appear represent joint work with Debanjana Kundu.
This talk will put these results in context and describe the new, more structural approach we took to this problem. In particular, I will describe these results through the lens of a surprising connection with Ramsey-like properties.
In this talk I will present joint work with Angus Macintyre giving a solution to Ax's problem. Our solution uses some previous work of ours on the model theory of the ring of adeles. These are locally compact rings associated to number fields, and they have been of fundamental importance in number theory ever since they were introduced by Chevalley, Weil, and Artin. Interestingly, Ax's problem can be reduced to the decidability of the ring of adeles of the rational numbers. So while the theory of pseudo-finite fields governs the theory of all finite fields, as shown by Ax, the theory of all the rings Z/mZ is governed by the theory of the rational adele ring.
(This work is published in Forum of Mathematics, Sigma, 24 July 2023.)
In a partial answer to a question of Gromov, Kramer et al. showed that there is a finitely presented group such that, depending on the truth of CH, this group has either a unique or 2^{c} many asymptotic cones up to homeomorphism. Asymptotic cones of metric spaces are realized as particular metric ultraproducts. The Kramer et al. paper does not formalize the obvious model theoretic connection, but does comment on the combinatorial-geometric structure of the asymptotic cones, which was known to Thornton (and independently to Kramer and Tent) and is a certain kind of building.
In this talk, we'll give a brief overview of work done by Luther to formalize this model theoretic connection. Special attention will be given to Ehrenfeucht-Fraïssé games and how the building structure can give us additional tools to develop a possible winning strategy for Player II in games between (what are potentially) non-homeomorphic asymptotic cones of certain symmetric spaces.
Some of this work is joint with M. Parnes and L. Scow.
We then turn our attention to uniform Frostman Blaschke products, shown by Frostman to be those with nontangential limits of modulus one everywhere (as opposed to generic Blaschke products which, as inner functions, are only guaranteed to have radial limits of modulus one almost everywhere). Matheson showed that the spectra of such functions are precisely the closed, nonempty, and nowhere dense subsets of the unit circle. We discuss an effectivization of one direction of his result, showing that every computably closed, nonempty, and nowhere dense subset of the circle is the spectrum of a computable uniform Frostman Blaschke product.
Joint work with Timothy McNicholl
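The basic object in the abstract above is easy to compute with numerically. The following sketch (my own illustration, not part of the effectivization in the talk) evaluates a finite Blaschke product and checks the defining modulus behaviour on and inside the unit circle:

```python
import cmath

def blaschke(zeros, z):
    """Evaluate the finite Blaschke product with zeros a_k in the open
    unit disk at z:  B(z) = prod (|a|/a) * (a - z) / (1 - conj(a) * z),
    with the usual convention that a factor with a = 0 is just z."""
    val = 1.0 + 0.0j
    for a in zeros:
        if a == 0:
            val *= z
        else:
            val *= (abs(a) / a) * (a - z) / (1 - a.conjugate() * z)
    return val

zeros = [0.5 + 0.0j, 0.3 + 0.4j, 0j]
# On the unit circle a Blaschke product has modulus exactly one.
on_circle = [abs(blaschke(zeros, cmath.exp(1j * t / 7.0)))
             for t in range(44)]
assert all(abs(m - 1.0) < 1e-9 for m in on_circle)
# Strictly inside the disk its modulus is strictly smaller than one.
assert abs(blaschke(zeros, 0.2 + 0.1j)) < 1.0
```

Infinite (e.g. uniform Frostman) Blaschke products replace the finite product by a convergent infinite one, which is where the boundary-limit subtleties in the abstract arise.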
In this talk the Grätz algorithm will be described, as well as a new algorithm for deciding validity in IPL obtained by considering another translation derived from Gödel's. It allows us to define the composed algorithm for IPL mentioned above, but in a direct way; hence the soundness and completeness of the method is proved independently of the results of Gödel and Grätz. In this way, an original 3-valued RNmatrix for IPL is defined, with a very natural interpretation, as well as an easy algorithm which removes, from the truth tables generated by the 3-valued Nmatrix, those rows which are not sound. This decision procedure, as well as Grätz's, was implemented in Coq. This is joint work with Renato Leme and Bruno Lopes.
School of Mathematical and Statistical Sciences, Mail Code 4408, Southern Illinois University, 1245 Lincoln Drive, Carbondale, Illinois 62901. Office: (618) 453-6582. Fax: (618) 453-5300. Home: (618) 985-3429.
Wesley Calvert's Web Page