The Online Logic Seminar meets weekly on Thursdays at 1pm US Central Time (currently UTC-5) on Zoom. You can connect to the live seminar by clicking here, or by joining the Zoom meeting with Meeting ID 122 323 340.
Where I have slides, or a link to them, the title of the talk links to them.
The earliest examples of linear orders not Medvedev above some of their endpointed intervals (due to the speaker) were well-orders: roughly speaking, any two distinct "sufficiently $\omega_1$-like" well-orders are Medvedev-incomparable, and we can always view the smaller as an endpointed initial segment of the larger. Annoyingly, these examples are quite large, and the nontrivial relationships between even relatively concrete well-orders (e.g. the $\omega_n^{CK}$s for finite $n$) are still open. Later, Julia Knight and Alexandra Soskova gave a much simpler example, low in the arithmetical hierarchy, of a linear order having an endpointed interval which it does not uniformly compute.
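(For context: a structure $\mathcal{A}$ is Medvedev above a structure $\mathcal{B}$, equivalently $\mathcal{A}$ uniformly computes $\mathcal{B}$, if there is a single Turing functional $\Phi$ that turns every copy of $\mathcal{A}$ into a copy of $\mathcal{B}$; "Medvedev-incomparable" means neither structure is Medvedev above the other.)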
I will present two additional examples, intended to flesh out this picture:
If time remains, I will present some separate work motivated by trying to find simpler Medvedev-incomparable well-orders.
The pursuit of improvements to the Pila--Wilkie Theorem remains an active area of research, in several different directions. These are motivated by potential Diophantine applications, but are centred on the inherent nature of the definable sets involved. Time permitting, we will discuss various aspects of this pursuit, related to geometric smooth parameterization, effectivity, and the importance of certain key systems of differential equations.
I will then describe how we can leverage SAT solving to accomplish other automated-reasoning tasks. Sampling satisfying truth assignments of a given Boolean formula uniformly at random, and counting the number of such assignments, are fundamental computational problems in computer science with applications in software testing, software synthesis, machine learning, personalized learning, and more. While the theory of these problems has been thoroughly investigated since the 1980s, the approximation algorithms developed by theoreticians do not scale to industrial-sized instances, and the algorithms used in industry scale better only by giving up correctness guarantees. We describe a novel approach, based on universal hashing and Satisfiability Modulo Theories, that scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
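As a rough, self-contained illustration of the hashing idea (a toy sketch only: brute-force enumeration stands in for a real SAT/SMT solver, and the formula and parameters below are made up for illustration, not taken from the speakers' tool): adding $m$ random XOR constraints cuts the solution space by roughly a factor of $2^m$, so counting the surviving assignments and scaling back up estimates the model count.

```python
import itertools
import random
import statistics

# Toy CNF formula: a list of clauses; each clause is a list of signed variable
# indices (positive literal = variable true, negative = variable false).
# This particular formula has exactly 5 satisfying assignments.
CNF = [[1, 2], [-1, 3], [2, 3, 4], [-2, -4]]
N_VARS = 4

def satisfies(assignment, cnf):
    """Check a total assignment (dict: variable -> bool) against every clause."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf)

def models(cnf, n_vars, extra=lambda a: True):
    """Brute-force enumeration, standing in for calls to a SAT/SMT solver."""
    for bits in itertools.product([False, True], repeat=n_vars):
        a = {i + 1: b for i, b in enumerate(bits)}
        if satisfies(a, cnf) and extra(a):
            yield a

def random_xor(n_vars):
    """A random parity constraint: XOR of a random subset of variables = random bit."""
    subset = [v for v in range(1, n_vars + 1) if random.random() < 0.5]
    rhs = random.random() < 0.5
    return subset, rhs

def xor_holds(assignment, xor):
    subset, rhs = xor
    return (sum(assignment[v] for v in subset) % 2 == 1) == rhs

def estimate_count(cnf, n_vars, m, trials=50):
    """Add m random XORs (cutting the space by ~2^m), count survivors, scale by 2^m."""
    estimates = []
    for _ in range(trials):
        xors = [random_xor(n_vars) for _ in range(m)]
        cell_size = sum(1 for _ in models(
            cnf, n_vars, lambda a: all(xor_holds(a, x) for x in xors)))
        estimates.append(cell_size * 2 ** m)
    return statistics.median(estimates)

if __name__ == "__main__":
    print("exact model count:   ", sum(1 for _ in models(CNF, N_VARS)))
    print("hash-based estimate: ", estimate_count(CNF, N_VARS, m=1))
```

Roughly speaking, in hashing-based counters the enumeration step is replaced by (bounded) solution enumeration with a SAT/SMT solver on the formula conjoined with the XOR constraints, and the number of constraints is chosen adaptively.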
In this talk we will study the class of pseudo real closed fields (PRC fields) from a model-theoretic point of view, and we will explain some of the main results obtained. We know that the complete theory of a bounded PRC field (i.e., one with only finitely many algebraic extensions of degree $m$, for each $m > 1$) is $\mathrm{NTP}_2$, and we have a good description of forking.
Also, in joint work with Alf Onshuus and Pierre Simon, we describe the definable groups in the case where they have f-generic types.
At the end of the talk we will explain some results obtained with Silvain Rideau, in which we generalize the notion of PRC fields to a more general class of fields. In particular, this class includes fields that have orders and valuations at the same time.
Their algebraic semantics are based on residuated lattices. The class of these ordered algebraic structures is quite large and hard to study, but it contains some well-known proper subclasses, such as Boolean algebras, Heyting algebras, and MV-algebras. In this talk we will see different constructions of new residuated lattices based on better-known algebras.
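In the standard formulation, a residuated lattice is an algebra $(L, \wedge, \vee, \cdot, \backslash, /, 1)$ in which $(L, \wedge, \vee)$ is a lattice, $(L, \cdot, 1)$ is a monoid, and multiplication is residuated: $x \cdot y \le z \iff y \le x \backslash z \iff x \le z / y$. Roughly, Heyting algebras are the case where the monoid operation coincides with the lattice meet, and Boolean algebras and MV-algebras are obtained by imposing further identities.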
In previous work, I proposed a model-theoretic framework of "semantic constraints," where there is no strict distinction between logical and nonlogical vocabulary. Rather, the form of sentences in a formal language is determined by a set of constraints on models. In the talk, I will show how this framework can also be used in the process of formalization, where the semantic constraints are conceived of as commitments made with respect to the language. I will extend the framework to include "formalization constraints" on functions taking arguments from a source language to a target language, and I will consider various meta-constraints on both the process of formalization and its end result.
In spite of the great diversity of those features, there are some eerie similarities between them. These were observed and made more precise in the case of graph homomorphisms by Brightwell and Winkler, who showed that dismantlability of the target graph, connectedness of the set of homomorphisms, and good mixing properties of the corresponding spin system are all equivalent. In this talk we go a step further and demonstrate similar connections for arbitrary CSPs. This requires a much deeper understanding of dismantling and the structure of the solution space in the case of relational structures, and also new refined concepts of mixing. In addition, we develop properties related to the study of valid extensions of a given partially defined homomorphism, an approach that turns out to be novel even in the graph case. We also add to the mix the combinatorial property of finite duality and its logic counterpart, FO-definability, studied by Larose, Loten, and Tardif. This is joint work with Andrei Bulatov, Víctor Dalmau, and Benoît Larose.
I will explain how to extend these results to the broader class of henselian valued fields of equicharacteristic zero with algebraically closed residue field and poly-regular value group. This includes many interesting mathematical structures, such as the field of Laurent series over the complex numbers, but more importantly it extends the results to valued fields with finitely many definable convex subgroups.
Then, letting $K$ be a model of $T_\delta^*$ and $M$ a $|K|^+$-saturated elementary extension of $K$, we first associate with an $L_\delta(K)$-definable group $G$ in $M$ a pro-$L$-definable set $G^{**}_\infty$ in which the differential prolongations $G^\nabla_\infty$ of elements of $G$ are dense, using the $L$-open core property of $T_\delta^*$. Following the same ideas as in the group configuration theorem in o-minimal structures as developed by K. Peterzil, we construct a type $L$-definable topological group $H_\infty \subset G^{**}_\infty$, acting on a $K$-infinitesimal neighbourhood of a generic element of $G^{**}_\infty$ in a faithful, continuous and transitive way. Further, $H_\infty \cap G^\nabla_\infty$ is dense in $H_\infty$, and the action of $H_\infty \cap G^\nabla_\infty$ coincides with the one induced by the initial $L_\delta$-group action.
The first part of this work is joint with Pablo Cubidès Kovacsics.
In this talk, I will:
Proving the optimality of these bounds, however, was non-trivial. Since the standard presentation of $[0,1]$ with the Euclidean metric is computably compact, we were forced to work on the natural numbers with the discrete metric (in some sense, the "simplest" non-compact metric space). Along the way, we also proved some surprising combinatorial results. McNicholl and I then continued our study of computable infinitary continuous logic and found that for any nonzero computable ordinal $\alpha$ and any right $\Pi^c_\alpha$ (or $\Sigma^c_\alpha$) real number, there is a $\Pi^c_\alpha$ (or $\Sigma^c_\alpha$) sentence which is universally interpreted as that value.
Here, we focus on the complexity of another meta-problem: learning to solve problems such as Boolean satisfiability. There is a range of ways to define "solving problems": at one extreme, the uniform setting, it means the existence of a fast (potentially randomized) algorithm; at the other, it means a potentially non-computable family of small Boolean circuits, one for each problem size. The complexity of learning can then be recast as the complexity of finding a procedure that generates Boolean circuits solving the problem at a given size, if such a procedure (and such a family of circuits) exists.
First, inspired by the KPT witnessing theorem, a special case of Herbrand's theorem in bounded arithmetic, we develop an intermediate notion of uniformity that we call LEARN-uniformity. While non-uniform lower bounds are notoriously difficult, we can prove several unconditional lower bounds for this weaker notion of uniformity. Then, returning to the world of bounded arithmetic and using that notion of uniformity as a tool, we show unprovability of several complexity upper bounds for both deterministic and randomized complexity classes; in particular, we give simpler proofs that the theory of polynomial-time reasoning PV does not prove that all of P is computable by circuits of a specific polynomial size, and that the theory $V^1$, a second-order counterpart to Buss's classic theory $S^1_2$, does not prove the same statement with NP in place of P.
Finally, we leverage these ideas to show that bounded arithmetic "has trouble differentiating" between uniform and non-uniform frameworks: more specifically, we show that theories for polynomial-time and randomized polynomial-time reasoning cannot prove both a uniform lower bound and a non-uniform upper bound for NP. In particular, while it is possible that NP ≠ P and yet all of NP is computable by families of polynomial-size circuits, this cannot be proven feasibly.
Removing the inverse operation from any lattice-ordered group (l-group), such as the ordered additive group of integers, produces a distributive lattice-ordered monoid (l-monoid), but it is not the case that every distributive l-monoid admits a group structure. In particular, every l-group embeds into an l-group of automorphisms of some chain and is either trivial or infinite, whereas every distributive l-monoid embeds into a possibly finite l-monoid of endomorphisms of some chain.
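To make the opening example concrete: dropping the inverse from the ordered additive group of integers leaves the algebra $(\mathbb{Z}, \wedge, \vee, +, 0)$, where $x \wedge y = \min(x,y)$ and $x \vee y = \max(x,y)$; the lattice is distributive and addition distributes over both lattice operations, e.g. $x + (y \vee z) = (x + y) \vee (x + z)$, so this is a distributive l-monoid (one which, of course, still happens to admit a compatible group structure).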
In this talk, we will see that inverse-free abelian l-groups generate only a proper (infinitely based) subvariety of the variety of commutative distributive l-monoids, but inverse-free l-groups generate the whole variety of distributive l-monoids. We will also see that the validity of an l-group equation can be reduced to the validity of a (constructible) finite set of l-monoid equations, yielding (since the variety of distributive l-monoids has the finite model property) an alternative proof of the decidability of the equational theory of l-groups.
In both cases, considerable care is needed from a theoretical standpoint. Negation must be restricted to avoid paradoxical scenarios, so we learn logic programs with stratified negation (in the style of Datalog). Anti-unification (i.e., generalization) of arbitrary higher-order terms is not unique, so we learn second-order logic programs that are generalizations of first-order programs.
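To illustrate the stratified-negation restriction in a generic, Datalog-style setting (this toy example and its relation names are illustrative only, not the system from the talk): predicates that appear negated may only depend on strata that have already been evaluated to a fixed point, so evaluation proceeds stratum by stratum.

```python
# A minimal two-stratum, Datalog-style evaluation written directly in Python.
# Stratum 1:  reachable(source).
#             reachable(Y) :- reachable(X), edge(X, Y).
# Stratum 2:  unreachable(X) :- node(X), not reachable(X).
# The negation in stratum 2 is safe because reachable/1 has already been
# computed to a least fixed point before stratum 2 is evaluated.

edges = {("a", "b"), ("b", "c"), ("d", "e")}
nodes = {x for edge in edges for x in edge}
source = "a"

# Stratum 1: least fixed point of the positive rules.
reachable = {source}
changed = True
while changed:
    new = {y for (x, y) in edges if x in reachable} - reachable
    changed = bool(new)
    reachable |= new

# Stratum 2: negation applied only to the completed lower stratum.
unreachable = {x for x in nodes if x not in reachable}

print("reachable:  ", sorted(reachable))    # ['a', 'b', 'c']
print("unreachable:", sorted(unreachable))  # ['d', 'e']
```

Because reachable/1 is fully computed before the negated literal is evaluated, no rule can depend (even indirectly) on its own negation, which is the kind of paradoxical scenario mentioned above.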
The Correct Computational Law project tackles this inequity by formalizing and capturing computational law using formal methods. Whether it is the French Tax Code, French family benefits or Washington State's Legal Financial Obligations, we formalize, re-implement and find bugs in the law. Doing so, we make it possible for ordinary citizens to prevail over the complexity of the law, rather than falling prey to it.
We will first describe our research agenda and ongoing efforts spanning France and the US. Then, we will focus on a case study: the complexity of federal civil procedure in the US, and how the Lean proof assistant can always find, with mathematical certainty, a path through the pleading phase that fulfills all major procedural requirements.
Determining whether such a definition exists is still out of reach. However, we will show that only "very few" algebraic extensions of the rationals have the property that their rings of integers are existentially or universally definable. Equipping the set of all algebraic extensions of the rationals with a natural topology, we show that only a meager subset has this property. An important tool is a new normal form theorem for existential definitions in such extensions. As a corollary, we construct countably many distinct computable algebraic extensions whose rings of integers are neither existentially nor universally definable. Joint work with Russell Miller, Caleb Springer, and Linda Westrick.
One property of a statistical procedure is "admissibility": Roughly, a procedure is admissible if there is no other procedure which does better under all circumstances ("better" in a sense specified by the decision theoretical framework, i.e., with respect to a fixed loss function). This is certainly a necessary condition for optimality.
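In standard decision-theoretic notation, with a fixed loss function $L$ and risk $R(\theta, \delta) = \mathbb{E}_\theta\, L(\theta, \delta(X))$, this reads: a procedure $\delta$ is admissible if there is no procedure $\delta'$ with $R(\theta, \delta') \le R(\theta, \delta)$ for every parameter value $\theta$ and $R(\theta_0, \delta') < R(\theta_0, \delta)$ for some $\theta_0$.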
Admissibility is notoriously hard to characterize. In particular, establishing a characterization in Bayesian terms has been an ongoing pursuit for decades in statistical decision theory. Recently we have found a characterization of admissibility in Bayesian terms, by using prior probability distributions which can take on infinitesimal values. We are also able to draw connections to classical methods establishing admissibility, such as Blyth's method and Stein's characterization of admissibility (which does partially characterize admissibility, but only under additional, technical hypotheses). Finally, our method has applications in concrete problems such as the problem of establishing the admissibility of the Graybill--Deal estimator.
The talk will not presuppose any knowledge of statistics or nonstandard analysis. Everything is joint work with D. Roy and H. Duanmu.
As an application, we will present a Borel version of Katok's representation theorem for multidimensional Borel flows. One-dimensional flows are closely connected to actions of $\mathbb{Z}$ via the so-called "flow under a function" construction. This appealing geometric picture does not generalize to higher dimensions. Within the ergodic theoretical framework, Katok introduced the concept of a special flow as a way to connect multidimensional $\mathbb{R}^d$ and $\mathbb{Z}^d$ actions. We will show that similar connections continue to hold in Borel dynamics.
Another illustration of the partial actions techniques that we intend to touch is the following result: a Borel equivalence relation generated by a free $\mathbb{R}$-flow can also be generated by a free action of any non-discrete and non-compact Polish group. This is in contrast with the situation for discrete groups, where amenability distinguishes groups that can and cannot generate free finite measure-preserving hyperfinite actions.