Quote:
Originally Posted by Maschke Yes, I noticed this in something I read somewhere. If $P$ is a unary predicate then the expression $\exists x P(x)$ asserts the existence of something but does not provide a recipe for cooking one up. Constructivists are people who are bothered by this.
On the other hand, computation is inherently constructive. It's exactly the study of everything we can construct using algorithms. 
Yes, but this presupposes a definition of computation that excludes real applications that can only be described as "computation".
Quote:
Originally Posted by Maschke So I am arguing that the link between computation and constructive math is more circumstantial than philosophical. In other words computation just happens to be about recipes, and constructivists like recipes. But I think the study of computation is itself agnostic to the philosophy. If computation turned out to involve nonconstructive elements then the computer scientists would just do that. They wouldn't be bothered by using nonconstructive math if that's what they needed to do to study computation. Whereas the constructivists would be annoyed.
Does that make sense?
I have a datapoint to support this opinion; I'll get to it later.
I agree.
Quote:
Originally Posted by Maschke Yes, I think I understand that. That's why for example determining if a floating point number is zero is not computable. Because in an idealized (infinite memory) implementation, no matter how many 0's you see, you can never be sure whether the next digit will be 1. So floating point numbers are conceptually broken. You can't map the mathematical (nonconstructive) real numbers onto computations. 
Is the set of mappings from N to N you mentioned earlier uncountably infinite? Because even Gödel's incompleteness deals with countable things, AFAIK.
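Here's a toy sketch of the zero-test point (my own illustration, not from the thread): if a real number is given as an infinite stream of digits, you can confirm it's nonzero by finding a nonzero digit, but no finite run of zeros ever confirms it's zero.

```python
def is_nonzero(digits):
    """Semi-decide 'x != 0' for a real given as a stream of decimal digits.

    Halts (returns True) as soon as a nonzero digit appears; on the stream
    0.000... it loops forever. That asymmetry is why exact zero-testing of
    an idealized (infinite-precision) real is not computable.
    """
    for d in digits:
        if d != 0:
            return True
    # unreachable for a genuinely infinite stream

def zeros():           # the real number 0.000...
    while True:
        yield 0

def tenth():           # the real number 0.1000...
    yield 1
    while True:
        yield 0

print(is_nonzero(tenth()))   # True: finds the 1 immediately
# is_nonzero(zeros())        # would loop forever -- never answers "it's zero"
```

The asymmetry (halt on "yes", loop on "no") is exactly the shape of a semi-decision procedure.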
Quote:
Originally Posted by Maschke Procedures that don't halt. I don't know anything about those. So maybe this is something people study. 
Have you read about recursively enumerable languages?
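For anyone who hasn't, here's the flavor of a recursively enumerable set in miniature (a toy of my own, not from the thread): membership is confirmed by searching for a witness, and the search halts if and only if a witness exists.

```python
from itertools import count

def semi_decide(n):
    """Semi-decide membership in a set by enumerating witnesses.

    The set here is the perfect squares, used as a stand-in for an r.e. set.
    We halt with True when a witness k with k*k == n turns up. For THIS set
    we can also cut the search off and answer False, because squares grow
    monotonically -- a genuinely r.e.-but-undecidable set gives you no such
    bound, so 'no' answers never come back.
    """
    for k in count():
        if k * k == n:
            return True
        if k * k > n:      # decidable shortcut; unavailable in general
            return False

print(semi_decide(49))   # True
print(semi_decide(50))   # False (only because squares happen to be decidable)
```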
Quote:
Originally Posted by Maschke I do know that there is a concept in math called "definable but not computable." A number is computable if there is an algorithm that cranks out its decimal (or binary) digits. A number is definable if it can be uniquely characterized in the language of first-order Peano arithmetic (or some other system, which may give you a different notion of definability).
It's not hard to whip up a real number that is definable but not computable. One such is Chaitin's $\Omega$. https://en.wikipedia.org/wiki/Chaitin%27s_constant
This number is essentially a bitstring where the nth bit tells you whether or not the nth Turing machine (in some particular encoding) halts. This number cannot be computable, else we'd solve the halting problem. But it's perfectly well definable, I just defined it. (Exercise for the reader to figure out how what I said makes it "definable in first-order Peano arithmetic." That's beyond the limits of my knowledge.) 
IIRC the axioms of Peano arithmetic define one number and a successor function that generates all other numbers. The procedure of applying the successor function never terminates, making the total set of entities defined under Peano arithmetic uncomputable. The Halting problem is recursively enumerable, and the natural numbers are recursively enumerable, so that makes sense. Am I on the right track?
Also, Chaitin's constant is very interesting. I didn't know about it before.
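The halting bitstring can be approximated from below, which makes the uncomputability concrete. Here's a toy sketch of mine (using Python generators as stand-in "machines" rather than real Turing machines): run each machine for a step budget, and record a 1 only when you actually witness a halt. The 1s are trustworthy; the 0s only ever mean "not yet."

```python
def halting_bits(machines, steps):
    """Lower-approximate the halting bitstring of a list of toy 'machines'.

    Each machine is a generator function; it 'halts' when the generator is
    exhausted (StopIteration). Bit n is 1 only if machine n was SEEN to halt
    within `steps` steps. A 0 is inconclusive no matter how large `steps` is,
    which is exactly why the true bitstring is uncomputable.
    """
    bits = []
    for m in machines:
        g = m()
        halted = False
        for _ in range(steps):
            try:
                next(g)
            except StopIteration:
                halted = True
                break
        bits.append(1 if halted else 0)
    return bits

def halts_in_3():       # halts after yielding 3 values
    for i in range(3):
        yield i

def loops():            # never halts
    while True:
        yield 0

print(halting_bits([halts_in_3, loops], 10))   # [1, 0] -- the 0 is only "not yet"
```

Raising the budget can flip 0s to 1s but never the reverse, so the approximations converge to the true bitstring without any stage ever certifying it.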
Quote:
Originally Posted by Maschke Now what's interesting is that Chaitin used his ideas to prove Gödel's incompleteness theorems in purely information-theoretic terms. So we're actually in the realm of Gödelian incompleteness here. 
I know I've read a pdf on this before. Found it:
https://www.andrew.cmu.edu/user/avig...andi_notes.pdf Quote:
Originally Posted by Maschke And here is something not too well known. Say you have an axiom system for arithmetic (theory of the natural numbers). We know it's incomplete, so there's some statement we can neither prove nor disprove. So we pick a truth value arbitrarily and add that as a new axiom. Now we have a NEW axiom system (the original one plus the independent statement or its negation). But this new system must ALSO be incomplete ... so we can add another axiom ...
You can see that we are going to get an infinite hierarchy of axiom systems that kind of branch out in all directions. And do you know who studied this? None other than Turing. His doctoral dissertation, "Systems of Logic Based on Ordinals," used the mathematical theory of the ordinal numbers to keep track of the structures you get when you keep extending axiom systems. https://en.wikipedia.org/wiki/System...ed_on_Ordinals
This is related to oracles. You assume you have a magic box that computes something noncomputable, and you see what you get.
So Gödel, Turing, and Chaitin are all over this mysterious realm of extensions beyond what's computable. And I believe this is what you are talking about.
Tell me if anything I've written is on target. 
I don't know. I hope so, because now I'm going to have to read up on it. (I forgot too much.)
Quote:
Originally Posted by Maschke I read that section of the page and I think you read more into that quote than I did. But I'm really unfamiliar with formal approaches to computation although I've heard Hoare's name. 
I was pushing a very modest claim: that a well-known lattice-theoretic semantics for imperative languages does not have termination hard-coded into it at any point.
Quote:
Originally Posted by Maschke I have the same question myself. A web browser obviously does not halt. It has no way of knowing whether the user will click on something in five minutes or whether the world has ended and there will be no more clicks.
So is a web browser not a computation in the sense of computer science? In which case, what is it? 
A recursively enumerable language, probably. I guess you could say it is not a "computation" in the sense that a web browser is not an algorithm designed to compute the value of any one specific answer. But it is still a mathematical structure that can be elucidated in Hoare logic, right?
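To make the "browser as mathematical structure" point concrete, here's a toy sketch of my own: model the event loop as a step function on states, and reason Hoare-style with an invariant that holds after every step, rather than with a postcondition that presupposes termination.

```python
def browser_step(state, event):
    """One step of a toy, non-terminating event loop.

    We never reason about 'the final state' (there isn't one); instead we
    maintain an invariant across steps: clicks count == length of history.
    """
    if event == "click":
        state = {"clicks": state["clicks"] + 1,
                 "history": state["history"] + ["click"]}
    return state

state = {"clicks": 0, "history": []}
for ev in ["click", "noop", "click"]:   # a finite prefix of an unbounded event stream
    state = browser_step(state, ev)
    assert state["clicks"] == len(state["history"])   # invariant holds at each step
print(state["clicks"])   # 2
```

This is the usual move for reactive programs: total correctness ("it halts with the right answer") is replaced by safety properties preserved by every transition.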
Quote:
Originally Posted by Maschke I wonder if the business about oracles and such is a step in that direction. We do go beyond computability by allowing one new noncomputable thing at a time.
But in math, we have a LOT of uncomputable things. Uncountably many. Too many to be accounted for in ANY possible formal system of symbol manipulation. Those kinds of systems are inherently countable.
So there are three levels:
* Computable.
* Definable but not computable. Extending axiom systems one new statement at a time. Chaitin's ideas. Oracles.
* Full mathematical ontology. Uncountably many real numbers, existence without recipes. 
Level 3 is beyond my ambition. Since I asked for computability to be extended from level 1 to 2, I guess I should study recursively enumerable languages. What do you think? (Edit: "Post's theorem shows that RE, together with its complement coRE, correspond to the first level of the arithmetical hierarchy."
https://en.wikipedia.org/wiki/Recurs...rable_language)
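One more toy sketch of mine on the "recursively enumerable" theme: the other standard characterization is that an r.e. set is one whose members can be listed by a machine. With a two-parameter search you dovetail, sweeping diagonals so every member is eventually emitted even though no listing order is "finished."

```python
from itertools import count, islice

def enumerate_re_set():
    """Dovetailed listing of an r.e. set.

    The set here is {n^2 + m^2 : n, m >= 0}, listed along diagonals of
    constant n + m so that every pair (n, m) -- hence every member -- is
    reached after finitely many steps. (Repeats are allowed; r.e. listings
    don't have to be duplicate-free.)
    """
    for s in count():            # sweep diagonals n + m = s
        for n in range(s + 1):
            m = s - n
            yield n * n + m * m

first = list(islice(enumerate_re_set(), 6))
print(first)   # [0, 1, 1, 4, 2, 4]
```

The same dovetailing trick is how one machine can interleave the runs of all Turing machines, which is what puts the halting set itself at the r.e. level.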
Anyone who isn't thinking of discussing game semantics should ignore the rest:
(For anyone who might be put off by my posts in this thread, this is what Wikipedia says on the history of Game Semantics:
Quote:
In the late 1950s Paul Lorenzen was the first to introduce a game semantics for logic, and it was further developed by Kuno Lorenz. At almost the same time as Lorenzen, Jaakko Hintikka developed a model-theoretical approach known in the literature as GTS. Since then, a number of different game semantics have been studied in logic.
Shahid Rahman (Lille) and collaborators developed dialogic into a general framework for the study of logical and philosophical issues related to logical pluralism. At around 1995 this triggered a kind of Renaissance with lasting consequences. Actually this new philosophical impulse experienced a parallel renewal in the fields of theoretical computer science, computational linguistics, artificial intelligence and the formal semantics of programming languages triggered by the work of Johan van Benthem and collaborators in Amsterdam who looked thoroughly at the interface between logic and games. New results in linear logic by J.-Y. Girard in the interfaces between mathematical game theory and logic on one hand and argumentation theory and logic on the other hand resulted in the work of many others, including S. Abramsky, J. van Benthem, A. Blass, D. Gabbay, M. Hyland, W. Hodges, R. Jagadeesan, G. Japaridze, E. Krabbe, L. Ong, H. Prakken, G. Sandu, D. Walton, and J. Woods who placed game semantics at the center of a new concept in logic in which logic is understood as a dynamic instrument of inference.

If you don't believe me, see for yourself:
https://en.wikipedia.org/wiki/Game_semantics I have not edited Wikipedia to have it read this way for the purpose of proving a point:
https://en.wikipedia.org/w/index.php...action=history I would prove I'm not Japaridze if I could, but I can't think of a way to do that right now other than by inviting him to join us here, which I'm not crazy enough to try. Paranoia about my motives is very off-putting to me, so I'm not going to discuss this subject in the future. Also, I may not respond immediately, not because I'm trying to think of a way to maliciously turn the subject towards game semantics, but because I'm going through pre-op for a medical procedure tomorrow.)
https://en.wikipedia.org/w/index.php...action=history I would prove I'm not Japaridze if I could, but I can't think of a way to do that right now other than by inviting him to join us here, which I'm not crazy enough to try. Paranoia about my motives is very offputting to me, so I'm not going to discuss this subject in the future. Also, I may not respond immediately, not because I'm trying to think of a way to maliciously turn the subject towards game semantics, but because I'm going through preop for a medical procedure tomorrow.)