September 4th, 2018, 01:36 PM  #1 
Newbie Joined: Sep 2018 From: Guadalajara Posts: 8 Thanks: 0  Function decomposition
Greetings all, I read in a book that “a logical, mathematical fact about functions is that functions of two arguments, which include all of the basic arithmetic functions, cannot be decomposed into functions of one argument”. The authors give no demonstration of this, nor do they cite any reference where a demonstration is given. Are the authors right? Is there a demonstration of this? Where? Many thanks in advance, Jose 
September 4th, 2018, 03:11 PM  #2  
Senior Member Joined: Aug 2012 Posts: 2,010 Thanks: 574  Quote:
Given three sets $X$, $Y$, and $Z$ and a function $f : X \times Y \to Z$ given by $z = f(x,y)$, we can consider a collection of functions $\{f_x : x \in X\}$ where $f_x : Y \to Z$ and $f_x(y) = f(x,y)$. Then given an expression $f(x,y)$ we can just input $y$ into the appropriate $f_x$. In fact we have two functions: one that inputs an element $x \in X$ and outputs some function $f_x$; and some particular $f_x$, into which we input $y$. In this way we transform a two-variable function into a composition of one-variable functions. In computer science this is called currying. That's one way to interpret your statement. But your book might be intending an entirely different interpretation. That's why we need more context. What's the subject? What's the book? What is the chapter about? What point are they trying to make? Last edited by Maschke; September 4th, 2018 at 03:17 PM.  
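Maschke's construction can be sketched in code. Here is a minimal, illustrative Python version (the names `curry`, `mul`, and `times` are mine, not from the thread):

```python
# A minimal sketch of currying. Each function below takes exactly one argument.
def curry(f):
    """Turn a two-argument function f into a chain of one-argument functions."""
    def outer(x):          # inputs x, outputs the function f_x
        def inner(y):      # this is f_x: inputs y, outputs f(x, y)
            return f(x, y)
        return inner
    return outer

def mul(x, y):
    return x * y

times = curry(mul)   # times(3) is the one-argument function f_3: y -> 3*y
print(times(3)(5))   # 15, the same value as mul(3, 5)
```

So evaluating `times(3)(5)` applies two one-argument functions in succession and recovers exactly what the two-argument `mul` computes.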
September 4th, 2018, 08:40 PM  #4 
Global Moderator Joined: Dec 2006 Posts: 19,542 Thanks: 1750 
The book is presumably Memory and the Computational Brain (by C. R. Gallistel and Adam Philip King). It can be found online.

September 6th, 2018, 04:57 AM  #5 
Newbie Joined: Sep 2018 From: Guadalajara Posts: 8 Thanks: 0 
Yes, that's it.

September 6th, 2018, 05:11 AM  #6 
Newbie Joined: Sep 2018 From: Guadalajara Posts: 8 Thanks: 0 
The statement is theirs, not mine (I enclosed it in quotation marks to mark it as a verbatim quotation). They make their assertion on p. 53, but discuss the topic on pp. 49–53, reproduced below to provide context. Unfortunately, the special symbols do not show up correctly, but they can be inferred from the context.

"Functions of More than One Argument

The functions we have looked at so far take one argument and return a single value. What about a function like multiplication that takes two arguments and returns a single value? We can easily generalize our definition of a function to include two or more arguments by letting there be more than one set that defines the domain. In this way, the actual domain entities become sequences (ordered lists) of elements from these sets. We write a particular such sequence using functional notation by separating the arguments by commas. For example, we can write f*(2, 3) = 6. Since the domain now comes from two separate sets, the arguments to the function are themselves pairings of elements from these two sets. The set composed of all possible pairings of the elements in two other sets, A and B, is called their Cartesian product, and is written A × B. The function that maps all possible pairs of numbers from the set of natural numbers {0, 1, 2, . . . }, ℕ, to their product is f* : ℕ × ℕ → ℕ, as its domain is the Cartesian product of the set of natural numbers with itself.

Predicates and relations as functions

Using the Cartesian products of sets, we can express functions of an arbitrary number of arguments. Let's look at another example function that takes three arguments. Let set P = {Jim, Sandy}, two people. Let A = {Movie, Exercise, Work}, three activities that one could do. Let D = {Sunday, Monday}, two days of the week, and let T = {True, False}, concepts of truth and falsehood. 
Our function, let's call it f_did_they, will map from people, activities they might have done, and days of the week to True or False, with True indicating that that person did that activity on the given day and False indicating they did not. We therefore can write: f_did_they : P × A × D → T. This function has as its domain the Cartesian product of three different sets and as its codomain the set whose only two members are True and False.

Properties and relations

Functions such as f_did_they in which the codomain consists of {True, False} are called predicates. They express properties and relations. A predicate that takes one argument is called a property. For example, if we have a domain O consisting of objects {object1, object2, object3}, we can define a predicate f_is_blue : O → {True, False} such that f_is_blue(object1) = True if and only if object1 is blue and False otherwise. We can think of f_is_blue(object1) as telling us whether or not object1 has the property of being blue. A predicate that takes two or more arguments is called a relation. It expresses the existence of a relationship between the arguments. For example, take the predicate f_is_touching : O × O → {True, False}. Here f_is_touching(object1, object2) = True expresses the relationship that object1 is in physical contact with object2.

The Limits to Functional Decomposition

One may wonder whether functions with two (or more) arguments can be decomposed into functions with one argument. Generally speaking, they cannot. One can see why not by considering whether f_is_touching can be decomposed into the composition of an f_touches function. This function would take single objects as its argument and produce as its output an object touched by whatever object was the input. We might think that we could then replace f_is_touching(object1, object2) = True with f_touches(object1) = object2. 
We can see this won't work because while we could have the situation f_is_touching(object1, object2) = True and f_is_touching(object1, object3) = True, we cannot have both f_touches(object1) = object2 and f_touches(object1) = object3. As this example illustrates, allowing only one argument restricts the expressive power of functions. Functions of one argument cannot combine to do the work of functions with two. Take the integer multiplication function, f* : ℤ × ℤ → ℤ, which maps pairs of integers to a single integer. It is inherent in this function that the two arguments are combined into one value and cannot remain separate within two separate functions. If f*(x, y) = z, there is no way that we can create two functions f*_part1 : ℤ → ℤ and f*_part2 : ℤ → ℤ that enable us to determine z without eventually using some function that can be applied to multiple arguments. Thus, we cannot realize this and many other two-argument functions by composing one-argument functions. This elementary mathematical truth is a critical part of our argument as to why the architecture of a powerful computing device such as the brain must make provision for bringing the values of variables to the machinery that implements some primitive two-argument functions. The ability to realize functions of at least two arguments is a necessary condition for realizing functions of a nontrivial nature.

Functions Can Map to Multi-Part Outputs

Above we used the Cartesian product to create functions that take multiple arguments. We can construct functions that return multiple values using the same technique. Let's look at the example of integer division. Integer division is similar to real-numbered division except that it only takes integers as arguments and produces an integer instead of a real number. You can think of it as the greatest number of times one integer can be subtracted from the other before the result goes negative. Given f_int_division : ℤ × ℤ → ℤ, we have f_int_division(14, 3) = 4. 
With integer division, there may be an integer remainder when we have done the last possible subtraction. When we divide 14 by 3, it divides 4 integral times, with a remainder of 2; that is, 14 = (3)(4) + 2. Therefore, we may want a function that maps from pairs of integers to two-component outputs, one component being the integer division of the two arguments and the other the remainder. We could write this as f_int_div_and_rem : ℤ × ℤ → ℤ × ℤ with f_int_div_and_rem(14, 3) = (4, 2). Our pointing out that multi-component outputs are possible does not contradict our earlier point that a function cannot have two different outputs. There cannot be a function that maps from any perfect square to its square root, because every perfect square has two different square roots, so the mapping would be indeterminate: one would not know which square root to expect. It is possible, however, to have a function that maps from any perfect square to its square roots, because in this function the square roots are components of a single output. In such a function, the output forms an ordered pair. The codomain of this function is the Cartesian product ℤ × ℤ. Note, however, that this function does not invert the mapping from integers to their squares. The domain of that function is ℤ, whereas the codomain of the inverse function from the perfect squares to their roots is ℤ × ℤ. In other words, the function that maps from perfect squares to their roots generates a different kind of entity than the entities that serve as the arguments of the function that maps from the integers to their squares. Thus, it remains true that the function mapping from integers to their squares does not have an inverse.

Mapping to Multiple-Element Outputs Does Not Increase Expressive Power

The capacity to return multi-part outputs does not buy us additional expressive power. 
If we have a function that takes x arguments and returns y values, we can replace that function with y functions, each taking x arguments and returning one value. For example, we can replace f_int_div_and_rem : ℤ × ℤ → ℤ × ℤ with the two functions f_int_division : ℤ × ℤ → ℤ and f_int_remainder : ℤ × ℤ → ℤ. This would result in f_int_division(14, 3) = 4 and f_int_remainder(14, 3) = 2. This works because each output value can be determined independently, with no interactions between the outputs. Therefore, it is not logically necessary to have functions with more than one output value, whereas it is necessary to have functions with two inputs. By combining the concept of functional composition with the fact that we only need up to two arguments and only one output value, we can use our integer-division example above to show how we can apply some function f^ to the integer dividend and the integer remainder of integer division. Here we would use the form: f^(f_int_division(x, y), f_int_remainder(x, y)).

Defining Particular Functions

Above, we were fairly informal in defining our function f_did_they. Somewhat more formally, we could say that f_did_they : P × A × D → T and that if a ∈ P, b ∈ A, and c ∈ D, then f_did_they(a, b, c) = True if person a did activity b on day c and f_did_they(a, b, c) = False if they did not. Another way to define this function would be to use a lookup table. Lookup tables define and determine a function by giving the explicit mapping of each input to its output. Table 3.1 shows the three-dimensional lookup table for f_did_they. The advantage of the lookup table is that it explicitly specifies the output of the function for each possible input. 
Under the everyday metaphysical assumption that there are simple empirical truths, our English definition of the function establishes that it exists, because we assume that there is a simple truth as to whether a given individual did or did not do a given activity on a given day. However, our description does not tell us what those truths actually are for the people and activities and days in its domain. We may believe that there is a truth of the matter, but that truth may not be knowable. The lookup table, by contrast, specifies the truth in each case. Thus, a lookup table specification of a function is a procedure that implements that function: it gives you a means of obtaining the output for a given input. In a computational machine, that is what we want. However, the lookup table architecture (the form of the physically instantiated procedure that implements the function) is impractical if there are a very large number of possible input combinations, for which an equally numerous set of outputs must be specified. Consider, for example, using a lookup table to implement the multiplication function over the infinite set of integers, f*: ℤ × ℤ → ℤ. To implement it with a lookup table, we would need two separate physical realizations of every possible integer symbol (one replication for the column headers and one for the row names), which is clearly impossible. As a practical matter, even if we limit our implementation to input integers between, say, minus a trillion trillion and plus a trillion trillion, we cannot possibly build a lookup table that big. It would require more physical resources than we could ever bring together in a manageable space. (We call this the problem of the infinitude of the possible. We will return to it repeatedly.) 
Moreover, in building such a lookup-table machine, we would need to precompute and put into the table the result of every possible multiplication, because lookup-table machines require us to put into the machine all the possible outputs that we may ever hope to obtain back from it. (We call this the problem of prespecification. We also return to it repeatedly.) Thus, if we are going to physically realize computationally important functions, such as the arithmetic functions, we are going to need architectures that permit us to make mappings from essentially infinite domains to essentially infinite ranges using modest physical resources and without having to build every possible output into the machine in advance. In effect, the machine must be able to tell us things we don't know when we have finished building it, such as: What is the product of 1,234,581,247 and 7,629? As our first way of defining f_did_they makes clear, function definitions do not necessarily tell us how to determine a particular output value given the arguments to the function; they only establish that such a mapping exists. Indeed, we know that there are many perfectly well-defined functions that, either in principle or in practice, cannot be computed; that is, the actual mapping specified by the definition of the function cannot be fully realized by any physically realizable device. For an example in which practical considerations arise, consider the function f_next_prime : ℕ → ℕ, which takes a natural number (call it n) and returns the next prime number (the first prime larger than n). This is a very well-defined function, with no one left in doubt that such a mapping exists. We know many parts of this mapping. For example, f_next_prime(1) = 2, f_next_prime(8) = 11, and f_next_prime(100) = 101. However, our knowledge of the complete mapping is limited and probably always will be. 
We know that the number of primes is infinite, but at the time of this writing, we don't know any particular prime number of more than 13 million digits. All known procedures for finding the next prime take longer and longer to execute as the arguments get bigger and bigger. Therefore, while f_next_prime(10^8) (100 million) certainly exists, we currently do not know what its value is. It is possible that in practice, for extremely large values of n, we may never be able to determine the value of f_next_prime(n). Contrast this with the case of f_next_integer(n), where we have a procedure (the successor function, which simply adds 1) that can produce the answer in the blink of an eye for arbitrarily large n. An example of a function that cannot be physically realized even in principle is the function that maps all rational numbers to 1 and all irrational numbers to 0. There are uncountably many irrational numbers within any numerical interval, no matter how small. Thus, we cannot order them in some way and begin progressing through the ordering. Moreover, most of the irrational numbers are uncomputable. That is, there is no machine that can generate a representation (encoding) of them out to some arbitrarily specified level of precision. In essence, uncomputable numbers are numbers that cannot be physically represented. If we cannot physically represent the inputs to the machine that is supposed to generate the corresponding outputs, we cannot construct a machine that will do the specified mapping.

Table 3.1 The lookup table for f_did_they

P      A         D       T
Jim    Movie     Sunday  False
Jim    Movie     Monday  False
Jim    Exercise  Sunday  True
Jim    Exercise  Monday  True
Jim    Work      Sunday  False
Jim    Work      Monday  True
Sandy  Movie     Sunday  False
Sandy  Movie     Monday  False
Sandy  Exercise  Sunday  True
Sandy  Exercise  Monday  False
Sandy  Work      Sunday  False
Sandy  Work      Monday  True
On the other hand, many functions can be implemented with simple machines that are incomparably more efficient than machines with the architecture of a lookup table. These mini-machines are at the heart of a powerful computing machine. An example of such a machine is our marble machine that adds binary number symbols (see Figure 8.11).

Summary: Physical/Neurobiological Implications of Facts about Functions

Logical, mathematical facts about functions have implications for engineers contemplating building a machine that computes. They also have implications for cognitive neuroscientists, who are confronted with the brain, a machine with spectacular computing abilities, and challenged to deduce its functional architecture and to identify the neurobiological mechanisms that implement the components of that architecture. One important fact is that functions of two arguments, which include all of the basic arithmetic functions, cannot be decomposed into functions of one argument. A second important fact is that functions of n arguments, where n is arbitrarily large, can be decomposed into functions of two arguments. A third important fact is that functions with n-element outputs can be decomposed into (replaced with) n functions with one-element outputs. What these facts tell us is that a powerful computing machine must have basic components that implement both one- and two-argument functions, and it must have a means of composing functions. Moreover, these facts about functions tell us that this is all that is essential. All implementable functions can be realized by the composition of a modest number of well-chosen functions that map one or two input elements to an output element. These logico-mathematical truths about functions tell us a great deal about the functional architecture of modern computing machines. A critical component of all such machines is a processing unit or units that implement a modest number of elementary functions (on the order of 100). 
Most of these functions map two input elements to one output element by means of a procedure hard-wired into the component. Another critical aspect of its architecture makes it possible for the machine to compose these functions in essentially infinitely various ways. An essential component of the compositional aspect of the machine's architecture is a read/write memory. The product of one elementary function is written to this memory, where it is preserved until such time as it becomes one of the inputs to another elementary function. Some obvious questions that these facts about functions pose for cognitive neuroscientists are:

1. Are there a modest number of elementary functions in the brain's computational machinery, out of which the very large number of complex functions that brains implement are realized by composition?
2. If so, what are these functions?"

Last edited by skipjack; September 6th, 2018 at 08:15 AM. 
September 6th, 2018, 05:20 AM  #7 
Newbie Joined: Sep 2018 From: Guadalajara Posts: 8 Thanks: 0 
The other possibility I thought of to refute the authors' claim for the basic arithmetic operations was this (e.g., for the case of multiplication): Let m(x, y) = x*y. One way to try to decompose this two-argument function into one-argument functions is by introducing the identity function f(x) = x. We could thus write m(x, y) = f(x)*f(y). Alas, I do not know if this is a correct refutation; I am not a professional mathematician. Cheers, Jose 
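As an illustrative sketch (the names `f` and `m` follow the post, the rest is mine), writing the proposal out in code makes the issue visible: the identity function is indeed one-argument, but the `*` that combines the two results is itself still a two-argument function, so nothing two-argument has been eliminated:

```python
# Sketch of the identity-function proposal from the post above.
def f(x):          # the identity function: one argument in, same value out
    return x

def m(x, y):
    # f is one-argument, but the '*' joining f(x) and f(y) is equivalent
    # to the two-argument function operator.mul -- it still does the
    # actual combining of the two values.
    return f(x) * f(y)

print(m(4, 6))     # 24 -- correct, but the binary '*' did the real work
```

This is exactly the authors' point: the decomposition must "eventually use some function that can be applied to multiple arguments."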
September 6th, 2018, 05:30 AM  #8 
Newbie Joined: Sep 2018 From: Guadalajara Posts: 8 Thanks: 0 
Perhaps this is the same as what Maschke showed before?
Last edited by JoseEnrique; September 6th, 2018 at 05:41 AM. 
September 6th, 2018, 07:09 AM  #9 
Newbie Joined: Sep 2018 From: Guadalajara Posts: 8 Thanks: 0 
Thank you for posting the context. The key part is found in the section The Limits to Functional Decomposition, where they assert this (I have added the special symbols that didn't survive the cut and paste): "Functions of one argument cannot combine to do the work of functions with two. Take the integer multiplication function, f* : ℤ × ℤ → ℤ, which maps pairs of integers to a single integer. It is inherent in this function that the two arguments are combined into one value and cannot remain separate within two separate functions. If f*(x, y) = z, there is no way that we can create two functions f*_part1 : ℤ → ℤ and f*_part2 : ℤ → ℤ that enable us to determine z without eventually using some function that can be applied to multiple arguments. Thus, we cannot realize this and many other two-argument functions by composing one-argument functions. This elementary mathematical truth is a critical part of our argument as to why the architecture of a powerful computing device such as the brain must make provision for bringing the values of variables to the machinery that implements some primitive two-argument functions. The ability to realize functions of at least two arguments is a necessary condition for realizing functions of a nontrivial nature" (p. 49, emphasis mine). But then again, from Maschke's reply above, one can curry multiplication. Last edited by skipjack; September 6th, 2018 at 08:22 AM. 
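For what it's worth, the currying Maschke described can be written out concretely for multiplication. In this illustrative Python sketch (names are mine), every function takes exactly one argument, yet together they determine the product:

```python
# Hand-curried integer multiplication: no function here takes two arguments.
def times(x):
    def times_x(y):      # "multiply by x" -- this is the f_x of Maschke's post
        return x * y
    return times_x

double = times(2)        # a one-argument function in its own right
print(double(21))        # 42
print(times(3)(5))       # 15
```

Whether this counts as a refutation depends on what the authors mean by "decompose": the inner function still closes over x, so both values do meet inside one body.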
September 6th, 2018, 09:02 AM  #10  
Senior Member Joined: Aug 2012 Posts: 2,010 Thanks: 574  Quote:
Still, they seem to be going a long way to make a trivial point and then extrapolating it to something about minds or brains. After all, every function can be regarded as taking one input, namely an element of the Cartesian product. The multiplication function takes the ordered pair (3, 5) and outputs 15. But the ordered pair is a single element in the Cartesian product of the reals with itself. To me this seems like a long way to stretch a point. Indeed, to say "... a powerful computing device such as the brain ..." already makes the assumption that the brain is a computing device. There's no actual proof of this. Brains work very differently from Turing machines. [Arguments about machine learning algorithms won't help, since ML always reduces to a TM; ML programs run on conventional computing hardware.] So the author has already baked his conclusion into his argument. I'm not buying this. Last edited by Maschke; September 6th, 2018 at 09:08 AM.  
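That "single element of the Cartesian product" view can also be sketched in a line or two of illustrative Python (the name `mul_pair` is mine):

```python
# Multiplication viewed as a ONE-argument function whose single input
# is an ordered pair, i.e. an element of Z x Z.
def mul_pair(pair):
    x, y = pair          # unpack the one argument into its components
    return x * y

print(mul_pair((3, 5)))  # 15
```

On this reading the arity of a function is partly a matter of bookkeeping: whether you call the input "two numbers" or "one pair" changes nothing about the mapping itself.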
