CS221/321 Lecture 6, Oct 19, 2010

Section 3. Functions and Recursion

3.1 SAELF
---------

We will next enlarge SAEL by adding functions.

----------------------------------------------------------------------
Figure 3.1: abstract syntax of SAELF
----------------------------------------------------------------------
v   ::= x, y, z, ...        (alphanumeric variables)
n   ::= 0, 1, 2, ...        (natural numbers)
bop ::= Plus, Times, ...    (primitive binary operators)

e   ::= Num(n) | Var(v) | Bapp(bop, e, e) | Let(v, e, e)
      | Fun(v, e) | App(e, e)
----------------------------------------------------------------------

There are two new constructs.

(1) Fun(v, e) : these expressions represent anonymous functions.  They
    are equivalent to lambda expressions (λv.e) in the λ-calculus.  The
    meaning is a function with formal parameter v and body e, which
    expresses the value returned by a function application (App(f, a)).
    This form of expression is sometimes called a "function
    abstraction", or simply an abstraction.

(2) App(e1, e2) : e1 must evaluate to a function value, which is
    applied to the argument e2.

These descriptions raise a couple of questions.

(i)  What is a "function value"?
(ii) What is the meaning of function application?

Note that up to this point, there was only one kind of value, a natural
number.  The space of values had a simple definition (for SAE[BS]):

    value = Nat

or in other cases (SAE[SS]), values were elements of a particular
subset of expressions, called value expressions:

    value = {Num(n) | n ∈ Nat} ⊆ expr

Now there will be a new kind of value: functions.  The syntactic
version of function values is just the closed Fun expressions:

    fun_value = {Fun(v,e) | FV(Fun(v,e)) = ∅}

where we extend the definition of FV by

    FV(Fun(v,e))   = FV(e) \ {v}
    FV(App(e1,e2)) = FV(e1) ⋃ FV(e2)

A semantic version of function values could be actual mathematical
functions from Nat to Nat:

    fun_value = (Nat → Nat) ⋃ ((Nat → Nat) → Nat) ⋃ ...

But initially we'll use the syntactic notion of function value.

With the addition of Fun and App, the Let construct can be considered
redundant, since it can be defined in terms of function abstraction and
application.

    Defn 3.1:  Let(v,e1,e2) == App(Fun(v,e2), e1)

As in the case of Let, there are two ways of defining the dynamic
semantics of functions and function application.  "Call-by-value"
semantics (CBV) is the version where a function argument must be fully
evaluated before the function application can be performed (i.e.,
reduced).  In "call-by-name" semantics (CBN), the application can be
performed before evaluating the argument, and the argument is passed as
an unevaluated expression.  These are clearly analogs of the "by-value"
and "by-name" versions of Let expression semantics.

A significant complication of the semantics is that values now come in
two different forms: numbers and functions.  It is awkward to work with
values when one kind, numbers, consists of actual natural numbers,
while the other kind, functions, is a subset of expressions.  So we'll
now treat number values as expressions of the form Num(n), as we have
been doing for the small-step semantics.
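To make this concrete, here is a sketch in SML of how the SAELF
abstract syntax and the free-variable function FV might be rendered.
The type and constructor names are illustrative (and int stands in for
Nat); this is not claimed to be the representation used in the
prog_3_*.sml implementations.

    (* illustrative SML rendering of the SAELF abstract syntax *)
    datatype bop = Plus | Times

    datatype expr
      = Num of int
      | Var of string
      | Bapp of bop * expr * expr
      | Let of string * expr * expr
      | Fun of string * expr
      | App of expr * expr

    (* free variables of an expression, as a list without duplicates *)
    fun remove (x, ys) = List.filter (fn y => y <> x) ys
    fun union (xs, ys) =
        xs @ List.filter (fn y => not (List.exists (fn x => x = y) xs)) ys

    fun FV (Num _)            = []
      | FV (Var x)            = [x]
      | FV (Bapp (_, e1, e2)) = union (FV e1, FV e2)
      | FV (Let (x, e1, e2))  = union (FV e1, remove (x, FV e2))
      | FV (Fun (x, e))       = remove (x, FV e)
      | FV (App (e1, e2))     = union (FV e1, FV e2)

    (* a closed Fun expression, i.e. a syntactic function value *)
    fun isFunValue (Fun (x, e)) = null (FV (Fun (x, e)))
      | isFunValue _            = false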
----------------------------------------------------------------------
Figure 3.2: SAELF[BSv] - "By-Value" big-step semantics for SAELF
----------------------------------------------------------------------
Num   = {Num(n) | n ∈ Nat}
Fun   = {Fun(x,e) | Fun(x,e) closed}
value = Num + Fun

Evaluation:  ⇓  ⊆  expr * value

(1) Num(n) ⇓ Num(n)

(2) Bapp(bop, e1, e2) ⇓ Num(n)
      <=  e1 ⇓ Num(n1)  &  e2 ⇓ Num(n2)  &  prim(bop,n1,n2) = n

(3) Let(x, e1, e2) ⇓ v
      <=  e1 ⇓ v1  &  [v1/x]e2 ⇓ v

(4) Fun(x, e) ⇓ Fun(x, e)      [Fun(x,e) closed]

(5) App(e1, e2) ⇓ v
      <=  e1 ⇓ Fun(x,e)  &  e2 ⇓ v2  &  [v2/x]e ⇓ v
----------------------------------------------------------------------

Note: By Rule (4), we treat any (closed) function expression as a
value.  We don't try to evaluate under the abstraction.  Thus

    Fun(x, Bapp(Plus, Num(1), Num(2)))

is a value even though there is a redex in its body (in fact, its body
_is_ a redex).

----------------------------------------------------------------------
Figure 3.3: SAELF[BSn] - "By-Name" big-step semantics for SAELF
----------------------------------------------------------------------
Num   = {Num(n) | n ∈ Nat}
Fun   = {Fun(x,e) | Fun(x,e) closed}
value = Num + Fun

Evaluation:  ⇓  ⊆  expr * value

(1) Num(n) ⇓ Num(n)

(2) Bapp(bop, e1, e2) ⇓ Num(n)
      <=  e1 ⇓ Num(n1)  &  e2 ⇓ Num(n2)  &  prim(bop,n1,n2) = n

(3) Let(x, e1, e2) ⇓ v
      <=  [e1/x]e2 ⇓ v

(4) Fun(x, e) ⇓ Fun(x, e)      [Fun(x,e) closed]

(5) App(e1, e2) ⇓ v
      <=  e1 ⇓ Fun(x,e)  &  [e2/x]e ⇓ v
----------------------------------------------------------------------

Note that in both of these semantics, the bindings of variables in
Let-expressions and function applications may represent either numbers
or functions.  So the Let Rule (3) has had to be modified in
SAELF[BSv].

Small-Step Semantics for SAELF
------------------------------

----------------------------------------------------------------------
Figure 3.4: SAELF[SSv] - "Call-By-Value" small-step semantics for SAELF
----------------------------------------------------------------------
Num   = {Num(n) | n ∈ Nat}
Fun   = {Fun(x,e) | Fun(x,e) closed}
value = Num + Fun

transition:  ↦  ⊆  expr * expr

(1) Bapp(bop, Num(n1), Num(n2)) ↦ Num(p)   where p = prim(bop,n1,n2)
(2) Bapp(bop, e1, e2) ↦ Bapp(bop, e1', e2)             <=  e1 ↦ e1'
(3) Bapp(bop, Num(n1), e2) ↦ Bapp(bop, Num(n1), e2')   <=  e2 ↦ e2'
(4) Let(x, e1, e2) ↦ Let(x, e1', e2)                   <=  e1 ↦ e1'
(5) Let(x, v1, e2) ↦ [v1/x]e2        (v1 a value)
(6) App(e1, e2) ↦ App(e1', e2)                         <=  e1 ↦ e1'
(7) App(v1, e2) ↦ App(v1, e2')       (v1 a value)      <=  e2 ↦ e2'
(8) App(v1, v2) ↦ [v2/x]e            (v1 = Fun(x,e); v2 a value)
----------------------------------------------------------------------

Note: Rule (8) is doing some "runtime type checking" by only applying
where the operator value is indeed a function expression.

----------------------------------------------------------------------
Figure 3.5: SAELF[SSn] - "Call-By-Name" small-step semantics for SAELF
----------------------------------------------------------------------
Num   = {Num(n) | n ∈ Nat}
Fun   = {Fun(x,e) | Fun(x,e) closed}
value = Num + Fun

transition:  ↦  ⊆  expr * expr

(1) Bapp(bop, Num(n1), Num(n2)) ↦ Num(p)   where p = prim(bop,n1,n2)
(2) Bapp(bop, e1, e2) ↦ Bapp(bop, e1', e2)             <=  e1 ↦ e1'
(3) Bapp(bop, Num(n1), e2) ↦ Bapp(bop, Num(n1), e2')   <=  e2 ↦ e2'
(4) Let(x, e1, e2) ↦ [e1/x]e2
(5) App(e1, e2) ↦ App(e1', e2)                         <=  e1 ↦ e1'
(6) App(Fun(x,e), e2) ↦ [e2/x]e
----------------------------------------------------------------------
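The big-step rules of Figure 3.2 translate almost directly into a
substitution-based SML evaluator.  The sketch below is in the spirit of
prog_3_1.sml, but it is not claimed to match that file; it builds on
the illustrative expr datatype above.  Since CBV only ever substitutes
closed values, the simple substitution here does not need to rename
bound variables.

    exception Stuck   (* no rule applies, e.g. applying a non-function *)

    (* subst (v, x, e) computes [v/x]e for a closed expression v *)
    fun subst (v, x, e) =
        case e
          of Num n            => Num n
           | Var y            => if y = x then v else Var y
           | Bapp (b, e1, e2) => Bapp (b, subst (v, x, e1), subst (v, x, e2))
           | Let (y, e1, e2)  =>
               Let (y, subst (v, x, e1),
                    if y = x then e2 else subst (v, x, e2))
           | Fun (y, e1)      =>
               if y = x then Fun (y, e1) else Fun (y, subst (v, x, e1))
           | App (e1, e2)     => App (subst (v, x, e1), subst (v, x, e2))

    fun prim (Plus, n1, n2)  = n1 + n2
      | prim (Times, n1, n2) = n1 * n2

    fun eval (Num n) = Num n                                    (* rule (1) *)
      | eval (Var _) = raise Stuck                              (* closed programs only *)
      | eval (Bapp (b, e1, e2)) =                               (* rule (2) *)
          (case (eval e1, eval e2)
             of (Num n1, Num n2) => Num (prim (b, n1, n2))
              | _ => raise Stuck)
      | eval (Let (x, e1, e2)) = eval (subst (eval e1, x, e2))  (* rule (3) *)
      | eval (Fun (x, e)) = Fun (x, e)                          (* rule (4) *)
      | eval (App (e1, e2)) =                                   (* rule (5) *)
          (case eval e1
             of Fun (x, e) => eval (subst (eval e2, x, e))
              | _ => raise Stuck)

Changing the Let and App clauses to substitute the unevaluated
expressions e1 and e2, instead of their values, would give the by-name
rules of Figure 3.3.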
Question 3.1: Are the two call-by-value semantics (SAELF[BSv] and
SAELF[SSv]) equivalent?  [We would be surprised if not!]

Question 3.2: Are the two call-by-name semantics (SAELF[BSn] and
SAELF[SSn]) equivalent?

Question 3.3: Are the CBV and CBN semantics equivalent?

Question 3.4: Are the various versions of the semantics terminating?

----------------------------------------------------------------------
3.2 Type errors
---------------

Until now, we lived in a simple paradise where nothing could go wrong.
All syntactically well-formed expressions could be evaluated, their
evaluation would always terminate, and the value was unique (evaluation
was deterministic).  But now that we have more than one form of value,
things can go wrong -- not all syntactically correct expressions can be
evaluated.

Examples:

(1) Bapp(Plus, Num(3), Fun(x, Var(x)))
(2) App(Bapp(Plus, Num(2), Num(3)), Num(5))

Here (1) is bogus because the second argument for Plus is a function
instead of a number.  Example (2) is bogus because the first argument
of App evaluates to a number instead of a function, as required by rule
(5) for either CBV or CBN.

What actually happens when we try to evaluate these?  We can
successfully evaluate the arguments in both cases, but then when we try
to use Rule (2) in the first case, or Rule (5) in the second case, the
premises can't be satisfied because the argument values don't match the
required forms (Fun(x, Var(x)) doesn't match Num(n2) for Rule (2), and
Num(5) doesn't match Fun(x,e) for Rule (5)).  The evaluation becomes
"stuck" and can't proceed.

The value patterns occurring in the premises of Rules (2) and (5)
constitute dynamic "type checks" to validate that the arguments are of
the correct form.

----------------------------------------------------------------------
3.3 Relation with the Pure λ-Calculus
-------------------------------------

The usual abstract syntax definition for the pure λ-calculus is:

    x ::= variables
    M ::= x | λx.M | M1 M2

Embedded within our SAELF language, we have the smaller language

    e ::= Var(x) | App(e1,e2) | Fun(x,e)

It is clear that these languages are "isomorphic", i.e. they are the
same language except for the abstraction and application notation.

The usual execution semantics for the λ-calculus is given by:

----------------------------------------------------------------------
Figure 3.6: Small-step semantics of the λ-calculus
----------------------------------------------------------------------
(1) (λx.M)N ↦ [N/x]M        (β-reduction)

        M ↦ M'
(2) --------------
      M N ↦ M' N

        N ↦ N'
(3) --------------
      M N ↦ M N'

        M ↦ M'
(4) --------------
     λx.M ↦ λx.M'
----------------------------------------------------------------------

The three search rules (2)-(4) are nondeterministic, and they say that
you can reduce any β-redex anywhere in an expression, including within
the body of a λ-abstraction.  We will sometimes use ↦β instead of plain
↦ to identify λ-calculus β-reduction.

What expressions play the role of "value" expressions in the
λ-calculus?  These are the expressions containing no β-redexes, and
they are called "normal forms".  (We often restrict ourselves to closed
λ-expressions when talking about evaluation.)

Examples:

    x,  xy,  x(λy.y)                        (if you allow nonclosed expressions)
    λx.x,  λx.λy.x,  λf.λx.(f(xx))(f(xx))   (closed normal forms)

One important evaluation strategy (i.e. method of picking the next
redex to be reduced) is the leftmost-outermost reduction strategy, also
known as "normal-order" reduction.

Theorem: If an expression M is normalizable, meaning there exists some
finite transition sequence ending in a normal form, then normal-order
reduction will terminate with a normal form.

So in this sense normal-order reduction is "safe".  Note that
normal-order reduction is not the same as what we have called
call-by-name, because normal-order reduction will reduce redexes under
λ-abstractions.
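To make the notion of normal form concrete, here is a small SML sketch
over an illustrative datatype of pure λ-terms; it tests whether a term
contains a β-redex anywhere, including under abstractions.

    (* pure lambda-calculus terms *)
    datatype term = V of string | Lam of string * term | Ap of term * term

    (* isNormal t is true iff t contains no beta-redex anywhere *)
    fun isNormal (V _)           = true
      | isNormal (Lam (_, m))    = isNormal m    (* look under the abstraction *)
      | isNormal (Ap (Lam _, _)) = false         (* this application is itself a redex *)
      | isNormal (Ap (m, n))     = isNormal m andalso isNormal n

    (* e.g. isNormal (Lam ("x", V "x"))             = true
            isNormal (Ap (Lam ("x", V "x"), V "y")) = false *)

By contrast, the CBN transition rules of Figure 3.5 never look inside a
Fun, so they stop at function expressions whose bodies may still
contain redexes.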
Is the λ-calculus normalizing?  I.e., for any M, is there a normal form
N s.t. M ↦* N?

Example:  Δ = (λx.xx)(λx.xx)

Δ is a β-redex.  It β-reduces to

    [λx.xx/x](xx) = (λx.xx)(λx.xx) = Δ

So when we try to evaluate Δ to a normal form, we get an infinite
reduction sequence:

    Δ ↦ Δ ↦ Δ ↦ Δ ↦ Δ ↦ Δ ↦ Δ ↦ ...

In SAELF abstract syntax:

    Delta = App(Fun(x, App(Var(x), Var(x))), Fun(x, App(Var(x), Var(x))))  ↦  Delta

-------------------
Question: What does this Δ example have to do with the termination
question for SAELF (either CBV or CBN)?

----------------------------------------------------------------------
3.4 Recursive functions
-----------------------

Defining factorial in SAELF
---------------------------

Can we define the factorial function in SAELF?  Here is a defn in ML:

    fun fact n = if n = 0 then 1 else n * fact(n-1)

What are we missing in SAELF?

(1) the minus (-) primitive arithmetic operator
(2) the relational operator =
(3) boolean values returned by relational operators like =
(4) conditional expressions (if ... then ... else ...)
(5) recursive (circular) function definitions

Let's extend SAELF to a language that is rich enough to define
factorial.  We'll call this extended SAELF "Fun".

----------------------------------------------------------------------
Figure 3.7: abstract syntax of Fun
----------------------------------------------------------------------
v   ::= x, y, z, ...                 (alphanumeric variables)
n   ::= 0, 1, 2, ...                 (natural numbers)
b   ::= True, False                  (boolean constants)
bop ::= Plus, Times, Minus, ...      (primitive arithmetic operators)
        Eq, LT, GT, ...              (primitive relational operators)

e   ::= Num(n) | True | False | Var(v) | Bapp(bop, e, e) | If(e, e, e)
      | Let(v, e, e) | Fun(v, e) | App(e, e)
----------------------------------------------------------------------

We are assuming for now that our new relational operators are also
binary operators that take a pair of numbers as arguments.

Dynamic Semantics of Fun
------------------------

We need an auxiliary function Val that takes the result of applying one
of the expanded set of binary "primops" (primitive operations) and
transforms it into a syntactic value (a constant expression).  Primops
can return either a number (n ∈ Nat) or a boolean value (true, false).

    Val(n)     = Num(n)      (n ∈ Nat)
    Val(true)  = True
    Val(false) = False

Val does not need to deal with function values, since none of the
primops returns functions.
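Continuing the illustrative SML sketches, the enlarged syntax and the
Val function might be rendered as follows.  These declarations
supersede (shadow) the SAELF versions above; the primResult type and
the truncated treatment of Minus (so that results stay in Nat) are
assumptions of this sketch, not details fixed by the notes.

    (* Fun abstract syntax: SAELF plus booleans, If, and more primops *)
    datatype bop = Plus | Times | Minus | Eq | LT | GT

    datatype expr
      = Num of int                  (* int standing in for Nat *)
      | True
      | False
      | Var of string
      | Bapp of bop * expr * expr
      | If of expr * expr * expr
      | Let of string * expr * expr
      | Fun of string * expr
      | App of expr * expr

    (* a primop returns either a number or a boolean at the meta level *)
    datatype primResult = NatR of int | BoolR of bool

    fun prim (Plus,  n1, n2) = NatR (n1 + n2)
      | prim (Times, n1, n2) = NatR (n1 * n2)
      | prim (Minus, n1, n2) = NatR (Int.max (n1 - n2, 0))   (* truncated subtraction *)
      | prim (Eq,    n1, n2) = BoolR (n1 = n2)
      | prim (LT,    n1, n2) = BoolR (n1 < n2)
      | prim (GT,    n1, n2) = BoolR (n1 > n2)

    (* Val injects a primop result back into the expression syntax *)
    fun Val (NatR n)      = Num n
      | Val (BoolR true)  = True
      | Val (BoolR false) = False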
----------------------------------------------------------------------
Figure 3.8: Fun[BSv] - "Call-By-Value" big-step semantics for Fun
----------------------------------------------------------------------
Num   = {Num(n) | n ∈ Nat}
Bool  = {True, False}
Fun   = {Fun(x,e) | Fun(x,e) closed}
value = Num + Bool + Fun

Evaluation:  ⇓  ⊆  expr * value

(1) Num(n) ⇓ Num(n)

(2) Bapp(bop, e1, e2) ⇓ Val(p)
      <=  e1 ⇓ Num(n1)  &  e2 ⇓ Num(n2)  &  prim(bop,n1,n2) = p    (*)

(3) Let(x, e1, e2) ⇓ v
      <=  e1 ⇓ v1  &  [v1/x]e2 ⇓ v

(4) Fun(x, e) ⇓ Fun(x, e)      [Fun(x,e) closed]

(5) App(e1, e2) ⇓ v
      <=  e1 ⇓ Fun(x,e)  &  e2 ⇓ v2  &  [v2/x]e ⇓ v

(6a) If(e1, e2, e3) ⇓ v
      <=  e1 ⇓ True  &  e2 ⇓ v                                     (*)

(6b) If(e1, e2, e3) ⇓ v
      <=  e1 ⇓ False  &  e3 ⇓ v                                    (*)
----------------------------------------------------------------------

----------------------------------------------------------------------
Figure 3.9: Fun[BSn] - "Call-By-Name" big-step semantics for Fun
----------------------------------------------------------------------
Num   = {Num(n) | n ∈ Nat}
Bool  = {True, False}
Fun   = {Fun(x,e) | Fun(x,e) closed}
value = Num + Bool + Fun

Evaluation:  ⇓  ⊆  expr * value

(1) Num(n) ⇓ Num(n)

(2) Bapp(bop, e1, e2) ⇓ Val(p)
      <=  e1 ⇓ Num(n1)  &  e2 ⇓ Num(n2)  &  prim(bop,n1,n2) = p    (*)

(3) Let(x, e1, e2) ⇓ v
      <=  [e1/x]e2 ⇓ v

(4) Fun(x, e) ⇓ Fun(x, e)      [Fun(x,e) closed]

(5) App(e1, e2) ⇓ v
      <=  e1 ⇓ Fun(x,e)  &  [e2/x]e ⇓ v

(6a) If(e1, e2, e3) ⇓ v
      <=  e1 ⇓ True  &  e2 ⇓ v                                     (*)

(6b) If(e1, e2, e3) ⇓ v
      <=  e1 ⇓ False  &  e3 ⇓ v                                    (*)
----------------------------------------------------------------------

----------------------------------------------------------------------
Figure 3.10: Fun[SSv] - "Call-By-Value" small-step semantics for Fun
----------------------------------------------------------------------
Num   = {Num(n) | n ∈ Nat}
Bool  = {True, False}
Fun   = {Fun(x,e) | Fun(x,e) closed}
value = Num + Bool + Fun

transition:  ↦  ⊆  expr * expr

(1)  Bapp(bop, Num(n1), Num(n2)) ↦ Val(p)   where p = prim(bop,n1,n2)
(2)  Bapp(bop, e1, e2) ↦ Bapp(bop, e1', e2)             <=  e1 ↦ e1'
(3)  Bapp(bop, Num(n1), e2) ↦ Bapp(bop, Num(n1), e2')   <=  e2 ↦ e2'
(4)  Let(x, e1, e2) ↦ Let(x, e1', e2)                   <=  e1 ↦ e1'
(5)  Let(x, v1, e2) ↦ [v1/x]e2        (v1 a value)
(6)  App(e1, e2) ↦ App(e1', e2)                         <=  e1 ↦ e1'
(7)  App(v1, e2) ↦ App(v1, e2')       (v1 a value)      <=  e2 ↦ e2'
(8)  App(v1, v2) ↦ [v2/x]e            (v1 = Fun(x,e); v2 a value)
(9)  If(e1, e2, e3) ↦ If(e1', e2, e3)                   <=  e1 ↦ e1'
(10) If(True, e2, e3) ↦ e2
(11) If(False, e2, e3) ↦ e3
----------------------------------------------------------------------

----------------------------------------------------------------------
Figure 3.11: Fun[SSn] - "Call-By-Name" small-step semantics for Fun
----------------------------------------------------------------------
Num   = {Num(n) | n ∈ Nat}
Bool  = {True, False}
Fun   = {Fun(x,e) | Fun(x,e) closed}
value = Num + Bool + Fun

transition:  ↦  ⊆  expr * expr

(1) Bapp(bop, Num(n1), Num(n2)) ↦ Val(p)   where p = prim(bop,n1,n2)
(2) Bapp(bop, e1, e2) ↦ Bapp(bop, e1', e2)              <=  e1 ↦ e1'
(3) Bapp(bop, Num(n1), e2) ↦ Bapp(bop, Num(n1), e2')    <=  e2 ↦ e2'
(4) Let(x, e1, e2) ↦ [e1/x]e2
(5) App(e1, e2) ↦ App(e1', e2)                          <=  e1 ↦ e1'
(6) App(v1, e2) ↦ [e2/x]e             (v1 = Fun(x,e))
(7) If(e1, e2, e3) ↦ If(e1', e2, e3)                    <=  e1 ↦ e1'
(8) If(True, e2, e3) ↦ e2
(9) If(False, e2, e3) ↦ e3
----------------------------------------------------------------------

Notes:

(1) For App, we still have to evaluate the first (operator) argument,
    so App is "strict" in its first argument, even in the CBN version.

(2) For If, the rules are the same in CBV and CBN.  If is strict in its
    first (condition) argument, and "by name" in its other two
    arguments by its very nature.
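The rules of Fun[BSn] (Figure 3.9) can likewise be rendered as a
substitution-based SML evaluator, in the spirit of prog_3_2.sml but not
claimed to match it; it reuses the expr, prim and Val declarations
above and shadows the earlier SAELF subst and eval.  Note how the Let
and App clauses substitute unevaluated expressions, and how only the
chosen branch of an If is evaluated.

    exception Stuck   (* a dynamic type error: no rule applies *)

    (* [e'/x]e; safe without renaming as long as the whole program is closed *)
    fun subst (e', x, e) =
        case e
          of Num n            => Num n
           | True             => True
           | False            => False
           | Var y            => if y = x then e' else Var y
           | Bapp (b, e1, e2) => Bapp (b, subst (e', x, e1), subst (e', x, e2))
           | If (e1, e2, e3)  =>
               If (subst (e', x, e1), subst (e', x, e2), subst (e', x, e3))
           | Let (y, e1, e2)  =>
               Let (y, subst (e', x, e1),
                    if y = x then e2 else subst (e', x, e2))
           | Fun (y, e1)      =>
               if y = x then Fun (y, e1) else Fun (y, subst (e', x, e1))
           | App (e1, e2)     => App (subst (e', x, e1), subst (e', x, e2))

    fun eval (Num n) = Num n                                 (* rule (1) *)
      | eval True = True
      | eval False = False
      | eval (Var _) = raise Stuck                           (* closed programs only *)
      | eval (Bapp (b, e1, e2)) =                            (* rule (2) *)
          (case (eval e1, eval e2)
             of (Num n1, Num n2) => Val (prim (b, n1, n2))
              | _ => raise Stuck)
      | eval (Let (x, e1, e2)) = eval (subst (e1, x, e2))    (* rule (3): e1 unevaluated *)
      | eval (Fun (x, e)) = Fun (x, e)                       (* rule (4) *)
      | eval (App (e1, e2)) =                                (* rule (5): e2 unevaluated *)
          (case eval e1
             of Fun (x, e) => eval (subst (e2, x, e))
              | _ => raise Stuck)
      | eval (If (e1, e2, e3)) =                             (* rules (6a), (6b) *)
          (case eval e1
             of True => eval e2
              | False => eval e3
              | _ => raise Stuck)

Evaluating e1 and e2 first in the Let and App clauses would instead
give the CBV semantics of Figure 3.8.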
----------------------------------------------------------------------
Achieving Recursion
-------------------

So now we have enough arithmetic primitives, we have the relational
operator of equality (=), and we have conditional expressions.  The
final ingredient needed to define factorial is recursion.  [Actually,
the weaker notion of primitive recursion would do in this case, but
we'll go directly to general recursion.]

Do we need to add a new language construct to introduce recursive
definitions, like a letrec declaration form?

    letrec fact = λn. if n = 0 then 1 else n * fact(n-1)
    in ...

This kind of declaration form allows the defined symbol (e.g. fact in
this case) to be used in its definiens.  This form was introduced in
Peter Landin's ISWIM in the mid 1960s, and may have been used
informally much earlier.  We'll use such a declaration form for
convenience (and later when we go to typed languages), but it turns out
that it can actually be defined in Fun without letrec, so it is not
essential.

Here is the trick (discovered by Church and his students in the 1930s).
We can use a particular function expression to "implement" recursive
definitions.  This expression is called the Y combinator (a
"combinator" is just a closed λ-expression).  For background, see this
Wikipedia article:

    http://en.wikipedia.org/wiki/Fixed_point_combinator

Here is the expression (using λ-calculus concrete syntax):

    Y = λf.(λx.f(xx))(λx.f(xx))

Notice that this expression is a close cousin of the self-reproducing Δ
expression above (this is not a coincidence!).  Given some function F,
we have

    YF ↦β (λx.F(xx))(λx.F(xx)) ↦β F((λx.F(xx))(λx.F(xx))) = F(YF)

so we have the equation

    YF = F(YF)                    (*)

and YF is a symbolic "fixed-point" of the function F.

-------------
Note: The "equation" (*) is true using a theory of equality of
λ-expressions based on the reflexive, symmetric, transitive closure of
the ↦β relation.  It is a kind of "syntactic" notion of equality.
-------------

How can we use this fixed-point property of Y to compute a recursive
function?  Let

    Fact = λg.λx. if x = 0 then 1 else x * g(x - 1)

Then we would expect the factorial function fact to satisfy

    fact = Fact fact

i.e. fact should be a fixed-point of the "generator" functional Fact.
So let's try

    fact = Y Fact

Now let's try computing fact 2:

    fact 2 = (Y Fact) 2
           ↦ ((λx.Fact(xx))(λx.Fact(xx))) 2
           ↦ (Fact ((λx.Fact(xx))(λx.Fact(xx)))) 2
           = Fact f 2      where f = (λx.Fact(xx))(λx.Fact(xx))
           ↦ (λx. if x = 0 then 1 else x * f(x - 1)) 2         (1)
           ↦ if 2 = 0 then 1 else 2 * f(2 - 1)                 (2)
           ↦ if false then 1 else 2 * f(2 - 1)                 (3)
           ↦ 2 * f(2 - 1)
           ↦ 2 * f(1)                                          (4)
           ↦ 2 * (Fact f 1)
           ↦ 2 * ((λx. if x = 0 then 1 else x * f(x - 1)) 1)
           ↦ 2 * (if 1 = 0 then 1 else 1 * f(1 - 1))
           ↦ 2 * (if false then 1 else 1 * f(1 - 1))
           ↦ 2 * (1 * f(1 - 1))
           ↦ 2 * (1 * f(0))
           ↦ 2 * (1 * (Fact f 0))
           ↦ 2 * (1 * ((λx. if x = 0 then 1 else x * f(x - 1)) 0))
           ↦ 2 * (1 * (if 0 = 0 then 1 else 0 * f(0 - 1)))
           ↦ 2 * (1 * (if true then 1 else 0 * f(0 - 1)))
           ↦ 2 * (1 * 1)
           ↦ 2 * 1
           ↦ 2

Notes:

(1) We β-reduced Fact f -- using the CBN rule!
(2) β-reduction of the top application.
(3),(4) These are reductions of primitive operator redexes (the test
    2 = 0 and the subtraction 2 - 1).  This is not β-reduction, but is
    usually called δ-reduction.

This works only because we are using CBN semantics in this example.  If
we were using CBV semantics, at the point (1) where we were reducing
the β-redex (Fact f), we would first have to evaluate the argument f,
which is also a β-redex.
Reducing f yields

    Fact((λx.Fact(xx))(λx.Fact(xx))) = Fact f

which is again a β-redex whose argument is the β-redex f, which means
we have to reduce f again, yielding Fact(Fact f), and so on.  We are
stuck in a loop trying to reduce f to a value to which Fact can be
applied.

If we want recursion with CBV, however, all is not lost.  It turns out
that the Y combinator can be modified to work with CBV.  This modified
combinator is

    Yv = λf.(λx.f(λy.(xx)y))(λx.f(λy.(xx)y))

======================================================================
Homework 4.1.  Verify that fact = Yv Fact defines the factorial
function in CBV semantics by hand calculating (fact 2) (using Fun[SSv]
rules).  Follow the example of the calculation of (fact 2) using the
CBN Y combinator in Lecture 6.
======================================================================

Note that (Y Fact) 2 fails to terminate under CBV semantics while it
converges under CBN semantics, which means that CBV and CBN are not
equivalent.  This answers Question 3.3 above.  But there is a relation:
if an expression evaluates to a value under CBV, it also evaluates to a
value under CBN, and to the same value when that value is a number (but
obviously, not vice versa).

Prop 3.1: For e ∈ Fun, if e ⇓ v in Fun[BSv], then e ⇓ v' in Fun[BSn]
for some value v' (with v' = v whenever v is a number).

Question 3.5.  How hard is it to prove Prop 3.1?

Example: Here is a simple λ-calculus term whose evaluation terminates
under CBN (or normal-order) evaluation, but does not terminate under
CBV:

    (λx.λy.y)Δ

This can also be expressed in SAELF and Fun.

----------------------------------------------------------------------
letrec
------

Although we can implement recursion using a Y combinator without
additional syntax constructs, it is convenient to have a special
declaration form for defining recursive functions.  This is usually a
recursive variant of let:

    letrec f = λx.e1 in e2

where the function name f will normally appear free (be called
recursively) in e1.  Such a letrec expression can be translated into an
ordinary, nonrecursive let expression using the Y combinator:

    let f = Y(λf.λx.e1) in e2

where Y will be either the CBN or CBV form of the Y combinator, as
appropriate.

If we add a letrec construct to Fun, then an environment-based
evaluator can use a special shortcut to implement recursion, without
resorting to the Y combinator.  This technique is used in prog_3_4.sml.

----------------------------------------------------------------------
Implementations
---------------

prog_3_1.sml: defines a big-step, call-by-value evaluator using
              substitutions
prog_3_2.sml: defines a big-step, call-by-name evaluator using
              substitutions
prog_3_3.sml: defines a big-step, call-by-value evaluator using
              environments
prog_3_4.sml: defines a big-step, call-by-value evaluator using
              environments, with letrec added to the language Fun

Fun-CBV-tests.sml : test cases for CBV evaluator, including factorial
                    using the call-by-value Y combinator
Fun-CBN-tests.sml : test cases for CBN evaluator, including factorial
                    using the call-by-name Y combinator

----------------------------------------------------------------------
3.5 A denotational view of recursive functions
----------------------------------------------

We saw that fact can be defined as a formal fixed-point of a generator
functional:

    fact = Fact(fact)

where the equality is with respect to a formal theory of equality of
expressions based on syntactic reduction rules.

Another way of interpreting this fixed-point equation is to view it as
an equation involving the mathematical functions fact ∈ Nat → Nat and
Fact ∈ (Nat → Nat) → (Nat → Nat).
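As a bridge to that reading, the generator/fixed-point pattern can also
be written directly in SML, whose built-in recursion supplies the fixed
point; this is only an illustration of the pattern, not a definition of
Y itself (the pure Y combinator is not directly typable in ML).  Note
that fix is eta-expanded, for the same reason Yv had to be: SML is
call-by-value.

    (* an explicit fixed-point operator, using SML's own recursion;
       the eta-expansion "fn x => ..." delays unrolling g (fix g)
       until an argument arrives, as in the CBV combinator Yv *)
    fun fix g = fn x => g (fix g) x

    (* the generator functional Fact from above *)
    val Fact = fn g => fn x => if x = 0 then 1 else x * g (x - 1)

    val fact = fix Fact

    val two = fact 2      (* = 2 *)
    val six = fact 3      (* = 6 *)

The question the rest of this section addresses is which mathematical
function the equation fact = Fact(fact) actually determines.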
We start with "partial functions", which can be modeled as
single-valued binary relations between the domain set and range set:

    fact = {(0,1), (1,1), (2,2), (3,6), (4,24), ...}

Defn: Given a function f ⊆ A × B.
  (1) Dom(f) = {a ∈ A | ∃b ∈ B. (a,b) ∈ f}    (domain of f)
  (2) Ran(f) = {b ∈ B | ∃a ∈ A. (a,b) ∈ f}    (range of f)
  (3) f is "total" if Dom(f) = A, otherwise it is "partial".
  (4) A → B is the set of all (total or partial) functions from A to B.

The set of functions A → B is ordered by the ⊆ relation.  This is a
partial order with least element ∅.

Consider the higher-order functional

    Fact : (Nat → Nat) → (Nat → Nat)

defined by

    Fact = λg.λx. if x = 0 then 1 else x * g(x - 1)

Claim: Fact is monotonic:  f1 ⊆ f2  =>  Fact(f1) ⊆ Fact(f2).

This follows from the fact that the basic constructs for building
functions (abstraction, application, conditionals, primitive operators)
can be shown to be monotonic or to preserve monotonicity of their
arguments (by induction on the structure of the defining λ-expression).

So now we play the same game as we did when we wanted to explain the
meaning of the recursive definition of SAE expressions.

    f(0)   = ∅
    f(1)   = Fact(f(0))
    ...
    f(n+1) = Fact(f(n))

Each f(n) is a finite partial function:

    f(0) = ∅
    f(1) = {(0,1)}
    f(2) = {(0,1), (1,1)}
    f(3) = {(0,1), (1,1), (2,2)}

We can observe:

    (1) ∀n. Dom(f(n)) = {k ∈ Nat | k < n}
    (2) ∀n. f(n) ⊆ f(n+1)
    (3) ∀n. f(n) ⊆ fact      (f(n) is a finite approximation of fact)

Now we _define_ fact as the limit of these approximations:

    fact = ⋃{f(n) | n ∈ Nat}

Claim: Fact(fact) = fact.

Proof:
    Fact(fact) = Fact(⋃{f(n) | n ∈ Nat})
               = ⋃{Fact(f(n)) | n ∈ Nat}          (1)
               = ⋃{f(n+1) | n ∈ Nat}
               = (⋃{f(n+1) | n ∈ Nat}) ⋃ ∅
               = (⋃{f(n+1) | n ∈ Nat}) ⋃ f(0)
               = ⋃{f(n) | n ∈ Nat}
               = fact

The equality (1) in this proof follows from a stronger property of
function definitions in the λ-calculus: all functions expressed in the
λ-calculus are "continuous", meaning that they preserve limits of
infinite ascending chains (in this case, the chain is f(n), and the
limit is the infinite union ⋃{f(n) | n ∈ Nat}).  The property of
continuity is a strengthening of the property of monotonicity
(continuity => monotonicity), and it can be proved for λ-expressions
using the same technique as for monotonicity (induction on the
structure of the expression).

----------------------------------------------------------------------
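Finally, the approximation chain above can be computed concretely.  In
the SML sketch below (the names and the option-based representation are
assumptions of the sketch), a partial function from Nat to Nat is
represented as a function of type int -> int option, with NONE meaning
"undefined"; FactP is the generator Fact lifted to this representation,
and approx n computes f(n).

    (* the generator Fact, lifted to partial functions Nat -> Nat,
       represented as int -> int option (NONE = "undefined") *)
    fun FactP g x =
        if x = 0 then SOME 1
        else case g (x - 1)
               of SOME r => SOME (x * r)
                | NONE   => NONE

    (* the empty partial function, the least element of the order *)
    val bottom : int -> int option = fn _ => NONE

    (* approx n computes the n-th approximation f(n) *)
    fun approx 0 = bottom
      | approx n = FactP (approx (n - 1))

    (* approx 3 is defined exactly on {0,1,2}:
         approx 3 2 = SOME 2
         approx 3 3 = NONE    *)

Dom(approx n) = {k | k < n}, matching observation (1) above, and the
chain approx 0 ⊆ approx 1 ⊆ ... approximates fact from below.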