Linking Syntax and Semantics

Aim:
To describe an interpretation algorithm that uses the syntactic analysis of a sentence together with grammar rules augmented by feature information describing how the semantic analysis of a phrase is derived from the semantic analyses of its constituents. We shall show how this works for a simple grammar and then investigate a few more complicated examples, including the handling of auxiliary verbs and prepositional phrases.
The concepts of lambda-expression and lambda-reduction are central to the handling of certain grammar rules.
Reference: Chapter 9 of Allen, J.: Natural Language Understanding, 2nd ed., Benjamin Cummings, 1995.
Keywords: CNP, common noun phrase, lambda reduction, VAR feature
Plan:
  • Every constituent must have an interpretation - lambda-expressions, lambda-reduction
  • Example: grammar rules with semantic interpretation via features
  • The var feature
  • Lexicon entries with sem features
  • Handling PPs and VPs with lambda-expressions
  • Handling different types of PPs


James Allen

James Allen, who wrote the textbook we follow in the NLP section of this course, is a professor of Computer Science at the University of Rochester in the USA. Allen is a distinguished researcher in the NLP field, and a former editor-in-chief of the important NLP journal Computational Linguistics.

[Picture of James Allen, from his home page]

Interpretation Algorithm


Lambda Expressions


Lambda Expressions 2

______
† Technically, you replace every free occurrence of X in p(X) by a. So when would an occurrence not be free?
Answer: if it lies within the scope of a quantifier or another λ that binds the same variable.
Example: λ(X, knows1(X, λ(X, likes1(X, pizza1)))) = [someone] knows [someone else] likes pizza
The X in the inner λ is bound by its λ, and so would not be replaced if λ-reducing the outer λ-expression.
Obviously it would be clearer to write λ(X, knows1(X, λ(Y, likes1(Y, pizza1))))
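As a concrete illustration, here is a minimal Prolog sketch of lambda-reduction. The predicate name lambda_reduce/3 and the lambda(X, Body) encoding are assumptions of this sketch, not Allen's code; the bound variable is represented by a Prolog variable, so applying the lambda-expression to an argument is just unification, and distinct Prolog variables keep inner and outer bindings apart:

% lambda(X, Body): a lambda-expression whose bound variable X is a Prolog variable.
% Applying it to Arg simply unifies X with Arg, leaving Body as the result.
lambda_reduce(lambda(X, Body), Arg, Body) :- X = Arg.

% Example query:
% ?- lambda_reduce(lambda(X, likes1(X, pizza1)), jack1, Sem).
% Sem = likes1(jack1, pizza1).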

Lambda Expressions 3


9.2 Grammar / Lexicon with Semantic Interpretation


Parse Tree with Logical Form (SEM) Features

The whole of the parse/semantic tree for the sentence Mary sees Jack is shown below (equivalent to Fig. 9.1 of Allen):

Semantic parse tree for 'Mary sees Jack'


Grammatical Rules with sem Features

Like Table 9.3 in Allen
1. S(sem(?semvp(?semnp))) → NP(sem(?semnp)) VP(sem(?semvp))
2. VP(var(?v), sem(lambda(a2, ?semv(?v, a2)))) → V[_none](sem(?semv))
3. VP(var(?v), sem(lambda(a3, ?semv(?v, a3, ?semnp)))) →
   V[np](sem(?semv)) NP(sem(?semnp))
4. NP(wh(-), var(?v), sem(pro(?v, ?sempro))) → PRO(sem(?sempro))
5. NP(var(?v), sem(name(?v, ?semname))) → NAME(sem(?semname))
6. NP(var(?v), sem(?semdet(?v, ?semcnp(?v)))) →
   DET(sem(?semdet)) CNP(sem(?semcnp))
7. CNP(sem(?semn)) → N(sem(?semn))

Head_feature(s, vp, np, cnp) = var
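As a hedged illustration, rules 1, 2 and 5 above might be encoded in Prolog DCG notation as follows. The nonterminal names, the app(Predicate, Args) wrapper and lambda_reduce/3 are assumptions of this sketch, not Allen's code; the wrapper is needed because standard Prolog syntax cannot write an application such as ?semv(?v, a2) directly:

lambda_reduce(lambda(X, Body), Arg, Body) :- X = Arg.

% Rule 1: the S sem is the VP sem (a lambda-expression) applied to the NP sem.
% var is a head feature, so the S inherits the VP's var.
s(var(V), sem(Sem)) -->
    np(var(_), sem(SemNP)),
    vp(var(V), sem(SemVP)),
    { lambda_reduce(SemVP, SemNP, Sem) }.

% Rule 2: an intransitive VP's sem is a lambda-expression awaiting the subject;
% app(SemV, [V, Subj]) stands in for the application ?semv(?v, a2).
vp(var(V), sem(lambda(Subj, app(SemV, [V, Subj])))) -->
    v(subcat(none), var(V), sem(SemV)).

% Rule 5: an NP built from a name; the var is passed up from the NAME constituent.
np(var(V), sem(name(V, SemName))) -->
    name_c(var(V), sem(SemName)).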


The var feature


Implementing Features in Prolog


Implementing Features in Prolog 2


Implementing Features in Prolog 3


Lexical Entries with Semantic Information

Like Table 9.2 of Allen
a        det(agr(3s), sem(a))
can      aux(subcat(vp:base), sem(can1))
car      n(sem(car1), agr(3s))
cry      v(sem(cry1), vform(base), subcat(none))
decide   v(sem(decides1), vform(base), subcat(none))
decide   v(sem(decides_on1), vform(base), subcat(pp:on))
dog      n(sem(dog1), agr(3s))


Lexical Entries with Semantic Information 2

Like Table 9.2 of Allen (continued)
fish     n(sem(fish1), agr(3s))
fish     n(sem(plur(fish1)), agr(3p))
house    n(sem(house1), agr(3s))
has      aux(vform(pres), agr(3s), subcat(vp:pastprt), sem(perf))
he       pro(sem(he1), agr(3s))
in       p(pform([loc, mot]), sem(at_loc1))
Jill     name(agr(3s), sem('Jill'))

Lexical Entries with Semantic Information 3

Like Table 9.2 of Allen (continued)
man      n(sem(man1), agr(3s))
men      n(sem(plur(man1)), agr(3p))
on       p(pform([loc, on]), sem(on_loc1))
saw      v(sem(sees1), vform(past), subcat(np))
see      v(sem(sees1), vform(base), subcat(np), irreg_past(+), en_pastprt(+))
she      pro(sem(she1), agr(3s))
the      det(sem(the), agr([3s, 3p]))
to       to(agr(-), vform(inf))
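One possible Prolog encoding of such lexical entries (the lex/3 predicate and the feature-list representation are assumptions of this sketch, not Allen's code) stores each word as a fact lex(Word, Category, Features):

% lex(Word, Category, Features): the word Jill is written as the lowercase atom jill,
% with its sem keeping the quoted atom 'Jill' as in the table above.
lex(a,    det,  [agr(3s), sem(a)]).
lex(jill, name, [agr(3s), sem('Jill')]).
lex(saw,  v,    [sem(sees1), vform(past), subcat(np)]).
lex(the,  det,  [sem(the), agr([3s, 3p])]).
lex(dog,  n,    [sem(dog1), agr(3s)]).

% Example query:
% ?- lex(saw, Cat, Features).
% Cat = v, Features = [sem(sees1), vform(past), subcat(np)].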


Handling Semantic Interpretation

The chart parser can be modified so that it handles semantic interpretation as follows:


Interpreting an example sentence: Jill saw the dog

  1. The word Jill is parsed as a name. A new discourse variable, j1, is generated and set as the var feature of the name (a small sketch of how such variables can be generated follows this list).

  2. This constituent is used with rule 5 to build an NP. Since var is a head feature, the var j1 is passed up to the NP, and the sem becomes name(j1, 'Jill').

  3. The lexical entry for the word saw generates a V constituent with the sem past(sees1) and a new var ev1.

  4. The lexical entry for the word the produces a DET constituent with sem the.
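Step 1 mentions generating a new discourse variable (j1, ev1, d1, ...). Here is a minimal sketch of one way to do this in SWI-Prolog, using the library predicate gensym/2; the wrapper name new_discourse_var/2 is an assumption of this sketch:

:- use_module(library(gensym)).

% new_discourse_var(+Prefix, -Var): produce a fresh discourse variable,
% e.g. j1, j2, ..., for each newly parsed lexical constituent.
new_discourse_var(Prefix, Var) :-
    gensym(Prefix, Var).

% ?- new_discourse_var(j, V).
% V = j1 on the first call (j2 on the next, and so on).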


Interpreting an example sentence 2

  5. The lexical entry for dog produces an N constituent with sem dog1 and var d1. This in turn gives rise to a CNP constituent with the same sem and var, via rule 7.

  6. Rule 6 combines the sems the and dog1 with the var d1 to produce an NP with the sem the(d1, dog1(d1)) and var d1.

  7. the(d1, dog1(d1)) is combined with the sem of the verb and its var by rule 3 to form a VP with var ev1 and sem

    lambda(X, past(sees1)(ev1, X, the(d1, dog1(d1))))


Interpreting an example sentence 3

  8. This is then combined, by rule 1, with the subject NP sem name(j1, 'Jill') to form the sem

    past(sees1)(ev1, name(j1, 'Jill'), the(d1, dog1(d1)))

    and var ev1 (after lambda-reduction). A sketch of this final reduction step follows below.
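A hedged Prolog sketch of that final lambda-reduction, reusing the lambda_reduce/3 predicate and the app(Predicate, Args) wrapper assumed in the earlier sketches (standard Prolog syntax does not allow a compound term such as past(sees1) to be used directly as a functor, hence the wrapper):

lambda_reduce(lambda(X, Body), Arg, Body) :- X = Arg.

% ?- VP = lambda(Subj, app(past(sees1), [ev1, Subj, the(d1, dog1(d1))])),
%    lambda_reduce(VP, name(j1, 'Jill'), S).
% S = app(past(sees1), [ev1, name(j1, 'Jill'), the(d1, dog1(d1))]).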


Completed Parse Tree for Jill saw the dog (Fig. 9.5 Allen)

Semantic parse tree for 'Jill saw the dog'


9.3 Prepositional Phrases and Verb Phrases

Auxiliary Verbs


Prepositional Phrases and Verb Phrases 2


Prepositional Phrases


PP Modifying an NP


PP Modifying an NP 2


PP Modifying an NP 3


PP Modifying an NP 4


PP Modifying an NP 5


PP Modifying a VP


PP Modifying a VP 2

You are now in a position to do all of the exercises at http://www.cse.unsw.edu.au/~cs9414/Exercises/semantics.html


PP Modifying a VP 3

Parse tree for cry in the corner using this rule (like Allen Fig. 9.6, corrected):

Semantic parse tree for 'cry in the corner'


PP as a subcategorized constituent in a VP


PP as a subcategorized constituent in a VP 2


Using the pred Feature in Grammar Rules


Logical Forms of pred(+) and pred(–) PPs


Parse Trees for pred(+) and pred(–)

Semantic parse for decide-on a-couch


Semantic parse for decide on-a-couch


Summary: Semantic Interpretation
The semantic interpretation algorithm is feature-driven: reading the augmented grammar rules right to left provides a description of how to build the SEM feature of the phrase described by the grammar rule. When the semantic description has a gap (as with VP), the SEM feature is a lambda-expression.

CRICOS Provider Code No. 00098G

Copyright (C) Bill Wilson, 2012, except where another source is acknowledged.