Lab 5

Check the write up instructions first.

Preliminaries

Requirements for this assignment

  1. Make sure you have a baseline test suite corresponding to your lab 4 grammar (the final grammar from the Matrix). Add some (positive and negative) examples of adjectival and adverbial modifiers.
  2. Add adjectival and adverbial modifiers.
  3. If your language has agreement between adjectives and head nouns, implement the appropriate lexical rules for adjectives to model this.
  4. Add demonstratives and markers of definiteness (if any).
  5. Refine the rules allowing for optional arguments (argument drop) to reflect the discourse constraints on when arguments can be dropped.
  6. If there's anything simple that you've been waiting for tdl editing to fix, optionally fix it now. (Keyword here is simple!)
  7. Collect a small test corpus and create a testsuite file for it.
  8. Test!
  9. Write up the phenomena you have analyzed.

Modification

Head-modifier rules

The Matrix distinguishes scopal from intersective modification. We're going to pretend that everything is intersective and just not worry about the scopal guys for now (aside from negative adverbs, if you got one from the customization system).
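In a grammar from the customization system, the intersective head-modifier rules are typically instantiated in rules.tdl from the Matrix types adj-head-int-phrase and head-adj-int-phrase. If yours are missing, a sketch of the instances (the names to the left of := are arbitrary; which orders you need depends on where modifiers attach in your language):

adj-head-int := adj-head-int-phrase.
head-adj-int := head-adj-int-phrase.

These license intersective modification by both adjectives and adverbs; the MOD value of each modifier determines what it can attach to.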

Adjectives

Adverbs

Adjective Agreement

To model adjective agreement, you'll probably want to write lexical rules that inflect the adjectives and constrain the features inside the MOD value, so that each inflected adjective can only modify the right kind of noun.

Below is some general information on writing lexical rules. Please also refer to the lexical rules emitted by the customization system. Adjective agreement lexical rules should be of the "add only" type. Note that if you have an apparently uninflected form, you'll need to make sure it goes through a constant lexical rule (no spelling change) which fills in the relevant feature values.
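For concreteness, here is a minimal sketch of such a rule pair, assuming a hypothetical singular/plural contrast with an overt plural affix on adjectives. The type names (adj-lex, PNG.NUM, etc.) follow typical customization-system output and should be adjusted to your grammar's feature geometry:

adj-agr-lex-rule-super := add-only-no-ccont-rule &
  [ DTR adj-lex ].

; Inflecting rule: the plural affix spelling itself is attached to the
; corresponding instance in irules.tdl via %suffix.
adj-pl-lex-rule := adj-agr-lex-rule-super & inflecting-lex-rule &
  [ SYNSEM.LOCAL.CAT.HEAD.MOD.FIRST.LOCAL.CONT.HOOK.INDEX.PNG.NUM pl ].

; Constant rule: no spelling change, but the apparently uninflected
; form is positively specified as singular.
adj-sg-lex-rule := adj-agr-lex-rule-super & constant-lex-rule &
  [ SYNSEM.LOCAL.CAT.HEAD.MOD.FIRST.LOCAL.CONT.HOOK.INDEX.PNG.NUM sg ].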

Lexical rules


Demonstratives and definiteness

The basics

We are modeling the cognitive status attributed to discourse referents by particular referring expressions through a pair of features COG-ST and SPECI on ref-ind (the value of INDEX for nouns). Here is our first-pass guess at the cognitive status associated with various types of overt expressions (for dropped arguments, see below):
Marker                           COG-ST value    SPECI value
Personal pronoun                 activ-or-more   +
Demonstrative article/adjective  activ+fam
Definite article/inflection      uniq+fam+act
Indefinite article/inflection    type-id

If you have any overt personal pronouns, constrain their INDEX values to be [COG-ST activ-or-more, SPECI + ].
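For instance, a sketch of a pronoun entry carrying these constraints (the stem, the PRED value, and the type name pron-lex are hypothetical; the INDEX constraints are the point):

i-pron := pron-lex &
  [ STEM < "i" >,
    SYNSEM [ LKEYS.KEYREL.PRED "pron_rel",
             LOCAL.CONT.HOOK.INDEX [ COG-ST activ-or-more,
                                     SPECI + ] ] ].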

If you have any determiners which mark definiteness, have them constrain the COG-ST of their SPEC appropriately. For demonstrative determiners, see below.

If you have any nominal inflections associated with discourse status, implement lexical rules which add them and constrain the COG-ST value appropriately.

Note that in some cases an unmarked form is underspecified, where in others it stands in contrast to a marked form. You should figure out which is the case for any unmarked forms in your language (e.g., bare NPs in a language with determiners, unmarked nouns in a language with definiteness markers), and constrain the unmarked forms appropriately. For bare NPs, the place to do this is the bare NP rule (note that you might have to create separate bare NP rules for pronouns v. common nouns in this case). For definiteness affixes, you'll want a constant-lex-rule that constrains COG-ST, and that is parallel to the inflecting-lex-rule that adds the affix for the overtly marked case.
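A minimal sketch of such a parallel pair, assuming a hypothetical definiteness affix on nouns (type names follow customization-system conventions; adjust to your grammar):

; Inflecting rule: the definite affix spelling itself is attached to
; the corresponding instance in irules.tdl via %suffix.
def-lex-rule := add-only-no-ccont-rule & inflecting-lex-rule &
  [ SYNSEM.LOCAL.CONT.HOOK.INDEX.COG-ST uniq+fam+act,
    DTR noun-lex ].

; Constant rule: no spelling change. Here the unmarked form is assumed
; to stand in contrast to the definite form; if it is instead
; underspecified in your language, leave COG-ST unconstrained.
indef-lex-rule := add-only-no-ccont-rule & constant-lex-rule &
  [ SYNSEM.LOCAL.CONT.HOOK.INDEX.COG-ST type-id,
    DTR noun-lex ].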

Some languages have agreement for definiteness on adjectives. In this case, you'll want to add lexical rules for adjectives that constrain the COG-ST of the item on their MOD list.

Demonstratives

All demonstratives (determiners, adjectives and pronouns [not on the todo list this year]) will share a set of relations which express the proximity to hearer and speaker. We will arrange these relations into a hierarchy so that languages with just a one- or two-way distinction can be more easily mapped to languages with a two- or three-way distinction. In order to do this, we're using types for these PRED values rather than strings. Note the absence of quotation marks. We will treat the demonstrative relations as adjectival relations, no matter how they are introduced (via pronouns, determiners, or quantifiers).

There are (at least) two different types of three-way distinctions. Here are two of them. Let me know if your language isn't modeled by either.

demonstrative_a_rel := predsort.
proximal+dem_a_rel := demonstrative_a_rel. ; close to speaker
distal+dem_a_rel := demonstrative_a_rel.   ; away from speaker
remote+dem_a_rel := distal+dem_a_rel.      ; away from speaker and hearer
hearer+dem_a_rel := distal+dem_a_rel.      ; near hearer

demonstrative_a_rel := predsort.
proximal+dem_a_rel := demonstrative_a_rel. ; close to speaker
distal+dem_a_rel := demonstrative_a_rel.   ; away from speaker
mid+dem_a_rel := distal+dem_a_rel.         ; away, but not very far away
far+dem_a_rel := distal+dem_a_rel.         ; very far away

Demonstrative adjectives

Demonstrative adjectives come out as the easy case in this system. They are just like regular adjectives, except that in addition to introducing a relation whose PRED value is one of the subtypes of demonstrative_a_rel defined above, they also constrain the INDEX.COG-ST of their MOD value to be activ+fam.
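A sketch of such an entry (the stem and the type name adj-lex are hypothetical; note that the PRED value is a type, so it takes no quotation marks):

this-dem := adj-lex &
  [ STEM < "this" >,
    SYNSEM [ LKEYS.KEYREL.PRED proximal+dem_a_rel,
             LOCAL.CAT.HEAD.MOD.FIRST.LOCAL.CONT.HOOK.INDEX.COG-ST activ+fam ] ].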

Demonstrative determiners

Demonstrative determiners introduce two relations: this time, the quantifier relation (let's say "exist_q_rel") as well as the demonstrative relation. This analysis entails changes to the Matrix core, as basic-determiner-lex assumes just one relation being contributed. Accordingly, we are going to bypass the current version of basic-determiner-lex and define instead determiner-lex-supertype as follows:

determiner-lex-supertype := norm-hook-lex-item & basic-zero-arg &
  [ SYNSEM [ LOCAL [ CAT [ HEAD det,
                           VAL [ SPEC.FIRST.LOCAL.CONT.HOOK [ INDEX #ind,
                                                              LTOP #larg ],
                                 SPR < >,
                                 SUBJ < >,
                                 COMPS < > ] ],
                     CONT.HCONS <! qeq &
                                   [ HARG #harg,
                                     LARG #larg ] !> ],
             LKEYS.KEYREL quant-relation &
                          [ ARG0 #ind,
                            RSTR #harg ] ] ].

This type should have two subtypes (assuming you have demonstrative determiners as well as others in your language --- otherwise, just incorporate the constraints for demonstrative determiners into the type above).

  1. The subtype for ordinary (non-demonstrative) determiners should add the constraint that the RELS list has exactly one thing on it, by adding the supertype single-rel-lex-item.
  2. The subtype for demonstrative determiners should specify a RELS list with two things on it: the first should have the "exist_q_rel" for its PRED value. (It's already constrained to be a quant-relation because the type norm-hook-lex-item inherited by determiner-lex-supertype identifies the first element of the RELS list with the LKEYS.KEYREL.) The second one should be identified with LKEYS.ALTKEYREL and should be an arg1-ev-relation (the type we use for the relations of intransitive adjectives). The HOOK.INDEX.COG-ST inside the SPEC value should be constrained to activ+fam. Finally, the LBL and ARG1 of the arg1-ev-relation should be identified with the SPEC..HOOK.LTOP and SPEC..HOOK.INDEX of the determiner, respectively. (This will result in the demonstrative adjective relation sharing its handle with the N' the determiner attaches to.)

Make sure your ordinary determiners in the lexicon inherit from the first subtype, and that your demonstrative determiners inherit from the second subtype. Demonstrative determiner lexical entries should constrain their LKEYS.ALTKEYREL.PRED to be an appropriate subtype of demonstrative_a_rel.
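Putting that together, here is a sketch of the two subtypes and a demonstrative determiner entry (the subtype and entry names are hypothetical; the paths follow determiner-lex-supertype above):

; Ordinary determiners: exactly one relation on RELS.
determiner-lex := determiner-lex-supertype & single-rel-lex-item.

; Demonstrative determiners: a quantifier relation plus a
; demonstrative (adjectival) relation sharing its handle and ARG1
; with the N' the determiner attaches to.
dem-determiner-lex := determiner-lex-supertype &
  [ SYNSEM [ LOCAL [ CAT.VAL.SPEC.FIRST.LOCAL.CONT.HOOK [ INDEX #ind & [ COG-ST activ+fam ],
                                                          LTOP #larg ],
                     CONT.RELS <! [ PRED "exist_q_rel" ],
                                  #altkeyrel & arg1-ev-relation &
                                  [ LBL #larg,
                                    ARG1 #ind ] !> ],
             LKEYS.ALTKEYREL #altkeyrel ] ].

this-det := dem-determiner-lex &
  [ STEM < "this" >,
    SYNSEM.LKEYS.ALTKEYREL.PRED proximal+dem_a_rel ].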


Optional arguments

The customization system now includes an argument optionality library which we believe to be fairly thorough regarding the syntax of optional arguments. The goal of this part of the lab (this year!) is therefore to (a) fix up anything that is not quite right in the syntax and (b) try to model the semantics, in particular the cognitive status associated with different kinds of dropped arguments. Regarding (a), if the analysis provided by the customization system isn't quite working, email me and we'll discuss how to fix it with tdl editing.

Regarding (b), you need to do the following:

Note that the Matrix currently assumes that dropped subjects are always [COG-ST in-foc]. This may not be true, especially in various impersonal constructions. If it's not true for your language, please let me know.
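One way to approach (b) for dropped objects, sketched against the Matrix type basic-head-opt-comp-phrase (your grammar may already instantiate this rule, in which case add the constraint there; the in-foc value is purely illustrative, so use whatever cognitive status dropped objects actually have in your language, or leave COG-ST unconstrained if they are not restricted):

; Constrain the dropped complement's referent to be in focus.
head-opt-comp := basic-head-opt-comp-phrase &
  [ HEAD-DTR.SYNSEM.LOCAL.CAT.VAL.COMPS.FIRST.LOCAL.CONT.HOOK.INDEX.COG-ST in-foc ].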


Test corpus

In order to get a sense of the coverage of our grammars over naturally occurring text, we are going to collect small test corpora. Minimally, these should consist of 10-20 sentences from running text. They could be larger; however, that is not recommended unless:

Note also that our grammars won't cover anything for which they lack lexical entries. If you have access to a digitized lexical resource that you can import lexical items from, you can address this to a certain extent. Otherwise, you'll want to limit your test corpus to a size that you are willing to hand-enter vocabulary for. (If you have access to a Toolbox lexicon for your language, contact me about importing via the customization system.)

For Lab 5, your task is to locate your test corpus (10-20 sentences will be sufficient, more if you want) and format it for [incr tsdb()]. If you have IGT to work with in the first place, it may be convenient to use the make_item.pl script to create the test corpus skeleton. (Note that you want this to be separate from your regular test suite skeleton.) Otherwise, you can use [incr tsdb()]'s own import tool (File | Import | Test items) which expects a plain text file with one item per line. The result of that command is a testsuite profile from which you'll need to copy the item (and relations) file to create a testsuite skeleton.

Check list:


Write up your analyses

For each of the following phenomena, please include the following in your final write up:

  1. A descriptive statement of the facts of your language.
  2. Illustrative IGT examples from your testsuite.
  3. A statement of how you implemented the phenomenon, in terms of types you added/modified and particular tdl constraints. That is, I want to see actual tdl snippets with prose descriptions around them.
  4. If the analysis is not (fully) working, a description of the problems you are encountering.

In addition, your write up should include a statement of the current coverage of your grammar over your test suite (using numbers you can get from PyDelphin); and a comparison between your baseline test suite run and your final one for this lab.

Finally, please briefly describe your test corpus, including: where you collected it, how many sentences it contains, and what format (transliterated, etc.) it is in.


Submit your assignment



Course materials borrow heavily from Linguistics 567: Knowledge Engineering for NLP at the University of Washington. Thanks to Emily Bender for letting us use them.