Hi! I'm a Ph.D. candidate in the Department of Linguistics at Rutgers University. I am generally interested in the mathematical modeling of context sensitivity in natural language meaning.
In particular, I am interested in linguistic phenomena where an expression exhibits semantic effects beyond its syntactic position, as well as those where the meaning of an expression must be informed by the broader discourse context. Historically, these two kinds of phenomena have not always been treated analogously, but I believe studying them systematically can help us explore two questions: to what extent we can abstract out a core mechanism for interpreting human language in context, whether local or broad, and to what extent that mechanism can be modeled compositionally.
My projects so far have broadened the empirical domain of this issue by revealing the role of context dependence in the semantics of words from a variety of categories that are simultaneously contentful and functional, including count words, attitude verbs, and degree operators, across a diverse set of languages. Together, they advocate for an approach where the denotations of these words invoke variables (of events, alternatives under consideration, etc.) bound in the context.
You can reach out to me at ang.li.aimee@gmail.com.
News
Check out my dissertation (advisor: Simon Charlow).
I will be presenting my work on the cross-linguistic homophonies between comparison, scalar additivity, and continuation at SALT 33 in May.
Research
A new analysis of comparatives
I argue that a comparative meaning that compares structurally derived alternatives provides a better account of the anaphoric dependence of comparatives and a more unified view of comparatives in general.
2022. Dissertation defense.
2021. Internal reading and the comparative meaning. SuB 26.
2020. Anaphora in Comparison: comparing alternatives. SALT 31.
False answers in Quantificational Variability Effects
For sentences like "For the most part, John knows who cheated," I argue that the embedded question interacts with the matrix adverb through event-relative interpretations.
2020. Quantifying over the resolution: false-answer sensitivity in QVE. GLOW 43.
2018. Quantified Exhaustiveness in embedded questions. NYU workshop: foundational topics in semantics.
Semantics of verbal classifiers
I argue that verbal classifiers like ci (Chinese) and time (English) can take scope because they count over properties related to events.
2018. Counting in the verbal domain. CLS 54.
2017. Distributing events: Mandarin verbal classifiers. [email me for the manuscript]
2017. Mandarin verbal classifiers and pseudo-incorporation. NACCL 29.
Web experiment on transparent readings
Two experiments hosted on Amazon Mechanical Turk that show the difference between definite and indefinite noun phrases in their ability to be evaluated at a time different from the topical situation.
Syntax of long passives
I observe that Accusative Case licensing seems to be separated from the introduction of the external argument in Chinese, and argue that this can derive the long-distance dependency in Chinese bei-passives.
2018. Severing the end state: long passives of another type. [email me for the manuscript]
2016. Movement in Mandarin bei-passives. RULing 2016.
Teaching
Department of Philosophy, Rutgers University
Department of Linguistics, Rutgers University
2021 Summer: Intro to Ling Theory. Co-instructor.
2021 Spring: Invented Languages. Instructor.
2018 Spring - 2019 Fall: Intro to Ling Theory. Instructor.
2017 Fall: Intro to Ling Theory. Teaching Assistant.