Submission 4
Consensus and Creation: A New Framework for Algorithmic Criticism
SP07-01
Presented by: Grant Hamilton
Literary-historical analysis has long relied on predetermined categories such as genre, period, and theme to organize and interpret texts. While indispensable, these frameworks have also inadvertently silenced corpora by obscuring the interpretive context immanent to the structure of the corpus itself. This paper introduces “consensus communities,” a computational framework specifically designed to listen to a literary corpus on its own terms, revealing patterns of relation that have remained imperceptible to traditional critical methods. To achieve this, the methodology brackets established categories and operates instead through a consensus clustering of stable textual communities. This process synthesizes three distinct analytical dimensions of a text’s relationship to its corpus: its structural positionality (network centrality), its moments of semantic novelty (semantic disruption), and its deep patterns of relational affinity (community dynamics). By integrating these dimensions, the framework grants a corpus the latitude to articulate its own internal logic. When applied to a test corpus of 142 twentieth-century Nigerian novels in English, this approach reveals an emergent structure of six distinct communities that challenge conventional literary-historical classification. These are not simply thematic or stylistic groupings but mathematically defined constellations of textual affinity.
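The abstract does not specify an implementation, but the consensus-clustering step it describes can be sketched in outline. The following is a minimal, hypothetical illustration only: it assumes each analytical dimension (network centrality, semantic disruption, community dynamics) has already been reduced to a pairwise distance matrix over the corpus, uses stand-in random data for a toy corpus of eight texts, and makes an assumed choice of average-linkage hierarchical clustering; the paper's actual measures and clustering procedure may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_texts, k = 8, 3  # toy corpus size and cluster count (illustrative only)

# Stand-in distance matrices, one per analytical dimension; in the
# framework these would be derived from the texts themselves.
def random_distances(rng, n):
    d = rng.random((n, n))
    d = (d + d.T) / 2          # symmetrize
    np.fill_diagonal(d, 0.0)   # zero self-distance
    return d

views = [random_distances(rng, n_texts) for _ in range(3)]

# Step 1: cluster each dimension independently.
partitions = []
for d in views:
    Z = linkage(squareform(d, checks=False), method="average")
    partitions.append(fcluster(Z, t=k, criterion="maxclust"))

# Step 2: build the consensus (co-assignment) matrix -- the fraction
# of dimensions in which two texts fall in the same cluster.
co = np.zeros((n_texts, n_texts))
for p in partitions:
    co += (p[:, None] == p[None, :]).astype(float)
co /= len(partitions)

# Step 3: cluster the consensus matrix itself, yielding communities
# that are stable across all three views.
Zc = linkage(squareform(1.0 - co, checks=False), method="average")
communities = fcluster(Zc, t=k, criterion="maxclust")
print(communities)
```

Texts that co-cluster regardless of which dimension is consulted end up with high consensus affinity, which is one way of operationalizing the "stable textual communities" the abstract describes.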
Beyond identifying these corpus-level structures, the framework’s primary innovation lies in its ability to translate such communities into a powerful interpretive lens for close reading. For each consensus community, the model extracts a unique mathematical signature representing its deep formal and semantic patterns. It can then identify specific passages within any given novel that most powerfully resonate with this signature. An analysis of Chinua Achebe’s Things Fall Apart, for instance, highlights passages that, while relatively unremarkable to conventional criticism, are computationally central to the novel’s participation in its consensus community. The algorithm does not interpret these passages. Rather, it presents them as empirical evidence, effectively asking why they are essential for understanding how the corpus organizes itself.
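The signature-and-resonance step can likewise be sketched under stated assumptions. The toy code below assumes texts and passages are represented as numerical embedding vectors (the abstract does not say how its signatures are computed), takes the community signature to be the mean of its member embeddings, and scores passage resonance by cosine similarity; all data here is random stand-in data, and both modelling choices are illustrative rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16  # embedding dimension (arbitrary for this sketch)

# Stand-in embeddings: ten texts already assigned to one consensus
# community, and five candidate passages from a single novel.
community_texts = rng.normal(size=(10, dim))
passages = rng.normal(size=(5, dim))

# Community "signature": here simply the mean member embedding
# (an assumed aggregate, not necessarily the paper's).
signature = community_texts.mean(axis=0)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank passages by how strongly they resonate with the signature.
scores = [cosine(p, signature) for p in passages]
ranking = np.argsort(scores)[::-1]
print(ranking[0])  # index of the most resonant passage
```

The top-ranked passages are exactly the kind of computational evidence the abstract describes: passages selected not for conventional critical salience but for their alignment with the community's mathematical signature.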
Such computational evidence presents a challenge that existing critical vocabularies may be ill-equipped to meet. At this juncture, I argue, the literary critic is repositioned as a Deleuzian philosopher who is tasked with developing new concepts and a new critical language by which to understand how such passages function as the load-bearing pillars of the corpus’s architecture. In this way, the consensus communities framework transforms both how the literary critic maps literary history and how she practices close reading. Crucially, it does not seek to replace the critic but to augment critical practice by providing robust, falsifiable evidence of textual relationships that are otherwise invisible. It presents literary studies with a profound opportunity: to allow the immediate literary context of a text to speak for itself.