14:00 - 16:00
Parallel sessions 3
Location: Digital Scholarship Lab (G/F University Library)
Conceptual Art in the Age of Artificial Intelligence
  • Jianru Wu, Hong Kong Baptist University
  • Junyuan Feng, Beijing Normal-Hong Kong Baptist University
  • Yang Wang, Independent
Submission 99
RT01-01
Presented by: Jianru Wu
This paper examines the convergence of conceptual art and generative art through the lens of contemporary artificial intelligence, arguing that AI systems serve as a form of "conceptualized" production that necessitates a reevaluation of both artistic lineages. Historically, generative art, born from the academic field of computer graphics, and conceptual art, with its critique of materiality and of museum mechanisms, have existed in separate spheres. The former was often confined to technical conferences like SIGGRAPH, while the latter circulated within the institutional frameworks of the art world. This paper posits that the rise of large-scale AI models, particularly those based on latent diffusion and natural language processing, dissolves this historical separation by foregrounding the underlying logics shared by algorithms, concepts, and language.

The central inquiry is: How does the "prompt engineering" of AI art not only mirror the dematerialized, instruction-based ethos of historical conceptualism but also function as a critical method of digital humanities (DH)? In the contemporary AI paradigm, the prompt—a linguistic instruction—has replaced code as the primary means of media synthesis. This shift marks a significant moment of remediation, where complex human ideas are translated into a computable format to generate aesthetic outputs. This process is analogous to what Johanna Drucker terms data modeling, the abstraction of features from a phenomenon into a structured form for computation. The artist's prompt acts as a conceptual model that instructs the "machine" of the large language model. This resonates powerfully with Sol LeWitt’s seminal 1967 statement, "The idea becomes a machine that makes the art," which relegated execution to a perfunctory task. Similarly, the collaborative and systems-based logic of Art & Language’s Index 01 (1972) prefigures the relational, database-driven nature of AI.

This paper draws upon a curatorial practice—an exhibition featuring 19 artists and collectives—as its primary research methodology. The exhibition serves as a scholarly interface and a form of argumentation, juxtaposing computational aesthetics with contemporary critiques of technology to test the paper's central thesis. This approach treats the exhibition not merely as a display but as a DH project that performs its analysis through the selection and spatial arrangement of digital and physical objects. The paper is structured into three thematic sections that reflect the curatorial logic.

First, the paper explores the logics that recursion, computation, and systemic design share with conceptual art. I will re-examine the "algorists" (a term coined by Roman Verostko), including Verostko himself, Hans Dehlinger, and Casey Reas. Their work demonstrates how rule-based systems can produce emergent and unpredictable results, challenging the notion of the artist as a solitary genius and reframing generative art as a "performance" of a system. Verostko's "epigenetic art," in which code functions like genetic instructions shaped by environmental randomness, provides a powerful model for understanding the interplay between determinacy and contingency in AI.

Second, the paper moves beyond merely confronting computational limitations to explore how AI's "flaws"—noise, hallucinations, and amplified social biases—are repurposed by artists as a potent critical methodology. In this section, AI's "hallucinations" are reframed: they are not technical failures to be fixed, but unique windows into non-human-centric modes of cognition. An AI hallucination is not a mistake in the human sense; it is an inevitable output generated from the machine's distinct statistical and associative logic, built upon vast training data that is biased and riddled with gaps. These hallucinations reveal how a machine "sees" the world—an alien yet internally coherent visual grammar, distinct from human perception. This re-contextualization of "hallucination" is central to art's critical inquiry in the AI era. Reas's Compressed Cinema (2020) exemplifies this by using GANs not to achieve hyperrealism, but to explore the ephemeral, non-human visualities found within latent space, offering a form of automated media analysis. Documentary artist Li Yifan trains image generators on socio-political archives not to precisely "restore" history, but to leverage the model's hallucinatory capacity to generate a "potential" narrative from archival silences and gaps, allowing obscured fragments to reemerge as spectral artifacts. Artist Li Dan directly confronts AI's hegemonic obsession with "clarity" by creating optical camouflage resistant to algorithmic interpretation—a philosophical gesture aimed at preserving ambiguity and the unseen from computational capture.

Finally, the paper re-examines the problem of authorship in the age of AI. While the internet era already made questions of ownership exceedingly complex, the rapid development of AI presents an unprecedented challenge that demands a response beyond conventional legal frameworks. This section, therefore, shifts the inquiry from law to philosophy, analyzing the issue through the lenses of data construction, the machine perspective, and the very mechanism by which machines come to possess data, all contextualized through artistic creation.

The core of my argument is that AI forces a confrontation with the ontology of the digital object itself. From a digital humanities perspective, data is never a neutral given, but is always "capta"—taken and constructed through an interpretive model. AI art exemplifies this, where both the artist's prompt and the machine's vast, biased dataset constitute a complex act of data construction. Artistic practices that exploit the "machine perspective" make this construction visible. Works like Matthew Plummer-Fernández's Metamultimouse create dual-ontology objects: artifacts that are abstract to humans but remain legible data to a computer. This reveals how a machine "possesses" data—not through legal ownership, but through an exclusive mode of perceptual access to the underlying structure. The algorithm becomes the gatekeeper to one of the object's primary modes of being. In this context, the artwork is no longer a stable, singular entity but a relational process, its identity enacted differently by human and non-human observers. This ontological instability, brought to light through artistic practice, demonstrates that traditional models of authorship are insufficient, compelling us to develop new frameworks that account for the distributed, processual, and machine-mediated nature of creation in the age of AI.

In conclusion, this paper argues that artificial intelligence has the potential to provide a new epistemological framework that merges the instruction-based logic of conceptual art with the systemic processes of generative art. The artistic practices built upon AI systems are defined by a new set of characteristics: they are operable, founded on executable instructions; highly dependent on machine reading, with an ontology split between human perception and computational legibility; and they introduce de-humanized features, such as algorithmic hallucinations and non-human-centric perspectives. By grounding this analysis in a curatorial methodology, the paper also demonstrates how the act of exhibition-making itself can be a potent form of digital humanities research, one that uses aesthetic encounters to probe the profound cultural shifts initiated by artificial intelligence.