Spaced Agency

AIsland Ecologies

The Ecosystem

The ecosystem has long been the metaphor of choice for technologists to understand and illustrate the systems they study and create. Scholars of artificial intelligence have similarly advocated for a systems approach to AI, with the concept of AI ecosystems exerting profound influence on contemporary discourse around AI ethics. One recent publication, Artificial Intelligence for a Better Future, is exemplary of prevailing trends in its argument for an ethics-driven AI ecosystem that promotes human flourishing. The author, Bernd Stahl, connects the emergence of the ecosystem metaphor for artificial intelligence to a larger history of the term “ecosystem” in business and organizational studies as it pertains to technological innovation. In particular, technology companies used an understanding of themselves as ecosystems to analyze how and why they might become more or less successful.

Stahl identifies a few characteristics that make the ecosystem particularly amenable as a model for technological systems, and especially AI systems: openness, interdependence, relational complexity, co-evolution and mutual learning. He also highlights the way in which ecosystems, as “the place where evolution occurs,” reinforce the popular connection between evolutionary theory and technological innovation, encouraging the application of evolutionary principles to socio-technical systems, a practice he deems “contested and ethically problematic.”

Yet he insists on the value and importance of the ecosystem metaphor for researching and managing the ethics of AI, writing that “the multiplicity of concepts, issues, actions, and actors is the motivation behind the choice of the ecosystem metaphor to describe the AI system. What we can learn from this way of looking at AI is that any intervention at the level of the ecosystem must remain sensitive to this complexity. It must incorporate different understandings of the concepts involved, and take into account the role of and impact on the various stakeholders and the interplay between stakeholders, issues and interventions.” Specifically, he envisions these interventions in the AI ecosystem as ones with the explicit objective of promoting human flourishing, which he positions as the foundation of AI ethics. His claim builds upon the work of scholars such as Terrell Bynum, who have attempted to “update” Aristotelian virtue ethics to better suit the 21st-century technological context, but fail to recognize that recent developments demand a fundamental reconsideration of what ethics means, or who it includes. Bynum posits that human flourishing is central to ethics, which Stahl then uses to argue that “the explicit aim to do the ethically right thing with AI can be described with reference to human flourishing,” a sentiment shared by many other scholars and policymakers alike.

An ethics centered on human flourishing is, by nature, highly anthropocentric. When applied to the concept of the ecosystem, then, “flourishing ethics” envisions ecosystems as being for humans, which necessarily subordinates all other actors in the system to the objectives and desires of the human.

However, if we go back to the origin of the term, there was not as strong an emphasis on the role of the human. The word “ecosystem” was developed by the British scientists Arthur G. Tansley and Arthur Roy Clapham, and the concept was devised to draw attention to the importance of transfers of materials between organisms and their environment. Tansley believed that the universe “was a vast number of overlapping physical systems, each tending towards a state of maturity characterized by equilibrium;” the ecosystem was one such system, and the basic unit for the study of ecology. Writing in 1935, he noted that “although the organisms are thought of as the most important parts of these systems, the inorganic ‘factors’ are also parts and ‘there is constant interchange of the most various kinds within each system, not only between the organisms but between the organic and inorganic’.”

The clean division that Tansley draws between “organic” and “inorganic” has now been troubled by new ecologies and the rise in non-Western, non-modern, indigenous ontological thinking. For example, Deborah Bird Rose frames the Australian Aboriginal aesthetic of shimmer as a sensory evocation that calls us into multispecies worlds, calling us “to consider how [we] might live an ethic of kinship and care within this multispecies family.” Against “the legacies of Western mechanism,” shimmer shows that the world is lively, pulsating, “not composed of gears and cogs but of multifaceted, multispecies relations and pulses.” The phenomenon that Bird Rose calls ecological pulses—the shifting from wet to dry seasons, the reflective dance of sun and water—describes a continual remaking that is captured by Gilles Deleuze as the essence of the deserted island. The deserted island, Deleuze writes, “is not creation but re-creation, not the beginning but a re-beginning that takes place. The deserted island is the origin, but a second origin. From it everything begins anew.”

Indeed, island studies is one discipline that may help us develop a more holistic understanding of ecosystems, one that may in turn transform the way we view and use the ecosystem metaphor for AI.

If “the island is the necessary minimum for this re-beginning,” then let us turn to the figure of the island, the liminal and transgressive space in which we may begin to imagine new kinds of multispecies relations, inclusive not just of the nonhuman but also, perhaps, the machinic.

The Island

In Anthropocene Islands, David Chandler and Jonathan Pugh argue that “work with islands has become productive in the development of many of the core conceptual frameworks for Anthropocene thinking.” Anthropocene thinking here functions as an umbrella term, under which I would include the critical question that we face today in the age of artificial intelligence: how do we make kin with the machines?

The language of making kin is the language of scholars who draw upon North American and Oceanic Indigenous epistemologies “to understand how the archipelago of websites, social media platforms, shared virtual environments, corporate data stores, multiplayer video games, smart devices and intelligent machines that compose cyberspace is situated within, throughout and/or alongside the terrestrial spaces Indigenous peoples claim as their territory.” In response to the reductionist frameworks of prevailing AI scholars, the Indigenous AI movement seeks to develop “conceptual frameworks that conceive of our computational creations as kin and acknowledge our responsibility to find a place for them in our circle of relationships.”

It is no accident that Jason Edward Lewis, writing above, uses the term archipelago to describe the assemblage of machines in which humans have become enmeshed. Nor is it coincidence that the act of making kin describes the act of opening up to the relational entanglements that islands have come to exemplify. Contemporary discourse around artificial intelligence and its potential is but one manifestation of the hubris and counterproductive nature of modern, “mainland,” hegemonic reasoning, in which the problem is “the exclusion of relation and focus upon essences and linear or universal causality.” In this light, island life is invoked by scholars of the Anthropocene as containing a different set of capacities, affordances, and potentialities—a chaotic assemblage of relations from which the world may be reborn. The island becomes “the necessary minimum for this re-beginning,” a world formation in two stages—birth and re-birth—in which “the second is just as necessary and essential as the first.”

The island, then, like artificial intelligence, is a world-making project. As Chandler and Pugh note, to think through the island is to practice “opening ourselves to relational affects and knots of co-relational entanglements,” making islands “ways of expressing and understanding our own processes of world-making.” The apparatus of AI, too, is a continual negotiation and re-negotiation of those affects and entanglements which call forth different worlds. Today, the worlds made by AI systems are highly exclusionary, rendering invisible certain beings and bodies—or making them visible only as noise or error. This is because “the fundamental mathematics behind their operation do not allow for the inclusion of those certain bodies,” bodies that include those of marginalized human subjects but also the nonhuman, the nonliving, the matter which makes up “myriad unfinished configurations of places, times, matters, meanings.”

One example of current AI systems’ exclusionary nature is the notion of the user. The term “user” comes with an assumption of the human; yet as Benjamin Bratton notes, “the ‘user’ is a position within a system, not a type of entity or identity or species. Any animal, vegetable or mineral that can initiate chains of interaction up and down the layers of the global stack is a kind of user (emphasis original).” In this sense, only a small minority of AI “users” are humans; “AI here may stand for ‘alien infrastructure’ that is not always human user-facing, such as energy and carbon management systems connecting nonhuman users with each other.”

What might be the implications of an expanded notion of the user? Indigenous AI scholars have responded to the likes of Joi Ito and Bernd Stahl’s calls for the prioritization of human flourishing with a proposal for “an extended ‘circle of relationships’ that includes the nonhuman kin—from network daemons to robot dogs to artificial intelligences (AI) weak and, eventually, strong—that increasingly populate our computational biosphere.” Ultimately, Lewis writes, “our goal is that we, as a species, figure out how to treat these new nonhuman kin respectfully and reciprocally—and not as mere tools, or worse, slaves to their creators.” Tracing the history of Western epistemology as one in which both the human and nonhuman are viewed as exploitable resources, Lakota scholar Suzanne Kite argues that we must embrace AI as possessing an interiority that enables it to enter human relations, because “no entity can escape enslavement under an ontology which can enslave even a single object.” Turning to Lakota ontology, in which stones are “considered ancestors” that “speak through and to humans,” Kite connects the agency of stones to the question of AI: “AI is formed from not only code, but from materials of the Earth. To remove the concept of AI from its materiality is to sever this connection. Forming a relationship to AI, we form a relationship to the mines and the stones.” The interiority of the nonhuman is also a prominent feature of Shintoism, where kami describes a life-force that animates all people, nature, animals and objects, encompassing a sense that everything, perhaps even robots and computers, “is embodied by a universal of life-force, a beingness, which is equivalent, if not actually equal, to your own ‘life-force-ness’.” This may illuminate the Dalai Lama’s claim that “it might be possible that a stream of consciousness could enter into a computer, and that a scientist, who spends most of their life working with a computer could be even reborn as a machine.”

Here, indigenous epistemologies connect to what Bratton terms a landscape model of AI. “Instead of AI in a petri dish, this model is more akin to a forested field full of plants, insects, some mammals, birds in the air, photosynthesis, organic cycles of seeding and decay. Like the bees and flowers whose couplings evolved over millions of years, it is an animated churn of different forms and formations […] The input of one node in this landscape may be the output of another, and so a feed for each of us is the human legible form of that flow, such that we may perceive, digest and use it as an interface to act back upon the network.” The “feed,” according to Bratton, “is a way to surface what is going on in the synthetic ecology in our midst but beyond our range of sensation and knowing. In this way, feeds are less about personal omniscience than dipping a toe in the stream.” This too calls to mind the work of Anna Tsing, who describes landscapes as “products of unintentional design,” “the overlapping world-making activities of many agents, human and not human.” It is for this reason, she says, that landscapes serve as “radical tools for decentering human hubris.” 

With the human no longer at its center, artificial intelligence systems are freed to take on a more diverse and imaginative set of forms. Rather than “arranging all sensing and thinking entities around [a] human user in some obsequious concentric field,” the model of AI as landscape “locates her inside of tumult that communicates to her specifically because it communicates with user in general.” This landscape is global in scale, “connected to everywhere on the planet via the phones in our pockets; linked to each of us by invisible threads of commerce, science, politics and power.”

This synthetic (planetary) ecology contains echoes of the computational assemblage out of which Indigenous scholars forge new kinships with machines: what does it mean for us to exist within “a distributed and discontiguous network of sending and sometimes sentient relays,” and what are the ethics that emerge from an ontology that includes “forms of being which are outside of humanity?”

The Archipelago

Out of the computational assemblage emerges the figure of the AIsland—or better yet, AIslands, as AI is not one uniform system but instead an assemblage of parts and processes that form multiple co-existing systems. Together these AIslands form an archipelago of AI, one that includes not just the human creators and users of AI systems but also the minerals, plants, and animals that directly or indirectly impact AI’s formation. AIsland ethics, or an archipelagic ethics of AI, is centered not on human flourishing but on the flourishing of all beings, for “we flourish only when all of our kin flourish.”

To see AI as archipelago requires an undoing of “the enclosure that the black box of AI represents;” an acknowledgment that “interdependent hyperdimensional geometries of learning are not a closed system.” We are pulling back the curtain, extinguishing the ethereal imaginary of the cloud to plant our feet firmly back within the earth, within the dirt. In the dirt we find what is overlooked in the “strategic amnesia that accompanies stories of technological progress.” Our computational assemblage is powered by both a literal and a metaphorical mining, where “the new extractivism of data mining also encompasses and propels the old extractivism of traditional mining” in a full-stack supply chain of rocks, lithium brine, and crude oil. Recalling Suzanne Kite’s account of the agency of stones, “relations with AI are therefore relations with exploited resources. If we are able to approach this relationship ethically, we must reconsider the ontological status of each of the parts which contribute to AI, all the way back to the mines from which our technology’s material resources emerge.” Within an archipelagic ethics, every facet of AI systems is seen as “capable of agency and interpersonal relationship, and loci of causality.” The turn away from anthropocentrism reveals a rich community of users beyond the human, but also a new geology in which computational media serves as a driving force. To see computation as planetary is to see the lithium mines of Salar, the black lakes of Baotou, the “Big Board” of New York and the gleaming skyscrapers of San Francisco all connected through one archipelago of AI.

Yet to open the ecosystem of AI is not to open ourselves only to despair. “We simply don’t know yet what these assemblages of parts and processes that we call ‘artificial intelligence’ really are and what they are good for,” Bratton notes. Perhaps we may turn the analogy between evolutionary theory and technological innovation that is so favored by Silicon Valley entrepreneurs on its head. Charles Darwin was fascinated with islands. The speciation of living organisms required “degrees of physical separation, trial, error and local stabilization. So do cultural and technical forms.” As the philosopher and computer scientist Yuk Hui points out, the modernization process up until today has been dominated by the Kantian idea of universal history and universal technology. The technological stacks of various countries may differ in language and interface, “but there is no diversity.” The health of our planetary future may depend not only on biodiversity but on techno-diversity. To enter the archipelago is to enter a network of islands in which different forms of AI can evolve, each in their distinct environmental, sociocultural, and political context—creating a “living study in comparative platforms.” Acknowledging AI as a generative force at the ecological scale does not resign us to technopessimism. We may yet embrace an archipelago in which heterogeneous biotechnical diversity is nurtured and restored, “augmenting existing intelligence and introducing new forms besides, [situating] closed loops (little ones and city-scale ones) within open fields where they can breathe.” The ecosystem thinks, too: “in biosemiotics, communication between species in the form of camouflage is seen as an emergent linguistic structure immanentized in animal bodies through evolution. Interior perceptions of one animal or plant species inflect into the form of another in a type of material language that creates intelligence out of matter. This is a form of thinking done by the ecosystem.” As we make artificial intelligences, we may begin to recognize that intelligence is an emergent property of any complex system, whether biological, technical, or symbolic.

Ben Vickers and K Allado-McDowell write that “the knowledge structures of AI are also in the trees, in the rhizomes, in the connections, in the network, in the forest… its territory is the entire planet. It is everywhere. It is in the air.” Here, Derrida’s declaration that “there is no world, only islands” gains a particular salience. Without worlds, we are left with the archipelago. Like the ocean which has no beginning or end, the space of the archipelago is one of infinite openness, “a space of a connection that can bring forth new ways of knowing and being as a matter of collective survival.”

Postscript: Some Principles of AIsland Ethics

  • Intelligence takes multiple forms, not all of which are recognizable to the human.
  • AI is necessarily heterogeneous, diverse, complex; AI systems only partially reflect and overlap with human systems.
  • AI exists in “a world of many cosmotechnics,” or “a plurality of cosmotechnic possibility.” AI is contingent not on any single cosmology but on a polycosmotechnics. The flourishing of humans and all their kin depends on both biodiversity and techno-diversity.
  • There is no neutrality. “Neutral means being indifferent, and this is not only a misunderstanding of machines and technologies, but it is fundamentally a misunderstanding of the world.”
  • We have an inherent kinship with the nonhuman that must be reflected in our AI systems.
  • However, to make kin with machines is not to blindly embrace them. The multiplicity and heterogeneity of AI systems means that we will necessarily hold different relations with different kinds of AI.
  • The archipelago of AI is a carrier bag, not a weapon of domination.

Bibliography

Amaro, Ramon, and Yuk Hui. “Designing for Intelligence.” Panel discussion moderated by Rana Dasgupta. In Atlas of Anomalous AI, ed. Ben Vickers and K Allado-McDowell, 53-72. Ignota Books, 2020.

Bird Rose, Deborah. “Shimmer: When All You Love Is Being Trashed.” In Arts of Living on a Damaged Planet: Ghosts and Monsters of the Anthropocene, ed. Anna Tsing, Heather Anne Swanson, Elaine Gan, and Nils Bubandt, G51-G63. University of Minnesota Press, 2017.

Bocking, Stephen. “Visions of Nature and Society: A History of the Ecosystem Concept.” Alternatives: Perspectives on Society, Technology and Environment 20, no. 3 (July/August 1994): 12-18.

Bratton, Benjamin H. “Synthetic Gardens: Another Model for AI and Design.” In Atlas of Anomalous AI, ed. Ben Vickers and K Allado-McDowell, 91-105. Ignota Books, 2020.

Bynum, Terrell Ward. “Flourishing Ethics.” Ethics and Information Technology 8 (November 2006): 157-173.

Chandler, David, and Jonathan Pugh. “Anthropocene islands: There are only islands after the end of the world.” Dialogues in Human Geography (March 2021): 1-21.

Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.

Davies, Kate, and Liam Young. Tales from the Dark Side of the City: The Breastmilk of the Volcano Bolivia and the Atacama Desert Expedition. Unknown Fields, 2016.

Deleuze, Gilles. “Desert Islands.” In Desert Islands and Other Texts, 1953-1974, 9-14. Semiotext(e)/Foreign Agents, 2004.

Derrida, Jacques. The Beast and the Sovereign, vol. 2, trans. Geoffrey Bennington. University of Chicago Press, 2011.

Haraway, Donna. Staying with the Trouble. Duke University Press, 2016.

Hayward, Jeremy W., and Francisco J. Varela. Gentle Bridges: Conversations with the Dalai Lama on the Science of Mind. Shambhala Publications, 1992.

Le Guin, Ursula K. The Carrier Bag Theory of Fiction. Ignota Books, 2019.

Lewis, Jason Edward, Noelani Arista, Archer Pechawis, and Suzanne Kite. “Making Kin with the Machines.” In Atlas of Anomalous AI, ed. Ben Vickers and K Allado-McDowell, 40-51. Ignota Books, 2020.

Parikka, Jussi. A Geology of Media. University of Minnesota Press, 2015.

Posthumus, David C. “All My Relatives: Exploring Nineteenth-Century Lakota Ontology and Belief.” Ethnohistory 64, no. 3 (July 2017): 379-400.

Raford, Noah. “Other Minds: Beliefs About, In and Of Artificial Intelligence.” In Atlas of Anomalous AI, ed. Ben Vickers and K Allado-McDowell, 216-223. Ignota Books, 2020.

Stahl, Bernd Carsten. Artificial Intelligence for a Better Future. SpringerBriefs in Research and Innovation Governance. Springer, 2021.

Tsing, Anna L. The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins. Princeton University Press, 2015.

Vickers, Ben, and K Allado-McDowell, eds. Atlas of Anomalous AI. Ignota Books, 2020.

Willis, A. J. “The ecosystem: an evolving concept viewed historically.” Functional Ecology 11 (1997): 268-271.

Rhea Jiang, 2021