
The Computational Future of Experience

I. The Computationalizing of Experience
II. Designing for our New Layers of Reality
III. Metaverse Rules: Systems Designs
Body and Space in Our Realities

IV. Identity in the Metaverse
V. Computational Experience
VI. Nobody Wants to Visit Your Metaverse

A lot has been written about the metaverse as a spatialized version of the internet, an internet 3.0. To truly grasp what that means, however, we first have to understand what the internet means for us now; only then can we understand the impact its evolution will have.


Previous periods of technology have each had a product that defined the category it fell within.

For the current period, that category definer was the iPhone: it kicked off a new interface for connecting to the internet, while also defining a platform that leveraged that interface to offer services to users, the app economy operating within the defined infrastructure of the phone's OS and ecosystem. These interactions weren't redefining the category so much as operating within it.


Web 1.0: static, informational

Web 2.0: social

Web 3.0: spatial? Immersive? Interoperable?

What this means, in contrast to our past implementations of the internet, is that we now have an overlay of technology that can be more deeply integrated with both individuals and with space.


So what really brings these different layers of experience together into one cohesive whole? There are features generally regarded as defining, interoperability, persistence, and embodiment among them. Each is an expression of this integrated computationality of experience.


Fundamentally, perhaps the largest change underlying both the creation of new virtual worlds and the technologizing of our physical reality is the increasing computationalizing of all individual experience. Not only can we build computational models of objects; we are also increasingly able to track the intricacies of the inputs and outputs encountered by individuals across experiences. In a computational reality, everything is data. At any given moment, anything can become a data point.
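
As a minimal sketch of that claim, consider a generic record type: any moment, in any layer of reality, reduced to the same shape of data. The names and fields below are hypothetical, not any particular platform's schema.

```python
from dataclasses import dataclass, field
from time import time
from typing import Any

@dataclass
class DataPoint:
    """A generic record: any moment of experience, reduced to data."""
    subject: str    # who or what was observed
    layer: str      # which layer of reality ("physical", "virtual", ...)
    signal: str     # what was measured ("gaze", "position", ...)
    value: Any      # the observed value itself
    timestamp: float = field(default_factory=time)

# Any moment can be captured in the same shape, regardless of layer:
glance = DataPoint("user-42", "physical", "gaze", {"target": "storefront"})
step = DataPoint("user-42", "virtual", "position", (12.0, 0.0, -3.5))
```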


The ability to track a user's movement through the data fabric of each layer of reality enables us to better support a responsive experience. This computationality underlies the interoperability of an object, because we know the object's data history and its relationship to the individual. It also underlies the ability of the systems behind the metaverse to respond uniquely to each individual.
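
A hypothetical sketch of what that data history might look like: an object that carries its own provenance across worlds, so that a receiving world can read its relationship to the individual rather than a bare asset. All names here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class PortableObject:
    """An object that carries its data history with it across worlds."""
    object_id: str
    owner_id: str
    history: list = field(default_factory=list)  # (world, event) pairs

    def record(self, world: str, event: str) -> None:
        self.history.append((world, event))

sword = PortableObject("sword-7", "user-42")
sword.record("world-a", "forged")
sword.record("world-b", "traded")
# A receiving world can read full provenance, not just a bare asset:
print(sword.history)  # [('world-a', 'forged'), ('world-b', 'traded')]
```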

In experiences that allow a number of points of input and interaction, the user has a lot of agency in their choices. From the perspective of a designer, it's important to find a way to better understand these points of interaction and how they can work together to create a seamless experience. Thus, designing for these immersive contexts becomes fundamentally an information architecture and pattern language question: how are we coding these complex environments implicitly and explicitly?
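
One hedged way to picture that coding question: tag each element of an environment with both its explicit affordances and its implicit cues, forming a small pattern vocabulary. The vocabulary below is invented for illustration.

```python
# A tiny invented pattern vocabulary: each element of an environment carries
# explicit affordances (what it lets you do) and implicit cues (what it suggests).
environment = {
    "doorway":   {"explicit": ["pass_through"], "implicit": ["transition", "threshold"]},
    "bench":     {"explicit": ["sit"],          "implicit": ["pause", "gather"]},
    "glow_path": {"explicit": ["walk"],         "implicit": ["follow_me"]},
}

def affordances_at(element: str) -> list:
    """Everything a user can do, or infer, at a given element."""
    entry = environment[element]
    return entry["explicit"] + entry["implicit"]

print(affordances_at("bench"))  # ['sit', 'pause', 'gather']
```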


And then, simultaneously, how are we qualitatively and quantitatively breaking down the experience of humans within environments that can be computationally modeled? As we reckon with these new mediums, it is equally important to consider what of ourselves we're bringing to them. As the possibilities for interaction increase, it becomes more and more important to understand our user. Why are they making the choices they make within the context of the experience? How can we understand, anticipate, and thus direct them, or customize the experience to their unique needs?


In better designing for these systems, we need to account not just for the complex spatial realities of these worlds, but the fundamental way that these realities are restructuring the organizational networks of society.

Platform governance and the mirrorverse

In physical reality, we contend with a distinction between natural laws, those of physics, chemistry, and the life sciences, and societal laws. We cannot live forever, due to biology and entropy. We cannot fly without aid, due to gravity.


In virtual space, there is a bleed between these two ways of determining what behaviors are allowed. Physical laws in virtual space are determined to some extent by the engine wherein actions happen, as well as by the design of specific worlds within those engines. Mechanically, there is nothing to stop one's embodied avatar from flying within virtual space, except for the designed choice of which aspects of movement and transportation to allow in the virtual world, as well as the constraints imposed by whoever owns it (gating flying, for instance).
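
A sketch of how those layered "laws" might compose, with all names hypothetical: the engine defines what is mechanically possible, the world design narrows it, and the owner gates it.

```python
# What the engine can simulate at all (the "physics" of this reality):
ENGINE_CAPABILITIES = {"walk", "fly", "teleport"}

world_config = {
    "allowed_movement": {"walk", "fly"},  # the designer's choice
    "fly_requires_pass": True,            # the owner's gate
}

def can_fly(avatar_has_pass: bool) -> bool:
    if "fly" not in ENGINE_CAPABILITIES:                  # law of the engine
        return False
    if "fly" not in world_config["allowed_movement"]:     # law of the world design
        return False
    if world_config["fly_requires_pass"] and not avatar_has_pass:  # law of the owner
        return False
    return True

print(can_fly(avatar_has_pass=False))  # False: mechanically possible, socially gated
```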


  1. Affordances of virtual vs. physical space

    1. Hybrid

    2. The Feeling of Being in Space

    3. What are the new third places?

  2. Form, Program

  3. Static vs. Dynamic Systems


Third Places and Localism

Reskinning a space for multiple purposes: our houses now serve multiple purposes, as the spaces in which we live, work, and socialize.

Interfaces and adaptive environments

Add to this the mirrorverse restructuring space to be more efficient, in concert with autonomous transportation.

A core value proposition of Uber, across not just Uber proper but also Uber Eats, is its ability to transform space into a DTC distribution pipeline. This is the connectivity and collapsing of the internet, again mapped onto real physical spaces.

Uber and DTC supply chains: self-driving vehicles

Real Worlds API: Nonscalable Platforms and Placemaking: Commerce, Community, and The New Second Places

Technology

Mixed use developments

We have been speaking of the affordances of individual spaces, but how do these contribute to the creation of social experiences, the unique cultures of a space that are non-collapsible? Culture creates a unique program for space.

These personal histories aren't easy to encode in space. From the perspective of external systems, it is easier instead to create an aggregate.

When we log onto Google Maps, we are able to get general aggregates of foot traffic for different locations, broken down by hour, as well as a live read on whether a place seems "busier than usual." But this doesn't do a good job of telling us what a place means to any one individual.
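
To make the contrast concrete, here is a toy comparison with invented data: the aggregate question a maps product can answer, next to the individual history it cannot.

```python
from statistics import mean

# Aggregates of the kind a maps product exposes (all data invented):
hourly_foot_traffic = {"cafe": [3, 8, 25, 40, 31, 12]}  # visitor counts per hour

def busier_than_usual(counts: list, now: int) -> bool:
    """The aggregate question the platform can answer."""
    return counts[now] > mean(counts)

print(busier_than_usual(hourly_foot_traffic["cafe"], now=3))  # True

# What no aggregate answers: one person's relationship to the place.
personal_trace = [
    {"place": "cafe", "visit": "2023-04-01", "note": "first date"},
    {"place": "cafe", "visit": "2023-06-12", "note": "finished the letter here"},
]
```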

Environment has functioned as a shared interpretive layer, a commonality of language that helps us pin down elements of reality. And given the relative permanence of many natural features of our physical realities, this was an element that could be assumed: "go down two lights and make a right at the Dairy Queen" means something as a statement. But then the internet changed the notion of addresses, and the metaverse brings with it a reality that lets us collapse the way we think about time and space.

Desire lines, persistence of actions, and the stakes of actions

All of this means that replicating systems from physical reality is no longer the intuitive default. How should reality be represented in virtual spaces? There is no one-size-fits-all approach.

Apple's skeuomorphic design offered a pastiche of physical reality: a comfort level, calling on intuitive behavioral scripts that were relatively familiar.

At the same time, these actions aren't embodied in the same way that physical actions are, which means the way we process sensation and experience is different. Embodiment is what makes the experience of existing in physical reality dynamic.

It's also a strong motivator for action.

Maslow's hierarchy of needs: actions are often driven by needs, and those needs are often related, fundamentally, to our corporeal form: food, water, clothing, shelter.


This mapping of perspective and capturing of experience isn't solely useful for directing users to the next task or action.

Embodiment shapes the way that we capture experience. It enables us to create a more complex program of place. It creates built-in feedback for actions, a responsiveness that makes certain activities inherently rewarding.

Layers of sensory experience

Proxemics and the Hidden Dimension

In a very real way, these shape the world that is processed; the relationship is reciprocal.

The emotion cycle


I've always appreciated Christopher Alexander's work around pattern languages as a way to begin breaking down ontologies relating to space. But while Alexander's work has proven to be a useful starting point for creating ontologies, it feels incomplete. Given my background in computational neuroscience, my work has always been concerned with the ways that individual perception shapes epistemology. Under this premise, it becomes important to consider the ways that each individual's internal map of the world is different, shaped by their unique experiences and perspectives.

This idea of unique perspectives seems to run counter to the idea of shared ontologies. However, Kevin Lynch's The Image of the City worked to combine these ideas, by building a pattern language around individual perception. Still, it felt to me like scaling the idea of individual maps built on human perspectives would be quite difficult.

As we see a growing need to proceduralize the mapping of physical worlds and be generative in the creation of virtual worlds, it is crucial to have tools to build out these individual ontologies in a computational way. I appreciated this study, as it offered the tools to link these two scales of expression.


Building on the thoughts mentioned around my case study: I think the ability to capture individual perspectives in current spatial data models is difficult. Although our notions of epistemology are evolving, maps are often meant to represent something resembling a shared version of reality.

An individual perspective and path through space becomes difficult to capture under current GIS systems, because it deals with two elements that only loosely fall under the "objects" and "fields" breakdown: agents and environments. As agents, humans aren't grounded in point space, but are moving through space based on goal-driven behavior. Similarly, their vantage point in making micro- and macro-decisions isn't based solely on task-based needs, but rather on the complex environmental options available to them based on their POV on the world.

For example, if I were on my way to a post-office box to mail a letter, the first part of my decision-making process could be done through tools available in current GIS systems: I could use maps to identify the closest location that might meet my need. But if I had some time, I might not choose the most efficient route. I might choose the route that I know has a tree in bloom I want to see, or one that passes the home of my crush in hopes that I "run into them" on the way.
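
That trade-off can be sketched as a preference-weighted route choice, where personal meaning is worth some number of extra minutes. The routes, features, and weights below are all hypothetical.

```python
# Goal-driven but preference-weighted route choice (all values hypothetical).
routes = {
    "direct":  {"minutes": 10, "features": []},
    "scenic":  {"minutes": 14, "features": ["tree_in_bloom"]},
    "hopeful": {"minutes": 16, "features": ["crushs_street"]},
}

# How many extra minutes each feature is "worth" to this particular agent:
preferences = {"tree_in_bloom": 5.0, "crushs_street": 8.0}

def route_cost(route: dict) -> float:
    """Travel time, discounted by the personal value of what's along the way."""
    bonus = sum(preferences.get(f, 0.0) for f in route["features"])
    return route["minutes"] - bonus

best = min(routes, key=lambda name: route_cost(routes[name]))
print(best)  # 'hopeful': 16 - 8 beats 'direct' at 10 and 'scenic' at 14 - 5
```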

Additionally, even if I were to choose an efficient route, the things I see and notice that build out my "image of the city" are created by the unique combination of me and the environment. It is based on my internal cognitive architectures that shape what I focus on, as well as the environmental features that draw my attention. Although internal cognitive processes are difficult to model, the precise nature of current sensors means that we can model some of these environmental features in our GIS systems.
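
A rough sketch of that combination: sensor-derived environmental features on one side, an individual's attentional weights on the other, and what gets noticed as their product. All values are invented.

```python
# Environmental features a sensor-rich GIS could supply for one street segment:
segment_features = {"neon_sign": 0.9, "street_tree": 0.6, "mural": 0.7, "parked_cars": 0.2}

# One individual's attentional weights, the internal half of the equation:
attention = {"street_tree": 0.9, "mural": 0.8, "neon_sign": 0.1, "parked_cars": 0.05}

# What this person actually notices, and would carry into their image of the city:
noticed = {
    f: segment_features[f] * attention[f]
    for f in segment_features
    if segment_features[f] * attention[f] > 0.3
}
print(noticed)  # {'street_tree': 0.54, 'mural': 0.56}
```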


  1. The future of human-computer interaction is body-environment interaction.

    1. Content and technology intersecting: Becoming about more intimacy

    2. Embodiment

    3. Feedback Loops

The idea of cognitive architectures has been used to break down mental processes to aid in the creation of artificial/synthetic intelligences, but the same processes can also help us better break down the experience of humans within environments that can be computationally modeled. A better understanding of this context can be put to use in multiple ways.

Narrative distinction: user as character, taking on a narrative persona for the experience.

Extrapolating the "black box" of this persona architecture can be done in many ways. The most fundamental bottom-up approach uses user choice to create feedback loops, potentially crafting an emergent profile of the user over time. But as we build more sophisticated experiences and create worlds that are themselves computationally generated (or, conversely, are able to extract more information from our built environment), it becomes easier to build more sophisticated architectures that use information about environments and situations to create a more refined portrait of the user.
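
A minimal sketch of that bottom-up approach, assuming a simple invented trait vocabulary: each choice nudges an emergent profile toward the traits it expresses and lets unexpressed traits decay.

```python
from collections import defaultdict

# A bottom-up profile that emerges from choices (trait names are invented).
profile = defaultdict(float)
LEARNING_RATE = 0.2

def observe_choice(traits: set) -> None:
    """Each choice nudges expressed traits toward 1.0 and lets others decay."""
    for trait in set(profile) | traits:
        target = 1.0 if trait in traits else 0.0
        profile[trait] += LEARNING_RATE * (target - profile[trait])

observe_choice({"explorer"})
observe_choice({"explorer", "collector"})
observe_choice({"explorer"})
print(dict(profile))  # 'explorer' climbs steadily; 'collector' spikes, then decays
```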

What does it mean to define user experience in the context of immersive worlds? Doing so requires an interplay between understanding individual perspective (encompassing persona, POV, and presence) and understanding the context of the environment the individual inhabits (sensation, interactivity, narrativity, etc.).

For each project, putting those pieces together creates bespoke, thematically driven frameworks of inputs and outputs to frame the user experience. Inputs: what features of the world can the individual respond to? Outputs: how can they act on the world?

Defining these features forms a rudimentary cognitive architecture that highlights salient features of the world as viewed through individual personas. These can be useful in designing for a range of character roles, from user-mediated roles to artificial cognitive personalities.
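
Putting the pieces together, a rudimentary persona architecture might be no more than a named set of inputs, a set of outputs, and a policy linking them. The scout persona and its policy below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PersonaArchitecture:
    """A rudimentary persona architecture: the inputs a persona can perceive,
    the outputs it can perform, and a policy linking the two (hypothetical)."""
    name: str
    inputs: set                    # world features this persona responds to
    outputs: set                   # actions it can take on the world
    policy: Callable[[dict], str]  # maps perceived features to an action

def cautious_policy(percepts: dict) -> str:
    """An artificial cognitive personality that avoids perceived threats."""
    return "retreat" if percepts.get("threat", 0) > 0.5 else "explore"

scout = PersonaArchitecture(
    name="scout",
    inputs={"threat", "landmark"},
    outputs={"explore", "retreat"},
    policy=cautious_policy,
)

# The persona only "sees" the features listed in its inputs:
world_state = {"threat": 0.7, "landmark": 1.0, "weather": "rain"}
percepts = {k: v for k, v in world_state.items() if k in scout.inputs}
print(scout.policy(percepts))  # 'retreat'
```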


Within these persona architectures, emotion can be framed in different ways, depending on whether it is viewed as an unconscious sensory process or as something later in the cycle of processing, once we have been able to cognitively frame and label it. Regardless, this interlinked model of world, interaction, and user should account for the notion that emotions arise out of an environmental and embodied context.

In translating this to virtual space, we are handicapped by the fact that we are dealing with a limited sensory palette. Contrary to what most of us learned in kindergarten, we have more than five senses, although sight, sound, smell, taste, and touch function relatively well as large buckets.

Being in Virtual Space. The act of being in virtual space is less inherently rewarding because we are not embodied in being there.

True Virtual Reality
