A model to explore digital relationship dimensions

Relationship Design is the profession of crafting and orchestrating the digital aspects of our lives into meaningful and personalized experiences. That, however, raises the question of what would actually constitute a “good” relationship between a human and a digitally augmented thing or environment, and how it could be measured.

A vision of relatable objects and systems, based on Star Trek TNG

In Star Trek TNG it is interesting to look at the level of intelligence the computer displays, especially when simulating virtual agents in the holodeck. The intelligence of these holographic agents fluctuates wildly, based on whatever fits the desired storyline of the respective episode. But this might actually make sense. If we understand the Enterprise as a cloud data center and “the computer” as the main interface that distributes computing power in a sensible manner, it would allocate only as few cycles as needed to fulfill the intended purpose.

A model that measures the relationship between digitally augmented objects and users

Some aspects of a good relationship are objective and can be clearly defined, measured and evaluated. These aspects can be grouped under the overall notion of “utility”: how well the object or environment does what it is supposed to do according to the intentions of a specific user.


Speed is the end-to-end time from the trigger to the executed action. For digital assistants this might represent the entire journey through the pipeline, from recording the audio through speech-to-text, natural language understanding, business logic, natural language generation and text-to-speech, responding to the user’s intent. This also includes infrastructure times like device boot or wake-up, essentially the time to readiness.
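As a rough illustration, such a pipeline can be instrumented to measure both per-stage and end-to-end latency. A minimal Python sketch, where all stage functions are hypothetical stubs standing in for the real components:

```python
import time

def timed(fn, *args):
    """Run one pipeline stage and return its result plus elapsed seconds."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Hypothetical stubs standing in for real pipeline components.
def speech_to_text(audio):     return "turn on the lights"
def understand(text):          return {"intent": "lights_on"}
def business_logic(intent):    return "done"
def generate_response(status): return "The lights are on."
def text_to_speech(text):      return b"<audio>"

def handle_request(audio):
    """Walk the full pipeline, recording where the time is spent."""
    timings = {}
    text, timings["stt"] = timed(speech_to_text, audio)
    intent, timings["nlu"] = timed(understand, text)
    status, timings["logic"] = timed(business_logic, intent)
    reply, timings["nlg"] = timed(generate_response, status)
    audio_out, timings["tts"] = timed(text_to_speech, reply)
    timings["total"] = sum(timings.values())
    return audio_out, timings
```

The `total` entry is the end-to-end speed the user actually experiences; the per-stage entries show where optimization would pay off.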

Levels of speed

Too effective: The system reacts too quickly to active requests or proactive cues, leading to negative acceptance as users perceive it as too intrusive.


Accuracy means how correctly the user’s intent was fulfilled. Usually this is measured against the defined feature set of the system, where the feature requirements specify what the system is supposed to be able to do, with accompanying test cases to sign off that the feature works as intended.
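One way to make this concrete is a sign-off suite that replays the feature requirements against the system and reports the fraction fulfilled. A small sketch; the utterances, intent names and the `resolve_intent` stub are all hypothetical:

```python
# Hypothetical feature requirements: (utterance, expected intent).
FEATURE_TESTS = [
    ("turn on the lights", "lights_on"),
    ("set a timer for five minutes", "timer_set"),
    ("what is the weather", "weather_query"),
]

def resolve_intent(utterance):
    """Stand-in for the system under test."""
    keywords = {"lights": "lights_on", "timer": "timer_set", "weather": "weather_query"}
    for word, intent in keywords.items():
        if word in utterance:
            return intent
    return "unknown"

def accuracy(tests):
    """Fraction of the defined feature set the system fulfills correctly."""
    passed = sum(resolve_intent(utterance) == expected for utterance, expected in tests)
    return passed / len(tests)
```

An accuracy of 1.0 would mean every defined feature behaves as specified; anything less points at concrete failing test cases.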

Levels of accuracy

On point: The system does exactly what it was asked to do or what it should be doing in specific situations. The user perceives the request as fulfilled.


Robustness is how well an intent can be fulfilled if the request does not follow the defined way of triggering it. In other words: how good the system is at dealing with certain levels of ambiguity.
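One simple way to illustrate this is fuzzy matching against the defined trigger phrases, so that an imprecisely worded request can still be mapped to the correct intent. A sketch using Python’s standard `difflib`; the trigger phrases and intent names are hypothetical:

```python
import difflib

# Hypothetical defined trigger phrases and the intents behind them.
TRIGGERS = {
    "turn on the lights": "lights_on",
    "turn off the lights": "lights_off",
    "set a timer": "timer_set",
}

def robust_resolve(utterance, cutoff=0.6):
    """Map a request to the closest defined trigger, tolerating some ambiguity."""
    match = difflib.get_close_matches(utterance, list(TRIGGERS), n=1, cutoff=cutoff)
    if match:
        return TRIGGERS[match[0]]  # fulfilled despite the imprecise phrasing
    return None  # too ambiguous: the system should ask a clarifying question
```

Here `robust_resolve("turn the lights on")` still resolves to `lights_on`, while an unrelated utterance falls below the cutoff and returns `None`.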

Levels of robustness

Fulfilled: The system is directly able to infer the correct intent based on the incomplete information.


Privacy is about being in control of personal information, matters and relationships. It is a spectrum based on what a society perceives to be appropriate to share or have shared in specific contexts. Privacy goes hand in hand with transparency: privacy covers whether data is gathered and shared, while transparency is about understanding what happened with that data after the fact.

Levels of privacy

Passive: All sensors are inactive or disconnected until the user explicitly activates them, for example via a physical button that provides power to the sensors.
Bonus: Sensors are only active for a short period of time that is required to understand requests and deactivate automatically afterwards.
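The “Bonus” level can be sketched as a sensor gate that powers the microphone only for a short window after an explicit trigger and then deactivates it automatically. A minimal Python sketch; the class and method names are hypothetical:

```python
import time

class GatedMicrophone:
    """Sketch of a sensor that is powered off by default, activates only on an
    explicit trigger, and deactivates itself after a short listening window."""

    def __init__(self, window_seconds=5.0):
        self.window_seconds = window_seconds
        self._active_until = 0.0  # powered off by default

    def press_button(self):
        """Explicit user activation, e.g. a physical push-to-talk button."""
        self._active_until = time.monotonic() + self.window_seconds

    @property
    def is_active(self):
        return time.monotonic() < self._active_until

    def read(self):
        """Only yields audio while the activation window is open."""
        if not self.is_active:
            raise RuntimeError("sensor is powered off")
        return b"<audio frame>"
```

Once the window elapses, `read()` fails again without any further user action, matching the automatic-deactivation behavior described above.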


Transparency is about how well the user understands what the device does: how it understands requests, which information it collects (which is tied to privacy) and what happens with the collected information. It is also about how much control the user has over their personal data and the data they specifically create within the system.

Levels of transparency

Obscured: The system does not communicate or otherwise indicate which data points it collects, when it is collecting them and what it does with them.
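By contrast, a transparent system would let the user inspect exactly what was collected, when, for what purpose and with whom it was shared. A minimal sketch of such a user-facing collection log; all names and fields are hypothetical:

```python
import datetime

class CollectionLog:
    """Sketch of a user-inspectable record of the data a system collects."""

    def __init__(self):
        self.entries = []

    def record(self, data_point, purpose, shared_with=None):
        """Log one collection event with its timestamp, purpose and recipients."""
        self.entries.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "data_point": data_point,
            "purpose": purpose,
            "shared_with": shared_with or [],
        })

    def report(self):
        """What the user sees: every data point, when, why, and with whom."""
        return [
            f"{e['when']}: collected '{e['data_point']}' for {e['purpose']}"
            + (f", shared with {', '.join(e['shared_with'])}" if e["shared_with"] else "")
            for e in self.entries
        ]
```

Surfacing such a report in plain language is one possible way to move a system from “obscured” toward the transparent end of the spectrum.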


Trust builds on privacy and transparency and extends them into an awareness of the implications any action of the system has, whether on the user, their peers and environments, or the world overall.

Levels of trust

Broken: The users cannot comprehend the actions of the system; to them it behaves erratically and unpredictably.

Let’s discuss

Again, while not perfect, this model is a starting point to better understand user-object dynamics in a future, fully digitally augmented world. I do have levels for the secondary aspects as well, but for now I’d love to hear your perspective on the above metrics, and whether you have any models that you use yourself when designing digital assistants or digitally augmented objects.

Living in Berlin / Germany, working at Microsoft, loving technology, society, good food, well designed games and this world in general. Views are mine, k?