against Engelbart

or, The Computing Platform is Too Narrow

The current mainstream computing UI paradigm rests on assumptions that go back to 1968 (Engelbart’s “Mother of All Demos”) or earlier:

  • there is a hierarchical model leading to a single source of truth for what is happening

  • a single user owns and fully controls that model

  • the display is a fixed view directly into a region of computer memory that the machine fully controls

  • the inputs are represented precisely as data objects (mouse cursor, keyboard stream)

but as soon as we step outside our existing use cases, we find ourselves reaching towards a new reality:

  • computing state is formed by consensus

  • multiple users operate on the same shared state; conflicts need to be resolved

  • output - the machine’s actions on the world - is contingent, uncertain; results need to be verified, adjusted

  • input - the machine’s perception of the world - is ambiguous; inputs need to be estimated, interpreted


The biggest difference here is that the old system has no feedback. The computer never asks “is this right?”; the computer simply is what it is, and the human operates it “directly”.

I believe I may be rediscovering Cybernetics.

Traditionally, the way to expand computer mediation was to use increased precision to make more things fit the paradigm. This is why we’re using printers that speak dots-per-inch rather than plotter arms that can move in arcs. It is also why 3D printers often suck so much: there is a calibration process, but ultimately they are blind, and if things start to drift from that calibration, the machine does not know it. We’re reaching the limits of this approach.

In the future, humans and machines will work together to operate in and on the world. The machines will have to work the same way we do - looking at things from different angles, trying things out. Estimating, checking, testing models. Revising.
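To make the contrast concrete, here is a minimal sketch in Python; move_arm and observe are hypothetical stand-ins for real hardware, not any actual API. The old paradigm commands blindly and assumes success; the new one acts, observes, estimates the error, and revises.

    import random

    _arm_position = 0.0  # hidden world state the machine cannot read directly

    def move_arm(target: float) -> None:
        # Hypothetical actuator. The result is contingent: the arm may
        # undershoot or overshoot, and the machine is not told which.
        global _arm_position
        _arm_position += (target - _arm_position) * random.uniform(0.7, 1.1)

    def observe() -> float:
        # Hypothetical sensor. The reading is ambiguous: it carries noise
        # and must be treated as an estimate, not ground truth.
        return _arm_position + random.gauss(0.0, 0.02)

    def open_loop(target: float) -> None:
        # Old paradigm: calibrate once, command blindly, assume success.
        # If the world drifts, the machine never knows.
        move_arm(target)

    def closed_loop(target: float, tolerance: float = 0.05, tries: int = 20) -> float:
        # New paradigm: act, check what actually happened, revise.
        estimate = observe()
        for _ in range(tries):
            if abs(target - estimate) < tolerance:
                break             # verified: close enough to the goal
            move_arm(target)      # try something
            estimate = observe()  # look again
        return estimate

    print(closed_loop(1.0))  # converges despite noisy actuation and sensing

The point of the sketch is that closed_loop never trusts a single command or a single reading; it keeps an estimate and keeps checking it against the goal, which is exactly what the open-loop printer cannot do.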


Some industry domains are already pushing into the new paradigm, but in weird and awkward ways:

  • VR - Virtual reality spaces have to be, to some degree, consensus-based

  • Crypto - Blockchain is shared data, edited communally according to consensus rules

  • Robotics (e.g. self-driving cars) - I/O is cameras and real-world actions with unpredictable results

but none of that is general-purpose! I want something that I can have on the scale of the personal computer, of home computing.


I want motors, lights, toys, cameras, gadgets, instruments, projectors, sensors, and actuators that interact together, that I can use in a real physical room, that all share a communal model of What Is Going On, that I can virtually wire together or code against, where I don’t have to invent the relationships from scratch every time. I want Augmented Reality, yes (though I definitely prefer the Dynamicland projectors to the Magic Leap glasses), but it’s not just that I want to see a UI over the world; the reality of the world and the reality of the machine need to be intertwined.


What I want is Cybernetic Reality.
