Laban’s legacy: making capability legible through design
How a disgraced choreographer contributed to factory efficiency, what that reveals about exposing capability through design, and how we can avoid making the same old mistakes with AI.
In the archives of the New York Public Library, somewhere between the scores of Mahler and the choreography of Martha Graham, is a document that, to the untrained eye, looks like the secret correspondence of two malfunctioning fax machines.
Made up of three vertical staves, black rectangles stacked like bricks, arrows and curlicues, it resembles musical notation but follows no melody.
A casual glance confirms that it is certainly not music.
But it's definitely some form of code. There is clearly meaning there.
It actually describes a dance.
It is Labanotation: a system for writing down human movement. Developed in the 1920s by a choreographer-turned-theorist named Rudolf Laban, it allows complex sequences of movement to be captured on paper. Each symbol denotes a specific action: raise the right arm, turn 90 degrees, shift weight to the left foot. It's choreography written down like sheet music.
The mystic who measured effort
Rudolf Laban was born in 1879 in what's now Slovakia, the son of an Austro-Hungarian field marshal. He was meant to become an architect. Instead, he wandered through the salons of fin-de-siècle Europe. He studied art in Paris at the same time as Kandinsky. He was influenced by a spiritual movement known as Theosophy. He sketched geometric movement charts in notebooks. He dabbled in astrology. At one point, he tried to map the expressive power of the human body onto a cube.
Then, in 1928, he published Kinetographie Laban, the system we now call Labanotation: a visual language that could notate not just steps but direction, weight, time and flow. A jump was not just a jump. It had effort, purpose, trajectory. His students included avant-garde dancers and modernist architects. He ran festivals at Monte Verità, taught movement in Bauhaus-adjacent circles and imagined dance as a force for spiritual and social renewal.
Then he found himself choreographing for the Nazis.
A compromised legacy
In 1934, Laban was appointed Director of Movement for the Third Reich's cultural programme. His approach, mass movement with symbolic form, fitted disturbingly well into the emerging Nazi aesthetic.
But in 1936, he fell out of favour with the Nazi party. His proposed choreography for the Berlin Olympics was rejected by Goebbels for being "too intellectual." In 1937, Laban fled Germany. He arrived in Britain as a compromised figure: unorthodox and politically toxic.
Rather than returning to theatre, he re-emerged in an unexpected place: factories.
Time-and-motion by way of choreography
During the Second World War, Laban was commissioned by the British Ministry of Labour to improve industrial efficiency. Unlike the stopwatch-wielding Taylorists before him, Laban wasn't interested in speed. He was interested in effort.
He believed that all human movement could be classified by four dimensions:
• Weight (strong vs light)
• Time (sudden vs sustained)
• Space (direct vs indirect)
• Flow (bound vs free)
These combined into what he called Effort Actions. A kind of periodic table for movement. A worker lifting crates could be "punching" or "gliding"; a machinist threading wire might be "flicking" or "dabbing." It sounds poetic, but it had practical consequences: he showed that by changing the characteristics of movement rather than the task itself, factories could reduce fatigue, injuries, and downtime.
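Laban never wrote a line of code, of course, but the combinatorics are easy to make concrete. Here is a minimal sketch (purely illustrative, in TypeScript) of the eight basic Effort Actions as they are usually derived in Laban Movement Analysis: combinations of weight, time and space, with flow treated as a separate modifying factor.

```typescript
// Illustrative sketch only: the eight basic Effort Actions as
// combinations of weight, time and space. Flow (bound vs free)
// is the fourth factor, usually treated as a modifier rather
// than part of this basic set.
type Weight = "strong" | "light";
type Time = "sudden" | "sustained";
type Space = "direct" | "indirect";

interface EffortAction {
  name: string;
  weight: Weight;
  time: Time;
  space: Space;
}

const basicEffortActions: EffortAction[] = [
  { name: "punch", weight: "strong", time: "sudden",    space: "direct"   },
  { name: "slash", weight: "strong", time: "sudden",    space: "indirect" },
  { name: "press", weight: "strong", time: "sustained", space: "direct"   },
  { name: "wring", weight: "strong", time: "sustained", space: "indirect" },
  { name: "dab",   weight: "light",  time: "sudden",    space: "direct"   },
  { name: "flick", weight: "light",  time: "sudden",    space: "indirect" },
  { name: "glide", weight: "light",  time: "sustained", space: "direct"   },
  { name: "float", weight: "light",  time: "sustained", space: "indirect" },
];
```

Seen this way, the observation about the crates becomes precise: punching and gliding differ not in the task but in weight and time, and that difference is exactly what Laban could now name, measure and change.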
His 1947 book, Effort and Recovery, written with industrialist F.C. Lawrence, became a cornerstone of early ergonomics. His work moved from choreography to efficiency. In a Britain exhausted by war, Laban helped factories move more effectively.
This was a radical reapplication of his knowledge, skills and experience. The same man who once drew mystical movement mandalas in the Swiss hills was now designing better shift patterns at an aircraft plant in Coventry.
And yet, his deeper goal remained constant:
To name movement.
To give it form.
To make it intelligible and repeatable.
A historical curiosity applied to a modern design problem
It's tempting to think of Laban and his Labanotation as a historical curiosity. Nearly a century later, though, we faced a remarkably similar design problem in creating a platform that used computer vision to power the internet of things.
The platform was called Pelion and its core capability was that it could turn existing CCTV infrastructure (cameras already installed in shops, offices, museums, and cities) into something intelligent. By applying machine vision and edge-based AI, Pelion could observe what was happening in a space, recognise patterns and behaviours, and trigger actions in response.
A supermarket could detect when stock on shelves was running low.
A museum exhibit could respond to visitor movement.
An office building could adjust lighting or heating dynamically.
The system could see. It could interpret. It could act.
But there was a problem.
The platform had been developed by technologists, for technologists; even experienced systems integrators struggled to make sense of it. The underlying power was hidden behind an incoherent interface. It failed to express, explain or expose what the product was capable of and what users could expect it to do.
We couldn't start with screens until we sorted out the semantics.
An ontology of action
We built two things: a lexicon and an ontology.
The lexicon came first. A carefully chosen set of words that would describe the components and concepts of the platform in clear, consistent terms. Was it a detection or a recognition? An event or a trigger? Was it insight, or inference, or just data?
We deliberately chose words for clarity and to define character. We wanted Pelion to be intelligible; to feel purposeful, reliable, human-readable. Every term had to carry meaning and suggest the right mental model. 'Insight', for example, wasn't just a label; it signified the moment raw data became useful: the point at which the system recognised something significant and could act on it.
But naming things intuitively wasn't enough. The next step was the ontology: a framework for how those things relate to each other. What happens before an insight appears? What does an insight trigger? Where do devices live, and how do they connect to spaces? How do users build expectations and what happens when those expectations are violated?
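To make the shape of that framework concrete, here is a rough sketch of the kind of relationships the ontology pinned down. The names are hypothetical, chosen for illustration; this is not the actual Pelion data model.

```typescript
// Illustrative sketch only, not the actual Pelion data model.
// It captures the relationships the ontology had to make explicit:
// devices live in spaces, detections come from devices, insights
// are derived from detections, and insights can trigger actions.

interface Space  { id: string; name: string }          // a shop floor, a gallery, an office
interface Device { id: string; spaceId: string }       // a camera installed in a space

interface Detection {                                   // a raw observation from a device
  deviceId: string;
  label: string;                                        // e.g. "shelf empty", "visitor present"
  timestamp: Date;
}

interface Insight {                                     // the moment raw data becomes useful
  derivedFrom: Detection[];
  summary: string;                                      // e.g. "stock running low in aisle four"
}

interface Trigger {                                     // what an insight sets in motion
  when: (insight: Insight) => boolean;
  action: (insight: Insight) => void;                   // notify staff, adjust lighting, log an event
}
```

Even a toy version like this makes the questions above answerable: every insight is derived from detections, every detection belongs to a device, and every device lives in a space.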
The lexicon shaped the language of the product and the ontology defined its logic. Between them they established consistency. They made behaviours predictable. They enabled us to simplify the interface without removing any capability.
Most importantly, they created confidence.
Revealing capability
In interaction design, Alan Cooper describes the tension between the System Model (how a system actually works) and the User Model (how a person expects it to work). When these diverge, interfaces feel confusing or fragile. When they align, the product becomes intuitive. The designer's job is to bring those models into alignment: to shrink the gap between the internal logic of the product and a user's mental model.
That's exactly what the lexicon and ontology did. They didn't just make the system easier to navigate; they made it possible to understand. They gave users a scaffold around which they could build a correct mental model. One that aligned with how the system actually behaved.
When users could reliably predict what would happen, they felt confident to explore. They didn't need training. They didn't need a manual. They could experiment. They could build.
But more importantly, the ontology made the system's capabilities and features more obvious. It surfaced what the platform could do, which changed the conversation. Users stopped asking "how do I use this?" They started asking "what could I use this for?"
It gave them a way to imagine possibilities in their own domain, on their own terms.
That's what the lexicon and ontology enabled. Agency. Possibility. Intent.
The same old problems, new technology
It makes me wonder: are we making the same mistake again?
Most people currently experience generative AI through a chat interface. It presents itself as a finished product, but it isn't really. It's a brilliantly engineered reference implementation that provides a window onto a broad technical capability. Its job is to provoke conversations about what that capability makes possible. One could argue it's working perfectly. Perhaps too well, because it's distracting us from the real breakthrough: the underlying capabilities themselves.
When we talk about AI, we're not talking about a single technology. We're talking about a layered stack of capabilities: large language models for generating and interpreting text; computer vision systems for recognising objects and movement; speech recognition and generation; recommendation engines; planning algorithms; diffusion models, GANs, VAEs and other generative models for images, video, and audio; knowledge graphs. These systems become useful when they are combined and built upon: when someone designs a product with purpose.
This foundational layer could be directed towards any number of applications. But too frequently we’re still conflating the demo with the destination. We're using the chat interface as if it were the product, and evaluating its worth by what it spits out: a bedtime story, a LinkedIn post, an email reply.
We argue about em dashes or whether an AI-generated portrait in the style of Studio Ghibli is art or whether it's cheating to use ChatGPT to write a wedding speech. These debates aren't about the technology. They're about its most trivial surface applications.
Just like Pelion, AI arrived without its use cases fully defined. Our real task isn't just interface polish. It's structural thinking. It’s defining language and behaviour. It’s aligning system models with user models, so people can understand what AI products are and imagine how they might be used.
So just in case you were wondering: design isn’t dead. It’s as important as ever in the age of AI.