With all my work at ManageWithoutThem.com I’m feeling vindicated.
https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale
I often work at the intersection of business architecture, strategic management, and scenario planning. These are each deep specialised disciplines in their own right, but here I offer one perspective on how they must work in unison.

https://matthewdegeorge.substack.com/p/linking-business-architecture-strategic
I found this old presentation I did at Aware Services (now Merkle) about Digital Twins. It was intended as an informal internal information session but is worth publishing here.
Update: I mentioned my fondness for Kanye West above. So I should clarify this was an appreciation of his music – and I was pretty sure at the time he’d somehow redeem himself, but I guess he got worse…
As soon as the OpenAI API started to offer voice and image generation (or at least as soon as I discovered it) I wrote a little Python script to make videos.
The script was simple: you provide a script, it generates the spoken audio from that script, then it looks at the script again as a whole to generate a set of prompts for images. Finally, it generates the images using those prompts.
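For anyone curious, the shape of that pipeline is roughly the following sketch. This is not the original script: the OpenAI calls for text-to-speech and image generation are stubbed out as comments, and the scene-splitting and prompt wording here are simplified stand-ins for what a chat-completion call would produce.

```python
# Illustrative sketch of the script-to-video pipeline, not the original code.
# Only the prompt-preparation step actually runs here; the OpenAI calls
# (text-to-speech, image generation) are indicated as comments.

import re

def split_into_scenes(script: str, sentences_per_scene: int = 2) -> list[str]:
    """Break the narration script into small scenes, one image per scene."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", script) if s.strip()]
    return [
        " ".join(sentences[i:i + sentences_per_scene])
        for i in range(0, len(sentences), sentences_per_scene)
    ]

def build_image_prompts(scenes: list[str], style: str = "clean flat illustration") -> list[str]:
    """Turn each scene into an image-generation prompt. (In the real script a
    chat-completion call drafted these prompts from the whole script.)"""
    return [f"{style} of: {scene}" for scene in scenes]

def make_video(script: str) -> dict:
    # 1. Generate narration, e.g. client.audio.speech.create(...)   (stubbed)
    scenes = split_into_scenes(script)
    prompts = build_image_prompts(scenes)
    # 2. Generate images, e.g. client.images.generate(prompt=p)     (stubbed)
    # 3. Stitch audio + images into a video with a tool like moviepy (stubbed)
    return {"scenes": scenes, "prompts": prompts}

result = make_video(
    "AI will reshape organisations. Most firms are unprepared. Design beats hype."
)
print(result["prompts"])
```

The point of the structure is that the script is read twice: once sentence-by-sentence for narration and pacing, and once as a whole to decide what the visuals should be.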
The first example (above) was the Organisational Reasoning teaser video for a new YouTube channel. There is an irony in this: the whole point of “Organisational Reasoning” is to move beyond the fad of AI content generation and focus on how AI advances will impact the design of organisations.
I was still interested in the more modest aim of being able to “express” pre-existing writings as video and audio. Giving audiences multiple ways to consume content, or being able to quickly publish written words on podcasting platforms, feels like a useful capability for solo creators.
I used the preface of my old draft ManageWithoutThem book for the above example. It worked quite well.
Then recently, while I was attending the virtual Board of Innovation Autonomous Summit, the avatar-based approach of Synthesia.io caught my eye. I was very disappointed with what it did to the script as it tried to turn prose into… whatever this is; but the avatars are very impressive (and I think you can use a script directly if you want).
These tools are flooding in now. Descript emailed me just today about a ChatGPT-enabled tool that pushes the result into their already full-featured editing engine, so you can continue to refine the video with all their standard tools.
A quick example below produces results similar to my original Python script. However, being able to continue editing the video makes it much more powerful.

This is the table of contents for the new book. At least it’s the ToC I am using while I write the first draft.
Part 1: We aren’t all AI Companies
Part 2: Minds and Reasoning
Part 3: Correcting Wrong Turns
Part 4: Rethinking for Co-Working with AI
Part 5: Bringing Back Precision (this is SyFi)
Part 6: Leadership Actions
Part 7: Introduction to WorkSense Services
There is also an associated YouTube channel for this book. At the moment it is experiments and progress reports on the book itself.
I’m also collaborating with Julio Graham on this book, as I have on SenseOfWorkPodcast.com.
Back Cover Text (draft for inspiration while writing first draft)
Preparing for the AI revolution by rethinking organisational design as a metaphor of the mind
Organisational Reasoning rethinks how we design our organisations to incorporate emerging AI capabilities. The organisational challenge of artificial intelligence is not how to manage artificial humans, because we already manage humans today.
Our challenge is to build organisations that have the right mix of all of the available types of intelligence and the communication and control mechanisms required to operate, adapt, and evolve. We have been building our organisations with the assumption that a human-in-the-middle will be sufficient to ensure value and mitigate risks.
There is an opportunity for the integration of AI into our organisations to be a source of human flourishing, to bring back the intrinsic joy of work, and to shift mundane tasks towards discovery and creativity.
But to capture this opportunity we need to build organisations that transcend traditional structures, envisioning organisations as dynamic, intelligent systems capable of unprecedented adaptability and innovation. The metaphor of an organisation as a reasoning mind might be the best tool we have to start this journey.
We know a lot about how organisations work but have dismissed much of the science of organisation design as we focused on human-centric models. This book reveals how embracing these previously sidelined approaches can dramatically enhance our readiness for AI integration, by designing organisations not just with humans in the loop but as collaborative ecosystems where humans and AI amplify each other’s strengths.
At its heart, this is a manifesto for a transformative shift in workforce planning and organisational design. It’s about reimagining roles, relationships, and structures to create a future where organisations and AI collaborate seamlessly, driving towards shared goals with efficiency and creativity.
“Organisational Reasoning” is a guide to navigating the next frontier in organisational development. Prepare to rethink everything you know about organisation design, workforce planning, and the role of AI in shaping the future of work.
Building a customer-oriented business means going beyond the specific business units that have contact with customers. A customer-oriented business is customer-oriented in the back-end as well as the front-end.
But it’s also not enough to declare slogans – “put the customer in the centre of everything we do” – we must invest in specific business capabilities that keep the organisation aligned to the values of our current and future customers.
These 5 principles must be in place across your organisation – and in each case the business unit(s) responsible for promoting and implementing these principles must be identified.
Surveys are frustrating to your customers. Not only that, but they only provide insight after a customer has already had a bad experience. The rest of the time they are just reassuring you that the bulk of your customers, of course, have a positive (or at least not negative) experience.
You need to establish systematic listening platforms that ensure every customer interaction leaves a footprint in your customer data system that can be used to proactively measure how your customers are experiencing both your service channels and your products.
Tight integration between your digital channel systems, your partner systems, and your customer data system allows your organisation to listen to customers systematically and push those insights into service points, performance management processes, and even business cases for new initiatives.
Traditional balanced scorecards attempt to add a customer perspective to your internal, financial, and innovation performance measures. While this addressed an important gap in measurement, it left the customer perspective as a separate and distinct dimension competing with other measures.
How you measure your performance should be unique to your operating model, not based on a selection of generic or vanity metrics that only measure what you know you are investing to improve. If you have a unique set of performance measures that make specific sense to your business model, you are then measuring to differentiate rather than measuring to compare.
For the customer measures this means evaluating all internal, financial, and innovation measures from the customer’s perspective – not just adding new customer-specific measures. We recommend establishing a “customer-return on operations” metric that uses a secret formula, specific to your business, that aggregates these relationships into a single number.
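The formula itself will be specific to each business, but to make the idea concrete, here is one purely illustrative way such an aggregate could be composed. The measure names, scores, and weights below are all invented for illustration:

```python
# Purely illustrative "customer-return on operations" aggregate.
# The measure names and weights are invented; the point is only that
# internal, financial, and innovation measures are each re-expressed
# from the customer's perspective before being combined into one number.

def customer_return_on_operations(measures: dict[str, float],
                                  weights: dict[str, float]) -> float:
    """Weighted average of customer-perspective scores (each scored 0-1)."""
    total_weight = sum(weights.values())
    return sum(measures[name] * w for name, w in weights.items()) / total_weight

# Each score asks: how does this measure look from the customer's side?
scores = {
    "orders_fulfilled_as_promised": 0.92,   # internal measure, customer view
    "price_vs_perceived_value": 0.70,       # financial measure, customer view
    "new_features_actually_adopted": 0.55,  # innovation measure, customer view
}
weights = {
    "orders_fulfilled_as_promised": 0.5,
    "price_vs_perceived_value": 0.3,
    "new_features_actually_adopted": 0.2,
}

print(round(customer_return_on_operations(scores, weights), 3))
```

The design point is that every input is an internal, financial, or innovation measure re-expressed from the customer’s perspective, not a new customer-only metric bolted on the side.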
Popular agile delivery approaches focus on maximising the ability to change as requirements change. This approach has improved IT / Business alignment and created significant improvements in throughput and cost control.
But the most important feedback loop isn’t between your IT department and your other business units, it’s between your business capabilities and your customers. Focusing on agility at the expense of adaptable products and services means that continuous delivery is the only solution to an evolving understanding of customer needs.
Design all processes, systems, and products with the assumption that they will be personalised and must adapt as they are used. This approach extends the techniques your technical teams call “agile” and “DevOps” beyond Business / IT alignment to Business / Customer alignment.
First generation “Digital” teams focused on the digital channel. These teams established new digital capabilities that mirrored customer-centric online start-ups. This approach left digitisation of back-end systems and processes to other teams.
Organisations with mature digital channels know that it’s back-end systems and legacy processes that now have the greatest impact on customer engagement. As organisations re-integrate their digital teams into their core operations they can address these legacy challenges and improve both customer and employee engagement.
Double-sided digital teams have customer journeys on one side and employee journeys on the other. Digitalisation initiatives then become an orchestration between these two sets of journeys, enabled by data, analytics, and digital backbones connecting both business processes and IT systems.
Organisations that take their agile initiative from the IT department to the enterprise level make a significant change in the approach. Where agility from an IT perspective means maximising throughput and leaving the prioritisation of value to others, an enterprise level approach must address value discovery.
The types of coaches and delivery partners you needed to establish IT-driven agility will be different to the types of coaches you need for Continuous Delivery Mark II. Bottlenecks will no longer be technical, or resource-based. Bottlenecks will be in systems design and in the evaluation of experiments.
“Fail fast” techniques need to be shifted to “fail local” and for some markets and customer segments “don’t fail at all”. This integration of business risk into customer-facing innovation will be the ultimate intersection between human and machine intelligence for modern corporations.
Part of The Beginning and The End of Information Management.
Organisations have a habit of creating silos and then being delighted and self-congratulatory about the value of bringing them back together.
We forget that “silo” is just another name for “team” and your organisation will always be made up of more than one team. Sometimes a “silo” is a team that doesn’t play well with other teams, and sometimes a “silo” is just a team that shouldn’t exist. If you have the wrong portfolio of teams they will all be considered silos, but that’s not an attribute of each of the teams; it’s an attribute of the structure itself.
This particularly impacts data management because most of the high-value data management effort will be cross-business unit effort. If data management could be performed in a single business unit it wouldn’t be difficult – it’s always something that involves multiple business units.
We build functional organisations and then marvel at the value of cross-functional teams. Henry Mintzberg has mused that we structure our organisations like we structure our business schools. We have HR departments, and Finance departments largely because we have business schools that make people specialise in Human Resources and finance, and they then want to build organisations that they can shine in.
This, to me, is part of a broader problem of organisations managing themselves for the benefit of their managers rather than rationally, or at least cohesively. That is a topic for the Manage Without Them book and ManageWithoutThem.com. But it’s important to understand the principle, often referred to as Conway’s Law, that “Any organisation that designs a system will produce a design whose structure is a copy of the organisation’s communication structure”1
Here we address the first problem with typical approaches to information management. If you haven’t heard it already, lots of people will soon want to talk to you about “metadata”.
In the strictest definition, metadata is simply “data about data”. It’s descriptive of the content, rather than the content itself. It’s actually a reasonably useful concept when you first think about it because you need to be able to describe your data; including its features and business context.
However, there are two problems with the concept of metadata. I think both of these problems cause the word “metadata” to effectively kill any conversation it’s included in.
Metadata is broad. So it can be used as shorthand. This is the first problem. There are a number of very different types of information that can be classified as “metadata” so any time one of these is referenced, or a question is raised about how it might be managed – it’s summarised as “that’s metadata management”, which is close to meaningless.
But it gets worse. Because of Conway’s Law (I presume) we split metadata into “business metadata” and “technical metadata” at the first level. By “at the first level”, I mean if you were to structure the different types of metadata as a hierarchy with a series of branches, you end up with the first split being “business” versus “technical”.
When you are trying to get two groups to work together, the worst thing you can do is tell them explicitly as your first action that there are things of concern to you and there are things of concern to “them”. The above view of metadata – splintering in the first instance to “business” and “technical” does exactly that.
It’s worse when you consider how broad the concept of metadata actually is. It’s too broad, and again that’s the first problem. That this breadth of meaning can first be used to encompass just about everything and then be used to split everything down the middle is the second problem.
If you immediately split everything down the centre, without first creating a set of layers that all groups are obliged to develop a shared understanding of, you destroy collaboration. You are basically telling business units and I.T. to work in separate silos. You’ve used your best chance at promoting collaboration to destroy it.
The first step is to ban the use of the word “metadata” from your information management initiative. Kill it now. This will force you to be specific. Say what you mean. Write what the person asked about in your notes rather than abbreviate it to “metadata”. Never think you are adding clarity by making a vague distinction between “technical” and “business” metadata, because you aren’t thinking hard enough when you do this, and you certainly aren’t setting yourself up for effective change and transformation.
Even though you shouldn’t be saying “metadata” at all, you still need to consider what the uppermost split in concepts should be. If not “technical” and “business” what should the top levels of a conceptual hierarchy actually be?
For starters you might consider using the top levels of your hierarchy to separate the different data assets you have! That would be a good start for an information management initiative.
In reality it’s not helpful to think of a hierarchy at all. There are more interesting and useful relationships between different parts of your information environment that will feel natural once you understand them.
Though if you wanted to show the top-level split of the sort of “data about data” you need to maintain it would be more like this:
The concept of “your Enterprise Information Model” is covered later in this book. It covers the areas above – not as a hierarchy but as a set of artefacts that you’ll have to build, revise, and maintain to promote collaboration around your data assets.
In some ways the above view is more complicated than the first view of metadata split neatly into business and technical metadata. But it’s also more specific to your organisation, it’s richer in what it conveys. It also promotes collaboration by not splitting into business and technical categories until the lowest possible point (if at all).
By allowing the complexity of your information to be captured this approach provides you with a valuable tool for managing that complexity. If you don’t start managing complexity – rather than avoiding it – your information management initiative will fail to deliver the promise of value that initiated it.
There is an interesting role in many IT-driven data governance initiatives. The role is “data custodian”. It’s very strange that this role exists and that we spend time talking about it but it’s also very revealing that we do.
Basically, the definition of a “data custodian” is as follows (by example from Wikipedia): “… Data Custodians are responsible for the safe custody, transport, storage of the data and implementation of business rules…”.
The definition of “data custodian” is always made as distinct from the definition of “data owner” or “data steward” and this is critical to the definition. The distinction being made is between the person responsible for the data itself, and the person who is “just” responsible for something like the movement of the data, or perhaps the repository that the data is stored in.
The reference to “business rules” is a reference to rules defined by others and only “implemented” by the “data custodian”. This concept makes it clear that these are instructions that are followed and defining these rules is not the role of the “data custodian”.
So we have a role defined in many IT-driven data governance approaches that basically means “somebody not responsible for the data”. Data governance is a subset of governance, so one of the main reasons you implement a data governance framework is to determine who is responsible for the data. Having a clear and well defined role that ensures we know who isn’t responsible for the data seems like a strange place to start or to even include at all.
It gets worse. It also turns out that “data custodian” isn’t a role at all. It’s actually shorthand for a group of roles, typically assumed to be part of the IT department, that are all equally not responsible for the data itself. These roles might include data modellers, data architects, product owners, scrum masters, or any other roles that are clearly not responsible for the data itself.
If you are talking about data custodians in your data governance framework you are taking an IT-centric approach – you should remove the concept from your approach.
It would be better to focus on giving accountability and responsibility for data – and then shifting budget and resources to reflect that accountability – rather than wasting time defining who isn’t responsible.
Most of the roles that are bundled into the definition of “data custodians” are better thought of as part of the approach for implementing “data services”. If you design and implement data services, with clear service management, many problems about accountability will go away.
Operating model design benefits from an initial workshop to understand the context of the operating model. These things will come up naturally in any discussion with the executive team. However, if you want to use a canvas-style approach to ensure all areas are covered, the below canvas is a good starting point:

Update: I get it now! See comment here: https://statisticsbyjim.com/fun/monty-hall-problem/#comment-9601
Like everybody else, my head spins thinking about the Monty Hall Problem. And like everybody else my intuition gives me the “wrong” answer.
For those not familiar with the problem, the unintuitive solution, and the logic behind the solution, this is a great overview: https://statisticsbyjim.com/fun/monty-hall-problem/
Although that explanation is excellent, I think it’s wrong. I know that means I’ll get lumped into all of the other people who don’t get it – so I’m going to briefly explain why I think it’s wrong.
Everybody who explains the solution goes to great lengths to explain the probabilities at the beginning of the game, the number of possible games, and how it changes because you know the host is going to reveal an empty door at some point in the game.
This is all true and I agree with it all – but I am drawing different conclusions from it. So I think the other explanations are wrong in two ways:
1. It is proposed that at the beginning of the game there is a one in three chance of choosing the correct door. This is true, but also at the beginning of the game you already know you will reach a point where a door is open, it’s not the door you have chosen, it doesn’t contain the prize, and there is only one other door. This means at the beginning of the game you already know you have a one in two chance of having already chosen the correct door. This is the starting probability of choosing the correct door.
2. The explanation at the link above also says there are only nine possible games. This isn’t true. There are 12 possible games. If you say there are only nine possible games you are missing the three scenarios where you have already picked the right door; each of those scenarios has two possible games, as the host could open either of the other two doors. So, while you know the host will always open a door, you can’t ignore the additional three games. This is because all of the games you are ignoring are the games where changing doors will cause you to lose.
The total 12 possible games are shown below, taking the player’s initial pick as door 1 (the picks of door 2 and door 3 are symmetric):

Prize behind door 1: host opens door 2 (staying wins) or door 3 (staying wins)
Prize behind door 2: host opens door 3 (staying loses)
Prize behind door 3: host opens door 2 (staying loses)

That is 4 games per initial pick, or 12 games in total: 6 games where staying with the same door wins, and 6 games where staying with the same door loses.
If you start ignoring games “because they don’t matter” then they don’t matter at the beginning of the game and your starting probabilities change.
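Given the update above, the easiest way to settle the counting argument is to simulate the game directly. A minimal sketch, assuming the standard rules (the host always opens a door that is neither your pick nor the prize):

```python
# Monte Carlo simulation of the Monty Hall game under the standard rules:
# the host always opens a door that is neither the player's pick nor the prize.

import random

def play(switch: bool, rng: random.Random) -> bool:
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens one of the remaining non-prize doors (chosen at random
    # when the player has picked the prize and two doors are openable).
    openable = [d for d in doors if d != pick and d != prize]
    opened = rng.choice(openable)
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(42)  # fixed seed so the run is reproducible
trials = 100_000
stay_wins = sum(play(switch=False, rng=rng) for _ in range(trials))
switch_wins = sum(play(switch=True, rng=rng) for _ in range(trials))
print(f"stay:   {stay_wins / trials:.3f}")
print(f"switch: {switch_wins / trials:.3f}")
```

Over many trials the stay win rate comes out close to 1/3 and the switch win rate close to 2/3, which are the frequencies the standard explanation predicts.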