»Our institutions are failing because they are failing to scale.«
Andreas M. Antonopoulos
Ashby’s Law of Requisite Variety is regarded as the basic law of cybernetics, i.e., control (or steering) theory. Put simply, it says: »Do not be more limited than your field of action«.
The most important basis of good control is a relevant information advantage. Accordingly, control is systematically successful as long as it has a better information basis than its field of application.
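In a common information-theoretic rendering of the law (a standard textbook form, not the author's own notation), the residual variety of outcomes $H(E)$ is bounded by the variety of the disturbances $H(D)$ and the variety of the regulator's responses $H(R)$:

$$H(E) \;\geq\; H(D) - H(R)$$

Outcomes can only be constrained as far as the controller's own variety, i.e., its information and response repertoire, reaches: only variety can absorb variety.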
With the exponential development of information technology, however, the information flows in the control environment can be handled less and less well by traditionally successful measures. The weaknesses in the implementation of tried-and-tested principles become increasingly obvious in exponential times.
Depending on the observer's point of view, this leads to useful or harmful imbalances, which can result in the failure of organizations up to macroeconomic scales:
Quite surprisingly, fundamentally new but in essence often astonishingly simple business models successfully prevail against market leaders previously considered unassailable. »Disruption« here is ultimately nothing more than dominantly better competition. The central question no longer seems to be whether, but rather when it will target your own field of business.
The successful new competition regularly makes the leap from underfinanced garage project to billion-dollar valuation within a few years and, after the usual initial difficulties, pushes the old market leaders out of the race seemingly without effort.
What is their secret?
Just as astonishing as these successes is their conceptual simplicity: in the area of process and project organization, for example, Atlassian, originally a two-person project, prevailed with JIRA in several categories against giants such as Microsoft, IBM and Hewlett-Packard. As organizational requirements became more agile (i.e., planning became more decentralized), the established competition was plainly less flexible than the simple, open approach.
Atlassian now has a market valuation in the double-digit billions and has inspired numerous copycats. The basic system is so generic and versatile that it is actually difficult to stereotype its capabilities (it is often simply characterized as bug-tracking software).
Much better known than Atlassian is the popular serial disruptor Elon Musk, who simultaneously took on not only the international automobile industry, which at first seemed overpowering, but also the nationally operated space industry (besides various other pet projects that initially seemed similarly hopeless).
He explains his entrepreneurial approach with first principles:
»Don’t just follow the trend. […] it’s good to think in terms of the physics approach of first principles. Which is, rather than reasoning by analogy, you boil things down to the most fundamental truths you can imagine and you reason up from there.«
In 2008, a simple yet elegant innovation concept for Bitcoin, probably the technically most secure digital money system, was published under the pseudonym Satoshi Nakamoto; its implementation has proved highly robust even against the most powerful attackers. The »honey badger of money« is probably the most attractive, yet also the most insurmountable honeypot for hackers, and it enjoys the best of health despite countless attacks and obituaries: simple empirical dominance regularly beats symbolism and opinionated value judgements.
Bitcoin would potentially be capable of the greatest conceivable disruption on a global scale: after all, money is a central foundation of economic and social systems.
Andreas Antonopoulos is a well-known expert on Bitcoin and autonomous blockchain-based control systems. He explains the phenomenon of organizational control failure and the associated distortions quite fittingly as follows:
»History isn’t continuous. Decades go by when nothing happens, and then decades happen in weeks, and we’re living through that period of change right now.
[…] One of the interesting topics […] is the concept of a black swan: The idea that if you don’t have a sample of something happening in the past, you can’t imagine it happening in the future. […] We’re now living in an era of black swans […and] the internet itself is a machine that generates black swans.
When something happens that is completely discontinuous to our past experience, we try to wrap it in narrative. Narratives that relate it to something we understand, hoping that relating it in that way will help us make sense and also that it will help us predict the future. It will allow us to see more clearly what might be coming next. And of course that’s an illusion […:] the narratives are broken.
The institutions […] have started to fail, and they fail because they don’t scale, not because they’re headed by good or evil people, not because they’re rotten at the core, not because they’ve been taken over by mysterious forces: […] they’re failing because they are unable to scale to the enormous complexity of a modern world that is super interconnected and that exhibits chaotic behavior, and massive information flows that are impossible to process. […]
We now have a narrative machine, and the narrative machine is the internet. It is a machine for producing narratives, and these narratives are instantaneously global, very often viral.
It’s a meme machine, a memetic system that produces narrative. And it produces narrative much faster than any of the previous mechanisms for producing narrative.
Now this is important and it is important for a really simple reason: society is narrative, society is a collection of memes. All of our cultures are just a collection of stories that we have taken down through the generations. And when you have a meme machine operating within a society, then it can rewrite the narrative of society in real time.
Ironically all of this is happening at a time when people are most fearful. They are fearful of things that they do not understand, and in order to understand them, many people ascribe some dark force: ‚They‘.
‚They‘ are conspiring, ‚they‘ are going to vaccinate us all, implant us with chips, spray chemtrails on us or whatever ‚they‘ are doing this week. 5G creating coronaviruses, whatever it is, ‚they‘. ‚They‘ are the mysterious cabal, the conspiracy to control the world, and in every country there might be different ‚they‘. And in many cases ‚they‘ is assigned to government that somehow exhibits incredible ability to make decisions, and then make those decisions become reality through competence and efficient management.
The truth is, ‚they‘ are not in control. The reason they are not in control is because the institutions they use to govern are broken. And so the theme of our era is unprecedented incompetence that emerges from an unprecedented collapse of institutions, that is caused by unprecedented disruption through the sheer scale of […] information flows«.
»Failing to scale« is ultimately just another interpretation of Ashby’s Law.
Now there are numerous causes for a lack of adaptability to changed conditions; in simplified terms, they can be differentiated into »not wanting«, »not being able to« and »not being allowed to«.
In the following, I will concentrate on the more technical »not being able to« aspect and show a simple approach to solving the scaling challenges in the organization of organizations:
The international market for control solutions is worth billions and requires an enormous amount of consulting, especially in the case of Enterprise Resource Planning (ERP). The traditional options span a seemingly irresolvable contradiction between poorly integrated but cost-effective flexibility and expensive standardization that usually fits practical requirements badly and therefore requires correspondingly complex adjustments. Both options therefore rarely get along without each other:
Experience has shown that standard systems are not only very expensive to introduce and operate, but also highly problematic in terms of processes: they regularly leave organizational gaps, which then need to be closed with individual solutions.
So far, the only choice seems to be between the rock of disintegration (individual data processing) and the hard place of standard processes, or appropriate combinations of the two.
This does not mean, however, that the standard-process providers do not try to solve these long-known problems: the main obstacle usually lies already in the basic architecture.
The basic design decisions of a control system set a path that becomes increasingly difficult to change as development progresses. The path dependencies can become so powerful that in some cases the only advisable option is to »throw it away and build it anew« (which is all the more problematic, the more has already been invested). The adaptation of IT-only systems becomes disproportionately more expensive the closer you get to their core; the more non-IT aspects are affected, the more insurmountable the additional resistance to change can become.
For less capital-strong market participants, the path of least resistance regularly consists of throwing good money after bad and hoping that this goes well for as long as possible.
Therefore, the core challenge here too is flexible scalability (»scale invariance«).
In the traditional model, scaling takes place through gradual aggregation of control information, oriented along the organizational structures:
Decision complexity is statistically reduced layer by layer and enriched with additional decision-relevant information (i.e., integrated horizontally). The scaling limits are reached when the organizational context changes significantly and no longer fits the integration structure. In extreme cases, analyses in preparation for decision-making degenerate into tea-leaf-reading hermeneutics and rampant micropolitics.
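As a rough illustration of this layer-by-layer aggregation (my own sketch, not part of the original argument; the department names, figures and structure are hypothetical), the traditional model can be pictured as a roll-up along a fixed hierarchy, which has to be rewritten whenever the organization itself changes:

```python
# Illustrative sketch: traditional control scaling aggregates figures bottom-up
# along a fixed organizational hierarchy (all names and numbers are hypothetical).

org_2019 = {
    "Company": {
        "Division A": {"Plant 1": 120, "Plant 2": 80},
        "Division B": {"Plant 3": 200},
    }
}

def roll_up(node):
    """Aggregate leaf figures layer by layer up the fixed hierarchy."""
    if isinstance(node, dict):
        return sum(roll_up(child) for child in node.values())
    return node

print(roll_up(org_2019["Company"]))  # 400

# If Plant 2 moves to Division B in 2020, the hierarchy itself must be rewritten;
# aggregates computed against the old structure no longer fit the new context.
```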
So what should a zero-based redesign of organizational control systems look like: one that combines the systematic strengths of the extreme scenarios that have so far been difficult to reconcile, while at the same time avoiding their weaknesses?
I propose the following first principles (a minimal sketch follows the list):
- the best statistic is a complete survey
- full vertical integration requires unrestricted availability of basic data
- the basic structure is rooted in networks (all organizational structures can be mapped as special manifestations of a network)
- the modeled structures can be modified collision-free by the system users
- the internal structures are made dynamic, so that not only parameter optimizations but also structural model optimizations can be carried out in real time (which also enables artificially intelligent coordination processes up to autonomous control solutions).
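The following is a minimal, hedged sketch of these first principles (my own illustration; node names, relation types and the »output« figures are hypothetical assumptions, not part of the proposed system): the organizational structure is itself stored as data, so a reorganization is an edit to the network rather than to the aggregation logic, and the complete base data survive it.

```python
# Hedged sketch of a network-first organizational model (all names hypothetical):
# nodes carry the complete base data, typed relations carry the structure.

from collections import defaultdict

nodes = {
    "Plant 1": {"output": 120},
    "Plant 2": {"output": 80},
    "Plant 3": {"output": 200},
    "Division A": {}, "Division B": {}, "Company": {},
}

# Relations are ordinary records; restructuring means editing edges, not code.
edges = [
    ("Plant 1", "reports_to", "Division A"),
    ("Plant 2", "reports_to", "Division A"),
    ("Plant 3", "reports_to", "Division B"),
    ("Division A", "reports_to", "Company"),
    ("Division B", "reports_to", "Company"),
]

def roll_up(target, relation="reports_to"):
    """Aggregate base data along whatever structure the network currently holds."""
    children = defaultdict(list)
    for src, rel, dst in edges:
        if rel == relation:
            children[dst].append(src)
    own = nodes[target].get("output", 0)
    return own + sum(roll_up(child, relation) for child in children[target])

print(roll_up("Company"))                            # 400
edges[1] = ("Plant 2", "reports_to", "Division B")   # reorganization as a data change
print(roll_up("Company"))                            # still 400: the base data survive
```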

Due to the loss-free and collision-free processing of dynamic data networks, the system-internal complexity inevitably becomes extremely high. On the one hand, this complexity can be controlled by simple processing principles; on the other hand, it can be hidden from the interfaces that use it (there is good and bad complexity: good complexity enables scalable control, bad complexity hinders it).
In addition to this loss-free technical complexity reduction, flexibly configurable transparency needs to be implemented: not everything that can be accessed technically should also be available organizationally to every interface, in order to meet privacy, information-security or simply organizational-policy requirements.
Generically simple basic rules for the system behavior thus allow for maximally complex internal characteristics and adaptability, while the whole remains comprehensively controllable through those same simple rules.
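A minimal sketch of such configurable transparency (my own illustration; the roles and visibility rules are hypothetical assumptions, not a real policy): the same underlying network is projected into interface-specific views, so technical accessibility and organizational availability remain separate concerns.

```python
# Hedged sketch: interface-specific views over one underlying data network
# (roles, attributes and figures are hypothetical).

full_network = {
    "Plant 1": {"output": 120, "staff_costs": 55},
    "Plant 2": {"output": 80,  "staff_costs": 40},
}

# Each interface (role, system, partner) gets its own visibility rule.
visibility = {
    "controller":    lambda attr: True,                  # sees everything
    "works_council": lambda attr: attr != "staff_costs", # personnel figures withheld
}

def view_for(interface):
    """Project the same underlying network into an interface-specific view."""
    rule = visibility[interface]
    return {node: {a: v for a, v in attrs.items() if rule(a)}
            for node, attrs in full_network.items()}

print(view_for("controller"))
print(view_for("works_council"))
```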
As an additional benefit, the procedure can be used directly to coordinate artificially intelligent interface systems (even if a universal, strongly artificially intelligent organizational control may well remain science fiction for a long time to come).
The main challenges of the years ahead lie in intelligent process integration and coordination of organizational units that keep pace with the expected exponential development, whatever the scale: my generically simple procedure offers a platform that is maximally flexible, resilient and capable of further development at minimal marginal cost, up to and including its advancement into an independent, artificially intelligent system.
Due to the scale-independent approach, the introduction of such systems is possible in a very soft, consensual way and in small steps at minimal cost.
Thus, no big-bang projects with high implementation risks are necessary; the digitization benefit already arises with each small individual step:
For example, many individual, local applications can be digitized further and then seamlessly combined, integrated and consolidated.
A very simple example is the »decommissioning« of end-user computing (EUC): the procedure enables the integration of individually maintained, distributed expert systems (e.g. special planning and controlling calculations) which so far have been systematically out of scope for further process optimization.
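As a hedged illustration of such an EUC »decommissioning« (my own example; the planning formula, figures and field names are hypothetical), a formerly private spreadsheet calculation can be registered as an ordinary, reproducible calculation node:

```python
# Hedged sketch: a local spreadsheet formula becomes a registered, auditable calculation.

def capacity_plan(demand_units: int, units_per_shift: int = 50) -> int:
    """Shifts needed to cover demand (ceiling division); formerly a private Excel formula."""
    return -(-demand_units // units_per_shift)

registered_calculations = {
    "Planning: shifts Q3": {
        "inputs": {"demand_units": 430, "units_per_shift": 50},
        "result": capacity_plan(430),   # 9 -- now reproducible and integrable like any other record
    }
}
print(registered_calculations["Planning: shifts Q3"]["result"])
```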
The procedure unlocks the next and second-next evolutionary stages in enterprise resource management, and beyond.
Even though individual applications can be very small, the introduction of the procedure has great potential for an overall improvement of the entire organization, with correspondingly massive »legacy effects«. Its successful introduction therefore requires strategic support across the board.
[28.07.2020]
© 2021 Dr. Thomas R. Glueck, Munich, Germany. All rights reserved.