Software development — by 2040
The world is going through upheavals, triggered by the rapidly accelerating pace of technological change. It is inevitable that artificial intelligence and other innovations will eventually upend countless jobs and fundamentally change how we work.
Jobs as we know them must shift to leverage the advances in technology, or become redundant. In response to this turbulent future, kids these days are being encouraged to learn to code, in the belief that this will secure them the jobs of the future.
Of course, young people should learn to code. But do we really believe that by, say, the year 2040, given exponential technological improvement, we will still develop software by writing code the way we do today?
I think not…
Imagine a future where we simply describe our systems and our software platform automatically ‘builds itself’. If that were the case, then by 2040 anyone could design and build a software system as easily as they could describe a process.
In fact, we should even be able to build software by describing something we are not yet certain about, because as the system becomes clearer, it should continuously rebuild aspects of itself through human or AI intervention. The beauty of this potential future is that we can create an organizational software model that is ‘anti-fragile’: it grows ever stronger the more it is kicked around.
Following on from this, future software will be defined by the goals and purpose of the organization or system that operates it. The software will not be static and hard-coded as it is today; instead, it will dynamically adjust to continuously fulfill that purpose. The future of AI-augmented software development will be one of constant, high-speed change.
But to reach this new level of rapid software adaptation, we need to move away from our current state of writing rigid software. Continuous integration, delivery, and deployment are already positive steps toward automation, but we still need more intelligence to manage the fragmented software development process with less human intervention.
So, how do we get from our current software paradigm to this future one? It will not be a single leap; it needs to be a series of successive steps from where we are now toward automated, AI-augmented software development.
The critical missing link in software’s evolutionary path is aligning the modeling and the building of our software, so that both humans and AIs can work with it more easily.
Before we contemplate the future, let us first look at some historical modeling paradigms.
We know that software developers write code. Lots of it. Hundreds, thousands, gazillions of lines of code. But without thought, intention, and meaning, the code has low value. Something has to guide developers in how they model the software to maximize the value of their coding output: a theoretical design of some sort.
Software teams tend to model their abstract ideas on pieces of paper, whiteboards, sticky notes, apps, or even merely in their heads. These models may be consciously created, but sometimes developers code almost unconsciously, which reduces the odds of building a good system.
Developers need better ways to deal with the growing complexity of software and the systems it models. We need to deliver innovative, distributed software faster, cheaper, and more efficiently. We need to build our software so that we can easily remodel and rebuild aspects of it later. Our modeling has to allow us to apply agile principles.
We plan software with one set of tools (whiteboards, sticky notes, design files, apps) and then build it with another (IDEs, code frameworks, server infrastructure, etc.). This creates the risk that things get lost in translation or that the technical challenges are oversimplified. Building distributed systems multiplies this challenge.
The secret is that if we can reduce a model down to its core building blocks, we can create a simple, atomic modeling approach that is flexible yet easy for humans and AI alike to implement and use.
We must model two core elements
Firstly, a model should define a spatial view that emphasizes the more static or stable structures of the system. We could have a multiplicity of these structural elements, which we can refer to as ‘spaces’; each should describe the objects, domains, or elements of a greater system. A space should also define its relationships with other spaces. There could be tens, hundreds, or even thousands of spaces making up a big, complex, distributed model.
Secondly, a model should also define the processes, or dynamic behavior, of the system by showing collaborations among objects and changes to their internal state; let’s call these ‘flows’. Each space may have its own internal flows, or flows that allow it to interact with other, external spaces.
So, by modeling spaces and the relationships between them, as well as the dynamic flows within and across spaces, we can model pretty much any complex system.
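To make this concrete, here is a minimal sketch in TypeScript of what such an atomic model might look like as plain data. All names here are hypothetical illustrations of the spaces-and-flows idea, not a real product API:

```typescript
// A minimal, hypothetical sketch of the 'spaces and flows' idea.
// A Space is a stable structural element; a Flow is a dynamic behavior
// that lives inside a space or connects spaces.

interface Space {
  name: string;             // unique identifier for this structural element
  description: string;      // what this part of the system represents
  relatesTo: string[];      // names of other spaces this space depends on
}

interface Flow {
  name: string;             // unique identifier for this behavior
  within: string;           // the space that owns this flow
  interactsWith?: string[]; // external spaces this flow reaches into, if any
  steps: string[];          // an ordered description of the behavior
}

// A model is nothing more than the set of spaces plus the set of flows.
interface SystemModel {
  spaces: Space[];
  flows: Flow[];
}

// Example: a tiny two-space model of an online shop.
const shop: SystemModel = {
  spaces: [
    { name: "catalog", description: "Products for sale", relatesTo: ["orders"] },
    { name: "orders", description: "Customer purchases", relatesTo: ["catalog"] },
  ],
  flows: [
    {
      name: "placeOrder",
      within: "orders",
      interactsWith: ["catalog"],
      steps: ["reserve stock in catalog", "record the order", "confirm to customer"],
    },
  ],
};
```

Because the whole model is just data, it is equally readable by a domain expert, a developer, or an automated tool.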
Unified Modeling Language (UML) does model both structural and behavioral elements, but it is too stodgy and overly tied to the object-oriented model.
Domain-Driven Design (DDD) has been around for 15 years, and it also models both the structural and the dynamic. It is positive in that it assumes an evolving model. By modeling complex systems as both structural and dynamic elements, we allow easier collaboration between technical and domain experts as they iteratively refine a conceptual model that addresses particular domain problems.
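As a rough illustration of the two sides in DDD terms (a hypothetical sketch, not canonical DDD), an entity can carry the structural side while a domain event captures the dynamic side as a recorded change of state:

```typescript
// A rough, hypothetical DDD-style sketch: the entity models the structural
// side of the domain, and the domain event it emits models the dynamic side.

// Dynamic: an event, named in the domain's language, recording a state change.
interface OrderShipped {
  orderId: string;
  shippedAt: Date;
}

// Structural: an entity with identity and internal state.
class Order {
  private status: "open" | "shipped" = "open";
  constructor(public readonly id: string) {}

  // Behavior changes internal state and emits an event describing what happened.
  ship(): OrderShipped {
    this.status = "shipped";
    return { orderId: this.id, shippedAt: new Date() };
  }
}

// Usage: the event is what other parts of the model can react to.
const event = new Order("order-42").ship();
console.log(event.orderId); // "order-42"
```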
The DDD challenge is that it generally lives as a theoretical model on paper or a whiteboard, outside of the software itself.
Crossing the Chasm
The modeling system of the future needs to co-exist within the codebase itself, as code. The model builders and the consumers of those models need to operate together within one living system. This would allow both the business people and the developers of a system to ‘speak the same language’. And if the model lived within the system, changes to the model would be easy to build and deploy, and all kinds of automation could operate to simplify the human effort of building and maintaining complex systems.
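To make the automation point concrete, here is a continuation of the earlier hypothetical spaces-and-flows sketch (reusing its SystemModel type and shop example). Even a trivial check, runnable by a developer or a CI pipeline, can verify that every space a flow touches actually exists in the model:

```typescript
// Continuing the earlier hypothetical sketch (SystemModel, Space, and Flow
// as defined above). Because the model lives in the codebase as data, simple
// tooling can validate it automatically, and humans and machines read and
// edit the same artifact.

function validateModel(model: SystemModel): string[] {
  const errors: string[] = [];
  const spaceNames = new Set(model.spaces.map((s) => s.name));

  // Every relationship between spaces must point at a space that exists.
  for (const space of model.spaces) {
    for (const target of space.relatesTo) {
      if (!spaceNames.has(target)) {
        errors.push(`space "${space.name}" relates to unknown space "${target}"`);
      }
    }
  }

  // Every flow must live in a real space and only touch real spaces.
  for (const flow of model.flows) {
    if (!spaceNames.has(flow.within)) {
      errors.push(`flow "${flow.name}" lives in unknown space "${flow.within}"`);
    }
    for (const target of flow.interactsWith ?? []) {
      if (!spaceNames.has(target)) {
        errors.push(`flow "${flow.name}" touches unknown space "${target}"`);
      }
    }
  }
  return errors;
}

// Run against the shop model from earlier; a non-empty list would fail the build.
console.log(validateModel(shop)); // [] means the model is internally consistent
```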
By creating more intuitive visual modeling tools that co-exist within the coding systems, many roles can be far more augmented and simplified. One could also envisage a best-practice architecture underlying this approach, enabling us to create standards and adopt emerging systems in a far more automated way. This would reduce the load on the human roles that address system architecture, DevOps, testing, QA, coding, UX, content management, design, and so on. The goal of this standardized modeling approach is to let organizations do more with fewer core people by leveraging third-party systems, cloud services, open source, and gig-based communities.
Such a system would radically augment and empower a small number of humans in an organization, enabling them to do great things while shrinking the primary code base. It would also mean that small teams could compete with companies that today employ tens of thousands of people.
It is an exciting but scary future, because the nature of organizations is about to be disrupted, along with the people in them and the software they use. So children should learn how to code to understand the fundamentals, but we should strap on our safety belts. By 2040 we may be operating in many new ways, and with the accelerating pace of innovation and technological advance, we too will evolve into this new future.