In the last episode of the ContinuousX Podcast, hosts Rick Stewart and Michael Fitzurka sat down with Cliff Berg, one of the coauthors of “Agile 2: The Next Iteration of Agile,” to discuss the unintended consequences of the Agile method of application development. Ultimately, the group concluded that, while Agile made massive improvements over the traditional “waterfall” method of application development, it has drawbacks that were either unintended or not considered by its creators.
Those unintended consequences might not create massive challenges for a simple project like the development of a website, but they can become real issues when developing a large government system, like a weapons system for the U.S. Department of Defense (DoD).
In the final part of their far-reaching conversation with Cliff Berg, the ContinuousX Podcast dives into some of the challenges that Agile creates in the public sector, and discusses some of the ways that Agile 2 helps to solve these problems.
Click the “PLAY” button below to watch their conversation, or scroll down the page to read the transcript.
Transcript: ContinuousX Podcast (Season 2, Episode 12) on Agile Challenges in the Public Sector
Rick Stewart: Welcome back to another episode of the ContinuousX Podcast, where we try to “Solve for X in the SDLC Equation.” We’re back with Cliff Berg, co-author of Agile 2: The Next Iteration of Agile. And Cliff, I want to… let’s discuss the challenges within the public sector. Agile adoption there runs into contractual obligations and labor categories, while traditional Agile methodologies prescribe a more generalist role for everyone. In many ways that doesn’t really line up with what’s actually happening in the development community in creating services and creating workable projects.
How can Agile 2 help address these challenges in the public sector?
Cliff Berg: Well, thanks Rick. The whole Agile community has moderated, a little bit, the whole generalist idea. The paradigm today is the T-shaped person, but we have to differentiate based on what we’re working on.
Now, if we’re building a little website that sells jam and jelly, yeah, fine. Get a bunch of people together, and everyone’s T-shaped, and they all kind of do everything. That’s great. But if you’re building machinery that has electronic control units in it that send telemetry…
Rick Stewart: Weapon system, yeah!
Cliff Berg: …weapon systems that are communicating in real-time.
Michael Fitzurka: I’m normally a UI guy, but I’ll do the weapon system.
Cliff Berg: So, number one, those stacks are too deep. Number two, there are usually several different vendors doing different parts of it, for a number of reasons. And it’s a whole different situation. You can’t necessarily apply lessons from one to the other. You have to use judgment. The situation always matters.
In the website world, it’s usually about features. But with machinery or weapon systems and that sphere, it’s more about capabilities. You want to have the capability to do this, and the capability… And so, when you’re developing, you have to demonstrate the capability, and then you have to refine it, and make it reliable and everything. So, the way that you plan that work is different. The capabilities might be a stretch. You might have to prove that you can actually do it.
So, the whole risk-based approach, where you’re trying to develop the critical capabilities early, is really important, and you need specialists. Programming is a little bit different in that anyone can learn it on their own. It’s not like becoming an engineer, where… I mean, there are people who learn engineering on their own, but they’re pretty rare. Engineering requires a lot of mathematics. Engineering starts with calculus; it doesn’t end with it. It starts with calculus, and builds on that with multiple layers of mathematics and different techniques depending on the kind of engineering. And you have to learn thermodynamics, and all kinds of things.
That’s not something you can just transfer to someone by coaching them. You need experts. You need experts, and sometimes they have PhDs. If you’re building an advanced machine learning system, you need people with PhDs. If you’re using commodity machine learning that you pull down from Amazon or something, yeah, anyone can do that, any programmer. But that’s not the state-of-the-art stuff. That’s for simple things. If you’re building something that’s going to recognize targets in real time, running on hardware that’s in flight, those are customized machine learning models. And you need people with PhDs for that.
And so, the question is, how do you interface teams of experts with teams who are building other things, like other people who are building application software? If you have a distributed system with real-time components that send telemetry, then there’s probably a data pipeline that’s analyzing that, using Spark or similar kinds of parallel processing. And then that probably feeds a command-and-control system or something. And there’s analytics operating on that. So, there are regular applications being built. Programmers can do that. But then the embedded software requires people who know real-time programming in languages like C and similar. It’s different.
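To make that pipeline stage a little more concrete, here is a minimal sketch, in PySpark, of the kind of streaming telemetry analysis Cliff describes: raw telemetry arrives from a message bus, Spark aggregates it in parallel over short windows, and the summarized results are handed off toward a downstream command-and-control or analytics system. The Kafka topic, broker address, column names, and window sizes are all hypothetical; they only illustrate the shape of the flow, not any real system.

```python
# Minimal sketch of a streaming telemetry pipeline (all names are hypothetical).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType
)

spark = SparkSession.builder.appName("telemetry-pipeline").getOrCreate()

# Assumed telemetry record layout; a real system would carry many more fields.
schema = StructType([
    StructField("unit_id", StringType()),
    StructField("sensor", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Ingest raw telemetry from a (hypothetical) Kafka topic and parse the JSON payload.
telemetry = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "telemetry")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
    .select("t.*")
)

# Aggregate per unit over short event-time windows; this is the parallel
# analysis stage that would feed command-and-control or analytics downstream.
summary = (
    telemetry
    .withWatermark("event_time", "30 seconds")
    .groupBy(F.window("event_time", "10 seconds"), "unit_id")
    .agg(F.avg("value").alias("avg_value"), F.count("*").alias("readings"))
)

# Write to the console as a stand-in for the real downstream sink.
query = summary.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```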
So, how do you coordinate all of this? The old approach is to task it out: try to design all the pieces ahead of time, and then task it out. But the challenge with that is that it’s all uncertain. Because, unlike building a bridge, none of the people on the team have ever built any of these pieces before. They’re all unique. You’ll never build the same software twice. So how can you task it out? You really can’t.
There are steps you can task out. And there are intersection points, where you know that to demonstrate this capability, we need another capability. There’s a dependency. You can create milestones. But below that level, you can’t really task things out. For the most part, it depends. Some things you can, and some things you can’t. Usually the coding part, the software parts, you can’t task out, because that’s not how coding works. It’s not task-based. It’s a creative thing. You don’t know what it’s going to end up looking like. Imagine a building that has 1,000 floors, and any floor can open a door and connect to any other floor. That’s software. And designing that ahead of time?!
Rick Stewart: Interesting.
Cliff Berg: Imagine a spreadsheet that has 1,000 pages, with formulas all over it. And the pages have formulas that link to formulas on other pages. Try to design that all ahead of time.
Michael Fitzurka: Trying to let that load!
Cliff Berg: Right! It has to evolve. People cannot figure it out ahead of time. They can figure out the big pieces, but they can’t see all the interconnections ahead of time. They can’t anticipate every interconnection, every dependency, every way it’s going to be structured. It has to evolve. Because as you’re building it, you suddenly see more clarity. And you realize: oh, this has to connect to this here. This will actually need an extra piece here. And so, we evolve and refine… the original computer science term for that, from back in the ’60s, is refinement: the design refines over time.
We need to allow it to evolve, but we also need to coordinate people who are working on different parts. Say there’s a machine learning team. Someone in a leadership role has to have enough understanding of that machine learning stuff, and also of what the application teams are working on. Someone has to understand both, not at a really detailed level but at a working level, in order to be able to bridge them and initiate a discussion between that team and the other teams. To say: how are we going to feed work from you to you? And how are we going to feed outcomes back to you? And what’s that going to look like?
You can’t do it administratively. You have to roll your sleeves up and ask hard questions and understand. You have to develop an understanding of the flow of information, what people need to know, and when it makes sense for tests to be run. You have to be in there with the conversation, and follow the conversation, in order to facilitate that kind of discussion.
So, we need leaders who are… in leadership theory we call them participative leaders. We need people who, instead of trying to lead by tasks, lead by asking questions. Like Mark Schwartz, whom you mentioned, asking questions: How does that work? Explain that to me. I don’t really get it, so I’m going to go read a book. Or I’m going to get an expert and have them tutor me in it for a week. And I’m going to learn enough about that so I can follow these conversations and help you figure out how to connect this team to that team, so the work is fluid and ongoing and no one’s ever waiting for anyone.
Rick Stewart: And that’s very difficult in any organization, but in the public sector, as you know, and Mike and I know very well, you don’t get these experimental awards. You get awards based on your past performance doing something in that category, in that context, so that you can show, you can demonstrate, that you’re not going to waste money. That’s a very different paradigm from the goal of Agile / Agile 2, and even DevOps, DevSecOps. Because it is a learning process. It’s organic. It keeps growing. It keeps feeding upon itself.
But how do you put procurements out there, opportunities out there, that say: we’re going to pay you to learn about our business? It’s antithetical to the way they approach procurements. So, do you have any advice on how public sector agencies can design procurements, or ask the right questions, so that industry can respond in a manner that is compliant and governed?
Cliff Berg: I think we need to get ourselves un-addicted to the notion that you can issue a bunch of requirements and then have someone deliver a complete solution meeting all those requirements. That’s based on the financial management paradigm in which an organization is a static entity. And in order to change something, you create a project, and it has an ROI. And then you put funds into that, and it creates a change. And now there’s a new steady state.
And that whole approach is obsolete, because things are too dynamic now. It’s like in the commercial field: you can’t create a piece of software and then just run it. It’s obsolete a week later. Differentiating between maintenance and improvement is useless. It doesn’t make sense anymore to separate your budget into maintenance and improvement, because if you separate it like that, you remove the tactical flexibility that product managers need to be able to respond to changes in the market. They have to be able to say: well, yeah, we’re maintaining that, but actually, we’re going to just scratch that feature altogether and replace it with this new one. And they need to be able to make those decisions day by day, not plan them a year ahead of time. So, let’s stop differentiating between maintenance and improvement.
When you create a capability, especially with software, it’s a living thing. It’s like giving birth to an animal. And it’s continuously growing. And you’re continuously improving it. And it should start with a demonstration of life, a demonstration of capability. That’s a success. That’s a measurable success.
In a business context, you would use a lean financing approach. Where instead of constructing an ROI for a fixed set of features, you would have a lean investment experiment, where someone makes an argument that if we build this thing users will come and they will buy it, they’ll like it. Okay, so you try that. It works. Okay. So, we’re still not making money, but at least our hypothesis is true.
The next step is: prove we can make money from it. So, scale that up; let’s see if we can do that. Ah, we’re making money. Now our spend rate is profitable… It’s a spend rate, so it’s not a fixed amount. It’s an annual spend rate, and we’re going to keep spending as long as it’s profitable. Over time, things change. We might want to increase spending or decrease spending depending on our overall amount of funds and other opportunities. But our spend continues. And at some point, if it becomes less profitable, we decrease our spend rate. And if it becomes unprofitable, we sunset it. That’s the off-ramp.
This whole idea that you can have a fixed investment and get a fixed system, and then later on plan for improvements, that’s obsolete. We need to think about rolling capabilities. And once it’s there, you have to keep feeding it. It’s not just about maintenance. Feeding includes helping it grow. That’s a paradigm shift. It is.
Rick Stewart: And bringing it back to the public sector, agencies need to realize when a capability or an application or a service is no longer needed. There are so many instances I could cite, off the top of my head, where agencies are trying to do a migration to the cloud, and they don’t even know how many applications or workloads they’re running and maintaining right now. Imagine the amount of resources wasted keeping the lights on for something that isn’t being used. So, I agree with you. Treat it as a living organism. And sometimes those organisms die. You’ve got to get rid of them.
Cliff Berg: Yeah, a lot of times there’s end-user computing going on that you don’t know about. There has to be a visible portfolio of where the money’s going, so that you can manage that overall portfolio. But it really needs to be managed at a capability level, not at a system level, a system with fixed requirements. It needs to be seen as ongoing spend for that capability. If you view it as a capability, yeah, we want that capability. If you view it as a system, then it’s like, okay, well, we have that system, let’s stop spending. But then it will die, it will die. No. It’s a capability, and it’s costing this much per year to maintain that capability and keep it growing, keep it effective.
Michael Fitzurka: And even if it’s not growing, just security alone. There are always new security threats, and you need to address them to keep that thing alive. It just does not happen by itself.
Rick Stewart: Hygienic. Yeah, yeah.
Cliff Berg: Big time! It’s constant, with new vulnerabilities, sustaining all the open source stuff, and new versions. New versions break things all the time. There’s an increasing trend in the industry to not maintain backward compatibility for APIs. It’s really dysfunctional. The Java ecosystem put a very high value on backward compatibility, but a lot of other ecosystems don’t.
I teach a DevOps course, and I’m constantly finding that new versions break things that used to work. A lot of test tools do this; they’re not backward compatible, and suddenly stuff doesn’t work anymore, and you have to invest a lot of effort to make it work again. It used to work. It’s a waste of people’s time, in my opinion.
Rick Stewart: Well, Cliff, it has been a pleasure over these last couple of episodes. A fountain of knowledge, and we could go on and on and on. So, I really appreciate your time, and I appreciate the listeners’ time watching this. We’re going to cover many different topics with Cliff in the future. So, thank you very much for your time as we “Solve for X in the SDLC Equation.”
Cliff Berg: Thank you.