Infrastructure in 2035: Matt Gowie on Why Code Still Matters

Matt Gowie
CEO + CTO at Masterpoint
Infrastructure as Code Expert
About Matt
Matt Gowie is the CEO + CTO at Masterpoint, based out of Boulder, Colorado. He started his career as a software engineer 14 years ago and later transitioned into the Platform and Infrastructure as Code space. He originally established Masterpoint as a solo consultancy so he could travel the world. In recent years, it has grown into a larger consulting team focused entirely on solving Infrastructure as Code problems for clients, with a particular niche in OpenTofu and Terraform.
When he's not working, you can find Matt rock climbing all over North America, trail running, or traveling to some remote part of the globe.
Current Focus: AI-Powered IaC Optimization
Matt and his team at Masterpoint are excited about their internal use of AI to optimize the workflows they deliver for clients. IaC audits, writing great IaC tests, and analyzing new environments are all areas where AI is helping them speed up and deliver value. They're actively building out a library of internal prompts that helps them deliver immense value quickly while staying actively involved in the process.
The underlying infrastructure and resources that we're managing are not simple.
Q1: Where has Terraform stood the test of time, and where have you hit limitations?
Terraform stands the test of time because it is the multi-provider solution that really enables automation across different cloud providers and various APIs. That's been extremely useful for us because we don't just automate AWS or Google Cloud for our clients; we also automate things like Datadog, GitHub, and GSuite. The ability to manage those providers is immensely valuable to the business because we're not just creating efficiency in cloud infrastructure – we're creating efficiency in other areas that platform teams are responsible for and that benefit the entire organization.
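To make that multi-provider point concrete, here is a minimal, hypothetical sketch of a single Terraform/OpenTofu root module that manages an AWS bucket alongside a GitHub repository and a Datadog monitor. The organization, bucket, and monitor names are made up, and credentials are assumed to come from each provider's usual environment variables.

```hcl
terraform {
  required_providers {
    aws     = { source = "hashicorp/aws" }
    github  = { source = "integrations/github" }
    datadog = { source = "DataDog/datadog" }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "github" {
  owner = "example-org" # hypothetical GitHub organization; token via GITHUB_TOKEN
}

provider "datadog" {} # keys via DD_API_KEY / DD_APP_KEY environment variables

# Cloud infrastructure...
resource "aws_s3_bucket" "build_artifacts" {
  bucket = "example-org-build-artifacts" # hypothetical bucket name
}

# ...next to the SaaS tooling a platform team also owns.
resource "github_repository" "example_service" {
  name       = "example-service"
  visibility = "private"
}

resource "datadog_monitor" "example_service_errors" {
  name    = "example-service error rate"
  type    = "metric alert"
  message = "Error rate is elevated on example-service."
  query   = "sum(last_5m):sum:trace.http.request.errors{service:example-service}.as_count() > 50"
}
```

One plan-and-apply cycle then covers all three providers from the same state, review, and delivery workflow.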
We have seriously looked into and considered Crossplane, Pulumi, and CDK. For all of those solutions, either the complexity is too high, the ecosystem isn't there, or the model doesn't work well for us.
For Crossplane, we wrote a long blog post when we did an extensive POC of it. It ended up just not meeting our expectations and needs.
On the CDK front, it is only available for AWS, which is a non-starter. But even if it were portable to other providers, it is built on CloudFormation, and we know that tool results in ugly delivery mechanics that we're not interested in using.
Pulumi is the one that is viable. It makes sense for teams where app engineers and platform engineers all use the same language, but unless you're in that camp, it's not the right fit. The ecosystem isn't there, and you end up with less optionality when you go looking for tooling or existing solutions so you don't have to reinvent the wheel.
The best tool for platform teams is still Terraform + OpenTofu today.
Q2: Is Kubernetes' continuous reconciliation model the right approach for infrastructure?
I originally thought the automatic reconciliation that Kubernetes delivers through operators (Crossplane included) was a really interesting model. At first, I thought that might be the solution to getting away from the annoying plan-and-apply loop we all go through in the Terraform and Pulumi space.
But now that we've looked into it more and used those tools – our POC of Crossplane is the main thing I'm drawing on – I found that it's clumsy and that today's APIs are not built for that model. To enable that kind of automatic reconciliation, the underlying API you're interacting with needs to be able to handle hundreds of thousands of requests from you or your organization per hour, and a lot of APIs today just aren't happy with that.
For example, if you try to manage hundreds of GitHub repositories via Terraform today, you'll hit API rate limits quickly. It's a known problem: GitHub rate limits its APIs heavily, and you'll get blocked from doing any work for a while. That problem gets exponentially worse if you're trying to manage those repositories by constantly reconciling. There is more to this issue, but that alone feels like a step in the wrong direction.
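For a sense of scale, here is a minimal, hypothetical sketch (assuming the `integrations/github` provider; the repository names and settings are illustrative) of how a platform team ends up with hundreds of resources behind a single `for_each`. Every plan has to refresh each of those resources against GitHub's API, which is where the rate limits start to bite, and a controller reconciling continuously would replay that cost on every loop.

```hcl
# Hypothetical example: one resource block fanning out over many repositories.
variable "repositories" {
  description = "Repository names to manage (imagine hundreds of entries here)"
  type        = set(string)
  default     = ["service-a", "service-b", "service-c"]
}

resource "github_repository" "managed" {
  for_each = var.repositories

  name                   = each.value
  visibility             = "private"
  has_issues             = true
  delete_branch_on_merge = true
}

resource "github_branch_protection" "main" {
  for_each = github_repository.managed

  repository_id = each.value.node_id
  pattern       = "main"

  required_pull_request_reviews {
    required_approving_review_count = 1
  }
}
```

With a few hundred repositories, a single refresh already means hundreds of API reads before any change is applied; a reconciler running every few minutes repeats that all day long.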
The providers we provision infrastructure on typically don't fit the reconciliation model.
Q3: Are visual tools replacing code for infrastructure management?
I believe that as soon as you introduce the abstraction of visual modeling (really, any step away from code) you end up dealing with the problems that come with that abstraction. If we're only ever handling the visual interpretation of something, then we don't have a strong grasp of all the knobs and dials that go into that infrastructure. And then, as the team that's supposed to maintain it, how are you supposed to own it, upgrade it, and manage it if your understanding of it exists at such a high level of abstraction?
The amount of variability between two databases that I can provision on the cloud today is very high – just in that one set of resource types, there is a lot of complexity. And if we are essentially trying to abstract all of that down to only a few knobs and dials, then when things need to be tuned or when shit hits the fan… we no longer have an understanding of those underlying resources.
I think there's a serious problem there: you lose the ability to understand and manage your infrastructure in the long run if you're working at that high a level of abstraction. Maybe that level of abstraction is exactly what we should be providing to application engineers, but in my experience the way we do that well today is through code that we maintain and understand, with an abstraction layer on top of that code to deliver the resources and guardrails that we know are correct for our organization.
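One common shape for that abstraction layer (a hypothetical sketch, not Masterpoint's actual module) is an opinionated wrapper module: application teams see a handful of inputs, while the platform team encodes every other knob and guardrail in code it can read, review, and upgrade.

```hcl
# modules/postgres/main.tf (hypothetical): app teams set a name and a size;
# everything else is a decision the platform team has already made in code.
variable "name" {
  type = string
}

variable "instance_size" {
  type    = string
  default = "db.t4g.medium"
}

variable "allocated_storage_gb" {
  type    = number
  default = 50
}

resource "aws_db_instance" "this" {
  identifier     = var.name
  engine         = "postgres"
  engine_version = "16.4"
  instance_class = var.instance_size

  allocated_storage = var.allocated_storage_gb

  username                    = "app"
  manage_master_user_password = true # credentials handled by AWS, not in code

  # Organizational guardrails, encoded once and reviewed in Git.
  storage_encrypted         = true
  multi_az                  = true
  deletion_protection       = true
  backup_retention_period   = 14
  skip_final_snapshot       = false
  final_snapshot_identifier = "${var.name}-final"
}

# An application team consumes it with just the knobs it needs:
# module "orders_db" {
#   source = "../modules/postgres"
#   name   = "orders"
# }
```

When something needs to be tuned beyond those inputs, the full resource definition is right there in the module rather than hidden behind a diagram.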
I don't today see a future where infrastructure is built and reasoned about through something other than code. Everybody seems to be pushing for that next layer of abstraction away from IaC, but I have yet to see that truly take off in a way that gives us the speed, power, and underlying understanding we need to run a successful platform.
When shit hits the fan, we need an understanding of the underlying resources.
Q4: What does daily infrastructure work look like in 2035?
My honest answer here is that I have no idea. We're moving at such a rapid pace right now that saying what things will look like ten years from now isn't something I have even an inkling of confidence in predicting. So maybe my view of the future is much more grounded in the coming years than it is ten years out.
Like most areas of software, with AI we're now able to generate a ton of code that fits well enough and does a lot of what we want. We have an intention for some part of our architecture, we describe what we need, and AI can deliver code to us that solves the problem quickly. That loop is only going to get better and more accurate. The next step of actually taking that code from "merged to main" all the way to production is where everyone still struggles. That's where my team and I primarily focus our consulting, and I believe there is still so much to be improved there. But that isn't a GenAI problem – it's a system integration problem where you need to have a defined, proven vision of how everything stitches together.
I don't foresee a future where we get away from infrastructure as code because, as humans driving the system, we're always going to need something to trace back: How did this change happen? Who made this change? Why was it made? How can I modify it and still have the ability to go back to the old version?
For engineers, that still comes down to Git and having some tangible set of code that we can manage, maintain, upgrade, and own. An artifact that shows what exists and models our intention for what we want the cloud providers to look like.
We're always going to need something to trace back: How did this change happen? Who made it? Why?
Josh's Note
Matt's grounded perspective on infrastructure as code is refreshing. While others chase the next abstraction layer, he focuses on what actually works: code that teams can understand, trace, and maintain. His point about APIs not being built for constant reconciliation is a reality check the industry needs. At Terrateam, we share this pragmatic view – the goal isn't to abstract away complexity but to make it manageable through better workflows and collaboration, not magic.
Follow Matt's work:
- LinkedIn - Active, posting multiple times per week
- "IaC Insights" Newsletter - Infrastructure as Code best practices
- YouTube - Masterpoint Consulting channel
- Blog - Deep dives on IaC topics
- GitHub - Open source contributions
- Masterpoint - Infrastructure as Code consulting