Most developers have heard of the Single Responsibility Principle (SRP). It is one of the principal pillars of SOLID and frequently touted as a core principle that all developers should strive for when designing their code. It is also one of the least understood principles I’ve ever seen. Most developers will agree that it’s really important, but these same developers will have a hard time defining what SRP means in concrete terms. They have an even harder time translating the amorphous concept into their code.
This became painfully clear to me when we had a particularly bright developer create a REST API client for our app. He was an enthusiastic proponent of SRP and sought to employ the principle in all of his code. Unfortunately, he left soon thereafter for a better job, and to this day I do not understand what he wrote. His client spanned ten different classes and conformed so fully to SRP that no one on our team ever managed to understand how all the pieces came together. Whenever the client needed modifying, whether to fix a bug or add functionality, it became one of the few tasks no developer wanted to take up. In the end, I recreated the API client as a single class containing one fifth of the original code. The experience was eye-opening. I thought I was a proponent of SRP until he showed me exactly how far I hadn’t taken the concept.
Over the past few years, I’ve seen an increasing interest in software architecture as an important, perhaps now even an integral, part of software development. While this may seem obvious now, it’s still surprising to me just how much architecture is treated as an afterthought by many developers. At least in the mobile space, it’s taken a long time for architecture to enter the foreground of the conversation. But even as the discussion about software architecture grows, I’ve noticed that it tends to center on individual architectures as solutions to specific problems inherent in software development. You can find detailed descriptions of VIPER, MVVM (and its variants), the ubiquitous MVC, as well as a scattering of others for app development. While each of these solutions does address a specific set of problems, what they do not do is teach developers how to create their own architectures to address problems specific to their own development. As a result, many developers will try to use an existing architecture to solve the wrong problem, or, failing that, forget about architecture altogether and hobble along as best they can.
While I have been pleased to see the conversations on software architecture, I think to a certain extent they are the wrong discussion. We’ve been debating the benefits and drawbacks of individual architectures someone else created. What we should be talking about are the principles of architecture that allow us to create the architectures we need.
Did you know that the internet you see is not the internet that everyone else sees? In truth, we all see a different internet. And this is not just because we search for different terms, visit different websites, or customize our feeds. The internet itself changes depending on who’s using it. Yes, the internet literally conforms itself to the person using it. Most people don’t realize this, and on the surface it can seem somewhat innocuous, albeit a little weird. But as more and more people use the internet as their primary source of information, it becomes an unseen force for societal change in unexpected ways.
Now that I’ve spent quite a bit more time implementing a “handler” based development approach, I’m finding that there’s quite a bit I like about it. I still have some frustrations with it, and some unresolved questions, but overall I think this approach has some serious merits that warrant consideration as an architectural approach to handling complex dependencies.
Lately I’ve been writing a Swift module for a project at work. I decided to branch out a little by adopting a different design philosophy than I’ve used in the past. While Apple sits around and pretends that protocol-based development is actually something new and exciting, I’ve been much more interested in the functional abilities that Swift seems to endow us with. To that end, I decided to eschew protocols and instead define the dependencies in my objects using handlers. This means that rather than defining a protocol and requiring an object that conforms to it, I simply require the specific behavior itself.
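To make that concrete, here’s a minimal sketch of what a handler-based dependency might look like. The `UserLoader` type and its closures are hypothetical illustrations, not code from the module I was writing:

```swift
import Foundation

// Instead of depending on a protocol (e.g. `protocol SessionProviding
// { func token() -> String }`), the object depends directly on the
// behaviors it needs, each expressed as a stored closure.
struct UserLoader {
    // Each dependency is a handler: just the behavior, no protocol.
    var fetchToken: () -> String
    var log: (String) -> Void

    func load(userID: Int) -> String {
        let token = fetchToken()
        log("Loading user \(userID) with token \(token)")
        return "user-\(userID)"
    }
}

// Wiring the object up is just passing closures.
let loader = UserLoader(
    fetchToken: { "abc123" },
    log: { print($0) }
)
```

A nice side effect is that testing becomes trivial: there are no mock classes to write, only inline stub closures to pass in.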
Before I truly begin this post, I feel the need to offer a few qualifiers here. I'm going to spend some time ranting about what I consider to be some obvious omissions in Swift, but I need to be clear: I actually, really like Swift. I really do. Despite being a "long time" (depending on your perspective) Objective-C developer, I find Swift truly feels like programming in the future. I do things with Swift I never do in Objective-C... good things. Swift helps me abstract like I never could in Objective-C, or perhaps it just encourages me to abstract where Objective-C never did... I'm not sure. What I do know is that I find myself creating better abstractions more easily, allowing me to reuse logic in some pretty incredible ways. I really like Swift.
And then, I'm going along and hit a wall at 90 mph. Really!? I can't do that? Why the hell not? What the hell was I creating all this architecture for anyway? Three days later I have a solution but I kind of feel lied to... where was all the power I was promised?
If you read my Architecting Complexity post, you know I feel pretty strongly about the need for Software Architecture in software development in general, but especially in “app” development. So far, I haven’t gone into much detail about my approach to applying Software Architecture. To be fair, I’m going to focus only on app development, but many of the principles here apply much more broadly. It just happens that developing apps is something I do a lot, so I’ve developed a consistent strategy around it.
One common opinion I keep hearing is that Swift’s Optional system kind of sucks, and is perhaps better in theory than in practice. I’d like to take a brief moment to defend Swift’s optional system and share some tricks I’ve found that greatly help to tame it and provide a path out of “if-let nested hell”.
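As a taste of the kind of tricks I mean, here’s a small sketch showing two ways to flatten nested `if let`s. The `lookupHost` functions are contrived examples for illustration, not production code:

```swift
import Foundation

// Three optional hops: the string, the parsed URL, and the URL's host.
func lookupHost(_ urlString: String?) -> String? {
    // The nested version this avoids would be three `if let`s deep:
    // if let s = urlString {
    //     if let url = URL(string: s) {
    //         if let host = url.host { return host }
    //     }
    // }

    // Flattened: one `if let` with comma-separated bindings.
    if let s = urlString, let url = URL(string: s), let host = url.host {
        return host
    }
    return nil
}

// Or chained flatMap, which reads as a pipeline and needs no bindings at all.
func lookupHost2(_ urlString: String?) -> String? {
    urlString.flatMap { URL(string: $0) }.flatMap { $0.host }
}
```

The `flatMap` form is especially handy when each step is already a function returning an optional; the whole chain short-circuits to `nil` the moment any step fails.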
When I first started developing software, I made very limited use of protocols. I would occasionally model my software on what I’d seen Apple do with delegates, but once blocks came out I mostly abandoned delegates for callbacks, which I much prefer despite the inherent risks of reference cycles and the weak dance. Because really, why would I want to write my interface in a completely separate file when I already have a convenient “header” file right here? And yet, as I’ve continued to build software, and especially as I’ve learned more about software architecture, I find myself eschewing interface files and depending on protocols more and more. Let’s just say I’ve become a convert to “protocol based software development”.
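As an illustration of why I’ve come around, here’s a minimal sketch of a dependency declared as a protocol rather than a concrete class. The `ImageCache` protocol and the types around it are hypothetical, and `String` stands in for a real image type to keep the example self-contained:

```swift
// Declare only the behavior the consumer needs, not a concrete type.
protocol ImageCache {
    func image(forKey key: String) -> String?
    func store(_ image: String, forKey key: String)
}

// One possible production implementation.
final class MemoryImageCache: ImageCache {
    private var storage: [String: String] = [:]
    func image(forKey key: String) -> String? { storage[key] }
    func store(_ image: String, forKey key: String) { storage[key] = image }
}

// The consumer knows only the protocol, so a test double or a
// disk-backed cache can be swapped in without touching this type.
struct AvatarLoader {
    let cache: ImageCache

    func avatar(for user: String) -> String {
        if let cached = cache.image(forKey: user) { return cached }
        let fresh = "avatar-\(user)"
        cache.store(fresh, forKey: user)
        return fresh
    }
}
```

Unlike a callback, the protocol groups several related behaviors under one name, and unlike an Objective-C header, it lives with the consumer that needs it rather than with the implementation.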
Software architecture can not only make the perceived complexity of the system approach its real complexity, but it can also help reduce the real complexity and increase the system’s reliability. It also makes the system much more understandable. And when you understand a system, you can change it with confidence.