Sunday, July 28, 2013
For a new role I’m starting at work, I’ve been thinking about the concept that we in the software development business refer to as “technical debt.”
In software development, there is always a tension between two opposing forces: the desire to do it fast, and the desire to do it right. I could probably write an entire blog post on just that topic, but for now I’ll settle for the short version. If you do it right, then, later, when you want to extend it, or modify it, or use it as a jumping off point for branching out in a whole new direction (and you will always want to do these things eventually, if your software lives long enough), you can do so easily, with a solid foundation as a base. The downside is that it will take longer. If you do it fast, you get results faster, which means you can serve a customer’s needs before they change, fill a window of opportunity before it closes, or perhaps even beat your competitors to the market with your offering. But when you have to modify it later (which you will), it will end up taking even more time to clean things up than if you’d just done it right in the first place.
You can see why we often call this “technical debt.” You’re saving time now, but you’ll have to “pay it back” later, and the amount of extra time it takes is like the interest. Primarily, we software people invented this analogy because it makes good sense to business people. When you’re running a business, sometimes you need a piece of equipment. You have two choices: you can borrow some money and buy it now, or you can save up and purchase it outright, later. Buying it now allows you to use it now, thus saving you costs somewhere else or giving you new capabilities you can capitalize on. But you’re going to have to pay for it eventually, and you’ll have to pay the interest too. Buying it later saves money in the long run, but denies you the advantages of the equipment for however long it takes to save up enough money.
This analogy really works because, despite what some people tend to believe, neither choice is always right or always wrong. Sometimes it makes sense to borrow the money and buy it now; sometimes it makes sense to wait. Likewise with software, sometimes it makes sense to just get the damn thing programmed as quickly as possible, and sometimes it makes sense to take the time and do it right the first time. And business people have to make that call, and expressing the choice in financial terms makes it easy for them to understand the trade-offs.
For instance, when a company is first starting up, “do it fast” is nearly always the right answer. (David Golden has a great post which explores this in greater detail.) Basically, there’s no point in worrying about how hard it will be to change your software tomorrow when you’re not even sure there’s going to be a tomorrow. Once a company leaves the startup phase, though, it’s time to think about paying down some of that technical debt. If it doesn’t, the lack of agility means it may not be able to respond to customer demands quickly enough to thrive in the marketplace.
Now, technical debt is in many ways unique to every individual company, and there’s no one technique or approach that will work for everyone. But everyone faces the same basic problem: I need my development team to keep making new features while they’re also paying off this debt; how can I have them accomplish even more than they were doing before without burning out, or creating new problems as fast as they fix the existing ones? Hiring more people helps, of course, but it isn’t the complete solution. Still, there are some general strategies that will work, and they can be used by nearly every company that finds itself with a fair amount of technical debt to pay off. These five areas nearly always need some attention, and they’re excellent things to think about when looking for ways to make your programmers more efficient, which not only gives them time to work on refactoring messy areas of your codebase, but also gives them more time and confidence to avoid creating technical debt in the first place.
Unit Tests
Few companies have no unit tests, but few have enough either. I’ve heard people worry about creating too many unit tests, but honestly this is like your average American worrying about eating too many vegetables: I suppose it’s technically possible, but it’s unlikely to be a problem you’ll ever face. It is possible for your unit tests to be too messy, though. Remember: your unit tests are supposed to give you confidence. The confidence you need to get in there and clean up some of that technical debt without being afraid you’re going to break something. But your tests can’t give you confidence unless you’re confident in your tests.
Now, many people will tell you that you should put as much time and effort into your tests as you do your code (this is often—but not always—what people mean when they say “tests are code”). I don’t necessarily agree with this. I think that all the effort you put into your unit tests should go towards making your unit tests effortless. I think your code should be elegant and concise: you don’t want it too simple, because it’s a short hop from “simple” to “simplistic.” Imagine the Dr. Seuss version of War and Peace and how hard that would be to get through—that’s what I’m talking about. You need to use appropriate levels of abstraction, and using too few can be as bad as using too many. You have to strike a balance.
On the other hand, your tests should be simple. Very simple. The simpler they are, the more likely your developers are to write them, and that’s crucial. Whether you’re using TDD or not (but especially if you’re not), devs don’t particularly like writing tests, and anything that gives them an excuse not to is undesirable. You want your tests to read like baby code, because that way not only will people not have to expend any brainpower to write new tests, but reading them (and therefore maintaining them) becomes trivial. This doesn’t work as well for your code because of the crucial difference between code and tests: code often needs to be refactored, modified, and extended. Tests rarely do—you just add new ones. The only time you modify tests is when they start failing because of a feature change (as opposed to because you created a bug), and in many cases you’re just going to eliminate that test. And the only time you should be refactoring them is when you’re working on making them even simpler than they already are.
So, in addition to trying to extend your test coverage to the point where refactoring becomes feasible, you should probably work on creating underlying structures to make writing new tests simple, and to make reading existing tests even easier.
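To make that concrete, here’s a minimal sketch (in Python, with an entirely hypothetical domain) of the kind of underlying structure I mean: a test-data builder that absorbs the setup complexity, so each individual test reads like baby code.

```python
# A hypothetical test-data builder. The helper owns the messy defaults;
# each test overrides only the one detail it actually cares about.

def make_user(**overrides):
    """Return a user dict with sensible defaults for testing."""
    user = {"name": "Alice", "email": "alice@example.com", "active": True}
    user.update(overrides)
    return user

def is_deliverable(user):
    """The 'production' code under test: active users with an email get mail."""
    return user["active"] and "@" in user["email"]

# The tests themselves stay trivial to write and to read:
def test_active_user_is_deliverable():
    assert is_deliverable(make_user())

def test_inactive_user_is_not_deliverable():
    assert not is_deliverable(make_user(active=False))
```

The payoff is that adding a new test is a one-liner, and anyone reading a failing test can see at a glance exactly which detail differs from the default.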
Code Reviews
There are still plenty of companies out there not doing this one at all, so, if you fall in that category, you know where to start. But even if you’re already doing them, you could probably be doing them more efficiently. Code reviews are tough to implement well, and doing so has as much to do with the process you implement as with the software you choose to use. (Of course, it has a lot to do with the software, too.) Code reviews, like unit tests, need to be simple to do, or people will find excuses not to do them. Good software can really help with that.
Hopefully, like unit testing, code reviews are something you don’t really need to be convinced you should be doing. But more people are resistant to code reviews than any other practice mentioned here. They absolutely do require an investment of time and effort, and they absolutely do mean that you will deliver some features more slowly than you could without them. Of course, if that extra time and effort catches a crucial customer-facing bug, you won’t be complaining.
But, honestly, catching bugs is not the primary benefit of code reviews ... that’s just the icing on the cake. Code reviews just produce better software. They’re the time and place where one developer on your team says to another: “hey, you know you could have used our new library here, right?” (To which the response is inevitably: “we have a new library?”) They’re fantastic for cross-training too: don’t think code has to be reviewed by someone who already knows that code. Have it reviewed by someone who’s never seen the code before and they’ll learn something. Have your junior people reviewed by your senior people, and have your senior people reviewed by your junior people: they’ll all learn something. It makes your codebase more consistent and builds a team dynamic. It diffuses business knowledge and discourages silo effects. If it happens to catch a bug now and then ... that’s just gravy.
Once you have code reviews in place, though, you’ll probably still want to tweak the process to fit your particular organization. If you require reviews to be complete before code can be deployed, then code reviews are a blocking process, which comes with its own hassles. Then again, if you allow deployment with reviews still outstanding, then you risk making the review process completely ineffectual. So there’s always work to be done here.
Code Deployment
I suppose there are still software folks out there for whom “deploy” means “create a new shrink-wrap package,” but more and more it means “push to the website.” At one company I worked at, deployment was a two-hour affair, which had to be babysat the entire time in case some step failed, and, if a critical bug was discovered post-launch, rolling back was even more difficult. From talking with developers at other companies, I don’t believe this was an exception.
If your organization is big enough to have a person whose entire job is dedicated to deployment (typically called a “release manager”), then you don’t have to worry about this as much. If that two hours is being spent at the end of a long day of preparation by one of your top developers, you’ve got some room for improvement. But even if you have a separate release manager, think how much you have to gain as your deployment gets shorter and simpler and more automated. You can release more often, first of all, and that’s always a good thing. And there are simple strategies for making rolling back a release almost trivial, and that’s absolutely a good thing.
Now, I’ve never been a fan of “continuous deployment” (by which I mean deployment happening automatically, triggered by checking in code or merging it). But I do believe that you should strive for being able to deploy at any time, even mere minutes after you just finished deploying. Part of achieving this goal is reaching continuous integration, which I am a fan of. Having those unit tests you worked so hard on run automatically, all the time, is a fantastic thing, and it frees your developers from the tedious chore of running them manually. Plus, continuous integration, unlike code reviews, really does catch bugs on a regular basis.
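As an illustration of one such rollback strategy (a common one, though not necessarily what any given shop uses), here’s a Python sketch of symlink-swap deployment: each release lives in its own directory, and “deploying” just repoints a “current” symlink, which makes rolling back exactly as trivial as deploying.

```python
# Sketch of symlink-swap deployment. Each release sits in its own directory
# under releases_dir; the web server only ever serves from "current".
import os

def deploy(releases_dir, release_name):
    """Point the 'current' symlink at releases_dir/release_name."""
    target = os.path.join(releases_dir, release_name)
    link = os.path.join(releases_dir, "current")
    tmp = link + ".tmp"
    os.symlink(target, tmp)   # build the new link next to the old one
    os.replace(tmp, link)     # then swap it into place in a single rename
    return os.readlink(link)

def rollback(releases_dir, previous_release):
    """Rolling back is just deploying the previous release directory again."""
    return deploy(releases_dir, previous_release)
```

Because old release directories stick around untouched, a rollback never has to rebuild anything; it only moves a pointer (this sketch assumes a POSIX filesystem, where the rename is atomic).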
Configuration Management
Similar to deployment, and for the same reasons. If building a new production server is a heinous chore, you’re not going to want to do it often, and that’s bad. Horizontal scaling is the simplest way to respond to spikes in traffic, and you should be able to bring a new server online practically at the drop of a hat. It should be completely automated, involve minimal work from your sysadmins, and zero work from your developers. There are all kinds of tools out there to help with this—Puppet, Chef, etc.—and you should be using one. Using virts goes a long way towards achieving this goal too.
For that matter, it should be trivial to bring up a new development server, or a new QA server. Once that ceases to be a big deal, you’ll find all sorts of new avenues opening up. Your developers and/or your QA engineers will start thinking of all sorts of new testing (particularly performance or scalability testing) that they’d never even considered before. Figuring out how to handle database environments for arbitrary new servers can be a challenge for some shops, but it’s a challenge worth solving.
For Perl in particular, you also need to be able to reproduce environments. You can’t spin up a new production web server alongside your existing ones if it’s going to be different from the rest. If your server build process involves cloning a master virt, you don’t have to worry about this. If you’re rebuilding a server by reinstalling Perl modules, you must expect that CPAN will have changed since you last built production servers, because CPAN is always changing. For this reason, a tool like Pinto (or Carton, or CPAN::Mini, or something) is essential.
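To illustrate the underlying idea (this is the concept in miniature, not how Pinto or Carton actually work internally), here’s a Python sketch of checking a newly built server against a frozen snapshot of module versions; the module names and version numbers are made up for the example.

```python
# The concept behind dependency pinning: freeze the exact versions production
# was built against, and refuse to bless a server whose modules have drifted.

def check_environment(pinned, installed):
    """Return human-readable mismatches between the snapshot and reality."""
    problems = []
    for module, wanted in sorted(pinned.items()):
        have = installed.get(module)
        if have is None:
            problems.append(f"{module}: missing (want {wanted})")
        elif have != wanted:
            problems.append(f"{module}: have {have}, want {wanted}")
    return problems

# Example (hypothetical versions): CPAN moved on between server builds.
pinned = {"Plack": "1.0047", "Moose": "2.1005"}      # the frozen snapshot
installed = {"Plack": "1.0047", "Moose": "2.1202"}   # what the new server got
```

Run against the example data, this reports that Moose has drifted from the snapshot while Plack is fine, which is exactly the situation an unpinned CPAN rebuild invites.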
Version Control
In this day and age, I seriously doubt I need to convince anyone that they should start using version control ... although one does hear horror stories. More likely though, you should be thinking about what VCS you’re using. If you’re using CVS, you should probably be using Subversion, and, if you’re using Subversion, you should probably be using Git. But along with thinking about which software to use and how to switch to a better one (preferably without losing all that great history you’ve built up), you also need to be thinking about how you’re using your VCS.
Are you using branches? Should you be? If you’re not, is it only because of the particular VCS you’re using? If you’re using CVS or Subversion and telling me branching is a terrible idea, I can understand your point of view, but that doesn’t mean you shouldn’t be using branches—it just means you should probably be using Git. Now, granted, branches aren’t always the right answer. But they’re often the right answer, and they allow experimentation by developers in a way that fosters innovation and efficiency. But it takes work. You have to come up with a way to use branches effectively, and it’s not just your core developers who have to agree on it. If you have separate front-end developers, for instance, they’re going to be impacted. If you have a separate QA department, they’re definitely going to be impacted—in fact, they can often benefit the most from a good branching strategy. Your sysadmins, or your ops team (or both), may also be affected. If you’ve tried branches before and they didn’t work, maybe it wasn’t branching in general that’s to blame, and maybe it wasn’t even the particular VCS ... maybe it was the strategy you chose. Feature branches, ticket branches, developer branches—again, there’s no one right answer. Different strategies will work for different organizations.
So these are some of the things you need to be considering as you look to reduce your technical debt. None of these things address the debt directly, of course. But, in my experience, they all play a part in streamlining a development team by increasing efficiency and cohesiveness. And that’s what you really need in order to start paying down that debt.
Sunday, July 21, 2013
Once again I’m going to forgo my usual blog post, this time because I don’t really have anything to say I haven’t said before. About a year and a half ago, I posted a rather long, rambling musing on the topic of fate. This week I’m reminded of that posting, in a very positive way. I continue to believe that everything happens for a reason, and even things that seem to be bad at the time can often lead to a better outcome than expected (or hoped). Not always, of course. But the universe has a way of tempering the bad with the good in such a way that makes it difficult not to believe that there is an ordered plan. My current plan is still ongoing, so I can’t guarantee it’ll end up where I think it will. But, so far, my faith in the universe is back on track, and aiming at a destination that I’m currently pretty excited about.
Until next week.
Sunday, July 14, 2013
[I wrote this earlier this week, for my family. I’ve decided to post it here as a part of my series about people who have had a great impact on my life. You may wish to read the introduction to the series.]
For me, as a child, “family dinner” meant the mid-day meal on Sundays, with my grandparents on my father’s side. Oh, we had dinners with my maternal grandparents as well, sometimes, but it was more infrequent, and more ... reserved. Not as much fun. And, once or twice a year, we’d go to family reunions, which were huge, sprawling affairs, with more relatives than I could keep track of. My paternal grandmother had a very large family, and most of her brothers and sisters had decent-sized families themselves, and it made for quite a lot of “cousins” to keep up with. Except they weren’t technically my cousins ... they were my dad’s cousins, or in some cases my second cousins.
My dad’s family was smaller, more intimate. Dad has just one brother, and no sisters. At first I was the only kid. Then I had one cousin, then two cousins, and then two cousins and a little brother. And that was as big as our family dinners ever got: grandmother and grandfather, mother and father, aunt and uncle, two cousins and a sibling. We had a lot of family dinners—big dinners at Easter or Thanksgiving or Christmas, little dinners on your average ordinary Sunday—and I ate a lot of food at my grandmother’s table. There were certain things that were very predictable at these dinners. My grandmother would burn the biscuits. Us kids would start eating before everyone was at the table, and we’d get yelled at. There would be a minimum of one dish containing potatoes: boiled potatoes, mashed potatoes, or potato salad, and possibly two of the three, and, for a big dinner, probably all three. There would always be homemade biscuits for everyone else, and canned biscuits because my uncle Jimmy and I liked those better. And, if it was a holiday, there would be some sort of lime green Jello concoction, with whipped cream and unidentifiable chunks floating in it, and my aunt Marion would say “You know what that looks like, don’t you?” And us kids would giggle, and my grandmother would sniff disapprovingly, but then she’d say that she only ever made the stuff just so Marion would get to deliver that line.
My grandmother and grandfather are both gone now, but all the rest of us are still here ... or at least we were, until a few days ago. That’s when I got the word that my aunt Marion had passed away. She’d been sick for a while now, so it wasn’t really a surprise. Not that that helps. When your family members die after a long illness, people will tell you that at least they’re not suffering any more, and when they die suddenly, people will say at least they didn’t suffer, but none of that really makes a dent in how you feel. There’s a sense that you have of a person, regardless of how often you see them: it might be a large looming presence, or it might be just a comfortable feeling tucked away in the back of your mind, like the keepsakes you have up in your attic—you don’t take them out and look at them very often, but you’re comforted just knowing they’re there. Then, one day, that sense is gone ... the loss might be overwhelming, or it might be like an itch you can’t quite reach, or it might be anywhere in between, but it’s always very sharp. It cuts. And it’s hard to adjust to.
I haven’t seen my aunt Marion for many years. But I remember many things about her. Some are superficial: her boundless capacity for the color red, and her endless delight in Snoopy, for instance, are things that everyone who knew her even slightly remembers. Some are not even very well connected with her as a person: for instance, my earliest remembrances of my aunt and uncle are probably going out to their house, which was way outside town, and playing with their electric organ. I don’t specifically see Aunt Marion in these memories, but the organ made quite an impression on me, as I was very young and it was very cool.
I remember giant stuffed Snoopies, and I remember elaborate piled hair-do’s, and I remember her candy-making business, with the rainbow assortments of white chocolate and the candy molds and she even made her own peanut butter cups, and that was pretty cool no matter how old you were. And I remember lots and lots of family dinners. I remember her bringing over my cousin Chris for the first time, and I remember her bringing over my cousin Cathy for the first time. I remember that she laughed the loudest, and the easiest, and the most often. I remember her giving us silly Christmas gifts, like bad cologne, just because she knew that my brother and I loved to say “Brut: by Faberge” in our best Eddie Murphy voice, and I guess she liked hearing us say it.
What I remember most about her, though, was her strength. My Aunt Marion said what was on her mind, and she said it straight, and she didn’t care who heard her. She was the only person in our family who did that. My mother’s family was polite to the point of obsession; my father’s, more plain-spoken, but still proper. But Aunt Marion went beyond plain-spoken and into what was to me a strange new world of honesty and forthrightness. She came into a room like a whirlwind, dominated a conversation without monopolizing it, was brazen without crossing the line into shocking. Well, I’m sure she crossed some people’s lines. There were probably people who considered her rude. There were probably those who were secretly jealous of her apathy towards what others thought of her. I just thought she was cool, and, the older I got, the cooler she was.
So today I’m missing my aunt, even though I hadn’t spoken to her in forever. Even though those family dinners are long gone, they remain fresh in my mind. Even though I haven’t seen my cousins in years, my thoughts are with them, because they’ve lost their mother, and even though I haven’t talked to my uncle for just as long, my thoughts are with him, because he’s lost his soulmate. My loss compared to theirs is very small, but I still feel it. I’m going to miss my loud, crazy Aunt Marion because, in many ways, she was the best of us. And that’s always worth celebrating.
Sunday, July 7, 2013
Well, it’s been an interesting week (more in the Chinese curse sense than the scientific inquiry sense). I’m going to take some time to regroup and refocus, so I won’t be doing an extensive blog post this week. Next week will be better.