Category Archives: Business

Reading Guide for Software Estimates

We’ve all been there. A lot. Before your next argument about a project estimate starts, take a minute to get everyone on the same page. Estimates are an important part of business AND estimates are often wrong. Yes, both of these truths can exist in the same universe!

My favorite insight from this set of articles was from John Cutler, “What job are you hiring estimation to do?” This perspective is perhaps the most valuable thing to keep in mind as your team works through your next estimate.

The Cost of Supporting New Features

Everyone wants more features.

It’s like a car that has a toaster and a rocket engine. Sometimes that can totally make sense.

Plenty of people have covered the dangers of new features, and I’m not going to talk about the cost of building them. Let’s assume the new features are important for your market. What I want to talk about is the cost of supporting new features. Say you have a development team of ten. What percentage of that team is available for new development, and how many need to hang back on bug fixes and minor improvements? At GoReact we’ve been able to maintain a sub-25% bug allocation for the past two years, and we’ve really felt free to work on functionality that’s good for new customers and partnerships. But even with 75% of our resources we haven’t churned out 75% new features. The pointy-haired boss is wondering why at this point, right? Let him wonder. You will always have work in a few areas, and your ability to anticipate and manage those pipelines as needs ebb and flow is critical to continued success with development projects. For me this means balancing bugs, new features, internal projects and changes.

Kudos to the Phoenix Project for making this approach accessible to a general audience. It took me a year to read the book after a friend and coworker recommended it, and I regret losing an entire year of this perspective. It’s now required reading for my team, and I keep a fresh copy at my desk for any newcomers. The sooner you understand what is really happening and what is really needed to run year after year, the sooner you regain the reins on your team’s ability to really make the business move.

Those of us who have been around the block a few times know the pitfalls that come with maxing out development on new functionality. Don’t get me wrong here; I’m talking about sustained focus, like years.* After huge pushes like this you can turn around to find that your existing customers are leaving you because their product–the one you built last year–is falling apart. Almost all of the time this is a bad thing, so unless you’re pivoting your product to a new market and pretty much leaving the old one behind, you’d better run some quick calculations.

Now here’s where I get a little fuzzy, so you’ll have to finish these equations out for yourself.

Your velocity for new development (Vn) is what’s left over after you subtract velocity wasted on bugs (Vb), spent on changes (Vc) and invested in internal projects (Vi). In my experience the internal projects are usually small and pretty easy to prioritize where they support Sales, Marketing or Support. Hint: one of those is a cost center. That leaves bugs and changes.
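
Written out, the arithmetic above is just a subtraction. Here is a minimal sketch with invented numbers; only the relationship Vn = V − Vb − Vc − Vi comes from the text.

```python
# Hypothetical sprint numbers; only the formula Vn = V - Vb - Vc - Vi
# comes from the post. Units are story points per sprint.
V_total = 100  # total team velocity
Vb = 20        # velocity spent on bug fixes
Vc = 15        # velocity spent on changes (refactoring, servers, config)
Vi = 5         # velocity invested in internal projects

Vn = V_total - Vb - Vc - Vi  # what's left for new development
print(f"New-development velocity: {Vn} points ({Vn / V_total:.0%} of total)")
```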

This simple little ditty came to me during a daydream in a product meeting. I’ll walk you through it and then we can have a good laugh about it. Your change velocity (which covers refactoring, rearchitecting, and server and configuration changes) can be calculated by taking the size of your code repository (R), multiplying it by the sum of the features that Marketing (Fm) and your customers (Fc) think you have, and then multiplying that by the number of people on your Sales team (S). Then you divide by the number of QA (Eq), product (Ep) and developer (Ed) employees on the team.

The equation for bugs is pretty similar, but the numerator is your repository size times the number of features you actually have. This is all tongue-in-cheek, but either way it seems like these are the only hockey-stick graphs (exponential growth) the product development team ever sees!
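
For laughs, the two tongue-in-cheek formulas can be sketched directly; every number below is invented.

```python
# Tongue-in-cheek, like the prose above: every value here is made up.
R = 500_000          # lines in the repository
Fm, Fc = 40, 60      # features Marketing / customers think you have
F_actual = 30        # features you actually have
S = 8                # salespeople (each one multiplies the change requests)
Eq, Ep, Ed = 2, 1, 7 # QA, product, and developer headcount

Vc = R * (Fm + Fc) * S / (Eq + Ep + Ed)  # change "velocity" you'll burn
Vb = R * F_actual / (Eq + Ep + Ed)       # bug "velocity" you'll burn
print(f"Change load: {Vc:,.0f}  Bug load: {Vb:,.0f}")
```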

Alright, in all seriousness, we know that:

  1. Bugs are best mitigated by talented QA team members and a development team absolutely devoted to testing. We’ve saved ourselves a lot of customer heartache by ensuring that our code is the best possible quality, and we credit that, second only to product-market fit, as the reason our customers stick with us so long.
  2. Changes increase as the code and features increase. This part is actually pretty simple math, but never ignore it.

We also track the velocity ratio between our top projects and what we call our “tribute” work: bugs, small UI tweaks, very minor updates and any out-of-band refactoring. We’re holding at about a 3:1 ratio. This means the next four hires will only give us three heads to focus on new projects.
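
The hiring arithmetic can be sketched in a few lines; the 3:1 ratio comes from the text above, and the hire count is arbitrary.

```python
# Assumed 3:1 project-to-tribute ratio (from the post); hire count is arbitrary.
project_ratio, tribute_ratio = 3, 1
new_hires = 4

per_cycle = project_ratio + tribute_ratio
project_heads = new_hires * project_ratio // per_cycle  # heads on new projects
tribute_heads = new_hires - project_heads               # heads on tribute work
print(f"{project_heads} heads on new projects, {tribute_heads} on tribute work")
```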

Here is the real key: find your ratios and plan for those as you make hiring decisions.

  • Understand how much of your time is spent in each of your types of work
  • Understand what the business really needs in each type of work
  • Understand how the ratios can change as you:
    • Add more features
    • Add more code
    • Refactor for reusability and modularity

* I have to qualify something here. Sometimes teams start to feel burnout after huge pushes, and new features are often the scapegoat. There is a whole area of discussion around helping your team understand why new features are necessary and why big pushes can be critical for a company’s success. There are also horrible stories about how managers misuse “death marches” and fail to do the necessary research and justification for their projects. These big, hard projects can be a harbinger of the death cycle, but they can also be just what it takes to succeed. If you’re in a position to question your company’s approach, I feel very strongly that it’s always on you to get your questions answered and either alleviate your concerns or confirm your suspicions.

I loved Good to Great, and although it’s getting dated, its principles keep coming up.

Companies that fall into the Doom Loop genuinely want to effect change—but they lack the quiet discipline that produces the Flywheel Effect. Instead, they launch change programs with huge fanfare, hoping to “enlist the troops.” They start down one path, only to change direction. After years of lurching back and forth, these companies discover that they’ve failed to build any sustained momentum. Instead of turning the flywheel, they’ve fallen into a Doom Loop: Disappointing results lead to reaction without understanding, which leads to a new direction—a new leader, a new program—which leads to no momentum, which leads to disappointing results. It’s a steady, downward spiral. Those who have experienced a Doom Loop know how it drains the spirit right out of a company.

 

Eyes wide open. Success is predicted and planned for.

Statistically significant?

If a p-value threshold of 0.05 (5%) is too high, and the statistician who popularized that threshold said it shouldn’t be used without consideration–which it is, all the time–then many of the things we call significant today just aren’t. We don’t always apply this kind of statistical rigor to decision making, but when we do, we’re generally satisfied with this threshold. We shouldn’t be.
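
To put a number on this (a standard multiple-comparisons result, not something from the post): at a 0.05 threshold, the chance of at least one spurious “significant” finding grows quickly with the number of independent looks you take at the data.

```python
# Standard multiple-comparisons arithmetic: each independent test of pure
# noise still has a 5% chance of looking "significant," and those chances
# compound across tests.
alpha = 0.05
for k in (1, 5, 14, 20):
    p_any = 1 - (1 - alpha) ** k  # P(at least one false positive in k tests)
    print(f"{k:2d} tests -> {p_any:.0%} chance of a spurious finding")
```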

Applied to product development teams: methodology matters a lot, but it can matter far less than the talent of the individuals on the team. Yet methods and tools are still championed much more loudly than professional development.

In short, we should be looking for more obvious wins, and recognize that when a result is close, even if still statistically significant, it’s probably too close to really move the needle.

 

Investing in professional development

There’s still some debate about those 20x developers out there, the complete geniuses who gave rise to the “rock-star” moniker every company seemed to advertise for. I’ve remained a touch skeptical–not that there aren’t amazing software engineers, I know several–but because we still don’t have a really good, accessible way to measure human contribution to software. Lines of code, commits and story-point velocity are readily available. Regressions, debugging and runtime numbers can be gathered as well, but all of this still doesn’t paint the whole picture. Add in the real-world impact of factors outside of development: good product management, architecture and QA among them, but also deadlines, poor communication and bad management. With all of these variables, it’s going to be pretty tough to accurately gauge order-of-magnitude differences between programmers in the real world. And frankly, I don’t know that you need to, because in your gut you already know.

“The differences arising from individuals in any given study will drown out any differences you might want to attribute to a change in methodology.” — Steve McConnell

Back in 2011 Steve McConnell responded thoughtfully to criticisms of his 10x programmer claim by pointing out what he was seeing in studies that had already been done, most not even focused on individual contribution. What he found was that “the differences arising from individuals in any given study will drown out any differences you might want to attribute to a change in methodology.” In short, it almost entirely depends on who’s on your team. Over-emphasizing this, though, can lead to ignoring methodology and tooling altogether in favor of individual development and hiring, which would be short-sighted. My point is that, on balance, the latter should constitute the more significant portion of team-building efforts.

The trouble arises when you commoditize development. While there are many software tasks that require no planning or design, there is much more work that requires careful thought. As Yevgeniy Brikman said in support of the 10x developer, “It’s not about writing more code; it’s about writing the right code. You become a 10x programmer not by doing an order of magnitude more work, but by making better decisions an order of magnitude more often.”

“It’s not about writing more code; it’s about writing the right code. You become a 10x programmer not by doing an order of magnitude more work, but by making better decisions an order of magnitude more often.” — Yevgeniy Brikman

This is why I prefer the title software engineer. Writing good applications is really about a mind well-suited for building: choices, compromises and the collective wisdom of personal experience and industry best practices. Don’t commoditize development; it’s a highly talented and highly compensated profession. It’s not about nurturing a team, it’s about providing the humans working around you with the best possible resources to be successful, innovative and, frankly, fun to be around. Yes, tools matter, and methodology matters–they matter a lot, and they can have a big impact on the quality and speed of your work–but investing in these while sacrificing investment in the professional development of the individuals on the team… well… that’s as bad as it sounds.

 

Getting to “Know”

We spend too much of our lives in ambiguity. So many outcomes, so many possibilities! Even an enumeration of the combinatorics would be a staggering waste of time, and so we consent to let things play out. But we don’t completely let go–of course not! We make decisions in the moment; we strive to apply consistent tactics, even strategies, as well as we can. When we have successes, they are the result of our efforts and, yes, some luck. When we have failures, they are bad luck and, yes, maybe poor planning. We can press forward in uncertainty, but Sir Arthur Conan Doyle’s Sherlock tells us, “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”

“Data, data, data! I cannot make bricks without clay.”

Certainly, the exploration of risk is rewarding and I don’t mean to take the fun out of it! But making data-driven decisions removes the worst outcomes, and less negative risk means less unpredictability, which is a good thing for business. It is predictability that allows us to make wise investments and outmaneuver the competition, and it is the predictable inflow of money that keeps the lights on.

This is why getting to “know” is so vital. Reorganize your processes to identify and weed out the worst variables first. Answer every question amounting to “why this won’t work.” You need not wait until every question has been answered, but there is a critical mass that should be addressed before safely riding the long tail of uncertainty. Knowing where you stand on the key factors of success puts you on the threshold of good decision making. If you decide to stop, you can do so early and with good reason. If you need to pivot, you have the wisdom of understanding just how it should be done. And if you should indeed continue on, do it with the confidence that comes from really knowing where you stand. The earlier you know, the less risk you take on and the less time you waste.

The decision to move forward in light of the data is much more sound than the decision to move forward despite the risks. Get to “know” then make the call.

Rapid Design Prototyping

I’ve enjoyed listening to the StartUp podcast over the past couple of weeks. It’s encouraging to listen in on seemingly normal guys as they put together a brand-new company. In episode #13 they talk about rapid design prototyping with Google Ventures’ design team. The team walks them through a design sprint to discover what their mobile app should be.

Fake it ’till you make it… or don’t.

In Gimlet’s case, the answer was that they should not build a mobile app–and I can’t tell you how many months of development time I’ve saved by NOT BUILDING ideas we’ve had. In a relatively short amount of time you can design out a website or app and see if it’s really going to meet your needs or if it’s just a cute idea. The world is full of cute ideas–I don’t want those. I want a great idea that can work.

Save yourself some time, developer. Don’t build it until you’ve seen it.

 

Ownership


I’m learning Python and needed a decent IDE to get my work done–I’m so spoiled! I installed PyCharm and have found it sufficient. Today, after several days of using it, I finally noticed that the tip of the day is blank! That’s fine–I unchecked the box–but I wanted to point out the missed opportunity: the tip of the day is your chance to educate the user on helpful functionality and to increase product retention through brand loyalty. And what is done with this opportunity?

Tips not found. Make sure you installed PyCharm Community Edition correctly.

Nope, I won’t. If you can’t take ownership of this problem (first take: don’t show the tip by default if it isn’t working), at least own it in the tone of the presentation; otherwise, I know enough about you to move forward. Disabling the feature and waiting for the functionality to be proven later is a very matter-of-fact approach, and it could be softened with different messaging:

Woah. I can’t find any tips! Click _here_ if you have some time to help diagnose what happened with your install. In the meantime we’ve disabled this feature; you can get to it again from the Help menu.

See, I already want to buy whatever software would give me this kind of lip service!

Four Not-So-Secret Ingredients of Moab

Today I was reviewing the Moab documentation for an upcoming training and I ran across several feature gems that I thought were worth calling out. I’ll call them some of the “not-so-secret ingredients” that make Moab great.

Scheduling with Partitions
Moab uses partitions to logically divide the available resources in your environment. This allows you to separate resources geographically or by hardware configuration. For example, because a given job can only use resources within a single partition, you may want to create a partition that groups nodes on the same local switch so you can guarantee the fastest interprocess communication. Another benefit of partitions is the ability to set partition-specific policies, limits, priorities and scheduling algorithms, though this is rarely necessary.
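
As an illustration, partition membership is configured per node in moab.cfg. The node and partition names below are invented, and the exact syntax should be checked against your Moab version’s administrator guide.

```
# moab.cfg -- illustrative only; node and partition names are invented.

# Group nodes that share a local switch into one partition so a job's
# processes always land on the fastest interconnect available to them.
NODECFG[node01] PARTITION=switchA
NODECFG[node02] PARTITION=switchA
NODECFG[node03] PARTITION=switchB
NODECFG[node04] PARTITION=switchB
```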

Green Computing
In an effort to conserve power, Moab can automatically turn off idle nodes until they are needed again. This can mean significant savings if you’re not maxing out your capacity all the time. To avoid startup delays, Moab can keep a pool of nodes on standby so there is no wait when additional resources are needed. Moab ships with reference scripts for IPMI, but it can be configured to work with iLO, DRAC and xCAT as well.

Fairshare
Using configured targets, you can use historical resource utilization as a factor in job priority. In essence, this prevents a user with a relatively infrequent workload from getting buried behind the backlog of jobs from much busier users. The same thinking extends to groups, accounts and other groupings as well. There are a number of fairshare configuration settings that let you tune the behavior to your needs, such as caps, the evaluation timeframe, and consumption metrics.
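
A sketch of what this looks like in moab.cfg: the parameter names reflect Moab’s documented fairshare settings, but every value (and the user) below is invented, so treat this as illustrative rather than a recommended configuration.

```
# moab.cfg -- illustrative fairshare settings; all values are invented.

FSPOLICY    DEDICATEDPS   # charge usage by dedicated processor-seconds
FSINTERVAL  24:00:00      # length of each fairshare window
FSDEPTH     7             # number of past windows to consider
FSDECAY     0.80          # older windows count progressively less
FSWEIGHT    1             # let fairshare influence job priority

USERCFG[alice] FSTARGET=25.0   # hypothetical user's target share (%)
```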

Reservation Groups
You can associate multiple reservations through reservation groups. This allows each reservation to share the same variables. These variables can then be used in job triggers to automate tasks like email notification, script execution and internal Moab actions, such as creating another reservation. Of course, you can always override the inheritance by setting a variable locally on an individual reservation in the group.

How Adaptive Decides to Develop a New Feature of Moab

If you’re reading this wondering why we chose to do one feature over another, please note that you’re not the first (or the last) to shake your fist while shouting this question! We hope that all of our customers are this passionate about our software. I know I am with the products I use, so we expect nothing less!

Unfortunately for you, though, we don’t often reveal much about the planned features on our product roadmap. What little we do reveal are the things we are most confident about delivering. Frankly, we don’t show more because we don’t want to disappoint you! We have a very complex product with a lot of integrations and moving parts, and sometimes it can take a good while to create and test additional functionality. Delivery is often influenced by changes in the market, current technologies, and planetary alignment. We’d hate for any of our customers to become dramatically attached to a promised feature, only to be disappointed when we deliver a different, though equally valuable, feature.

I admire many of the approaches taken by the 37signals team, and I ran across this line from David Heinemeier Hansson about the dangers of over-promising:

“It’s better to turn customers away than to placate their instincts and lure them in with vague promises. It’s incredibly rare that a single feature will truly make or break your chance with a customer. If your software is a good enough fit, most people can make do without that one or two things that they’d like to see.”

I want to break down some of our process for choosing which features to develop, though it’s also very interesting to talk about why we choose certain features. I love a line I heard from Simon Sinek: “People don’t buy what you do; they buy why you do it.” But that is a topic for another day. So here are the four steps we use to get the right features into the development pipeline: discovery, organization, prioritization, and timing.

Discovery

Determining the next Moab feature can be overwhelming; it often feels as though we’re looking for a good read while standing in the middle of the Library of Congress. But we relish the fact that we are not alone! We hear our customers loud and clear and know exactly which areas of the product they’d like to see improved, and we are working on ideas to improve the quality of the feedback we get through this channel. We also work with our partners to discover and evaluate new synergies and approaches. We are constantly looking for emerging technologies and market opportunities that align with our corporate strategy, and we regularly reflect on our own failures and search for better solutions. There is no shortage of good ideas, and we try to collect them all.

Organization

In order to ensure that we deliver real value with each release, we group all of these ideas into initiatives that address end-to-end functionality and practical use cases. We understand from experience that cherry-picking our favorite little features from various areas isn’t viable over the long term. We try to make sure we can deliver well-thought-out solutions through the features we provide–something we can be confident will have a positive impact on our customers and their ability to accomplish their goals. Once we’ve organized the ideas into feature initiatives, we also evaluate iterative approaches that let us plan for work that might ultimately take more than a single release.

Prioritization

We can’t do it all. We wish we could, and we try, but we really can’t! We know how much we can accomplish in a given amount of time, and we’re always on the lookout for techniques and processes that will improve our development velocity, but we know that it won’t be enough to get everything we all want. Through prioritization we can ensure that the most impactful features are delivered first. This is the phase where we have to argue for our favorites and sometimes let them go. Some of the factors that we consider are: time to implement, degree of innovation, how competitors approach the problem, number of affected customers, bang for the buck, and how long we’ve already been waiting.
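
One hypothetical way to turn a factor list like that into a ranking is a simple weighted score. The weights, scores and feature names below are all invented for illustration and are not Adaptive’s actual process.

```python
# Hypothetical weighted scoring of candidate features. Factors echo the
# list in the post; weights, scores and feature names are invented.
WEIGHTS = {"time_to_implement": -2,  # negative: longer builds cost us
           "innovation": 3,
           "customers_affected": 4,
           "bang_for_buck": 5,
           "age_of_request": 1}

def score(feature):
    """Sum each factor's 0-10 rating times its weight."""
    return sum(WEIGHTS[k] * v for k, v in feature.items())

candidates = {
    "green-computing-v2": {"time_to_implement": 8, "innovation": 7,
                           "customers_affected": 3, "bang_for_buck": 4,
                           "age_of_request": 9},
    "fairshare-caps":     {"time_to_implement": 3, "innovation": 2,
                           "customers_affected": 8, "bang_for_buck": 7,
                           "age_of_request": 4},
}
ranked = sorted(candidates, key=lambda name: score(candidates[name]),
                reverse=True)
print(ranked)
```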

Timing

We all know from personal experience that if we simply work off a priority list, we can miss opportunities to innovate and optimize. For example, I missed seeing the new movie Pacific Rim last week because I didn’t consider rescheduling a high-priority but time-insensitive appointment. (By the way, I’m not sure I’m worse off, so maybe this is a bad example.) But as we work out the implementation details of each new feature and its components, we discover multiple paths that can all lead to “good” delivery outcomes. We take time to plan these out so that we can rest assured our timing optimizes our efforts, and we make changes as needed.

Once the timing of each epic’s features is nailed down, we hand the execution off to our capable Engineering team and monitor the progress as we work towards the next release. We can talk more about the engineering process in yet another blog post, but hopefully this gives some insight into our approach on feature planning.