Four Not-So-Secret Ingredients of Moab

Today I was reviewing the Moab documentation for an upcoming training and I ran across several feature gems that I thought were worth calling out. I’ll call them some of the “not-so-secret ingredients” that make Moab great.

Scheduling with Partitions
Moab uses partitions to logically divide the available resources in your environment. This allows you to separate resources geographically or by hardware configuration. For example, because a given job can only use resources within a single partition, you may want to create a partition that groups nodes on the same local switch so that you can guarantee the fastest interprocess communication. Another cool benefit of partitions is the ability to set partition-specific policies, limits, priorities and scheduling algorithms, though this is rarely necessary.
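For illustration, here’s a minimal moab.cfg sketch (the node and partition names are invented) that groups nodes into partitions by switch and attaches one partition-specific policy; verify the attribute names against your Moab version’s docs:

```
# moab.cfg -- illustrative only; node and partition names are hypothetical
NODECFG[node01] PARTITION=switchA
NODECFG[node02] PARTITION=switchA
NODECFG[node03] PARTITION=switchB
NODECFG[node04] PARTITION=switchB

# a partition-specific policy override for one partition
PARCFG[switchA] NODEALLOCATIONPOLICY=MINRESOURCE
```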

Green Computing
In an effort to conserve power, Moab can automatically power off idle nodes until they are needed again. This can mean significant savings if you’re not running at full capacity all the time. To avoid spin-up delays, Moab can also keep a pool of nodes on standby so that additional resources are available immediately. Moab comes with reference scripts for IPMI, but can be configured to work with iLO, DRAC and xCAT as well.
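As a hedged sketch (these parameter names match my reading of the Moab green-computing docs, but check them against your version), on-demand power management with a standby pool might look like:

```
# moab.cfg -- illustrative green computing settings
NODECFG[DEFAULT] POWERPOLICY=OnDemand  # let Moab power idle nodes down and back up
MAXGREENSTANDBYPOOLSIZE 10             # keep 10 idle nodes powered on and ready
# the actual power actions are delegated to site scripts;
# reference IPMI scripts ship with Moab
```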

Fairshare
Using configured targets, you can use historical resource utilization as a factor in job priority. In essence, this prevents a user with a relatively infrequent workload from getting buried in the backlog of jobs from much busier users. The same thinking extends to groups, accounts and other credentials as well. A number of fairshare settings let you tune the behavior to your needs, such as caps, the evaluation timeframe, and consumption metrics.
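For example (the values are arbitrary; the parameter names come from Moab’s fairshare configuration), a setup that tracks a week of daily usage windows with decay might look like:

```
# moab.cfg -- illustrative fairshare settings
FSPOLICY   DEDICATEDPS        # measure usage in dedicated processor-seconds
FSDEPTH    7                  # keep 7 historical windows
FSINTERVAL 24:00:00           # each window spans one day
FSDECAY    0.80               # each older window counts 20% less
FSWEIGHT   100                # weight of fairshare in overall job priority
USERCFG[DEFAULT] FSTARGET=10  # aim each user at 10% of delivered cycles
```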

Reservation Groups
You can associate multiple reservations through reservation groups. This allows each reservation to share the same variables. These variables can then be used in job triggers to automate tasks like email notification, script execution and internal Moab actions, such as creating another reservation. Of course, you can always override the inheritance by setting a variable locally on an individual reservation in the group.

Of Watts and Volume

There is a simple equation relating amplifier watts to decibels, given a speaker’s sensitivity rating of x dB SPL (1 W / 1 m): wattage must double for each 3 dB increase. See here and here for more info. So that sensitivity (SPL) score is a pretty crucial starting point for a speaker, since adding more amplifier watts isn’t impossible, just moderately costly. But then again, in the case of an amplified PA speaker with 700 watts peak (and 350 RMS), you’ve got a lot of power. By my calculations, that takes a 98 dB SPL woofer up north of 122 dB, and a 110 dB SPL tweeter up past 134 dB – past the threshold of pain! Also, I’ve read that perceptually, humans judge a +10 dB difference to make one sound twice as loud as another.
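In symbols, with $S$ the sensitivity in dB SPL (1 W / 1 m) and $P$ the amplifier power in watts:

$$\mathrm{SPL}(P) = S + 10\log_{10}\left(\frac{P}{1\,\mathrm{W}}\right)$$

Doubling $P$ adds $10\log_{10} 2 \approx 3$ dB, which is exactly the 3 dB rule above.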

BUT distance from the speaker matters!  Sound intensity falls off as 1/d², so the sound gets softer the further away you are.  If pumping 350 watts through that woofer gets us 122 dB at 1 meter, then standing 52 feet away it will sound just like we only pumped in 1 watt at 1 meter: 98 dB.  That’s still plenty loud.
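Folding distance $d$ (in meters) into the same formula, since intensity falls off as $1/d^2$:

$$\mathrm{SPL}(P, d) = S + 10\log_{10}\left(\frac{P}{1\,\mathrm{W}}\right) - 20\log_{10}\left(\frac{d}{1\,\mathrm{m}}\right)$$

Sanity check: 52 feet is about 15.8 m, and $20\log_{10}(15.8) \approx 24$ dB, so the 122 dB we got at 1 meter drops to roughly 98 dB, matching the 1-watt-at-1-meter figure.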

It makes sense to me now why stage monitors are always like 100-150 watts.  Even with all the noise on stage, you’re pretty close, which means they’re plenty loud.

Here’s what that speaker system sounds like as the watts are increased:

Watts   Woofer (dB)   Tweeter (dB)
1       98            110
2       101           113
4       104           116
8       107           119
16      110           122
32      113           125
64      116           128
128     119           131
256     122           134
512     125           137
1024    128           140
2048    131           143

Now contrast that with distance: here’s how those peak levels (122 dB woofer / 134 dB tweeter at 1 meter) fall off as you move away:

Meters   Woofer (dB)   Tweeter (dB)
1        122           134
2        116           128
4        110           122
8        104           116
16       98            110
32       92            104

For reference, here is a table of common decibel levels:

Source                             Intensity Level (dB)
Threshold of hearing (TOH)         0
Rustling leaves                    10
Whisper                            20
Quiet bedroom at night             30
Quiet library                      40
Average home                       50
Normal conversation                60
Busy street traffic                70
Vacuum cleaner                     70
Busy road                          80
Diesel truck, 10 m away            90
Large orchestra                    98
Walkman at maximum level           100
Front rows of rock concert         110
Chainsaw, 1 m away                 110
Threshold of discomfort            120
Threshold of pain                  130
Jet aircraft, 50 m away            140
Instant perforation of eardrum     160

How Geography Dictates Fates

I just watched a documentary on Netflix based on Jared Diamond’s book, Guns, Germs, and Steel: The Fates of Human Societies, and I’ve been moved. Diamond lays out the major factors behind why some societies are wealthy and others are in poverty, and they all boil down to one thing: geography. It’s not quite that simple, but the idea is that temperate climates, high-yield crops like wheat, and domesticable farm animals gave people the surplus time to develop new technologies and build wealth. Where those conditions have not existed, we find poverty and the conquered.

Well worth my time to watch, and I’ve added the book to my list to learn more. It’s quite sad, but also encouraging.

How Adaptive Decides to Develop a New Feature of Moab

If you’re reading this wondering why we chose to do one feature over another, please note that you’re not the first (or the last) to shake your fist while shouting this question! We hope that all of our customers are this passionate about our software. I know I am with the products I use, so we expect nothing less!

Unfortunately for you though, we don’t often reveal much about the planned features on our product roadmap. What little we do reveal are those things that we are most confident about delivering. Frankly, we don’t show more because we don’t want to disappoint you! We have a very complex product with a lot of integrations and moving parts. Sometimes it can take a good while to create and test additional functionality. Oftentimes delivery is influenced by changes in the market, current technologies, and planetary alignment. We’d hate for any of our customers to become dramatically attached to a favorite promised feature, and then disappoint them by delivering a different, though equally valuable feature.

I admire many of the approaches taken by the 37Signals team and ran across this line from David Heinemeier Hansson about the dangers of over-promising,

“It’s better to turn customers away than to placate their instincts and lure them in with vague promises. It’s incredibly rare that a single feature will truly make or break your chance with a customer. If your software is a good enough fit, most people can make do without that one or two things that they’d like to see.”

I want to break down some of our process around how we choose which features to develop, though it would be just as interesting to talk about why we choose certain features. I love a line I heard from Simon Sinek: “People don’t buy what you do; they buy why you do it.” But that is a topic for another day. So here are the four steps we use to get the right features into the development pipeline: discovery, organization, prioritization, and timing.

Discovery

Determining the next Moab feature can be overwhelming and it often feels as though we’re looking for a good read while standing in the middle of the Library of Congress. But we relish the fact that we are not alone! We hear our customers loud and clear and know exactly which areas of the product they’d like to see improvements in. We are working on some ideas to improve the quality of feedback we’re getting on this channel. We also work together with our partners to discover and evaluate new synergies and approaches. We are constantly looking for emerging technologies and market opportunities that align with our corporate strategies. And we regularly reflect on our own failures and search for better solutions. There is no shortage of good ideas, and we try to collect them all.

Organization

In order to ensure that we deliver real value with each release, we group all of these ideas into initiatives that address end-to-end functionality and practical use cases. We understand from experience that cherry-picking our favorite little features from various areas isn’t viable over the long term. We try to make sure that we deliver well-thought-out solutions through the features we provide; something we can be confident will have a positive impact on our customers and their ability to accomplish their goals. Once we’ve organized the ideas into feature initiatives, we also evaluate iterative approaches that let us plan for work that might ultimately take more than a single release.

Prioritization

We can’t do it all. We wish we could, and we try, but we really can’t! We know how much we can accomplish in a given amount of time, and we’re always on the lookout for techniques and processes that will improve our development velocity, but we know that it won’t be enough to get everything we all want. Through prioritization we can ensure that the most impactful features are delivered first. This is the phase where we have to argue for our favorites and sometimes let them go. Some of the factors that we consider are: time to implement, degree of innovation, how competitors approach the problem, number of affected customers, bang for the buck, and how long we’ve already been waiting.

Timing

We all know from our own personal experience that if we simply work off a priority list, we can miss opportunities to innovate and optimize. For example, I missed seeing the new movie Pacific Rim last week because I didn’t consider rescheduling a high-priority but time-insensitive appointment. By the way, I am not sure if I’m worse off now, so maybe this is a bad example. But as we work out the implementation details of each new feature and its components, we discover multiple opportunity paths that can all lead to “good” delivery outcomes. We take time to plan these out so that we can rest assured that our timing optimizes our efforts and make changes as needed.

Once the timing of each epic’s features is nailed down, we hand the execution off to our capable Engineering team and monitor the progress as we work towards the next release. We can talk more about the engineering process in yet another blog post, but hopefully this gives some insight into our approach on feature planning.

Check for Tiddly Winks

Wink in the Wii
In 1888 Joseph Assheton Fincher invented Tiddledy-Winks, and the game was an instant hit. Somehow one of these ancient winks (as I’m quite sure it’s not a squidger) found its way into, and was nearly the demise of, my Wii. As a matter of attack vector analysis, do not rule out 19th-century game pieces. It’s time to move the Wii up to at least five feet from the floor.

Technical Leads in Scrum

I just read a great article on Effective Technical Leadership that outlines, in a fair amount of detail, the role of a great development technical lead. For the past few minutes I’ve been trying to figure out how a technical lead would fit into the scrum variant that we run, since we currently don’t have tech leads. A few questions come to mind:

1. How does this fit with scrum masters, off-team architects and team managers?
2. Would you need a tech lead for each functional development team (UI, services tier, back-end), or one tech lead for each scrum team?
3. Does an off-team architect become a technical lead if you assign him to a team?
4. Do scrum masters have the time and technical chops to be a tech lead?

I realize that team titles beyond PO, scrum master and team member are nothing short of subversive to scrum, since the whole team needs to own the process and the results, but it is clear that there is room within the team for these responsibilities. Additionally, several of these positions are, in a healthy way, at odds with each other. Simply merging the scrum master and tech lead roles would leave a single individual responsible for both the results and the approach, which is a lot of weight not to be spread around the team. Perhaps technical leadership lies outside of scrum, but within agile’s self-organization principle, to be cultivated by the organization’s managers.

I’m going to stew on this for a few weeks. I was inspired by the content of the article, but I don’t know how to formalize it in my organization.

Scouting is Still About Religion

I appreciate a recent article from the Washington Post that discusses the changes to BSA policy and the Mormon faith. As a scout leader of eight years who is moving out of the area of my current service, this change has caused me to reflect on my involvement with Scouting and the policies of large organizations.

One key line in the new resolution that the scouting body approved is worth citing: “…any sexual conduct, whether homosexual or heterosexual, by youth of scouting age is contrary to the virtues of scouting.” That is it, in a nutshell. For the Church of Jesus Christ of Latter-day Saints, this was never about whether the BSA or local scout leaders should try to discern or categorize the ill-defined and emerging sexual awareness of pre-pubescent boys and early pubescent young men.

“…Some may not see the sacred gatekeeping role scouting plays. They may see only fundraising and not a foundation. Others may brand scouting activities as merely outdoor recreation, but it can and must be shown that BSA is not a camping club; it is a character university centered on duty to God. I quote again from Robert Baden-Powell: ‘The whole of [scouting] is based on religion, that is, on the realization and service of God.’”

I am satisfied that a renewed focus on the BSA’s foundational principle – Duty to God – is sufficient for the continued support of the LDS Church and my continued participation, should new opportunities arise.

Cloudifying Your Datacenter

Whether you use VMware, HP’s Cloud Service Automation, xCAT or any of the other myriad provisioning solutions, private cloud comes in progressive stages. Although the term remains somewhat nebulous, the pieces are familiar:

1. Standardization and consolidation of hardware and infrastructure
2. Virtualization and automation (most of us are around here)
3. Self-service infrastructure (next step)
4. Service lifecycle management
5. Service brokering and hybrid environments

The barriers are also ever present, among them manpower, optimization, guaranteed SLA enforcement and accounting – each making it harder to progress, or even to get a good understanding of the end-game for your private cloud. The best place to start your push to cloud is a strong focus on return on investment. Gathering an accurate understanding of current costs and demand is a significant first step, and from there a business case can be built around the potential for cost savings. It is no stretch to claim between 10 and 100 times faster deployments, depending on your current setup! Our research has shown that customers spend 2-3 times more in manpower and hardware before moving to a cloud.

Another strong selling point for cloud, and a potential for savings, is the concept of a self-service portal. We’ve all dreamed of facilitating users, in a safe way, so that they can request and manage their own workload without requiring much from the IT team. Another bonus is the addition of chargeback concepts to help manage resources in an accountable manner. Workload placement and migration is another level of management that is expensive and time-consuming to handle manually. Even setting up the rule sets and auditing policies can be overwhelming.

So if you are focusing on consolidating hardware into a single datacenter, consolidating deployment efforts and processes, looking to increase the ROI on your infrastructure, decreasing IT staffing and computing investments, or just wanting to add machines and VMs without staffing up, Adaptive Computing’s line of cloud solutions can help you succeed.

Self-service Portal
Using Moab’s service templates, the actual service consumers can ask for what they need, when they need it. With chargeback you can enforce accountability for resource use, which limits overuse and waste. Remember that a free cloud is a recipe for failure. Self-service is what makes your datacenter a cloud, and it is what enables the 10-100x increase in deployment speed.

Continuous Optimization
Move from automation to orchestration. VM sprawl, just like server sprawl, can fill up your datacenter quickly! Do you want your VMs scattered across your datacenter for better performance, or consolidated so that you can take advantage of licensing constraints or VLANs? Don’t overlook the value of accurate initial placement, with granular service allocation policies that can target by processors, memory, chipset, software licenses or other arbitrary metadata. You can also reserve pockets of your datacenter for certain kinds of work, or work from certain users (see the sketch below). Once those services are running, they can be locked down on that hypervisor to preserve high availability and security. All of this can be done manually, but let’s be honest: it’s a lot of work, and your ability to keep in sync with upcoming needs will be limited by staffing and the other fires your IT staff is fighting. You don’t want to just add capacity like we have in the past. Now is the time to efficiently allocate what we already have, with a variety of policies around placement, overcommit and allocation.
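To make the policy side concrete, here’s an illustrative moab.cfg fragment (the host names, account name and the specific policy value are invented for the example):

```
# moab.cfg -- illustrative placement policy and datacenter carve-out
NODEALLOCATIONPOLICY MINRESOURCE   # prefer best-fit nodes to consolidate workload

# standing reservation: hold node01-node08 for the 'research' account's work
SRCFG[researchpocket] HOSTLIST=node0[1-8]
SRCFG[researchpocket] ACCOUNTLIST=research
```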

Integrating Moab in Your Cloud
Dip your toe in and try this out. We know that you may want to keep one foot in the traditional IT infrastructure model, or even in outsourced IaaS. We also know that this perpetuates inconsistent development environments, disparate architectures and divergent management and security, so pick a single small group to focus on. Provide all the capabilities (optimization, chargeback, service catalog, etc.) for each group one at a time, so that you can demonstrate the ROI as you work to cloudify (bring standardization, automation and self-service to) your datacenter.