Why the RFP process doesn't work



At DeveloperTown we’ve built a fairly comprehensive design process that we use to kick off most of our client engagements. When I look back at our kickoff design process from our launch in 2010 to where it has matured today, I’m taken aback by how good it is. I’m simultaneously amazed by the work product we can produce in a mere four weeks, and eager to see how much better it gets over the next four years as others add their mark to the process and we continue to refine it.

It’s this process (and a recent argument with my partners) that has convinced me that most Requests for Proposals (RFPs) are fundamentally broken. They ignore (or grossly gloss over) the unknowns of a project, remove the ability to discuss the real objectives and the tradeoffs associated with technical decisions, and increase cost and risk for the people who submit the request. As I look at the types of outcomes we can create for clients now with our process, I see nothing but waste in most RFPs that I see.

In our process – or any good process of product design and technical discovery - it’s not uncommon for us to:

  • Fundamentally influence the business model of the product, moving the solution in a slightly different direction than what was originally requested;

  • Come up with a breakthrough design that dramatically simplifies the product, requiring us to build less than was originally conceived – or build it in a different way;

  • Discover that some piece of technical wizardry isn’t going to be as easy as originally assumed at the outset, requiring us to fundamentally rethink the solution;

  • Create product roadmaps that release features into the market in smaller increments allowing for feedback and tuning;

  • Take on short term technical debt or leverage a third party solution until a feature is proven in the marketplace before spending more to build it out completely;

  • Look at the long term (12 to 24 month) finances of the project - not just the next three to six months, where most RFPs seem to focus;

  • Or other tradeoffs that emerge based on debate and discussion or feedback from early validation of ideas with potential customers and users.

Our process – while unique – isn’t the only way to make these tradeoffs. But I’ve yet to see the RFP that says “We think we want X, but we’re open to the idea that we may really want Y. Help us figure that out.” They always seem to assume the end solution proposed is the right one. There’s never any discovery, learning, or long-term planning baked in. Or if there is, I haven’t seen it. I would openly applaud the company who submitted an RFP that did contain those parameters.

Why does an RFP process lead to increased cost? Without discussion of the many tradeoffs noted above, there’s no room for the technology team to recommend alternative methods to accomplish the end goals. Ideally, you’d like to find a team to help you solve a problem. Instead, most RFPs tend to lead to teams implementing a given narrow solution, which assumes the problem is fully understood and adequately solved by the submitted proposal. It costs more to be wrong and have to go back to first principles than it does to move in a more exploratory way at the onset, where you continuously question core assumptions and validate those assumptions over time as the project unfolds.

More importantly, consulting companies will likely charge more in response to an RFP than for a product that evolves over time. If I attempt to do pricing for an RFP, here is my process:

  1. Do some top-down estimation in my head based on my experience in software development and past projects that I’ve seen. Is this product $300k to $500k? Or is it $1.5M to $2M? Figure out what the primary drivers of cost are likely to be.

  2. Assemble a very small team of designers, developers, and testers to have them help me quickly estimate the work from the bottom-up. Figure out the high-level tasks for the project, and have the team put hourly estimates on those tasks.

  3. Iteratively work with the team to get my bottom-up and top-down estimates to mesh. It’s likely somewhere in between the two. For this illustration, let’s assume that what the team and I believe is a “likely” cost to build a product is $100k.

  4. Once we have that “likely” cost, add a contingency. Because who knows, we could be wrong. (And we likely are.) For argument’s sake, say that contingency is 20%. For most purposes, this adjusted cost estimate – $120k in this case – is the good faith estimate I’d want to provide a client.

  5. But I’m not providing a good faith estimate in most RFPs. I’m either fixed-bidding the work, or doing smaller fixed bids in phases, or doing some sort of not-to-exceed, or setting myself up for a “Well we told you everything you needed to know upfront, so if it costs more than you estimated then that’s your fault.” So I now need to price in the risk of missed client expectations, re-work, and evolving scope. Evolving scope includes all the things the client wants that they didn’t include in the RFP... And how could they know what they really wanted without seeing designs or a working product yet? Let us assume I price the “expectation gap” risk at an additional 50% on top of our $120k. So I’ll bid $180k.

  6. Now I need to list out all of the legitimate assumptions for my estimates (we think we’re only building this for iOS, not Android, we only plan to support IE 9 and above, etc.) as well as ridiculous assumptions that should immediately cause you to fire someone on the spot if they include them in a document they deliver to you. Actual examples from past proposals I’ve seen: “… the client responds in a timely fashion … the project team will have continuous access to development and test services/environments … key business resources are available throughout the engagement.” Seriously? If you include crap like this, you don’t know what you’re doing. It’s like saying, “If the sun doesn’t come up tomorrow, we may fall behind schedule.” Really? Shocking insight! However, if I don’t include it, I can’t “point to it” later when you come back and yell at me about increased costs. So in this crazy RFP world, I have a perverse incentive to include more and more of this crap, and the more outlandish and vague it is, the better for me.

  7. Finally, while I have a price and all my assumptions to guard me, I still have one more ace in the hole. The ever-present “change request” process. I know you don’t know what you really want yet. So I know the requirements will change. They almost have to or you’re likely guaranteed failure upon launch because you can’t possibly know everything months in advance. And I know when you change requirements – then I get to submit a change request. And with each change request, I get to increase the cost.
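The arithmetic in steps 3 through 5 is simple enough to sketch. This is a hypothetical illustration of the markup chain described above – the function name and the percentages are just the example numbers from the text, not a real pricing model:

```python
def rfp_bid(likely_cost, contingency=0.20, expectation_gap=0.50):
    """Illustrative sketch of how a 'likely' estimate inflates into an RFP bid.

    likely_cost:     reconciled top-down/bottom-up estimate (step 3)
    contingency:     good-faith buffer for being wrong (step 4)
    expectation_gap: risk premium for fixed-bid RFP dynamics (step 5)
    """
    # The good faith estimate is what I'd want to quote a trusted client.
    good_faith = likely_cost * (1 + contingency)
    # The bid adds the price of missed expectations, re-work, and evolving scope.
    bid = good_faith * (1 + expectation_gap)
    return good_faith, bid

good_faith, bid = rfp_bid(100_000)
print(good_faith)  # 120000.0
print(bid)         # 180000.0
```

Note that the 50% risk premium compounds on top of the 20% contingency, so the client pays an 80% markup over the “likely” cost before a single change request is filed.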

Final result? A bloated quote for a solution I know you won’t really want, with a process and incentives set up to ensure that we negotiate more than we collaborate. It’s an awful, awful process. All the incentives are wrong. An RFP buys an illusion of certainty, and does so expensively.

A more “sane” consulting company (and there are many of them – not just us) will ask you to engage in an ongoing team-based model. Form a team, have that team deeply learn the problem you’re trying to solve and the business constraints on the project today, and then that team works with you to iteratively solve the problem as quickly and safely as possible. You lose the false appearance of “certainty” that the RFP process provides – and that’s difficult to do – but you gain better risk control, which in the long run will yield lower costs and a better product. A better product also leads to increased revenues, increased user adoption, or whatever your business objectives are.

In addition to the up-front pricing problem, there’s also no opportunity for deep discussions around tradeoffs. There might be three, five, or twenty ways to accomplish some feature, task, or product goal. All of them have different short term and long term costs, different short term and long term implications on user experience, different implications to technical debt and product quality, etc. Those decisions should be ongoing and ever-evolving discussions – not a list of assumptions made by someone who doesn’t know your users, long-term business goals, competitors, or technical competency.

All of this leads to increased risk. RFPs codify early assumptions into contracts. They remove incentive to learn and adapt throughout the project. They make it costly (through change requests and CYA activities) for people to change their mind based on new information. They encourage developers to take shortcuts since pricing is fixed (or feels fixed) and the goals are short-term goals. You want your developers to think long-term, but then simultaneously act short-term with that long-term goal in mind. You want them to behave as if they were spending their own money. You should want them to argue with you, debate tradeoffs, and care about outcomes.

Dave Christiansen, a former “Townie” and close friend, likes to say, “Software development is hard.” He says it out loud and he says it often. He says it because it’s easy for us to forget. It’s easy to think we can know what we want, know the best way to do it, know the unintended consequences of our decisions, know how long it will take to do something that’s never been done before. He’s right. It is hard. Most RFPs seem to assume it’s easy. Building and launching a new product is one of the most difficult tasks you can undertake in business. Building and launching a new software product is even harder.

So I bid farewell to the RFP. I suspect we will still get them, and that I’ll have to come up with a well crafted response that says, “Thank you for the interest in working with us. We’d love to help. Here’s how we can best do that…” While this likely closes some doors, hopefully it opens others.

First published on the DeveloperTown blog.



A project case study

Last summer at DeveloperTown, we chartered a project to have Spencer (one of our interns) along with Chris (acting as product owner and tester) build a new product for an existing client of ours. The work was a bit speculative, but low risk since Chris and I really felt like we knew what the client wanted and needed. Chris and Spencer used a fairly standard vanilla scrum process – leveraging Pivotal Tracker and some other basic tools – to deliver the foundations of the product. Every two weeks or so they would get together to discuss stories, progress so far, and feature tradeoffs. In about three months, they had all the high-level features implemented.

After a very productive summer, Spencer went back to school to finish out his degree. We decided it was time to show the product to our client to get some feedback. At the same time we also decided we should show it to some other potential clients as well. Partly because any feedback is good feedback, but also because we took some risk taking the time to build it out, and two paying clients are better than one. At this point – like any good helicopter project sponsor – I decided to re-engage with the team to get a feel for where we were.

Initial spike completed, let’s make it look good

I was thrilled with how well Spencer and Chris had executed the vision for what Chris and I had discussed as the opportunity. The product worked (minus a couple of trivial bugs waiting to be fixed), and virtually all of the core features were present. It was fantastic. However, it wasn’t “perfect.” From a sales perspective, it needed some sexiness. It worked, but it wasn’t awesome to look at. (You know – out of the box Rails and Bootstrap... what I consider the new “vanilla” web.)

We decided to kick off a short sprint to give the product a minor facelift, close out any remaining bugs, and set up the production environment. Enter David, Steve, and Lisa...

When I say, “kick off a sprint”, you likely suspect that I mean we did something formal. Sadly, we didn’t. I simply asked my partners if I could invest a bit more in getting the final product ready – they said yes – and I pulled three people into the project based on availability. Each of them then proceeded to amaze me with how they approached their work. 

Imagine for a second I come up to you on a Monday morning.

Mike: “Hi Steve! Guess what? I need your help on ProductX. In seven days we’re going to be demoing the product to a couple of possible customers, and I was wondering if you could help make the product more ‘sexy.’” 

Steve: “Um. What? What exactly do you want me to do?”

Mike: “Well, you know… Make it beautiful. Add ajaxy goodness. Do the thing you do. Make it cool. Oh yeah, and the primary platform for one of the customers is iPad, so you should make sure it’s responsive too. You good?”

Steve gives me a blank stare. I’m not an idiot; I know that I’m giving him nothing to work with. He should be imagining knives stabbing my stupid pointy-haired head.

Steve: “Okay. I’ll go talk to Chris and figure it out.”

I walk away and go to Lisa.

Mike: “Hi Lisa! Guess what? I need your help pulling together a logo for ProductX. It’s a product that helps people do foo. I’m guessing Steve will need it soon, and I don’t have that much money to spend on it, so you likely only have a couple of hours. Can you help?”

Lisa:  “Um. What? What do you want it to look like?”

Mike: “Well, you know… a logo. Make it beautiful. Something better than the placeholder text we have there now. Make it foo-y. And really, I can only afford to give you a couple of hours. You good?”

Lisa gives me a blank stare. Again, I know that she’s secretly envisioning Digby (our resident Golden Retriever) pulling me violently out of her house by my leg.  Lucky for me, he’s a creampuff.

Lisa: “Okay. I’ll talk with Steve.”

In my head, I think… “Steve has no idea…” But I trust they will figure it out. I walk away.

At this point, I leave the building. I have to go downtown to meet with another client. While I’m driving downtown, I read an email from David. (Don’t judge me, I could have lied and said Siri read me the email. But I didn’t. I was honest.) David heard he was on this project for a week, he had already met with Chris to discuss some of the stories in Pivotal Tracker, but he wanted to setup some time to get some additional background. I call him while driving. 

David: “Hey Mike. Thanks for calling. Chris gave me the high-level overview, but I’m still not 100% sure what problem this product is solving. Can you talk me through it?”

Mike: “Sure. Let’s start at the beginning. There are really three separate users for this software… [15 minute dissertation on the problem space]… and at that point, they login to view the weekly report. Make sense?” 

David: “Wow, very helpful. That makes a couple of the features make a ton more sense.”

At that point, David peppers me with about five questions that sound a lot like, “But what about X?” “How will they handle Y?” or “Have you thought about Z?” To which, I have my canned answer ready… “David, that’s a great point. We’re really focused on minimum viable product. All of that will come later.” David is happy, and we get off the phone.

That night Lisa emails me an entire page of logos, perhaps 12 to 15 of them. They are good, but I don’t love any of them. I give her some feedback. Steve gives her some feedback as well.

The next day, I don’t talk to anyone about the project. I’m buried in other work. At the end of the day I get another email from Lisa with another round of logos. Lisa has some solid options in this set. None of them are the end game, but certainly good enough for minimum viable product. I tell her which one I want, and she passes it along to Steve. (I later learn that Steve would have picked the same one. That made me feel good.) 

For the rest of the week, I ignore the entire team. I see some regular emails from Chris with status updates. I largely ignore those too due to time constraints. In those emails, I see Chris prioritizing issues, outlining risks, asking questions. I’m scanning the emails, but if there’s not a clear “I need a ruling here!” call out, I archive the email and move on. I know Chris is following up with the team, accepting stories in Pivotal, and giving feedback. 

The next week, I travel to Chicago for my first demo. I’ve literally only glanced at the finished product once when Steve gave me a very quick demo (like 15 seconds) from his desk. I’m going in completely cold. I show the client and I get an actual “Wow.” It was awesome. A few days later, Chris shows the product to our existing client, and he gets “Love how much more user friendly the new software is. Many thanks for getting this improvement to happen. We want to cut over to it in the next two weeks.”

What happened?

We broke a ton of rules in those two weeks:

  • We didn’t hold a planning/kickoff meeting.
  • We didn’t do daily scrums (that I know of).
  • We didn’t have stories for Lisa or Steve. And I’m fairly sure not all of David’s work was done via stories (but I believe most of it was). 
  • We had a helicopter project sponsor.

We broke our own rules, but in the process we met the deadlines and we were still able to get accolades from potential clients. What happened?

Since those two weeks, I’ve been thinking about this a lot. I’m a big process guy. I don’t mean I love process for process sake. I simply mean that I believe process matters. I believe the way teams choose to work is a reflection of what they value, their specific context, and the million lessons learned that emerge over time. I view that as process. I’m captivated by why some practices work so well for some teams, but not for others.

So what happened? We did everything wrong, right? Here’s what I think happened.

  1. I completely and totally trusted the team. I knew Lisa would deliver a logo in the handful of hours I gave her. I knew Steve would crush the UI challenges and deliver something awesome. I knew Chris would manage the risks and communicate with the team. And I trusted that David would make the right technical decisions behind the scenes – even though he was completely new to the code-base. Because of that trust, I simply made sure everyone knew what was needed, knew what the timeline was, and then I stayed out of the way. (The “stayed out of the way” part of that last sentence is likely a little self-congratulatory. In all honesty, even if I had wanted to stick my nose back in the project, I wouldn’t have had the time. For those of you who don’t believe in revisionist history, that sentence right there is what it looks like.)
  2. Chris provided daily communication and prioritization. Chris sent emails. Chris updated Pivotal. Chris answered the numerous questions that I’m sure Steve and David had. He was the model of what you would hope a product owner would be: available and engaged, knowledgeable, deeply cared about the outcome of the project, and active in providing feedback through Pivotal comments, testing, and emails. I was able to put things in motion and walk away, because he was there to manage the fallout of my “help.”
  3. The team cared about the outcome. I have to admit, I was kinda floored when David asked to meet with me to get more context on the overall product and market we were going after. In my experience, it is a rare thing for a developer to “zoom out” like that – especially with a tight deadline – to ask big picture questions. The only reason he did that is because he cared. He could have just read a story in Pivotal, and then done his best to pound out the minimum lines of code to meet the needs of that story. But he didn’t. He wanted to make sure he really understood what we were doing.

Similarly, Steve could have phoned in some of the UI changes and just leveraged the contemporary design metaphor of the day. He didn’t. Some of the touches he added reflect deep thinking about how the user would actually interact with the software. It wasn’t just better looking – it was truly easier to use in some areas. He called me at one point to tell me that there was still a javascript bug that he couldn’t solve when rendered on an iPad. He wanted to make sure I knew about it so I wouldn’t hit it in the demo. He knew he would fix it within a day or so when he could get back to it, but he was looking out for me and protecting my interaction with the potential client. Again, I’ve worked with many developers in my past who would simply not say a word, and hope I didn’t hit it in the demo – if they even cared about the demo.

Agile teams often claim that their process reflects the values of trust, frequent communication, and human caring. However, my experience tells me that those are precious and rare things. What the team was able to accomplish – from Spencer’s early foundational development work to Lisa’s logo, in the time and budget they did it – is amazing. At the points where our internal agile process helped, we used it. When it didn’t, we didn’t. And it worked because the process wasn’t what made us successful. It was trust, frequent communication, and human caring.

Imagine that some jack-ass (me) walks into your office and simply says, “Make it beautiful” and walks away. I can hear business analysts and testers around the world crying out in pain at the idea. It’s not documented well enough. It’s not testable. Bah. Steve and Lisa know what beautiful is. David knows what technology is production-ready and what technical debt we can live with. Chris knows what the client needs, and what’s fluff. At this scale, they don’t need specifications. They care, they ask, and they know their craft.

I wouldn’t run a project that way for six months. And if you remember, we didn’t. Chris and Spencer spent the first few months working in our run-of-the-mill Scrum process. They delivered a ton of value. But it’s refreshing to know that when the team needs to self-organize and run at a different pace, it can.


The MVP Reading List

I regularly get asked for book references. From employees new to DeveloperTown, to founders who walk in the door and aren’t sure where to start, to people who look at what we do and how we work and ask how we figured all that stuff out. Some of it we figured out, much of it other people figured out and wrote down. We were smart enough to read it, apply it, and adapt it to the way we work. You can too.

Here is my take on the books that best capture how we think about product development. At DeveloperTown we’re big on iterative development (or as Ries puts it – small batches), customer development, validated learning, clean and simple design, and clean and simple business documents. The following books have been very influential in how we build our products:

The Lean Startup, Ries
The Four Steps to the Epiphany, Blank
The Entrepreneur’s Guide to Customer Development, Cooper and Vlaskovits
Business Model Generation, Osterwalder and Pigneur
Rework, 37 Signals

Because we often aren’t just building products, but are also helping founders build companies (and sometimes building them ourselves), I also recommend the following to help navigate venture capital, starting to understand what it’s going to be like working with investors once you get their money, and building and running a business:

The Art of the Start, Kawasaki
Venture Deals, Feld and Mendelson
Unfunded, Carter 
Traction, Wickman
Founders at Work, Livingston

When it comes to the specifics of building our software, we use our own flavor of agile development – but in its early days it was strongly influenced by Scrum. We develop primarily using Rails and an army of controlled but ever-changing frameworks to go with it, we do everything in the cloud, and we test constantly (much of it automated, some of it manual, a bit of it with users). For those who want to know more about our development methodology and where it comes from, I recommend:

Lean Software Development, Poppendieck and Poppendieck
Agile Software Development with Scrum, Schwaber and Beedle
User Stories Applied, Cohn

That list doesn’t really touch on some of the development and testing principles, but that’s just because I’m not really aware of any good books out there that capture those concisely. I could recommend a bunch of books from The Pragmatic Bookshelf, and each of them would have a piece of it. But my experience is that much of that knowledge is captured in blog posts, on forums, and in the rich debate that happens between team members when they go to solve a problem. We post about some of those debates occasionally.

Finally, if you’re responsible for getting software out the door, there is another list you might be interested in. It’s the list on getting things done, managing the process, and shipping great software – some of the best works on project management I’ve read (and I’ve read a lot on the topic):

Ship It!, Richardson and Gwaltney
Making Things Happen, Berkun
Manage It!, Rothman
Release It!, Nygard

Some project managers may look at that list and say Release It! doesn’t belong there. They are wrong – it does. On small/agile projects, if the person leading the project isn’t constantly thinking of those things, then it’s very likely that no one is. For us, project management is the art of alternating between the strategic and the tactical, balancing tradeoffs, and communicating progress. To do that, you need to be knowledgeable (not expert – just knowledgeable) about all aspects of the product.

(I originally wrote this post on the developertown.com blog in 2012. We've upgraded the website - killing the old blog - so I decided to repost here.)


Three alternative uses for test management software

If a quality assurance team has implemented a sophisticated test management system, then its members have no doubt experienced some of the significant benefits this technology can provide, including syncing on-site and offshore teams, creating and storing automated test scripts and sharing testing resources across the entire company. Once these features have been digested and fully implemented, QA leaders might wonder how they can further take advantage of their test management software to wring the most value out of this high-performance utility. With a little creativity, QA management can discover innovative uses for this technology, optimizing the performance of software testers and developers.

Get more value out of test management platforms

Of the many ways that quality assurance professionals can leverage their test management software in new and exciting fashions, three stand out as the most surprising:

  1. Cultivate testing innovation - Sometimes testers and managers can hew so closely to accepted QA dogma that they run the risk of stagnation. Teams and technology change, and what once worked effectively in the QA environment may no longer offer the same benefits. The addition of new software testers presents an opportunity for managers to reconsider how to best utilize their personnel and resources. Quality assurance expert Rikard Edgren of Qamcom Karlstad noted in a report he authored for The Testing Eye that infusing standard processes with some creativity can pull testers out of disruptive production ruts.

    By experimenting with new testing techniques, QA management may come across a new approach that makes the best use of current personnel. For instance, pair testing - where two testers collaborate on a single process, with one running scripts while the other documents the results - may prove to be an effective method for those staff members. QA teams will need to have access to a flexible and comprehensive management system that can quickly and conveniently create, store and share test scripts and reports in order to try out new approaches to testing.

  2. Leverage mind mapping - There is no single best way to attack a particular piece of software. Typically, a balanced diet of disparate testing methods and tools offers the most effective way to thoroughly check for errors or performance issues. Because the approach is not set in stone, some testers find it is beneficial to come up with new testing methods as they mentally develop. Encouraging these processes and sharing successful methods and techniques with other team members can be extremely challenging without an overarching management platform.

    Many testers have found that creating mind maps to generate graphical representations of their mental testing concepts can help them structure abstract or unfocused ideas, as well as better communicate these thoughts to team members for further use. A high-quality test management system can accommodate the creation of the necessary materials as well as facilitate the dissemination of successful methods across the team.

  3. Optimize test sprints - QA veterans - particularly those who have served on agile projects - are by now extremely familiar with test sprints, when specific software features are examined in a short window of time. Software engineering researchers Shlomo Mark, David Ben-Yishai and Guy Shilon explained on Testuff that QA can get the most value out of these processes by approaching the code in different ways and seeing what methods prove to be the most effective at identifying flaws and errors. Because these production windows are often extremely hectic, keeping track of which tactics worked and which fell flat can be very difficult. However, QA management can record the success rate of different approaches and scripts within a test management system, giving team members a roadmap for optimal test sprints moving forward.

The above post is a guest post provided by Zephyr.


STARWEST 2013 Slides Posted

Just posted the slides for both of my STARWEST talks from earlier this week:

All feedback welcome!