Three Levers In Product Development

As a product manager at startups, I am often put in charge of overseeing engineering and delivery schedules.  Along with the number of hats I wear, I am fortunately afforded more levers than PMs typically have.

Over the years, I’ve come to realize that there are three main levers in building and launching products: features, quality, and schedule.


Features

  • For a new product, decide on the minimum set of features needed to determine whether there is product-market fit.
  • For an existing product, assess the incremental set of features needed to meet immediate customer needs and differentiate from the competition.

In both cases, it’s important to be “ruthless” in cutting features that are not crucial.  In my last job at Reverb, I was tasked with launching the iPhone app as a follow-on release to the iPad version already in the market.  It was both a new product (in terms of device/form factor) and an existing product (server-side functionality remained mostly the same).  After considering iPhone usage patterns, I decided to cut an edit-heavy feature (creating and editing a collection of articles).  That significantly reduced the project scope and simplified the user experience.


Quality

At launch, I always make sure that there are no showstoppers/crashes (P1) or critical bugs (P2); the latter include illogical user flows, typos, and functionality not working as described.  Assessing the criticality of each bug is mainly the role of the product owner (in Scrum speak), supported by a QA colleague.


Most Internet software companies have adopted Agile.  At the end of each sprint, there should theoretically be a releasable candidate, but that’s not always the case.  While I respect the philosophy behind Agile, you really need to build in time for integration testing and device-compatibility testing, particularly for mobile apps and responsive websites.

With all that said, holding quality constant, you are really down to two levers – features and schedule.

  • Schedule – it’s not in the spirit of Agile/Scrum to set a release date in advance, but it’s often necessitated by market timing (e.g. consumer electronics targeting the Q4 retail channel, promotions in time for Dads & Grads in June, etc.).  When the schedule is inflexible, push back on new feature requests and actively manage the scope of committed features.
  • Features – when a feature is crucial and must be included in the next release, the schedule becomes a secondary consideration.  When iOS 8 and the iPhone 6 were released, we ran into Auto Layout issues with an early release of Xcode 6.  It took us several days to resolve them and support both iOS 7 and iOS 8 devices, so I held up the release and “ignored” our original launch schedule.

In summary, you have three levers, but you cannot be prescriptive about all three at once.

I would love to hear what other considerations or levers you see in terms of building and shipping products.


Intuition Vs. Data-Driven



Much like the age-old debate on nature vs. nurture, there has been much discussion on whether product management should be intuition-driven or data-driven.  I’ve often been asked what type of product manager I am – the intuitive (a.k.a. fly-by-the-seat-of-my-pants) type or the data-driven (a.k.a. don’t-do-anything-unless-the-data-justifies-it) type.

Product management requires a healthy mix of product intuition and data-driven decision making.  You have to find a balance.  Product intuition about what to build, whether you have product-market fit, and whether a feature belongs in the MVP is forward-looking.  Data-driven decision making, e.g. whether to deprecate a feature because of low usage or whether an A/B test shows a statistically significant improvement, is backward-looking.

Ingenuity and intuition are necessary to come up with differentiating features that make your product stand out in the marketplace.  Looking at data closely helps you make informed decisions about an existing product.

My first job out of Kellogg’s MBA program was a product role in the Marketing group at Netflix.  My job involved running pricing tests, redesigning the free-trial funnel, and shepherding a number of non-member site changes.  I ran dozens of A/B tests and analyzed the results before recommending any site change.  In that role, I made decisions primarily based on data: because pricing and user acquisition were so closely tied to revenue, the job involved a lot more data analysis than defining new features.

Fast-forward nearly a decade, and I was in charge of the TechCrunch redesign in 2012.  To begin, I re-imagined how the tech blog could be organized, how videos could be brought much closer to the rest of the site, how the river (of stories) could be less monotonous, and so on.  I had a charter to keep engagement up (retain or increase all page views) as well as to bring a modern feel to the site (making it both responsive and highly performant).  The major redesign required a lot of thinking about how to do things differently, improving reader experience and writer productivity at the same time.  Once I had pulled the data on engagement and traffic, the rest was making product calls every step of the way.

As for me, I adopt a blended approach, tilting slightly toward the intuition side.  I don’t believe in making calls purely on intuition.  Even my personal hero Steve Jobs made mistakes with the Lisa and NeXT, which lacked product-market fit.  However, basing everything purely on data means you are hampered from experimenting.  This is particularly crucial when you are entering a new market or defining a new feature.

Apple and other like-minded companies use an intuitive approach to create products, believing consumers don’t know what they want.  Henry Ford, on innovation, said: “If I had asked people what they wanted, they would have said faster horses.”  Other successful tech companies, including Amazon and Netflix, fervently adopt a data-driven approach to decision making.  Both can be successful.  Having worked at a number of tech companies, I’d like to share scenarios in which one approach is more applicable than the other.


Intuition

  • You are developing an MVP (minimum viable product).  Make the hard call on what is necessary in your first version.
  • Usability – use beta users or existing customers for feedback once the project reaches the “feature complete” milestone.  However, if your gut keeps telling you that something doesn’t work, trust your instinct.  You won’t get a good test result from a feature you yourself don’t believe in.

Data Driven

  • You just launched a new feature.  Track its engagement for a week, a month, three months.  Don’t over-react when a feature is not doing well initially; after about a month for most consumer products, you’ll see a steady pattern.
  • Pricing changes – any revenue-impacting changes should be tested via A/B tests.
  • Product design – I am not talking about visual design.  Show mocks to friends, coworkers, and family to get feedback.  It’s much easier to get, and to swallow, harsh feedback at the mockup stage than after you’ve invested time coding an iOS app.
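For the pricing and revenue-impacting changes above, the “statistically significant” bar is commonly checked with a two-proportion z-test.  Here is a minimal sketch; the function name and the conversion numbers are hypothetical, and a real test program would also plan sample sizes up front and correct for multiple variants:

```python
import math

def ab_test_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test: does variant B's conversion rate differ
    from control A's at roughly 95% confidence (z_crit = 1.96)?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, abs(z) > z_crit

# Hypothetical pricing test: 4.0% vs. 4.6% conversion on 10,000 trials each
z, significant = ab_test_significant(400, 10000, 460, 10000)
# z ≈ 2.09, significant is True
```

The same 0.6-point lift on only 1,000 trials per arm would not clear the bar, which is why the bullets above stress waiting for enough data before making the call.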

I would love to hear from fellow product folks about when you should trust your instinct and when you should examine data thoroughly before making a product call.