This week Product School hosted Sam Stone, Product Lead at Opendoor, for an #AskMeAnything session. Sam answered questions on everything from product culture and building great products to user research in 2021 and the importance of backtesting.
Sam is the Head of Product for the Pricing Group at Opendoor – a late-stage startup that uses algorithms to buy and sell homes instantly. Sam is passionate about building products at the intersection of finance and machine learning.
“What’s the key to building a great product?”
I’m going to constrain my answer and talk about building a great product in the algorithm/ML space. I think there are 3 things that stand out:
- Know your user and your data – they’re different. Don’t just view the world through data, talk to users, understand their needs qualitatively and emotionally, and only then use data to hone in on certain aspects of their pain points.
- Be very clear about what impact the algorithm/ML product is trying to have and how you will measure success. For example, if you’re trying to increase click-through-rate, that will lead to a very different product than trying to maximize conversion/sales.
- Be open-minded about how you interact with users. It’s easy to build algorithm/decision products that fail to ask for user input, or allow users to change their parameters because that simplifies things from a data science perspective – but it doesn’t improve the user experience!
You might be interested in: Building Products for Fraudsters
“Does your team have a Product Operations function? If so, what are two things that org did that helped the product team as a whole?”
Yes, we have a Product Ops function at Opendoor and it’s hugely valuable.
- They play a key role in soliciting and prioritizing feedback from our operators (in my group, people we call “Pricing Associates”) for how to improve our products/systems. This is important for our quarterly planning processes.
- They help to triage bugs from ops to product and to communicate new features/changes from product to ops. This is important in our weekly sprints.
“How has 2020 and the rise of remote work changed the way your team approaches user research?”
It’s changed it a lot, and, in my opinion, it’s made it harder – although no less important.
Prior to Covid-19, we sent all of our new hires to one of our operating markets for a few days: touring homes we might buy with our ops inspection team, doing home valuations with our pricing ops team, and shadowing calls with our customer support team. Through this, even the most backend-focused engineers would interact with many end customers (home sellers).
Now that Covid-19 has happened, we can’t travel, and so we have to rely on virtual means, largely asynchronous, to deliver this type of end-customer experience to our teammates (both new and experienced). We’re doing more call and video recording of customer interactions, more user panels, and more user testing (both moderated and unmoderated). To be honest, it’s less efficient than sending teammates to spend a week “on the ground” with customers and operators, but it’s still a high ROI investment.
Check out Product Management Skills: User Research
“How does your experience in different domains (finance, tech) inform your perspective in building products?”
There are a few ways my past work in finance has informed my current work in tech, and my relationship at Opendoor with our finance team:
- One of the big differences between typical finance investing and data science is the use of backtesting. A lot of traditional finance involves laying out a clear and logical rationale for an investment, but it won’t involve backtesting – partially because it’s hard to backtest certain types of illiquid investments, and partially because backtesting just isn’t in the tool-kit of many traditional finance professionals. Backtesting is a huge part of what we do at Opendoor, so we’ve had to spend a good amount of time helping our finance partners understand the nuances of it and why we will only make production algorithm changes if they show improvement in backtesting. In other words, even if a change seems logical, if we can’t demonstrate that improvement via backtesting, we’re probably not going to launch it.
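The launch gate described above – ship a change only if it beats the incumbent on historical data – can be sketched in a few lines. This is a hypothetical illustration, not Opendoor’s actual system: the models, error metric, and toy data below are all made up for the example.

```python
# Hypothetical pricing-model backtest: compare a candidate valuation model
# against the baseline on historical sales, and only "ship" if it improves.

def backtest(model, homes):
    """Mean absolute percentage error of predictions vs. actual sale prices."""
    errors = [abs(model(h["features"]) - h["sale_price"]) / h["sale_price"]
              for h in homes]
    return sum(errors) / len(errors)

# Toy historical sales (illustrative numbers only).
history = [
    {"features": {"sqft": 1500}, "sale_price": 300_000},
    {"features": {"sqft": 2000}, "sale_price": 410_000},
    {"features": {"sqft": 1200}, "sale_price": 250_000},
]

baseline_model = lambda f: f["sqft"] * 195             # naive $/sqft heuristic
candidate_model = lambda f: f["sqft"] * 180 + 32_000   # hypothetical new model

baseline_err = backtest(baseline_model, history)
candidate_err = backtest(candidate_model, history)

# The gate: a seemingly logical change still doesn't launch
# unless it demonstrates improvement on the backtest.
ship_candidate = candidate_err < baseline_err
```

The point is the decision rule at the end, not the models themselves: the backtest, not the narrative rationale, decides whether a change reaches production.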
- Interpretability of algorithms: one thing that I think traditional finance does really well, and where Data Science as a function could learn from finance, is building models that are interpretable to laymen and have a good UI. In finance, there are well-established conventions around using spreadsheets as a UI, so that all stakeholders can see how forecasts are made. In Data Science, the convention is more to report accuracy metrics and then assume stakeholders will trust the model, even if they can’t see its “guts”. I think the former strategy tends to be better, and it means we should push for more interpretability in production models at tech companies.
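The spreadsheet-style transparency described above can be mimicked in code by reporting each feature’s contribution to a forecast rather than just an accuracy metric. This is a minimal sketch with made-up weights and features; the names are illustrative assumptions, not any real production model.

```python
# An interpretable linear valuation: stakeholders can audit the forecast
# by seeing how much each feature contributed, spreadsheet-style.

weights = {"sqft": 150.0, "bedrooms": 12_000.0, "year_built": 85.0}  # made up
intercept = 20_000.0

def predict(features):
    """Price forecast as intercept plus per-feature contributions."""
    return intercept + sum(weights[k] * features[k] for k in weights)

home = {"sqft": 1800, "bedrooms": 3, "year_built": 1995}
price = predict(home)

# The "guts": each feature's dollar contribution to the forecast.
breakdown = {k: weights[k] * home[k] for k in weights}
```

Contrast this with a black-box report (“MAPE is 4.2%, trust us”): here a stakeholder can see that square footage drives most of the estimate and challenge any individual weight.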
You might also be interested in: How to Transition From Finance to PM
“What about the intersection between ML/ Data science Engineers vs Product Managers? How much analytical background should a good PM have for managing a data-driven product?”
Here’s how I think about the different roles:
- Data scientists need to push the frontier in terms of research. They need to (i) understand users’ behavior so they can make causal connections and (ii) be able to rapidly test ideas for modeling those causal connections.
- ML Engineers (i) provide the platform and tools that allow the DS to rapidly test ideas in code and (ii) allow successful ideas to be deployed in production, at scale, and with high reliability. Our ML engineers also need to have a deep understanding of users, especially around the subtleties of user data and translating that from prototype to production.
- PMs are like referees – they help set the environment and processes by which a team succeeds. Environment = bringing in the voice of the customer and articulating the business needs so that all team members, regardless of function, are motivated and generating new ideas. Processes = having a clear way to prioritize new ideas and decide which to pursue and why. If the prioritization process is good, then the process resolves which ideas the team works on – it’s NOT the PM acting as “mini CEO” who makes an arbitrary, personal decision.
“What is the key to balancing products that enable company efficiency/scalability and serve as a marketing tool to generate leads, such as the Opendoor evaluation tool?”
This is specific, but as a risk-taking business (we hold billions of dollars of homes on our balance sheet), Opendoor needs to closely tie risk (e.g. if the market moves south and homes become worth less) and marketing. This involves close collaboration between our marketing, finance, and pricing groups, where both pricing and marketing make forecasts that our finance group then uses to tie together outputs (inventory size and composition) and inputs (marketing dollars spent to achieve that inventory).
“When do you implement an algorithm that changes people’s daily activities and reduces time?”
I think this would generally relate to a new feature of an internal tool or an algorithm that allows our internal operators to work more efficiently. How to spend the “extra” operator time is generally left to our Ops leads. At Opendoor we’ve been lucky enough to be growing quickly enough that this hasn’t been much of an issue – there’s enough new volume being added that our ops team has remained busy even as we’ve released products that make them more efficient.
Check out: 10 Tools to Make a PM’s Life Easier