What success metrics would you set for the reactions team at Facebook? - Facebook Product Manager Mock Interview

Mock Interview

Product Manager

21st December, 2020

Product Execution

Leaning towards No Hire

This is a Facebook Product Manager mock interview. The expert conducts a product execution interview, asking "What success metrics would you set for the reactions team at Facebook?" to assess the candidate's product skills.

Written feedback for the candidate

What went well?

  • Clear structure and format, easy to follow
  • A good tie-in to FB mission, clear articulation of overall goal from a user and business perspective
  • Reactors/creators identified as discrete user groups with different needs. Prioritization of reactors explained well.
  • Relevant identification of metrics. Prioritization of a key metric well explained
  • Launch execution: decent fundamentals and a sensible triage between a gradual ramp-up and a full-scale A/B test.
  • Unexpected results/contradicting metrics: diagnosis could use improvement (details under "Where is the room for improvement?"), but treatment and management were done well, with good tradeoff analysis. Would recommend making the tradeoffs very explicit: rather than "tracking engagement", make it specific, e.g. the number of actions tracked across all relevant engagement surfaces (reactions, shares, and comments); a minimal sketch of such a metric follows this list.
  • A good demonstration of acceptable-loss-threshold principles. Would recommend giving specific examples of what could drive this loss in addition to why it might be acceptable (e.g. a learning curve, or hesitation to use a new feature until others are seen using it).
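
To make the "be specific about the metric" suggestion concrete, here is a minimal sketch of an explicit engagement count. This is not from the session, and the event-log schema (user_id, event_type, ts) is hypothetical.

```python
# Hypothetical sketch: instead of a vague "engagement" number, count
# concrete actions per user per day across the surfaces named above
# (reactions, shares, comments). The event-log schema is assumed.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2, 3],
    "event_type": ["reaction", "share", "reaction",
                   "comment", "reaction", "comment"],
    "ts": pd.to_datetime([
        "2020-12-21 09:00", "2020-12-21 10:30", "2020-12-21 11:00",
        "2020-12-21 12:15", "2020-12-22 08:45", "2020-12-22 09:10",
    ]),
})

ENGAGEMENT_EVENTS = {"reaction", "share", "comment"}

eng = events[events["event_type"].isin(ENGAGEMENT_EVENTS)].copy()
eng["day"] = eng["ts"].dt.date

# Explicit metric: number of engagement actions per user per day
daily_actions = (
    eng.groupby(["day", "user_id"])
       .size()
       .rename("actions")
       .reset_index()
)
print(daily_actions)
```

A metric defined this way can be reported per surface as well as in aggregate, which is exactly what makes tradeoffs (reactions down, comments up) visible rather than hidden inside a blended "engagement" number.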

Where is the room for improvement?

  • Metric setting was overall well done; most of this is constructive:
    1. The explanation of the importance of mapping certain metrics to sentiment (positive/negative) was slightly shallow
    2. Linking metrics to retention can be a powerful way to demonstrate your experience and knowledge of how different metrics can be effective proxies for impact
  • Launch/testing execution could use some improvement:
    1. Would recommend reading up a bit more on A/B testing fundamentals: power calculations, random vs. stratified sampling, decisions on pre-allocating control/variant groups vs. dynamic allocation, the introduction of bias and its implications, and common pitfalls such as weekday/weekend patterns (a minimal power-calculation sketch follows this list). This might be especially important in the context of a technical/infrastructure-focused PM team.
    2. Diagnosis of unexpected results: this was done to a basic extent, but there is an opportunity to go deeper. Some assumptions were slightly risky (e.g. that the reduction was due to the new feature); given the chosen experiment setup, it could have simply been selection bias, or a quirk of the setup (some users can see the new reaction but not use it, etc.). There was also a missed opportunity to reuse the metrics identified in the first part of the interview, which would have demonstrated overall awareness of how the ecosystem works: overall reactions usage went down, so what did we see in other forms of emotional expression (shares, comments)?
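
To make the power-calculation concept concrete, here is a minimal sketch of sizing an A/B test on a proportion metric such as daily reaction rate. This is not from the session; the baseline rate, target lift, power, and significance level are all assumed for illustration.

```python
# Minimal pre-experiment power calculation for a proportion metric,
# e.g. "share of viewers who react on a given day". All numbers assumed.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.20   # assumed: 20% of viewers react on a given day
target_rate = 0.202    # assumed: we want to detect a 1% relative lift

# Cohen's h effect size for the two proportions
effect_size = proportion_effectsize(target_rate, baseline_rate)

# Solve for the required users per arm at 80% power, 5% significance
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Users required per arm: {n_per_arm:,.0f}")
```

The takeaway: the sample size, and therefore the ramp-up schedule, falls out of the baseline rate and the minimum lift worth detecting rather than being chosen ad hoc, and small relative lifts on an already-large metric require surprisingly large arms.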

How was your experience with Prepfully?

5 / 5
Review of the expert

How useful was the session?

5 / 5

Tell us how it went

The expert arrived on time

Yes

The expert asked relevant questions

Yes

It felt like a real interview

Yes

I enjoyed the process

Yes

How would you rate your expert's communication skills?

5 / 5

How would you rate the expert's expertise on the topic?

5 / 5

How likely would you be to refer your friends to Prepfully?

10 / 10

Do you have any message for your expert?

Thank you very much for your time and constructive feedback! It has certainly helped me uncover my blind spots and see how I can go from good to great.