Neil McCarthy, Product Manager, and Ken Pascual, Data Analyst, both at Yammer, gave a presentation for the AIPMM webinar series on April 11, 2014.
There was great participation and many questions asked during the Q&A. Following are Neil’s answers to some of the additional questions.
Q: What do you look for in an ideal entry-level PM candidate? Do you feel that technical competencies are key to being a successful PM?
A: The ideal PM candidate sets a goal, thinks of a minimal feature idea that has a chance of accomplishing the stated goal, and correctly describes how to measure whether the feature was successful.
PMs should know enough about technology to be able to speak confidently with engineers about what’s possible. Under no circumstances should PMs try to overrule engineers’ technical decisions or presuppose the technical solution to a proposed problem.
Q: Can you speak to whether or not validation bias plays a role in Yammer’s PM function?
A: We don’t think about validation bias much. PMs at Yammer do predict the outcomes of their feature tests before they ship the tests. This should minimize the likelihood that validation bias will be used to justify failed tests.
Q: How do you handle the interpretation and decision-making step if you’re measuring several things? I would imagine that it’s best to discuss before the test the relative importance of the different metrics (or even concoct a multiple-input objective function).
A: At some point, we have to make the decision that we think is right for the product. Designing an equation to determine whether tests win wouldn’t work for 2 reasons: (1) we’d need to reweight the equation for each test, which is too costly, and (2) the results tell a story and the story is what really helps us make the ship/no ship decision.
Q: It’s more complicated than that because one also has to factor in signup rate.
A: I don’t understand the question.
Q: Are these p-values all vs. the control as opposed to measuring how significantly different the options are from each other?
A: Yes, we measure each variation against control.
Q: What is the correct statistical technique to use if instead of A/B testing, you’re doing A/B/C/D/…/n testing? I assume you somehow generate a half-matrix of p-values…
A: The correct statistical technique is to measure each variation’s performance against control. The variations’ relative performance against each other can be determined by comparing their performance against control.
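As a concrete sketch of that per-variation comparison, the example below runs a two-proportion pooled z-test for each variation against control. The data, variation names, and the `two_proportion_p_value` helper are illustrative assumptions, not Yammer’s in-house tooling:

```python
from math import sqrt, erfc

def two_proportion_p_value(conv_c, n_c, conv_v, n_v):
    """Two-sided p-value for a variation vs. control (pooled z-test)."""
    p_c, p_v = conv_c / n_c, conv_v / n_v
    p_pool = (conv_c + conv_v) / (n_c + n_v)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_v))
    z = (p_v - p_c) / se
    return erfc(abs(z) / sqrt(2))  # P(|Z| >= |z|) under the null

# Hypothetical results: control plus three variations (conversions, users)
control = (200, 2000)
variations = {"A": (230, 2000), "B": (205, 2000), "C": (260, 2000)}

# Each variation is compared only against control, never pairwise
for name, (conv, n) in variations.items():
    p = two_proportion_p_value(*control, conv, n)
    print(f"Variation {name}: p = {p:.4f}")
```

Relative performance between variations can then be read off from how each one fared against the same control baseline.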
Q: Do you use any third-party tools for your A/B testing (for example, Optimizely), and how did you decide on the best one to use?
A: No. Since we implemented our data-informed processes before most of the existing third-party testing tools existed, we had to build our own. I’ve heard good things about Optimizely, MixPanel, and Mode Analytics.
Q: How do you develop new metrics and indicators?
A: For new feature metrics, we just have the engineers on a related project instrument new events to log. For new core metrics, our data science team will embark on studies to try to find new metrics that are highly correlated with existing metrics that we value.
Q: Wouldn’t they have known about the ‘groups’/’engagement’ linkage anyway (from other analysis)?
A: I don’t understand the question.
Q: How do you use qualitative data like feedback comments?
A: They are inputs into the top of our ideation funnel. We are very careful not to overreact to the vocal minority. It’s especially dangerous for us, because our vocal minority mostly consists of IT admins and business owners at companies that have paid us. We feel that optimizing for our users is more important than optimizing for IT admins and business owners, which is why we value engagement metrics over other options.
Q: Can you show us the inhouse data tool?
A: Too late, I suppose. It’s nothing too flashy. It has a place to type and save queries and it can show data in tabular and graphical forms.
Q: How do you avoid analysis paralysis, especially when running continuous A/B testing?
A: We picked one analytical goal: long term retention. If you don’t have a clear idea of which metrics are important and which aren’t, you will get paralyzed.
Q: How does PM-ing differ between startups/”building product” companies and more mature/”existing product” companies?
A: Probably many ways. One that comes to mind is that we have to worry about negatively affecting our millions of existing users by shipping a feature that new users positively react to. Startups usually don’t have to worry about existing users very much.
Q: specific question for Neil — does revenue ever come into the picture as a metric for PMs?
A: The only time revenue affects my job is when a salesperson lobbies for a feature or bug fix that will help them in their sales cycle.
Q: Are you guys focused on correlation or spend resources on causation by tying data to understanding the market?
A: Multivariate testing uses the scientific method and thus does a pretty good job of proving causation.
Q: In small teams, can I still be a data-backed product manager with only basic statistics skills? Sometimes in big data, if you don’t dive into the data, overall results mask the underlying “real” metrics.
A: Start at a high level. Pick the right metric. Make sure that you measure your feature’s effect on that metric. If you can do those things, then you’re off to a good start. The hard part is distilling the goal of your product down to a single metric.
Q: How did you measure retention? Someone could use the service very infrequently…so what’s your cutoff?
A: We can measure retention over any period of time. We chose 3-month retention, correlated to Days Engaged, as the right core metric for our product.
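As an illustration of period-based retention, here is a minimal sketch that counts a user as retained if they were active at least once in the window three months after signup. The cohort data, the `retained` helper, and the 30-day-month simplification are assumptions for illustration, not Yammer’s actual definition:

```python
from datetime import date, timedelta

def retained(signup, activity_days, months=3):
    """True if the user was active at least once in the window
    starting `months` months after signup (30-day months assumed)."""
    window_start = signup + timedelta(days=30 * months)
    window_end = window_start + timedelta(days=30)
    return any(window_start <= d < window_end for d in activity_days)

# Hypothetical cohort: user -> (signup date, days with activity)
cohort = {
    "u1": (date(2014, 1, 1), {date(2014, 1, 2), date(2014, 4, 5)}),
    "u2": (date(2014, 1, 1), {date(2014, 1, 3)}),
}

rate = sum(retained(s, a) for s, a in cohort.values()) / len(cohort)
print(f"3-month retention: {rate:.0%}")
```

The same function works for any window length, which is what makes it possible to choose a cutoff (like 3 months) that correlates well with the core metric.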
Q: It seems like the vision is a pretty fundamental concept, though it might sound a little esoteric: how concrete do you need this vision before you even begin with the ideation process?
A: Very concrete.
Q: Did you find any influence of non-organic users? I mean, users that were, for lack of a better term, forced into using the software (their company signed up and the employees have to use it)
A: The change was not relevant to non-organic (i.e. viral) users. Since they enter the signup flow having already verified their email address, there is no need to change the email verification step order for them.
Q: Can you recommend some materials on statistics important to PMs, for further study?
A: A better use of your time would be to read more examples and descriptions of data-informed product management. There are some good posts on the Optimizely and Etsy blogs.
Q: Also, does Yammer use any specific analytical tools to help with its decisions (i.e., Google Analytics, etc.), or does the in-house analytics team handle that?
Q: Potentially funny question: regarding cover letters from candidates. I’ve heard about what stands out in interviews, but what about in cover letters (if they are read)?
A: I don’t read cover letters or resumes, although I imagine our recruiters do. We give candidates homework to complete, which I always read. After recruiting passes a candidate to us, we make our first go/no go decision based almost exclusively on the homework.
Q: How do you evaluate the total time and cost spent on each feature with regard to lift?
A: We always build as fast as possible. We think in terms of impact while we’re thinking about features, but not impact per engineering resource or anything like that. Too much of our roadmap is governed by our product vision to be this clinical about our engineering ROI.
Q: Please define “days engaged” again – what are you measuring here? Thanks!
A: The average, across a group of users, of the number of days in a given time period on which each user engaged at least once.
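That definition can be made concrete in a few lines. The activity log, user names, and `days_engaged` helper below are hypothetical, assuming each user’s activity is already reduced to a set of calendar days:

```python
from datetime import date

# Hypothetical activity log: user -> days with at least one event
activity = {
    "alice": {date(2014, 4, 1), date(2014, 4, 3), date(2014, 4, 10)},
    "bob":   {date(2014, 4, 2)},
}

period = (date(2014, 4, 1), date(2014, 4, 30))

def days_engaged(days, start, end):
    """Distinct days within the period on which the user engaged."""
    return len({d for d in days if start <= d <= end})

# Average days engaged across the group for the period
avg = sum(days_engaged(d, *period) for d in activity.values()) / len(activity)
print(f"Average days engaged: {avg:.1f}")
```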
About Neil McCarthy
Neil joined Yammer as a Solution Engineer and soon after made the switch to Product Management. He immersed himself in Yammer’s data-informed approach and has since taken an interest in spreading the word. He enjoys jogging and beer, usually not at the same time. https://www.linkedin.com/in/neilmccarthy