Sifting your User Research
So when we last left off, we were meeting with every meetup or conference organizer in the Metro-Nashville area. And then I emailed some in San Francisco for good measure.
When we’d meet, I had a giant list of questions that I’d generally attempt to cover during our chat. If we veered off into an uncharted area, no problem, but I typically tried to cover some of the same topics with each and every person I spoke with so that later I could compare and contrast responses. After each meeting, I’d “brain dump” everything we talked about into a text document. At the end of this I had over 20 pages of “notes” from organizers here in Nashville.
Welcome to Jennifer’s crash course in user research…
- Did everyone say the same thing? Nope.
- Did everyone have exactly the same issues? Nope.
- Did you really expect every single person to have the exact same needs? Well, no…but that would definitely have made this easier.
So how do I take all of this broad feedback and use it to come up with meaningful answers?
First I looked for trends…
- What was something I heard from more than 3 or 4 people? Something that kept coming up without my asking or prompting?
- What was the response when I broadly outlined my idea? Tepid? Interested? People will rarely, if ever, ACTUALLY tell you they don’t like your idea. They’ll just…kind of nod and make noncommittal statements instead of discussing how they themselves would use it.
Now what are the trends amongst the people who offered similar suggestions or had similar responses?
- Is it one specific kind of person or kind of meetup/conference?
- Are issues noticeably split amongst different groups?
- Are these suggestions/trends actionable, solvable, and from large enough demographics?
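If you like working in code more than in 20-page text documents, this trend-spotting step can be mechanized. Here's a minimal sketch: the interview data, organizer names, and theme tags are all hypothetical, and it assumes you've already hand-tagged each conversation's notes with a set of themes. It simply counts how many distinct people raised each theme and flags the ones that cleared the "more than 3 or 4 people" bar.

```python
from collections import Counter

# Hypothetical data: each interview's notes, hand-tagged with themes.
# Names and tags are invented for illustration.
interviews = {
    "organizer_a": {"how-to-ask", "pricing", "who-to-ask"},
    "organizer_b": {"how-to-ask", "scheduling"},
    "organizer_c": {"who-to-ask", "pricing", "how-to-ask"},
    "organizer_d": {"who-to-ask", "venue"},
    "organizer_e": {"how-to-ask", "who-to-ask"},
}

# Count how many distinct people mentioned each theme.
mentions = Counter(tag for tags in interviews.values() for tag in tags)

# Keep only themes raised by more than 3 people -- the trend threshold above.
trends = sorted(tag for tag, n in mentions.items() if n > 3)
print(trends)  # -> ['how-to-ask', 'who-to-ask']
```

The counting is the easy part; the judgment call is still in the tagging, which is why the raw brain-dump notes matter.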
I discovered that those who, in a previous life, held a job that required them to be more social didn’t have as hard of a time going out there, hitting the pavement, and offering the right information to potential sponsors. I also noticed that organizers who had been doing this for a while succeeded through trial and error, or through helping with other local efforts like tech conferences where more experienced people gave advice.
All of a sudden our market for organizers who needed help with how to ask for sponsorship was smaller. It was looking like just newbies and those whose core competency was focused more on code than on people.
But, I DID have one piece of feedback that came up over and over from this more experienced group – “Once we’ve used up our personal connections, we don’t know who specifically to ask to grow our sponsorship” …not what to ask, or how to ask but WHO.
This was different from our initial assumption (and why you do this process to begin with). And this isn’t feedback that we can easily solve with a simple solution. Which makes sense; usually the hardest problems are the prickliest. After some internal discussion we did determine that there were a few potential ways to tackle the “who to talk to” problem, but none of them were super easy.
So now what?
We have to ask ourselves, can our potential product not only provide help to those with the “how to ask” problem (early stage meetup organizers), but begin to seed data and insights for those organizer-pros who have trouble with the “who to ask” part (veteran meetup organizers and larger conferences). Will our actual “value” be on the back end of the product and not where we initially thought?
In assessing your own user research ask yourself:
- Are there still problems I can solve?
- Can we still reach users with these pain points?
- Is this still profitable? (Are there enough users with this pain point, is the solution reasonable to build, and will those users pay for it?)
For us, it’s time to go back to the business model and adjust our numbers. If not as many people are within our initial demographic, or willing to pay for the proposal product, can we get enough users in there to build data for a sponsorship-leads product, which seems more valuable to experienced organizers?
Next steps? We’ll attempt to broadly scope and estimate features for supporting both types of users instead of just one, in order to reach profitability. We’ll need to talk with organizers again now that we have firmer features in mind. And we should put some infrastructure out there to test interest beyond Nashville (Product Hunt, Beta List, etc.).