Here’s what happened this month:
I tried to do a project. It was going to be a whole production, culminating in a couple of immaculately-produced videos. I’d go over my first set of interviews, then my second set. I’d showcase what worked, what didn’t, what people had to say, and how my changes reflected that feedback.
Instead, this post is going to serve the purpose my videos were meant to serve: explain my project, its rationale, and its goals; go over why I had to change the project timeline and goals; analyze the results of the project; and explain why this project is valuable. Buckle up, it’s a long one. Enjoy.
You can find this project in its original conception here. (I’d recommend reading it, and the subsequent blog posts, before reading this in order to understand everything I reference.) If it isn’t already abundantly clear to you, this is not what happened. However! That does not mean my project was a failure. Rather, it means I acquired some valuable knowledge and experience that I never would have learned had I not attempted such an ambitious project. But before I go further, let me explain what my project actually turned out to be.
Instead of meeting with random people in person, I set up Zoom meetings with fellow Praxians. I initially did this because I was in desperate need of feedback, but it ended up working out better. The feedback I received was, I think, more thoughtful than what I would have gotten from man-on-the-street interviews, and for first drafts, thoughtful commentary is important. Perhaps I’ll try my final drafts on the masses.
The other benefit of having Zoom meetings is the ease with which they are recorded. I recorded every meeting I had with the intention of using it in my videos. This hasn’t come to fruition quite yet, but having the recordings to study is valuable on its own.
My first Zoom meeting changed my project yet again. After about 10 seconds of reading my copy to the participant, I realized it was simply not very good. I decided that instead of emailing these same pieces of copy to my participants and having them judge it in that format, I’d meet with them a second time and read them entirely new pieces of copy based on their feedback.
Finally, this was all supposed to come together in a series of videos. I was going to show and explain clips from my interviews, have fun sounds and graphics, and put 17 exclamation points in my title in order to maximize my views from hyper middle schoolers and really lame high schoolers. Sadly, this proved too ambitious a venture, mostly because I have no idea how to use any video editing software more complicated than Adobe Premiere’s mobile version (yet).
So, thanks to Nathan Montgomery (a fellow Praxian), this blog post is the final result. It’s not indicative of failure, or even compromise. Rather, it’s the natural conclusion to an ambitious project whose mastermind may have overestimated his current abilities.
I began by writing my first two pieces of marketing copy ever. Check them out here. I wrote one from a data-driven perspective (Copy A) and one from a feelings-driven perspective (Copy B). I wanted to see which approach people responded to better—do they want to hear cricket nutrition facts and ominous climate prophecies, or do they want to hear how yummy a mealworm is?
I did my best to make each as good as the other, but I liked Copy B more. Most of my participants agreed. More on that later.
After I got responses from everyone, I sent them a scheduling link for another Zoom meeting. This time around, I read my shiny, new pieces of copy, reworked based on feedback from a dozen people. The meetings are still ongoing, but so far, it seems my efforts to improve the copy have paid off.
After this blog post, the next steps will be my videos. They’ll be as I described: the first focusing on the process behind the project and the first round of meetings, and the second focusing on the second round of meetings and how I responded to them. I don’t have a timeline for these yet, but they’re coming. I promise.
I tried to be as scientific as I could in my data collection. After reading each piece of copy, I asked participants a series of questions, emphasizing that they not be afraid to hurt my feelings:
- On a scale of 1-10, how much did you like it?
  - The purpose of this was simply to gauge preferences. I left a lot of space for exposition, and tried not to prompt respondents beyond, “Any rationale behind that score?” I wanted to see what people liked and disliked without me leading them in any particular direction. I think this question was successful and relevant.
- Did the ad make you much less likely, slightly less likely, about the same, slightly more likely, or much more likely to try an edible insect?
  - A pretty standard question. I also encouraged exposition on this one, although I received less than I did for question one. This one was unambiguous and served its purpose well—another success.
- Did it keep your interest throughout?
  - This was a simple yes-or-no answer. I didn’t ask participants if they had any more thoughts because I thought it would get too specific for a project of this scope. I think this question could have been done better. Asking it on a 1-10 scale would have been more valuable, despite the inherent variation in each individual’s scale.
- On a scale of 1-10, how effective was it as an advertisement?
  - I’m not happy with this question. It’s too ambiguous, and there were a few repeated points of confusion: “What kind of advertisement?” “What do you mean by ‘effective?’” I don’t think the data gleaned from this question are useless, but I think they are fuzzy at best. Were I asking this question now, I’d be sure to define my terms and parameters. As it stands, it’s a pretty lackluster question.
I’m going to go over some highlights and important takeaways from the answers I received. You can view the full results, still in progress, here.
Copy B scored better than Copy A almost universally. Only two respondents rated Copy A better in both categories, and one other rated it better only in effectiveness. Most of the feedback focused on the fact that Copy B was more relatable, and framed the experience of eating bugs as positive, rather than listing reasons to eat them that had nothing to do with actually eating them. Participants also said that Copy A was more threatening, and didn’t have enough of a redemptive payoff to allay the initial fear.
Some participants said that Copy A would work better as a piece to be read by them instead of to them. One said it sounded more like a press release than an advertisement. A few participants responded well to the barebones data, but most preferred the imagery of Copy B to the spartan numerical approach of Copy A.
Interestingly, one participant said Copy B did not hold her interest throughout because it wasn’t focused enough on the data. When I started talking about using crickets instead of croutons, she zoned out because she didn’t find it particularly relevant. The participant who rated Copy B lower in both categories also had a unique point of criticism: She didn’t like the imagery of a salad covered in insects. It soured her on the rest of the ad.
I agree with most of this feedback. I had a tough time writing a data-based piece of copy that was also relatable. After some rereading, I found Copy A to be negative in tone, as well. I also tried to incorporate more data into Copy B. In my initial drafts, I focused so much on making the pieces contrast that I lost sight of what actually makes advertisements effective. I did some tinkering with the two pieces, keeping the original focus but incorporating relevant ideas, and came up with these.
My meetings aren’t finished yet, but so far, these pieces are scoring better overall. The biggest jump comes from a participant who gave Copy B a 5 in both categories; this time, she gave it an 8. The scores fell in two cases, but no more than a point. The rest improved. I look forward to finishing my meetings and painting a more comprehensive picture, but this will have to do for now.
What I Learned
A lot of the knowledge gained from this experience pertains more to my work habits and personality than to anything in marketing and sales. I’ve learned to temper my ambition, to treat my skills realistically (looking at you, video editing), and that not achieving the initial desired outcome does not indicate failure. I learned that to succeed, you need to be adaptable, and not cling to a plan until it’s too late to change it.
I also learned about conducting good surveys. I learned why questions work, and why they don’t. I understand more clearly that the way in which you ask a question is almost more important than the question itself, and that the more specific you can be in your ask, the more valuable the feedback you receive will be. I realize now that 1-10 scales are not really reliable, but they’re probably the best we’ve got for now.
Finally, I learned what makes an ad work. I learned what people respond to—namely, a mix of data and good feelings, with a focus on feelings. I heard an incredible variety of perspectives from a wide range of people, and hearing the diversity of opinions that still had common threads between them was enlightening, to say the least. Some people liked Copy A more than Copy B, but everyone said Copy B was more relatable.
Was this study exhaustive? Of course not. Is it useful as market research? I doubt it.
But does it demonstrate a willingness to learn, the confidence to fail, and the ability to adapt under pressure and constraints? I think so. I also think it shows my eagerness to succeed, even under judgment from people I’ve never met. (Really, who wants to sit there for 15 minutes listening to a stranger prattle on about scarfing down dung beetles? Probably nobody.) It shows I’m fully capable of taking on important, difficult projects independently, and that I can squeeze the tiniest bit of knowledge out of any experience, no matter how difficult.
I’m not a professional yet. I don’t think I’m even close. But I know I have what it takes to get there, and that I have the wherewithal to rise to the top faster than most.
Thanks for sticking with me. Let me know what you liked and what I could have done better.