At ACUPOLL, we’re often asked a range of questions about the nuances of concept testing and brand positioning. From why certain tests seem to fall flat to the challenges of balancing uniqueness with appeal, these inquiries reflect the complex decisions marketers face daily.
In this post, we’ll address some of the questions we hear, offering insights and strategies that have helped us guide leading brands to success.
Why do positioning tests often end up with fairly flat results across concepts?
This is a question we get asked frequently, and it’s a great one. Positioning tests can sometimes feel underwhelming because by the time respondents evaluate a brand's core benefits—along with the price—most have already formed an opinion. The added "topspin" from marketing positioning refines their interest, but it doesn't usually create dramatic shifts unless there’s a significant increase in perceived benefits. That’s why our approach goes beyond looking at the overall positioning as a "whole piece of cloth."
We use a series of techniques to evaluate and optimize the individual "threads," helping brands identify their strongest possible directions. This approach has guided brands like McDonald’s and LensCrafters, among others, to discover revolutionary positioning strategies. We always encourage clients to push the envelope and consider truly different or even radical positioning ideas. How does your experience compare?
Why isn’t competition included in concept tests?
Many firms assume consumers are familiar enough with category options to evaluate your concept as they would in-store. But this isn't always the case, especially in categories with infrequent purchases or many first-time buyers. In these cases, we often familiarize respondents with the competitive context before they rate a concept, which mirrors real-life shopping behavior more accurately.
Additionally, instead of paying a lot more to fully test competitive concepts, we can supplement concept tests by evaluating your core idea against 8-10 competitors using our validated Impulse measure, which has been shown to be twice as predictive as Purchase Intent. This approach not only provides more accurate data but also offers compelling insights to share with management and retailers. How do you incorporate competition in your concept tests?
Why do so many people say they “definitely will buy” a product in concept testing but don’t actually buy it?
This is a criticism that has come up over the years, and it’s often misguided. The main reason the percentage of respondents who say they "definitely will buy" (DWB) is higher than actual in-market trials is that concept testing gives your product 100% of the consumer’s attention. In the real world, products never have 100% distribution, and marketing plans rarely achieve 100% awareness. Additionally, concepts often contain more communication points than can be effectively delivered through marketing, and, sometimes, advertising doesn’t translate the concept well.
Despite these challenges, higher-performing ideas in concept tests consistently perform better in-market once adjusted for distribution and awareness. This reinforces the value of concept testing while reminding teams to stay realistic about what can be fully communicated in the market. Does this resonate with your experience?
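The adjustment described above can be sketched as simple arithmetic: discount stated intent for overstatement, then scale by expected awareness and distribution. The weights and inputs below are hypothetical illustrations for the sake of the example, not ACUPOLL's (or any forecasting vendor's) proprietary calibrations.

```python
# Illustrative sketch of adjusting stated purchase intent for real-world
# awareness and distribution. All weights and inputs are made-up examples.

def adjusted_trial(dwb, pwb, awareness, distribution,
                   dwb_weight=0.75, pwb_weight=0.25):
    """Estimate in-market trial from top-two-box purchase intent.

    dwb, pwb     -- share of respondents saying "definitely" / "probably
                    will buy" (0-1)
    awareness    -- expected year-one brand awareness (0-1)
    distribution -- expected %ACV distribution (0-1)
    weights      -- discounts for the overstatement in stated intent
    """
    stated = dwb_weight * dwb + pwb_weight * pwb
    return stated * awareness * distribution

# A concept scoring 30% DWB and 40% PWB, launched with 60% awareness
# and 70% ACV distribution, forecasts far below its raw DWB score:
est = adjusted_trial(0.30, 0.40, awareness=0.60, distribution=0.70)
print(f"Estimated trial rate: {est:.1%}")
```

The point of the sketch is the gap it exposes: a concept with 30% DWB can still imply a trial rate in the low teens once awareness and distribution are realistic, which is why strong test scores and modest in-market trial are not a contradiction.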
I can create ideas that are appealing but not unique, and I can create crazier ideas that stand out as unique but aren’t appealing; but how can I achieve both?
This is a challenge we hear about frequently, and it’s a common dilemma for innovators. The key lies in understanding the psychology of schemas—cognitive models we form to simplify our mental processing. These schemas define the prototypical aspects of a product category: how mouthwash is packaged, how it tastes, how it’s used, and for what benefits. Uniqueness comes from disrupting these schemas. For instance, Plax stood out because it was used before brushing, and Listerine PocketPaks gained attention as a solid form of mouthwash that could be used on the go.
To drive uniqueness, catalog the schemas in your category and think about how you can disrupt each one, ideally in a way that enhances the product's benefits, like Milk Bar’s Cornflake Chocolate Chip Marshmallow Cookies. Sometimes focusing on a specific segment, or removing an element to highlight a particular benefit—like Ikea’s elimination of salespeople to reinforce its value—can also drive both uniqueness and appeal. What strategies have you found effective in achieving this balance?
How can testing concept elements lead to more powerful concepts?
Concepts are the sum of their parts, and each element can be optimized to enhance the overall impact. Clients often have multiple ways to articulate a benefit but end up testing only a few concepts, making decisions without fully understanding consumer preferences. By testing more iterations of each element—whether via screening elements on Impulse measures, conjoint, or other techniques—you increase your chances of finding the best combination.
This approach has delivered significant success, as seen in our work with P&G, where optimized concepts outperformed internally developed concepts by up to 30% in BASES forecasts. While this approach might add another research step, it’s often worth it, especially when working on positioning or innovations with multiple possible articulations that might otherwise require testing a lot more concepts. Have you tried this approach, and what have you discovered?
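The combinatorial logic behind element testing can be made concrete with a quick count. The element names and variant counts below are invented for illustration; the point is that screening elements covers a space that testing a handful of whole concepts never could.

```python
# Illustrative sketch: a few element "slots," each with several candidate
# articulations. All names and counts here are hypothetical examples.
from itertools import product

elements = {
    "benefit":       ["whiter teeth", "fresher breath", "healthier gums"],
    "reason_to_buy": ["dentist-developed", "clinically proven"],
    "form":          ["strip", "rinse", "gel", "foam"],
    "price_point":   ["$4.99", "$6.99"],
}

# Every distinct full concept these elements could form:
all_concepts = list(product(*elements.values()))
print(f"Possible full concepts: {len(all_concepts)}")  # 3 * 2 * 4 * 2 = 48

# Screening each element's variants independently needs far fewer stimuli:
stimuli = sum(len(v) for v in elements.values())
print(f"Element-screening stimuli: {stimuli}")         # 3 + 2 + 4 + 2 = 11
```

Testing five full concepts samples roughly 10% of this 48-concept space, while screening eleven element variants lets you assemble the best-performing combination directly—which is why element-level work so often surfaces a winner the team never wrote down.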
Concepts vs. packcepts vs. adcepts—what should I test?
This is another frequent question, and the answer depends on your objectives. Here are three guiding principles:
1. The length of the stimuli should match the amount of information your marketing plan can communicate. If you don’t have meaningful ad dollars, consider using a packcept.
2. The closer the stimuli are to the market execution, the more predictive the test is.
3. Decide whether you’re trying to nail the strategy or the execution.
Historically, most CPG companies tested concepts in ordinary language to determine strategy, allowing their agency to develop execution within certain guardrails. However, some companies now prefer adcepts to minimize the risk of an executional twist from the agency. While this might align better with the launch, it can undermine the ability to get a "clean read" on the strategy. It’s a tough call that depends on your objectives, agency strengths, and marketing development process. Which approach do you prefer?
These are just a few of the important questions we encounter, and we hope our responses provide some clarity. If you have more questions, feel free to reach out—we’re always happy to dive into these discussions!