The key to successful marketing experiments is allowing your team to fail fast and learn quickly, with a minimal amount of effort and expense. A minimum viable product (MVP), sometimes called a prototype, is the version of a marketing tactic that enables completion of a full learn-measure-build (LMB) loop with the minimum amount of effort and the least amount of development time.
An MVP may lack many features that will prove essential later on, but it is important not to get bogged down in features that do not contribute directly to the learning you currently seek. The goal is to get the product in front of customers as quickly as possible and measure its impact based on their behavior. For example, if you are planning to launch a large-scale mass media campaign with billboards, radio ads, and television commercials, your MVPs might be a series of social media advertisements in which you test various creative and copy elements on your target audience and monitor their behavior.
As marketers, how can we frame experiments that unambiguously test the hypothesis and control for nuisance factors that could obscure results? Experiments can help pinpoint causes, but they need to be set up under controlled conditions so the results can be generalized to the broader market. Begin with a clear hypothesis that makes a prediction about what will happen. Then randomly assign consumers to test and control groups. The control condition could be an existing tactic, or one already optimized through best practices in other programs. The test tactic should vary just one element of that control tactic.
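The random-assignment step above can be sketched as follows. This is a minimal illustration assuming a simple 50/50 split of a customer list; the function and group names are hypothetical, not from the source:

```python
import random

def assign_to_groups(customer_ids, seed=42):
    """Randomly split customers into control and test groups.

    Illustrative helper: a fixed seed makes the split reproducible,
    and shuffling before splitting ensures assignment is random
    rather than based on sign-up order or any other nuisance factor.
    """
    rng = random.Random(seed)
    shuffled = list(customer_ids)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "control": shuffled[:midpoint],  # existing (baseline) tactic
        "test": shuffled[midpoint:],     # tactic with ONE element changed
    }

groups = assign_to_groups(range(1000))
```

Because assignment is random, any systematic difference in behavior between the two groups can be attributed to the single element that was varied.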
In general, it's a good idea to test the riskiest assumption first. If you can't find a way to mitigate that risk, then it might not be worth testing the others. All best practices should be considered provisional until it's been shown through validated learning that they apply to the specific audience persona and customer journey. During active A/B testing, it may be necessary to log metrics and check for statistical significance weekly. As each learning is validated, the team can move on to the next hypothesis. Over the course of a month, cycle through up to four LMB loops. That will help inform the customized set of best practices.
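The weekly significance check can be done with a standard two-proportion z-test on conversion rates. The sketch below assumes hypothetical weekly numbers (50 conversions out of 1,000 impressions for control, 70 out of 1,000 for test) and the common 95% confidence threshold; the numbers are illustrative only:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for conversion rates.

    conv_a / n_a: conversions and sample size for the control group
    conv_b / n_b: conversions and sample size for the test group
    Returns the z statistic; |z| >= 1.96 corresponds to p < 0.05.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical week of data: 5.0% vs 7.0% conversion
z = two_proportion_z(50, 1000, 70, 1000)
significant = abs(z) >= 1.96  # 95% confidence threshold
```

Note that in this example a two-point lift is not yet significant at 95% confidence with 1,000 impressions per group, which is exactly why the team keeps logging metrics week over week before declaring a learning validated.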
Developing and testing MVPs, then logging metrics and developing a set of customized best practices allows for continuous improvement because each iteration can build on a prior best practice. A/B tests help save time in the long run because they eliminate work that doesn't matter to customers. They help the team quickly refine their understanding of what customers do and do not want. The marketing tactics themselves can therefore change constantly through the process of optimization.
As the team improves, they will be able to minimize the time spent in LMB loops and fail faster. If optimized marketing tactics still do not produce the desired result (e.g., enrollments, audit requests, product sales), then the team can make an informed decision to adjust the overall strategy as part of a pivot. If the MVPs and testing suggest you are building the wrong thing, then optimizing the product or the marketing will not improve its results.