Testing Your Prototypes

Prototypes are built to be shared. By continuously testing your prototypes in the market, you collect your own data. Simultaneously, with every market test, you also have the opportunity to learn more about your potential customers/users, their needs, their pain points, and their behaviours. In some cases, you might even refine your product together with your users and other relevant stakeholder groups. Owning your own market data is an invaluable resource. It will help you make tough decisions about product direction and features, and allow you to refine your underlying assumptions and your business model. The collected data will also help you improve your predictions about the potential scale of your product as well as the overall market.

If your goal is to test the general usability of your (digital) product, your strategy should be to test with multiple small batches of people and refine your solution after each round of user tests. Such an iterative approach was suggested decades ago by Nielsen and his colleagues and still holds true today. In one of their studies, Nielsen et al. (1993) found that testing your solution with five people reveals around 85% of the inherent usability problems of your product (see this summary article about their study).
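The diminishing returns behind this finding are often illustrated with the problem-discovery model from Nielsen and Landauer's work: the share of problems found by n testers is roughly 1 − (1 − λ)^n, where λ is the probability that a single user uncovers a given problem (about 0.31 in their data). A minimal sketch, with that 0.31 estimate hard-coded as an assumption:

```python
def share_of_problems_found(n_users, lam=0.31):
    """Expected share of usability problems uncovered by n_users testers.

    Based on the problem-discovery model 1 - (1 - lam)**n_users, where lam
    is the per-user detection probability (~0.31 in Nielsen & Landauer's
    estimate; real projects will vary).
    """
    return 1 - (1 - lam) ** n_users

# Each additional tester adds less new information than the previous one.
for n in (1, 3, 5, 15):
    print(f"{n:>2} users -> {share_of_problems_found(n):.0%}")
```

With these assumptions, five users already uncover roughly 84% of the problems, which is why several small test rounds beat one large one.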

If you are conducting face-to-face user tests, try to capture the most important feedback points in real time, so you don’t lose track. This also allows you to weave interesting points back into the conversation later. It is useful to think about a structure for capturing feedback first. A simple, yet effective tool is the Feedback Capture Matrix described in the Design School Bootleg methods toolkit. In this matrix, you divide feedback into the following four categories: (1) things one likes or finds notable, (2) constructive criticism, (3) further questions raised, and (4) new ideas spurred during the test. If possible, record the sessions in some way so that you can go over them again. The article “Test Your Prototypes: How to Gather Feedback and Maximise Learning” also suggests a few alternative feedback capture grids.
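As a rough illustration, the four quadrants above can be kept as a simple note-taking structure during a session. This is only a sketch of the idea; the class and its method names are my own invention, not part of the Bootleg toolkit:

```python
# The four quadrants of the Feedback Capture Matrix.
CATEGORIES = ("likes", "criticism", "questions", "ideas")

class FeedbackMatrix:
    """Minimal note-taking grid for one user-test session (illustrative only)."""

    def __init__(self):
        # One list of notes per quadrant.
        self.notes = {cat: [] for cat in CATEGORIES}

    def add(self, category, note):
        """File a feedback point under one of the four quadrants."""
        if category not in self.notes:
            raise ValueError(f"unknown quadrant: {category}")
        self.notes[category].append(note)

    def summary(self):
        """Count of notes per quadrant, e.g. to compare sessions at a glance."""
        return {cat: len(items) for cat, items in self.notes.items()}

# Example notes from a hypothetical session:
m = FeedbackMatrix()
m.add("likes", "Onboarding felt fast")
m.add("criticism", "Search results load slowly")
m.add("questions", "Can I export my data?")
m.add("ideas", "Offer a dark mode")
print(m.summary())
```

Keeping the quadrants explicit while note-taking makes it harder to record only praise and forget the questions and criticism, which are usually the most actionable part of a session.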