During my master’s thesis project at Yolean, my thesis partner and I further developed a digital meeting tool that aimed to make meetings more efficient. The tool was at the concept stage and had been tested at FlexLink for some time. FlexLink appreciated it but struggled to get people to use it voluntarily. Thus, the goal of this project was to make it easier for individuals in a company to adopt new software.
We used the double diamond process as a foundation but worked iteratively, continually adapting our methods to the circumstances. For example, the COVID-19 pandemic was ongoing at the time, so we could not meet any users in person towards the end of the project.
In the first part, we wanted to explore and understand. We observed meetings at FlexLink and interviewed employees to understand their current meeting habits, routines, and attitudes towards introducing the new tool. We also conducted a literature review on what influences engagement. Based on this initial study in the first diamond, we formulated problems and design goals and defined who the users were.
In the second part of the process, we conducted three iterations that were more or less alike. In broad terms, they consisted of:
1. developing the interface of the meeting tool
2. creating a clickable prototype in Adobe XD
3. testing the prototype with users to evaluate its usability, and analysing what would need adjustment in the next version of the interface
Before developing the first interactive prototype, we did plenty of sketching to generate ideas and A/B tested some features using simple paper prototypes. We also created flowcharts to define all the possible ways a user could interact with the tool, and an entity-relationship diagram to show how all the "parts" relate to each other.
Once we had made the first clickable prototype, we tested it with users in a scenario where each user was given tasks to perform in the prototype. We used both new and returning test participants so that we could evaluate both learnability and guessability.
We interviewed each participant at the end of their test and used a semantic word scale to evaluate how well the interface lived up to the users' expectations and wishes. We then used affinity diagrams to structure the interview results.
After each evaluation, we analysed the results, compiled the problems we had found, and specified what needed adjustment in the next prototype. And this is more or less what we did three times!
The project ended with an improved interface, a list of design criteria, and a video tutorial to make it easier for employees to adopt the new meeting tool.
In this project, there was a lot of focus on the entire process and following it closely. However, one part I especially enjoy is evaluating together with users and analysing the results. I love the thrilling moment when you get the answer to: "Did we do better than last time?". But even though you do not always follow the process to the letter (nor should you; I see it more as a toolbox), I think the whole process is essential. You cannot simply do one thing or the other; every step builds on the ones before it.
This project was made together with Sandra Jansson.