Preliminary User Interviews
I kicked off this project with preliminary user interviews, with the following goals:
1. To understand how people manage their time,
2. To learn what tools they use, and how they use them,
3. To learn about the problems they encounter, and
4. To understand what their ideal time management tool would be like.
During this research phase, I conducted three lengthy user interviews and several shorter ones.
"That being said I am a bit stressed about the workload. I should only handle the urgent task today and put my best effort from tomorrow." - Interviewee O.K.
Key Learnings from Preliminary User Interviews
1. People who consider their lives to be "disordered" prefer not to make any schedules or use any time management tools, even if they think they would benefit greatly from them.
2. Daily energy levels and moods are very important factors for people to consider while making their daily schedules.
3. It is extremely tedious to bring schedules up to date when users do not, or cannot, regularly update them.
Since I had planned a two-week solo design sprint from zero to the first complete prototype, and I was satisfied with the outcome of my preliminary interviews, I decided to spend the rest of the time allocated for research on extensive competitive analysis.
The purposes of this competitive analysis were:
1. To understand the common layout patterns they use,
2. To learn what features users of time management tools look for,
3. To learn more in detail about people's current problems with their time management,
4. To see what solutions were proposed for these common problems and how the problems I had learned about during the preliminary user interviews are solved.
During this extensive competitive analysis, I checked nearly every productivity, time management, and to-do list product on ProductHunt and the Google Play Store to understand the landscape. Among the crowd, those that deserve at least an honorable mention are Any.do, Todoist, Microsoft To Do, Google Calendar, Reclaim.ai, and Untime.
Key Learnings from Competitive Analysis
1. All time management tools ignore the fact that sh*t happens.
2. Making your daily schedule has truly become a task of its own.
3. Most time management tools impose a cognitive overload on their users.
4. Most time management tools use very similar interfaces.
Just after I had completed the first iteration of Tomo's prototype, I had a chance to discuss it with the CEO of a quite successful time management company that was about to launch a new product. Upon taking a look at the prototype I had designed, they told me...
"Because in my experience getting people to use new productivity apps is really hard and having a completely different interface than to what they are used to, makes it harder"
So, what now?
Based on my research, I concluded that the time management product I would be designing ought...
1. to make the user's schedule based on their energy levels and moods,
2. to be extremely easy to use and reduce the time spent on time management significantly,
3. to allow the user to (re-)make their schedule based on their input with only a few clicks,
4. to be there for the user when sh*t happens,
5. not to clutter the user's mind with unnecessary information at any time, and
6. to allow the user to focus on "one task at a time".
Automated Dynamic Scheduling Joins the Party!
While conducting the competitive analysis, I had come across some products with partially automated schedule making (e.g., Reclaim.ai), but I noticed that they were not making the best use of a feature that represents a business opportunity of immense value.
After a brainstorming process, I came up with a task prioritization and sorting algorithm so computationally simple that even a beginner like me can code it, which I call Automated Dynamic Scheduling.
The algorithm re-queues the user's tasks based on user inputs (task actions, energy level, task categories) and task properties (due date and time, priority, and approximate time to complete); moreover, this can be done with only a few clicks, any time the user wants.
In cases where the algorithm has only partial information, the product would prompt the user, asking them to choose between equal options.
So, the user would focus on only the current task, which is the highest-priority task with the closest deadline, and would not be prompted by Tomo unless there is an absolute necessity, e.g., they would miss a deadline on another task if they did not complete their current task shortly.
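To make the idea concrete, here is a minimal sketch of such a re-queueing step in Python. The field names, the 1-to-3 energy scale, and the "lower number means higher priority" convention are my own assumptions for illustration; the source only specifies the inputs (energy level, task categories) and task properties (due date and time, priority, approximate time to complete).

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Task:
    name: str
    due: datetime             # due date and time
    priority: int             # assumed: lower number = higher priority
    minutes_to_complete: int  # approximate time to complete
    category: str
    min_energy: int           # assumed: 1 (low) .. 3 (high) energy the task demands

def requeue(tasks, energy_level, active_categories):
    """Re-queue tasks: keep only those that fit the user's current energy
    level and chosen categories, then sort by priority and by closest
    deadline, so the head of the queue is the one task to focus on."""
    candidates = [
        t for t in tasks
        if t.min_energy <= energy_level and t.category in active_categories
    ]
    # Python's sort is stable, so ties on (priority, due) keep input order.
    return sorted(candidates, key=lambda t: (t.priority, t.due))
```

The user's "current task" would then simply be the first element of the returned queue, recomputed whenever they report a new energy level or act on a task.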
"A to-do list with just one task on it reflects a strategic and intentional choice about what you will do next, and continue to focus on until it’s done. It might feel silly, but writing that one thing down on its own list is the key—it makes it a commitment that you’re far more likely to follow through on. Make meaningful progress, one task at a time." - Peter Bregman
After I had completed Tomo's first prototype at the end of my two-week solo design sprint, it was finally time to test it with users to answer the following questions:
1. How easy or hard is it to complete the most common actions?
2. Given that Tomo has a different interface than most similar products, how well would users take to it?
3. Which of the two different "daily onboarding" flows would users prefer?
Then, I conducted an online, unmoderated usability test in which six participants completed six tasks and answered two additional questions. The tasks they were asked to complete are as follows.
Usability Test Statistics
1. Let Tomo know how you feel and pick task categories to work on.
2. Complete this daily onboarding flow.
3. Check Tomo's suggestion and take an action.
4. Delete your current task.
5. Check a future task.
6. Mark a future task as completed on List View.
At the end of the usability test, each test participant was asked if they had any suggestions or feedback regarding their experience using Tomo, and some were kind enough to provide several actionable items.
A/B Testing Two Flows
As I was not able to decide between the two "daily onboarding" flows, the most important insight I was looking for from this usability test was the participants' verdict on them: (i) which one they would rate higher (or lower), (ii) which one they would state a preference for, and (iii) which one they would complete faster (or slower).
Flow A: avg. completion time 81.45 secs.; avg. score 3.33/5.00
Flow B: avg. completion time 30.70 secs.; avg. score 4.66/5.00
*Flow B was designed at mid-fi quality, merely for testing purposes under a time restriction. See the final daily onboarding flow below.