RESEARCH
Preliminary User Interviews
I kicked off this project with preliminary user interviews with the following goals:
- To understand how people manage their time,
- To learn what tools they use, and how they use them,
- To learn about the problems they encounter, and
- To understand what their ideal time management tool would be like.
During this research phase, I conducted three lengthy user interviews and several shorter ones.

"That being said I am a bit stressed about the workload. I should only handle the urgent task today and put my best effort from tomorrow."
- Quote from an interviewee's diary
Key Learnings from Preliminary User Interviews
- People who consider their lives to be "disordered" prefer not to make schedules or use time management tools, even if they think they would benefit greatly from them.
- Daily energy levels and moods are very important factors for people to consider while making their daily schedules.
- It is extremely tedious to bring schedules up to date when users do not or cannot update them regularly.
Competitive Analysis
Since I had planned a two-week solo design sprint from zero to the first complete prototype, and I was satisfied with the outcome of my preliminary interviews, I decided to spend the rest of the time allocated for research on an extensive competitive analysis.
The purposes of this competitive analysis were:
- To understand the common layout patterns they use,
- To learn what features users of time management tools look for,
- To learn more about people's current problems with their time management, and
- To see what solutions were proposed for those common problems, and how the problems I had learned about during the preliminary user interviews were being addressed.
During this extensive competitive analysis, I checked nearly every productivity, time management, and to-do list product on ProductHunt and the Google Play Store to understand the landscape. Among the crowd, those that deserve at least an honorable mention are
Any.do ↗,
Todoist ↗,
Microsoft To Do ↗,
Google Calendar ↗,
Reclaim.ai ↗, and
Untime ↗.
Key Learnings from Competitive Analysis
- All time management tools ignore the fact that sh*t happens.
- Making your daily schedule has truly become a task of its own.
- Most time management tools create a cognitive overload on their users.
- Most time management tools use very similar user interfaces.
Just after I had completed the first iteration of Tomo's prototype, I had a chance to discuss it with the CEO of a quite successful time management company that was about to launch a new product. Upon taking a look at the prototype I designed, they told me...
"Because in my experience getting people to use new productivity apps is really hard and having a completely different interface than to what they are used to, makes it harder"
So, What Now?
Based on my research, I concluded that the time management product I would be designing ought...
- to make the user's schedule based on their energy levels and moods,
- to be extremely easy to use and reduce the time spent on time management significantly,
- to allow the user to (re-)make their schedule based on their input with only a few clicks,
- to be there for the user when sh*t happens,
- not to clutter the user's mind with information that is unnecessary at that moment, and
- to allow the user to focus on "one task at a time".
Automated Dynamic Scheduling Joins the Party!
While conducting competitive analysis, I had come across some products with partially automated schedule making (e.g.,
Reclaim.ai ↗), but
I noticed that they were not making the best use of a feature that holds a business opportunity of immense value. After a brainstorming session, I came up with a task prioritization and sorting algorithm, computationally simple enough that even a beginner like me could code it, which I call Automated Dynamic Scheduling.
The algorithm re-queues the user's tasks based on user inputs (task actions, energy level, task categories) and task properties (due date and time, priority, and approximate time to complete), and this can be done with only a few clicks, any time the user wants.
In cases where the algorithm has only partial information, the product would prompt the user, asking them to choose between equal options.
So, the user would focus only on the current task, which is the highest-priority task with the closest deadline, and would not be prompted by Tomo unless absolutely necessary, e.g., when they would miss the deadline of another task if they did not complete their current task shortly.
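To make the re-queueing step concrete, below is a simplified sketch of how such an algorithm could look in Python. The `Task` fields, the `requeue` function, the energy-based penalty, and the tie-breaking behavior are illustrative assumptions made for this sketch, not Tomo's production code.

```python
# A simplified sketch of the Automated Dynamic Scheduling idea described above.
# The field names, weights, and tie-breaking rules are illustrative assumptions,
# not Tomo's actual implementation.
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Task:
    title: str
    due: datetime                 # due date and time
    priority: int                 # e.g., 1 (low) to 3 (high)
    est_minutes: int              # approximate time to complete
    categories: set[str] = field(default_factory=set)


def requeue(tasks: list[Task], now: datetime, energy: str, picked: set[str]) -> list[Task]:
    """Re-queue tasks so that the highest-priority task with the closest
    deadline comes first, keeping only the categories the user picked and
    pushing long tasks back when the user's energy is low."""
    candidates = [t for t in tasks if not picked or t.categories & picked]

    def key(task: Task):
        urgency = (task.due - now).total_seconds()            # sooner deadline -> smaller value
        effort = task.est_minutes if energy == "low" else 0   # penalize long tasks on low-energy days
        return (-task.priority, urgency, effort)

    # Tasks whose keys are exactly equal keep their original order (stable sort);
    # in the product, this is where the user would be prompted to choose between them.
    return sorted(candidates, key=key)


if __name__ == "__main__":
    now = datetime.now()
    tasks = [
        Task("Write report", now + timedelta(hours=4), priority=3, est_minutes=90, categories={"work"}),
        Task("Prepare slides", now + timedelta(hours=6), priority=3, est_minutes=45, categories={"work"}),
        Task("Pay bills", now + timedelta(days=2), priority=2, est_minutes=15, categories={"home"}),
    ]
    queue = requeue(tasks, now, energy="low", picked={"work"})
    print("Current task:", queue[0].title)  # the single task the user focuses on
```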
"A to-do list with just one task on it reflects a strategic and intentional choice about what you will do next, and continue to focus on until it’s done. It might feel silly, but writing that one thing down on its own list is the key—it makes it a commitment that you’re far more likely to follow through on. Make meaningful progress, one task at a time."
- Peter Bregman
Usability Test
After I had completed Tomo's first prototype at the end of my two-week solo design sprint, it was finally time to test it with users, so that the following questions on my mind could be answered:
- How easy or hard is it to complete the most common actions?
- Given that Tomo's interface differs from that of most products with similar functionality, how well or poorly would users receive it?
- Which of the two different "daily onboarding" flows would users prefer?
Then, I conducted an unmoderated online usability test in which six participants completed six tasks and answered two additional questions. The tasks they were asked to complete are as follows.
Usability Test Statistics
| Task | Avg. Duration | Avg. Score |
| --- | --- | --- |
| Let Tomo know how you feel and pick task categories to work on. | 81.45 seconds | 3.33/5.00 |
| Complete this daily onboarding flow. | 30.70 seconds | 4.66/5.00 |
| Check Tomo's suggestion and take an action. | 96.14 seconds | 4.16/5.00 |
| Delete your current task. | 16.44 seconds | 4.50/5.00 |
| Check a future task. | 28.51 seconds | 4.66/5.00 |
| Mark a future task as completed on List View. | 40.73 seconds | 4.50/5.00 |
At the end of the usability test, each test participant was asked if they had any suggestions or feedback regarding their experience using Tomo, and some were kind enough to provide several actionable items.
A/B Testing Two Flows
As I was not able to decide between two "daily onboarding" flows, the most important insight I was looking for from this usability test was the participants' verdict on them: (i) which one they would rate higher (or lower), (ii) which one they would say they prefer, and (iii) which one would take them less (or more) time to complete.

Flow A
Avg. completion time: 81.45 secs.
Avg. score: 3.33/5.00

Flow B*
Avg. completion time: 30.70 secs.
Avg. score: 4.66/5.00
USERS' CHOICE 🏆
*Flow B was designed merely for test purposes. See the final daily onboarding flow below.