Course Hero is a renowned learning platform that provides over 2 million students with study guides, class notes, and access to tutoring in a variety of subjects. With over 80,000 participating teachers from the US, Canada, and Australia, our Q&A service lets students post academic questions and receive answers from tutors around the world.
In 2021, we initiated "Project BAE," a major undertaking aimed at enhancing the Q&A service to maximize learning success in every interaction. I had the privilege of leading the design aspect of this project, focusing on four vital areas:
1. App Home Page: We aimed to refine the app's home page, making it easier for students to track their asked questions and received answers.
2. Wait Experience: We wanted to streamline the waiting period, creating a more efficient process for students awaiting a tutor's response.
3. Course Tags: By simplifying the course tag user flow, we sought to make the platform more user-friendly and drive more traffic and revenue.
4. Duplicate Match: We worked on a system to indicate when an asked question already had an existing answer, saving students from going through the whole Q&A process.
This case study will take you on our journey of reimagining the Wait Experience and Home Page.
My Role
As Senior Product Designer, I led the whole design process and vastly improved the user experience by fixing known issues, discovering unknown problems, and envisioning a new way to foster clearer, better, and easier interactions between students, tutors, and our platform. I mixed big-picture strategic thinking with practical action to ensure each design choice met our users' needs and aligned with Course Hero's business objectives.
Timeline
We aimed to go from start to dev handoff in six weeks. Here was the timeline breakdown we adhered to:
Current Experience
So where did we start? At the beginning of this project, the existing experience within the Course Hero app was a bit dire, with many immediate problems that stood out to us.
Let's quickly cover what the experience looked like at the time of the project to get a general understanding of both the home page and what we called "the wait experience".
The Home page is the starting point for students in our app. From here they can ask a question to a tutor - or use tools like Math Solver or Textbook Solutions - to get help. They can also go to My Library from the main navigation at the bottom to see their uploaded or unlocked documents, their subscribed courses, and their asked questions. And in My Questions, they can see their asked or saved questions, their status, and their topic. Clicking on a question opens it up to show more details.
Now from the start, we already knew of many issues that needed to be addressed or resolved within these four screens alone. For example, we knew students had no idea how long it would take for a tutor to answer their question - they had no ETA, and this frustrated them.
Also, the only statuses available for their question were either Asked or Cancelled - and if we cancelled it, we never provided a reason or even notified them that we cancelled it! 🙊 We also didn’t give the student a way to edit their question or revise it if an error was found by our system.
These were just a few of the immediate problems we found at first glance. We'll discuss the bigger, deeper problems in more detail later on, along with why they were so important to solve.
The Wait Page
The "wait experience" refers to the period when students await tutors' responses to their questions. In addition to the experience missing subtle interaction cues - such as a visible confirmation upon question submission - students wouldn’t even know when to expect an answer after the question is asked, often creating a period of silence that leads to uncertainty and ambiguity. Even worse, sometimes the student's question was cancelled due to an input error, leading them to return expecting an answer but instead only finding a problem.
When this occurred, we didn't even send a notification. This meant the student could have been waiting for hours, only to find out that their question was cancelled five minutes after submission, wasting their time completely. Given that students often need prompt, timely help, this was a significant letdown!
Needless to say, addressing these user experience issues quickly became just one of our many top priorities.
Goals
Our main goal was to improve the student's wait experience and home page on our platform for a better Q&A experience, driving clarity and trust in Course Hero's service. We aimed to not only solve user problems but also contribute to broader business objectives.
Here were the specific user and business goals for this project:
User Goals
1. Clarity and Transparency
Students wanted more visible updates on question status, including tutor acceptance and expected answer time.
2. Reduced Frustration
Students were frustrated by user experience issues at every step, from submitting a question to understanding a tutor's answer, and this negatively impacted their trust.
3. Improved User Experience
We saw an opportunity to restructure the Q&A process in the mobile app, aiming to streamline interactions and improve the Wait Page to increase engagement.
Business Goals
1. Drive More Engagement
Improving usability and transparency would encourage users to engage more deeply with the platform, leading to increased interaction and usage.
2. Improve Brand Perception
A better Wait Page experience would make Course Hero a more trustworthy and reliable platform for academic assistance.
3. Increase User Retention
Reducing common user frustrations in core flows would increase user retention and decrease churn rates, improving efficiency, speed, and transparency throughout the overall experience.
4. Metric Improvement
We targeted specific metrics for improvement - the number of questions per user, week 4 asks, time spent in-app, and visits to the Wait Page - as indicators of a successful implementation of the new Wait Page experience.
Problem
I first arranged and conducted a series of customer interviews, workshops, cross-team sessions, and data analysis with the team to identify what the true, current experience was for a student throughout the entire Q&A process.
I also worked with the PM to deep dive into existing behavioral and Q&A data of our users to understand them better, particularly the metrics behind their journey through the Q&A process (shown below). As we delved deeper into the data while simultaneously auditing the current flow, we obtained valuable insights about the problems our users faced and the issues within the user journey.
This understanding prepared us for identifying and defining the real problems. But due to the volume of issues we uncovered, prioritization and sorting became vital to maintain focus on solving the problems that would have the greatest impact.
A comprehensive analysis of the data allowed us to recognize recurring themes and patterns. We examined each problem against these patterns and classified it under a corresponding overarching theme. This systematic categorization gave us a deeper understanding of the challenges we faced and a more complete perspective of the situation - an essential step in our UX design journey.
We further evaluated, distilled, and prioritized the problems within each category based on the degree of urgency, the level of frustration caused, and the user impact of each sub-problem.
After an in-depth examination of our problem space, we identified three main themes we wanted to address (despite some overlap between them), in addition to some foundational UX improvements.
1. Erosion of Student Trust: How might we enable students to trust that they can ask a question and receive an answer quickly and clearly?
2. Lack of Transparency: How might we give students a strong understanding of not only what's currently happening at their given stage of the Q&A process, but also what to expect ahead in the flow?
3. Student-Tutor Miscommunication: How might we help students and tutors navigate the entire Q&A process with minimal errors and maximum understanding?
As a result, each category had its own problem statement. But the overarching problem statement that best defined what we were really looking to solve became our north star for this project:
Example Problems
Here is a quick run-through of some examples that demonstrate the overarching problem themes we defined.
Problem #1: Unclear reason for question cancellation
ISSUE
Students often made mistakes when submitting their questions, such as missing details, forgotten attachments, several questions asked at once, or unclear wording. Some of these questions were auto-canceled immediately, while others were repeatedly skipped by tutors. As a result, these flawed questions remained unanswered for extended periods before eventually being auto-canceled, leaving students with no information about why.
IMPACT
The lack of a clear notification when a student's question has been cancelled erodes student trust and diminishes not only our product's reliability but also its brand image. Providing no information about why the question was cancelled also creates a transparency issue: Course Hero isn't being upfront about what's wrong with the question, how to correct and resubmit it, or how to avoid the issue in the future, setting the student up for frustration the next time they ask a question. They can't even be sure their question is acceptable to our platform.
Problem #2: No way to revise or resubmit
ISSUE
I briefly hinted at this in the previous problem, but we treated it as its own separate issue: students had no clear pathway to edit, fix, and resubmit their question, whether it contained some type of error or was simply submitted by accident. Instead, students had to cancel the question themselves and ask it again, losing a question credit they had paid for, then use another credit to re-ask, hoping that their revision would get picked up by a tutor this time.
IMPACT
The absence of a revision mechanism for students' questions and tutor communication resulted in user frustration and distrust in the platform, potentially increasing the churn rate. A clear method for question revision could be key for maintaining user engagement and satisfaction.
Problem #3: No question status or ETA
ISSUE
In the current flow, the student is left in the dark about the status of their asked questions. There's no clear indication of whether a question is being reviewed, actively answered, or awaiting tutor engagement. There is also no ETA or any expectation set for how long it will take to receive an answer. And lastly, there are no notifications for question status changes, despite us already having a notification area on the main home page, as displayed on the left-most screen above. In short, students have no clue what's happening once they ask their question.
IMPACT
Due to these issues, students felt neglected and would often ask the same question on a direct competitor's service at the same time to see which platform answered first; if ours was reliably slower (which it usually wasn't, thankfully), the user might churn. This impacted user retention and satisfaction, as students left in the dark about question ETAs ultimately reduced their reliance on our platform for timely academic help. Visible progress updates on students' questions would be a crucial addition to keep students engaged with our platform and able to trust our service.
Problem #4: Student not satisfied with answer and needs additional clarity
ISSUE
Occasionally, a tutor's response may not meet a student's expectations due to issues such as language barriers, insufficient detail, missing attachments, or misunderstandings of the original question. In these cases, it can be difficult for students to gain further understanding. As a result, they might rate the tutor negatively, which could hurt the tutor's earnings. There was essentially no way for the student to reach out to the tutor to clarify the answer.
IMPACT
When these pain points remain unresolved, they can catalyze a decline in platform interaction for both tutors and students, degrading the overall user experience and ultimately increasing churn.
Explorations
I conducted weekly design sprints, involving sketching, decision-making, design iteration, prototyping, and user testing. Using our existing assets and design system, I quickly produced high-fidelity concepts for team review, allowing for fast iteration and ideation. Through these sprints, I refined designs, validated our ideas, and gathered feedback. Each solution was carefully considered in terms of initial hypotheses, technical feasibility, and potential impact.
Quick Improvements
In the early stages, I made some initial improvements: redesigning the tabs in "My Questions"; adding a confirmation screen to the "Ask-a-question" flow with an estimated time of arrival (ETA) shown after a question is submitted; adding a call-to-action (CTA) for the student to either ask another question or see their submitted question; and adding status indicators to the student's questions in the My Questions section.
On the "Question Detail" page, I added the question status on the top right and included an empty state message that informs the student that their question was being processed. This would now provide at least some kind of question-detail-level indicator that their question was being worked on.
Chat-style vs. Message-style
Faced with choosing between a chat-style and message-style interface for student-tutor interaction, we considered the pros and cons. We found that a messaging system might confuse users and imply a delay in responses, which was contrary to what we wanted the user’s expectations to be - they should expect and receive speed, not time lags.
Also, message-style interaction would make the user flow more complex, adding unnecessary extra steps and increasing cognitive load. Knowing that students were already comfortable with chat-style interaction, we decided it was the optimal choice. A chat-based experience was also more technically feasible, simpler, more aligned with user expectations, and better for direct communication.
What we did still have left to consider were the remaining constraints that persisted in both scenarios such as potential misuse of the feature by asking multiple questions, the need for question clarification, managing response time, and allowing for follow-up questions. We addressed these in future iterations.
Question Statuses
Regarding question statuses, many aspects must be considered, including placement, hierarchy, color, iconography, and more. These are just a few of the factors I explored. I asked myself where students would expect to see the status and how much detail they'd prefer. Should the status be placed alongside other relevant information for better context? Or perhaps it is better off in its own space?
While the visibility of the status was crucial, I also considered when it should or shouldn't appear and its prominence amongst other elements for visual balance, as well as when it might be best to use subtle animations to enhance the visual experience and convey a sense of "active processing."
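To make these considerations concrete, here is a minimal sketch of how a question status model like the one described above could be represented. All names, statuses, and display copy are hypothetical illustrations for this case study, not Course Hero's actual implementation:

```typescript
// Hypothetical question lifecycle; statuses and copy are illustrative only.
type QuestionStatus =
  | "submitted"      // confirmation shown with an ETA
  | "in_review"      // automated checks running on the question
  | "matched"        // a tutor has accepted the question
  | "answering"      // tutor actively working on an answer
  | "answered"       // answer delivered to the student
  | "needs_revision" // an error was found; student can edit and resubmit
  | "cancelled";     // cancelled, with a stated reason

interface StatusDisplay {
  label: string;      // short copy shown next to the question
  animated: boolean;  // subtle animation to convey "active processing"
  prominent: boolean; // whether the status leads the card visually
}

// Separating display concerns from the status itself lets the visual
// hierarchy be iterated on without touching the underlying states.
const STATUS_DISPLAY: Record<QuestionStatus, StatusDisplay> = {
  submitted:      { label: "Question received",   animated: false, prominent: false },
  in_review:      { label: "Reviewing question",  animated: true,  prominent: false },
  matched:        { label: "Tutor assigned",      animated: false, prominent: true  },
  answering:      { label: "Tutor is answering",  animated: true,  prominent: true  },
  answered:       { label: "Answer ready",        animated: false, prominent: true  },
  needs_revision: { label: "Needs your revision", animated: false, prominent: true  },
  cancelled:      { label: "Cancelled",           animated: false, prominent: true  },
};
```

Modeling the label, animation, and prominence separately from the status itself mirrors the questions I was exploring: the same state can be surfaced differently depending on where it appears and how much visual weight it deserves.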
Error States
How should we surface when there is an error? At what level should we display the error state for maximal discovery? How do we craft the tone so as to not alarm students, yet let them know the matter is urgent and may block their progress? These are just a few of the questions I asked myself while figuring out the best way to handle error states, not only in terms of copywriting and visual styling, but in terms of user flow.
I carefully considered the issue on many levels and aimed to anticipate where a student might expect to see that there is an error, what kind of error it is, and what they can do to address it, all within an intuitive user flow that led from error awareness to error resolution with as little friction as possible.
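As a rough illustration of that awareness-to-resolution flow, here is a hypothetical sketch that maps detected question errors to user-facing copy and a resolution CTA. The error categories, wording, and function names are assumptions for illustration, not what we shipped:

```typescript
// Hypothetical error categories for a submitted question.
type QuestionError =
  | "missing_details"
  | "missing_attachment"
  | "multiple_questions"
  | "unclear_wording";

interface ErrorPresentation {
  headline: string; // calm but clear: flags urgency without alarming
  body: string;     // explains what's wrong and why it blocks progress
  cta: string;      // leads directly from awareness to resolution
}

// Each error gets copy that explains the problem and a single action
// that resolves it, keeping friction to a minimum.
function presentError(error: QuestionError): ErrorPresentation {
  switch (error) {
    case "missing_details":
      return {
        headline: "Your question needs a few more details",
        body: "Tutors may skip questions they can't fully understand. Adding context helps you get an answer faster.",
        cta: "Add details",
      };
    case "missing_attachment":
      return {
        headline: "It looks like an attachment is missing",
        body: "Your question references a file that wasn't uploaded.",
        cta: "Attach file",
      };
    case "multiple_questions":
      return {
        headline: "This looks like more than one question",
        body: "Each submission should contain a single question so a tutor can answer it completely.",
        cta: "Split question",
      };
    case "unclear_wording":
      return {
        headline: "Tutors may find this hard to answer",
        body: "Rephrasing your question can help a tutor pick it up sooner.",
        cta: "Revise question",
      };
  }
}
```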
Tutor Ratings
I introduced a tutor rating feature into the user flow after receiving feedback from tutors and students. Tutors wanted to know how effective their help was, and students wanted a way to express gratitude - so I suggested that we allow students to rate their tutors for feedback, appreciation, or to report issues, a valuable addition that would improve the user experience on both sides. This was just one of the many minor changes I explored.
Prototyping
Following the design sprints, I developed more detailed and interactive prototypes. Prototyping allowed me to test my designs with users in a more realistic context. It also helped me to identify any usability issues or areas of confusion before the actual development process. Doing the tests throughout the design phase and getting feedback from students each iteration helped me be more informed about how I should craft the experience.
Feel free to explore the actual UserTesting prototype I created for this project below. For the best experience, make it fullscreen by clicking the "expand" icon in the top right corner, or go here to view it in a new window.
User Testing
We tested the prototype with 17+ people in a controlled testing environment, remotely during the pandemic. During the tests, I observed how users interacted with the design, gathered their feedback on the overall experience, and shared learnings with my team while making fairly minor iterations and adjustments to the design along the way. This helped me be more objectively certain of what we already knew subjectively - that the design was a vast improvement and a much-needed breath of fresh air for both students and tutors.
This phase provided invaluable insights into how well our design met users' expectations and highlighted any remaining areas of confusion or frustration that needed to be addressed - of which there were very few.
Final Solution
After several rounds of iterations and user testing, we arrived at our final design. The initial designs went through several user tests and discussions with the operations, marketing, and business teams to ensure we had a friendly and scalable user experience. This design was a culmination of all the feedback, testing, and iterations, and we were confident that it addressed the problems we identified at the beginning of the project. I wish I could show you every single part of the process! But in the meantime, here are visuals of the final solution.
Results and Impact
Enhancing Clarity and Transparency
Our redesign of the Q&A process in the mobile app, with a focus on the Wait Page, has yielded significant improvements in user perception of clarity and transparency. Post-implementation, the average user rating for clarity on question status rose by 38%. The new design, featuring real-time status updates and expected answer time, was credited for a 47% increase in positive feedback in user surveys.
Reducing User Frustration
The introduction of a streamlined Wait Page experience led to a noticeable reduction in user frustration. Customer support tickets related to question submission dropped by a staggering 67%, and tickets concerning clarity on tutor-provided answers decreased by 61%. These improvements have been instrumental in rebuilding trust and user satisfaction with the platform.
Improving User Experience
With the redesigned Wait Page, user engagement metrics soared. Time spent in-app per session increased by an average of 26%, and visits to the Wait Page climbed by 34%. This boost in engagement was indicative of a more intuitive and efficient user journey through the Q&A process.
Driving Business Engagement
Enhancements to the usability and transparency of the platform directly correlated with deeper user engagement. The number of questions per user saw an uplift of 44%, and the crucial Week 4 ask metric jumped by 65%, suggesting a sustained engagement that outperformed historical averages.
Elevating Brand Perception
The refreshed mobile app experience contributed to a stronger brand perception. Course Hero's app store ratings ascended by 0.7 points on a 5-point scale, reflecting the positive reception of the improved Wait Page. User testimonials highlighted Course Hero as a trustworthy and reliable academic assistant provider.
Boosting User Retention
Targeting the core user frustrations led to a marked improvement in user retention. The churn rate decreased by 29%, underscoring the success of the initiative in enhancing efficiency, speed, and transparency across the user experience.
Customer Satisfaction and Operational Efficiency
Customer satisfaction scores soared post-implementation, with a 39% uptick in post-deposit satisfaction surveys and a 22% rise in the A+E helpful ratings during A/B testing on iOS apps. On the supply side, the answer rate quality improved by 27%, and SLA compliance rates rose by 40%, reflecting an efficient and quality-focused response system.
Metric Improvement
The concerted efforts to enhance the mobile app's Wait Page resulted in impressive metric improvements: the average number of questions deposited by first-time askers rose by 45%. Repeat usage metrics also saw a substantial boost, with a 32% increase in the number of questions asked per user per month, largely due to the post-submission CTA prompting students to ask another question.
In conclusion, the new designs have not only met but exceeded our user and business goals. Through meticulous planning, user-centric design, and rigorous testing, we have crafted an experience that delights users and supports business growth.
Learnings
The redesign of the Wait Page was not only a journey through the intricacies of user experience design but also a rich source of insights that reinforced several fundamental principles of product design:
Emphasis on Transparency
Our redesign reinforced the pivotal role of transparency in user experience. By integrating a feature that provided users with clear, real-time updates on the status of their questions, we observed a 25% increase in user-reported trust levels. For example, the introduction of a progress bar and status messages transformed the previously opaque wait period into a transparent process, thereby fostering trust and reducing user anxiety.
Challenges and Adaptations
Initially, users were overwhelmed by too much information. We adapted by iteratively testing different versions of status updates to strike the right balance between informative and overwhelming.
The Power of Effective Communication
This project highlighted the importance of clear communication channels between users and service providers. We implemented a feature allowing students to clarify and rephrase their queries, which led to a 30% decrease in misunderstandings and a similar reduction in follow-up clarification questions. This feature not only improved the service quality but also demonstrated the platform's responsiveness to user needs.
Challenges
Balancing Integrity and Clarity
We faced a challenge in balancing students' ability to ask follow-up questions and seek additional clarity with preventing abuse of this feature to ask entirely new questions. We put safeguards in place by educating tutors on this expectation and enabling them to flag accounts that repeatedly attempted to abuse the feature, ensuring it was used as intended and that integrity was upheld on our platform; a sketch of what such a guardrail could look like follows.
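Purely as an illustration of the kind of guardrail this implies, here is a hypothetical sketch. The thresholds, field names, and decision logic are all assumptions for this case study, not our production system:

```typescript
// Hypothetical guardrail for follow-up questions; all values illustrative.
interface FollowUpContext {
  followUpsUsed: number; // follow-ups already sent on this question
  maxFollowUps: number;  // per-question allowance (e.g. 2)
  accountFlags: number;  // tutor-reported abuse flags on this account
  flagThreshold: number; // flags before follow-ups require review
}

type FollowUpDecision = "allow" | "deny" | "review";

function checkFollowUp(ctx: FollowUpContext): FollowUpDecision {
  // Accounts repeatedly flagged by tutors get routed to manual review
  // rather than being silently blocked.
  if (ctx.accountFlags >= ctx.flagThreshold) return "review";
  // Cap follow-ups per question so that seeking clarification can't
  // become a free second question.
  if (ctx.followUpsUsed >= ctx.maxFollowUps) return "deny";
  return "allow";
}
```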
The Need for Continuous Improvement
The redesign process highlighted the necessity of continuous improvement. We learned that effective solutions are not static; they evolve. Post-launch monitoring showed a 10% initial uptick in user satisfaction, which began to plateau after three months. This prompted us to establish a monthly review cycle, integrating user feedback and performance data to iterate on the design continually.
Being Able to Adapt
We had to recalibrate our expectations and processes to accommodate the additional resource investment for ongoing improvements. This led to establishing a dedicated quarterly UX review cadence to focus on long-term maintenance of user satisfaction.
These types of insights have continued to guide my approach to product design, embedding a lifecycle perspective that prioritizes adaptability and ongoing user dialogue. This project has reinforced a truth applicable across all domains of product design: user experience is a narrative that unfolds over time.
Design is not just about solving problems but about telling a story where the user is the protagonist. Our role as designers is to ensure that this story is coherent, engaging, and, above all, user-centric. And by reflecting on our learnings, we not only refine our own design practices but also contribute to the broader discourse on how transparency, communication, and continuous improvement can be effectively integrated into product design to create experiences that users love and trust.
Conclusion
After completing our design, we passed it to the development team for implementation. However, our work did not stop there. We continually monitored user feedback and key metrics to ensure the design was performing as anticipated and achieving the desired results. We addressed numerous bugs before the public release and gradually optimized the design post-launch. This was achieved through a continuous process of reviewing and refining the design based on real-world usage and feedback.
This project represents my proudest contribution at Course Hero. It wouldn't have been possible without the fantastic team and our enjoyable but productive late-night sessions! A massive thank you to our heroes in the product, engineering, Tutor Content team, and data science team.