We spend a *lot* of time thinking about ways to improve education. But we spend even more time figuring out how to deliver a better learning experience through Knewton.

That begins with listening to you.

After wrapping up an extensive pilot program last spring, we spent weeks analyzing the results and sifting through feedback from students and instructors. We mapped out all the ways we could make Knewton better.

Then, we spent the summer working hard to bring those ideas to life.

The first item on our to-do list was simple: make Knewton available for more courses! Today, **we’re excited to announce the debut of six new Knewton courses**: Finite Mathematics, Survey of Mathematics, Business Calculus, Business Statistics, Survey of Economics, and Principles of Economics. That brings our total number of courses available for higher education to 23, for those of you keeping track.

Here are a few more features we’re excited to introduce for the new academic year:

Sometimes, students struggle to learn a new concept because they lack knowledge of a prerequisite from a different course or discipline.

To help them get up to speed quickly, we now offer Boosters: mini-assignments from different courses or disciplines that fill in the gaps of foundational skills students need, just when they need them.

Want to create a test tailored to your course through Knewton? Just select which learning objectives you’d like to include and go. You can now set a time limit, password protect your tests, and launch them in a secure browser.

Who doesn’t benefit from an extra bit of motivation? To keep students focused throughout the semester, we’ve added a Progress Bar that keeps them aware of how far they’ve advanced through an assignment — and presents a breakdown of their overall progress.

You can now export course data to a CSV file, giving you greater control and flexibility in analyzing student performance.

Knewton is now LTI v.1.1 certified by IMS Global, which means that integrating Knewton with learning management systems like Blackboard, Canvas, Moodle, and D2L is easy and secure.

Knewton has also been added to the Canvas App Store, so Canvas users can get up and running with Knewton faster than ever.

Students can now purchase a subscription directly from Knewton or through their campus bookstore. With a single $44 payment for two-year access or a flexible $9.95 monthly subscription, students can choose the plan that fits their budget.

Every instructor wants their students to master the concepts they’re teaching in their course. But how can students — or an instructor, or tutor — know whether they have truly mastered a concept?

When it comes to learning, what does “mastery” mean?

In this video, Chief Research Knerd David Kuntz sets out to answer this question.

Once we’ve defined mastery, how do we help more students achieve it? In this video, Head Data Science Knerd Illya Bomash discusses how Knewton can help all learners achieve mastery.

Helping students achieve mastery is the key to developing successful learners. Over the next few months, we look forward to taking a deeper dive into what mastery is, why it’s so important, and what instructors can do to help make it a reality for more of their students.

We know that these are trying times for many students. We at Knewton would like to make a small gesture that will hopefully make things a little easier on them.

We’re providing complimentary access to students in Knewton courses who live in regions that were affected by the recent hurricanes.

**How to receive or extend your complimentary Knewton trial**

If you’re a student in Florida, Georgia or Texas who is enrolled in a Knewton course, you can receive courtesy access to Knewton or extend your trial by contacting support@knewton.com.

In your note, please provide the following information:

- Email subject Line: Hurricane Extension Request
- Your first and last name
- The email address you used to create your Knewton account
- Name of your college/university
- The state where your college/university is located
- Your instructor’s name

We’re working to provide access as quickly as possible, but as we respond to the volume of requests, please note that it may take up to 24 hours to activate your trial.

The effects of Hurricanes Harvey and Irma will be felt for some time, but we wish a speedy recovery to all those who were affected by the storms.

If you have any questions or require assistance, please contact support@knewton.com.

Our mission is to personalize learning for the world, and to get there we have to ensure our product is easy to use and intuitive. We’ve spent the last few months talking to hundreds of professors using Knewton in their courses to learn what would improve their experience. They spoke, and we listened! We’ve made the instructor experience more user-friendly and added new functionality to streamline tasks. **Our new Course Builder allows you to:**

- Build and organize Courses faster
- Browse through all available content easily – see individual questions and instructional content, such as videos, available for a specific course
- Invite co-instructors to be part of your Course
- Manage when your students can access assignments to help pace them

Plus, our new navigation makes it easy for you to move through different parts of your Course!

Do you have questions or want to learn more? Contact us at **support@knewton.com**.

If you’re not from the South (or you’re just not a fan of Waffle House, or hash browns), you’re probably feeling a little lost right now. Don’t worry—a quick Google search will clear things right up for you! (Trust me, by the time you get to a Waffle House, you’ll likely know exactly what you want.)

If, on the other hand, it’s the “flipped, tipped, or traditional” that has you wondering, we can dig deeper here for you. What are the differences between these three models of blended learning, and what role can adaptive tools play in each?

According to EDUCAUSE the **flipped classroom** is a pedagogical model in which the typical lecture and homework elements of a course are reversed.

In other words, concepts or skills are introduced to students ahead of class time through a digital medium, and in-class time is spent working with, practicing, or applying their newly acquired knowledge or skills. (Check out a Knewton infographic on this model.)

In this model, homework typically functions to:

- prepare students for productive in-class time by giving them materials that introduce or develop key concepts and skills you’ll be working on in class;
- provide visibility into students’ current knowledge or skillset ahead of introducing challenging material in class;
- or both of the above.

Adaptive learning tools like Knewton can have a positive impact in this context because they guide students through material that’s coming up in class, offering lots of practice as well as an opportunity to demonstrate mastery so students feel more comfortable participating in class discussion or group activities.

Instructor dashboards and reports enable you to know ahead of time which concepts or skills your students struggle with as a group, so your instructional plan can be targeted to these learning objectives. Some instructors use the Knewton dashboard to build groups or facilitate other peer-to-peer learning opportunities between partners with complementary strengths and weaknesses, or to inform one-on-one instruction or meetings.

The **traditional model,** in the context of blended learning, refers to a pedagogy that utilizes homework in pretty much the opposite way. The concepts or skills students work on after class are those that were introduced or developed in the class immediately prior. In this model, homework can:

- provide additional practice opportunity;
- enable students to make use of and take greater ownership of new knowledge and skills;
- serve to demonstrate mastery or understanding;
- or, all of the above.

Adaptive learning tools can play a role here similar to their role in the flipped class, offering advantages for both teaching (instructor analytics let you know where your class stands as a whole and also see how each student is faring individually) and learning (lots of additional practice, instruction if needed, and opportunity to demonstrate understanding and build confidence). Inclusion of non-adaptive assignments like quizzes or tests are commonly used to give closure and provide evidence of student learning that can be easily measured and assessed.

This brings us to the **tipped model**—the newest of the bunch, and not one you’ll have much luck Googling (at least this was true at the time of this draft!) but it’s a term that’s begun surfacing in conference paper titles and abstracts. And it’s floating around with some uncertainty in conversation.

Personally, I love it! In part because of the imagery but mostly because it captures the interstitial nature of this model; its capacity for tilting between the flipped and the traditional.

In this model, you might assign homework that meets the same purposes for which you would assign homework in a flipped context (prepare students for the material, gain insight into students’ prior/current knowledge, etc.), and then use the instructor dashboard to help you decide on the best topics for a “mini-lecture” at the start of class and provide the focus for the day’s activities. Then, after class, you’re back to the traditional model—students return to the platform to take a quiz; you see the results in real time and can adapt accordingly.

Any way you slice it (or dice, smother, cover, cap, or pepper it), adaptive learning tools can add a lot to the experiences of both teaching and learning. I have no doubt that these tools would have made my early teaching life much easier—leaving the tough decision of the day to hash browns.

*Editor’s Note:* We’ve got some new tools on the market that fit any of these models. Take a spin for yourself!

*Aimee Berger, Ph.D., is a solutions architect for Knewton. She travels around the country helping college instructors implement adaptive learning tools.*

1. You’ve seen vast variation in your students’ level of preparation for your course.

You know how this goes: your students are all at different levels and you have to teach to the middle. What if you didn’t have to do that anymore? How much better would the learning outcomes be?

Classroom materials with adaptive technology can provide extra support to the students who need it and allow those who are progressing to advance at their own pace. You get to do what you do best: inspire a love of learning in each and every student.

There is so much material to cover in a short amount of time, and you can’t predict how quickly your students will grasp each concept. You may spend a lot of time manually differentiating content and tailoring lessons to your students’ knowledge. And while you can act as a content curator and facilitate learning, as this useful piece from *Faculty Focus* explains, technology helps you do this efficiently.

Adaptive learning products, by definition, adapt content for each student. This saves you time so that you can truly teach, inspire, and help your students achieve mastery.

It likely has nothing to do with you. Perhaps your students aren’t prepared, are frustrated, or need extra support to understand your lesson. (There’s a new study that examines a tool that measures the sound waves of your classroom to better understand the amount of active learning.)

One of the benefits of adaptive technology is its ability to increase student engagement. Because an adaptive product uses an algorithm to serve up lessons that meet students where they are, they become more engaged and more prepared to focus.

Ever teach a class and the participation is great, students appear engaged, and complete their homework regularly, but when they get to the exam, a large number of students do poorly?

You’re not a mind-reader. By the time you assess the class, it might be too late to help them where they are struggling.

Colleges are beginning to link data to various social and psychological factors to understand and further the likelihood of student success. At a classroom level, learning technology that provides you detailed analytics about your students’ progress and knowledge gaps can help you tailor lessons in real time.

If you could give each of your students your undivided attention, they would be far more likely to succeed. But, alas, you’re only one person and that’s just not sustainable.

The #1 advantage of an adaptive product is that it allows each student to get that one-on-one instruction that they need, *and* that you want to provide. This frees you up to focus on what you love most about teaching.

One of the core components of Knewton adaptivity is the *knowledge graph*. In general, a graph is composed of nodes and edges. In our case, the nodes represent independent *concepts*, and the edges represent *prerequisite* relationships between concepts. An edge between concepts A and B (A → B) can be read as *Concept A is prerequisite to concept B*. This means that the student generally must know concept A before being able to understand concept B. Consider the example portion of a knowledge graph below:

In math-speak this is a directed acyclic graph (DAG). We already covered what the “graph” part means. The “directed” part just means that the edges are directed, so that “A prerequisite to B” does **not** mean “B prerequisite to A” (we instead say “B *postrequisite* to A”). This is in contrast to undirected edges in social networks where, for example, “A is friends with B” *does* imply “B is friends with A”. The “acyclic” part of DAG means there are no cycles. A simple cycle would involve A → B → C → A. This would imply that you need to know A to know B, B to know C, and then C to know A! This is a horrible catch-22. You can never break the cycle and learn these concepts! Disallowing cycles in the graph allows us to represent a course, without contradictions, as starting with more basic concepts, and leading to more advanced concepts as the student progresses (this progression is top-to-bottom in the graph above).
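To make the acyclicity requirement concrete, here’s a small illustrative sketch (not Knewton’s actual code — the concept names and the `has_cycle` helper are made up for this post) that checks a set of prerequisite edges for exactly the catch-22 cycle described above:

```python
from collections import defaultdict

def has_cycle(edges):
    """Detect a cycle in a directed graph given as (prereq, postreq) pairs.

    Depth-first search with three node states:
    0 = unvisited, 1 = on the current DFS path, 2 = fully explored.
    """
    graph = defaultdict(list)
    nodes = set()
    for a, b in edges:  # a is prerequisite to b (a -> b)
        graph[a].append(b)
        nodes.update((a, b))

    state = {n: 0 for n in nodes}

    def dfs(n):
        state[n] = 1
        for m in graph[n]:
            if state[m] == 1:  # back edge: we've looped onto our own path
                return True
            if state[m] == 0 and dfs(m):
                return True
        state[n] = 2
        return False

    return any(state[n] == 0 and dfs(n) for n in nodes)

# A valid prerequisite chain: no cycles.
ok = [("add", "multiply"), ("multiply", "exponentiate")]
# The "horrible catch-22" from the text: A -> B -> C -> A.
bad = ok + [("exponentiate", "add")]

print(has_cycle(ok))   # False
print(has_cycle(bad))  # True
```

A real system would run a check like this whenever the graph is edited, so a course can always be laid out from basic to advanced concepts.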

Another crucial aspect of the knowledge graph is the content: i.e. the assessing questions and the instructional material. Each concept has a number of such content pieces attached, though we don’t show them in the picture above. You can think of them as living inside the node.

Of course, we can never know exactly what you know – that’d be creepy! Instead we *estimate* the student’s knowledge state using a mathematical model called the **Proficiency Model**. This takes, as inputs, the observed history of a student’s interactions, the graph structure, and properties of the content (question difficulty, etc.), and outputs the student’s proficiency on all the concepts in the graph at a given point in time. This is summarized below:

Abstractly, *proficiency* on a concept refers to the ability for a student to perform tasks (such as answer questions correctly) related to that concept. Thus, we can use the estimated values of the proficiencies to *predict* whether the student answers future questions correctly or not. Comparing our predictions to reality provides valuable feedback that allows us to constantly update and improve our model and assumptions.

The foundation for our Proficiency Model is a well-tested educational testing theory known as Item Response Theory (IRT). One important aspect of IRT is that it accounts for *network effects*— we learn more about the content and the students as more people use the system, leading to better and better student outcomes. IRT also serves as a foundation for our Proficiency Model on which we can build additional features.
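For readers curious what IRT looks like in practice, here’s a minimal sketch of the standard two-parameter logistic (2PL) item response function. This is textbook IRT rather than Knewton’s production model, and the parameter values are invented for illustration:

```python
import math

def p_correct(theta, difficulty, discrimination=1.0):
    """Probability that a student with proficiency `theta` answers an
    item correctly under a two-parameter logistic (2PL) IRT model."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# A student whose proficiency exactly matches the item's difficulty
# has a 50/50 chance; a stronger student does better.
print(p_correct(0.0, 0.0))  # 0.5
print(p_correct(2.0, 0.0))  # ~0.88
```

Comparing these predicted probabilities against students’ actual answers is what lets the model’s fit be checked and improved as more people use the system.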

One thing that basic IRT does not include is any notion of temporality. Thus older responses count the same as newer responses. This is fine in a testing environment, where “older” responses mean “generated 20 minutes ago”, but isn’t great in a learning environment. In a learning environment, we (obviously) expect that students will be learning, so we don’t want to overly penalize them for older work when in fact they may have had an “Aha!” moment. To remedy this, we’ve built temporal models into IRT that make more recent responses count more towards your proficiency estimate than older responses on a concept*.
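As a toy illustration of temporality (not our actual model), one simple way to make recent responses count more is to weight each response by an exponential decay on its age. The half-life value here is arbitrary:

```python
import math

def weighted_accuracy(responses, half_life=7.0):
    """Recency-weighted accuracy over (days_ago, correct) pairs.

    Each response gets weight exp(-ln(2) * days_ago / half_life), so a
    response `half_life` days old counts half as much as one from today.
    """
    decay = math.log(2) / half_life
    num = den = 0.0
    for days_ago, correct in responses:
        w = math.exp(-decay * days_ago)
        num += w * correct
        den += w
    return num / den

# Two old wrong answers, then an "Aha!" moment and two recent correct ones:
history = [(14, 0), (10, 0), (1, 1), (0, 1)]
print(weighted_accuracy(history))  # well above the unweighted mean of 0.5
```

Under a scheme like this, a student who struggled early but has mastered the concept since is not dragged down by their old mistakes.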

Another thing that basic IRT does not account for is instructional effects. Consider the following example. Alice got 2 questions wrong, watched an informative video on the subject, and then got one question right. Under basic IRT we’d infer that her proficiency was the same as Bob’s, who got the same 2 questions wrong, did **not** watch the video, and then got one question correct. This doesn’t seem accurate. We should take Alice’s instructional interaction into account when inferring her knowledge state and deciding what’s best for her to work on next. We have extended IRT to take instructional effects into account.

Finally, basic IRT does not account for multiple concepts, nor their interrelationships in the knowledge graph. This will be the main focus of the rest of this post.

The titular question of this post: “What does knowing something tell us about knowing a related concept?” is answered through *Proficiency Propagation*. This refers to how proficiency flows (propagates) to different concepts in the knowledge graph.

To motivate why proficiency propagation is important, let’s consider two different scenarios.

First, consider the example shown below, where the only activity we’ve observed from Alice is that she performed well (a ✔ indicates a correct response) on several more advanced concepts.

We can’t know everything Alice has ever done in this course – she may have done a lot of work offline and answered tons of “*Add whole numbers*” questions correctly. Since we don’t have access to this information, we have to make our best inference. Note that all three concepts Alice excelled at are reliant upon “*Add whole numbers*” as a prerequisite. Let’s revisit the definition of the prerequisite relationship. We say “A is prerequisite to B” (A → B) if A must be mastered in order to understand B. In other words:

Concept B is mastered ⇒ Concept A is mastered

In our case, there are three different “concept B’s” that Alice has clearly mastered. Thus, by definition of the prerequisite relationship Alice almost certainly has mastered “*Add whole numbers*” (it’s the concept A). So let’s paint that green, indicating likely mastery.

By similar reasoning, if Alice has mastered “*Add whole numbers*”, then she has likely mastered its prerequisite “*Understand the definition of whole numbers and their ordering*”. However, we might be slightly less certain about this inference, since it is more indirect and relies on a chain of reasoning. So let’s paint that slightly less bright green:

What about the remaining two concepts? First consider “Multiply whole numbers”. Alice has mastered its prerequisite, which is promising. But she may have never received any instruction on multiplication, and may have never even heard of such a thing! On the other hand, she may be a prolific multiplier, having done lots of work on it in an offline setting. In this case, we don’t have the definition of “prerequisite” working in our favor giving us a clean inference. But certainly if we had to guess we’d say Alice is more likely to have mastered “Multiply whole numbers” than someone else who we have no info on. Thus, we give Alice a small benefit of the doubt proficiency increase from the baseline. Similar considerations apply to the last, most advanced concept:

Let’s summarize the lessons we’ve learned:

- Mastery (i.e. correct responses) propagates strongly ‘backwards’ to prerequisites.
- As we get further from direct evidence in the prerequisite chain, there is more uncertainty. Thus we infer slightly less mastery.
- Mastery propagates weakly ‘forwards’ to postrequisites.

Now let’s consider Bob, who has struggled on “Add whole numbers”, getting 3 incorrect:

Recall our deconstruction of the prerequisite relationship A → B:

Concept B is mastered ⇒ Concept A is mastered

Unfortunately, this doesn’t directly help us here, because Bob hasn’t mastered any concepts as far as we know. However, the contrapositive is exactly what we need:

Concept A is **not** mastered ⇒ Concept B is **not** mastered

Let’s take “struggling on” to be equivalent to “not mastered” for our purposes to get:

Struggling on Concept A ⇒ Struggling on Concept B

Thus, we now know that struggling-ness propagates strongly down to the postrequisites of “Add whole numbers”!

What about “*Understand the definition of whole numbers and their ordering*”? Similarly to the flipped situation of propagating mastery to postrequisites, we cannot make any strong pedagogical inferences just from the prerequisite relationship. However, we can still assert that it is more likely that Bob is struggling on it given we’ve seen him struggle on “Add whole numbers” than if we hadn’t seen him struggle on that concept:

Let’s summarize what we’ve learned about propagation of struggling-ness:

- Struggling (i.e. incorrect responses) propagates strongly forwards to postrequisites.
- As we get further from direct evidence in the postrequisite chain, there is more uncertainty. Thus we infer slightly less struggling.**
- Struggling propagates weakly backwards to prerequisites.

Notice these rules are just the mirror-opposites of the ones for propagating mastery! And all of this comes simply from the definition of “prerequisite-ness”, and some pedagogical reasoning.
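To make the mirror-image rules concrete, here’s a deliberately simplified one-step propagation sketch. It is not Knewton’s model, and the `strong` and `weak` factors are arbitrary illustrative numbers:

```python
def propagate(evidence, edges, strong=0.8, weak=0.3):
    """One round of asymmetric proficiency propagation (illustrative only).

    `evidence` maps concepts to a score in [-1, 1]: positive = mastery,
    negative = struggling. Mastery flows strongly backwards to
    prerequisites and weakly forwards; struggling is the mirror image.
    """
    scores = dict(evidence)
    for prereq, postreq in edges:  # edge: prereq -> postreq
        for src, dst, direction in ((postreq, prereq, "back"),
                                    (prereq, postreq, "fwd")):
            s = evidence.get(src, 0.0)
            if s == 0.0:
                continue
            # Mastery (s > 0) is strong backwards; struggling (s < 0)
            # is strong forwards. The other two cases get only `weak`.
            factor = strong if (s > 0) == (direction == "back") else weak
            scores[dst] = scores.get(dst, 0.0) + factor * s
    return scores

edges = [("Add whole numbers", "Multiply whole numbers")]
# Alice mastered the postrequisite: strong backward flow to the prereq.
print(propagate({"Multiply whole numbers": 1.0}, edges))
# Bob struggled on the prerequisite: strong forward flow to the postreq.
print(propagate({"Add whole numbers": -1.0}, edges))
```

Repeating rounds like this along prerequisite chains, with the factor shrinking at each hop, reproduces the “slightly less certain at each step” behavior described above.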

While we now have a nice picture of how we want proficiency propagation to behave, that doesn’t count much unless we can rigorously define a mathematical model capturing this behavior, and code up an algorithm to efficiently compute proficiencies in real time for all possible cases. As they say, the devil is in the details. To give a flavor of what’s involved, here are some of the technical details our mathematical model and algorithm must obey:

- Convexity: This essentially means that the proficiencies are efficiently and reliably computable.
- Strong propagation of mastery up to prerequisites, and of struggling-ness down to postrequisites, with a slight decay in propagation strength at each ‘hop’ in the graph.
- Weak propagation of mastery down to postrequisites, and of struggling-ness up to prerequisites, with a large decay in propagation strength at each ‘hop’ in the graph.
- The above two points imply *asymmetric propagation*: the impact of a response on neighboring proficiencies is asymmetric, always being stronger in one direction in the graph than the other.
- All of this proficiency propagation must also play nicely with the aforementioned IRT model and the extensions to include temporality and instructional effects.

Coming up with a well-defined mathematical model encoding asymmetric strong propagation is a challenging and fun problem. Come work at Knewton if you want to learn more details!

So what good exactly does having this fancy proficiency model do us? At the end of the day, students care about being served a good educational experience (and ultimately, progressing forward through their schooling), and in Knewton-land that inevitably means getting served good recommendations. Certainly, having a pedagogically-sound and accurate proficiency model does not automatically lead to good recommendations. But having a bad proficiency model almost certainly will lead to bad recommendations. A good proficiency model is necessary, but not sufficient for good recommendations.

Our recommendations rely on models built “on-top” of the Proficiency Model, and answer questions such as:

- What are useful concepts to work on next?
- Has the student mastered the goal material?
- How much instructional gain will this material yield for the student?
- How much will this piece of material improve our understanding of the student’s knowledge state and therefore what she should focus on next?

All of these questions can only be answered when equipped with an accurate understanding of the student’s knowledge state. As an example, consider Alice again. If we had a bare-bones proficiency model that did not propagate her mastery to “Add whole numbers”, we might consider this a valid concept to recommend material from. This could lead to a frustrating experience, and the feeling that Knewton was broken: “Why am I being recommended this basic stuff that I clearly already know?!”

At the end of the day, it’s user experience stories like this that motivate much of the complex data analysis and mathematical modeling we do at Knewton. And it’s what motivates us to keep pushing the limit on how we can best improve student learning outcomes.

**There are other temporal effects that kick in if you’ve seen the same question more than once recently*.

*** There is a whole other layer of complexity in our Proficiency Model that we’ve glossed over. We actually estimate a student’s proficiency and a measure of our confidence in that estimate. These are the proficiency mean and variance, and can be combined to obtain confidence intervals, for example. For the purposes of this blog post, we are only considering the propagation of proficiency means.*

*This post was written by Michael Binger, a data scientist at Knewton.*

*Share the infographic on Twitter.*

College today barely resembles what it looked like 20, 10, or even five years ago. We’re serving a more diverse student population than ever before, and we’re teaching skills for jobs that didn’t exist before. First-time college-goers are on the rise, in part because a high school diploma no longer cuts it in today’s knowledge-based economy. And with the rising cost of college — combined with this more economically diverse student body — more and more students (37%, to be exact) are attending part time so they can balance work and school.

What’s more, as more students attend college, they’re entering higher education at widely varied levels of preparation. Keeping young people stuck in the cycle of remedial coursework costs families over $1.5B annually.

With all this change, we know the ways we educate students need to change, too. Students expect learning to happen at their fingertips, and they expect it to be personalized the way all other aspects of their lives are personalized.

What are you doing to end one-size-fits-all education? Share on Twitter (and tag us!) using #CollegeToday.

Contact us to learn how Knewton can help you personalize education for your students.


Teachers, parents, and students have strong and sometimes contradictory opinions about the value of homework, but answers to these and other questions are not easy to come by. A lot depends on contextual information about individual students, their teachers, and the subject matter, and this information is difficult to gather and analyze.

We do know, however, that some students are using Knewton-powered learning applications to do homework, and the anonymized data that Knewton collects can shed light on its impact on student performance.

More than 100,000 elementary school students using one of our partner applications gave about 18.5 million answers to math questions over the course of the 2015–16 school year. What does this wealth of data tell us about doing homework?

It turns out that good students do, in fact, do their homework.

Math problems answered during the school day — between 8 a.m. and 3 p.m. — are presumably done in class, while answers submitted between 3 p.m. and 9 p.m. are most likely homework.
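That cutoff rule is easy to state in code. A tiny sketch (the `work_context` helper is ours, just for illustration):

```python
from datetime import datetime

def work_context(timestamp):
    """Classify an answer by its hour, using the cutoffs from the analysis:
    8 a.m.-3 p.m. = schoolwork, 3 p.m.-9 p.m. = homework, else other."""
    h = timestamp.hour
    if 8 <= h < 15:
        return "school"
    if 15 <= h < 21:
        return "homework"
    return "other"

print(work_context(datetime(2016, 3, 14, 10, 30)))  # school
print(work_context(datetime(2016, 3, 14, 17, 0)))   # homework
```

Bucketing all 18.5 million timestamped answers this way is what separates the in-class work from the homework in the charts that follow.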

The graph below shows when during the day students did their math problems over the course of the 2015-2016 school year:

Most of the work in this Knewton-powered partner application happened in school. About one-sixth of the work, however — about 3 million answers to math problems — got done as homework.

When these students did homework, they answered math problems correctly more often than they did at school. Before 3 p.m., about 65% of answers were correct. Between 3 p.m. and 9 p.m., more than 80% were correct. You can see how their performance improves after school:

(The y-axis begins at 50, not zero.)

What’s going on here? There are a couple of possible explanations:

- Doing work outside of school helped the students *get more questions right* than they did at school. Maybe they had more time to think, or felt less pressure, or had to contend with fewer distractions, or got help from parents, older siblings, or the Internet.
- The other possibility is that stronger students *did more homework* than lower-performing students did. Since the percentage of correct responses is an average of the efforts of 100,000 students, the diligence of the strongest students could lift the entire group’s performance above 80%.

If this second possibility is true, a seemingly impressive statistical gain masks the fact that, for many of these students, homework isn’t making a difference. It’s a lot like when you have a group of five people and one of them gets $100: the group’s average wealth goes up $20, but four of them don’t see any benefit.

So which is it: that students do better when they do homework, or that hard-working whiz kids are making everyone look better?

In an attempt to find out which possibility is more likely, we sorted the 100,000 students into five groups of equal size, based on how often they answered correctly — kind of like a teacher grading on a curve.

The average student in the highest performing group (in orange, let’s call it Group A) answered 94% of math questions correctly. The students in the lowest performing group (light blue, Group E) gave right answers only 24% of the time.
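The grouping itself is straightforward. Here’s an illustrative sketch of splitting students into five equal-size groups by accuracy (the `quintile_groups` helper is made up for this post, and ties and rounding are handled naively):

```python
def quintile_groups(accuracies):
    """Sort students by accuracy and split them into five equal-size
    groups, Group A (highest-scoring) through Group E (lowest)."""
    ranked = sorted(accuracies.items(), key=lambda kv: kv[1], reverse=True)
    size = len(ranked) // 5
    groups = {}
    for i, label in enumerate("ABCDE"):
        chunk = ranked[i * size:(i + 1) * size]
        groups[label] = [student for student, _ in chunk]
    return groups

# Ten toy students with accuracies 0.0 through 0.9:
demo = {"s%d" % i: i / 10 for i in range(10)}
print(quintile_groups(demo))
```

With 100,000 students, each of the five groups holds 20,000 students, which is what makes their average scores comparable.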

How many math problems were each of these five groups doing, and was it school work or homework?

At school, there is a peculiar relationship between practice and proficiency. The lowest-scoring group of students, Group E, does the least amount of work. Group D, which performs better, does more work than Group E. Groups B and C, which perform better still, do the most work at school.

However, the strongest students, Group A, are barely working in school more than Group E, the lowest-scoring group.

The data can’t explain *why* this relationship exists, but it’s easy to imagine the classrooms they describe. Math teachers know that different concepts will come naturally to some children, while for other students greater effort is required. Some students succeed without trying very hard, others slack off, and still others struggle despite diligent practice.

When it comes to homework, however, we clearly see that the higher a group’s score was, the more homework they did. The result is striking: The five groups are in alphabetical order with Group A, the top performers, doing the most homework, and Group E doing the least. This suggests that stronger students simply do more homework than their peers.

How does each group perform over the course of the day?

The performance of each group improves during homework time, but these increases are too slight to suggest that homework makes students perform better. And the differences between groups are bigger than differences within each group. Scores for Group E fell off around 7 p.m.: Maybe they were in an afterschool program until their parents picked them up after work? This drop is statistically significant, but you can see that the shaded areas, showing the margin of error, get bigger as fewer students work on math.

We still don’t know whether the stronger students are performing better *because* they did their homework, or that they did their homework because they’re stronger students. So this finding will not resolve the debates over which approaches to homework are productive and beneficial, or whether it all amounts to busywork.

Still, aggregated student data does contain valuable lessons about how students learn. We’ve seen that student performance after school is much higher because well-performing students tend to do their homework, not because students tend to perform better after school hours.
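This composition effect is worth making concrete. In the sketch below (the scores are invented, not from the data), neither group’s performance changes after school; the aggregate average rises only because strong students make up a larger share of the evening sample:

```python
# Hypothetical scores: strong students (think Group A) score high
# all day, weaker students (think Group E) score lower all day.
strong = 90.0
weak = 60.0

# During the school day both groups are working, so the average
# over all active students mixes them evenly.
daytime_avg = (strong + weak) / 2          # 75.0

# After school, suppose three out of four active students are strong
# (because good students tend to do their homework).
evening_avg = (3 * strong + 1 * weak) / 4  # 82.5
```

No individual student got better in the evening, yet the aggregate jumped, which is the trap the post is warning against when reading averages over a self-selected group.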

So your teachers are correct when they say that good students do their homework.

*Ruben Naeff is a data scientist at Knewton*.

More than 20 million Americans go to college each fall, even though it gets more expensive every year, whether they attend an Ivy League school or a community college.

To afford college, about half of American college students take out loans. And eventually, most people pay them back, although it can take decades.

But affordability is not only about money. There’s also what I call “mental affordability”: your stamina for the stresses of school. Stresses and surprises from other areas of life — a sick child, a visiting cousin, a flat tire, a blown water heater — cut into a student’s ability to make it to class and the focus they need to learn.

When it comes to mental affordability, your mind is like a checking account. On days when you’re doing great, your balance is high and you can accomplish a lot. When your balance is low, you might be overdrawn, and the next challenge becomes the straw that breaks the camel’s back.

As Knewton’s Jose Ferreira explained in a recent post, today’s college students aren’t all fresh out of high school. One in five college students is at least 30 years old, according to the U.S. Department of Education, and 37 percent of college students attend part-time. While in school, many of these part-time students are working low-paying jobs or caring for their children or other family members.

When I visit an institution like Mott Community College in Flint, Michigan, I see 40- and 50-year-olds who once had factory jobs and are trying to remake their lives for a 21st-century economy. There are so many different elements to their lives: They are parents, they are employees, they are sons and daughters. If anything has to go, it’s going to be school.

Today’s students have to tune out a lot of distractions, and their lives may not be predictable enough to follow a master plan. While balancing school with other responsibilities, they may be taking it one term at a time, even one day at a time. On Monday, they know they can make it to class, but who knows what Wednesday and Friday will bring?

So I know how hard it is for some students just to get into that chair.

Once students are in the classroom, they see things they haven’t seen before. A former blue-collar worker sits down in front of the computer and wonders, “How do I get this text from this line to the next line?”

Meanwhile, for first-generation college students without mentors who studied beyond high school, the sheer intimidation can be overwhelming. You’re in a new academic and socioeconomic setting. Feelings of anxiety and incompetence swirl around in your head, and the expectations — your own and from those who depend on you — create more noise. There’s no way to shut off those feelings.

All of these thoughts and feelings come before anyone gets to making sense of standard deviations, or the functions of cells, or the law of supply and demand. Schools lose a lot of hopeful people because of these distractions, and that puts more of a burden on these individuals, financially as well as mentally. Two-thirds of student loan defaults are for $10,000 or less, and loans like these generally mean that the borrower never received a degree. These adults who made sacrifices in an attempt to improve their lives can’t afford to try again.

Educators today are better positioned than ever to help make college more mentally affordable. Some of this is about adapting to the wide range of students and responding to what each one is going through at any given moment, rather than expecting students to be responsible for all the adapting.

One way to keep down the mental cost of college is by providing better course materials. If you’ve only got ten minutes to study, you want to spend that time in the best possible way, and students using adaptive courseware can pick up exactly where they left off. An adaptive learning platform can figure out what each student is ready to learn and adjust its sense of their knowledge over the course of the semester, as each student learns. Instructors can see where their students need help in real time, and step in to support them when they need it.
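The post doesn’t describe how Knewton’s model works, and the platform’s internals aren’t public. As an illustration only, here is a minimal sketch of Bayesian knowledge tracing, one standard technique for the kind of per-student knowledge estimate described above; every parameter value here is an assumption, not Knewton’s:

```python
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """One Bayesian-knowledge-tracing step: update the estimated
    probability that a student has mastered a concept, given one answer.

    guess: chance of answering correctly without mastery
    slip:  chance of answering incorrectly despite mastery
    learn: chance of acquiring mastery on this step
    """
    if correct:
        likelihood = p_know * (1 - slip)
        evidence = likelihood + (1 - p_know) * guess
    else:
        likelihood = p_know * slip
        evidence = likelihood + (1 - p_know) * (1 - guess)
    posterior = likelihood / evidence
    # Account for the chance the student learned the concept on this step.
    return posterior + (1 - posterior) * learn

# Start uncertain, then observe a short answer history.
p = 0.3
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
```

The estimate rises after correct answers and falls after incorrect ones, which is what lets a platform decide in real time what a student is ready to learn next.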

Beyond courseware, there are many other ways institutions are adapting to student needs. Schools like Florida Keys Community College have dedicated scholarships for first-generation students, and students can get advice and support through organizations like uAspire. Baton Rouge Community College offers seven-week sessions that may be easier to complete than a standard 15-week semester, because there’s less of a chance of life getting in the way.

There’s been a lot of talk lately about making college more affordable. At the same time, we need to make higher education more mentally affordable for all students: whether they are veterans, single mothers, factory workers, or teenagers who just finished high school. That’s the only way we can put the American dream back within reach for everyone.

*Bacari K. Brown is an institutional partnerships representative at Knewton.*