Roles: Principal UX/UI Designer (current), Senior UX/UI Designer
Responsibilities: UX/UI, Research, Prototyping, Usability-Testing, Department-wide Design Standards
January 2018 - Present
As principal designer, I acted as design lead to set direction, corral scope, and provide guidance inside and outside my project teams. I also worked directly as a UX designer in daily practice. In my role, I established innovative projects that improved the delivery of service while defining digital tools that better served millions of riders navigating our transit system.
The MBTA possesses all the complexity and limitations one might expect of the nation's oldest transportation system. Yet, surprisingly, within this large and occasionally challenging agency sits a relatively new and agile Customer Technology Department. CTD is a team of designers, engineers, and content and product specialists assembled like a tech startup within the agency. The MBTA tasked us with solving problems and building internal and customer tools with modern, rider-centered approaches.
In this role, I was embedded in two teams simultaneously.
In 2015, an unrelenting series of blizzards brought the city's transportation system to a halt. In the aftermath, the state decided to reinvest in the entire system and identified its outdated website as an area of focus for immediate improvement.
The effort to make this mobile-unfriendly, inaccessible website meet modern web standards was the first major project under the nascent Customer Technology Department.
Significant issues with this pre-CTD website included its lack of mobile responsiveness, its cluttered and outdated interface, and its inaccessible markup. Nevertheless, longtime users genuinely loved its functionality, so it set a helpful baseline of comparison for subsequent redesigns.
The example below shows what MBTA.com looked like when I first inherited this project. While the new team had improved the website's responsiveness, there was little sense of priority on this beta site. As a result, it was visually underwhelming (a common complaint among early beta testers) and, more concerning, quite cumbersome to navigate.
From this state, I joined the web team to perform an audit and enhance the original vision of the redesign. While the front end may have felt like a step backward to users at the time, even this more limited beta website was powered by significant improvements to the MBTA's transit data, courtesy of this young department.
I joined the team and inherited the project in January 2018 after the beta website was launched in 2017.
The 'Rider Tools' team for MBTA.com consisted of:
The screenshot below is from the version of MBTA's homepage that I helped to design. It shows a culmination of improvements I led across navigability, search, content strategy, and optimizations based on user behavior.
This design took inspiration from the original feature-set in 2015 while enhancing the information architecture, interface, and functionality on a greater diversity of devices. From the early, static, and hard-to-update application to this dynamic set of rider tools, MBTA.com, like the historic MBTA system itself, had come a long way.
While we did not initially set out to attract more users or increase engagement as a public website, during my time on MBTA.com these cumulative improvements saw annual pageviews rise from around 6 million in 2017 to over 14 million in 2019.
It may come as no surprise that one of the most popular features of MBTA.com is our public-facing schedules. The definition for a 'schedule' depends on who you ask, however:
Generally speaking, think of a schedule as "what ought to happen" in the system.
Operationally, the organization thinks of a schedule in terms of people and vehicles. But the public doesn't care quite who will drive their vehicle so much as when it will show up.
Unfortunately for us, much of the data that undergirds our schedules looks like this:
As you can probably tell, the rectigraph is not suited to being a public-facing artifact, but it has been fundamental to how we've run our service for the last century. The name comes from an early version of the photocopy machine (this information is still distributed on paper by operations staff across the T).
The MBTA intended it to help manage shifts; it was never purpose-built for public consumption.
The MBTA does publish public schedules, but like the rectigraph, they have also not changed much in the last 100 years. Most people in the area would recognize the paper version of schedules shown below. It's a time-tested design used by many but genuinely ill-suited for reading on a mobile phone or with a screenreader. These schedules are still updated manually as hundreds of separate design assets and printed en masse.
In the last ten years, schedules have changed dramatically in nature. A rider's concept of "what ought to happen" is no longer informed by printed matter but by newer digital countdown clocks, websites like mbta.com, and applications like Google Maps and other transit apps. And while many of these things are technically predictions and not schedules – to riders they are still expectations that we either meet or fail.
I've titled this case study "How bias makes you late". But what does bias have to do with a timetable? Shouldn't a schedule, just a list of times and stops, be a perfectly rational thing?
When I joined the T, if you were a bus rider, your method of transportation was only 69% reliable.
Compared to Subway at 89% reliable and Commuter Rail at 91%, there were significant discrepancies in service delivery. When service is very unreliable (as in, it rarely adheres to what we promised on those paper schedules), riders need accurate realtime information, especially if their ride is late or, worse, not coming at all. Poor reliability and inaccurate, hard-to-find realtime information spelled awful experiences for bus riders across the system.
What's worse, these riders tend to include our most vulnerable and most transit-dependent populations. Bus riders are more likely to be women, elderly, or lower-income (or all three) than any other riders in the system. They also accounted for our second-largest riding population overall. The combination of these factors can create biased experiences, whether intentional or not.
Shortly after I joined the team, one of the first things I asked to do was audit the website by feature and mode. I wanted to see how differently customers might be impacted in their experiences on our website as well. Much of what I found reflected that disparity from the real world:
[Chart: effort to reach schedule information — Commuter Rail: 11 clicks, 00:32]
Bus riders not only had worse service; it was also taking them twice as long to find schedule information as Commuter Rail riders, averaging over two dozen clicks each time. I observed all kinds of troubling trends through a combination of passive, anonymous session recordings in FullStory and quantitative observations collected in Google Analytics.
Part of what drove the UX discrepancy was that many of the early stakeholders and creators of the website were Commuter Rail riders. Consequently, those team members and many of their stakeholders largely believed the schedule work to be "done" from a product perspective. They had already invested two years into the schedules on the beta site and felt it worked well (and it did - for them). It wasn't easy, but I set off to challenge that perception with research that I presented to my team/stakeholders in April 2018.
Here are just a few examples of what I initially identified and how we ultimately addressed each of them iteratively over the following year and beyond.
While Commuter Rail riders typically landed on a page that immediately displayed train times, bus riders were presented with long pages of text and no times on screen. So if you wanted to know when to expect a bus, you had to dig deeper.
With so many visitors arriving organically by Googling their bus route at really any stage of travel, we didn't have much time to orient users on the website. So just like it worked for the Commuter Rail users, it was important for bus riders in transit to see predicted departures immediately, without additional clicks.
Bus stops are notoriously unremarkable in the real world - often no more than a metal pole by the sidewalk. Having predictions available on the website in this format could potentially put a countdown clock (just like the ones we have at our major stations) in the hands of anyone waiting at a bus stop.
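The display logic behind such a countdown is simple in principle. Here's a minimal sketch (my own illustration, not the team's actual implementation) of turning a predicted departure timestamp from a realtime feed into a countdown-clock style label:

```python
from datetime import datetime, timezone

def countdown_label(predicted_departure: datetime, now: datetime) -> str:
    """Convert a predicted departure time into a countdown-clock style label."""
    seconds = (predicted_departure - now).total_seconds()
    if seconds < 30:
        return "Arriving"            # effectively here already
    minutes = round(seconds / 60)
    return f"{max(minutes, 1)} min"  # never show "0 min" for a future bus

# Example: a bus predicted 7 minutes out
now = datetime(2019, 6, 1, 9, 0, tzinfo=timezone.utc)
pred = datetime(2019, 6, 1, 9, 7, tzinfo=timezone.utc)
print(countdown_label(pred, now))  # 7 min
```

Rounding to whole minutes and collapsing imminent arrivals into a single "Arriving" state mirrors what riders already see on the physical countdown signs at major stations.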
I also created a series of unmoderated user tests tasking participants to discover common and uncommon pieces of bus schedule and route information. Measuring success through task completion and the System Usability Scale (SUS), we validated these final designs.
To get a schedule, you need to indicate the direction of travel. For train service running over fixed steel rails, directions are simple and binary, like heading either north or south.
Bus service, however, is a different beast. For example, a single bus route can have trips that service close to a dozen different combinations of stops. On top of that, buses on the road can switch up throughout the day by skipping or adding stops, sometimes changing into a completely different bus mid-route. These complications certainly convoluted our backend data, but left unchecked in the frontend, they created serious usability issues.
A bus route should be a simple circuit, but it's more like a loose collection of spiderwebs.
Let's take a look at this example from the old beta website. Route 39 typically goes from Back Bay to Forest Hills, but a small handful of trips operate differently along the route. So if a visitor to the website wanted to get a schedule for buses going to Forest Hills, they had to interact with the multi-part control shown below.
First, the page prompted them to choose a direction.
In this case, that means clicking to switch from inbound towards the city center to outbound towards Forest Hills.
Below the direction control, the page then prompted them to choose a "Route Variation" from a pop-up.
These confusingly similar options presented an impossible choice: which Forest Hills are you going to?
Considering that this is one of the most popular routes in the system, and that over half of our routes had similarly baffling variations, the website ultimately confused many bus-riding visitors.
The problem was that for these trips, we had no valuable data to make them distinguishable. Buses belonging to the same route and headed to the same destination had the same name, even if they started from two completely different towns.
Unfortunately, as a designer who is not a service planner, I couldn't address the root problem: complicated bus service. But I knew that we could help riders avoid unnecessary and confusing choices like this one. Most were unlikely to get on these rare variant buses, yet we were presenting every user of bus schedules with the same confusing inputs and options with each visit.
Uncommon trips are now demoted to an "Uncommon destinations" submenu, and the most typical route patterns are automatically selected, reducing unnecessary decision-making.
Descriptive labels in the menu like "Early mornings only" were borrowed from paper schedule information and provide uniformity where we talk about schedules.
Working with our Transit Data team (among other things, they condense our many data sources into our public API), I helped establish product and design goals that led to new and better ways of describing routes. This new "route pattern" data sat below the hierarchy of the route itself (e.g. Route 39), but above the level of individual trips. With this, my team was able to help distinguish these variant trips from one another and both explain and prioritize them for users as was sorely needed.
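To make the idea concrete, here is a rough sketch of how route-pattern data can sit between a route and its trips, and how typical patterns can be separated from rare variants for the direction menu. The field names (`headsign`, `typicality`, `label`) are illustrative, not the API's actual schema:

```python
from dataclasses import dataclass

@dataclass
class RoutePattern:
    """Illustrative model: a pattern groups trips sharing a stop sequence."""
    route_id: str
    headsign: str        # e.g. "Forest Hills"
    typicality: int      # 1 = typical service; higher values = rarer variants
    label: str = ""      # e.g. "Early mornings only"

def split_patterns(patterns):
    """Separate typical patterns from variants bound for the submenu."""
    typical = [p for p in patterns if p.typicality == 1]
    uncommon = [p for p in patterns if p.typicality > 1]
    return typical, uncommon

route_39 = [
    RoutePattern("39", "Forest Hills", 1),
    RoutePattern("39", "Back Bay", 1),
    RoutePattern("39", "Forest Hills", 3, "Early mornings only"),
]
typical, uncommon = split_patterns(route_39)
print([p.headsign for p in typical])  # ['Forest Hills', 'Back Bay']
print([p.label for p in uncommon])    # ['Early mornings only']
```

With a typicality ranking attached to each pattern, the frontend can auto-select the common pattern and demote the rest to "Uncommon destinations" without any per-route hand-tuning.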
Adding this new data for over 170 bus routes was no small ask to external teams. To accomplish this, I looked to precedents set with our paper schedules. Subject matter experts across Wayfinding, Planning, and Maps helped me identify where we might align these sources of information. And by highlighting our most egregious friction points to our team and citing examples drawn from recorded user sessions, I drew upon relevant customer complaints to provide the justification and the direction that led me to this final design.
The subsequent designs were also iterated upon and user-tested via remote usability testing ahead of development and with passive data collection after the team launched the feature.
The final piece of information you need to pull up a schedule is the time period you want to view.
On the old website, choosing a time to view a schedule involved interacting with a large calendar picker, triggered from a button labeled "Schedule for: Today".
For our schedules, Wednesday really shouldn't look that different from Tuesday. But, unfortunately, while trips based on calendar dates are simple for a computer to fetch from a data endpoint, the technological ease comes at the expense of our human users.
The implementation I settled upon to address this was deceptively straightforward: a standard dropdown containing just a few items, the Weekday, Weekend, and upcoming holiday schedules.
The over-large calendar picker on the old website was also not accessible. A user with good vision may find the calendar easy to use within a couple of clicks, but a poorly implemented calendar picker can present upwards of 30 elements for a screenreader to announce and traverse.
The updated schedules replace the calendar with a far simpler dropdown. While hardly beautiful (though it is, in a way, to me), it's much easier to understand and traverse a list of three items on a screenreader than a complex custom calendar component.
Why not use a calendar to pick a date? Ultimately, a real schedule should be a repeatable and predictable set of events, not unique to a single day.
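The mapping from calendar dates to these repeatable schedule groups can be sketched in a few lines. This is my own simplification, assuming a hypothetical holiday table; in practice, service exceptions come from the schedule data itself:

```python
from datetime import date

# Hypothetical holiday exceptions; real ones come from the schedule data.
HOLIDAYS = {date(2019, 7, 4): "July 4th (Sunday schedule)"}

def schedule_group(d: date) -> str:
    """Map a calendar date to the schedule group a rider should view."""
    if d in HOLIDAYS:
        return HOLIDAYS[d]
    if d.weekday() < 5:          # Monday through Friday
        return "Weekday"
    return "Saturday" if d.weekday() == 5 else "Sunday"

print(schedule_group(date(2019, 7, 3)))  # Weekday (a Wednesday)
print(schedule_group(date(2019, 7, 4)))  # July 4th (Sunday schedule)
```

Because every date collapses into one of a handful of groups, a three-item dropdown can cover what the 30-plus-element calendar picker did, with far less for a screenreader to traverse.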
Perhaps one of the most critical parts of this shift in how we displayed schedules is persistence. Design without execution is not much more than a wish or an intention. Although paper schedules have always divided trips into 'Weekday,' 'Saturday,' and 'Sunday' schedules, achieving the simplicity of what I've shown above required serious persistence and cross-team communication.
For example, an early iteration of this feature looked like this:
If you recall the rectigraph from earlier, the reality of operating a transit system produces some pretty ugly data. Squashing that data into a shape simple enough for our riders took a lot of effort.
Here's a snapshot of what that effort sometimes looked like: a combination of persistence, insistence, and asking for help along the way.
It's no secret that every transit system has its shortcomings. I certainly didn't do this work for Dotcom for the praise; anyone who follows the MBTA's official Twitter account knows congratulations are not coming. But wrestling with these challenges is worth it, given how vital this service is to my city. It makes me proud to tough it out and make even minor improvements every day. Not every project is a sweeping redesign of something as essential to the website as schedules; sometimes it's enabling small corrections about bike rack locations or putting a lost and found number somewhere someone might find it faster.
Overall, digital schedule information and public realtime data are relatively new for the T. They're bringing opportunities to think about and drive the direction of much more than the website, shaping how the organization approaches its public communications and service delivery changes. Whether my work has directly impacted that or not, it's been some of my proudest work, and I look forward to more challenges to come.