Supporting the delivery of crucial data between NASA engineers
Due to the nature of this work (space secrets), there are some screens and artifacts I can't show. I've abstracted and redacted a few things to tell this story.
Designing NASA's new Space Launch System (SLS) is no small task, and engineers and contractors from across the agency have to exchange and review all manner of models, drawings, and other data with one another. Their process wasn't working: deliveries were late and sign-offs were hard to obtain, causing tension within the organization. I led the research, information architecture, and design to create DEx, a web application that addresses some of SLS' biggest challenges.
Since retiring the Space Shuttle, NASA has been hard at work on a new human spaceflight vehicle, the Space Launch System (SLS). To design and build the rocket, all manner of specialized engineers and contractors have to collaborate on, exchange, and review data artifacts with one another. These are things like environment models, trajectories, drawings, schematics, tables, and more. Each deliverable depends on one or more of the others, so it's important that data is completed and delivered on time. In addition, every deliverable needs to be signed off on by multiple parties.
To keep track of everything, the folks at SLS had come up with a process for requests, signatures, and deliveries. However, it was run by a single person inside an Excel spreadsheet of ever-increasing size and complexity. Nothing was working as expected. There were many bureaucratic silos and unclear requests. Things weren't getting delivered on time (if at all), and it took way too long to get everyone to sign off on things. While there was a process in theory, it wasn't really being followed, sowing discord, stress, and unhappiness throughout the SLS organization.
I led the user research, information architecture, and product design on SLS Data Exchange (DEx), a web application that addresses some of SLS' most critical issues. Our goal was to make the process more efficient by breaking down key barriers, reducing stress for the organization.
Opportunity: Make it easier to locate and access material.
Solution: By providing all the data on a single, web-based platform rather than individual engineers' hard drives or scattered throughout email, we made it easier for people to find what they were looking for.
Opportunity: Reduce the time needed for sign off.
Solution: Web-based signatures can be completed with the click of a button, rather than chasing people down to physically sign off on paper.
Opportunity: Better communicate status and impacts.
Solution: By linking records together, users can easily see how each is related and know which items have an impact on others.
Opportunity: Encourage collaboration.
Solution: Using comments, users can discuss an exchange in a single, public place so everyone involved in the process can see. This keeps people in the loop and reduces the load on people's mailboxes.
After a kickoff with our main clients, we traveled to Marshall Space Flight Center (MSFC) in Huntsville, AL to conduct interviews. I facilitated thirteen interviews with folks from different disciplines, elements, and roles in the process. Our research questions sought to answer who was involved in the process, what it looked like from each participant's perspective, what the frustrations and pain points were, and whether anything was working well.
Rather than doing an affinity diagram, we tried using Airtable to document and classify our notes. We thought it would save us some time and allow us to quickly add data and make associations. My colleague Stephen and I combed through our raw notes and pulled out "nuggets" that could stand on their own. We tagged each note as we went in order to surface common trends. We moved quickly, and by the end a solid set of themes had emerged.
We quantified our data in two ways to surface the most important opportunities. The first was a simple bar chart showing how many times a theme was mentioned. Nothing was too surprising, but we now had evidence to back up some of the frustrations we heard about when the project began.
The second model we created was a "prevalence-severity matrix". I had used impact-effort matrices in past projects to help prioritize work, and wondered if we could use something similar to further narrow our opportunities. On this chart, we plotted how often a theme came up against how severe it was. The severity was, admittedly, a subjective measure based on the language our participants used and our previous domain knowledge. However, it offered a useful tool to get a sense of our top opportunities.
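The mechanics of this prioritization are simple enough to sketch in a few lines. Here's a minimal illustration of the idea, assuming tagged notes and 1–5 severity ratings; the theme names and scores below are hypothetical, not the actual study data:

```python
# Sketch of the prevalence-severity prioritization.
# Theme names and scores are hypothetical examples.
from collections import Counter

# Each research "nugget" was tagged with one or more themes.
tagged_notes = [
    ["hard-to-find-data"],
    ["slow-sign-off", "hard-to-find-data"],
    ["unclear-requests"],
    ["slow-sign-off"],
    ["slow-sign-off"],
]

# Prevalence: how many notes mention each theme.
prevalence = Counter(tag for note in tagged_notes for tag in note)

# Severity: a subjective 1-5 rating based on participants' language.
severity = {"hard-to-find-data": 4, "slow-sign-off": 5, "unclear-requests": 3}

# Rank themes by prevalence x severity to surface the top opportunities.
ranked = sorted(prevalence, key=lambda t: prevalence[t] * severity[t],
                reverse=True)
print(ranked)  # ['slow-sign-off', 'hard-to-find-data', 'unclear-requests']
```

In practice we eyeballed the upper-right quadrant of the plotted matrix rather than computing a single score, but the intuition is the same: themes that are both frequent and painful rise to the top.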
We wanted a clearer picture of the overall process, and wanted to be able to show our stakeholders where the biggest issues were happening. We pieced together everything we heard into a unified model and highlighted the breakdowns.
We traveled back out to Marshall, and I facilitated two workshops with our clients and a few other leaders from SLS. The first was to generate project goals, and the second was to explore design ideas.
For the goals workshop, we did a simple brainstorm on sticky notes. In the second exercise, we got them drawing! I wanted to make sure our findings and the goals of our stakeholders were aligned. If there were goals or design ideas that didn't map clearly to our findings, we could analyze them further. We also wanted the process to be participatory, so folks felt included. Making changes is a scary thing, so we worked hard to make sure everyone felt welcome in shaping the project.
We divided our user community into a few main roles:
For each role, we wrote several user stories that combined everything we learned from our research and workshops. We used these user stories to scope the work for the first release.
My team's flagship product, Mission Assurance System, is a customizable platform that fits into a lot of NASA use cases. For many projects, it's a way to rapidly meet our users' needs without spending too much time creating an entirely new piece of enterprise software. We decided that it would be a good fit for SLS as well.
Even though we were building using our platform, we still needed to define a logical information architecture. We drew content from all the artifacts we had collected (memos, the original Excel spreadsheet, and more) and our primary research in order to create our IA.
In addition to creating an information architecture, we worked with our clients to draw out a new workflow. We defined what the responsibilities of each role would be in the new system.
With our information architecture and new process workflow in hand, I created a prototype environment of our product to put in front of users. We took our third trip to Alabama to test and validate the interface. I facilitated seven sessions where we sought to validate some of our concepts and get some light usability feedback. We provided our participants a different set of scenarios depending on what role they played in the process.
What are we going to do with all of our free time?
Since we released DEx, SLS has begun using it for their next review cycle. We're closely monitoring usage and checking in with users for feedback. We want the product to evolve alongside their process.
We've been doing a roadshow to upper management folks at SLS, walking them through the system capabilities and demonstrating the value. We've gotten very positive feedback and SLS leadership is eager to see what's next.
Tremendous work! We might know how to do this by the time we fly.