Domino environments

Overview

Domino Data Lab provides enterprise customers with an open MLOps platform for data science, machine learning, and AI research. It serves as a central system of record that tracks end-to-end data science activity across an organization.

Domino environments are central to streamlining processes across a customer's portfolio of projects running on Domino. An environment operates as an abstraction of a Docker image: whenever users execute code in Domino, each unit of execution spins up an isolated Docker container based on an environment template that users define. This lets them experiment and customize within a self-contained space that is versioned, shareable, deployable, and discoverable.
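
To make this more concrete, below is a minimal sketch of what a Dockerfile-based environment template could look like; the base image name, tag, and package versions are hypothetical placeholders rather than actual Domino defaults:

    # Illustrative only: base image name and tag are hypothetical placeholders
    FROM quay.io/example-org/domino-standard-environment:2023.1

    # Layer project-specific Python packages on top of the base template
    RUN pip install --no-cache-dir pandas==2.1.0 scikit-learn==1.3.0

Every workspace, job, or app execution then runs in an isolated container built from an image like this one.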

My contribution

Product design, visual design, usability testing

The team

1 × product manager, 1 × product designer, 4 × engineers

Year

2023

Process

Starting out as a usability improvement project

The notion that environments management, as a product area, sorely needed an upgrade was actually design-initiated. As a design team, we recognized usability gaps at multiple levels across different environments-related workflows, and many customer anecdotes supported these insights.

To refine our focus, we conducted an end-to-end usability audit that catalogued specific usability issues along with their impact levels. Overall, we organized them into two themes:

  • Lack of an experience built around environment-discovery use cases
  • An unintuitive transition between basic and advanced creation modes

Piggybacking on a corporate re-brand & UI reskin initiative

Around the same time we were working on improving the environments experience, a company-wide re-brand / UI reskin initiative was underway. A whole new Domino Design System was being developed for our platform, and I hit the jackpot: the environments project was picked as one of the first candidates to be designed with this new framework!

Theme 1: Improving discoverability

For a long time, Domino had mostly assumed that users would only create new projects with a firm understanding of which specific environments their project requirements called for. The old environments landing page was designed around this assumption: a simplistic tabular list of existing environments that lacked rich metadata as well as a robust filtering mechanism to support more complex browsing use cases.

Users who already knew which environment they were looking for needed to recall at least a portion of its name to use the list's search filter; otherwise, they had no real alternatives. The limited set of data columns also made it difficult to browse this large list through column sorting alone.

As subsequent customer conversations revealed, this was a huge unvalidated assumption about how Domino users actually chose an applicable environment for their projects. There was a real need for browsing capabilities that supported scenarios where users didn't know where to start and wanted to review multiple existing environments against certain configuration parameters. What if there was one they could start using right away, or use as a base template to build on top of?

Theme 2: Creating new environments (basic vs. advanced)

When creating new environments, users told us they relied on three different methods, depending on their project needs:

  1. Create an environment by specifying a simple combination of underlying Python packages. This is the most basic approach to environment creation, suited to very simple data science projects.
  2. Create an environment by configuring requirements.txt or Conda files. This approach is a bit more advanced, but fairly standard among data scientists. It helps in cases where users want to add specific packages through these files on top of an existing environment template (a rough sketch of such files follows this list).
  3. Create an environment by configuring Dockerfiles. This approach is the most advanced, yet not popular among data scientists, as many are not well-versed in Docker commands and syntax. It's typically favored by more development-driven data scientists and/or ML engineers whose environment needs extend beyond the underlying Python packages. Like the second approach, it's also common in scenarios where users want to make specific modifications on top of a base environment template.
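
To illustrate the second method, below is a rough sketch of the kinds of files involved; the package names, versions, and project name are hypothetical, included only to show the shape of each file:

    # requirements.txt (illustrative): pin the extra packages a project needs
    pandas==2.1.0
    scikit-learn==1.3.0
    xgboost==1.7.6

    # environment.yml (illustrative): the Conda equivalent, which can also pin the Python version itself
    name: example-project
    dependencies:
      - python=3.10
      - numpy
      - pip:
          - mlflow

In either case, the packages listed would be added on top of an existing environment template, as described above.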

As one can see, these methods involve varying levels of configuration complexity, and the old environment creation workflow did not provide a seamless transition between them.

The environment creation workflow was nested in a single modal, which only provided enough room for the more basic use cases. If users wanted to add more complex configurations, they'd need to click the Customize before building button. This action would then take them to the environment details page, which exposed text fields for further customization.

This poorly designed interaction was what threw a lot of users off. At this point, they had in fact already created a new environment (or at least that's what happened in the backend). The system was essentially saying: "further customize it if you want, but we've already created it for you".

User confusion was inevitable. It wasn't immediately obvious that clicking a button labelled Customize would actually execute the Create operation. This, along with the transition from the modal to a full details page, only made things worse. How do I go back to the previous modal? What if I want to cancel out of this workflow and go back to the environments list? Where exactly am I right now?

The solutions

Based on these research insights, we went through an iterative design-test process to arrive at a solution that broadly covered these areas:

Theme 1: Improving discoverability
  • Provide a robust filtering capability on the environments list, so users can effectively scope down their desired environments based on different filter combinations.
  • Provide richer metadata on the environments list to help users make more informed decisions as they review multiple environments at once.
  • Provide a way for users to quickly review the high-level configuration of an environment before they decide to drill down further for additional analysis.
Theme 2: Creating new environments (basic vs. advanced)
  • Provide a clear multi-step workflow for creating environments, all nested in a drawer panel, and give users the ability to review their inputs before hitting the Create button.
  • Provide a way for users to configure an environment template by adding / editing requirements.txt or Conda files. Give them the ability to create this configuration either from scratch or on top of an existing environment. (This capability did not exist before, even though it covers the majority of environment creation use cases.)
  • Provide a way for users to configure an environment template by adding / editing Dockerfiles. However, it shouldn't be a visually prominent path, because this isn't a common use case. Furthermore, provide syntax-suggestion capabilities within the Dockerfile code editor to help users fully utilize this powerful yet complex feature.

Design-test-iterate

Below is a set of hi-fi design specifications / mockups I produced to help bring these ideas to life.

More content coming soon.

Outcome