At Manifold, we often work with organizations that are using machine learning and statistics to make decisions and become data-driven for the first time. For this reason, having a simple process that we can follow together is key. In the course of our client work, we've refined a process that helps us deliver business value quickly and efficiently. In 2016, we coined a name for it: Lean AI.

WHY USE LEAN AI?

At its core, Lean AI is a set of mental models that help teams work together smoothly to deliver business value faster when implementing AI in their organization. It's not just about aligning expectations between us and our clients; it's about aligning expectations across all the different teams within our client as well: IT, DevOps, engineering, business, etc.

Our clients receive not only a set of deliverables and Manifold's expertise, but also a transparent and efficient process with clear expectations attached to each step. As we work alongside them, our client teams often adopt the Lean AI process and continue using it for future projects after our engagement ends. They trust the process to deliver impactful results and find it helps them stay organized.

WHAT IS LEAN AI?

Lean AI is a set of six steps that is carefully designed to address risk early on in the process and make a tangible business impact as efficiently as possible.

Of course, none of the individual steps is new. What's important about the process is the emphasis on performing each step in a carefully considered order. Many data scientists want to focus on the modeling step—which is the fun part where you get to do fancy math. But building and tuning models is only one piece of the puzzle. In fact, the steps before and after modeling are where much of the risk lies. We've learned to use the Lean AI process to tackle the big risks as early as possible.

Lean AI is not a perfectly linear process. We perform data modeling iteratively, and also use modeling to help us better understand your data and the engineering and ETL that have gotten us to a given point. In addition, user feedback after we've done some of the work often helps us clarify the business problem. Sometimes the process causes us to “short circuit” and go back to the beginning. And that's okay—that's part of what makes it the most efficient way to get to a polished end result that directly addresses the needs of your business.

Let's walk through each step.

UNDERSTAND

There are two things we need to understand when approaching an AI project:

  1. The business problem our client is hoping to address
  2. The data

We can reduce risk by pointing technology at the right business problem, and by ensuring that the quality of the data we're working with is high enough to solve that problem. During this step, we often run workshops to help our clients determine what counts as a “solvable problem” with AI on a reasonable time scale.

The output of this step is a set of AI specs, which include the problems we think we can solve with particular data sources—and, most importantly, the approximate investment of time and money required to create a v1 system to solve them.
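
To make the shape of this deliverable concrete, here is a minimal sketch of how an AI spec might be captured as a structured record. The fields and values are hypothetical illustrations, not a fixed Manifold template.

```python
from dataclasses import dataclass, field


@dataclass
class AISpec:
    """Hypothetical sketch of one AI spec produced in the Understand step."""
    business_problem: str                      # the decision or outcome the client cares about
    success_metric: str                        # how we will know a v1 system is working
    data_sources: list[str] = field(default_factory=list)
    known_data_risks: list[str] = field(default_factory=list)
    estimated_effort_weeks: int = 0            # rough time investment for a v1 system


spec = AISpec(
    business_problem="Reduce unplanned downtime on a key production line",
    success_metric="Flag most failures at least 24 hours in advance",
    data_sources=["sensor telemetry", "maintenance logs"],
    known_data_risks=["gaps in sensor history"],
    estimated_effort_weeks=8,
)
```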

ENGINEER

In the end, solid AI isn't magic—it's software development. We use industry best practices to ensure a robust and flexible result. At Manifold, one of the most important practices we've adopted is containerized data science using Docker. The resulting developer flow is cleaner and more collaborative, and it's ultimately far more productive.

The team takes time at the beginning of a project to think ahead to deployment and engineer a robust architecture that will allow us to accomplish our goals together. It's very difficult for one person to embody all the required skills across the spectrum, from building complex data pipelines to building complex ML models, so most of our delivery teams include both data engineers and ML engineers.
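
As a rough sketch of what engineering for deployment from the start can look like, the snippet below separates the data-pipeline contract from the model contract so that data engineers and ML engineers can work against stable interfaces. The class and function names are hypothetical illustrations, not part of any specific Manifold codebase.

```python
from typing import Protocol

import pandas as pd


class FeatureBuilder(Protocol):
    """Contract owned by data engineering: raw inputs in, model-ready features out."""
    def build(self, raw: pd.DataFrame) -> pd.DataFrame: ...


class Model(Protocol):
    """Contract owned by ML engineering: features in, predictions out."""
    def fit(self, features: pd.DataFrame, target: pd.Series) -> None: ...
    def predict(self, features: pd.DataFrame) -> pd.Series: ...


def train(raw: pd.DataFrame, target: pd.Series,
          features: FeatureBuilder, model: Model) -> Model:
    """Training and serving share the same FeatureBuilder, so the pipeline
    behaves the same way in the sandbox and in production."""
    model.fit(features.build(raw), target)
    return model
```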

MODEL

Once data engineering has set up a solid foundation, we move on to building data models. When we get to this stage, it's sometimes tempting for our clients to let their imaginations run wild. But we advocate taking Emmanuel Ameisen's advice—efficient problem-solving happens at the most straightforward, basic level. We have found that starting with baseline models consistently delivers better end products, especially for the user. To this end, we use explicit rules in our process to keep simplicity in mind.

We believe in really nailing a core set of features first; we can always add more later. A corollary to this viewpoint is that we should have shippable models at every stage—call it the “minimum viable model”—so that even in the simplest form we have something we can put into production. This approach is key to the Lean mentality.
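
Here is a minimal sketch of what a “minimum viable model” check can look like in practice, assuming a tabular classification problem and scikit-learn: we score a trivial baseline first, and a more sophisticated candidate only earns its place if it clearly beats that baseline.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data; in a real engagement this comes out of the Engineer step.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: the simplest shippable model -- predict the majority class.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Step 2: a simple candidate that must beat the baseline to replace it.
candidate = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("baseline accuracy: ", baseline.score(X_test, y_test))
print("candidate accuracy:", candidate.score(X_test, y_test))
```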

USER FEEDBACK

Shiny AI models are great, but at the end of the day, it's humans who have to interact with and make sense of the machine's recommendations. That's why it's important to get the results from the ML/AI system in front of users as fast as possible. To that end, we often run workshops with the final end-users, whether they're maintenance techs, product purchasers, marketing professionals, or health coaches.

There are two key threads in the user feedback step of our Lean AI process:

  1. Raw predictions are often insufficient on their own. It's necessary to build a user interface that allows post-processing and interpretation so users can apply the AI output to the tangible business problem (see the sketch after this list). This interface may be a search engine, app, or dashboard, but it needs to be human-friendly.
  2. We have to build trust in the model. Many people are naturally wary of “black box” machine learning models, and a certain amount of early skepticism is helpful. The model has to earn the user's trust by proving itself to be accurate and reliable.
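
As a small illustration of the first point, the sketch below turns raw model scores into a ranked, human-readable work queue. The asset names, threshold, and wording are hypothetical; the point is that the model's output gets translated into the language of the business problem.

```python
def to_work_queue(scores: dict[str, float], threshold: float = 0.7) -> list[str]:
    """Convert raw failure-risk scores (hypothetical) into a prioritized,
    plain-language list that a maintenance tech can act on."""
    flagged = {asset: p for asset, p in scores.items() if p >= threshold}
    ranked = sorted(flagged.items(), key=lambda kv: kv[1], reverse=True)
    return [f"{asset}: inspect within 24h (risk {p:.0%})" for asset, p in ranked]


raw_scores = {"pump-12": 0.91, "pump-07": 0.42, "conveyor-3": 0.78}
for line in to_work_queue(raw_scores):
    print(line)
```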

DEPLOY

The ultimate goal of the Lean AI process is getting a model into production. If the model is still in the sandbox rather than having real-world impact at the end of an engagement, then we have failed. The key to the Lean AI process is to think about deployment from Day 1, so we don't end up having to reinvent the wheel later on in the process. What does this mean in practice?

We work with the software and operations teams, as well as other key stakeholders, to understand the key integration points and the volume and velocity of data and predictions, so that we can set up clean interfaces for upstream and downstream integrations. We find that when we put in this work up front in the Engineer step, the Deploy step becomes simple and seamless.
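
In practice, one of those clean interfaces is often a small prediction service with an agreed-upon request/response contract. The sketch below assumes Flask and a hypothetical score_record function standing in for the real model; the endpoint name and payload shape are illustrative, not a prescribed design.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


def score_record(record: dict) -> float:
    # Placeholder for the real model call; a trained model would be loaded at startup.
    return 0.5


@app.route("/predict", methods=["POST"])
def predict():
    """Upstream systems POST feature records; downstream systems consume scores."""
    records = request.get_json()["records"]      # agreed input contract
    scores = [score_record(r) for r in records]  # one score per record
    return jsonify({"scores": scores})           # agreed output contract


if __name__ == "__main__":
    app.run(port=8080)
```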

VALIDATE

The final step of our Lean AI process is about observing how real end-users interact with the final product we've built. Sometimes we're able to go back and make changes at this stage, although if we've done a good job in the User Feedback step then there are very few surprises left at this point. Rather than actively soliciting feedback in this step, we try to get out of the way and act like an ethnographer, watching how the tool we've created is being used “in the wild.”

In our client work, one of our main goals in this step is to support the client's internal teams, whether through technical training, change management, or other assistance. To help them maintain the build, and to aid our own future learning, we often build instrumentation into the AI workflow interface to produce product metrics. We can then use data science and analytics on our own data science product to understand how users are employing it.
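
Here is a minimal sketch of the kind of instrumentation we mean, assuming structured logs as the collection mechanism: each user interaction with the AI workflow is recorded as an event that can later be analyzed with the same data science tooling. The event names and fields are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("product_metrics")
logging.basicConfig(level=logging.INFO)


def log_event(user_id: str, event: str, **details) -> None:
    """Emit one structured product-metrics event per user interaction."""
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "event": event,
        **details,
    }))


# Hypothetical usage inside the AI workflow interface:
log_event("tech-042", "prediction_viewed", asset="pump-12", risk=0.91)
log_event("tech-042", "recommendation_dismissed", asset="conveyor-3")
```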