Ikigai — Filling Enterprise AI Gaps Unaddressed by LLMs

Date
August 24, 2023
Premji Invest & Ikigai Partnership

Today, we are excited to announce a new partnership with founders Vinayak Ramesh and Devavrat Shah, along with the rest of the Ikigai team. As investors, we are humbled when we meet entrepreneurs who are able to solve a highly technical and challenging problem with a brilliantly novel approach. And that was exactly our first impression when we saw Vinayak and Devavrat introduce Ikigai: pure awe. Ikigai marries the sleek design of a tool like Figma with the vision of a platform like Databricks. The ultimate vision is to enable a low-code approach, built on top of graphical models, for predictions, data reconciliation, and data optimization with tabular enterprise data — i.e., generative AI for tabular data.

Ikigai fits squarely with our mission — finding and partnering with exceptional entrepreneurs building n-of-1 businesses to tackle difficult problems. We take a long-duration approach to each investment, whether that be Coupa, Anaplan, MuleSoft, Zuora, and Looker on the private growth side — each of which we held through acquisition and/or anchored the IPO and followed on in the public markets — or companies like Ikigai on the venture side. We've spent a lot of time in data & AI, and while we're extremely excited about the prospect of LLMs, we feel Ikigai's differentiated technology and approach will enable them to solve a specific set of problems much better.

Ikigai Overview

With Ikigai, customers will be able to build end-to-end applications utilizing machine learning on sparse enterprise data without requiring a team of database engineers, machine learning experts, and special purpose cloud and engineering resources. The founders’ goal is to enable companies to create and deploy graphical models to power apps within their organizations. Customers can train models on-demand on their enterprise data, creating models that assist in forecasting, scenario planning and analysis.

Ikigai hopes to uncover a slew of machine learning use cases in enterprises that previously would have required multiple database and data science practitioners hacking datasets together. If you survey data science and machine learning teams in enterprises, many will tell you that one of the most tedious parts of their roles is data cleansing. For unstructured data, this can include labeling individual pieces of data. For structured data, it involves stitching together disparate datasets sitting across multiple databases using complex SQL queries and database joins.

Ikigai is taking a path less traveled by utilizing large graphical models — a field of machine learning that marries probability theory and graph theory — to ingest datasets, transform data, make predictions, and recommend decisions. Most enterprise data is tabular, sparse, and typically time-stamped, and large graphical models are precisely suited to this setting. In modern terms, they're generative AI for tabular data. Taking a step back, it's worth setting the stage for how Ikigai came into existence.

Large Graphical Models (LGMs) Explained

Graph machine learning involves the application of machine learning to graphs, specifically for predictive and prescriptive tasks. Take a simple spreadsheet with rows and columns of data, for instance. Imagine you have 10 columns that represent 10 different variables. The rows, then, are samples from the probability distribution over these random variables. A large graphical model is nothing more than a "probabilistic" view of this dataset. Neural networks, by contrast, can be seen as deterministic special cases of graphical models, which means that LGMs, utilized well, should in theory be able to represent a much larger set of potential outcomes.
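The "probabilistic view" described above can be sketched in a few lines of Python. Everything below — the column names, the rows, and the query — is invented for illustration; the point is simply that rows are treated as samples from a joint distribution over the columns, from which conditional questions can be answered by counting:

```python
from collections import Counter

# Hypothetical toy dataset: each row is a sample over three variables
# (region, product, demand) -- a tiny stand-in for enterprise tabular data.
rows = [
    ("east", "widget", "high"),
    ("east", "widget", "high"),
    ("east", "gadget", "low"),
    ("west", "widget", "low"),
    ("west", "gadget", "low"),
    ("west", "gadget", "low"),
]

# The "probabilistic view": rows are samples from a joint distribution
# P(region, product, demand), estimated here by simple counting.
counts = Counter(rows)
total = sum(counts.values())
joint = {row: n / total for row, n in counts.items()}

def conditional(demand, region):
    """Estimate P(demand | region), marginalizing over product."""
    num = sum(p for (r, _, d), p in joint.items() if r == region and d == demand)
    den = sum(p for (r, _, _), p in joint.items() if r == region)
    return num / den

print(conditional("high", "east"))  # 2 of the 3 east rows have high demand
```

A real graphical model factorizes this joint distribution along a graph of dependencies rather than counting full rows, which is what makes it tractable for many variables and sparse data.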

While machine learning has a number of branches — including deep learning, clustering (unsupervised), regression (supervised), reinforcement learning, and more — deep learning has risen to prominence in recent years while graph-based machine learning has declined in popularity. The exciting innovations we're seeing around multi-head attention and transformer-based architectures all derive from innovations in deep learning.

LGMs vs. LLMs

One might question why Ikigai's graphical models are superior to, say, the large language models (LLMs) that have gained currency in recent years. LLMs work well for text and other unstructured data, but are not well suited to sparse, tabular enterprise data. There are limits to the effectiveness of "fine-tuning" on enterprise data, because the vast majority of what an LLM learns stems from pre-training and supervised fine-tuning. The analogy: you've spent 20 years training as a marathon runner, and then you're asked to become a professional tennis player in two months. LLMs are also very expensive to operate, requiring vast amounts of storage compared to their graphical-model equivalents, and far more processing time.

So what happened to large graphical models (LGMs) over the last decade? They have been around, but certainly less popular than deep learning architectures, for a few reasons. First, large graphical models work with highly specialized structured datasets, and these forms of tabular data typically do not have prescribed model structures. In other words, an LGM needs to develop an intuitive understanding of the relationships between the variables that underlie a prediction model, and these relationships differ significantly from organization to organization.

Second, LGMs historically had computationally limiting architectures, making it difficult to "train" the model and pass information along efficiently. Unlike neural networks, which have been empowered by GPUs, graphical models fell out of fashion because of how expensive they are to compute. LLMs capture largely "linear" structure through the very high-dimensional encodings enabled by highly parameterized models. Large graphical models can go beyond this "linear" structure and capture relationships across multiple dimensions — which is also what made them extremely difficult to train historically.

Ikigai Breakthrough & Use Cases

Ikigai had a major breakthrough, which is now patented. They developed a novel computational architecture using a PubSub-like methodology. When data is represented in a graph-like format, it naturally lends itself to a message-passing architecture. A computationally scalable way to implement such an architecture is with classical PubSub infrastructure — the same pattern used by modern data buses such as Apache Kafka. Ikigai aims to bring the compute to the data (i.e., enabling each node to pass information along to its child nodes as needed) rather than bringing the data to the compute (i.e., training a large language model with a pre-defined set of nodes and attention heads). As Vinayak found in his thesis at MIT, this enables computation at more than 13x the speed on a laptop compared to Spark running on 68 machines.
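The compute-to-data idea can be illustrated with a toy message-passing sketch (not Ikigai's implementation). Each node holds local data and "publishes" its result to its children once all of its parents' messages have arrived, loosely mimicking PubSub delivery; the graph, the node values, and the sum-combining rule are all hypothetical:

```python
from collections import defaultdict, deque

# Hypothetical DAG: each node holds local data and publishes a message
# to its children -- the compute moves to where the data lives.
edges = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
local = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 4.0}

# Count parents so each node fires only once its whole inbox has arrived,
# mimicking a pub/sub delivery model.
parents = defaultdict(int)
for node, children in edges.items():
    for child in children:
        parents[child] += 1

inbox = defaultdict(list)
ready = deque(n for n in edges if parents[n] == 0)
result = {}

while ready:
    node = ready.popleft()
    # Combine local data with everything received on this node's "topic".
    result[node] = local[node] + sum(inbox[node])
    for child in edges[node]:
        inbox[child].append(result[node])  # "publish" to the child
        parents[child] -= 1
        if parents[child] == 0:
            ready.append(child)

print(result["d"])  # d combines its own data with messages from b and c
```

In a real graphical model the messages would be probability distributions rather than sums, but the scheduling pattern — fire a node when its subscriptions are satisfied — is the same.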

Within the Ikigai platform, LGMs enable three major use cases: aiMatch, aiCast, and aiPlan. The goal of aiMatch is to stitch data together. By viewing data through the lens of an LGM, aiMatch attempts to learn the relationships between the columns of a dataset and then uses them to match rows. An Expert-in-the-Loop (EiTL) workflow lets end users provide minimal supervision to correct inaccuracies. aiCast enables effective forecasting, and aiPlan allows decision-makers to scenario plan. Through the same LGM lens, these tasks can be answered by treating the available data as partial observations.
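A rough sense of "answering tasks from partial observations" can be given with a toy imputation sketch (not how aiMatch actually works). A missing field is filled with the most common value among historical rows that agree on the observed fields; all record names and values here are invented:

```python
from collections import Counter

# Hypothetical historical records; field names and values are invented.
history = [
    {"sku": "A1", "region": "east", "carrier": "ups"},
    {"sku": "A1", "region": "east", "carrier": "ups"},
    {"sku": "A1", "region": "west", "carrier": "fedex"},
    {"sku": "B2", "region": "west", "carrier": "fedex"},
]

def impute(record, field):
    """Fill a missing field with the most common value among past rows
    that match the record's observed fields -- a crude stand-in for
    conditioning a model on partial observations."""
    observed = {k: v for k, v in record.items() if k != field and v is not None}
    votes = Counter(
        row[field] for row in history
        if all(row.get(k) == v for k, v in observed.items())
    )
    return votes.most_common(1)[0][0] if votes else None

print(impute({"sku": "A1", "region": "east", "carrier": None}, "carrier"))
# -> "ups"
```

An EiTL-style workflow would then surface low-confidence fills like this one to a human reviewer instead of accepting them blindly.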

Ikigai's mission is to bring machine learning to enterprises to solve real-world problems with the tabular data housed in their existing databases and data warehouses. It is early days in understanding the power of Ikigai's patented technology and the types of problems it will be able to help solve at scale. We're excited because we think the opportunity here is quite large, and we can't wait to see what lies ahead for the team!
