What Successful AI Products Do Right
The excitement surrounding AI products and their potential for profitability is palpable. Recent statistics show that two popular software applications, Jieming and CapCut, boast over 800 million users globally each month. Projections suggest that by 2024 their revenue will have more than tripled, potentially reaching almost 10 billion RMB. This raises an intriguing question: what is the secret behind their success?
At first glance, these applications may not seem directly related to AI. However, a closer look reveals that they leverage cutting-edge technologies, such as smart illumination and noise reduction features, which significantly enhance the user experience.
Now let’s take a step back and consider standalone AI applications. By September, Keling AI had already surpassed 1.5 million monthly users. Although we don’t have exact figures for Jieming, market analysts claim its market is worth roughly ten times that of Keling.
This trend points to a growing interest in AI tools that cater to various creative endeavors.
While scrolling through Douyin, I noticed an increased number of AI applications being integrated. It seems ByteDance is actively developing an AI-native version of Douyin, centered around the Jieming product. This development involves addressing user needs in the rapidly evolving landscape of creative content creation.
Yet, a conundrum arises: why do large corporations effectively commercialize AI products while smaller model-focused companies struggle to generate profit? To shed light on this question, we can identify four key insights. Together, they reframe how we should think about AI product monetization.
First, let’s distinguish between large language models (LLMs) and finished products. A simplified analogy likens LLMs to toolboxes filled with a range of tools: hammers, screwdrivers, wrenches, and more.
Conversely, AI products can be viewed as furniture designed for specific uses and functions. A chair, for example, is made for sitting, and a table for placing items. In essence, furniture resolves user problems directly.
This emphasizes a crucial difference: LLMs serve as API interfaces, while products are the items users directly engage with. LLMs that have yet to be incorporated into specific products remain mere capabilities. This distinction raises the question: why haven't LLMs turned directly into lucrative products?
The first observation is that although large models can perform various functions, product value emerges from the ability to address specific challenges. Jieming, for example, employs smart illumination and noise reduction technology, simplifying the video creation process for users and alleviating their frustrations.
By contrast, when simply handed a toolbox, users must work out for themselves how to use the tools. This self-reliance elongates the commercialization path and complicates the user experience.
The second insight involves the necessity of ecosystem and resource support for standalone models. What does that mean? Models can provide capabilities, but products must be embedded into users' actual needs; commercializing AI products requires ecosystem backing.
For instance, Jieming leverages Douyin effectively: users can import videos directly from Douyin into Jieming, creating a seamless creation-to-distribution pipeline. Meanwhile, Jieming integrates short-form video with live streaming, allowing creators to produce content efficiently and monetize it immediately; this ecosystem forms a closed loop.
In contrast, standalone models lack such ecosystems. Relying solely on model capabilities makes it challenging to create sustained commercial value, even when user contact is achieved.
The third insight centers on the clarity of commercialization paths. Even with the highest-quality tools, users are reluctant to invest without the tangible problem resolution that products offer. AI ventures seeking profit must decide from the outset how they will monetize.
Take Wenxin Yiyan 4.0, for example. It implemented a membership model at the outset, allowing paying users to access enhanced model reasoning capabilities. Similarly, Monica provides an omnifunctional tool that integrates AI assistants to meet varying customer needs.
The market is saturated with tools that often underperform individually, making it impractical for users to rely on a single model or expected functionality. Users appreciate flexibility; the ability to switch models when one underperforms is inherently valuable.
Furthermore, when many large model companies remain vague about their commercialization strategies, they risk continuing to dwell in the metaphorical "toolbox" phase, leaving users confused about what they can do or why they should pay.
To highlight a pivotal point: LLMs may not inherently grasp user needs as effectively as products designed around specific scenarios do. Take the example of smart noise reduction; while it encompasses complex model technologies, users care more about functionality than about understanding the underlying frameworks.
In observing these patterns, we can conclude that models alone do not serve as fully fledged products. They must be integrated into user-facing solutions to effectively capture market engagement.
Moving beyond that, if we wish to translate models into effective products, we must delineate the differences between the two. I recently encountered an intriguing scenario: I asked Doubao to analyze an Excel file, and it promptly interpreted the content. Conversely, when I tried using a model from GitHub, it could not process the Excel file.
Why does this discrepancy exist? Upon investigation, I discovered that Doubao functions as a robust model product bolstered by an array of multimodal models. When Doubao accesses the Excel file, it first converts it into an XML format the model can understand before performing any inference.
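As a rough sketch of this kind of preprocessing layer (the element names and structure here are illustrative assumptions, not Doubao's actual format), a product might serialize spreadsheet rows into XML before the content ever reaches the model:

```python
import xml.etree.ElementTree as ET

def rows_to_xml(rows):
    """Serialize spreadsheet rows into XML a text model can read.

    `rows` is a list of lists; the first row is treated as the header.
    A minimal sketch of the product-side conversion step, not any
    vendor's real pipeline.
    """
    header, *body = rows
    root = ET.Element("sheet")
    for r in body:
        row_el = ET.SubElement(root, "row")
        for name, value in zip(header, r):
            cell = ET.SubElement(row_el, "cell", name=str(name))
            cell.text = str(value)
    return ET.tostring(root, encoding="unicode")

# The serialized table is then embedded in the prompt sent to the model.
xml_text = rows_to_xml([["month", "revenue"], ["Jan", 120], ["Feb", 150]])
prompt = f"Summarize this table:\n{xml_text}"
```

The point is that the conversion happens in product code; the model only ever sees text it already knows how to parse.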
This underscores an essential principle: while models provide capabilities, products require a full engineering process that translates those capabilities into visible, usable functionality for consumers. One question remains: why can't these large models directly read certain content without transformation?
First, the process may involve several back-and-forth interactions, incurring substantial costs that API providers cannot sustain. Multiturn conversations illustrate this point. For example, when a user asks about a credit card payment deadline, the model must first identify the individual's ID before querying the banking database; if this is not set up correctly, the model cannot connect the data.
Herein lies the crux: product engineers must build external logic around each interaction step, ensuring user simplicity while retaining depth. With these logical controls in place, users can interact naturally without needing to understand the model's intricacies.
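A minimal sketch of such external logic, using the credit-card example above (every function and field name here is hypothetical, chosen only to illustrate the orchestration pattern):

```python
# Hypothetical orchestration layer: product code, not the model, decides
# what happens at each step of the conversation.

def call_model(prompt):
    # Placeholder for a real LLM API call.
    return "[model reply grounded in: " + prompt.splitlines()[1] + "]"

def answer_billing_question(user_question, session, bank_db):
    """Resolve identity first, then fetch data, then prompt the model."""
    user_id = session.get("user_id")
    if user_id is None:
        # The product, not the model, asks for the missing identifier.
        return "Please verify your identity first."
    record = bank_db.get(user_id)
    if record is None:
        return "No card found for this account."
    # Only now does the model see a fully grounded prompt.
    prompt = (
        f"User asks: {user_question}\n"
        f"Payment due date on file: {record['due_date']}\n"
        "Answer concisely."
    )
    return call_model(prompt)
```

The model never queries the database itself; each lookup is a deliberate product-engineering decision wrapped around the model call.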
Moreover, models can fail when asked to analyze intricate datasets. Traditional models might struggle to produce meaningful output when tasked with generating insights from lengthy PDF documents with convoluted content.
If the product doesn’t slice PDFs into manageable segments to extract the critical parts, users may receive voluminous but ultimately useless information.
So, it is evident that inherent model instability requires structured engineering intervention to mitigate errors; this points to the divide between LLM technology and practical productization.
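The slicing step described above can be sketched as a simple overlapping chunker. This is character-based for brevity; production systems typically split on page or section boundaries instead:

```python
def chunk_text(text, max_chars=1000, overlap=100):
    """Slice a long document into overlapping segments so each piece
    fits within the model's context window.

    A minimal sketch: overlap preserves continuity across segment
    boundaries so no sentence is silently cut in half for the model.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # back up so adjacent chunks overlap
    return chunks
```

Each chunk would then be summarized or queried separately, with the partial results merged by product code rather than by the model.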
Incorporating models into products demands a rigorous approach from product managers. The quintessential role of a product manager thus becomes one of interpreting the essential intersection of models and products.
As previously stated, LLMs function as remarkable toolboxes. Engineers and product managers must operationalize these tools, akin to equipping a brain with sensory perception and motor skills so it can actually support users. This underscores how critical it is to evaluate product effectiveness, particularly in AI.
An AI product may possess astounding intelligence, but if it is overly complex to operate, users may shy away. Conversely, excessive simplicity without core competencies fails to satisfy user expectations.
This synergy is crucial, as effective AI products necessitate strong models, combined with skilled engineers and adept product managers to realize collective success.
In a practical sense, addressing the question of why users should pay for AI products leads to three primary advantages: first, they enhance efficiency to complete tasks more rapidly; second, they simplify operations for easy navigation; and third, they cater to personalized needs, providing tailored solutions for diverse scenarios.
As evidence of this, I recall a conversation with a friend who praised the multifaceted capabilities of the Chengpian application, particularly in generating flowcharts, line charts, and pie charts, qualities compelling enough for him to subscribe to a membership.
When asked why he wouldn’t use similar features in other tools like Kimi or Doubao, he simply replied that he was unaware they existed. This highlights a significant issue: despite having comparable capabilities, brands lacking targeted, clear messaging often fail to establish user engagement.
To strike an effective balance between model and product, we can lean on the model's relative strengths for routine, repetitive, non-creative tasks, since large language models still struggle with complex creative work such as critical thinking and brainstorming.
For instance, while summarization may seem inherently creative, it doesn’t necessarily require an LLM. Conversely, mundane tasks such as search queries can effectively be delegated to LLMs, pinpointing the responsibilities best suited to models.
Identifying suitable roles for models means acknowledging the categories they perform best in, primarily search and categorization, repetitive ongoing tasks, and interactive functionality like chat assistants or smart customer-service interfaces.
These tasks share a commonality: they minimize user involvement and enhance efficiency without complex cognitive demands. However, different user groups carry diverse expectations for such tasks. Reflecting on consumer scenarios, users of Kimi Chat and Baidu exhibit completely different usage patterns, while MiTa AI and Zhihu AI searchers represent yet another distinct demographic.
For example, MiTa AI revolves around legal documentation searches. Its users are primarily concerned with obtaining specific legal references. In contrast, Zhihu AI users prioritize source credibility, diverse perspectives, and nuanced thinking; they seek not just results but also thoroughness in how essential sources are surfaced during the search.
Regrettably, many designers of payment models overlook the distinct requirements of different users and devise a generalized billing model, such as a blanket membership. In reality, there is no longer a universal billing format.
One-size-fits-all models present clear challenges; augmenting commercial strategies with features such as adequate source disclosure could open up alternative approaches.
Now shifting focus to enterprise users, their primary goal lies in seamlessly integrating models into their workflows for enhanced efficiency and streamlined business operations. Notably, I categorize enterprise service teams into two camps: traditional service providers and emergent AI companies.
The former aim to augment pre-existing capabilities by infusing LLM technologies, often finding that their core enterprise-service goals remain unchanged as they adapt to the new tools. They perceive LLMs as auxiliary instruments for gaining a competitive edge over traditional rivals.
Emergent AI firms focus primarily on leveraging LLM capacities to create innovative enterprise solutions. However, relying solely on one-time transactions risks engendering customer skepticism. Sustainable value remains the heart of successful engagements; ultimately, the market will rationalize and pay for true utility.
Thus, in the arena of enterprise services, LLMs must transcend mere augmentation, becoming integrated components that enhance traditional service methods. Clients aren’t merely purchasing a model; they are investing in comprehensive solutions capable of improving operational efficiency.
For instance, envision a scenario where SCRM software is augmented by AI, or SaaS offerings benefit from AI integration. Already seen in practice at Youzan, these solutions let customers specify requests via conversational (LUI) or command-based (CUI) interfaces, presenting tailored functionality or execution capabilities in real time.
AI can significantly aid enterprises in navigating complex tasks such as inventory management, customer relationship optimization, and trend forecasting; this is the true essence of merging enterprise applications with LLMs and rooting them into standard operating procedures. Ultimately, enterprise clients invest not in the model per se but in solutions that demonstrably enhance business value.
In conclusion, synthesizing model capabilities with products unlocks real value.