Search 2.0: Language Model Providers Are Prioritising Distribution to Dominate Next-Gen Search
A Look at How Accessibility Shapes the Future of Search, Data Discovery & Synthesis
Something I have noticed is that Language Model providers have shifted their focus from API development and developer tools to end-user interfaces…
Why?
This is an effort to achieve distribution and gain critical mass in the race to become the new Google, the preferred new search tool…
Is Search Broken?
In its current form, yes. Users no longer want to search using 3 to 5 words; they want to use 20+ words to establish context, a habit they have formed through using ChatGPT.
With old search, users are presented with links that need to be opened in separate browser tabs and curated. Many of the links are sponsored or rely on SEO strategies, and are not highly relevant.
Users then need to curate and synthesise the data from various sources.
At the recent Gartner IT Symposium in Barcelona, a Google executive stated that search phrases have grown from 3 to 5 words to more than 20 words.
Users are clearly expressing a desire for more verbose input.
New Search
New search jumps the queue past links and the need to sift through websites, collect data and synthesise it.
The results are contextual and curated, and various sources and points of view are taken into consideration.
In the image above I lay out the elements included in the new UIs: code, images, video, model orchestration under the hood and more.
The evolution of search technology, which I refer to as New Search, represents a paradigm shift in how users interact with information retrieval systems.
Unlike traditional search engines that rely heavily on concise keyword queries and return a list of links — often muddled with sponsored content or SEO-optimised pages — new search leverages advances in Language Models and AI-driven contextual analysis to handle verbose, complex inputs exceeding 20 words.
New search delivers a streamlined experience by directly providing curated, contextually relevant results that integrate diverse data types — text, images, videos and even code snippets — into a cohesive response.
At its core, new search employs sophisticated model orchestration, combining large language models, real-time web indexing and multimedia analysis to anticipate user intent and present synthesised insights.
For instance, a query like "What are the environmental impacts of deforestation in the Amazon, including statistics and expert opinions from the last five years?" would no longer yield just a list of links but a comprehensive summary with sourced data, visuals and perspectives, all tailored to the user’s expressed context.
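To make the orchestration idea a little more concrete, here is a minimal Python sketch of such a pipeline, assuming three hypothetical stages: intent parsing, retrieval across text and media indexes, and an LLM-style synthesis step. The names parse_intent, retrieve, synthesise_answer and SearchResult are placeholders of my own, not any vendor's API.

```python
# A minimal sketch of the model-orchestration idea behind new search.
# All component names (parse_intent, retrieve, synthesise_answer, SearchResult)
# are hypothetical placeholders, not a specific product's implementation.

from dataclasses import dataclass, field


@dataclass
class SearchResult:
    """A synthesised answer rather than a list of links."""
    summary: str
    sources: list[str] = field(default_factory=list)
    media: list[str] = field(default_factory=list)  # image/video references


def parse_intent(query: str) -> dict:
    """Extract the topic and the kinds of evidence a verbose query asks for."""
    return {
        "topic": query,
        "wants_statistics": "statistic" in query.lower(),
        "wants_expert_opinions": "expert" in query.lower(),
    }


def retrieve(intent: dict) -> list[dict]:
    """Stand-in for real-time web indexing and multimedia retrieval."""
    # In a real system this would fan out to web, image and video indexes.
    return [
        {"url": "https://example.org/report-2024", "text": "...", "type": "text"},
        {"url": "https://example.org/chart.png", "text": "", "type": "image"},
    ]


def synthesise_answer(intent: dict, documents: list[dict]) -> SearchResult:
    """Stand-in for the LLM step that fuses sources into one curated response."""
    summary = f"Synthesised answer about: {intent['topic']}"
    return SearchResult(
        summary=summary,
        sources=[d["url"] for d in documents if d["type"] == "text"],
        media=[d["url"] for d in documents if d["type"] == "image"],
    )


def new_search(query: str) -> SearchResult:
    """Orchestrate intent parsing, retrieval and synthesis end to end."""
    intent = parse_intent(query)
    documents = retrieve(intent)
    return synthesise_answer(intent, documents)


if __name__ == "__main__":
    result = new_search(
        "What are the environmental impacts of deforestation in the Amazon, "
        "including statistics and expert opinions from the last five years?"
    )
    print(result.summary)
    print(result.sources)
```

The point is the shape of the flow: one verbose query in, one curated, multi-source answer out, rather than a page of links.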
Reasoning
Language Model reasoning significantly enhances both insights and explainability, making complex information more accessible and actionable for users.
By interpreting a query’s intent — say, "Why did renewable energy adoption spike in Europe last year?" — reasoning allows the AI to go beyond surface-level data, connecting economic trends, policy changes and public sentiment from diverse sources.
This results in deeper insights, such as identifying a key EU subsidy program as a driver, rather than just listing adoption rates.
Unlike traditional keyword matching, reasoning evaluates the relevance and credibility of information, filtering out noise to highlight what truly matters.
It also synthesises findings into a narrative, offering users a clear why behind the what.
For explainability, reasoning provides transparency by tracing how conclusions are drawn — explaining, for instance, that statistics from a 2024 report were prioritised due to their recency and authority. This builds trust, as users can see the logical steps from query to answer.
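As a rough illustration of what such a trace could look like, the sketch below scores sources on recency and authority and emits a human-readable explanation of the ranking. The Source fields, the 0.6/0.4 weighting and the example data are all assumptions made for the sake of the example.

```python
# A minimal sketch of how a reasoning trace could accompany an answer,
# so users can see why particular sources were prioritised.
# Fields, scoring weights and example data are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Source:
    title: str
    year: int
    authority: float  # 0.0-1.0, e.g. derived from publisher reputation


def score_source(source: Source, current_year: int = 2025) -> float:
    """Weight recency and authority; the 0.6/0.4 split is an arbitrary choice."""
    recency = max(0.0, 1.0 - (current_year - source.year) / 10)
    return 0.6 * recency + 0.4 * source.authority


def explain_ranking(sources: list[Source]) -> list[str]:
    """Produce a human-readable trace of why each source was ranked where it was."""
    ranked = sorted(sources, key=score_source, reverse=True)
    return [
        f"{i + 1}. '{s.title}' ({s.year}) scored {score_source(s):.2f} "
        f"on recency and authority"
        for i, s in enumerate(ranked)
    ]


if __name__ == "__main__":
    trace = explain_ranking([
        Source("EU renewable subsidy report", 2024, 0.9),
        Source("Industry blog post", 2021, 0.4),
    ])
    print("\n".join(trace))
```

A real system would derive authority from signals such as publisher reputation and citations, but the principle is the same: the ranking logic is made visible to the user rather than hidden.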
In a practical setting, such as a business analyst assessing market shifts, this means faster, more reliable decisions without wading through irrelevant data. Reasoning thus transforms raw information into understandable, justified insights, bridging the gap between data and human understanding.
Ultimately, it empowers users with not just answers, but the confidence to act on them.
Finally
The opportunity for new search in enterprises lies in addressing the limitations of general-purpose UIs like Grok 3 and ChatGPT, which often fall short of stringent risk and compliance requirements.
Off-the-shelf solutions may not adequately handle sensitive data or adhere to industry-specific regulations, creating a gap that bespoke vertical enterprise solutions can fill.
Bespoke new search platforms can be designed to integrate seamlessly with an organisation’s internal knowledge, ensuring that knowledge workers access contextually relevant, secure and compliant information for agentic workflows.
This customisation enables enterprises to manage proprietary data, enforce access controls and audit search processes, meeting the unique needs of sectors like finance, healthcare, or legal services.
As a result, specialised tools empower employees with efficient, trustworthy insights while aligning with corporate governance standards, driving the creation of a market for enterprise-grade search innovation.
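As a very rough sketch of what those guardrails could look like in practice, the snippet below filters retrieval by the user's roles and writes every query to an audit log. The document metadata, role names and audit format are illustrative assumptions, not a reference design.

```python
# A minimal sketch of enterprise-grade guardrails around new search:
# permission-filtered retrieval plus an audit trail of every query.
# Roles, document metadata and the audit format are illustrative assumptions.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("search.audit")

DOCUMENTS = [
    {"id": "fin-001", "text": "Q3 revenue forecast...", "allowed_roles": {"finance"}},
    {"id": "hr-002", "text": "Compensation bands...", "allowed_roles": {"hr"}},
]


def enterprise_search(query: str, user: str, roles: set[str]) -> list[dict]:
    """Return only documents the user's roles permit, and audit the request."""
    results = [d for d in DOCUMENTS if d["allowed_roles"] & roles]
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "query": query,
        "returned_ids": [d["id"] for d in results],
    }))
    return results


if __name__ == "__main__":
    hits = enterprise_search("Q3 forecast", user="analyst-42", roles={"finance"})
    print([d["id"] for d in hits])
```

In an enterprise deployment this layer sits underneath the language model, so synthesis only ever sees documents the user is already entitled to read.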