takes aim at Google and Microsoft with multimodal chat search

Founder Richard Socher knows that his company has always been a David going after the Goliath in search, Google, and to a lesser extent Microsoft. He likes to point out that his company built search based on generative AI in December, several months before the other giant search players made their announcements.

Today, the company is announcing it’s taking that head start and building on it with multimodal search. That means it can add elements beyond text to help answer a question more precisely. Say you ask a question such as “Which company has the most CRM market share?” — you will get the answer “Salesforce,” and if you follow up with “What is Salesforce’s stock price?”, you will get a stock chart instead of a text-based answer.

Socher says that’s a big leap forward for chat-based search, and puts his company ahead of his much larger competitors. “Instead of making up a bunch of numbers, which every other language model would do, we’ll just show you our stock app right there inside the conversation,” Socher told TechCrunch.

He believes that’s a much more effective way to answer that kind of question, and these different modalities can be applied to other questions depending on the context. “It’s a big step forward to get large language models to be multimodal in the sense of the different modalities being text, but also code, but also tables, and also graphs and images and interactive elements — and sometimes that is the best way to answer your question. I truly believe that this is a better way to represent the answer to this question than any text could be,” he said.