This is part of a series on conversational intelligence: where the intelligence is today, and how to use it well in business.

Ten years ago, sentiment libraries did not exist for most languages in the form this work needed. We started with Spanish, then sourced six additional languages through a university research lab because they were not available off the shelf. What can be pulled into a project in an afternoon today often had to be assembled by hand back then.

What sentiment analysis was, then

In its earliest broad usage, sentiment analysis was a labeling exercise. A piece of text was scored as positive, negative, or neutral. The output was simple, the input was simple, and the use case was simple. Companies wanted to know if customers were happy or unhappy in their reviews, social posts, and surveys.

That work made sentiment analysis a category. It put an emotional signal on dashboards. For a while, that was the whole conversation.

The work I was doing operated at a different layer. Not text-only labeling on finished documents, but emotion extraction from spoken language while a conversation was happening, and carrying that emotion through translation. That required signals beyond words. Tone. Pacing. Cadence. The half-second pause before a difficult sentence. The slight rise that turns a statement into a question without changing a single word.

To do any of that across languages, you needed sentiment libraries that understood how those signals worked in each language. And ten years ago, those libraries did not exist for most of the languages enterprise clients needed.

The years between

Work on the platform paused. By the time I returned to developing it, the field had moved on. spaCy and other open-source libraries had matured. Multilingual sentiment libraries that once required access to research labs had become a commodity. The infrastructure that needed to be assembled by hand was now downloadable. The boundary of what an individual developer could build had shifted dramatically.

That shift is invisible to most people using these tools today. A developer in 2026, building a sentiment-aware feature, pulls a model into their project and gets reasonable results in minutes. Ten years ago, that same task required research relationships and a year of work. Both are true. Both happened. The path between them is what most public conversations about AI skip over.

What sentiment analysis is now

By 2026, sentiment analysis is no longer a single thing. It is a stack of capabilities that share a name.

At the surface layer, polarity scoring still exists. Positive, negative, neutral, applied to text. Reliable enough now to be embedded in customer relationship platforms, marketing tools, and review aggregation systems without much thought. Most business owners are using it whether they know it or not.
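To make that surface layer concrete: the sketch below scores two invented snippets with the open-source Hugging Face transformers pipeline, one of several off-the-shelf options a developer might pull in today. It is illustrative, not a recommendation; the default model and its labels are whatever the library ships with.

```python
# A minimal polarity-scoring sketch using the Hugging Face transformers
# pipeline API. Example texts are invented; the model is the library default.
from transformers import pipeline

# Downloads a default sentiment model on first run.
classifier = pipeline("sentiment-analysis")

reviews = [
    "The checkout process was painless and fast.",
    "I waited forty minutes and nobody answered.",
]

for text in reviews:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```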

Beneath that, multidimensional emotional modeling has become practical. Not just polarity, but specific emotional states. Frustration, satisfaction, confusion, urgency, hesitation, and confidence. Different categories. Different signals. Different operational implications.
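One way to approximate that kind of labeling with off-the-shelf tools is zero-shot classification over the states a business actually cares about. The sketch below does that with the same transformers library. It is only a sketch of the idea, not the approach behind the platform described earlier; the category list and the utterance are my own examples.

```python
# A sketch of multidimensional emotional labeling via zero-shot
# classification. The category names mirror the states discussed above.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")

states = ["frustration", "satisfaction", "confusion",
          "urgency", "hesitation", "confidence"]

utterance = ("I've asked about this invoice three times "
             "and I still don't have an answer.")

# multi_label=True scores each state independently instead of
# forcing the scores to sum to one.
result = classifier(utterance, candidate_labels=states, multi_label=True)

for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:<12} {score:.2f}")
```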

Beneath that, voice-based emotion analysis works. Systems can hear tone, pacing, register, and prosody, and can make useful inferences about emotional state from how words are spoken rather than only from the words themselves. Accuracy is not perfect. It is high enough to be useful in real workflows.
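What a voice-based system actually listens to starts with acoustic features. The sketch below pulls two of them, a pitch contour and an energy contour, from a recording using the open-source librosa library (my choice here, not a requirement; the file path is a placeholder). It deliberately stops at perception. Mapping those contours to an emotional state is the harder part, and it is where systems differ.

```python
# A sketch of extracting basic prosodic signals (pitch and energy contours)
# from a recorded utterance with librosa. This is only the perception step;
# mapping these contours to emotional states is a separate modeling problem.
import librosa
import numpy as np

y, sr = librosa.load("call_excerpt.wav", sr=None)  # placeholder path

# Fundamental frequency (pitch) contour; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Energy (loudness) contour.
rms = librosa.feature.rms(y=y)[0]

print("median pitch (Hz):", np.nanmedian(f0))
print("pitch variability:", np.nanstd(f0))
print("mean energy:", rms.mean())
```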

And beneath that, conversational systems are beginning to model emotional trajectories. Not just what someone is feeling at a single point, but how their state is changing across a conversation. A customer who started frustrated and is now de-escalating is in a different operational situation than a customer who started calm and is now becoming agitated. Surface scoring cannot tell those apart. Trajectory modeling can.
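Trajectory modeling does not have to be exotic. At its simplest, it means keeping per-turn scores and looking at the direction of change instead of a single reading. The toy sketch below assumes the per-turn negativity scores already exist, from any of the layers above; the numbers and thresholds are invented.

```python
# A toy sketch of emotional trajectory: compare the average of recent turns
# with the average of earlier turns to see whether a conversation is
# de-escalating or heating up. Scores and thresholds are invented.
def trajectory(scores, window=3):
    """scores: per-turn negativity scores in [0, 1], oldest first."""
    if len(scores) < 2 * window:
        return "not enough turns"
    earlier = sum(scores[:window]) / window
    recent = sum(scores[-window:]) / window
    delta = recent - earlier
    if delta <= -0.15:
        return "de-escalating"
    if delta >= 0.15:
        return "escalating"
    return "steady"

# Started frustrated, now calming down.
print(trajectory([0.9, 0.8, 0.7, 0.4, 0.3, 0.2]))  # de-escalating
# Started calm, now becoming agitated.
print(trajectory([0.1, 0.2, 0.2, 0.5, 0.7, 0.8]))  # escalating
```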

A system can hear signals in speech. Emotion mapping is the process of making sense of those signals.

What is often debated publicly is not whether the signals exist, but how far interpretation should go. That is a different question.

What this means for business today

The practical implication for a business owner is that the phrase “sentiment analysis” no longer points to a single capability. When a vendor tells you their tool includes sentiment analysis, the next question is what kind.

If they mean polarity scoring on text, that is useful for some things. Tracking review trends. Flagging unhappy customers in survey data. Sorting incoming email by tone.

If they mean voice-based emotion detection on customer calls, that is a different capability with different operational implications. It can route calls. It can flag escalations. It can support agents in real time.

If they mean trajectory modeling across conversations, that is a third capability, requiring different infrastructure and different process design around it. Few tools do this well, and the ones that do require careful integration.

None of these is interchangeable. A business that buys polarity scoring expecting trajectory modeling will be disappointed. A business that buys voice-based detection without designing the workflow around it will get signals it cannot act on.

The deeper question is not which sentiment analysis to buy. It is what the business will do with the signal once the system produces it.

Where the risk sits now

Ten years ago, the risk was technical. The libraries did not exist. The compute was heavy. The data was hard to source. Building anything required years of foundational work.

In 2026, the technical risk is dramatically lower. The capability is real. What has not kept pace is the process design around the capability.

A sophisticated sentiment system in a business with an unclear customer service workflow produces signals that nobody acts on. Voice-based emotion detection in a call center without a defined escalation path results in alerts that are ignored. Trajectory modeling in a sales pipeline without a clear handoff between automated and human attention produces dashboards that nobody reads.

The technology is no longer the limiting factor. The limiting factor is whether the business has designed something worthy of the signal.

What a system preserves matters more than what it processes.

Sentiment analysis became deeper because the underlying technology learned to perceive more. What the technology cannot do on its own is decide what any of it should mean within a specific business. That decision still belongs to the people who run the business.

It always will.

 

The series on conversational intelligence

  1. Conversational Intelligence: How It Started
  2. Why Friction Was the Real Problem
  3. When Words Were Not Enough
  4. What Sentiment Analysis Became (you are here)
  5. What AI Can Perceive
  6. Where Emotion-Aware AI Stops
  7. Cloud Before the Edge
  8. How to Add a Second Language
  9. Voice AI for Your Business
  10. Monitoring Versus Understanding
  11. What Comes Next

 

About Mary Lee Weir

Mary Lee Weir has been building websites for 27 years and digital products in 7 countries. She holds U.S. Patent 11,587,561 B2 for a communication system and method of extracting emotion data during translations, and continues research and development in conversational intelligence. She runs Vero Web Consulting in Vero Beach, Florida, and founded Belize Web and Information Systems at home in Belize to serve Belizean businesses. She writes about AI, search, and the practical realities of building for the web at maryleeweir.com.

 

If any of this is useful

Book a 60-minute strategy call ($250) to work through how any of this applies to your specific business. Or start with a free 15-minute intro to see whether a longer conversation makes sense.