
Tech Matters: Is this the year of voice?

By Leslie Meredith - Special to the Standard-Examiner | Feb 10, 2026


From the halls of the Consumer Electronics Show to the pages of The New York Times, analysts agree that 2026 may be the turning point for voice interaction as the default mode of communicating with our devices. Why? Conversation-style exchanges with chatbots like ChatGPT have become commonplace, and synthetic voices now sound more human, so people can convey their thoughts more easily while feeling like they’re talking to another person. It’s a natural progression. And iPhone designer Jony Ive may have the ultimate unveiling later this year to seal the deal.

Think of 2026 as the tipping point from typing to talking. On the software side, developers have fine-tuned computerized voices not only to sound more natural but also to respond in more human ways. New systems add emotional cues and better mirror the intonation of natural speech, strengthening the illusion that people are interacting with a human.

Further, AI itself has grown more sophisticated with the introduction of AI agents: bots that can be given a specific task such as organizing files, summarizing emails or generating presentations from various sources. Agents take multistep instructions, which are conveyed far more efficiently by voice than by typing. Imagine typing out your side of a conversation with a co-worker and it’s easy to see the appeal of simply talking instead.

Finally, throw in the pace of modern life. Computer tasks are no longer an isolated activity; we have become multitaskers to varying degrees. We want to check off items on our to-do lists while walking, waiting in line or in traffic, or doing household chores. With a wearable device or a phone on the counter, you can, as long as you’re hands-free and using your voice.

Strengthening the voice-first trend, hardware manufacturers are expanding the selection of wearables that ditch the screen in favor of audio interaction. Today’s audio-first wearables are mostly smart glasses and tiny AI pins designed to be “always listening” for quick commands. Ray-Ban Meta smart glasses, for example, combine open-ear speakers, microphones and Meta’s voice assistant so you can ask questions, take photos or translate speech without pulling out your phone; they typically run around $350 to $500 depending on frame and retailer.

On the more experimental side, screenless AI wearables like Bee and Omi clip to your clothing, record ambient audio all day and use AI to summarize meetings, conversations and tasks via a companion app. These are typically in the $50 to $200 range and sold direct to consumers online. Together, they illustrate how personal tech is shifting from screens to lightweight microphones and speakers. But that’s not all.

In what could be 2026’s most transformational tech launch, former Apple design chief Jony Ive and OpenAI CEO Sam Altman are preparing to unveil a screenless, voice-first device that Altman calls “the coolest piece of technology that the world will have ever seen.” Following OpenAI’s $6.5 billion acquisition of Ive’s io Products, the device, which is scheduled to debut in the second half of this year, represents a deliberate move away from screen-addicted smartphone culture toward what they’re calling “calm computing.” The pair’s stated ambition is to “completely reimagine what it means to use a computer” while making the new device accessible to the general population – read that as not exorbitantly expensive!

OpenAI has reorganized its entire engineering team to develop dramatically improved audio AI capabilities, including more natural-sounding speech and real-time conversational abilities, specifically to power this device. The exact form factor remains mysterious; rumored prototypes include an earbud-style wearable codenamed “Sweetpea” and a pen device called “Gumdrop.” Ive’s team is already working with suppliers targeting production of 100 million units. Ive and Altman are betting that 2026 is finally the year when voice AI is ready to replace screens as our primary computing interface.

If you are interested in improving your voice skills, there are several things you can do. On a Windows PC, turn on Voice Access by going to Settings, Accessibility, Speech, then switching Voice Access on. For quick dictation in any text field, press Windows key + H. On a Mac, go to System Settings, Keyboard, Dictation and turn Dictation on. You can assign a shortcut or use the microphone icon that appears in text fields. On iPhone and Android phones, tap the microphone icon on the keyboard in any app. You can dictate texts, emails, calendar entries and notes without touching the screen once dictation starts.

The key thing to know is this: Voice works best when you stop thinking in commands and start thinking in requests. Say what you want done, then add the detail. Talk as if you’re talking to an assistant, which is really what these tools are. The more you use your voice, the more comfortable you will feel.

Leslie Meredith has been writing about technology for more than a decade. As a mom of four, value, usefulness and online safety take priority. Have a question? Email Leslie at asklesliemeredith@gmail.com.
