I recently spotted an interesting post by Benedict Evans on Threads. He argued people spent 20 years dreaming about pen computing, but now that Apple has a flawless pen computer, it’s “pretty much useless for anything except actually drawing”. He therefore concludes: “Pen computing didn’t happen. I do wonder how far that is applicable to voice, natural language processing and chat bots – the fact they didn’t work was a trap, because even now that they do work, they might be a bad idea.”

I have a different take. If people did once dream ‘pen computing’ was the next step, it feels more like Apple subverted this by removing the need for a specific input device. Instead, you just use your fingers. ‘Pen computing’ became a subset of that, for people who needed more control and precision. Arguably, then, ‘pen computing’ is a massive success, because what it evolved into is how the majority of people use computers – that is, touchscreens on smartphones.

The takeaway here for me isn’t so much that Benedict is wrong or that I’m right. It’s that you cannot predict the details of the next big thing. We don’t know with any certainty how things will play out, even when the broad brushstrokes become obvious and later largely come to pass.

So with voice, will it work? Quite possibly. But not necessarily in the specific ways we currently imagine it will.