Too much assembly required for AI
One sign that we’re still very early in the evolution of AI is how much heavy lifting is left to the user. As Community Leadership Core founder Jono Bacon laments, even the very act of “need[ing] to choose between [large language] models” to run a query is “complex and confusing for most people.” Once you’ve chosen the “right” model (whatever that means), you still need to do all sorts of work to get the model to return relevant results (and forget about getting consistent results; that’s not really a feature of current LLMs).

All that said, when I asked RedMonk co-founder James Governor if AI/genAI had lost its shine, his response was an emphatic “No.” We may currently be sitting in the trough of disillusionment (my phrase, not his), but that’s just because we’re following the same timeline all important new technologies seem to take: from indifference to worship to scorn to general adoption. Some software developers are already jumping into that last phase; for others, things are going to take more time.

Eventually consistent

It’s been clear for a while now that AI would take time to really hit its stride. All it takes is a little fiddling with something like Midjourney to create an image before you notice, as Governor did, that “the majority of AI art trends to kitsch.” Is that because computers don’t know what good art looks like? As inveterate AI grumbler Grady Booch notes, we sometimes pretend that AI can reason and think, but neither is true. In his words, “Human thinking and human understanding are not mere statistical processes as are LLMs, and to assert that they are represents a profound misunderstanding of the exquisite uniqueness of human cognition.”
