Back in the days when I was a "computer telephony" app developer in the early 1990s, a client paid me to learn the latest state-of-the-art technology – speech recognition – for future application development. The way the instructor explained it, the systems used complex models of the human vocal tract, serial filtering algorithms, and heavy computing to achieve an industry-leading accuracy rate of 80%, if I remember correctly.
And to understand what a full sentence meant, the developers had to program in grammatical rules and syntax structures, using even more computing for even less successful outcomes. Then came big data and neural networks: all you need to do is feed the machines a massive amount of human speech along with the corresponding text, and they soon learn to transcribe and understand it with near-perfect accuracy.
Similarly, instead of programming in the game rules and winning strategies for chess or Go, just feed the computers massive amounts of game scenario data, and they soon figure out how to play at a skill level that exceeds the best human players.
Such are the "bitter lessons" we learn in the AI age, i.e., in the long run, general search and learning methods that leverage computing power always outperform those based on encoding human knowledge or human-like rules. The "bitter" part comes from the realization that decades of human expertise and carefully crafted systems are often surpassed by brute-force computation, which is a hard pill for human researchers to swallow.
**********************************
As AI and robots take over the mundane and not-so-mundane tasks of the world – finding and developing new resources, manufacturing and delivering goods, creating and managing wealth, all with frictionless efficiency and optimal results – some say we are entering an "age of abundance": nobody needs to work for money, everyone gets a "universal basic income" and is free to pursue wherever their passion leads: be the painter or musician you've always wanted to be, create a video game for yourself and play it all day long if you want, and so on.
But am I not already in the "age of abundance," having rid myself of the 9-to-5 work yoke and being free to pursue whatever hobby or long-desired project I've always wanted – yet still feeling restless at times? Doesn't true artwork come more from deep human suffering than from an all-is-well life? How many video games can one play before getting bored, anyway?
**********************************
A world of "augmented reality" or "virtual reality" is looming large again, with Meta's new Vision glasses that can surreptitiously display the background information of a scene or people you look at, perform tasks per your finger gestures as if you were sitting in front of a computer rolling mouse around...
I might like to see text appear on my glasses' display explaining the history and layout of an old castle I am visiting, but probably not geographical information about the rivers and mountains of a stunningly beautiful scene I stumble upon in a national park – moments when I want to devote my full sensory attention to the here and now while quieting my cognitive activity, like a computer going dark into sleep mode...
As the world of synthesis becomes more and more indistinguishable from the world of physical reality, both Elon Musk's claim that "we may be living in a computer-generated simulation" and the Buddhist teaching that "the world is a phantom mirage sired by our vain mind" ring comically true, and truer.
**********************************
I don't like using Large Language Models such as ChatGPT or Gemini for research, for the simple reason that they "hallucinate," making things up when they don't have good answers. They say the best (most dangerous) liar is one who speaks half-truths, but I think a chatbot that lies only 1 time out of 100 is 100 times worse than a half-truth teller – you trust it all the more, so the rare lie slips right past you.
Then there is "AI slop", people using LLMs and other AI tools to generate low quality, fluffy, inauthentic, and erroneous content that flood the internet, academia and workplace, drawing eyeballs while spreading falsehood, facilitating cheating and hemorrhaging productivity. Things will get better, say AI "effective accelerationists" (people who believe rapid technological advancement will solve universal human problems), as we are in a transition period towards a "singularity" point when AI becomes ASI (Artificial Super Intelligence) that would self-improve and eliminate all the slop we've seen.
Before that happens, however, we humans will have to clean up the slop AI creates – or, to be fair, the slop we make AI create – ourselves. Case in point: an estimated 10% of the software generated by AI from natural-language prompts ("vibe coding") is faulty. It then takes a human software engineer with superior coding know-how to fix the faulty code AI generated. Furthermore, cutting down on faulty code generation requires a human software engineer who is not only a better coder but also skillful at giving precise, pertinent, proprietary prompts that instruct AI to create less sloppy software in the first place.
If we want to stay one step ahead of ASI, we would need to move our human intelligence up a notch, to AHI (Advanced Human Intelligence), so to speak 😁
**********************************
Nvidia is the highest-market-cap company nowadays because it holds a near-monopoly on GPU chips, the essential hardware for the data center servers that form the backbone of AI services – just as Cisco was king of the nascent internet industry during the dot-com era, when its routers were the basic building blocks of inter-networks. Then comes the news that Nvidia is investing $100 billion in OpenAI, one of its major customers, to buy/lease its chips and build data centers. It reminds me of a major telecom company that, also during the dot-com era, bought equipment from a "unified messaging" startup working on technology similar to my own startup's – a deal that ran up both companies' stock prices overnight.
Are we in an AI build-up bubble now? Nah, say some analysts: these AI companies have real users (800 million for OpenAI) and real revenue ($4.3 billion in the first half of 2025 for OpenAI), and the demand for AI is insatiable and growing exponentially; the only limit to the AI boom is the computing capacity the world can provide – hence the justifiable build-up.
And even when the bubble bursts, some well-managed companies will survive, as will the infrastructure and the truly innovative technologies. Thus we have Google and Amazon coming out of the dot-com crash stronger and better than ever, fiber-optic high-speed internet everywhere, and voice calls over the internet at zero cost today, don't we?
I don't have a crystal ball to tell which companies will survive and thrive when the AI craze cools, but I think AI slop will get worse before it gets better, the smartphone will remain the most popular AR/VR gadget people carry, and the "age of abundance" sounds too utopian to be real. One bitter(sweet) lesson I will soon learn: I will be able to take a self-driving robotaxi to LAX sooner than a bullet train from LA to San Francisco (if that ever happens)!