Have you ever found yourself at the mercy of an idea, so profoundly engrossing that it consumed your thoughts, even on a supposed break? That’s me, with artificial intelligence (AI) – an obsession so potent that it doesn’t even spare my vacation in picturesque Japan.
On a recent trip, I swapped my computer for a notepad and pencil. This might seem ironic coming from an AI enthusiast who’s lived most of his life in the digital age and whose handwriting could rival a 4-year-old’s. But there I was, jotting down my thoughts the old-school way. And the topic? “Introspection and AI Alignment.”
My girlfriend, ever supportive, even if occasionally exasperated by my relentless AI chatter, quipped, “So you leave your computer just to start working with pen and paper instead?” I laughed it off, but a part of me knew she was right. This was work, albeit of a different kind.
Fast forward a few weeks, and OpenAI released a fascinating paper titled “Language models can explain neurons in language models.” In essence, it describes how they used the latest iteration of their language model, GPT-4, to analyze and explain the inner workings of its predecessor, GPT-2. Research like this could eventually help us refine these synthetic neural networks and make them more transparent and trustworthy.
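To make the method concrete, here is a minimal sketch of that explain–simulate–score loop in Python. To be clear, none of this is OpenAI’s actual code: the function names are mine, and the two model calls are stubbed with toy data so the example runs end to end. What matters is the shape of the idea: record a neuron’s activations, ask a stronger model to explain them in plain language, have it predict activations from the explanation alone, and score how well prediction matches reality.

```python
from statistics import correlation  # Python 3.10+

# Texts over which we observe the (hypothetical) GPT-2 neuron to explain.
texts = ["The Avengers assembled once more", "Batman returned to Gotham"]

def record_activations(text: str) -> list[float]:
    # Stub: the real setup runs GPT-2 and records one neuron's
    # activation for every token in `text`.
    return [0.1, 0.9, 0.8, 0.2, 0.1][: len(text.split())]

def explain(examples: list[tuple[str, list[float]]]) -> str:
    # Stub: the real setup prompts GPT-4 with (token, activation)
    # pairs and asks it to describe what the neuron responds to.
    return "fires on references to superheroes"

def simulate(explanation: str, text: str) -> list[float]:
    # Stub: the real setup asks GPT-4 to *predict* the activations,
    # given only the explanation and the text.
    return [0.2, 0.8, 0.7, 0.1, 0.2][: len(text.split())]

# 1. Observe real activations; 2. ask for an explanation;
# 3. simulate activations from the explanation; 4. score the match.
examples = [(t, record_activations(t)) for t in texts]
explanation = explain(examples)

real, predicted = [], []
for text, activations in examples:
    real.extend(activations)
    predicted.extend(simulate(explanation, text))

# A high correlation means the explanation captures the neuron's behavior.
print(explanation, "-> score:", round(correlation(real, predicted), 3))
```

OpenAI ran essentially this loop at scale, scoring an explanation for every neuron in GPT-2.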
Throughout history, we’ve been intrigued by the mysteries of our own minds, conducting extensive research in psychology and neuroscience to understand our inner selves better. Despite our efforts, we’ve barely scratched the surface of this enigma. Now, we’re creating synthetic brains – artificial neural networks – that bear an uncanny resemblance to our organic ones. Ironically, we understand these creations as little as we do our own minds.
As AI models become increasingly advanced, our lack of understanding presents real risks. This is not about apocalyptic fears of AI taking over the world; it’s about addressing concrete issues such as bias, falsehoods, and other unsavory traits that can manifest in AI systems – traits we wouldn’t tolerate in our society. For AI companies to be accountable for their products, they need to understand their creations deeply. And if we come to regard AIs as entities with views and actions of their own, they too will need introspection to be responsible and accountable.
OpenAI’s new approach, though far from a complete solution, is a step in the right direction. Picture GPT-4 as a therapist evaluating the state of mind of GPT-2. However, this external evaluation won’t suffice for the day-to-day operations of AI. What we need is to equip these models with an inherent ability for self-reflection and introspection.
Imagine if we could attach a ‘trace’ request to any AI response, giving us insight into the ‘thought’ process that led to that specific output. This capability could be built into the AI’s design, allowing it to analyze and learn from its interactions and potentially improve future responses. However, ‘improve’ is a subjective term that must be defined carefully, as that definition forms the crux of AI alignment.
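As a thought experiment, a ‘trace’ option at the API level might look something like the toy sketch below. Everything here is invented for illustration, including the function name `query_model` and the canned trace steps; no real product exposes this interface. The point is only the shape of the design, in which every answer carries an auditable record of how it was produced.

```python
from dataclasses import dataclass, field
import time

@dataclass
class TracedResponse:
    answer: str
    # The introspective part: a structured record of how the answer came to be.
    trace: list[str] = field(default_factory=list)
    timestamp: float = field(default_factory=time.time)

def query_model(prompt: str, trace: bool = False) -> TracedResponse:
    # Stub model call; a real system would log genuine intermediate
    # steps rather than these canned ones.
    steps = [
        f"received prompt: {prompt!r}",
        "retrieved relevant context",
        "drafted an answer and checked it against safety policies",
    ]
    return TracedResponse(answer="(model output)", trace=steps if trace else [])

response = query_model("Why is the sky blue?", trace=True)
print(response.answer)
for step in response.trace:
    print("  trace:", step)  # the audit trail a reviewer could inspect
```

Stored alongside each response, a trace like this is exactly the kind of backtracking hook that would make after-the-fact analysis of an AI’s behavior possible.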
The onus falls on the AI companies to develop this capability, but it’s equally essential for regulators to understand the gravity of the situation.
My plea is for stringent regulation that mandates mechanisms for backtracking and analyzing the behavior of publicly available AIs.
Progress is unstoppable, and halting research is not the answer. Instead, we need to set the ground rules for AI interaction with the public, a topic that should be high on international political agendas.
So, next time you interact with an AI, imagine it introspecting, analyzing its own responses. Picture it learning and evolving, just as we do. It’s a bold vision, and we’re far from it, but it’s not outside the realm of possibility. To realize it, we must continue to push the boundaries of AI research, always striving for greater transparency, accountability, and alignment with our human values.
In the coming years, I expect to see AI advancing in leaps and bounds. But as it does, it’s crucial that we don’t lose sight of the need for introspection and alignment. Only then can we ensure that these powerful tools serve us well and that their actions are predictable, understandable, and beneficial to all.
Remember, AI is not just about innovation and technology. It’s about understanding ourselves better, making ethical decisions, and creating a better world. It’s about ensuring that, as we step into the future, we do so responsibly.
So, let’s embrace AI, but let’s also enable accountability.
//Iggy Gullstrand