When I began exploring AI, my focus was primarily on its capabilities, and I quickly put it to use for both essential and trivial purposes. That experience shaped my understanding of AI more than anything else: the importance of using it well cannot be overstated. Some call this mastery "prompting", but that label feels reductive. Saying that someone skilled in human interaction is just "good at asking questions" overlooks the depth of cooperating with another party.
Mastering interaction is about understanding the other party: recognizing their strengths, weaknesses, and operational tendencies. It's about knowing when to support them and when to trust them. Our ability to relate to humans is bolstered by shared experiences: living in the same environment, consuming similar media, and even engaging in face-to-face conversations.
Many express disappointment with AI outputs, expecting factual accuracy akin to an encyclopedia. But AI models, particularly ones like GPT-4, operate more as reasoning engines. They rely on input variables (prompts) and internally associated data for context. The inherent limitation lies in that data and its associative power. Compared to a human, a model's context is extremely limited, both in language and in personal history.
To harness AI's potential, we should view it as a reasoning entity. GPT-4's intelligence, if we differentiate between intelligence and data-based associations, is exceptional. The distinction is similar to Mensa IQ tests, which aim to measure intelligence without requiring prior knowledge.
In essence, the model's internal data serves as foundational knowledge, enabling it to comprehend abstract input and produce contextually rich outputs. Yet this "general context" is curated from vast amounts of internet data, which may not always reflect reality. To maximize the model's effectiveness, we must provide as much specific context and information as possible.
So far, fine-tuning has yielded very limited results relative to the effort it demands; the current paradigm instead leans toward highly evolved general foundation models combined with augmented data. OpenAI enables this in a fairly advanced way through their embeddings API, which lets us create mathematical semantic associations for any information we want. This is, of course, something we can do when systemizing associations and building our own AI-augmented systems.
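As a minimal sketch of what those "mathematical semantic associations" look like in practice, here is one way to embed texts and compare them, assuming the openai Python SDK; the model name and the example texts are my own illustrative choices, not prescriptions:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    """Turn a piece of text into a semantic vector via the embeddings API."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Higher values mean the two texts are semantically closer."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative: compare a question against two candidate passages.
question = embed("What notice period applies when terminating a lease?")
passage_a = embed("Rules on notice periods for terminating rental agreements.")
passage_b = embed("Recipes for traditional Swedish midsummer dishes.")

print(cosine_similarity(question, passage_a))  # expected: relatively high
print(cosine_similarity(question, passage_b))  # expected: relatively low
```

The vectors themselves are just lists of numbers, but because semantically related texts land near each other, similarity scores become a practical handle for attaching our own information to the model.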
One approach I've adopted for the more ad-hoc interaction that is common in my own use of AI is to have the AI ask questions to understand an issue better. Basically, you ask it to take the role of an interviewer and to ask whatever questions it needs in order to adhere to a specific intention you have. Another technique we've employed in building a legal application involves extensive databases on Swedish law. By segmenting and vectorizing this data, the AI can produce precise answers when queried. Since we are working with a large corpus of data, this requires a lot of pre-defined logic and intricate chains, running both in parallel and serialized, to reason and work through all the relevant data.
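To make the interviewer technique concrete, here is a minimal sketch, again assuming the openai Python SDK; the wording of the system prompt and the model name are my own illustration, not a fixed recipe:

```python
from openai import OpenAI

client = OpenAI()

# The system prompt casts the model as an interviewer serving a stated intention.
messages = [
    {
        "role": "system",
        "content": (
            "You are an interviewer. Before giving any advice, ask me one "
            "question at a time until you understand my situation well enough "
            "to serve my stated intention. Then summarize what you learned "
            "and give your answer."
        ),
    },
    {
        "role": "user",
        "content": "My intention: improve marketing for my small SaaS product.",
    },
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)

# In an interactive loop you would append the model's question and your own
# answer to `messages` and call the API again, letting the interview unfold.
```

And here is a simplified sketch of the segment-and-vectorize pattern behind the legal application: embed the question, rank stored chunks by similarity, and hand the best matches to the model as context. The chunks, helper names, and models are hypothetical stand-ins; a real system would pull thousands of segments from a vector store and orchestrate far more logic around this core:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# Illustrative stand-ins for pre-segmented legal text.
chunks = [
    "Hyreslagen: rules on notice periods for residential leases.",
    "Konsumentköplagen: consumer rights when buying defective goods.",
    "Avtalslagen: general provisions on forming and voiding contracts.",
]
chunk_vectors = [embed(c) for c in chunks]

def top_chunks(question: str, k: int = 2) -> list[str]:
    """Rank chunks by cosine similarity to the question; keep the best k."""
    q = embed(question)
    scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
              for v in chunk_vectors]
    ranked = sorted(zip(scores, chunks), reverse=True)
    return [c for _, c in ranked[:k]]

question = "What notice period applies when I terminate my apartment lease?"
context = "\n".join(top_chunks(question))

answer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```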
Now, let’s delve into “intention.”
According to Wikipedia, the term "intention" refers to a spectrum of related concepts. Prospective intentions relate to future plans, while immediate intentions guide actions in real time. When prompting AI, we're often dealing with prospective intentions. However, to leverage AI's capabilities, we can provide it with both prospective and immediate intentions, thus sharing our context for a more tailored response.
For instance, asking the generic question "How to do good marketing?" may yield a generic list of marketing strategies. Adding more context and intention to the query makes the output more specific and actionable.
Sample 1: