
The Human/AI Language • Chapter 2
Introducing Large Language Models
Inside every Large Language Model lies a universe of patterns—billions of connections drawn from human knowledge, experience, and creativity.
These aren't just word prediction engines; they're sophisticated systems that can reason, create, and collaborate in ways that transform how we think and work. Understanding how they function isn't just about technology—it's about unlocking new possibilities for human-AI collaboration.
Part 1
What are Large Language Models?
Large Language Models are artificial intelligence systems trained on an unprecedented amount of textual data—spanning everything from books and academic articles to websites, code repositories, and social media conversations.
To give a sense of scale, these systems are trained on datasets that can contain trillions of words, equivalent to reading the entire contents of a vast library hundreds of times over.
However, "trained" doesn’t mean what you might think. LLMs don’t study this content like a student might. Instead, this extraordinary volume of data enables them to identify patterns, linguistic structures, and contextual nuances across diverse domains. LLMs learn how words tend to follow one another, how ideas connect, and how language flows in different contexts.
Think of it this way: If you’ve read thousands of fairy tales, you develop an intuition for how they work. When you see the phrase "Once upon a time" you instinctively anticipate what might come next. LLMs operate in a similar way—but on a massive scale and across all types of text, not just fairy tales.
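The "anticipating what comes next" idea can be made concrete with a deliberately tiny sketch. The following is not how a real LLM works (real models use neural networks over billions of parameters, not word counts), but it illustrates the same principle: learn which words tend to follow which, then predict the most likely continuation. The toy corpus and the `predict` helper are invented for illustration.

```python
from collections import Counter, defaultdict

# "Train" on a tiny corpus: count which word follows each word.
corpus = (
    "once upon a time there lived a princess . "
    "once upon a time there was a dragon ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(predict("upon"))  # -> "a"
print(predict("a"))     # -> "time" (seen twice, beating "princess" and "dragon")
```

Scaled up to trillions of words and far richer statistical machinery, this is the intuition behind an LLM's fluency: it has seen "once upon a" so often that "time" is overwhelmingly the expected next word.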
Part 2
Interacting with LLMs
Interacting with an LLM can open up a world of possibilities. From answering questions to helping you brainstorm creative ideas or analyse complex topics, these AI systems adapt to your needs in surprisingly human-like ways.
Next, we’ll explore some of the most common types of responses LLMs can generate, along with examples of how you might use them. Whether you’re looking for guidance, inspiration, or just a casual conversation, understanding these response types will help you engage more effectively and make the most of what LLMs have to offer.
Factual Responses
When you ask, "What is the capital of France?" the LLM doesn’t consult a database of facts like a search engine. Instead, it draws on patterns it has seen during training. For example, the word "Paris" has appeared alongside "capital of France" so frequently in its training data that the connection is deeply embedded.
Example
User: What’s the tallest mountain in the world?
LLM: Mount Everest, standing at 8,848 metres.
This process makes responses feel accurate, but it’s important to remember that LLMs match patterns rather than verify facts. Always double-check critical information.
Creative Responses
When you ask an LLM to write a story, it doesn’t "imagine" like humans do. Instead, it generates new combinations of patterns it has learned from millions of existing stories, books, and other texts. This is why AI-generated content can often feel familiar—it’s a remix of what already exists.
Example
User: Write a short story about a lost puppy who finds its way home.
LLM: Once there was a puppy named Max who wandered too far from his yard. The sun was setting, and he felt lost—until the scent of his owner’s jacket guided him back…
While the response may be heartwarming, the LLM isn’t experiencing emotion or inventing from scratch; it’s weaving together story elements based on patterns. Rather than a storyteller, think of it as a creative collaborator—offering starting points that you can shape and refine.
Analytical Responses
When you ask an LLM to analyse something—such as summarising a report or breaking down a poem—it’s not "thinking" like a human. Instead, it’s applying patterns it has learned from similar analytical texts. This allows it to simulate analysis by combining patterns with relevant information.
Example
User: Analyse Shakespeare’s use of metaphors in Sonnet 18.
LLM: Shakespeare uses metaphors to compare the beloved to a summer’s day, emphasising qualities like warmth, beauty, and vitality while contrasting them with the transient nature of seasons…
These analytical responses can feel sophisticated but don’t stem from true critical thinking. LLMs are reflecting patterns, not forming original insights.
While they can save time and inspire ideas, always rely on your own judgment for deeper interpretation.
Instructional Responses
LLMs are often used to provide step-by-step guidance on completing tasks or solving problems. These responses synthesise patterns from instructional material to deliver clear, actionable steps.
Example
User: How do I bake a chocolate cake?
LLM: To bake a chocolate cake, you’ll need to:
1. Preheat your oven to 180°C (350°F).
2. Combine dry ingredients like flour, sugar, and cocoa powder.
3. Mix in wet ingredients such as eggs, milk, and oil...
Instructional responses excel at breaking down processes but may lack nuance for complex or specialised tasks. For critical instructions, validating steps against a trusted source is essential.
Conversational Responses
LLMs can simulate casual, open-ended dialogue, often used to engage users in brainstorming, emotional support, or informal chats. These responses aim to feel natural and relatable by mirroring human conversational patterns.
Example
User: I’m feeling nervous about my presentation tomorrow.
LLM: It’s natural to feel that way. Have you practised your key points? You’re going to do great—just take a few deep breaths and remember why you’re giving this presentation.
While conversational responses can feel empathetic, it’s important to remember they’re not a substitute for professional advice or genuine human interaction.
Exploratory Responses
When you ask an open-ended or speculative question, LLMs can generate exploratory responses that synthesise various perspectives or expand on ideas. These responses often blend factual and creative elements.
Example
User: What might cities look like 100 years from now?
LLM: In 100 years, cities could be transformed by vertical farming, renewable energy grids, and autonomous transportation. Imagine skyscrapers covered in greenery, smart systems managing energy consumption, and drones delivering packages directly to your balcony...
Exploratory responses are useful for ideation but should be treated as imaginative possibilities rather than concrete predictions.
Summarisation Responses
LLMs can distil long or complex texts into concise summaries, highlighting key points while preserving the core meaning.
Example
User: Summarise this article about climate change.
LLM: The article discusses the urgent need to reduce greenhouse gas emissions, focusing on renewable energy adoption and policy reforms. Key challenges include international cooperation and economic barriers.
Summaries can save time and aid comprehension, but they may oversimplify nuanced content. Critical details or context might be omitted.
Part 3
Limitations and Challenges
Understanding the limitations of LLMs is key to using them effectively and responsibly. While their capabilities are remarkable, their constraints remind us why human oversight remains essential. Let’s explore some of the most critical challenges.
Pattern Matching vs. Genuine Understanding
LLMs can hold sophisticated conversations on virtually any topic, but their knowledge is based on pattern recognition, not real-world understanding. For instance, they can discuss the concept of gravity in detail, but their understanding is derived entirely from patterns in text—scientific explanations, metaphors, and analogies that others have written about gravity. They have never actually experienced the physical sensation of gravity.
Similarly, LLMs can describe the taste of an apple, but this ability comes from patterns they’ve learned in descriptions written by others. For example, they may have encountered phrases like "apples are sweet and crisp, with a refreshing tartness," and inferred the common ways humans talk about apples. However, this knowledge is purely text-based; they’ve never tasted an apple themselves or experienced its flavour.
This fundamental limitation means LLMs are always working from patterns in text, not from lived or sensory experience. Their "understanding" is a simulation, built from associations rather than direct comprehension.
Hallucinations
In the context of LLMs, "hallucinations" refer to responses that sound plausible but are incorrect or entirely fabricated. This happens because LLMs generate text based on patterns in their training data rather than verifying factual accuracy.
Pattern-Based Predictions: LLMs predict the most likely sequence of words based on their training data. When they lack sufficient context or knowledge, they may "fill in the gaps" with convincing but incorrect information.
No Fact-Checking Mechanism: Unlike a search engine, LLMs don’t have a built-in knowledge base to confirm the accuracy of their responses.
Overconfidence in Uncertainty: LLMs generate answers even when they lack enough information to ensure correctness, leading to plausible-sounding but inaccurate statements.
This limitation highlights the importance of verifying critical information and treating LLM responses as starting points rather than definitive answers.
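A small sketch can show why hallucinations happen. A generator that samples from a probability distribution over next words always produces *something*, whether or not any candidate is well supported; there is no step where it checks a fact or declines to answer. The question, the candidate words, and their probabilities below are all invented for illustration—no real model's numbers are being quoted.

```python
import random

# Hypothetical next-word probabilities for the prompt
# "The capital of Atlantis is ...". Atlantis has no capital,
# so no candidate is strongly supported—yet one is emitted anyway.
candidates = {"Poseidonia": 0.21, "Atlantica": 0.19, "Paris": 0.18, "Thera": 0.17}

def generate(distribution):
    """Sample one word, weighted by probability. No fact-checking step exists."""
    words, weights = zip(*distribution.items())
    return random.choices(words, weights=weights)[0]

answer = generate(candidates)
print(f"The capital of Atlantis is {answer}.")  # fluent and confident regardless
```

Notice that the output sentence reads identically whether the top candidate had probability 0.99 or 0.21: fluency carries no information about accuracy, which is exactly why plausible-sounding fabrications occur.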
Dependence on Training Data
Think of an LLM as a student whose knowledge comes solely from the books it’s read. What’s in those books—and what’s missing—shape what it can understand and discuss. For example, if a dataset includes numerous texts about Western history but very few about African history, the model’s responses will reflect this imbalance, offering more detailed insights on the former than the latter.
Additionally, since many LLMs lack continuous updates or live internet access, they may not know about recent events or advancements in fast-changing fields. For instance, an LLM trained before 2023 might not be aware of groundbreaking developments in renewable energy or major geopolitical events that occurred after its training cutoff.
This dependence on training data makes LLMs great for foundational knowledge but less reliable for timely or underrepresented topics.
Lack of Consistency
Imagine having a conversation with someone who refreshes their memory every time they speak—that’s a bit like how LLMs work. Within a single conversation, they can use earlier exchanges to provide context (a capability known as a "context window"—more on this later), but this "memory" is temporary and limited.
Each response is generated anew, without any true memory or persistent beliefs. This means that:
If the conversation becomes too lengthy or detailed, earlier parts may fall outside the LLM's context window and be "forgotten."
Asking the same question twice might yield different answers, as responses are shaped by probabilities, not fixed knowledge.
LLMs might contradict themselves, particularly when nuanced topics are phrased or revisited in different ways.
Unlike humans, who maintain consistent beliefs and memories across interactions, LLMs don’t retain information beyond the session. This limitation highlights the importance of carefully managing context and verifying responses when interacting with them.
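The "falling outside the context window" effect can be sketched in a few lines. Real models measure the window in tokens (more on tokens in the next chapter) and use far larger limits; here words stand in for tokens, and the chat messages and `build_context` helper are invented for illustration.

```python
def build_context(messages, window=12):
    """Keep only the most recent words that fit in a fixed-size window."""
    tokens = []
    for msg in messages:
        tokens.extend(msg.split())
    return tokens[-window:]  # earlier turns silently fall off the front

chat = [
    "my name is Ada and I live in York",
    "tell me about the history of York",
    "now summarise what we discussed about the city and its walls",
]
context = build_context(chat)
print("name" in context)  # -> False: the opening turn has been "forgotten"
```

Once the first message drops out of the window, the model has no trace of it—asking "what's my name?" at this point would force it to guess, which is one way long conversations drift or contradict themselves.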
Struggles with Ambiguity
Humans excel at interpreting meaning from subtle cues, sarcasm, or incomplete statements. For example, the phrase "I’m fine" can mean very different things depending on tone, context, or even accompanying body language. LLMs, however, derive meaning purely from text patterns.
If an LLM encounters "I’m fine" in a positive context (e.g., "She smiled and said, ‘I’m fine.’"), it may interpret it as a genuine statement. In a negative context (e.g., "He sighed and muttered, ‘I’m fine.’"), it might miss the sarcasm or underlying sentiment entirely unless these patterns are explicitly linked in its training data. This reliance on explicit patterns makes LLMs less adept at handling ambiguous or layered communication.
Cultural and Language Nuances
Language is more than just words—it’s deeply tied to culture, context, and shared experiences. While LLMs can process multiple languages, they often lack the cultural awareness that gives words their full meaning.
For example, an LLM might accurately translate the literal meaning of an idiom like "spill the beans," but without cultural context, it might not recognise that it means "to reveal a secret." Similarly, when processing text about cultural rituals or social norms, the LLM might miss the significance that only comes from lived cultural experience.
It’s like using a tourist guidebook: you might know the basic customs, but you won’t fully grasp the deeper cultural layers or unspoken social cues.
Why These Limitations Matter
These challenges don’t diminish the extraordinary capabilities of LLMs but instead highlight their role as tools to augment—not replace—human thinking. By understanding their constraints, we can better navigate interactions, leveraging their strengths while compensating for their weaknesses. The key is to use LLMs thoughtfully, always applying human judgment to ensure they serve as effective collaborators in problem-solving and creativity.
Part 4
Reflection
When an LLM confidently makes something up, it reminds us that sounding right isn’t the same as being right.
In a world full of noise, how do you stay curious, ask better questions, and focus on what really matters?
Next • Chapter 3
AI Tokens and the Role of Randomness
Every conversation with AI is shaped by two key ideas: tokens and randomness. While they might seem technical at first, understanding these concepts unlocks a deeper connection with AI—helping you get more creative, precise, and effective results every time you interact.