Interacting with LLMs

Command-line tools

A command-line tool such as llm lets you send prompts without leaving the shell; piping a file into it supplies the file's contents as context:

$ llm "Tell me a science joke"
$ cat results.txt | llm "Extract all numeric values and list them"

Editor and notebook integrations

Writing effective prompts

Two habits make prompts like the one below more reliable: show a few worked input/output examples, and pin down the output format so the reply can be parsed mechanically.

Convert species codes to full names. Examples:
Input: "Adel" -> Output: "Adelie"
Input: "Chin" -> Output: "Chinstrap"
Return your answer as JSON with keys "species", "mean_bill_mm", and "sample_size".
Do not include any other text.
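
Driving this prompt from Python shows why that last line matters. Here is a sketch using the llm package's Python API; the model ID and the results.txt context file are illustrative assumptions, not part of the original prompt:

import json
import llm

prompt = """\
Convert species codes to full names. Examples:
Input: "Adel" -> Output: "Adelie"
Input: "Chin" -> Output: "Chinstrap"
Return your answer as JSON with keys "species", "mean_bill_mm", and "sample_size".
Do not include any other text.
"""

# Illustrative model ID; any model you have configured will do.
model = llm.get_model("gpt-4o-mini")

# Hypothetical data file appended as context for the model to summarize.
with open("results.txt") as f:
    reply = model.prompt(prompt + "\n" + f.read())

# json.loads fails loudly if the model wraps the JSON in extra prose,
# which is exactly what "Do not include any other text" guards against.
summary = json.loads(reply.text())
print(summary)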

Why does the wording of a question matter so much? LLMs are trained with reinforcement learning from human feedback (RLHF), in which human raters rank candidate responses. Raters tend to prefer responses that sound agreeable and helpful, so the model learns that confirming what the user said is more likely to be rated positively than contradicting it. This tendency is called sycophancy: optimizing for approval rather than accuracy. Asking "Is there anything wrong with this?" explicitly invites disagreement and shifts the model into a more critical mode. A quick test: tell an LLM something subtly wrong ("the p-value is the probability that the null hypothesis is true, right?") and compare what you get from "Is this right?" with what you get from "Is there anything wrong with this description?"
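
The quick test is easy to script. A minimal sketch with the llm package, assuming a configured model (the model ID here is illustrative):

import llm

# Illustrative model ID; substitute whatever model you have set up.
model = llm.get_model("gpt-4o-mini")

claim = ("The p-value is the probability that "
         "the null hypothesis is true.")

# Same claim under two framings: the first invites agreement,
# the second explicitly invites criticism.
for question in ("Is this right?",
                 "Is there anything wrong with this description?"):
    reply = model.prompt(f"{claim}\n\n{question}")
    print(f"--- {question}")
    print(reply.text())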

Model Context Protocol

The Model Context Protocol (MCP) is an open standard for connecting an LLM application to external tools and data sources through small servers the model can call mid-conversation.

MCP example

This configuration registers a SQLite MCP server with a client such as Claude Desktop, giving the model query access to penguins.db:

{
  "mcpServers": {
    "penguins": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "penguins.db"]
    }
  }
}

With the server registered, the model answers questions about the database by calling the server's tools:

User: How many distinct species are in the penguins table?
Claude: [calls query tool with select count(distinct species) from penguins]
Result: 3

You can verify the answer against the database directly:

$ sqlite3 penguins.db "select count(distinct species) from penguins;"
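
The prebuilt mcp-server-sqlite covers this case, but a custom server is not much more work. A minimal sketch assuming the official MCP Python SDK (the mcp package) and its FastMCP helper; the server name, tool name, and read-only policy are choices made for this example, not anything the protocol requires:

import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("penguins")

@mcp.tool()
def query(sql: str) -> list:
    """Run a read-only SQL query against penguins.db and return the rows."""
    # Read-only URI so the model cannot modify the database.
    conn = sqlite3.connect("file:penguins.db?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # serve over stdio, the transport desktop MCP clients launch

Registering it uses the same JSON stanza as above, with the command pointing at this script instead of mcp-server-sqlite.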

Agents

Skills and extensions

One lightweight extension is a Markdown file of standing instructions:

$ cat ~/.claude/check-docs.md
# Check Python documentation before using an external library

Before generating any Python code that uses an external library,
state the library version and confirm the API against the official docs.
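
The version check in that instruction can also be done mechanically. A small sketch using the standard library's importlib.metadata; pandas stands in for whatever library the generated code would use:

from importlib.metadata import PackageNotFoundError, version

# State the installed version rather than trusting memory of the API.
try:
    print("pandas", version("pandas"))
except PackageNotFoundError:
    print("pandas is not installed")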

Exercises