Concept of the Week - Tokens
Understanding tokens helps you control cost, length and clarity in AI chats
The idea
A text model such as ChatGPT, Microsoft Copilot or Google Gemini reads and writes in tokens: small chunks of text (pieces of words, spaces, punctuation). Pricing and length limits are set in tokens, and both your prompt and the model’s reply count toward the total.
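If you want to see the split for yourself, the short sketch below uses OpenAI’s open-source tiktoken library. This is only one tokenizer among several; Copilot and Gemini models use their own, so the exact pieces and counts will differ, but the idea is the same.

```python
# Minimal sketch: split one sentence into tokens with tiktoken (pip install tiktoken).
# cl100k_base is the encoding used by several OpenAI models; other vendors differ.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Tokens are small chunks of text, not whole words."
token_ids = enc.encode(text)

print(len(text.split()), "words ->", len(token_ids), "tokens")
print([enc.decode([t]) for t in token_ids])  # the pieces the model actually reads
```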
Where you’ll find it
You’ll see per-1,000-token prices on model pages. A chat that cuts off mid-sentence has usually hit a token cap, either the reply limit or the conversation’s context window. Short, specific prompts waste fewer tokens and give the model less room to drift.
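As a rough budgeting aid, the sketch below turns per-1,000-token prices into a cost per exchange. The prices are placeholder assumptions, not real rates; check the provider’s model page, and note that prompt (input) and reply (output) tokens are often priced differently.

```python
# Back-of-envelope cost estimate. The per-1,000-token prices are assumptions;
# real rates vary by model and provider.
PRICE_IN_PER_1K = 0.0005   # assumed USD per 1,000 prompt tokens
PRICE_OUT_PER_1K = 0.0015  # assumed USD per 1,000 reply tokens

def estimate_cost(prompt_tokens: int, reply_tokens: int) -> float:
    """Both the prompt and the reply count toward the bill."""
    return (prompt_tokens / 1000) * PRICE_IN_PER_1K \
         + (reply_tokens / 1000) * PRICE_OUT_PER_1K

# An 800-token prompt with a 400-token reply under these assumed rates
print(f"${estimate_cost(800, 400):.4f}")  # $0.0010
```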
One practical test (3–5 minutes, any mainstream chatbot)
Goal
Feel the trade-off between brevity and information loss.
Metric
Facts kept or dropped.
1. Paste a 150-word paragraph.
2. Ask for a one-sentence summary.
3. Ask for the same summary with a 30-token limit.
Good result
The short version keeps the core fact and any key date, even if supporting detail is lost.
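If you want to verify that the step-3 reply really fits within 30 tokens, you can count it locally. The sketch below uses tiktoken as an approximation; the chatbot’s own tokenizer may count slightly differently.

```python
# Approximate check of the 30-token limit from step 3 (pip install tiktoken).
# The chatbot's own tokenizer may give a slightly different count.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
reply = "Paste the model's 30-token summary here."
count = len(enc.encode(reply))
print(count, "tokens -", "within the 30-token limit" if count <= 30 else "over the limit")
```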
Safety
Don’t paste sensitive text.
What to watch
One token is not one word; in English, 100 tokens is roughly 75 words. Leave slack for headings, numbers and punctuation when planning a workflow or a budget.
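For rough planning you can wrap that rule of thumb in a small helper, as in the sketch below. The 1.3 tokens-per-word ratio and the 20% safety margin are assumptions for English prose, not fixed constants; adjust them to your own content.

```python
# Rough token-budget helper. The ratio and slack are assumptions, not constants.
def token_budget(word_count: int, tokens_per_word: float = 1.3, slack: float = 0.2) -> int:
    """Estimate how many tokens to reserve for a piece of text."""
    return int(word_count * tokens_per_word * (1 + slack))

print(token_budget(150))  # reserve ~234 tokens for the 150-word paragraph in the test above
```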
Takeaway
Tokens are the measure of cost and length. Write shorter, more specific prompts to get clearer results.