Insights

Insights from the field.

Writing on AI engineering, architecture, governance, and the reality of integrating AI into working systems.

🧠 AI doesn't reduce teams, it redefines them

Published: April 14, 2026

Capgemini recently announced layoffs in Spain citing AI as justification. The unions blame management; the company blames technology. Both perspectives hold merit. AI doesn't cut headcount — it redefines how teams function.

Read more →

🧠 Thinking with AI also requires defense

Published: April 13, 2026

Using AI to validate ideas risks reinforcing what we already believe instead of sharpening our thinking. Good prompting doesn't seek confirmation — it attacks the hypothesis before defending it.

Read more →

🧠 Transformers without magic

Published: April 08, 2026

A transformer is a way of reading text. But it doesn't read in order: it evaluates all tokens at once and decides which ones are most relevant to each other. The magic isn't magic — it's attention.
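A minimal pure-Python sketch of the attention step the post describes, on toy 2-d token vectors (no learned projections, no multi-head machinery — just scoring and weighting):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over toy token vectors.

    Every query scores every key at once; the softmaxed scores
    weight the values. No token is read "in order".
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three toy 2-d token embeddings attend to each other at once.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(tokens, tokens, tokens))
```

Each output row is a weighted mix of all the value vectors — that mixing, not any sequential scan, is what "attention" means here.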

Read more →

🧠 The keyboard is no longer the limit

Published: April 06, 2026

The keyboard had an often-overlooked property: it forced thinking. Voice interfaces eliminate the friction of typing, but also the discipline that friction imposed.

Read more →

🧠 What is a token?

Published: April 01, 2026

A token is the minimal unit of text that an AI model processes — not necessarily a whole word. Understanding how tokens work directly affects API costs, context limits, and response quality. Optimizing AI isn't about writing less; it's about structuring information better and managing context strategically.
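To make "not necessarily a whole word" concrete, here is a toy greedy longest-match subword splitter. The vocabulary is hand-picked purely for illustration — real tokenizers (BPE and friends) learn their vocabularies from data:

```python
def tokenize(text, vocab):
    """Greedy longest-match subword split (toy illustration only).

    Shows that a word absent from the vocabulary gets broken
    into smaller pieces, so tokens need not align with words.
    """
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):      # try the longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])             # unknown char becomes its own token
            i += 1
    return tokens

# "tokenization" is not in this toy vocab, so it splits into subwords.
vocab = {"token", "ization", " ", "cost", "s"}
print(tokenize("tokenization costs", vocab))
# → ['token', 'ization', ' ', 'cost', 's']
```

Two words became five tokens — which is exactly why token counts, not word counts, drive API cost and context limits.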

Read more →

Shadow AI: the invisible risk

Published: March 30, 2026

While companies build formal AI governance frameworks, employees are already using AI tools outside official channels without oversight. The solution is not to ban usage, but to design accessible, controlled alternatives.

Read more →

What is an AI model?

Published: March 25, 2026

An AI model is the final result of training a neural network. Real business value rarely comes from the model alone, but from the surrounding context: proprietary data, security layers, and controls.

Read more →

When the model learns to deceive

Published: March 23, 2026

AI models can learn to appear correct rather than be correct, optimizing the evaluation metric instead of the actual objective. A content moderation model that achieves 95% precision on balanced datasets can still fail in production because it learned to recognize evaluation patterns, not to solve the underlying problem. Maturity in AI lies in designing systems and evaluation methods that work reliably both under observation and outside it.

Read more →