How This Lexicon Is Organized

Neural networks are not a single idea; they are a system of interacting concepts.
The Neural Network Lexicon is structured around the four pillars that determine how models learn, behave, and succeed or fail.

Each section focuses on a different aspect of that system, while remaining tightly connected to the others.

Training & Optimization

How learning happens

This section explains how neural networks learn over time. It covers optimization algorithms, training dynamics, hyperparameters, batch behavior, and stability issues.

If you want to understand why training converges, diverges, or behaves unpredictably, start here.
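As a minimal illustration of those dynamics (a sketch, not an entry from this section), gradient descent on a one-dimensional quadratic shows how the learning rate alone can decide whether training converges or diverges:

```python
# Illustrative sketch: gradient descent on f(w) = (w - 3)^2,
# whose gradient is 2 * (w - 3). The learning rate alone
# determines whether the iterates settle or blow up.

def gradient_descent(lr, steps=50, w=0.0):
    for _ in range(steps):
        w -= lr * 2 * (w - 3)  # w <- w - lr * f'(w)
    return w

w_small_lr = gradient_descent(lr=0.1)  # settles near the minimum at w = 3
w_large_lr = gradient_descent(lr=1.1)  # overshoots further on every step
```

With lr = 0.1 each step shrinks the distance to the minimum by a factor of 0.8; with lr = 1.1 the same loop multiplies it by 1.2 per step and diverges.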

Explore Training & Optimization

Generalization & Evaluation

How performance is measured and trusted

Good training results do not guarantee real-world performance. This section focuses on generalization, evaluation metrics, calibration, uncertainty, and common evaluation failures such as leakage.

If you want to know whether a model’s predictions can be trusted, this is the right place.
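A toy example of why held-out evaluation matters (a hypothetical setup, not drawn from the lexicon's entries): a model that simply memorizes its training data scores perfectly on that data, while its held-out score exposes the label noise it memorized.

```python
import random

random.seed(0)

# Hypothetical dataset: x in [0, 1), true label 1 if x > 0.5,
# with 20% of labels flipped to simulate label noise.
data = []
for _ in range(200):
    x = random.random()
    y = int(x > 0.5)
    if random.random() < 0.2:
        y = 1 - y
    data.append((x, y))

train, test = data[:100], data[100:]

def predict_1nn(x):
    # A pure memorizer: return the label of the nearest training point.
    return min(train, key=lambda point: abs(point[0] - x))[1]

def accuracy(split):
    return sum(predict_1nn(x) == y for x, y in split) / len(split)

train_acc = accuracy(train)  # 1.0: every point is its own nearest neighbor
test_acc = accuracy(test)    # lower: the memorized noise does not generalize
```

Training accuracy here says nothing about trustworthiness; only the held-out score does.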

Explore Generalization & Evaluation

Data & Distribution

What the model actually learns from

Models learn from data, not intent. This section explains data quality, labeling issues, distributional assumptions, leakage, and how changes in data over time affect performance.

If your model behaves well in development but fails in production, the answer is often here.
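One crude but common check for that failure mode (a sketch with hypothetical numbers, not a production recipe) is comparing a live feature's statistics against the training distribution:

```python
import random
import statistics

random.seed(0)

# Hypothetical feature values: the production distribution has drifted
# upward relative to the data the model was trained on.
train_feature = [random.gauss(0.0, 1.0) for _ in range(1000)]
prod_feature = [random.gauss(0.8, 1.0) for _ in range(1000)]

def mean_shift(reference, live):
    # Drift signal: gap between the means, measured in units of the
    # reference standard deviation. A crude stand-in for proper
    # two-sample drift tests.
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(live) - statistics.mean(reference)) / sigma

drift = mean_shift(train_feature, prod_feature)  # roughly 0.8 here
```

A shift this large means the model is being asked about inputs it rarely or never saw during training.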

Explore Data & Distribution

Architecture & Representation

What the model is capable of learning

Architecture determines what patterns a neural network can express. This section covers neurons, layers, inductive bias, capacity, and learned representations such as embeddings.

If you want to understand why different architectures exist and what they assume, explore this section.
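One concrete way to see what capacity means (a hand-wired sketch, not an entry from this section): XOR cannot be expressed by any single linear layer, but one hidden layer with a ReLU nonlinearity is enough.

```python
def relu(x):
    return max(0.0, x)

# Hand-wired two-layer network computing XOR on {0, 1} inputs --
# a function no single linear layer can express.
def xor_net(x1, x2):
    h1 = relu(x1 + x2)        # counts how many inputs are on
    h2 = relu(x1 + x2 - 1.0)  # fires only when both inputs are on
    return h1 - 2.0 * h2      # OR-count minus twice the AND gives XOR

# xor_net(0, 0) -> 0.0    xor_net(0, 1) -> 1.0
# xor_net(1, 0) -> 1.0    xor_net(1, 1) -> 0.0
```

Adding the hidden layer changes what the network can represent at all, which is exactly the kind of question this section addresses.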

Explore Architecture & Representation

How to Use the Lexicon

You can read the lexicon linearly, starting from training fundamentals, or non-linearly, following links between related concepts.

Each entry is designed to:

  • define one concept clearly
  • explain why it matters
  • show how it connects to other ideas
  • provide minimal examples for intuition

Together, the entries form a concept graph, not a checklist.
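That structure can be pictured as a small graph (the concept names and links below are illustrative, not the lexicon's actual entry list): following edges from one entry reaches its neighbors, the way a reader follows links.

```python
from collections import deque

# Illustrative concept graph; the entries and links are made up
# for this sketch, not taken from the lexicon itself.
concepts = {
    "learning rate": ["optimization", "training stability"],
    "optimization": ["learning rate", "loss function"],
    "training stability": ["learning rate"],
    "loss function": ["optimization", "evaluation metrics"],
    "evaluation metrics": ["loss function"],
}

def reachable(start, graph):
    # Breadth-first walk: every concept a reader could reach by
    # following links outward from `start`.
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in graph[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

linked = reachable("learning rate", concepts)  # all five concepts connect
```

In a checklist the order is fixed; in a graph like this, any entry is a valid starting point.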

The Neural Network Lexicon is designed as a long-term reference: not tied to frameworks, trends, or hype, but focused on understanding how neural networks actually work.