Short Definition
A neuron is the smallest computational unit in a neural network.
Definition
A neuron is a simple mathematical function that combines multiple input values into a single output. Each input is multiplied by a weight, all weighted inputs are summed, and a bias term is added to shift the result. On its own, a neuron therefore performs a purely linear computation (strictly speaking an affine one, because of the bias).
Neurons become powerful only when they are combined into layers and paired with activation functions. In this way, complex models are built from many very simple, repeated units.
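As a minimal sketch of this pairing (assuming a ReLU activation, one common choice; the function names here are illustrative, not from any particular library), the linear neuron computation followed by a nonlinearity looks like:

```python
def relu(x):
    # ReLU activation: zero for negative inputs, identity otherwise
    return max(0.0, x)

def neuron_with_activation(inputs, weights, bias):
    # Linear part: weighted sum of inputs plus bias
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Nonlinear part: activation applied to the linear output
    return relu(z)
```

Without the activation, stacking many such units would still collapse into one big linear function; the nonlinearity is what lets layers of neurons model complex relationships.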
Why It Matters
Every neural network is ultimately just a collection of neurons. If you understand what a single neuron computes, you understand the foundation of all neural models.
How It Works (Conceptually)
- Multiply inputs by weights
- Sum the results
- Add a bias
- Pass the result forward (optionally through an activation)
Minimal Python Example
def neuron(inputs, weights, bias):
    """Compute the weighted sum of the inputs plus a bias."""
    # Note: zip stops at the shorter sequence, so inputs and
    # weights should have the same length.
    return sum(x * w for x, w in zip(inputs, weights)) + bias
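A quick usage sketch (the definition is repeated so the snippet runs on its own; the input values are arbitrary):

```python
def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, as defined above
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# 1.0 * 0.5 + 2.0 * (-0.25) + 0.1 = 0.1
output = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
print(output)
```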
Common Pitfalls
- Thinking neurons are intelligent entities
- Confusing inputs with weights
- Ignoring the role of bias
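On the last pitfall: with all-zero inputs, the weights contribute nothing, so only the bias can shift the output away from zero. A small sketch (repeating the definition so it runs standalone, with arbitrary example values):

```python
def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Same weights, same (all-zero) inputs; only the bias differs.
print(neuron([0.0, 0.0], [0.3, -0.7], bias=0.0))  # 0.0
print(neuron([0.0, 0.0], [0.3, -0.7], bias=1.5))  # 1.5
```

Without a bias, every neuron would be forced to output zero whenever its inputs are zero, which limits what the network can represent.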
Related Concepts
- Weights and Bias
- Activation Functions
- Forward Pass