Weights and Bias

Short Definition

Weights scale inputs, and bias shifts the output of a neuron.

Definition

Weights determine how strongly each input influences a neuron’s output. A large weight amplifies an input, while a small or negative weight reduces or reverses its effect. Bias shifts the weighted sum before the activation, which lets the neuron produce a non-zero output even when all inputs are zero and effectively moves its activation threshold.

During training, learning consists almost entirely of adjusting weights and biases so that predictions better match the target values.
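As a sketch of that idea, a single neuron with one input can be fit by plain gradient descent on a squared error; the values below are illustrative, and the only quantities that change during training are the weight and the bias:

```python
# Fit output = w * x + b to a target by gradient descent on
# squared error. Learning only ever touches w and b.
x, target = 2.0, 5.0
w, b = 0.0, 0.0          # initial parameters
lr = 0.1                 # learning rate (illustrative choice)

for _ in range(100):
    pred = w * x + b
    error = pred - target
    # Gradients of (pred - target)**2 with respect to w and b
    grad_w = 2 * error * x
    grad_b = 2 * error
    w -= lr * grad_w
    b -= lr * grad_b

print(w * x + b)  # prediction is now very close to 5.0
```

The loop is the whole story of supervised training in miniature: compute a prediction, measure the error, and nudge the weight and bias in the direction that reduces it.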

Why It Matters

Weights and biases are the parameters that neural networks learn. Without them, no adaptation or learning is possible.

How It Works (Conceptually)

  • Each input has an associated weight
  • All weighted inputs are summed
  • A bias shifts the final value

Minimal Python Example

# Weighted sum of inputs plus bias for a single neuron
inputs = [1.0, 2.0]
weights = [0.5, -1.0]
bias = 0.1

# output = (1.0 * 0.5) + (2.0 * -1.0) + 0.1 = -1.4
output = sum(i * w for i, w in zip(inputs, weights)) + bias

Common Pitfalls

  • Treating bias as optional: without it, the neuron’s output is forced through the origin
  • Initializing all weights to the same value: identical neurons receive identical gradients and never differentiate
  • Forgetting that bias is trainable: it is updated by gradient descent just like the weights
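The symmetric-initialization pitfall can be seen directly. In this minimal sketch (hand-derived gradients, illustrative values), two hidden neurons that start with identical weights receive identical gradients, so they update in lockstep and remain copies of each other:

```python
# Tiny network: hidden h_k = w_k * x, output y = v1*h1 + v2*h2.
# Both hidden neurons start with the same weight.
x, target = 1.0, 2.0
w1 = w2 = 0.5            # symmetric initialization
v1 = v2 = 0.5

h1, h2 = w1 * x, w2 * x
y = v1 * h1 + v2 * h2
error = y - target

# Gradients of (y - target)**2 with respect to w1 and w2
grad_w1 = 2 * error * v1 * x
grad_w2 = 2 * error * v2 * x

print(grad_w1 == grad_w2)  # True: both neurons update identically
```

This is why weights are normally initialized randomly: random values break the symmetry so each neuron can learn a different feature.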

Related Concepts

  • Neuron
  • Gradient Descent
  • Backpropagation