
Neuron Playground

A neuron is the fundamental building block of neural networks. It takes multiple inputs, multiplies each by a learnable weight, adds a bias, then passes the result through an activation function to produce an output. Adjust the sliders below to see how each parameter shapes the neuron's behavior in real time.
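The computation described above can be sketched in a few lines of Python. This is a minimal illustration, not the playground's actual implementation; the weight values mirror the demo's defaults (w1 = 0.4, w2 = 1.5, w3 = −0.6) and the activation is the ReLU used below.

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, through ReLU."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, z)  # ReLU activation: negative sums produce 0

# Demo defaults: all inputs at 0.0, bias 0.0, so the output is 0.00
print(neuron([0.0, 0.0, 0.0], [0.4, 1.5, -0.6], 0.0))  # 0.0
```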

Inputs & Weights
Input x1
I see steam rising
Steam doesn't always mean danger — we see steam from hot showers and kettles. Slightly positive weight.
x1 0.0
w1 0.4
0.0 × 0.4 = 0.00
Input x2
The burner is glowing red
A glowing red burner is strong visual evidence of extreme heat. High positive weight — this is a reliable danger signal.
x2 0.0
w2 1.5
0.0 × 1.5 = 0.00
Input x3
Someone says it's cool
Someone saying "it's cool" should reduce the danger signal — but people can be wrong. Negative weight, but not too strong.
x3 0.0
w3 -0.6
0.0 × -0.6 = 0.00
Bias
b 0.0
[Diagram: inputs x1, x2, x3 (all 0.0) are multiplied by weights 0.4, 1.5, −0.6, summed, and passed through σ to produce the output 0.00]
Output
Weighted Sum
0.4·0.0 + 1.5·0.0 + (-0.6)·0.0 + 0.0
= 0.00
Activation Function
ReLU(x) = max(0, x)
Neuron Output
0.00
Signal strength scale: −1 to +1
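With the sliders at zero, the output is 0.00; pushing the inputs up makes the arithmetic more interesting. As an illustrative case (these input values are my own, not the demo's defaults), suppose all three signals are fully present: steam rising, burner glowing, and someone saying it's cool.

```python
def relu(z):
    return max(0.0, z)

weights = [0.4, 1.5, -0.6]   # steam, glowing burner, "it's cool"
bias = 0.0
inputs = [1.0, 1.0, 1.0]     # all three signals fully present

z = sum(x * w for x, w in zip(inputs, weights)) + bias
print(round(z, 2))        # 0.4 + 1.5 - 0.6 + 0.0 = 1.3
print(round(relu(z), 2))  # 1.3: the sum is positive, so ReLU passes it through
```

The strong "burner glowing" signal dominates: even with the reassurance pulling the sum down by 0.6, the neuron still fires with a clearly positive output.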
Analogy: Weights as Volume Knobs

Think of weights as volume knobs — they control how much the neuron “cares” about each input. A large positive weight amplifies the signal; a negative weight inverts it; zero means the neuron ignores that input entirely.
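The "knob turned to zero" case is easy to verify directly: with a weight of 0, changing that input cannot move the output at all. A small sketch (the helper `neuron` and the specific values are illustrative):

```python
def neuron(inputs, weights, bias):
    return max(0.0, sum(x * w for x, w in zip(inputs, weights)) + bias)

# Turn w2 down to 0: the neuron becomes deaf to x2.
ws = [0.4, 0.0, -0.6]
out_a = neuron([0.0, 0.0, 0.0], ws, 0.5)
out_b = neuron([0.0, 1.0, 0.0], ws, 0.5)  # only x2 changed
print(out_a == out_b)  # True: the output is unchanged, x2 is ignored
```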

Scale: From One Neuron to Trillions of Weights

At real-world scale: GPT-4 is estimated to have roughly 1.8 trillion weights across its network, each one learned during training by processing billions of text examples. The single neuron above is the fundamental building block that, combined in layers by the millions, gives rise to language understanding.