A neuron is the fundamental building block of neural networks. It takes multiple inputs, multiplies each by a learnable weight, adds a bias, then passes the result through an activation function to produce an output. Adjust the sliders below to see how each parameter shapes the neuron's behavior in real time.
Think of weights as volume knobs — they control how much the neuron “cares” about each input. A large positive weight amplifies the signal; a negative weight inverts it; zero means the neuron ignores that input entirely.
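The computation the sliders drive can be sketched in a few lines of Python. The demo doesn't specify which activation function it uses, so a sigmoid is assumed here; the weight values are arbitrary examples chosen to show the "volume knob" effect:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by an activation (sigmoid assumed)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid maps any real number into (0, 1)

# Two inputs; flipping the sign of the first weight shows how a weight
# amplifies, inverts, or silences its input.
x = [1.0, 2.0]
print(neuron(x, [0.9, 0.5], 0.0))   # positive weight: input 1 pushes the output up
print(neuron(x, [-0.9, 0.5], 0.0))  # negative weight: input 1 pushes the output down
print(neuron(x, [0.0, 0.5], 0.0))   # zero weight: input 1 is ignored entirely
```

With a zero weight, changing that input has no effect at all on the output, which is exactly the "neuron ignores that input" case described above.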
For a sense of scale: GPT-4 is estimated to have roughly 1.8 trillion weights across its network, each one learned during training on billions of text examples. The single neuron above is the fundamental unit that, combined by the millions across many layers, gives rise to language understanding.