Unit 3: Neural Networks

Lesson 2: How Neural Networks Learn (1 hour)

Learning Objectives

  • Describe the learning process in neural networks
  • Explain what weights are and how they're adjusted
  • Explain the basic idea of backpropagation (conceptually)
  • Observe how training improves network performance

Materials Needed

  • Internet connection
  • Neural network training visualization
  • Examples of learning process
  • Student notebooks
  • Whiteboard for diagrams

Time Breakdown

  • Review neural network structure (5 min)
  • Introduction to learning (15 min)
  • Weights and adjustments (15 min)
  • Training process visualization (20 min)
  • Wrap-up (5 min)

Activities

1. Review Neural Network Structure (5 min)

  • What are the three main parts? (Input, hidden, output layers)
  • What connects neurons? (Connections with weights)
  • Bridge: "How do networks learn the right weights?"

2. Introduction to Learning (15 min)

The Learning Problem:

  • Neural network starts with random weights
  • Doesn't know anything yet
  • Needs to learn correct weights to make good predictions

The Learning Process:

  1. Feed forward: Input data flows through network → prediction
  2. Compare: Measure how far the prediction is from the correct answer → error
  3. Backpropagate: Send error back through network
  4. Adjust weights: Change weights to reduce error
  5. Repeat: Do this many times with many examples
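For older or advanced students, the five steps above can be sketched in a few lines of Python. The single-weight neuron, the toy data (learning y = 2x), and the learning rate are all illustrative choices, not part of the lesson:

```python
# Minimal sketch of the learning loop: one artificial neuron with a
# single weight, learning y = 2*x from three examples.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.1           # starts "random": the network knows nothing yet
learning_rate = 0.05

for epoch in range(200):                 # 5. repeat many times
    for x, target in examples:
        prediction = weight * x          # 1. feed forward
        error = target - prediction      # 2. compare to correct answer
        # 3-4. send the error back and adjust the weight to reduce it
        weight += learning_rate * error * x

print(round(weight, 2))  # converges toward 2.0
```

Each pass nudges the weight a little in the direction that shrinks the error, which is the whole loop in miniature: predict, compare, adjust, repeat.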

Simple Analogy: Learning to Throw a Ball

  • First attempts: Way off target (random weights)
  • Each throw: See how close you got (compare prediction)
  • Adjust: Change how you throw (adjust weights)
  • Many attempts: Get better (training)
  • Eventually: Hit target consistently (learned)

Key Concept:

  • Network makes a guess
  • Sees how wrong it was
  • Adjusts to be less wrong
  • Repeats until good enough

3. Weights and Adjustments (15 min)

What are Weights?

  • Numbers that determine connection strength
  • Like: How much does this input matter?
  • Example:
    • Input: "Has fur" (weight: 0.8 - very important)
    • Input: "Color" (weight: 0.3 - less important)
    • Input: "Size" (weight: 0.5 - somewhat important)
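The fur/color/size example can be shown as the weighted sum a neuron actually computes. The 1.0/0.2/0.6 input values below are made up for illustration; only the weights come from the example above:

```python
# A neuron combines its inputs as a weighted sum: bigger weight means
# that input matters more to the decision.
inputs  = {"has_fur": 1.0, "color": 0.2, "size": 0.6}
weights = {"has_fur": 0.8, "color": 0.3, "size": 0.5}

total = sum(inputs[name] * weights[name] for name in inputs)
# "has_fur" contributes 1.0 * 0.8 = 0.80, while "color" contributes
# only 0.2 * 0.3 = 0.06: the larger weight dominates the result.
print(round(total, 2))  # 0.80 + 0.06 + 0.30 = 1.16
```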

Initial Weights:

  • Start random (network knows nothing)
  • Like: Guessing randomly

Learning = Adjusting Weights:

  • If network's prediction is wrong:
    • Increase weights that would have helped
    • Decrease weights that caused mistakes
  • Over time, weights converge to good values
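One adjustment step makes the "increase what helped, decrease what hurt" rule concrete. This is a perceptron-style update sketch with made-up numbers, not the full backpropagation algorithm:

```python
# One weight adjustment after a wrong (too-low) prediction.
learning_rate = 0.1
inputs  = [1.0, 0.0, -1.0]
weights = [0.5, 0.5, 0.5]

prediction = sum(x * w for x, w in zip(inputs, weights))  # 0.0
target = 1.0
error = target - prediction  # positive: prediction was too low

# Each weight moves in proportion to its input: the weight on the
# positive input grows (it would have helped), the weight on the
# negative input shrinks (it pushed the wrong way).
weights = [w + learning_rate * error * x for x, w in zip(inputs, weights)]
print([round(w, 1) for w in weights])  # roughly [0.6, 0.5, 0.4]
```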

Learning Rate:

  • How much to adjust weights
  • Too small: Learns very slowly
  • Too large: Might overshoot and never converge
  • Just right: Learns efficiently
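The three cases above can be demonstrated with one tiny experiment. The setup (a single weight learning the value 2.0, and the specific rates 0.01, 0.5, and 2.5) is illustrative:

```python
# Same update rule, three learning rates: too small, about right, too large.
def train(learning_rate, steps=20):
    weight, x, target = 0.0, 1.0, 2.0
    for _ in range(steps):
        error = target - weight * x
        weight += learning_rate * error * x
    return weight

print(train(0.01))  # too small: still far from 2.0 after 20 steps
print(train(0.5))   # about right: very close to 2.0
print(train(2.5))   # too large: each step overshoots, and it diverges
```

With the large rate, every adjustment jumps past the target by more than it corrected, so the error grows instead of shrinking.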

Visual Example:

  • Show simple network with weights
  • Show one training example
  • Show how weights adjust
  • Show how prediction improves

4. Training Process Visualization (20 min)

Activity 1: Watch Training in Action (10 min)

  • Use an online neural network training visualization (e.g., TensorFlow Playground)
  • Show network learning to classify images or recognize patterns
  • Observe:
    • Starting accuracy (random guessing, ~50% for a two-class task)
    • How accuracy improves over time
    • How weights change
    • How network gets better with each example

Activity 2: Human Learning Simulation (10 min)

  • Students act as neural network
  • Setup: 3 students as input neurons, 2 as hidden, 1 as output
  • Task: Learn to recognize "happy" vs. "sad" faces
  • Process:
    1. Show input (face description)
    2. Each neuron makes decision based on weights
    3. Output neuron makes prediction
    4. Teacher gives correct answer
    5. Students adjust their "weights" (how they decide)
    6. Repeat with new example
  • Observe: Gets better over time!

Key Observations:

  • Starts with random guesses
  • Gets better with each example
  • Eventually makes good predictions
  • More examples = better learning

5. Wrap-Up (5 min)

  • Learning = adjusting weights to reduce errors
  • Process: Predict → Compare → Adjust → Repeat
  • More training = better performance
  • Preview: Next lesson - Deep learning and why depth matters

Differentiation Strategies

  • Younger students: Focus on analogies, simpler explanations, hands-on simulation
  • Older students: Introduce gradient descent concept, explore learning rates, research backpropagation
  • Struggling learners: Use physical simulation, simpler examples, more repetition
  • Advanced learners: Research optimization algorithms, explore different loss functions, analyze training dynamics

Assessment

  • Understanding of learning process
  • Participation in activities
  • Quality of observations
  • Reflection journal entry