Last weekend, our lab gave a demo at the National Museum of Nature and Science here in Tokyo. They were running an event called Science Square, where kids can come and meet scientists and engineers and participate in hands-on activities. If you’ve never been to the museum, it’s a really cool place, right inside Ueno Park. They’ve got dinosaurs!
We didn’t present anything about plesiosaurs, though. Our lab gave a demo aimed at elementary school students on how human speech works. More than 300 kids walked through and got to make their own recordings, play with the equipment, and take home a printout of their own voice. We had a few curious adults come through as well. Naturally, I was tasked with explaining things to foreign visitors, but honestly anyone in our lab has good enough English to do that. Mostly, I explained the basic theory, helped the kids make recordings, and used my experience as an Eikaiwa clown to amuse the little ones.
First, we gave them a short explanation of the Source Filter Theory—although obviously we didn’t call it that! We had the kids say “ah” while touching their throat to confirm that their vocal folds are vibrating. “It goes buru-buru, doesn’t it?” Then we talked about how the shape of the mouth changes the sound, and used plastic tubes to show how that works. I’ve posted about the tubes previously, so check out that video if you want to see and hear how they work.
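If you want the grown-up version of that explanation, the whole idea fits in a few lines of code. This is just a toy sketch I'm including for the curious, not anything we ran at the event: a pulse train stands in for the buzzing vocal folds (the source), and a few resonant filters stand in for the mouth (the filter). The formant frequencies and bandwidths below are ballpark textbook values for an /a/-ish vowel, not measurements.

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000                    # sample rate (Hz)
f0 = 120                      # pitch: how fast the "vocal folds" buzz

# Source: one second of glottal pulses, one every fs // f0 samples
source = np.zeros(fs)
source[::fs // f0] = 1.0

def resonator(x, freq, bw, fs):
    """Second-order resonator: boosts energy near `freq` (Hz)."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]
    return lfilter([1.0], a, x)

# Filter: a cascade of resonators at rough /a/ formant values
vowel = source
for freq, bw in [(700, 130), (1200, 70), (2600, 160)]:
    vowel = resonator(vowel, freq, bw, fs)

vowel /= np.abs(vowel).max()  # normalize so it's playable
```

Play `vowel` through any audio library and you get a robotic but recognizable "ah"; swap in a different formant list and the very same buzz becomes a different vowel. Same source, different filter — that's the theory in one loop.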
These models that Professor Arai made are super popular. Experts love them, students love them, even people who know absolutely nothing about phonetics think they’re pretty cool. Geeky explanation here.
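Here's my own quick geeky aside on why tubes work at all (the numbers are textbook approximations, not anything specific to Professor Arai's models): a uniform tube closed at one end (the glottis) and open at the other (the lips) resonates at odd multiples of c/4L, and for a roughly 17 cm vocal tract that lands you right in classic formant territory.

```python
# Resonances of a uniform tube, closed at one end and open at the other:
#   f_n = (2n - 1) * c / (4 * L)
c = 350.0   # speed of sound in warm, humid air (m/s) -- a round number
L = 0.17    # rough adult vocal-tract length (m)

resonances = [(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)]
# roughly 515, 1544, and 2574 Hz -- the right neighborhood for a neutral vowel
```

Bend or constrict the tube and those resonances shift, which is exactly what your tongue is doing when you change vowels.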
After that, the kids did a little matching game. We had them say “aiaiaia”, “asasasa”, and “itatata” into a microphone, and they got to watch a spectrogram of their voice up on a screen. We explained a little about how the sounds work, really simple stuff like “the t makes a white space, because for a very short time there’s no sound”. Then they drew lines to connect each phrase to the matching spectrogram. That’s right, we taught six-year-olds how to read a spectrogram! They were pretty good at it, too.
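You can see that white-space effect for yourself with a fake stop consonant — a tone, a short silence, then another tone. This is just an illustrative sketch, not the software we used at the event, but the silence shows up as exactly the white stripe we were pointing out to the kids:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(int(0.2 * fs)) / fs            # 200 ms per voiced segment

# tone -- gap -- tone: a crude stand-in for the closure of a /t/
tone = np.sin(2 * np.pi * 440 * t)
gap = np.zeros(int(0.1 * fs))                # 100 ms of silence
signal = np.concatenate([tone, gap, tone])

f, times, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)

# Each column of Sxx is a time slice; the columns inside the gap are
# near-zero, which is what prints as a white stripe on the picture
frame_energy = Sxx.sum(axis=0)
```

Plot `Sxx` with time on the x-axis and frequency on the y-axis and the gap jumps right out — which is why even six-year-olds could match the pictures to the phrases.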
From there, we recorded them saying their name, and printed the spectrogram out on a little card. This part was by far the most time-consuming, so there was always a little crowd of kids piled up, waiting for the lab members to copy, paste, crop, zoom, and edit the spectrograms into a little template. I was really impressed with my fellow lab members, though—they were very good at chatting with the kids and making them feel comfortable.
The last step was pretty dang cool. The kids took the printout of their voice and put it under a camera. Using a special spectrogram-reading program, the computer played back their voice. Not perfectly, mind you; it sounded pretty robotic, since there’s no pitch information, and the details get fuzzed out a bit. The neat thing about the program, though, is that it will translate any red line on the spectrogram into a pitch contour. The kids got to bend a piece of red wire and lay it on the card, and listen as the program played their own voice with whatever intonation they wanted. Neat!
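I don't know exactly how that program works internally, but the pitch-bending half is easy to sketch: read the red line as a frequency value per time step, then accumulate phase so the buzz follows it. Here's a toy version, with a made-up rising contour standing in for the bent wire:

```python
import numpy as np

fs = 16000
n = int(fs * 1.0)                      # one second of audio

# Pretend the red wire traced a rise from 100 Hz up to 300 Hz
contour = np.linspace(100, 300, n)

# Phase accumulation: each sample advances the phase by 2*pi*f/fs,
# so the instantaneous frequency tracks the contour sample by sample
phase = 2 * np.pi * np.cumsum(contour) / fs
buzz = np.sin(phase)
```

Feed a buzz like that into the filter half of the source-filter story and you'd get the same voice wearing a brand-new melody — which, as far as I can tell, is more or less what the kids were hearing.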
I really just did the first two parts and explained things to parents and foreign visitors, so I didn’t have much of a hand in planning or organizing the event. All the credit for that goes to Ayaka Nakajima, a second-year Master’s student. She planned it all, made the worksheets, and even drew the cute little characters. She’s a superstar!