…or, do…but on the third syllable!
here’s the web app (and it’s a PWA, so available offline!)
source code at https://github.com/lpmi-13/stress-match-game
This micromaterial is mostly an ode to my first love among corpus-informed word lists: the Academic Word List (Coxhead, 2000). Since these words tend to be multisyllabic and a bit less common in informal English, learners often mispronounce them, with stress errors generally contributing more to the confusion than errors involving things like vowel sounds.
I thought it might be helpful for learners to be able to practice predicting primary stress patterns from these academic words, so I used the AWL as a data set to create a simple matching game, much in the same way as I made a similar game about rhyming final syllables.
At the start, users see a list of words, each paired with a specific stress pattern (currently only 2- and 3-syllable patterns).

After selecting a pattern, the user sees 12 tiles, each showing a different word. Half of the words match the pattern and half don't; the user needs to tap all the matching tiles to win the game.
Following some excellent advice from Maura Phelan, I reworked the interface a bit so that users can still see the word after selecting it, giving persistent feedback about which words were correctly or incorrectly selected. Previously, the tile would just flip and display green or red; now it's a bit easier to keep seeing the pattern in words that have already been guessed.
Additionally, the current progress towards 6 correct matches is now more obvious, with a “0/6” progress status shown at the top. These two improvements have also been propagated to the rhyme matching web app mentioned above.
I haven't had time to really polish the layout across screen sizes, so the web app still doesn't look great on larger laptop and desktop screens. The intended audience is mobile phone users anyway, though, and it finally looks okay on an iPhone, so I'll probably leave it for now.
The design could also be tightened up: ideally, when the pattern to match is two syllables, only 2-syllable words would appear as choices, instead of a randomly selected mix of other patterns. Currently, if you select a 2-syllable pattern and all the non-matching choices happen to be three syllables, you don't technically need any stress information to make the correct choices, so the construct validity could be improved a bit.
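A minimal sketch of what that tightening could look like, filtering the candidate pool down to the target pattern's syllable count before splitting it into matches and distractors. The word-object shape, function names, and word list here are my assumptions for illustration, not the app's actual code:

```javascript
// Assumed word shape: syllable count plus the index of the primary stress.
// The word list and all names are illustrative, not taken from the app.
const words = [
  { text: "concept", syllables: 2, stress: 0 },
  { text: "context", syllables: 2, stress: 0 },
  { text: "income",  syllables: 2, stress: 0 },
  { text: "method",  syllables: 2, stress: 0 },
  { text: "data",    syllables: 2, stress: 0 },
  { text: "item",    syllables: 2, stress: 0 },
  { text: "assess",  syllables: 2, stress: 1 },
  { text: "create",  syllables: 2, stress: 1 },
  { text: "obtain",  syllables: 2, stress: 1 },
  { text: "respond", syllables: 2, stress: 1 },
  { text: "assume",  syllables: 2, stress: 1 },
  { text: "derive",  syllables: 2, stress: 1 },
  { text: "analyse", syllables: 3, stress: 0 }, // filtered out of 2-syllable rounds
];

// Shuffle a copy (crude but fine for a sketch) and take the first n items.
const pick = (arr, n) => [...arr].sort(() => Math.random() - 0.5).slice(0, n);

function buildRound(pattern, pool, size = 12) {
  // Restrict to words with the target syllable count, so a non-matching
  // word can't be ruled out by its length alone.
  const candidates = pool.filter(w => w.syllables === pattern.syllables);
  const matching = candidates.filter(w => w.stress === pattern.stress);
  const others = candidates.filter(w => w.stress !== pattern.stress);
  // Half the tiles match the pattern, half don't.
  return pick([...pick(matching, size / 2), ...pick(others, size / 2)], size);
}
```

With that constraint, a round for `{ syllables: 2, stress: 0 }` contains only 2-syllable words, six matching the pattern and six not, so the stress information is genuinely required to win.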
In terms of future developments, I'd love to have the user listen to a particular stress pattern and then use that as the prompt for guessing the matches, or, even more ideally, have them speak the target words, approximating the appropriate pattern…
Unfortunately, the web-based speech APIs are a bit finicky (which is another way of saying I don't have much experience with them), and presumably we'd need a trained language model in the backend somewhere to actually evaluate whether users are, indeed, approximating the correct stress pattern.
The current state of the technology makes it all a “difficult-to-solve-by-myself-now” type of problem, and there are other, currently easier, problems to jump into anyway.
The next iteration of working with stress will probably be an actual mobile app, using the vibration motor to give haptic feedback about stress patterns (this is, in theory, accessible via the web APIs as well, though not great on iPhones, so probably out of the question). That'll be a foray into React Native, which I've been looking to get some experience with anyway, so a double win!
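As a rough idea of how that haptic feedback might work, here's a sketch that maps a stress pattern to the millisecond on/off array that the web's `navigator.vibrate()` accepts (React Native's `Vibration` module takes a similar pattern argument, though the semantics differ between platforms). The durations are arbitrary values I picked for illustration, not anything from an existing implementation:

```javascript
// Sketch: map a stress pattern (array of 0/1 per syllable, 1 = stressed)
// to the [buzz, pause, buzz, ...] millisecond array that navigator.vibrate()
// expects. All durations are arbitrary assumptions.
function stressToVibration(pattern, { strong = 400, weak = 100, gap = 150 } = {}) {
  const out = [];
  pattern.forEach((syllable, i) => {
    if (i > 0) out.push(gap);            // silent gap between syllables
    out.push(syllable ? strong : weak);  // long buzz = stressed syllable
  });
  return out;
}

// e.g. "aCAdemy" (o O o o):
// stressToVibration([0, 1, 0, 0]) -> [100, 150, 400, 150, 100, 150, 100]
```

In a browser, `navigator.vibrate(stressToVibration([0, 1, 0, 0]))` would then buzz weak-STRONG-weak-weak, tracing the stress pattern of "academy" on the user's phone.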