

Chunking leverages long-term memory for the chunks, i.e. we recognize and remember familiar chunks much more easily, sometimes algorithmically. This is much easier to explain in the domain of letters/words, e.g. we'd recognize USA as a substring among random letters. Similarly, most would recognize the pattern 1945 (WWII end) by paired association, or 12321 algorithmically. In the classic case of SF, he used sequences familiar to him (running times) to greatly improve his performance.
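
To make the chunking idea concrete, here's a toy Python sketch; the chunk inventory and the greedy longest-match rule are invented for illustration, not a model of what SF actually did:

    # Toy illustration of chunking: segment a digit string greedily,
    # preferring "familiar" chunks stored in long-term memory (here, a dict).
    FAMILIAR = {
        "1945": "end of WWII",           # paired association
        "12321": "palindrome",           # recognized algorithmically
        "0359": "3:59, a running time",  # SF-style encoding
    }

    def chunk(digits):
        out, i = [], 0
        while i < len(digits):
            # Try the longest familiar match first, up to 5 digits.
            for size in range(min(5, len(digits) - i), 0, -1):
                piece = digits[i:i + size]
                if piece in FAMILIAR:
                    out.append((piece, FAMILIAR[piece]))
                    i += size
                    break
            else:  # nothing familiar here: store a raw digit
                out.append((digits[i], "raw digit"))
                i += 1
        return out

    print(chunk("719450359"))
    # [('7', 'raw digit'), ('1945', 'end of WWII'), ('0359', '3:59, a running time')]
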
Another example (cf. Memory Search By A Memorist) is Rajan's ability to recognize 13- to 17-digit chunks visually/syntactically, without assigning them meaning. Ishihara, who could memorize overall longer sequences than Rajan but was much slower, used a method to convert them into syllables; he was naturally very gifted at remembering nonsense syllables, though (cf. Superior Memory). A few other association methods exist, including one based on places ("loci"), which relies on visual memory, etc.
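
A rough sketch of the digit-to-syllable idea (the one-reading-per-digit table below is a made-up simplification; the real trick exploits the multiple readings Japanese digits have, picking whichever reading forms a memorable word):

    # Toy digit-to-syllable encoder in the spirit of Ishihara's method.
    SYLLABLE = {
        "0": "re", "1": "i", "2": "ni", "3": "mi", "4": "yo",
        "5": "go", "6": "ro", "7": "na", "8": "ha", "9": "ku",
    }

    def to_syllables(digits):
        # A pronounceable rendering is easier to hold than raw digits.
        return "-".join(SYLLABLE[d] for d in digits)

    print(to_syllables("1945"))  # i-ku-yo-go
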
It's possible to train the average person using these techniques and attain significant improvements; the graph below (the 3rd one being the relevant one, but I've included all of them for the caption) is from a recent PhD thesis. Some clarification regarding these graphs: it takes a significant amount of practice/time to become proficient/fast in the encoding/decoding techniques. The other two tests are self-paced (just with an overall time limit) memorization tests, i.e. how many words/digits in a sequence can be retained in a 5-minute interval, with no time pressure for learning each word/digit. The digit span was tested with 2-second intervals, which probably helped with learning a bit. Other authors have used 1 second, which makes it harder to apply encoding/decoding techniques. The training protocol (for the blue lines) was fairly involved, using the Memocamp software, which can display the memorization aids (loci or images) and can also display a configurable metronome. "SEM" stands for "Standard Error of the Mean" (the error bars).

The handful of people (not in this thesis/experiment) who have practiced for years (one of SF's colleagues and two in a replication experiment) have reached 80-100 digits in the digit span test. For that level of performance, they used not only an associative system but also a hierarchical chunking method.
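
The fixed-interval presentation described above is easy to reproduce for anyone who wants to try; below is a minimal terminal sketch of a paced digit-span trial. It only illustrates the pacing (2 seconds per digit here), it is not the Memocamp software, and the adaptive up/down rule is my own simplification:

    # Minimal paced digit-span trial: one digit per fixed interval,
    # then the subject types the whole sequence back.
    import random
    import time

    def digit_span_trial(span, interval=2.0):
        digits = [str(random.randint(0, 9)) for _ in range(span)]
        for d in digits:
            print(d, flush=True)
            time.sleep(interval)      # the "metronome": 2 s per digit
            print("\033c", end="")    # ANSI reset: blank the screen
        answer = input("type the %d digits back: " % span).strip()
        return answer == "".join(digits)

    # Simple adaptive rule: grow the span on success, shrink on failure.
    span = 4
    for _ in range(10):
        span += 1 if digit_span_trial(span) else -1
        span = max(1, span)
    print("final span estimate:", span)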
