Mechanism: Working memory capacity is reduced by high representational similarity and deep recursive binding, but enhanced by distinct item representations and chunked operators. Readout: a 'Contextual WM Model' predicts effective capacity with 92% accuracy, significantly outperforming the 'Traditional Slot Model' at 45%.
When maintaining multiple operator-operand bindings simultaneously (e.g., tracking f(x), g(x), h(x)), does the effective WM capacity depend on the discriminability between the operators and operands being held?
Similarity-based interference (Oberauer, Halford) suggests that when operators and operands share representational features, binding confusion increases and effective capacity drops — even if the nominal number of items is within typical WM limits.
Is there a trade-off between recursive depth and vocabulary richness that partially compensates for WM limitations? That is: if an agent cannot maintain deep nested computations (f(g(h(x)))), can a larger repertoire of chunked, single-step operations (each encoding what would otherwise require multi-step composition) substitute for recursive depth — at least within familiar domains? This would parallel how crystallized intelligence compensates for fluid intelligence decline.
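The depth-for-vocabulary trade-off described above can be sketched in code. This is a toy illustration only; the functions f, g, h and the chunked operator fgh are arbitrary stand-ins, not anything specified in the source:

```python
# Toy illustration (hypothetical functions): substituting a chunked,
# single-step operation for a deep nested composition f(g(h(x))).

def h(x): return x + 1
def g(x): return x * 2
def f(x): return x - 3

# Deep composition: three operator-operand bindings must be tracked
# simultaneously while the intermediate results are held in WM.
def nested(x):
    return f(g(h(x)))

# Chunked operator: one pre-compiled binding encodes the same mapping,
# trading WM load (composition depth) for long-term memory (a richer
# vocabulary of single-step operations).
def fgh(x):
    return (x + 1) * 2 - 3

assert nested(5) == fgh(5)  # identical behavior, lower binding demand
```

The chunked form only substitutes within familiar domains: fgh is useful precisely because the composition it compresses has been encountered often enough to be stored as a unit, mirroring the crystallized-for-fluid compensation noted above.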
If so, does this imply that WM capacity is better modeled not as a fixed slot count but as a function of (a) relational binding demand, (b) representational contrast between bound items, and (c) the availability of compressed/chunked operators that reduce composition depth?
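The functional view of capacity proposed in the question can be made concrete with a minimal sketch. All parameters and the functional form below are hypothetical illustrations of factors (a)-(c), not the fitted 'Contextual WM Model' from the source:

```python
# Toy sketch (hypothetical form and parameters): effective WM capacity
# as a function of (a) relational binding demand, (b) representational
# contrast between bound items, and (c) chunked-operator availability.

def effective_capacity(nominal_slots, binding_demand, contrast, chunk_coverage):
    """
    nominal_slots  : baseline item limit of a slot model (e.g. 4)
    binding_demand : number of operator-operand bindings to maintain
    contrast       : 0..1, representational distinctness of bound items
    chunk_coverage : 0..1, fraction of binding demand absorbed by chunks
    """
    # Chunked operators absorb part of the binding demand before it loads WM.
    residual_demand = binding_demand * (1.0 - chunk_coverage)
    # Low contrast inflates the cost of each remaining binding
    # (similarity-based interference).
    interference_cost = residual_demand * (1.0 - contrast)
    return max(0.0, nominal_slots - interference_cost)

# Distinct items plus rich chunking preserve capacity near the nominal limit;
# similar items under unchunked binding erode it well below the slot count.
print(effective_capacity(4, binding_demand=3, contrast=0.9, chunk_coverage=0.5))
print(effective_capacity(4, binding_demand=3, contrast=0.2, chunk_coverage=0.0))
```

The point of the sketch is structural: a fixed slot count appears only as a baseline term, while the realized capacity varies continuously with the three factors the question names.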