Animesh Nighojkar

Ph.D. Student





Advancing Machine and Human Reasoning (AMHR) Lab

University of South Florida



Cognitive Modeling of Semantic Fluency Using Transformers


Workshop paper


Animesh Nighojkar, Anna Khlyzova, John Licato
Cognitive Aspects of Knowledge Representation Workshop at IJCAI-ECAI, 2022

Semantic Scholar · arXiv · DBLP · DOI
Cite

APA
Nighojkar, A., Khlyzova, A., & Licato, J. (2022). Cognitive Modeling of Semantic Fluency Using Transformers. Cognitive Aspects of Knowledge Representation Workshop at IJCAI-ECAI.


Chicago/Turabian
Nighojkar, Animesh, Anna Khlyzova, and John Licato. “Cognitive Modeling of Semantic Fluency Using Transformers.” Cognitive Aspects of Knowledge Representation Workshop at IJCAI-ECAI (2022).


MLA
Nighojkar, Animesh, et al. “Cognitive Modeling of Semantic Fluency Using Transformers.” Cognitive Aspects of Knowledge Representation Workshop at IJCAI-ECAI, 2022.


BibTeX

@inproceedings{animesh2022a,
  title = {Cognitive Modeling of Semantic Fluency Using Transformers},
  year = {2022},
  booktitle = {Cognitive Aspects of Knowledge Representation Workshop at IJCAI-ECAI},
  author = {Nighojkar, Animesh and Khlyzova, Anna and Licato, John}
}

Abstract

Can deep language models be explanatory models of human cognition? If so, what are their limits? To explore this question, we propose an approach called hyperparameter hypothesization, which uses predictive hyperparameter tuning to find individuating descriptors of cognitive-behavioral profiles. We take the first step in this approach by predicting human performance in the semantic fluency task (SFT), a well-studied task in cognitive science that has never before been modeled using transformer-based language models (TLMs). In our task setup, we compare several approaches to predicting which word an individual performing SFT will utter next. We report preliminary evidence suggesting that, despite obvious implementational differences in how people and TLMs learn and use language, TLMs can be used to identify individual differences in human fluency task behaviors better than existing computational models, and may offer insights into human memory retrieval strategies, cognitive processes not typically considered to be the kinds of things TLMs can model. Finally, we discuss the implications of this work for cognitive modeling of knowledge representations.
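To make the task setup concrete, the sketch below shows one plausible way a causal TLM could rank candidate next words given the sequence a participant has produced so far in an SFT session. This is not the paper's code: the choice of GPT-2, the prompt format, and the word lists are all illustrative assumptions.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

history = ["dog", "cat", "horse", "cow"]       # words the participant has produced so far
candidates = ["sheep", "pig", "piano", "car"]  # hypothetical next-word candidates

prompt = "Animals: " + ", ".join(history) + ","

def sequence_log_prob(text):
    # Total log-probability the model assigns to `text`.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss     # mean NLL per predicted token
    return -loss.item() * (ids.size(1) - 1)

# The prompt is shared across candidates, so ranking by the full-sequence
# score amounts to ranking candidates by their conditional probability.
scores = {w: sequence_log_prob(prompt + " " + w) for w in candidates}
for word, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{word:>6}  {score:8.2f}")

Under the hyperparameter-hypothesization idea described in the abstract, hyperparameters of a setup like this would then be tuned per individual, with the fitted values serving as candidate descriptors of that person's cognitive-behavioral profile.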
