Article, 2024

Is word order considered by foundation models? A comparative task-oriented analysis

Expert Systems with Applications, ISSN 0957-4174 (print), 1873-6793 (online), Volume 241, Article 122700, DOI: 10.1016/j.eswa.2023.122700

Contributors

  • Zhao, Qinghua [1] [2]
  • Li, Jiaang [1]
  • Liu, Junfeng (corresponding author) [2]
  • Kang, Zhongfeng (ORCID: 0000-0001-9025-0748) [3]
  • Zhou, Zenghui (ORCID: 0000-0002-1824-6979) [2]

Affiliations

  1. University of Copenhagen (Denmark)
  2. Beihang University (China)
  3. Lanzhou University (China)

Abstract

Word order, a linguistic concept essential for conveying accurate meaning, appears largely unnecessary to language models according to existing work. Contrary to this prevailing notion, our paper examines the impact of word order through carefully selected tasks that demand distinct abilities. Using three large language model families (ChatGPT, Claude, LLaMA), three controllable word order perturbation strategies, one novel perturbation qualification metric, four well-chosen tasks, and three languages, we conduct experiments to shed light on this topic. Empirical findings demonstrate that foundation models do take word order into consideration during generation. Moreover, tasks emphasizing reasoning abilities exhibit a greater reliance on word order than those primarily based on world knowledge.
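To make the setup concrete, the following is a minimal sketch of one possible word order perturbation strategy (a seeded random shuffle) paired with a simple displacement-based score. Both functions are illustrative assumptions: the paper's actual three strategies and its qualification metric are not specified in this record.

```python
import random


def shuffle_words(sentence: str, seed: int = 0) -> str:
    """Randomly permute the words of a sentence.

    A hypothetical perturbation strategy, not necessarily one of
    the three strategies used in the paper.
    """
    words = sentence.split()
    rng = random.Random(seed)  # seeded for a controllable perturbation
    rng.shuffle(words)
    return " ".join(words)


def displacement_rate(original: str, perturbed: str) -> float:
    """Fraction of token positions whose word changed.

    A simple stand-in for a perturbation qualification metric:
    0.0 means word order is unchanged, values near 1.0 mean
    most tokens moved.
    """
    orig = original.split()
    pert = perturbed.split()
    moved = sum(1 for a, b in zip(orig, pert) if a != b)
    return moved / max(len(orig), 1)
```

Scoring each perturbed input this way lets one compare model behavior across perturbation strengths rather than treating "shuffled" as a single binary condition.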

Keywords

foundation models, language models, word order, perturbation strategy, qualification metric, reasoning ability, world knowledge, linguistic concepts

Funders

  • National Natural Science Foundation of China
  • China Scholarship Council

Data Provider: Digital Science