Does ChatGPT have sociolinguistic competence?
Submitted: 2024-06-21
Accepted: 2024-09-24
Published: 2024-11-15
Copyright (c) 2024 Journal of Computer-Assisted Linguistic Research

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Keywords:
Large language models, ChatGPT, variation, morphosyntactic variation, sociolinguistics
Abstract:
Large language models are now able to generate content- and genre-appropriate prose composed of grammatical sentences. These targets, however, do not fully encapsulate human-like language use. For example, they set aside the fact that human language use involves sociolinguistic variation that is regularly constrained by internal and external factors. This article tests whether one widely used LLM application, ChatGPT, is capable of generating such variation. I construct an English corpus of "sociolinguistic interviews" using the application and analyze the generation of seven morphosyntactic features. I show that the application largely fails to generate any variation at all when one variant is prescriptively incorrect, but that it is able to generate variable deletion of the complementizer "that" that is internally constrained, with variants occurring at human-like rates. ChatGPT fails, however, to properly generate externally constrained complementizer "that" deletion. I argue that these outcomes reflect bias in both the training data and the Reinforcement Learning from Human Feedback process. I suggest that testing whether an LLM can properly generate sociolinguistic variation is a useful metric for evaluating whether it generates human-like language.
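To make the elicitation-and-tallying approach described in the abstract concrete, the following is a minimal illustrative sketch, not the author's pipeline: it assumes the OpenAI Python client, an illustrative model name and interview prompt, and a crude regular-expression heuristic for counting overt versus zero complementizer "that" after a handful of common matrix verbs.

# Minimal sketch (illustrative only): elicit one simulated "sociolinguistic
# interview" from a chat model and crudely tally overt complementizer "that"
# after common matrix verbs. Model name and prompt wording are assumptions.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are a 45-year-old speaker from Newcastle upon Tyne being "
    "interviewed about your childhood. Answer conversationally in "
    "several paragraphs."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; substitute any chat model
    messages=[{"role": "user", "content": PROMPT}],
)
text = response.choices[0].message.content

# Crude heuristic: matrix verbs that commonly take a finite complement clause.
MATRIX = r"\b(think|thought|say|said|know|knew|guess|believe)\b"
overt = len(re.findall(MATRIX + r"\s+that\b", text, flags=re.IGNORECASE))
total = len(re.findall(MATRIX, text, flags=re.IGNORECASE))

print(f"overt complementizer 'that': {overt} of {total} candidate contexts")

A full analysis along the lines the abstract describes would of course need many interviews per simulated speaker persona, manual coding of genuine complement-clause contexts rather than a regular-expression heuristic, and statistical modelling of the resulting rates against internal and external constraints.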