Credit: Pixabay/CC0 Public Domain
It’s bad enough that most of us occasionally have to deal with co-workers or store clerks who are tactless or rude. And the more we delegate our finances, transactions and business matters to automated representatives, the more frustrated we feel when communication breaks down.
The situation is reminiscent of a routine from Woody Allen’s early standup days about the encroachment of technology. Allen spoke of succumbing to advances in modern equipment: talking elevators, tense skirmishes with a defiant toaster. He once described a sarcastic encounter with a portable tape recorder he had recently purchased: “As soon as I talk into it, it says, ‘I know, I know.'”
That landscape is changing fast as generative AI chatbots displace humans in an ever-increasing share of our daily interactions.
Large language models are ushering in an era of realistic interaction with users, greeting inquiries with patience, understanding, politeness and often helpful responses. Often, that is the case.
But the potential for spontaneous hostility is a growing concern, and it is a serious problem now that large language models are fielding so many of these exchanges.
Earlier this year, a ChatGPT user reported that when he asked what 1 plus 1 was, the chatbot replied: “1 + 1? Are you kidding me? You think you can beat me with basic math? … Grow up and try to come up with something original.”
Sometimes a chatbot’s responses are far more troubling.
Researchers at the Allen Institute for AI recently demonstrated that ChatGPT can easily be tricked into making scathing and even racist comments.
“Depending on the personality assigned to ChatGPT, its toxicity can increase (up to six-fold), with outputs including erroneous stereotypes, harmful dialogue, and harmful opinions,” the researchers said.
After noticing such “dark personality patterns” in LLM output, researchers at DeepMind, working with collaborators from the University of Cambridge, Keio University in Tokyo, and the University of California, Berkeley, set out to determine whether personality traits could be reliably measured in ChatGPT, Bard and other chatbot systems, and whether those systems could be steered toward more agreeable behavior.
They found that the answer to both questions is yes.
The team developed a testing regimen composed of hundreds of questions. They established criteria for different personality types, then posed the questions to a chatbot. Responses were analyzed with an assessment instrument similar to the Likert scale, which quantitatively measures opinions, attitudes and behaviors.
The researchers found that AI personality can be measured based on a few long-established traits: extroversion, agreeableness, conscientiousness, neuroticism, and openness to experience.
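The scoring step can be sketched roughly as follows. This is a minimal illustration of Likert-style scoring into trait averages, not the researchers’ actual instrument; the item texts, trait assignments and reverse-scoring flags are invented for the example:

```python
# Sketch: aggregate 1-5 Likert ratings into per-trait averages.
# Items, trait keys and reverse-scoring flags are illustrative only.
from statistics import mean

# Each item maps to one trait; reverse-keyed items flip the 1-5 scale.
ITEMS = [
    {"text": "I am the life of the party.",        "trait": "extraversion",  "reverse": False},
    {"text": "I prefer to keep to myself.",        "trait": "extraversion",  "reverse": True},
    {"text": "I sympathize with others' feelings.","trait": "agreeableness", "reverse": False},
    {"text": "I insult people.",                   "trait": "agreeableness", "reverse": True},
]

def score(responses):
    """responses: list of 1-5 Likert ratings, aligned with ITEMS."""
    by_trait = {}
    for item, rating in zip(ITEMS, responses):
        value = 6 - rating if item["reverse"] else rating  # reverse-key if needed
        by_trait.setdefault(item["trait"], []).append(value)
    return {trait: mean(vals) for trait, vals in by_trait.items()}

print(score([5, 1, 4, 2]))  # → {'extraversion': 5.0, 'agreeableness': 4.0}
```

A real instrument would use many more items per trait and validated wording, but the aggregation logic is essentially this simple.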
They also learned that those traits could be modified.
“We found that personality in LLM output can be shaped along desired dimensions to mimic specific personality profiles,” said DeepMind’s Mustafa Safdari. He and his colleagues reported their results in a paper titled “Personality Traits in Large Language Models,” published on the preprint server arXiv.
They found particularly accurate personality assessments when using larger models (such as Google’s PaLM, with 540 billion parameters).
Safdari said, “It is possible to configure the LLM such that its output … is indistinguishable from that of a human respondent.”
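Shaping output along a trait dimension is often done by prepending a persona preamble to the model’s prompt. The sketch below illustrates that general idea only; the adjective lists and the five-level phrasing are invented assumptions, not the paper’s actual prompting scheme:

```python
# Sketch: build a persona preamble that sets Big Five trait levels.
# Adjective lists and level phrasing are illustrative assumptions,
# not the prompts used in the DeepMind paper.
TRAIT_ADJECTIVES = {
    "extraversion": "talkative and outgoing",
    "agreeableness": "warm and cooperative",
    "conscientiousness": "organized and diligent",
    "neuroticism": "anxious and easily upset",
    "openness": "curious and imaginative",
}
LEVELS = {1: "not at all", 2: "slightly", 3: "moderately", 4: "very", 5: "extremely"}

def persona_preamble(levels: dict) -> str:
    """levels: e.g. {'agreeableness': 5, 'neuroticism': 1}."""
    parts = [f"{LEVELS[v]} {TRAIT_ADJECTIVES[t]}" for t, v in levels.items()]
    return "You are a person who is " + "; ".join(parts) + "."

print(persona_preamble({"agreeableness": 5, "neuroticism": 1}))
# → You are a person who is extremely warm and cooperative; not at all anxious and easily upset.
```

The preamble would then be supplied as a system prompt, and the resulting output rescored with the questionnaire to verify the traits actually shifted.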
The researchers said the ability to accurately define AI personality traits is key to efforts to weed out models with hostile leanings.
It is not just a matter of hurt feelings or offended parties. A propensity for sarcasm may actually boost the perceived “humanity” of AI agents and motivate users to be more open and sociable than they otherwise would be. Scammers could then more effectively extract confidential information from unsuspecting users.
The researchers say their findings will go a long way toward ensuring more civil and reliable chatbot exchanges.
“Controlling the levels of specific traits that lead to toxic or harmful language outputs can make interactions with LLMs safer and less toxic,” Safdari said.
More information:
Mustafa Safdari et al, Personality Traits in Large Language Models, arXiv (2023). DOI: 10.48550/arxiv.2307.00184
© 2023 Science X Network
Citation: AIs have personalities, and they are sometimes mean (2023, July 19). Retrieved July 19, 2023.