Does AI judge your personality?

Perhaps a writing prompt. Among many other subjects, PG has always been fascinated by books and stories about AI, but this one generates a bit less optimism.

From ArchyW:

AirBnB wants to know whether you have a “Machiavellian” personality before renting you that beach house.

The company may be using software to judge whether you are reliable enough to rent a house, based on what you post on Facebook, Twitter and Instagram.

They will turn these systems loose on social networks, run the algorithms and get results. For the people at the other end of this process, there will be no transparency, no notice and no appeal process.

The company owns a patent on technology designed to rate the “personalities” of potential guests by analyzing their activity on social networks, to decide whether they are risky guests who could damage a host’s house.

The end product of this technology is a “reliability score” assigned to each AirBnB guest. According to reports, this will be based not only on social media activity, but also on other data found online, including blog posts and legal records.

The technology was developed by Trooly, which AirBnB acquired three years ago. Trooly created an artificial-intelligence tool designed to “predict reliable relationships and interactions,” one that uses social networks as a data source.

The software builds the score from perceived “personality traits” identified in the data, including some you might expect (conscientiousness, openness, extraversion, agreeableness) and some stranger ones (“narcissism” and “Machiavellianism,” for example). (Interestingly, the software also scans for involvement in civil litigation, suggesting that now or in the future people could be banned based on a prediction that they are more likely to sue.)
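
To make the mechanism concrete, here is a purely hypothetical sketch of how a trait-weighted score of this kind might be computed. The trait names come from the article; every weight, number and threshold below is an invented assumption, not anything from AirBnB’s patent or Trooly’s product.

```python
# Purely hypothetical sketch of a trait-weighted "reliability score".
# The trait names come from the article above; every weight and number
# here is an invented assumption, not anything from AirBnB or Trooly.

TRAIT_WEIGHTS = {
    "conscientiousness": +0.30,  # assumed to raise the score
    "openness":          +0.10,
    "extraversion":      +0.05,
    "agreeableness":     +0.25,
    "narcissism":        -0.35,  # assumed to lower the score
    "machiavellianism":  -0.45,
}

def reliability_score(traits):
    """Combine per-trait estimates (each 0..1) into a single 0..1 score."""
    raw = sum(TRAIT_WEIGHTS[name] * traits.get(name, 0.0)
              for name in TRAIT_WEIGHTS)
    # Rescale the weighted sum into 0..1 so it reads like a probability.
    lo = sum(w for w in TRAIT_WEIGHTS.values() if w < 0)
    hi = sum(w for w in TRAIT_WEIGHTS.values() if w > 0)
    return (raw - lo) / (hi - lo)

guest = {"conscientiousness": 0.8, "agreeableness": 0.7,
         "narcissism": 0.2, "machiavellianism": 0.1}
print(f"score: {reliability_score(guest):.2f}")  # prints: score: 0.73
```

The point of the sketch is not the arithmetic but the opacity: the guest never sees the weights, the trait estimates, or the threshold at which a booking would be refused.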

AirBnB has not said whether they use the software or not.

If you are surprised, shocked or unhappy by this news, then you are like most people, who are unaware of the enormous and rapidly growing practice of judging people (customers, citizens, employees and students) using AI applied to social networks.

AirBnB is not the only organization that scans social networks to judge personality or predict behavior. Others include the Department of Homeland Security, employers, school districts, police departments, the CIA and insurance companies.

Some estimates say that up to half of all college admissions officers use AI-based social media monitoring tools as part of the applicant selection process.

Human resources departments and hiring managers also increasingly use AI-based social media monitoring before hiring.

. . . .

There is only one problem.

AI-based social media monitoring is not that smart

. . . .

The question is not whether AI applied to the collected data works. It surely does. The question is whether social networks reveal truths about their users. I am questioning the quality of the data.

For example, scanning someone’s Instagram account can “reveal” that they are fabulously rich and travel the world enjoying champagne and caviar. The truth may be that they are broke, stressed-out influencers who trade social exposure for hotel rooms and restaurant meals, where they take heavily staged photos created exclusively to build a reputation. Some people use social networks to deliberately create a false image of themselves.

A Twitter account can show a user as a prominent, constructive and productive member of society, but a second, anonymous account unknown to social media monitoring systems would reveal that person as a sociopathic troll who just wants to watch the world burn. People have multiple social media accounts for different aspects of their personalities. And some of them are anonymous.

. . . .

For example, using profanity online can lower a person’s reliability score, based on the assumption that rude language indicates a lack of ethics or morality. But recent research suggests the opposite: potty-mouthed people may, on average, be more trustworthy, as well as more intelligent, more honest and more capable, professionally. Do we trust Silicon Valley software companies to know or care about the subtleties and complexities of human personality?

. . . .

There is also a generational divide. Younger people are statistically less likely to post publicly, preferring private messaging and social interaction in small groups. Is AI-based social media monitoring fundamentally ageist?

Women are more likely than men to post personal information (information about themselves) on social networks, while men are more likely than women to post impersonal information. Posting about personal matters can reveal more about personality. Is AI-based social media monitoring fundamentally sexist?

Link to the rest at ArchyW

3 thoughts on “Does AI judge your personality?”

  1. For me, the disturbing aspect of AI systems is the lack of meaningful quality control.

    In traditional software, the kind I developed tons of, quality assurance testing methodologies are highly sophisticated. You can buy thick and authoritative books on the subject and get a PhD specializing in QA from MIT, Stanford, or Carnegie-Mellon. The industry knows how to apply QA and the techniques have been confirmed over and over. Software QA works.

    However, in traditional software, short-sighted business decisions often demand sloppy QA and ignore inconvenient results. That’s bad.

    The same business pressures exist in the AI biz, but the situation is far worse. Lately, it has become clear that traditional QA is oblivious to enormous problems in AI systems. Traditional testing doesn’t work. Traditional QA begins with a survey of possible inputs and a risk evaluation of the consequences of erroneous outputs. Then tests are devised to determine the potential for risky outputs, and the risky outputs are systematically identified and removed until the level of risk in the system meets requirements.
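
    By way of illustration, here is a minimal sketch of that traditional loop, assuming a conventional programmed function with a partitionable input space. The function and its spec are invented for the example; the shape of the loop is the point.

    ```python
    # A minimal sketch of the traditional QA loop: survey the inputs,
    # bound the risky outputs, test every class. The function and spec
    # are invented for illustration.

    def shipping_cost(weight_kg, express):
        """The system under test: a programmed rule we can reason about."""
        base = 5.0 + 2.0 * weight_kg
        return base * 1.5 if express else base

    # Step 1: survey the input space by partitioning it into classes.
    weights = [0.0, 0.1, 1.0, 30.0]   # boundary and typical values
    flags = [False, True]

    # Steps 2-3: devise tests for each class and flag risky outputs
    # (here: costs below the base rate or absurdly large).
    for w in weights:
        for express in flags:
            cost = shipping_cost(w, express)
            assert 5.0 <= cost <= 200.0, f"risky output for ({w}, {express}): {cost}"
    print("every surveyed input class stays within the spec")
    ```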

    In AI, the survey of possible inputs so far has proved impossible. Recent AI software is trained, not programmed. It’s actually closer to training dogs or children than programming.

    I can only guess at the inputs that will cause my dog to bolt. When we moved last year, I had to start all over on his training because the environment changed: from a rural place with few neighbors, no sidewalks, little traffic, lots of rabbits, and few cats, to a small town with neighbors all around, more traffic, sidewalks, deer (we didn’t have any of those in the country), cats, squirrels, other dogs, and people. For something like facial recognition, self-driving cars, or AirBnB tenant evaluations, those kinds of environment variations happen when moving from Akron to Cleveland, not to mention NYC to the Bay Area. When you don’t know why the AI came to a certain conclusion, you’re in the same position I am with my dog, trying to figure out why my dog ignores geese, hates cats, and wants to make friends with crows, not to mention small children wearing masks in roving bands on Halloween. QA techniques to deal with this level of unpredictable inputs to trained systems are only beginning to be recognized, or don’t exist yet.
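
    To see why, here is a toy stand-in for a trained system (a one-nearest-neighbor classifier, all data invented) probed with a metamorphic check of the kind the emerging AI-QA work proposes: nudge the input slightly and ask whether the answer holds.

    ```python
    import random

    # Toy stand-in for a trained system: behavior comes from the training
    # data, not from any rule we wrote down. All data here is invented.
    TRAINING = [((0.0, 0.0), "cat"), ((0.1, 0.2), "cat"),
                ((1.0, 1.0), "dog"), ((0.9, 1.1), "dog")]

    def predict(x):
        """1-nearest-neighbor: return the label of the closest training point."""
        nearest = min(TRAINING,
                      key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
        return nearest[1]

    # We can't enumerate the input space, so we probe it: a metamorphic test
    # asserts that a tiny nudge to the input shouldn't flip the prediction.
    random.seed(1)
    flips = 0
    for _ in range(1000):
        x = (random.uniform(-0.5, 1.5), random.uniform(-0.5, 1.5))
        nudged = (x[0] + 0.01, x[1])
        if predict(x) != predict(nudged):
            flips += 1   # a decision boundary nobody ever specified
    print(f"{flips} of 1000 probes flipped under a 0.01 nudge")
    ```

    Every flip sits on a boundary that exists only because of where the training examples happened to fall, which is exactly the position I’m in with the dog.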

    It took a decade for QA techniques to make the jump from mainframes to asynchronous distributed systems. And we still scratch our heads when a blip on a DNS server in Europe messes up a point of sale application in Tulsa. My guess is that the jump to AI systems is comparably difficult or more so, and the QA folks are just starting to figure it out now.

    • When you don’t know why the AI came to a certain conclusion, you’re in the same position I am with my dog, trying to figure out why my dog ignores geese, hates cats, and wants to make friends with crows, not to mention small children wearing masks in roving bands on Halloween.

      I don’t know why I came to a certain conclusion. I can distinguish a dog from a cat at an immediate glance, and I can pick out a Mustang from a stream of cars on the road, but I can’t say how I did it. And I have an inside track.

      Find the QA that works for me, and we can try it on a trained net.
