* Shelly Palmer…
AI is empowering each candidate to present themselves as if that candidate were speaking to us one-on-one. This has always been possible in small groups or at political rallies. But no politician in history has had the ability to speak to every individual voter one-on-one. Human politicians still can’t, but their AI-generated political avatars can. And frighteningly, these AI-generated political avatars know more about our real hopes and dreams than any human candidate ever could. […]
Whenever you interact with an app (Facebook, Twitter, Instagram, Google) or website or any other online data aggregator (Nest, Alexa, Waze, your smartphone), you are creating two sets of data. The first set of data is the data required to enable the technology you are using to work. This might include the location of your device, if you’re using Waze or your smartphone. Or the current temperature of your home, if you’re using a Nest thermostat. Or what you are interested in at the moment, if you are using Facebook, Amazon, Google, Instagram, Twitter, etc.
But you also create a second set of data. Sometimes referred to as “surplus data,” these data are not specifically required to achieve your immediate objective – for example, your location when you tap a like button, or the time of day you are usually in your home when you adjust your thermostat, or the kinds of images that get your attention when you stop scrolling on a social network.
Surplus data are collected with the explicit purpose of improving the engineering of bespoke online environments and messaging that you will find irresistible. Said differently, these are the data used by algorithms to feed your social media addiction. […]
To customize messaging for multi-issue voters, behavioral data are fed into algorithms designed to score those behaviors and then predict what attributes should be crafted into the customized persona of the particular candidate. You can call it “pandering at scale.” While this technology is table stakes in best practices digital advertising, dynamic apps, and websites, it is relatively new for politicians. They may be late to the game, but they are now using the schooling they received in 2016, and we’re about to get an up close and personal view of the unintended consequences of the lessons learned.
Go read the whole thing.
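To make the “pandering at scale” idea concrete, here is a minimal, purely illustrative sketch of the kind of pipeline Palmer describes: behavioral “surplus data” scores go in, a predicted message emphasis comes out. All feature names, labels, and numbers below are hypothetical, and the model choice is just an assumption for illustration, not anything a campaign has confirmed using.

# Purely illustrative sketch of "pandering at scale": score a voter's
# behavioral signals and predict which message emphasis is most likely
# to resonate. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical surplus-data features per voter:
# [late-night scrolling rate, likes on local-news posts, hours at home per evening]
X_train = np.array([
    [0.8, 12, 5.0],
    [0.1, 40, 9.5],
    [0.5,  3, 2.0],
    [0.9, 25, 8.0],
])
# Which message emphasis historically drove engagement for similar voters
# (0 = economy, 1 = healthcare) -- the "previous success" labels.
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

new_voter = np.array([[0.7, 30, 7.0]])
probs = model.predict_proba(new_voter)[0]
print(f"P(economy message)={probs[0]:.2f}, P(healthcare message)={probs[1]:.2f}")

In practice the score would feed a content system that assembles the ad or post itself, but the basic loop (observe behavior, score it, tailor the persona) is the part the excerpt is warning about.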
* Related…
* New Illinois Law for AI in Job Interviews: The Artificial Intelligence Video Interview Act, House Bill 2557, requires companies to notify the applicant when the system is being used, explain how the AI works, get permission from the applicant, limit distribution of the video to people involved with the process, and destroy the video after 30 days.
- ChicagoVinny - Monday, Sep 23, 19 @ 10:08 am:
== The 2020 elections are going to be politically, technically, and physically hacked, and there is almost nothing any of us can do about it. ==
I hate this sort of cynicism and fatalism. There are absolutely things we could do about a lot of this. We’re just choosing not to.
-Strengthening security of voting machines and backend infrastructure
-Paper trails on ballots would be the #1 thing
-Laws to regulate Facebook and other online political advertising. We do this for TV and print!
- OneMan - Monday, Sep 23, 19 @ 10:11 am:
He has a point, but….
The challenge of AI in this context is going to be the modeling of success. When you ‘teach’ an algorithm, you do it to a degree with ‘previous success’ examples, that is, samples where the correct answer is known, so the AI can be told whether the answer it came up with is right or wrong. These are easy to come up with when teaching an AI to recognize faces.
This gets harder when the source of the teaching data may have had a bias of its own, intentional or unintentional. For example, data about credit risks: credit decisions in the past may have contained bias based on the recipient’s ethnic background. Even if the ethnic group of the borrower is explicitly taken out of the training data, the impact is still there and may result in the algorithm carrying the bias forward in a less obvious but still very real way.
Giving an AI the rules of chess and telling it to go play itself several million times is very different from trying to model the ‘success’ of persuading a human to engage in a specific action. Elections going forward are going to provide great training data down the road, but I think the AI risk is lower than the ‘I just listen to what I agree with’ risk.
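A small, synthetic sketch of the proxy-bias effect OneMan describes above: even when the protected attribute is dropped from the training data, a correlated feature (here a made-up “proxy,” think zip code) lets the model reproduce the historical bias. All data, thresholds, and the model choice are hypothetical and exist only to illustrate the point.

# Illustrative sketch: proxy bias survives removal of the protected attribute.
# Synthetic data only -- nothing here reflects any real credit dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)                 # protected attribute (NOT used for training)
proxy = group + rng.normal(0, 0.3, n)         # feature strongly correlated with group
income = rng.normal(50, 10, n)                # a legitimate feature

# Biased historical decisions: group 1 was denied more often at equal income.
approved = ((income - 15 * group + rng.normal(0, 5, n)) > 45).astype(int)

# Train WITHOUT the protected attribute -- only income and the proxy.
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The gap between the groups persists, because the proxy carries the bias forward.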
- Blue Dog Dem - Monday, Sep 23, 19 @ 10:53 am:
Kinda glad I am in the twilight of my career. Really didn’t understand much of this post. Waze, Nest. Couldn’t find either in my Webster’s handheld.
- Earnest - Monday, Sep 23, 19 @ 11:22 am:
Advertising research is one of the biggest research spending categories, but the results are often proprietary. When we think of privacy and the internet, we tend to think like we do in real life, keeping financial information or personal things hidden, but it’s really about the monitoring of all our everyday activities, which builds a very complete picture of who we are and what we do in our lives. Combine that level of information, proven research on influencing opinions, and the computing power to individualize approaches, and you have serious power to sway the public. Based on my n=1 observations, it seems like the most effective approach is to insert some mild outrage to motivate some extra clicks on a website or more views of a video reinforcing a point of view. I see people who seem to be mildly offended much of the time. I catch myself seeing links or headlines and feeling the impulse to follow. It’s difficult to overcome on an individual level, much less across an entire society.
- NoGifts - Monday, Sep 23, 19 @ 11:36 am:
The bright side might be that if they know more about our hopes and dreams (aka what we want), politicians can use the information to decide how to craft legislation or whether to vote for it. With so many people opting out of surveys, this may be the only way to know what people want.
- A Jack - Monday, Sep 23, 19 @ 12:49 pm:
I have no doubt that some voters could be coaxed into voting for a candidate based on some type of AI profiling. I don’t think it would work very well on me since I view all politicians with a large amount of skepticism.
- Skeptic - Monday, Sep 23, 19 @ 2:20 pm:
BDD: Nest is a family of home products (thermostats, light switches, etc.) that are Internet-enabled.
- Techie - Monday, Sep 23, 19 @ 2:54 pm:
I just don’t see how this applies that neatly to political candidates, especially those with a record. Is Ted Cruz really going to try to convince pro-choice voters that he’s pro-choice? Probably not.
There are some issues which politicians might be happy to waffle on and pander to whomever, but their records will always exist, as will their overall ideological appeal.
The kind of politicians this would benefit most are those like Trump in the 2016 campaign: well-known outside of politics, but with political ideas that are not well-known. In that case, yeah, it’s easier to pander to everyone and pretend you are all things.