We can no longer use technology as passively as we’ve been doing so far. We need to be a little more deliberate about how we use technology and help it make decisions for us or about us – Kartik Hosanagar
Humans are making machines smarter, and the machines are learning.
So what is happening to humans using smart machines?
My guest on this episode is Kartik Hosanagar.
Kartik is a professor of marketing at the Wharton School of the University of Pennsylvania and was named one of the world's top 40 business professors under 40. He is a 10-time recipient of undergraduate teaching excellence awards at Wharton, and his research has received several best paper awards. Kartik also co-founded and developed the core IP for Yodle and is involved with many startups as either an investor or board member.
Kartik’s new book, A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, is the topic of our discussion.
Problem solving at scale
Just to level set, an algorithm is a set of sequential instructions performed to solve a problem. For our discussion, a programming algorithm is a computer procedure that tells a device or application the precise steps to take: it accepts inputs and transforms them into outputs that solve a problem.
These computer algorithms automate procedures and can solve complicated problems we wouldn’t otherwise be able to solve.
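To make the inputs-steps-outputs idea concrete, here is a minimal sketch of a classic textbook algorithm, binary search, in Python. It is purely illustrative and is not drawn from Kartik's book:

```python
def binary_search(items, target):
    """A textbook algorithm: precise sequential steps that turn
    inputs (a sorted list and a target value) into an output
    (the target's index, or -1 if it is not present)."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # check the middle element
        if items[mid] == target:
            return mid                # found it
        if items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1                         # not in the list
```

The same few lines work whether the list has ten items or ten billion, which is exactly why algorithms let us solve problems at scale.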
I love algorithms.
When applied to the right problems, they make life better. Beyond the multi-billion-dollar economic impact of businesses birthed and run on search algorithms, they power logistics, flight, encryption, blockchain, collision avoidance, drug design, and even diagnostic applications for diseases. In fact, at some point malpractice cases may be brought against doctors who do not use machine learning algorithms for diagnosis and treatment planning.
So let’s first acknowledge the amazing things we can now accomplish because of algorithms.
And although they are not sentient things waiting for a day to overthrow us, algorithms have become very intimate parts of our lives being embedded in the devices we own, wear and drive – they run the apps we use for many of the decisions we make.
Hence the risk
In Kartik’s words, they are nudging our choices here and there.
In my opinion, the risk of ubiquitous algorithm proliferation and the threat of A.I. for that matter, is not necessarily that machines become more intelligent but that people become less intelligent.
David Krakauer has interesting thoughts about complementary vs. competitive cognitive artifacts. Competitive artifacts are those that improve our ability to perform a cognitive task but, if taken away, leave us worse at the task than we were before.
Consider spatial orientation.
There is fMRI (functional Magnetic Resonance Imaging) research showing that individuals using their native spatial navigation abilities exhibit increased activity in the hippocampus. The same research suggested the opposite is also true: excessive use of GPS navigation could lead to atrophy in the hippocampus.
Kartik asks us to consider the impact on our decision making ability.
They’re making so many choices for us, mostly in ways that allow us to be productive, but the flip side is the extent to which we are fully in control of our decisions. It’s not quite what it used to be. The algorithms are nudging us in different ways.
He talks about research for example that shows they have a very significant impact on our purchase decisions and entertainment choices. “Over a third of our choices on Amazon are driven by algorithmic recommendations. On Netflix over 80% of what we view is driven by algorithmic recommendations.”
As innocuous as shopping or movies may be, it’s an illusion to think we are making these choices solely via free will. And the impact is much broader; their utility has raised the stakes.
In its recruiting process, Amazon used an algorithm that was discovered to carry gender biases – and here’s the rub – biases its own engineers could not eliminate despite trying multiple solutions.
We might be tempted to think that machine learning in an algorithm means pure objectivity, but in many cases we see that’s not true or at least yet to be realized.
In a research report, the Partnership on A.I. opposed the use of AI algorithms by law enforcement in decisions about parole, bail, and probation. The report said algorithms meant to assist in the jailing process are “potentially biased, opaque, and may not even work.”
Filter bubbles and priming
Is it not obvious by now that an algorithm serving up content solely to maximize some click value, thereby rewarding your personal biases, is not good for you?
There is a disconcerting story in The New Yorker about a growing number of people who believe the earth is flat. One of those profiled in the story is Darryle Marble, who over a two-year period “drank in” conspiracy stories via YouTube as the algorithm served up one related video after another. As the article notes:
“Marble found the light in his YouTube sidebar, he said. ‘I was already primed to receive the whole flat-Earth idea, because we had already come to the conclusion that we were being deceived about so many other things.’”
Ignorance and fear
There seems to be a lot of angst about the future. Buzzwords like digital disruption and A.I. evoke in people a sense of helplessness because they are very big question marks.
Will machines take over?
Will I be “obsolete?”
What feeds these fears is a primal fear of the unknown, and it’s in all of us. If something is foreign, we tend to fear it.
Taking back control
Shining a light into the dark places reveals the hat on the stand. Understanding replaces fear with familiarity, and cultivating a habit of curiosity and learning can increase your sense of agency while decreasing your fear.
Let’s throw in decreasing your frustration level as well.
Do you have any idea why Google Maps took you the way it did?
Instead of constantly yelling at Maps, investigate your map settings, which you can change (click-dragging a route forces the app to take a different way rather than its algorithmic default). It also helps to have some general idea of the inputs Maps uses – what data is the program crunching when it recommends a route?
Things like real-time traffic data from the devices of millions of drivers who are on the road when you are, tolls, highways, time through intersections (left turns usually take much longer), route complexity, and so on.
When you default all of your decisions to others – to include devices – you’re giving away your sense of agency a little at a time.
I’m not suggesting you shun algorithm-driven tech – quite the opposite. Use algorithms as tools, incorporate the automation where you can, and work with them.
Look, you don’t need to know how to program a computer to get better at understanding what algorithms are doing. Just being more aware that an algorithm may be involved in your decision-making process puts you in a more empowered place.
Outside of learning to code, here are three easy ways to exercise your agency.
1. Read bad reviews.
Expand YOUR inputs. Be more discerning with product reviews and curated recommendations. At this point, online crowd feedback can be gamed. That said, reviews are a godsend for avoiding a poor purchase or service provider. But you have to vet them by reading them and making up your own mind. A negative review has a much higher probability of being honest than a glowing one.
2. Get lost.
Check maps for traffic, note the overall direction, and if you’re familiar with the route, turn it off. Exercise your hippocampus.
3. Investigate your device / software inputs.
When Google gives you an answer, where did it come from? Search uses multiple algorithms, but it’s in part information retrieval, and rank is no proxy for truth. Sources matter.
Learn how you enter a query in search. Google offers multiple advanced search settings beyond the basic search box. You can also use Google Scholar (scholar.google.com) to narrow a search to books, peer-reviewed research, and even case law.
Why you should listen
This episode is a look into the brains of all of your devices, at least the software that’s running them and how they are learning and influencing you.
You need very little technical knowledge to enjoy it. In fact, the less you know technically, the more it would behoove you to listen.
Kartik does a masterful job not only explaining how algorithms work but how advances in A.I. are impacting your life – and what we can do about it personally as well as collectively as a society. He also proposes an algorithmic bill of rights.
And if you’re curious at all as to all the buzz about artificial intelligence, Kartik provides a unique take on its history and evolution from narrow automation to autonomous learning via the story of algorithms.
If you only have a few minutes you might want to listen here for a great retelling of how AlphaGo beat the world’s best *human* Go player. A bit eerie.
There’s so much more in this episode to include…
- Algorithmic and human decision making
- Algo inputs; data and data brokers
- Dealing with algorithm and human bias
- Raising humans and raising A.I. comparisons in human nature
- Black box problems – when we cannot know why a machine takes the actions it does
- Advancing while mitigating risk – The predictability-resilience paradox
- How to take control, the importance of agency
- His view of the future and how we can prepare for it
I think you’ll learn a lot – enjoy!
Listen on iTunes
Listen on Google Play Podcasts
Listen on Stitcher