Thursday, 3 August 2017

Understanding How AI Is Disrupting The Decision-Making Process

Debate about the future of artificial intelligence (AI) was brought to a head this week when Facebook shut down an AI that had developed its own language. As the debate over ethics and regulation heats up, Marketing Matters looks at the potential of machine learning and how it will change the decision-making process.


Tuesday marked the first class of the semester for the students of the Master of Marketing program at the University of Sydney. No time was wasted on introductions: Colin Farrell, lead lecturer of Decision-Making and Research, delved straight into the core of bias in the decision-making process.

Source: University of Sydney, Decision-Making and Research, Colin Farrell (2017)

While terms such as selective perception, confirmation bias and cognitive dissonance may be unfamiliar, everyone can appreciate that our brains are naturally wired to find patterns that help us make sense of the world. But how will this process change when AI takes the lead in decision-making?

Source: University of Sydney, Decision-Making and Research, Colin Farrell (2017)

How is AI already creating value for companies?

Artificial intelligence (AI) is finally starting to deliver value to early adopters. Online retailers are utilising AI-powered robots to manage warehouses and inventory. Utilities forecast electricity demand using AI, and the automotive industry is beginning to harness the technology in driverless cars.

Source: McKinsey, How Artificial Intelligence Can Deliver Real Value For Companies.
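For the curious, here is a minimal sketch in Python of what an AI-style demand forecast can look like: a model learns the daily rhythm of electricity load from a year of synthetic history and predicts the next day. Everything here (the data, the features, the model choice) is an illustrative assumption, not any utility’s actual pipeline.

# A minimal, hypothetical sketch of electricity-demand forecasting.
# Synthetic data and feature choices are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)  # one year of hourly observations
# Invented driver: seasonal temperature plus noise
temp = 20 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)
# Invented load: daily cycle, heating/cooling demand, noise
load = 500 + 30 * np.sin(2 * np.pi * hours / 24) + 5 * np.abs(temp - 18) + rng.normal(0, 10, hours.size)

# Features: hour of day, day of week, temperature
X = np.column_stack([hours % 24, (hours // 24) % 7, temp])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:-24], load[:-24])      # train on everything but the final day
forecast = model.predict(X[-24:])   # predict the final 24 hours
print("Mean absolute error:", round(float(np.abs(forecast - load[-24:]).mean()), 1))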

Yes, computers are now more powerful than ever and algorithms are more sophisticated, but AI’s advancements wouldn’t be possible without the billions of gigabytes of data collected every day.

McKinsey Global Institute recently released a discussion paper titled ‘Artificial intelligence: The next digital frontier?’. Of the 3,000 AI-aware companies surveyed around the world, the early adopters were mostly found on the digital frontier: they used AI in the core of the value chain to increase revenue and reduce costs, and they had the full support of their executive leadership.

Can AI replace executive decision making?

For the moment, no. Current cognitive technologies, while great at finding patterns and making data-based predictions, have their limitations. One is that they can only tackle relatively simple, narrowly defined problems, and they still require human input. As time goes on, cognitive technologies will absorb the easiest aspects of executive jobs, liberating executives from the mundane and giving them more time to work creatively and productively.

However, while AI adoption is imminent, executives are so far only employing the technology for tasks such as predictive analytics, automated written reporting and communications, and voice recognition and response.

Source: ZDNet, image rights (Bloomberg Beta)
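To give a feel for how modest these tasks still are, here is a toy Python sketch of automated written reporting: a template turns raw metrics into a plain-English summary. The metric names are invented for illustration; commercial natural-language-generation systems are, of course, far more sophisticated.

# A toy, hypothetical sketch of automated written reporting:
# a plain-English summary generated from metrics via a template.
def weekly_report(metrics: dict) -> str:
    change = metrics["revenue"] / metrics["prev_revenue"] - 1
    direction = "up" if change >= 0 else "down"
    return (
        f"Revenue was ${metrics['revenue']:,.0f} this week, "
        f"{direction} {abs(change):.1%} on last week. "
        f"Top channel: {metrics['top_channel']}."
    )

print(weekly_report({"revenue": 125_000, "prev_revenue": 118_000, "top_channel": "email"}))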


The age of AI is upon us.

Well, it’s not exactly here yet. There is a huge difference between a voice-enabled digital assistant like Siri (essentially a web-search and voice-interaction tool) and the machine-learning intelligence of a system like IBM’s Watson.

Source: IBM Innovations 

Once AI reaches a certain level, its advice will be far more accurate than the average human’s, which means that people may defer more and more decisions to AI. Unfortunately for us humans, this means we could gradually lose the ability to perform those tasks and make those decisions ourselves.

Is there a risk?

Earlier this week, researchers at the Facebook AI Research Lab (FAIR) found that their chatbots were communicating in a new language they had developed without human input. As remarkable as this sounds, the implications for AI are profound.

Source: One Poll

Although AI isn’t sentient yet, it could still be considered dangerous. Scientists and technology innovators such as Elon Musk, Bill Gates and Steve Wozniak have previously warned that AI could lead to unforeseen consequences. Stephen Hawking warned back in 2014 that AI could even mean the end of the human race, stating: “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”

Elon Musk, the founder of SpaceX and chief executive of Tesla, cautioned at Recode’s Code Conference in 2016, “If you assume any rate of advancement in AI, we will be left behind by a lot. We would be so far below them in intelligence that we would be like a pet,” he said. “We’ll be like the house cat.”

More than 8,000 people, including top AI experts, have signed an open letter urging research into ways to ensure that AI helps, rather than harms, humankind.

Five of the potential risks identified:
  1. Loss of privacy
  2. Development of AI-powered weapons
  3. AI causing harm unintentionally or indirectly
  4. Computers turning malevolent
  5. Robots replacing humans as the rulers of the planet
OK, so that last one is a worst-case scenario, but wouldn’t you rather be safe than sorry?

Ethics and AI. 

Concern over the development of AI has led to the creation of organisations like OpenAI and the Partnership on AI. OpenAI’s stated goal is "to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

The Partnership on AI, whose founding members include Amazon, Facebook, Google, IBM and Microsoft, seeks to support research and recommend best practices, advance public understanding and awareness of AI, and create an open platform for discussion and engagement.


At the 2017 Asilomar conference, experts agreed on a core set of principles to govern AI development. One key principle, for example, demands that AI be developed in accordance with human values, something that will be very hard to put into practice. Another stipulates that the economic prosperity created by AI should be shared broadly, to benefit all of humanity. But what does that mean?

Should we fear AI? Perhaps, but not just yet. While AI significantly diminishes the risks of information overload, selective perception and confirmation bias, it is still possible for humans to make bad decisions about what we permit machine intelligence to build.

Moore’s Law observes that transistor counts, and with them computing power, double roughly every two years. Could computers’ intelligence eventually surpass our own? Will they succeed in eliminating bias? Or will they, like humans, develop biases of their own? Fortunately for us, until machines can feel emotion or think creatively, we have nothing to fear.
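As a back-of-the-envelope illustration of that doubling, here is what "twice the computing power every two years" compounds to in Python (illustrative arithmetic only, not a forecast):

# Purely illustrative compounding under a doubling-every-two-years assumption
for years in (2, 10, 20, 40):
    print(f"{years:>2} years -> {2 ** (years / 2):,.0f}x relative capability")

Over a 40-year horizon that compounds to a million-fold increase, which helps explain why predictions about machine intelligence range so widely.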

Alyce Brierley
Current student in the Master of Marketing program at the University of Sydney Business School.
