Responsible AI: The Human-Machine Symbiosis

Sal Cucchiara, CIO & Head Of Wealth Management Technology, Morgan Stanley

Over 20 years ago, a supercomputer beat world chess champion Garry Kasparov. That moment helped plant the seeds for the era of big data in which we live today, and it marked a critical cultural turning point: it suggested that a machine could, after all, outsmart a human.

While Kasparov initially expressed skepticism about the computer’s methods and its intelligence, he has more recently changed his tune, crediting the power of artificial intelligence (AI) and advocating for a symbiotic relationship between humans and machines. “At the end of the day,” he argued, “it is for us to even explain when something is successful. It is still for us to define success and machines to perform their duty.” His point underscores the significance of our human role in defining and creating the knowledge base, the logic, and the authority that we empower our AI systems to wield.

What does this mean for those of us who create AI systems in the era of big data, when consumers rightfully expect and demand that we leverage that data responsibly and accurately? To earn consumer trust and deliver high-quality products and services, AI systems need to maintain a high degree of data integrity, effectively enable their end-users, and employ responsible-use safeguards.

DATA INTEGRITY

Data integrity sits at the core of AI systems and is the foundation of client trust. Consumers hold machines to higher standards of accuracy than they hold humans; however, all machine intelligence is derived from human inputs. Our AI systems can deliver this accuracy only if we keep an exacting grip on how our data is identified, collected, maintained, and integrated into our systems.

"While a common myth conflates machine learning (ML) with AI itself, ML is merely the tool that renders systems artificially intelligent"

The first step to achieving this is effective data curation: identifying the most authoritative, trustworthy source for each piece of data, and then structuring the data so that it is easily accessible and free of ambiguity. We then need robust knowledge management practices to ensure that information remains continuously up-to-date; we must account, for example, for every event that impacts our data, from world events to regulatory changes to individual client events. Finally, AI systems must have built-in feedback loops, both to give technologists visibility into how the system interacts with end-users and to let end-users rate the accuracy of the answers the system provides.
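
To make this concrete, here is a minimal Python sketch of the bookkeeping such a pipeline implies. The names (CuratedFact, FeedbackLog), the staleness check, and the review threshold are all illustrative assumptions for exposition, not a description of any production system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical record pairing each curated fact with its single
# authoritative source and a review timestamp, so staleness is detectable.
@dataclass
class CuratedFact:
    key: str                  # unambiguous identifier, e.g. "client.employer"
    value: str
    source: str               # the authoritative source for this fact
    last_reviewed: datetime   # timezone-aware review timestamp

    def is_stale(self, max_age: timedelta) -> bool:
        return datetime.now(timezone.utc) - self.last_reviewed > max_age

# Hypothetical feedback loop: end-users rate answers, and facts whose
# accuracy ratings fall below a threshold are flagged for re-curation.
@dataclass
class FeedbackLog:
    ratings: dict = field(default_factory=dict)  # fact key -> list of bools

    def record(self, key: str, accurate: bool) -> None:
        self.ratings.setdefault(key, []).append(accurate)

    def flagged_for_review(self, threshold: float = 0.8) -> list:
        return [key for key, votes in self.ratings.items()
                if sum(votes) / len(votes) < threshold]
```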

Beyond the answers themselves, end-users also tend to demand the reasoning behind those answers. If a client’s request to execute a trade is denied, for example, he or she will generally want to know why. To engender trust, AI systems must therefore also support model explainability, providing evidence for their responses and actions. If a system flags a transaction as fraudulent, it should be able to show the evidence behind the detection. If a chatbot answers a question about stock prices, it should be able to link to its source. When an AI system supplies evidence alongside its response or actions, users can trust it more completely, and can also use the feedback loop to indicate whether the evidence itself makes sense.
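
As a rough illustration of this pattern, an answer can carry its evidence and confidence as first-class fields. The ExplainedAnswer structure and flag_fraud helper below are hypothetical, sketched only to show every response traveling with its supporting evidence:

```python
from dataclasses import dataclass

# Hypothetical response envelope: every answer the system returns
# carries the evidence behind it and the system's own confidence.
@dataclass
class ExplainedAnswer:
    answer: str
    evidence: list        # e.g. links to authoritative source documents
    confidence: float     # 0.0 to 1.0

def flag_fraud(transaction_id: str, score: float, signals: list) -> ExplainedAnswer:
    # Illustrative helper: package a fraud detection together with the
    # signals that triggered it, so a client or analyst can see why.
    return ExplainedAnswer(
        answer=f"Transaction {transaction_id} flagged as potentially fraudulent.",
        evidence=signals,
        confidence=score,
    )
```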

END USER ENABLEMENT

The goal of AI, at its core and across industries, should be to help users optimize the work they do, whether by helping them solve problems more efficiently or by taking on tactical work so they can think and operate more strategically.

One way we’re using AI at Morgan Stanley is to support our financial advisors in managing their client relationships. On one level, we’re doing this by employing AI to automate time-consuming manual tasks, which makes our branch staff considerably more efficient and allows our financial advisors to reinvest their time in building client relationships.

We’re also using AI to help our financial advisors build these very relationships more strategically through our Next Best Action (NBA) platform. NBA optimizes financial advisors’ daily activities with prioritized, instantly actionable recommendations, using a task-ranking algorithm that lets advisors dedicate their time to the most valuable tasks at any given moment. For example, NBA might notify a financial advisor of a client’s life event, such as a job change, and prompt the advisor to reach out promptly.

Recommendations are ranked in terms of predicted value, the likelihood that the financial advisor and client will act, clients’ optimal contact schedules, and indicators of potential attrition. NBA’s integration with client relationship management applications enables financial advisors to execute recommendations with scale and ease: it takes just a few clicks to initiate bulk client engagement activities, such as emailing cybersecurity recommendations or executing on an investment idea that many clients qualify for. The platform continuously learns by tracking which actions financial advisors enact versus ignore, and leverages this learning to improve future suggestions.
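
To illustrate what such a ranking could look like, the sketch below blends the four signals named above into a single priority score. The field names, weights, and linear blend are assumptions made for exposition; the article does not disclose NBA’s actual algorithm, and in practice the weights would be learned from which recommendations advisors enact rather than hand-set:

```python
from dataclasses import dataclass

# Hypothetical recommendation record carrying the ranking signals
# described above; all field names and weights are illustrative.
@dataclass
class Recommendation:
    client: str
    action: str
    predicted_value: float   # expected benefit of acting
    act_likelihood: float    # estimated probability advisor and client act
    contact_fit: float       # fit with the client's optimal contact schedule
    attrition_risk: float    # indicator that the client may leave

def rank_score(r: Recommendation) -> float:
    # Simple weighted blend; a production system would learn these
    # weights from enacted-versus-ignored feedback.
    return (0.4 * r.predicted_value
            + 0.3 * r.act_likelihood
            + 0.1 * r.contact_fit
            + 0.2 * r.attrition_risk)

def prioritize(recs: list) -> list:
    # Highest-priority recommendations first.
    return sorted(recs, key=rank_score, reverse=True)
```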

These are just a few examples from the wealth management side of our business.

RESPONSIBLE USE

AI systems should be built with a certain level of self-awareness, to ensure that they are only used for tasks they can competently handle and accurately deliver on. To facilitate this, we build algorithms into our systems that constantly score the system’s confidence in responding to requests. Confidence scoring helps prevent our AI systems from providing erroneous information to our clients: complex or ambiguous questions receive lower confidence scores, signaling that a human is better suited to handle the task.
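
A minimal sketch of this kind of confidence-based routing follows. The threshold value and the model interface are assumptions made for illustration; the point is simply that low-confidence requests fall through to a human rather than risking an erroneous machine answer:

```python
# Illustrative cut-off; in practice this would be tuned per use case.
CONFIDENCE_THRESHOLD = 0.75

def handle_request(question: str, model) -> str:
    # Assumed model interface: returns an answer and a confidence score.
    answer, confidence = model.predict(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    # Below threshold: a human is better suited to handle the task.
    return escalate_to_human(question)

def escalate_to_human(question: str) -> str:
    # Placeholder: a real system would queue the request for staff review.
    return "This request has been routed to a specialist for review."
```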

Effective and responsible AI also necessitates a vigilant safeguarding of client data. To protect our clients, we maintain robust physical, electronic, and procedural safeguards that are designed to guard client information against misuse or unauthorized access.

While a common myth conflates machine learning (ML) with AI itself, ML is merely the tool that renders systems artificially intelligent. Humans still need to teach the machine how to learn, and ultimately, a system is only ever as intelligent as the data that underpins it. Our role and our responsibility in creating successful AI systems thus remain central, ongoing, and necessarily human.
