Onboarding AI into an organization in phases and addressing privacy and autonomy concerns can increase the acceptance of artificial intelligence by wary employees.
Artificial intelligence is both appreciated and feared: appreciated for the speed, accuracy and consistency it offers; feared because many believe those same benefits will render human employees obsolete, a fear summarized in the familiar phrase, “Machines will take our jobs.”
Studies show that such fears breed employee resistance and suspicion, which undermine the effectiveness of AI technology when it is newly introduced into an organization. Based on their own research and others’, four academics from universities in Canada, France and the U.S. propose a four-phase approach to “onboarding” AI into an organization. The approach parallels the onboarding of new employees: new hires are given the time and opportunity to become familiar with the company’s people, processes and goals, while the organization gains trust in them and gradually gives them more responsibility.
The four AI onboarding phases proposed by the researchers unfold as follows:
The Assistant. The first onboarding phase is to use AI as an “assistant.” A new employee is often given assistant work first, taking on basic but time-consuming tasks, learning on the job while freeing current employees to focus on more important work. As with human employees, giving AI assistant tasks lets current employees work with AI in a low-risk situation, slowly gaining confidence in, and comfort with, the new arrival. One AI assistant job, for example, is sorting data: humans set the criteria and AI sorts the incoming data, a task that, given the volumes often involved, AI can fulfil more efficiently than people.
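To make the division of labour concrete, here is a minimal sketch of such a criteria-based sorting assistant. Everything in it (the ticket fields, bucket names and rules) is a hypothetical illustration, not a detail from the research:

```python
# Hypothetical criteria-based sorting assistant: humans author the
# criteria as simple predicates; the assistant applies them at scale.

def make_sorter(criteria):
    """criteria: dict of bucket name -> predicate, written by humans."""
    def sort(records):
        buckets = {name: [] for name in criteria}
        buckets["unsorted"] = []  # anything no rule claims goes back to a human
        for record in records:
            for name, matches in criteria.items():
                if matches(record):
                    buckets[name].append(record)
                    break
            else:  # no criterion matched
                buckets["unsorted"].append(record)
        return buckets
    return sort

# Humans set the criteria once; the assistant handles the volume.
sort_tickets = make_sorter({
    "urgent":  lambda t: t["priority"] >= 8,
    "billing": lambda t: "invoice" in t["subject"].lower(),
})

incoming = [
    {"priority": 9, "subject": "Server down"},
    {"priority": 2, "subject": "Invoice query"},
    {"priority": 1, "subject": "Password reset"},
]
print(sort_tickets(incoming))
# "Server down" lands in urgent, "Invoice query" in billing,
# and the unmatched "Password reset" is routed back to a person.
```

The point of the pattern is that humans keep authorship of the criteria; the assistant only applies them at volume, and anything the rules cannot place is returned to a human.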
The Monitor. The monitoring capability of AI is familiar to anyone who uses a computer: write in an email that something is attached, try to send it without the attachment, and the email program warns you that nothing is attached. The same process can be applied to far more sophisticated (and consequential) decisions. The researchers use the example of AI flagging a portfolio manager’s decision to make an investment that significantly raises the overall portfolio risk, in essence an “are you sure?” warning that lets the manager reconsider the decision. For employees to accept this monitoring role, however, they must be allowed to override the warning; after all, the portfolio manager may have information, unavailable to the firm’s AI, that justifies the decision.
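A sketch of this flag-and-override pattern might look like the following. The risk measure (a weighted average of position volatilities) and the threshold are placeholder assumptions; a real monitor would plug in the firm’s own risk model:

```python
# Hypothetical flag-and-override monitor for portfolio decisions.
# The risk measure here is a toy stand-in for a firm's real model.

RISK_INCREASE_THRESHOLD = 0.02  # flag trades raising risk by more than this

def portfolio_risk(positions):
    """positions: list of (weight, volatility) pairs."""
    return sum(w * vol for w, vol in positions)

def review_trade(positions, proposed_positions, override=False):
    before = portfolio_risk(positions)
    after = portfolio_risk(proposed_positions)
    if after - before > RISK_INCREASE_THRESHOLD and not override:
        # The monitor flags but never blocks outright: the human can
        # pass override=True if they have information the model lacks.
        raise Warning(
            f"Trade raises portfolio risk from {before:.3f} to {after:.3f}. "
            "Are you sure? Re-submit with override=True to proceed."
        )
    return proposed_positions

current  = [(0.6, 0.10), (0.4, 0.15)]   # risk = 0.120
proposed = [(0.6, 0.10), (0.4, 0.25)]   # risk = 0.160, +0.040 -> flagged

try:
    review_trade(current, proposed)
except Warning as w:
    print(w)                                     # manager reconsiders...
review_trade(current, proposed, override=True)   # ...or overrides
```

The design choice that matters is that the override is a first-class path through the system, not an exception to it: the warning informs the decision rather than taking it away.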
The Coach. Traditionally, performance reviews are periodic, often annual: expectations are set at the beginning of the period, outcomes are analysed at the end, and feedback is offered. AI can generate feedback on an ongoing basis by continuously monitoring decisions and highlighting patterns, variations or errors. The researchers note, for example, that portfolio managers might receive feedback, based on AI analysis of their investment decisions, highlighting tendencies toward risk aversion or overconfidence. Because coachbots involve more sophisticated AI technology, humans can not only override the AI but actually “teach” it to adjust its feedback: repeated overrides of a certain criterion, for example, might prompt the AI to drop that criterion from future feedback.
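The “teachable” part of the coachbot can be pictured with a sketch like this one, in which repeated overrides retire a rule from future feedback. The rules, the decision fields and the override limit are illustrative assumptions, not details from the research:

```python
# Hypothetical coachbot: gives continuous feedback and "learns" from
# overrides, dropping any rule the human has rejected often enough.

OVERRIDE_LIMIT = 3  # rules overridden this many times stop firing

class CoachBot:
    def __init__(self, rules):
        self.rules = rules                         # name -> predicate
        self.overrides = {name: 0 for name in rules}

    def feedback(self, decision):
        """Return the names of all active rules this decision triggers."""
        return [name for name, rule in self.rules.items()
                if self.overrides[name] < OVERRIDE_LIMIT and rule(decision)]

    def override(self, rule_name):
        """Human rejects a piece of feedback; the bot adjusts."""
        self.overrides[rule_name] += 1
        if self.overrides[rule_name] == OVERRIDE_LIMIT:
            print(f"Retiring rule {rule_name!r} from future feedback.")

coach = CoachBot({
    "risk_averse":   lambda d: d["risk"] < 0.05,          # flags very safe picks
    "overconfident": lambda d: d["position_size"] > 0.25, # flags outsized bets
})

decision = {"risk": 0.02, "position_size": 0.30}
print(coach.feedback(decision))    # ['risk_averse', 'overconfident']
for _ in range(3):
    coach.override("risk_averse")  # manager keeps rejecting this feedback
print(coach.feedback(decision))    # ['overconfident']
```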
The Teammate. The most sophisticated application of AI, and one organizations have not yet adopted, is to use AI as a “thinking” teammate. This fourth onboarding phase is based on the concept of the “extended mind”: the expertise acquired by the human mind is coupled with “expertise” acquired by the AI. The source of the machine’s expertise is input from humans, accumulated, analysed and processed into knowledge.
While the fourth phase of the researchers’ AI onboarding framework is still on the horizon (although the technology already exists), many companies have experimented with AI assistants, and some have incorporated the phase 2 and phase 3 functions of monitoring and coaching. That said, the researchers do not underestimate the challenge of winning employee acceptance of AI, even with a phased onboarding approach. They expect AI to create jobs on net over time, but in the meantime it may replace some human work functions.
Beyond the fear of job loss, acceptance of AI is undermined by employee distrust. Companies can, however, take important steps to increase employee trust in machines, the researchers argue. One step is to address privacy concerns, for example by separating the engineering function from management: AI collects data on decisions, and employees will be reluctant to interact with the system if they fear their mistakes will be funnelled to their managers. A second step is transparency: employees should be involved in developing the system so they understand why the AI reacts as it does. Finally, autonomy is key; employees must feel in control. AI may flag a decision, but the employee must be allowed to override the warning. AI may offer feedback, but on the employee’s schedule, and the employee may respond to it. If autonomy is lost, employees will view AI as an opponent or even as another boss: a sure-fire recipe for continued resistance.