The shift towards specialist chatbots has highlighted how vulnerable organisations are to decision distortions and deceptions arising from the way their people work deep within the organisation.
This vulnerability stems from inherent weaknesses in repetitive tasks that rely on individuals making accurate decisions. The scope is extensive, because decisions are embedded in the very fabric of work activities, such as those performed by people handling any combination of the following tasks when they:
Each of these activities involves contextual knowledge to support making informed decisions.
Sometimes there is a single decision with limited options: yes, no or not sure. Other times, decisions flow in different directions because of the complex permutations of choices, pathways and outcomes involved.
By their very nature, decisions sometimes go wrong simply because of human shortcomings. Behavioural economics shows that any decision with an element of risk is subject to human biases.
Let’s take a simple case: a service agent pressurises a customer to give a rating of at least 8 out of 10 when they receive a survey, otherwise, the agent claims, they could get into ‘trouble’. This is a clear attempt to distort a customer’s decision. The consequence is that it can result in two types of deception:
This simple case is a powerful illustration of how employee behavioural incentives can result in decision distortions and deceptions that contaminate the brand, strategy and tactics of the organisation.
The vulnerability to decision distortions and deceptions is pervasive, because it affects decision-making processes embedded in activities deep within the organisation. For example, most workflows include one or more data input forms that require decisions to be made as part of the input. Typically, such forms are supported by procedures to guide the decision-making, but the absence of a decision audit trail means there is no transparency or traceability of the actual decision process applied. Frequently, these forms are completed by people using subjective judgement based on their experience. Flawed decisions can lead to unintended consequences that hit revenues, costs and risks, and can sometimes contaminate the brand. Misselling of regulated products is a common example.
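To make the idea of a decision audit trail concrete, here is a minimal sketch (not taken from any particular product; all names and fields are illustrative assumptions) of what a single audit-trail record for a workflow input form might capture, so that a subjective decision can later be traced back to the person, the procedure followed and the stated rationale:

```python
# Illustrative sketch only: field names and example values are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    form_id: str        # which input form the decision belongs to
    field_name: str     # the form field that required a judgement call
    decision: str       # e.g. "yes", "no", "refer"
    decided_by: str     # the person (or chatbot) making the decision
    procedure_ref: str  # the guidance the decision claims to follow
    rationale: str      # free-text justification kept for traceability
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: recording a subjective suitability judgement on a product form.
record = DecisionRecord(
    form_id="suitability-check-0042",
    field_name="product_suitable",
    decision="yes",
    decided_by="agent:j.smith",
    procedure_ref="SOP-17, section 3.2",
    rationale="Customer income and risk profile meet the product criteria.",
)
print(record)
```

Even a record this simple gives an auditor something the form alone cannot: who decided, against which procedure, and why.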
As knowledge grows in complexity, poor-quality decisions are amplified in the way people work. Leaders should first target the crucial decision-making processes that are most vulnerable, especially those exposed to negligence, errors, false positives, false negatives and handoffs. With these priorities set, the right tools make it relatively easy to identify the weaknesses within the decision flow. From there, it is a relatively small step to rapidly prototype chatbots that strengthen the decision flow and deliver conversations-as-a-service, with the full benefit of transparency and traceability.
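As a rough sketch of what such a prototype might look like (the flow, questions and node names below are invented for illustration, not a description of any specific tool), a chatbot decision flow can be expressed as a small graph where every question, answer and branch taken is logged, and that log becomes the transparent, traceable record of how the outcome was reached:

```python
# Illustrative prototype only: the flow and its questions are assumptions.
decision_flow = {
    "start": {
        "question": "Is the customer's identity verified?",
        "yes": "check_eligibility",
        "no": "refer_to_specialist",
    },
    "check_eligibility": {
        "question": "Does the customer meet the eligibility criteria?",
        "yes": "outcome_approve",
        "no": "outcome_decline",
    },
}


def run_flow(answers: dict) -> tuple:
    """Walk the flow with the given answers; return the outcome and an audit trail."""
    node, trail = "start", []
    while node in decision_flow:
        step = decision_flow[node]
        answer = answers.get(node, "no")  # default to the cautious path
        trail.append({"node": node, "question": step["question"], "answer": answer})
        node = step[answer]
    return node, trail


outcome, audit_trail = run_flow({"start": "yes", "check_eligibility": "yes"})
print(outcome)            # outcome_approve
for step in audit_trail:  # every decision point and the answer that was given
    print(step)
```

The point of the sketch is the audit trail: because every branch is recorded, the decision flow can be reviewed, challenged and improved rather than left to undocumented judgement.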
Organisations cannot afford to ignore the human factor deep within their decision-making processes, where many activities lead to non-productive interactions.