
Conversation-as-a-Service: decision distortions and deceptions

The shift towards specialist chatbots has highlighted how vulnerable organisations are to decision distortions and deceptions embedded in the way their people work deep within the organisation.

 

This vulnerability stems from inherent weaknesses in repetitive tasks that rely on individuals making accurate decisions. The scope is extensive, as decisions are woven into the very fabric of work: they arise whenever people perform any combination of the following activities:

  1. accredit
  2. analyse
  3. advise
  4. appraise
  5. approve
  6. assess
  7. audit
  8. check
  9. complain
  10. comply
  11. diagnose
  12. escalate
  13. feedback
  14. guide
  15. handoff
  16. identify
  17. improve
  18. instruct
  19. learn
  20. notify
  21. qualify
  22. recommend
  23. reconcile
  24. regulate
  25. reject
  26. respond
  27. resolve
  28. safeguard
  29. secure
  30. select
  31. standardise
  32. train

 

Each of these activities draws on contextual knowledge to support informed decisions.

 

Sometimes there is a single decision with limited options, such as yes, no or not sure. Other times, decisions branch in different directions because of the complex permutations of choices, pathways and outcomes involved.
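For illustration, both cases can be sketched in a few lines of Python: a fixed set of verdicts (yes, no, not sure) and a branching flow that routes each verdict to a next step. The step names and routes below are hypothetical, not drawn from any real workflow.

```python
from enum import Enum

class Verdict(Enum):
    YES = "yes"
    NO = "no"
    NOT_SURE = "not sure"

# A branching decision flow: each node names a hypothetical check and
# routes to a next step depending on the verdict given.
DECISION_FLOW = {
    "qualify_lead": {
        Verdict.YES: "approve",
        Verdict.NO: "reject",
        Verdict.NOT_SURE: "escalate_to_supervisor",
    },
    "escalate_to_supervisor": {
        Verdict.YES: "approve",
        Verdict.NO: "reject",
        Verdict.NOT_SURE: "request_more_information",
    },
}

def route(step: str, verdict: Verdict) -> str:
    """Return the next step for a given decision outcome."""
    return DECISION_FLOW[step][verdict]
```

Even this toy flow shows how quickly permutations multiply: two nodes with three verdicts each already yield several distinct pathways, and real workflows have many more of both.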

 

By their very nature, decisions sometimes go wrong simply because of human shortcomings. Behavioural economics shows that any decision with an element of risk is subject to human biases.

 

Let’s take a simplistic case, where a service agent pressures a customer into giving a rating of at least 8 out of 10 on the post-service survey, claiming they could otherwise get into ‘trouble’. This is a clear attempt to distort a customer decision, and it can produce two types of deception:

  1. If this type of behaviour is repeated frequently across multiple service agents, the contaminated customer insights can distort strategic decisions taken in good faith. If the distorted insights are reported to stakeholders, the deception is amplified.
  2. The service agent’s customer insights are used to appraise their performance, which feeds into bonuses, salary increases and even promotion to supervisor. An inappropriate promotion risks amplifying the distortion and deception, as the new supervisor puts the same pressure on their own team of service agents.

 

This simplistic case is a powerful illustration of how employee behavioural incentives can result in decision distortions and deceptions that contaminate the brand, strategy and tactics of the organisation.

 

The vulnerability of decision distortions and deceptions is pervasive, because it affects decision-making processes embedded in activities deep within the organisation. Consider, for example, that most workflows include one or more data input forms requiring decisions to be made as part of the input. Typically, these forms are supported by procedures that guide the decision-making, but the absence of a decision audit trail means there is no transparency or traceability of the actual decision process applied. Frequently, these forms are completed by people using subjective judgement based on their experience. Flawed decisions can lead to unintended consequences that damage revenues, costs and risk exposure, and sometimes contaminate the brand. Mis-selling of regulated products is a common example.
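One way to close the transparency gap described above is to capture a decision audit trail alongside each form submission, so the decision, its rationale and its author are traceable later. A minimal sketch, with hypothetical field names and an in-memory list standing in for durable storage:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in a hypothetical decision audit trail."""
    form_id: str        # which data input form the decision belongs to
    question: str       # the decision point as presented to the person
    decision: str       # what they decided
    rationale: str      # why they decided it -- this is what is usually lost
    decided_by: str
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# In practice this would be durable, append-only storage; a list keeps
# the sketch self-contained.
audit_trail: list[DecisionRecord] = []

def record_decision(form_id: str, question: str, decision: str,
                    rationale: str, decided_by: str) -> DecisionRecord:
    rec = DecisionRecord(form_id, question, decision, rationale, decided_by)
    audit_trail.append(rec)
    return rec
```

The key design choice is forcing a rationale field at the moment of decision: subjective judgement is still allowed, but it is no longer invisible.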

 

As knowledge grows in volume and complexity, poor-quality decisions in the way people work are amplified. Leaders should first target the crucial decision-making processes that are most vulnerable, especially those exposed to negligence, errors, false positives, false negatives and handoffs. With these priorities set, the right tools make it relatively easy to identify the weaknesses within the decision flow. From there, it is a small step to rapidly prototype chatbots that strengthen the decision flow and deliver conversations-as-a-service, with the full benefit of transparency and traceability.
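A rapid prototype along these lines can be as simple as a scripted question flow that records every answer, which is where the transparency and traceability come from. The script, step names and answers below are illustrative assumptions, not any real product’s API:

```python
# Minimal sketch of a conversation-as-a-service decision flow: the bot asks
# scripted questions, routes on each answer, and keeps a transcript that
# doubles as the decision audit trail.
SCRIPT = {
    "start": ("Does the customer meet the eligibility criteria?",
              {"yes": "suitability", "no": "reject", "not sure": "escalate"}),
    "suitability": ("Is the product suitable for the customer's risk profile?",
                    {"yes": "approve", "no": "reject", "not sure": "escalate"}),
}
TERMINAL = {"approve", "reject", "escalate"}

def run_conversation(answers: list[str]) -> tuple[str, list[tuple[str, str]]]:
    """Drive the scripted flow with a list of answers.

    Returns the final outcome and the transcript of (question, answer)
    pairs -- the traceable record of how the outcome was reached.
    """
    step, transcript = "start", []
    for answer in answers:
        question, routes = SCRIPT[step]
        transcript.append((question, answer))
        step = routes[answer]
        if step in TERMINAL:
            break
    return step, transcript
```

Because every outcome arrives with its transcript, a reviewer can always reconstruct which questions were asked and which answers drove the result, which is exactly what the subjective form-filling it replaces cannot offer.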

 

Organisations cannot afford to ignore the human factor deep within their decision-making processes, where many everyday activities lead to non-productive interactions.

 

  

 

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.

