
Voice recognition Part 2

Gary Wright (see https://www.finextra.com/blogs/fullblog.aspx?blogid=2447) sat and listened to a presentation that I made on Tuesday and wrote up a piece on his blog. For reasons far beyond our understanding (a technical issue, we suspect), I have been unable to post my reply to a very extensive comment on his blog site, so I apologise for this unconventional approach to blogging; if you click on the link above you will be able to follow the thread. My comments below relate to the comments by Stephen Wilson, who was questioning and highlighting some of the challenges that you are likely to experience if you don't have a $10M budget and an R+D team, which fortunately we have had available within our company. I also guess that the unconventional use of the blog system to post this reply clearly demonstrates that all technology is, of course, going to improve :)

So to follow on from the introduction above, read on; or if you want a bit of background into what we have been doing with voice signatures and Voicepay, you can just read this.

Stephen's comments are very interesting, and to be clear, as I said at the meeting, no biometric system, including DNA and fingerprints, has been tested on everyone, but the probability is that each is unique. All biometric systems have their own individual challenges as well, and for those who heard me talking on Tuesday morning but not on Tuesday night: laryngitis is a serious software bug for voice biometric systems!

We have developed a highly complex system and have gone through all the challenges of false accepts and false rejects to get to a system that gives us a 99.6% FTA (first time authorisation) rate. Our platform uses what we call a "voice signature": a complex device that uses voice biometrics as one part of an overall score which then approves or declines a transaction. We have probably, like Stephen, gone through discussions with various "voice biometric software vendors" who, rather than delivering a system that works, attempt to pass the buck onto the customer to set FARs, EERs, FRRs and whatever other acronym their marketing department happens to have invented at the time, to try and make their software sound more complex. Our biometric verification core is developed on the Nuance platform, with whom we have a very close development and working relationship, and that platform is working 24x7x365 within 300 organisations today. Some of the issues that Stephen raises are also the reasons why, last year, Voice Commerce Group started to establish a framework for global interoperability on voice signatures within financial services, and why we are members of PCI.
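To make the idea of a composite score concrete, here is a minimal sketch of how a decision of this kind might be structured, with the biometric match score as one input among several risk signals. Every signal name, weight and threshold below is a hypothetical illustration, not our production logic:

```python
# Illustrative sketch only: a composite "voice signature" style decision in
# which the voice biometric score is one input among several risk signals.
# All names, weights and thresholds here are hypothetical, not Voicepay's.

def approve_transaction(voice_score: float,
                        device_known: bool,
                        amount_risk: float) -> bool:
    """Blend a biometric match score (0..1) with other signals into one
    overall score, then approve or decline against a single cut-off."""
    overall = (0.7 * voice_score                       # biometric evidence
               + 0.2 * (1.0 if device_known else 0.0)  # familiarity signal
               + 0.1 * (1.0 - amount_risk))            # low-risk amounts help
    return overall >= 0.75                             # hypothetical cut-off

# Example: strong voice match, known device, medium-risk amount -> approved
print(approve_transaction(voice_score=0.92, device_known=True, amount_risk=0.4))
```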

In 2001, whilst CEO at WorldPay, I convinced my Board that we should guarantee Internet payments, which we did, and this protection is still in place today. Today, with Voicepay and our voice signature system, we are again using our own systems for our own payment processing services, and we guarantee our transactions against repudiation. Over the next few years we will no doubt create improvements and refinements to our systems, which is also why we have two R+D teams working six days a week on our future technologies.

Sometimes if you don't do, you can't learn, and every day we learn. Having our own global customer base from which to draw experience gives us a lead: we know the solution works, we trust it, and we underwrite it financially. And if we find from that real experience that some changes are required, we will be the first to make them.


Comments: (3)

Stephen Wilson - Lockstep Consulting, Sydney | 31 January 2009, 23:28

Thanks Nick.  It's good to be able to continue this thread.

The figure of 99.6% "first time authorisation" sounds good. Can you provide details of how that's measured, please?  For comparison's sake, what's the Equal Error Rate (which the FBI reckons is 2% best case for voice)?  And is it measured under the "Zero Effort Imposter" assumption?
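(For readers following the jargon: the Equal Error Rate is the operating point at which the False Accept Rate and False Reject Rate are equal. A minimal sketch, with made-up scores, of how it is typically estimated from genuine and imposter trials:)

```python
# A minimal sketch of how an Equal Error Rate is estimated from trial scores.
# 'genuine' are match scores from true users; 'imposter' are zero-effort
# imposter scores (other people's samples). All numbers are made up.

genuine  = [0.91, 0.85, 0.78, 0.95, 0.88, 0.73, 0.90, 0.82]
imposter = [0.41, 0.55, 0.33, 0.62, 0.48, 0.70, 0.29, 0.51]

def rates(threshold):
    far = sum(s >= threshold for s in imposter) / len(imposter)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    return far, frr

# Sweep the threshold; the EER sits where the two error rates cross.
eer_threshold = min((t / 100 for t in range(101)),
                    key=lambda t: abs(rates(t)[0] - rates(t)[1]))
far, frr = rates(eer_threshold)
print(f"EER threshold {eer_threshold:.2f}: FAR {far:.1%}, FRR {frr:.1%}")
```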

Anyway, I am not trying to drag us into specmanship.  Rather, I am keen to debate the 'fine print' for the sake of policy people and non technologists who are making important decisions, without properly comprehending how biometrics work, and how and why they fail.  I'm sure that good progress will continue to be made on most biometrics, and I agree they have a role to play in some applications.  But I believe that use of the term "unique" may mislead people, perhaps inadvertently, perhaps not.  It doesn't quite gel to claim on the one hand that a certain trait is "unique" while on the other hand, real life performance of the biometric apparatus falls short of 100%.  Sometimes, it falls way short.

[For instance, only recently, former US Homeland Security chief Michael Chertoff, in a speech about identity technology, stated without qualification that “your fingerprint is unique”.  Even if that were true (and some researchers actually think it’s a myth) the fact is that fingerprint scanners don’t have the precision to deliver “uniqueness”.  The UK Government CESG testing I mentioned last time showed typical False Accept Rates of 1 in a thousand.  That sounds nearly unique, but the corresponding False Reject Rates for two of the scanners tested were 10% and 25%, which would render them unusable in automatic banking.  So in practice, the False Accept Rate will have to be de-tuned, maybe to 1 in a hundred, where the False Reject Rate is still worryingly high, between 5% and 15%. Getting this tradeoff right in unattended banking security applications will involve tough choices, hopefully made transparently.]
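(As a toy demonstration of that tradeoff, with invented score distributions rather than CESG data, watch both error rates move in opposite directions as the threshold is de-tuned:)

```python
# A toy demonstration of the operating-point tradeoff: tightening the
# threshold drives false accepts down but false rejects up, and de-tuning
# does the reverse. The score distributions are invented, not CESG data.

import random
random.seed(1)

genuine  = [random.gauss(0.80, 0.10) for _ in range(100_000)]  # true users
imposter = [random.gauss(0.45, 0.12) for _ in range(100_000)]  # zero-effort

for threshold in (0.75, 0.65):  # a "strict" and a "de-tuned" operating point
    far = sum(s >= threshold for s in imposter) / len(imposter)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    print(f"threshold {threshold:.2f}: FAR {far:.4%}, FRR {frr:.1%}")
```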

Is my criticism of abuse of the word "unique" mere pedantry?  I don't think so.  We're talking about security, and the first thing anyone needs to know -- as Nick has properly reinforced -- is that no security is perfect.  Yet loose talk around "uniqueness" is giving all sorts of lay people (policy makers, politicians, bankers ...) a false sense of security. “Uniqueness” just doesn’t manifest in practical biometric performance.  To stop systems falsely rejecting unreasonably high numbers of genuine customers, the False Match Rate has to be compromised.  Uniqueness is deliberately sacrificed by engineers, while the marketing departments continue to bang on about it.

Worse than this, published error rates for biometrics are chronically optimistic.  As I mentioned previously, biometric performance measurement almost always uses the “zero effort imposter” assumption, which causes systemic over-estimation of their strength.  False Match Rates are worked out by counting accidental matches, and do not look at scenarios where someone is trying to get matched falsely.
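(A sketch of what a zero-effort measurement actually counts; the comparator scores here are randomly invented:)

```python
# Sketch of a zero-effort False Match Rate measurement: compare every
# non-mated pair of enrolled users and count the accidental matches. No
# trial here involves anyone actively trying to imitate a target, which
# is the blind spot described below. Scores are randomly invented.

import random
from itertools import combinations

random.seed(7)
users = [f"user{i}" for i in range(200)]
threshold = 0.999

pairs = list(combinations(users, 2))           # all non-mated pairs
scores = [random.random() for _ in pairs]      # stand-in comparator output
false_matches = sum(s >= threshold for s in scores)
print(f"zero-effort FMR: {false_matches}/{len(pairs)} "
      f"= {false_matches / len(pairs):.4%}")
```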

The recent FBI report, State of the Art Biometrics Excellence Roadmap, Technology Assessment (October 2008), says:

The intentional spoofing or manipulation of biometrics invalidates the “zero effort imposter” assumption commonly used in performance evaluations. When a dedicated effort is applied toward fooling biometrics systems, the resulting performance can be dramatically different. (see Vol 1, page 1.4).

And

For all biometric technologies, error rates are highly dependent upon the population and application environment. The technologies do not have known error rates outside of a controlled test environment. Therefore, any reference to error rates applies only to the test in question and should not be used to predict performance in a different application. (see Vol 1, page 2.10).

In other words, the stated performance specifications of biometric solutions don’t tell us much about how well they stand up to criminal attack.  This failing is nothing short of remarkable.  Shouldn’t resistance to would-be robbers be top of mind when commissioning banking system security?

Cheers,

Stephen.
Elton Cane (blog group founder) - News Corp Australia, Brisbane | 02 February 2009, 10:35

Regarding the technical issue, we're looking into it to see if there are any problems. But I'd just like to remind community members that commenting on a blog is a two-step process: once you've written the comment in the box provided, the button at the bottom is labelled Save. Click this and your comment is then displayed to you as a type of preview, but you then need to click the Publish button below the preview to actually make it live.

A Finextra member | 03 February 2009, 14:12

Indeed - the idea is you get the chance to review and fix any problems with your comment before it goes live. If you forget to click Publish, just go to your personal home page and you'll get the chance to fix it.

Nick Ogden - Founder and Director, RTGS.global

This post is from a series of posts in the group:

Innovation in Financial Services

A discussion of trends in innovation management within financial institutions, and the key processes, technology and cultural shifts driving innovation.

