So ... "More than half the UK population still does not shop online and of these 41% cite security fears as a cause for concern".
Do they appreciate that staying offline will not save them from CNP fraud? One of the crucial lessons of the recent massive breaches at processing centres is that organised crime is obtaining credit card numbers from conventional merchants, and then replaying them online. So you can try to 'play it safe' and never shop online but still have your account abused online.
16 Mar 2009 19:19 Read comment
One-time password tokens are understandably native/proprietary and divergent. But I don't get why the unconnected card readers don't interoperate with all EMV cards. Are there variations in the CAP implementation that mean Visa cards don't work in MasterCard issuers' readers?
[The Cambridge boys have grabbed headlines once again, but the moral of their findings is really that security devices need careful engineering, not that "Chip and PIN is broken".]
Isn't it time we took the next step, and advanced from unconnected readers (where there is clearly no imperative for interoperability) to connected readers where, as with ATM and EFTPOS, the one reader would be expected to accommodate all cards?
I note that integrated smartcard readers are getting more widespread once again, probably in response to the fact that there are 1000 million EMV cards around the world, and at least 200 million government and health smartcards. Dell even has a laptop now with both contact and contactless readers!
And here's a funny story about someone who didn't even notice that his laptop had a reader built in.
Connected readers remain a bugbear, but for no good reason. It's a simple matter of supply and demand. In 2003, most major laptop vendors (including Dell, Acer and Compaq) released products with built-in smartcard readers, in response to announcements from Bill Gates that Microsoft was committed to chips. That wave of interest turned out to be premature; applications didn't materialise, and the vendors took the readers out in favour of other features. Yet the clear lesson is that it doesn't take much of a trigger for laptop makers to provide smartcard readers.
If just one large financial institution were to commit to connected readers -- for all the inherent and unique benefits you get from proper signing of transactions -- then built-in readers would likely become standard quite quickly, and we'd be on the way to getting rid of all the special gadgets.
05 Mar 2009 19:41 Read comment
Robert,
I do not believe it's reasonable to put this much onus on users to protect themselves online. This is not supposed to be the Wild West anymore; we're talking about making people safe as they go about their business in the digital economy.
The traditional preoccupation with education and training does not serve ordinary people well. The best advice from governments and banks -- such as www.protectfinancialid.org.au and www.staysmartonline.gov.au -- runs to a dozen pages or more. It's simply overwhelming. And technicians know it's already out of date: keeping an eye out for the SSL padlock, for example, is no longer enough given Man-in-the-Middle attacks on the certificate chain. The theft of credit card details en masse from back-end databases means that even when users diligently follow all the cyber safety advice, they can still have their IDs stolen and replayed in CNP fraud.
As for the idea of 'safe driving' online, well when the equipment is inadequate, relying on education alone, with a dose of tough love, can border on reckless. As Ralph Nader said, some cars are "unsafe at any speed".
I don't want to turn people off, nor do I expect people to be protected from stupidity. My point is that we need more balance in the cyber safety debate, and a greater emphasis on proper security infrastructure is urgently needed, as a matter of public policy. In particular, it is high time that governments and banking regulators took a greater interest in digital identity technologies, and stopped letting market forces alone cater for consumer protection online. We don't let the public connect any old bit of kit to the telephone network, so regulators should be prepared to take a more active role in digital economy infrastructure.
21 Feb 2009 06:11 Read comment
You ask, why is it so difficult to get customers to install a toolbar?
Robert, if your starting premise is that they are "complete idiots", then you have, in one fell swoop, answered your own question, pissed off a whole lot of your targets, and exacerbated the problem.
Yes, those who fall for phishing are not highly computer literate, but why should we expect them to be? We don't expect all drivers to be automotive engineers, or Niki Lauda. Any purported solution to phishing that requires average Internet users to install special toolbars is likely to be beyond the capacity of the very people who are falling prey.
Cheers,
Stephen Wilson, Lockstep.
21 Feb 2009 04:01 Read comment
There are a few grains of truth here; e.g. that people tend to be lazy and that contributes to insecurity. But that simple truism has nothing at all to do with "addiction".
I am troubled by the implicit Wild West psychology, the siren call for individuals to take the fight up to the hackers. Sure, it's a good idea to buy a shredder and lock your mailbox. But the really big issues in e-security today are the responsibility of banks and governments: the intrinsic value of stolen IDs, the consequential profit motive for mass ID theft by organised crime, the ID black market, the multi-billion dollar epidemic of Card Not Present fraud, not to mention the coming wave of medical identity theft and the like.
It does no good to exaggerate the psychology of insecurity. In fact it may detract from the much more important issue: the rights of the public to proper, reliable identity security infrastructure.
05 Feb 2009 21:20 Read comment
Marite,
If you're referring to US Patent 6,931,382 (mentioned in your Finextra profile), it is listed by the USPTO as being invented by Dominic P. Laage and Maria T. Laage, and is assigned to CDCK Corporation. What is the relationship between you and CardSwitch Technology?
Stephen Wilson.
05 Feb 2009 03:38 Read comment
Thanks Nick. It's good to be able to continue this thread.
The figure of 99.6% "first time authorisation" sounds good. Can you provide details of how that's measured, please? For comparison's sake, what's the Equal Error Rate (which the FBI reckons is 2% best case for voice)? And is it measured under the "Zero Effort Imposter" assumption?
Anyway, I am not trying to drag us into specmanship. Rather, I am keen to debate the 'fine print' for the sake of policy people and non technologists who are making important decisions, without properly comprehending how biometrics work, and how and why they fail. I'm sure that good progress will continue to be made on most biometrics, and I agree they have a role to play in some applications. But I believe that use of the term "unique" may mislead people, perhaps inadvertently, perhaps not. It doesn't quite gel to claim on the one hand that a certain trait is "unique" while on the other hand, real life performance of the biometric apparatus falls short of 100%. Sometimes, it falls way short.
[For instance, only recently, former US Homeland Security chief Michael Chertoff, in a speech about identity technology , stated without qualification that “your fingerprint is unique”. Even if that were true (and some researchers actually think it’s a myth) the fact is that fingerprint scanners don’t have the precision to deliver “uniqueness”. The UK Government CESG testing I mentioned last time showed typical False Accept Rates of 1 in a thousand. That sounds nearly unique, but the corresponding False Reject Rates for two of the scanners tested were 10% and 25%, which would render them unusable in automatic banking. So in practice, the False Accept Rate will have to be de-tuned, maybe to 1 in a hundred, where the False Reject Rate is still worryingly high, between 5% and 15%. Getting this tradeoff right in unattended banking security applications will involve tough choices, hopefully made transparently.]
Is my criticism of abuse of the word "unique" mere pedantry? I don't think so. We're talking about security, and the first thing anyone needs to know -- as Nick has properly reinforced -- is that no security is perfect. Yet loose talk around "uniqueness" is giving all sorts of lay people (policy makers, politicians, bankers ...) a false sense of security. “Uniqueness” just doesn’t manifest in practical biometric performance. To stop systems falsely rejecting unreasonably large numbers of genuine customers, the False Match Rate has to be compromised. Uniqueness is deliberately sacrificed by engineers, while the marketing departments continue to bang on about it.
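To make that tradeoff concrete, here is a toy sketch in Python. The score distributions are entirely made up for illustration -- they model no real product -- but they show how sliding the decision threshold trades the False Accept Rate off against the False Reject Rate, and where the Equal Error Rate sits:

```python
# Toy illustration of the FAR/FRR tradeoff. The match score
# distributions below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.70, 0.12, 10_000)   # scores for legitimate users
imposter = rng.normal(0.40, 0.12, 10_000)  # scores for zero-effort imposters

thresholds = np.linspace(0.0, 1.0, 1001)
far = np.array([np.mean(imposter >= t) for t in thresholds])  # imposters accepted
frr = np.array([np.mean(genuine < t) for t in thresholds])    # genuine users rejected

# Tightening the threshold drives FAR down but FRR up, and vice versa.
# The Equal Error Rate is the point where the two curves cross.
eer_idx = int(np.argmin(np.abs(far - frr)))
print(f"EER ~ {far[eer_idx]:.2%} at threshold {thresholds[eer_idx]:.2f}")
for t in (0.50, 0.55, 0.60, 0.65):
    i = int(round(t * 1000))
    print(f"threshold {t:.2f}: FAR {far[i]:.2%}, FRR {frr[i]:.2%}")
```

No setting of that threshold makes both error rates vanish at once; "uniqueness" would require exactly that.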
Worse than this, published error rates for biometrics are chronically optimistic. As I mentioned previously, biometric performance measurement almost always uses the “zero effort imposter” assumption, which causes systemic over-estimation of their strength. False Match Rates are worked out by counting accidental matches, and do not look at scenarios where someone is trying to get matched falsely.
The recent FBI report State of the Art Biometrics Excellence Roadmap, Technology Assessment October 2008 says:
The intentional spoofing or manipulation of biometrics invalidates the “zero effort imposter” assumption commonly used in performance evaluations. When a dedicated effort is applied toward fooling biometrics systems, the resulting performance can be dramatically different. (see Vol 1, page 1.4).
And
For all biometric technologies, error rates are highly dependent upon the population and application environment. The technologies do not have known error rates outside of a controlled test environment. Therefore, any reference to error rates applies only to the test in question and should not be used to predict performance in a different application. (see Vol 1, page 2.10).
In other words, the stated performance specifications of biometric solutions don’t tell us much about how well they stand up to criminal attack. This failing is nothing short of remarkable. Shouldn’t resistance to would-be robbers be top of mind when commissioning banking system security?
Stephen.
31 Jan 2009 23:39 Read comment
Nick said that "as far as can be proved everyone's voice is unique."
Sorry, but once again we see here a serious misrepresentation of biometrics. The term "unique" in the context of biometrics is utter hyperbole. Even if it were true that voice patterns are "unique", the critical question is whether a biometric mechanism is capable of telling all voices apart. And the truth is that no biometric apparatus is perfect. In fact, most biometrics fall so far short of perfection that I believe use of the word "unique" constitutes false advertising.
All biometrics commit two sorts of error. A False Match (or False Accept) is when the apparatus is presented with an imposter but wrongly confuses them with an enrolled user. And a False Non Match (or False Reject) is when the apparatus fails to recognise a legitimate user. It's worth repeating: all biometrics commit both sorts of error to some degree. So already the claim of "uniqueness" is wobbly.
The False Accept Rate (FAR) and the False Reject Rate (FRR) can be traded off to produce a performance compromise that makes sense for the application. If the application is access control on the door to a nuclear missile silo, then the system will be biased towards a lower FAR, because the consequences of admitting an imposter are dire. But if the application is an ATM, or an e-commerce system, then the proper tradeoff is a tough choice. Is customer convenience more important than security?
For voice recognition, typical results are:
When tuned towards security: FAR can be reduced to 0.1% (1 in a thousand) but the FRR rises to 6% (1 in 16 legitimate attempts will be rejected)
When tuned towards convenience: The FRR can be reduced to 3% but the FAR rises to 20% (1 in 5 imposters are admitted).
[Reference: Biometric Product Testing Final Report, National Physical Laboratory for the Communications Electronics Security Group (CESG), 2001. Note that the more recent Mitre Group report for the FBI shows no great general improvement in commercially available voice biometric systems: Technology Assessment for the State of the Art Biometrics Excellence Roadmap, October 2008.]
Finally, the nail in the coffin for "uniqueness" is what's called the "Zero Effort Imposter" assumption, which leads to a systemic over-statement of the security of biometrics. Pardon me for getting technical, but this is really worth understanding. All standardised biometric testing uses the assumption that False Matches are the random results of instrumentation error and algorithmic imprecision. That is, the testing assumes that an imposter has made zero effort to fool the system. As stated in the Mitre/FBI report of October 2008: "When a dedicated effort is applied toward fooling biometrics systems, the resulting performance can be dramatically different".
That is, the published performance specifications for biometric security systems do not apply to people who are actually trying to break in. Where does that leave banks when trying to evaluate these solutions for their ability to resist attack?
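To see why that matters, here's a toy simulation (synthetic numbers, purely illustrative): set the operating threshold from zero-effort testing, then let an active imposter -- via replay, mimicry or voice conversion -- shift the match score distribution towards the genuine users':

```python
# Toy simulation of how the "zero effort imposter" assumption flatters
# the published FAR. All distributions are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)
threshold = 0.62  # operating point chosen from zero-effort testing

zero_effort = rng.normal(0.35, 0.10, 10_000)  # random unrelated speakers
active = rng.normal(0.55, 0.10, 10_000)       # mimicry/replay shifts scores up

print(f"published FAR (zero effort): {np.mean(zero_effort >= threshold):.2%}")
print(f"FAR under active attack:     {np.mean(active >= threshold):.2%}")
```

The published figure looks comfortingly small; the figure against a motivated attacker can be dramatically worse.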
29 Jan 2009 19:54 Read comment
If "Visa and MasterCard warned of suspicious activity surrounding processed card transactions" then presumably data stolen from Heartland was found being replayed in CNP frauds. Yet "[the] firm says no merchant data or cardholder social security numbers, unencrypted PINs, addresses or telephone numbers were involved in the breach." That's hard to believe. Most CNP replay attacks will need to use the billing address and full name on top of the CCN and Exp Date. So it seems unlikely that addresses would not have been involved in the breach. Furthermore, when attackers go to all the trouble of stealing CCNs and Exp Dates, why wouldn't they also nab SSNs, phone numbers and anything else they can get their hands on?
22 Jan 2009 21:42 Read comment
Dean wrote:
Stephen, I've looked at what you are doing and cannot see past the difficulty in becoming the repository of individuals' data. ... I assume that no data can be safeguarded by any central repository. ... Surely you can see that your approach actually increases risk!
Our approach doesn't store anything centrally; see the explanatory slide deck. I agree that no central repository should be considered secure.
So, what we do is allow transaction system designers to decide up front what really needs to be known about a customer in a given context, and then we put that discrete data under the control of its owner. For instance, in a credit card transaction, the discrete data might just be the credit card number; for an e-health record entry, it might be a Unique Health Identifier (or a scheme-specific private identifier). Some call this data a "claim" or an "assertion"; usually it can be as simple as a numeric ID.
Technically, Stepwise secretes the minimum ID -- and nothing more -- as an anonymous digital certificate bound to a unique chip (which can be a smartcard or a SIM or an HSM or a USB crypto stick, anything so long as the private key is generated in the chip and remains there). Then when the chip-owner 'presents' their ID they do so by creating a digital signature on a particular piece of transaction data, using that certificate. Because the private key and chip combination is unique, it is not possible for an outsider to take over an ID. Nothing in the data stream can be stolen and replayed; to steal an ID, you need to steal the actual chip (and know its PIN). If desired, we can put multiple IDs on a chip, each ID secreted in its own certificate, and each invoked seamlessly according to context. A typical smartcard can hold a dozen Stepwise IDs; a 3G SIM can hold scores of them.
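By way of illustration only -- this is a generic sketch of the signing pattern using the Python cryptography library, not Stepwise's actual implementation or API -- the flow looks something like this, with a software-generated EC key standing in for the key that in practice is generated inside the chip and never leaves it:

```python
# Generic sketch of chip-style transaction signing; illustrative only.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# In a real deployment the private key is generated inside the chip and
# never leaves it; here it is generated in software purely for illustration.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()  # conveyed in the anonymous certificate

# The minimal claim (e.g. just a card number) plus the transaction details.
transaction = b"card=4111111111111111;amount=59.95;merchant=example"

# The chip signs the transaction data under the certificate's private key.
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# The merchant verifies against the certificate's public key; a tampered
# or synthesised record simply fails verification.
try:
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("transaction signature verified")
except InvalidSignature:
    print("invalid signature - reject transaction")
```

In practice the signed data would also carry a nonce or timestamp so that a previously signed transaction cannot itself be replayed.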
So there is no centralisation of personal data. Quite the opposite. By allowing customers to present the bare minimum ID required for each transaction (and allowing merchants et al to have radically better confidence in the pedigree of IDs received), we can stem the flow of extraneous personal information.
A podcast of a live demo of the CNP application is available here.
11 Dec 2008 08:02 Read comment