[The] compliance procedures applied to the current physical process of card issuance become a millstone around the bank’s neck...
I don't disagree, especially as I am one of those who complain that the world has gone compliance mad! However, a good deal of payment card compliance relates to security, and this creates a significant barrier to entry for phone-based payments.
If Apple becomes a bank, and the handset becomes a special private wallet, it creates a whole new jungle track along which money changes hands and criminals will find weaknesses. Security challenges and resulting tensions include:
Handset certification: just as merchant terminals must be Visa/MasterCard/EMVCo certified, so will handsets. Certification creates significant bottlenecks in product release cycles. Yes, handsets are subject to telecommunications testing too, and we might expect the vendors to be comfortable being highly regulated, but their traditional compliance burden applies more to hardware than software, and certified RF modules can remain stable across multiple products. In contrast, payments functionality is software-based and evolving very rapidly. Every new handset model and software upgrade may need fresh certification.
Platform security: need I say more? Smartphone malware is emerging rapidly, and nobody is sure how inherently secure mobile phone operating systems really are. The nice thing about smartcards is that their computers are very compact, dedicated to a small set of tasks, much more testable, much less configurable, and their architectures mostly hark back to security applications, so they're better pedigreed.
Social engineering: as the Droid09 scare showed, there are limitless new opportunities for criminals to dupe smartphone owners into loading malicious applications. As ever-so-sexy payment apps flood the markets (especially less regulated ones like Android's), how will customers sort the sheep from the goats? It's likely to become completely bewildering: there are enormous opportunities for good in joining up payments, loyalty, money management, e-commerce and bartering, but during the development rush the quality of well-intentioned third party apps will be very suspect. And criminals will be able to slip in utterly bogus software.
This is going to be a wild ride!
Security will give the banks several years' breathing space. A smart strategy to protect their turf would be to hybridise the handset with bank-issued chips in some manner. They could shift the UX from card to phone while preserving the card's intrinsic security benefits.
Cheers,
Stephen Wilson, Lockstep.
19 Oct 2010 16:57
I totally agree. To treat the PAN as some kinda secret is a fool's errand. Equally, to put all of one's effort into security policies and promises and audits that essentially seek to hide the PANs -- but still do nothing at all to make systems immune to stolen PANs -- is nuts. A secure payments system would be designed around the assumption that PANs and personal data will still be stolen.
The proper pathway to improvement is to protect merchant systems against the replay of stolen or illegitimate PANs. The solution is technically very straightforward: asymmetric cryptography and chip cards. A PAN presented from a chip card can be differentiated from a PAN presented by other means, such as by a replay attacker.
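To make that distinction concrete, here is a toy sketch of the idea using textbook RSA with deliberately tiny, hypothetical parameters (never usable in production, and not the actual EMV protocol): the chip signs the PAN bound to a fresh nonce from the terminal, so a captured signature cannot be replayed against any later nonce.

```python
import hashlib

def h(msg: bytes, n: int) -> int:
    """Hash a message down to an integer modulo the RSA modulus."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Toy textbook-RSA key pair (hypothetical parameters, far too small for real use)
p, q = 1009, 1013
n = p * q                  # public modulus
e = 17                     # public exponent
d = pow(e, -1, 255024)     # private exponent; 255024 = lcm(p - 1, q - 1)

def chip_sign(pan: str, nonce: str) -> int:
    """The chip signs the PAN bound to the terminal's fresh nonce."""
    return pow(h(f"{pan}|{nonce}".encode(), n), d, n)

def terminal_verify(pan: str, nonce: str, sig: int) -> bool:
    """The merchant checks the signature against its own nonce."""
    return pow(sig, e, n) == h(f"{pan}|{nonce}".encode(), n)

pan = "4111111111111111"
sig = chip_sign(pan, nonce="txn-001")
assert terminal_verify(pan, "txn-001", sig)      # genuine chip presentment
assert not terminal_verify(pan, "txn-002", sig)  # a replayed signature fails
```

The point is that the bare PAN can leak without consequence: only the holder of the chip's private key can produce a signature over the next nonce.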
PCI-DSS offers some protection against accidental breaches and against amateur attacks. But it does little to deter organised criminals, or corrupt insiders, from stealing and abusing account data.
06 Oct 2010 11:59
Sorry John, I may have missed your point.
My focus was on automated electronic authentication, where revocability I believe is essential. Yes today we present handwritten signatures to other human beings as a form of authentication, but the manual decision to accept or reject is subject to all sorts of extra cues and layers of verification, and also forensics. In contrast, decisions made by machines around electronic biometrics are essentially instantaneous. The potential for fraud is quite different from traditional in-person presentation of photo IDs, signatures etc.
So we're playing an entirely different game here with electronic biometrics. A forged signature or photo ID cannot be replayed (in person) in the same way as stolen electronic biometrics can. And a compromised photo ID can always be revoked and re-issued. Nobody has explained how a compromised finger vein pattern will be revoked. Vendors will instead try to argue they cannot be stolen. In principle, that's a bad kind of answer. And in practice, the likes of the FBI will counsel us to watch this space! All biometric vendor specs are produced under the "Zero Effort Imposter" assumption, and wilfully ignore the possibility that an attacker might actually make a concerted effort to break the system.
07 Sep 2010 08:53
"Knowing your vein pattern, retina details, thumbprint, face geometry etc - so what?"
Because then they get replayed, that's what. It's simply not true that biometrics are "a lot, lot harder to duplicate". See the Gummy Bear attack on fingerprints for starters.
And even if a particular biometric does prove very hard to duplicate, are we willing to bet the house on it being literally impossible? Because once compromised, no practical biometric can be revoked and reissued. There is no room for error.
Everyone knows there is no such thing as perfect security, but biometrics are actually premised on the supposed impossibility of compromise. They defy security logic.
03 Sep 2010 13:47
No way I would use a biometric ATM.
It's said that biometric ATMs are proving popular in Japan and elsewhere but actual performance figures are still hard to come by. The vendors' marketing claims for false positives (must be low for security) and false negatives (must be low for customer convenience) border on the hyperbolic.
Hitachi claims "there's only a 0.0001% chance of someone passing off their vein patterns as yours". But that's only half the story. What is the corresponding false negative rate when the system is tuned to be so ultra discriminating? Well, the only independent testing I have managed to find for finger vein technology shows that at a False Match Rate of 0.0001%, the False Non Match Rate can deteriorate to 20%. That is, one in five times the customer will have to try again. I suspect that to keep the retries down, the systems are de-tuned in practice to be rather less accurate than 0.0001%, but exactly what the accuracy and overall security are in practice, we just don't know.
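Putting those two rates together, the retry burden follows from simple arithmetic. The figures below are illustrative, assuming each attempt by a genuine customer fails independently at the quoted 20% false non-match rate:

```python
fmr = 0.000001   # claimed false match rate (0.0001%)
fnmr = 0.20      # false non-match rate at that operating point, per the independent test

# Assuming independent attempts, a genuine customer's retries are geometric:
p_first_try_fails = fnmr              # 1 in 5 customers must try again
p_needs_third_try = fnmr ** 2         # 4% are still failing after two attempts
expected_attempts = 1 / (1 - fnmr)    # 1.25 attempts on average
```

So even at this generous reading, one customer in twenty-five queues at the ATM for a third attempt, which is exactly why operators are tempted to de-tune the false match rate.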
Meanwhile the FBI urges caution because lab testing doesn't translate to real world experience:
For all biometric technologies, error rates are highly dependent upon the population and application environment. The technologies do not have known error rates outside of a controlled test environment. Therefore, any reference to error rates applies only to the test in question and should not be used to predict performance in a different application.
Yes, there are major privacy concerns. At present, no chip-and-PIN card is going to have the biometric template stored on the card, much less matched on the card, because of the cost of the extra memory and software. So the templates must be stored centrally, and each time you visit an ATM, your biometric data will be sent out for matching. I agree with Lachlan Gunn's concerns over the ability to safeguard these data stores. If they can't stop card data today being stolen en masse, then we have to assume that stolen biometric data too will one day be in circulation amongst organised crime gangs. And then what? At least when my card is stolen, I can have it revoked and re-issued. But with biometrics that's impossible. They leave no room for error.
And when the thieves of tomorrow's brave new world steal my biometric template, lord knows how many other accounts they will also automatically have the keys for!
02 Sep 2010 07:36
There is a category error in many of the calls for a single global identification system. We need to be careful about what is meant by "identification".
There are several weaknesses in the way we go about "identifying" people: some fraud happens by false registration, and some happens by co-opting digital identities after they've been issued. The latter is far more prevalent ... because it's so easy. Why go to the bother of opening a fake credit account when I can steal the identifiers for an existing account and replay them in CNP fraud?
And so a single global digital identity might be disastrous if it weren't vastly more secure in respect of counterfeiting and replay attack.
I urge a careful revisiting of the identity problem. By and large, we do a good job of identifying people in the real world; there are abundant and effective measures for verifying identity when opening up new accounts. But then we do an awful job of protecting the digital identities used to exercise our accounts on line. So I would like to see standardisation of the authentication technologies. Today we have a crazy profusion of divergent, awkward, novel and imperfect ways of proving ourselves online: one time passwords, CAP readers, visual puzzles, grids, biometrics etc etc. None of them directly protect the integrity of digital identities, so they're all vulnerable to some degree to replay and Man in the Middle attacks.
John Dring is quite right that Big PKI proved too hard. Largely that's because it was trying to build something we don't really need: a global identity. As John says, we need to revert to authorities that are already trusted to issue identities. But we also need to stick to our knitting, and not have existing identity issuers overstep. Despite what federated identity proponents would have us believe, the identities issued by banks (accounts) are not the same thing as the identities issued by retailers (customer reference numbers) or by governments (social security numbers, Medicare numbers, tax numbers). Identity silos emerge for a reason: digital identities are actually proxies for customer relationships, and these cannot be mixed up without radically altering business rules and liability arrangements.
It would be a huge breakthrough if we simply preserved the existing business processes for issuing identities to customers, and concentrated on conveying those identities using non-replayable authentication technologies. That's where PKI does come into play. PK technologies let customers present the right digital identity in each different context, and bind it to the transactions so they cannot be replayed or counterfeited.
We could use the same methods and user interfaces for conveying identities globally, without forcing people into just one identity. We do this now: all magnetic stripe cards and all phones work the same way worldwide, but we don't have a single phone number or a single bank account. Equally, with smartcard and smart phone technologies, we could provide people with a universal online authentication experience based on non-replayable PK technologies, while preserving their real world relationships and business processes.
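A sketch of the plural-identity idea: the names below (`Wallet`, `enrol`, `present`) are hypothetical, and an HMAC stands in for the public-key signatures envisioned above purely to keep the example standard-library only. What matters is the structure: one independent credential per relationship, each binding that identity to the specific transaction.

```python
import hashlib
import hmac
import secrets

class Wallet:
    """Holds one independent credential per relationship (bank, retailer, agency)."""
    def __init__(self):
        self._keys = {}

    def enrol(self, context: str) -> bytes:
        # Each issuer provisions its own key; contexts never share key material.
        key = secrets.token_bytes(32)
        self._keys[context] = key
        return key  # the issuer keeps a copy for later verification

    def present(self, context: str, transaction: bytes) -> bytes:
        # Bind this context's identity to the transaction so it cannot be
        # replayed, and cannot be linked to or accepted in another context.
        return hmac.new(self._keys[context], transaction, hashlib.sha256).digest()

wallet = Wallet()
bank_key = wallet.enrol("bank")
shop_key = wallet.enrol("retailer")

txn = b"pay 100 to merchant 42, nonce 7731"
tag = wallet.present("bank", txn)
assert hmac.compare_digest(tag, hmac.new(bank_key, txn, hashlib.sha256).digest())
# The retailer's key cannot verify (or link) the bank presentment:
assert not hmac.compare_digest(tag, hmac.new(shop_key, txn, hashlib.sha256).digest())
```

One user interface, many unlinkable identities: exactly the phone-number analogy, where the dialling experience is universal but the numbers are not.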
15 Jul 2010 02:16
Brett,
You've hit on a pet topic of mine ;-)
I believe there is definitely an important divide between the digital persona and the person. In real life we exercise a portfolio of identities, and we should do so online as well. But lots of the newer identity frameworks seem to overlook the importance of having separable, independent digital identities. Biometrics too is a worrying technology, because it can remove user discretion and jam all digital personae together.
It's not an academic idea that digital identities are real and separate from the 'true' biological identity. Two examples. First, my corporate bank account is legally different from my personal bank account, so my digital identity when I bank online for my business has to be kept separate. Second, consider the exciting new field of medical social networking, where for example there are findings that mental health patients may enjoy better clinical results from online psychological counselling compared with face-to-face. One factor may be that patients deliberately adopt a sort of disarmed digital persona making them more amenable to therapy; if so, this surely demands the greatest care be taken in separating digital identity from biological.
Some of my more detailed thoughts are set out in a one page paper Introducing Identity Plurality.
Steve.
03 Jun 2010 03:02
Yes, I agree there is no one-size-fits-all, so the idea of different privacy settings (plural) is an important one. Lots and lots of fascinating research is yet to be done on how to create usable privacy GUIs.
On the other hand we might agree to disagree over privacy and the young. I worry that sexting and the like are results of youthful exuberance coupled with seductive, fun, prestigious technology, exacerbated by the developmental problem that the frontal cortex might not be able to properly compute risk until age 25 or so. Accordingly, youthful OSN users may need to be protected from themselves.
But there is another famous test of Gen Y attitudes to privacy: barge into a teenager's bedroom unannounced and watch how they reflexively protect their privacy! So it comes back to control. I think almost everyone, at all ages, actually cherishes their ability to control what is known about them. If some people on OSNs seem to have abandoned their inhibitions, we may need to discount this to some extent because they simply might not fully comprehend what they're doing, because they're young, and/or because the OSN privacy configuration is so opaque.
SW.
03 Jun 2010 02:06
I concur with David Divitt. "Banks must ensure they take full advantage of the technologies offered in these solutions, such as signing transactions". Until now, most "signing" using CAP readers and the like has been mickey mouse. A proper long term solution will sign the entire data payload between browser and server, and will need to use connected smartcard readers at the customer end. These have been a long time coming, but thanks to the rise in non banking smartcards like US PIV ID cards, we're seeing more laptops feature integrated card readers (like the Dell e series). The beauty of the connected reader is that it provides a sensationally easy to use, ATM/POS-like customer experience for online shopping and banking alike. I appreciate there is anxiety about Man-in-the-Browser malware being able to co-opt the card, but these attacks can be mitigated by WYSIWYS tools in the chip.
03 Jun 2010 01:51
Brett, I was kinda with you most of the way, until the last paragraph. I especially like your reminder that Facebook is not mission critical. Yet when you say "we have to get used to a different level of privacy, openness and communication" I beg to differ.
Facebook is the way it is precisely because Zuckerberg set out to design a vast collector of personal network information, with the intent of capitalising on it. When it was revealed recently that he called early FB users "dumb fucks" for trusting him, the more important revelation was that he also told his colleague, 'if you ever need to know anything about anyone at Harvard, just ask me'. I found that quite chilling.
Yes indeed FB is top fun, and it need not be dangerous if users are self conscious and wary. However, I don't think we should let FB off the hook for being (a) wanton pirates of personal information, and (b) probably deliberate in the way they manipulate privacy norms. When Zuckerberg and others assert that privacy is changing, their self-serving edicts are based on a scant few years experience of a biased selection of a risk-taking cohort. It's just too soon for the FB experience to tell us how privacy attitudes in society at large are changing.
We don't let adolescent males set road safety policy and we shouldn't let them make privacy policy either.
I am myself extremely wary of the idea that privacy and utility/participation have to involve a tradeoff. My experience as a privacy professional is that whenever someone says 'privacy is dead' or words to that effect, they are actually trying to sell something, be it new sneakers because a retailer detected via Foursquare that I go to the gym a lot, or national security ideology and airport body scanners.
Why should privacy and utility be fundamentally at odds? Why should online social networks necessarily provide an eye-in-the-sky for their operators to collect information at will about their members? If it wasn't for the underlying profit motive, OSNs could easily be designed with privacy as the cautious default, and "exhibitionism" as the hard-to-reach option.
And so, because I view FB as a business more than a social phenomenon (much less a well designed experiment), I have to discount claims arising from this specially selected population, that Gen Y and Digital Natives are in general more relaxed about security and privacy. I am not sure about that at all.
Stephen Wilson.
02 Jun 2010 13:12