Saturday, December 23, 2006
PS - Did you know that googling for "Bono" and "wanker" gets 39,000 hits? If you restrict it to blogspot, you still get over 790 :)
Monday, November 20, 2006
I understand that there has been some discussion and concern around “pharming” – the (ab)use of the Domain Name System to misdirect people away from legitimate sites – normally financial institutions – to fraudster-controlled sites. It has been alleged that banks are pretending the problem doesn’t exist, refusing to work properly with the police, and “want to keep online crime figures low” (an implicit “artificially” should be recognised here).
Where to start – well, let’s start at the end. Yes, banks do want to keep fraud against them and their customers within manageable levels and ideally as low as practical (note, not possible – as low as possible would be £zero, and that is impractically expensive, even if you exclude collusion by criminally inclined bank staff). Risk appetite varies with the specific banking product – consider the difference between mortgages and "unauthorised overdrafts", for example – and controls are selected, in part, for their cost effectiveness. Pretending crime is lower than it actually is will happen in some areas, I am sure, but in the end the books have to balance – one of the pretty fundamental attributes of banking – and all that would really happen is that the amount would move between fraud categories: customer, third party or internal.
Let’s look at the initial premise – does a problem exist? DNS attacks, of various levels of subtlety and technical prowess, have been around for some time. The fraudulent transfer, for example, of the sex.com domain (which is still rumbling on over 10 years after the event) was a totally non-technical DNS hijack.
So, how can a DNS compromise take place?
- On host - modify the hosts file (/etc/hosts on Unix, \Windows\System32\drivers\etc\hosts on Windows) - this will often be done by a memory-resident browser exploit (mostly IE, as ever), therefore will not show on AV
- On host - modify resolving DNS servers to ones controlled by the fraudsters and fake authoritative for bank domains - as above
I do not personally consider those to be "pharming" - they should be categorised as malware payload impact. It also needs to be pointed out that the only way you can spot these is to be using the client resolver on the customer's machine - not a practical detection proposition for the banks (as opposed to investigation - you can ask the customer to run nslookup or dig and ipconfig/ifconfig and compare the results with what might reasonably be expected).
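The on-host investigation idea can be sketched in code. A minimal illustration, assuming a hypothetical watch-list of bank domains (the names below are invented), of scanning a hosts file for pinned banking entries:

```python
# Sketch: spot hosts-file entries that pin banking domains to an IP.
# The watch-list is an assumption for illustration - a real check would
# use the bank's actual domain names.
WATCHED = {"www.examplebank.co.uk", "online.examplebank.co.uk"}

def hosts_overrides(hosts_text, watched=WATCHED):
    """Return {hostname: ip} for any watched name found in a hosts file."""
    found = {}
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            if name.lower() in watched:
                found[name.lower()] = ip
    return found

sample = """
127.0.0.1   localhost
10.6.6.6    www.examplebank.co.uk   # suspicious!
"""
print(hosts_overrides(sample))  # → {'www.examplebank.co.uk': '10.6.6.6'}
```

Of course, as noted above, this only works when run on the customer's own machine - which is precisely why it is an investigation tool, not a detection control.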
- At the ADSL router or cable modem - these often act as a DHCP server and there is a history of security vulnerabilities and a lack of any patching mechanism. These can then either direct DHCP clients to use a fraudster-controlled DNS server or, where they also act as a DNS server, either be falsely authoritative for the banking domains or use a non-standard root to find the "authoritative" servers. This has the same advantage for the fraudster: you need to be on the customer's LAN / WLAN to see the issues.
- At ISP - compromise the DHCP servers to get them to return DNS resolving servers controlled by the fraudsters: this has the advantage that it can be trivially invisible to even the most well aware customer.
- At ISP - compromise the DNS servers to get them to pretend to be fake authoritative for the target banking domains or to refer to fake auth servers controlled by fraudsters.
- At ISP - DNS cache poisoning using glue records - this shouldn't work as the attack is 3 to 4 years old. However, surveys suggest that too many resolving DNS servers are still vulnerable: I have seen data which suggests over 80% of boxen surveyed are broken - not that I have any trust in the methodology or the results of that survey.
- At the registry or root servers - as per the two layers above - these should be better protected, though.
- At the authoritative servers for the banking domains.
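The glue-record poisoning mentioned above works against resolvers that will cache records outside the zone they actually asked about; a minimal sketch of the "bailiwick" check a fixed resolver applies (domain names are illustrative):

```python
def in_bailiwick(record_name, zone):
    """True if record_name is the zone itself or a subdomain of it."""
    record = record_name.lower().rstrip(".")
    zone = zone.lower().rstrip(".")
    return record == zone or record.endswith("." + zone)

# Glue for the queried zone is acceptable; glue claiming authority over
# an unrelated banking domain must be discarded, not cached.
print(in_bailiwick("ns1.example.com", "example.com"))        # True
print(in_bailiwick("www.examplebank.co.uk", "example.com"))  # False
```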
Now, only the last two will affect all of the bank's customers, and only the very last will be on infrastructure that the bank reasonably controls or has direct contractual influence over. The previous three will only affect customers of the specific ISPs targeted, which may or may not include an ISP that the bank uses for some form of internet connectivity. The bank's customer relations / internet banking helpdesk will normally access the site through the bank's internal network, so they are going to need (and, in my experience, often have) terminals capable of dialling in through consumer ISPs, in order to see as close as possible to what the customer is seeing.
It should also be noted that "compromise" does not mean "hack" - the malware, router modification and DNS cache poisoning are all technical compromises, but for the others a suborned or placed ISP employee would be a perfectly viable method. I suppose that you could also get an ISP employee to place a dodgy upgrade on the routers, but you would expect this to be more likely to be detected by the ISP's controls framework.
Saturday, November 18, 2006
I am not a "No2ID" purist, although I am a member. I think that the Identity Cards Act 2006 was an awful, regressive piece of legislation. But please consider an analogy.
Let us take some awful waste of public money - say, for one of many examples, Jonathan Ross's current BBC contract. Now, this is appalling (at least for people who consider Woss to be essentially pointless) but it is not actually evil. To consider true evil, no matter how cheap, look at the ministerial component of John Reid's salary (the poor, benighted souls of Airdrie and Shotts did vote for this repressive Stalinist, so no matter how much I loathe him, I can't quite bring myself to resent the Parliamentary salary component quite so much.) Now, that is evil.
In the same vein, we have the ID Card itself - expensive and pointless (if it ever worked) - versus the National Identity Register - truly evil.
So, what do I believe? Cards first:
- There is nothing fundamentally evil about a National Identity Card - although it does change the relationship between state and citizen a bit (not too much, provided there is no compulsion to carry) - I have eight pieces of government-issued ID that I normally carry on a working day (and no, I don't work for them, but I am counting the new driving licence as two pieces.)
- The card should only be used where you are proving who you are in a government mandated way - i.e. to the government itself or one of its agencies, or where the government sets the rules for how you identify yourself - e.g. bank "Know Your Customer" rules for opening new accounts.
- It would be reasonable to expect a smart card containing simple biometrics, to demonstrate that I am probably the person to whom it was issued (including a hi-resolution picture that can be displayed on a simple user terminal or a PC with a reader), some clever crypto stuff to prove it was issued through the proper process, and a unique reference number, backed up by www.cardregister.gov.uk. This would be a readily available register of card numbers and card status, so that anybody can check whether a card is valid - i.e. properly issued and not revoked (lost, stolen or issued incorrectly - whether in error or malice.) It would not contain any personally identifiable data of the card holder. (I can see where including some record of where and how the card was issued might make a degree of sense, but I am not sure that it would not add sufficient additional complexity to make the whole project even less feasible.)
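A hypothetical sketch of how such a register might behave - the reference numbers and statuses below are entirely invented for illustration:

```python
# Invented data: the register maps a card's unique reference number to a
# status and, deliberately, holds no personal data about the holder.
REGISTER = {
    "GB-0001-2345": "valid",
    "GB-0009-9999": "revoked",  # lost, stolen or issued incorrectly
}

def card_status(ref, register=REGISTER):
    """Return 'valid', 'revoked' or 'unknown' (never properly issued)."""
    return register.get(ref, "unknown")

print(card_status("GB-0001-2345"))  # valid
print(card_status("GB-7777-0000"))  # unknown
```

The point of the design is that anybody - a shop, a bank, a copper - can make this lookup, and all they learn is whether the card in front of them was properly issued and is still live.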
- The government (not just this current bunch of turkeys, but any likely bunch thereof, while the civil service, particularly the Treasury, work the way they currently do) has very little chance of making a project of this size and technical nature work - its record is appalling and getting worse.
Many of you will not be aware that the "Police and Justice Act 2006" received Royal Assent on the 8th of November. Amongst the many, many things that this typical piece of Nu-Lab gimmickry does are some significant amendments to the Computer Misuse Act 1990. The amendment to the Section 1 offence is technical (legally, not computery). The amendment to the Section 3 offence to make explicit that conducting Denial of Service and Distributed Denial of Service attacks is covered is welcome.
The creation of the new Section 3A offence - to do with making, supplying and obtaining articles which may be used in the commission of a Section 1 or 3 offence (what has poor Section 2 done, I ask you), was, last time I saw the draft words, bad, bad law. I have numerous tools available on various computers, including the one I am using to write this, that could be used to assist, commit or prevent (the latter being my job) the commission of various CMA90 offences. These tools may no longer be legal.
Why am I so fucking annoyed? Because I don't know what is legal and what is not. Because the incompetent fucking morons who preside over ever-increasing aspects of our endlessly monitored (it's to protect the kiddies / grannies / you, you know) existences cannot be bothered to deign to shift their fat arses away from their self-congratulatory face-stuffing awards dinners to publish their fascist (or Stalinist, this is Reid we are talking about, but the effect on us is the same) commands to the ever-so-humble peasantry they deem us to be. It may be available here by the time you are reading this. Or it may not. You can see lots of stuff here, but that is not designed to be readily parsed by us peons and it still doesn't tell me what exactly they made Her Majesty sign us all up to in what I can only assume is still Section 42 (but may not be - somewhere around there, probably.)
Thanks ever so fucking much, Dr Reid. Yes, this arrant incompetence does concern a piece of what even you admit is the "not fit for purpose Home Office" legislation. I hope you enjoyed your dinner.
Oh, and readers, I apologise to you, but not to the pols, for the language. I will try not to let it happen again.
Update (25 Nov) - it is here now. It isn't Section 42 - it's now Section 37. For those too lazy to wait for the OPSI site:
37 Making, supplying or obtaining articles for use in computer misuse offences
After section 3 of the 1990 Act there is inserted -
Friday, November 17, 2006
Bill: Isn’t it great news that after 31st December everyone will have to use the new Version 1.1 of the Payment Card Industry Data Security Standard? (WARNING - irritating American legalese click-through required.) It will be a real improvement in security at online retailers and it should help reduce the massive spate of data leakage incidents (although not as much as widespread laptop encryption would.)
Bill: Surely not?
Ben: Flob’a’lob’a’lob a’dob’a’dob
Bill: No, you’ve got to show me some evidence. This cannot just be a cynical arse-covering attempt by the Cards Schemes based on dodgy security principles and poorly thought out practices.
Bill: Surely not – You mean that with the vast majority of (hacker) attacks occurring in the application space, there is no requirement for comprehensive pre-production application level pen-testing? What about 11.3.2?
Bill: Oh I see: “significant … application upgrade or modification” and then all the examples they give are for infrastructure changes. Yes, and I do remember that it is often the small or emergency changes, that will have been through less QA, that cause issues.
Bill: I didn’t know that. You mean that the Cards Schemes are considering the forensic recovery from disk of CVV2 values associated with PANs as fully probative evidence of a breach of 3.2.2? Haven’t they ever heard of asynchronous comms? Oh well, have you seen Weed anywhere?
What Bill is hinting at, here, is the weird contradiction between the requirement of 3.2.2 and the requirement to actually authenticate the transaction: at some point all of the transaction data captured is going to be assembled, encrypted and sent, by a normal asynchronous comms method, to the acquiring bank or financial institution. It will be kept, probably in memory, until the accept / deny message (and the authorisation code) comes back. The computer is likely to suspend the relevant process / thread, and there is a chance that this will mean that the memory is paged onto disk storage. Ergo, unless there is a requirement for the assembly to take place in a Hardware Security Module (expensive, and they introduce security management issues of their own), any computer which is regularly used to process card transactions is likely to have, somewhere on disk storage that was once used as virtual memory, multiple instances of all the sensitive card and transaction data - without any intent to breach Section 3.2, and even with good technical and procedural measures to comply with it.
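To illustrate the shape of the problem (not a fix for it), here is a minimal sketch of scrubbing a sensitive buffer after use. Note that zeroing after the event cannot recall anything the OS has already paged to disk - preventing the paging itself would need something like mlock()/VirtualLock(), or the HSM mentioned above:

```python
def with_scrubbed(secret, use):
    """Run use() on a mutable copy of secret, then overwrite the copy.

    This narrows the window during which the plaintext sits in memory,
    but it cannot undo any paging to disk that happened before the wipe.
    """
    buf = bytearray(secret)
    try:
        return use(buf)
    finally:
        for i in range(len(buf)):
            buf[i] = 0

captured = []
with_scrubbed(b"4111111111111111 123", captured.append)  # invented test PAN/CVV2
print(captured[0])  # the buffer is all zeroes once the call returns
```

Python strings are immutable, which is why the sketch copies into a bytearray - an immutable secret can never be scrubbed at all, only garbage-collected at the interpreter's leisure.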
Sometimes, my job just makes me cry. Oh, and a little birdie told me that the auditing practices are so rigid for this standard that nearly no company can pass 1st time round, so it is another great money-spinner (and reputation killer) for the information security industry.
Ho hum. It's the weekend and Scotland really shouldn't lose at the rugby tomorrow. :)
Saturday, November 11, 2006
Okay - but why Chip and PIN - it is relatively easy. Much credit card fraud was conducted with "cloned" cards - mag-stripe reader/writers are cheap, ISO standard card blanks are readily available, and the more organised fraudsters can produce very good looking fake cards. ATM skimming gave them your PIN as well - and it is much safer to get money from an ATM than to buy goods - and you have cash immediately. Also, for those really interested, until the new Fraud Act there was always the issue with the crime of deception that it required you to deceive a human - a machine didn't count.
When you conduct a transaction (assuming the terminal has not been modified to record your PIN - which should stop it working, but some crims are clever), you enter your PIN and this unlocks the card, allowing the card to do a "challenge / response" authentication with the bank (or, if you are below the floor limit in the shop, just saying "I'm unlocked" to the retailer's terminal, which then just does the transaction, ignoring the rest of this paragraph.) On most machines, you see "PIN OK" or something similar. The bank and the retailer's terminal then do a little dance to determine whether you have enough credit / cash, whether your card has (correctly or otherwise) been reported stolen, etc. If the bank says "yes", you get your goods, the retailer's terminal gets an auth code and off you go. The retailer can then modify the transaction - look at how this works in hotels:
You go in, they make a reservation against your card, which you have authenticated with your PIN. When you check out, they get your signature on your bill (which, unless you have been a real pig, is less than the reservation) - they don't need your card in most cases, and they really don't need your PIN. (If they ask for them, they are not necessarily defrauding you, but they are doing a new transaction.) The reservation will remain against your credit limit until it times out, they cancel it, or you complain bitterly to your card company.
I appreciate people's concern, especially scotstoryb's, but the systems are currently engineered to allow for amendments - as far as I am aware this is a function of the electronic tills rather than C&P. What would be interesting to know is whether or not this now makes it a CNP transaction (retailer liable) if they make a deliberate (or otherwise) error ...
The HBOS retail person was wrong - they have the bank auth code, they don't have your PIN. (This doesn't detract from the various comments about the relative weakness of Static Data Authentication as opposed to Dynamic Data Authentication C&P cards - see www.lightbluetouchpaper.org.)
Now, if, as in the earlier Shell case, somebody has modified the terminals - especially where you have handed your card over and it has been "swiped and docked", as per Tescos - they have your PIN and they have the mag-stripe data. Not enough to clone your chip, but enough to create a mag-stripe only card and use it where either they don't do C&P or where the machines will "fall back" to mag-stripe. The intent was for you to personally place the card in the slot on the terminal yourself (therefore preventing the swipe, either in a Tesco's style till or swiftly through a stripe copier).
Chip and PIN was designed to make cloning cards (much, much) more difficult. In order to prevent all the fraud transferring to stolen (but not yet reported as such) cards, you also have your PIN, which (read the small print) only you should know, so your card gets unlocked as described above, before it works.
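The "challenge / response" dance described above can be caricatured as follows - this is not the real EMV protocol, just the shape of it, with an invented shared key:

```python
# Caricature of chip/issuer authentication: the issuer sends a fresh
# nonce; the unlocked chip returns a MAC over it using a key it shares
# with the issuer. A mag-stripe clone, lacking the key, cannot answer.
import hashlib
import hmac
import os

CARD_KEY = b"per-card secret shared with the issuer"  # invented for illustration

def card_respond(challenge, key=CARD_KEY):
    """The unlocked chip MACs the issuer's nonce with its shared key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def issuer_verify(challenge, response, key=CARD_KEY):
    """The issuer recomputes the MAC and compares in constant time."""
    return hmac.compare_digest(card_respond(challenge, key), response)

nonce = os.urandom(16)  # a fresh challenge per transaction defeats replay
print(issuer_verify(nonce, card_respond(nonce)))             # True
print(issuer_verify(nonce, card_respond(nonce, b"wrong")))   # False
```

The PIN's job in this caricature is only to unlock card_respond in the first place - which is exactly why the below-floor-limit "I'm unlocked" shortcut, and the SDA weakness discussed next, matter.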
However, there are, as ever in life, a few complications:
- The type of smart-card chosen was the SDA rather than the DDA card - this makes it open to PIN capture attacks using modified terminals - see Mike Bond's Cambridge article here. As far as I am aware (and I was nowhere near the decision-making process), the decision to use SDA was taken because of the then cost of the DDA compatible terminals. I suspect that a few bank execs are now re-considering this.
- All of our lovely C&P cards still have mag-stripe on the back. This means that if somebody swipes your card, either 'cause they are not yet C&P merchants, 'cause they capture your details for their MI purposes (e.g. Tesco - but they do this, already having swiped my Clubcard - so their computer already knows who I, or my wife, are - still stops them having to train the staff for dealing with "options", I suppose), or because they are collecting data for fraudulent purposes, they can still make a fake mag-stripe card. If they have buggered the terminal (or have a camera, or are just watching you), they have your PIN as well, so can use the card in swipe and PIN scenarios, such as ATMs. I wasn't aware that this was a cards scheme requirement - in the fuss following this press release, I heard an APACS spokes-weasel (may have been Sandra) say on BBC radio that Mastercard and Visa rules meant that we could not have chip without stripe cards. With Maestro having supplanted Switch, we can't even do it for debit cards :(
- Many UK devices (especially ATMs) were configured to do "fall-back" - if they couldn't read the chip (or there was no chip) they would just use the mag-stripe data.
- C&P was a UK thing, therefore all you need to do is take your fake cards abroad ...
Tuesday, October 24, 2006
Heise have also been pushing their latest comment on frameset issues, which, as they quite rightly point out, have been public for some time. (And you can engineer their exploit to work on the HSBC site, at least with IE6. Sorry, my error - Firefox 1.5.)
Well, bank security is a complicated thing. Part of the problem is that technical solutions often aren't possible and lots of this is not visible to the users. This, of course, causes other problems when the invisible stuff goes wrong, 'cause the bank can lie about it, but just read Ross's stuff on this. A lot of the fundamental security is built into fraud monitoring processes and back-office systems and the sort of inter-bank co-operation that would scare the conspiracy theorists. The core processes are not technological, therefore they are driven by people, and people make errors (deliberate or otherwise.) And, irritatingly, banks tend to be large organisations, and the customer-facing people (suffering in the call centre) will not have detailed knowledge of how / why the risk management decisions were taken, or of the compensating controls already or now in place.
Oh, and now please explain this to your on-call media relations person, without using any technical terms, so that they can sound suitably convincing in front of the MSM.