Google has patched a dangerous issue in Chrome that enabled attackers to spoof legitimate domains in the browser by using Unicode characters that are visually near-identical to common ASCII ones.
The vulnerability stems from the way Chrome renders some Unicode characters in domain names, and it is not a new issue. Security experts have known about the underlying problem for several years, and browser vendors have made changes along the way to address it. But Chrome and Mozilla Firefox still don't prevent every variation of the attack. Last week, researchers showed that both browsers could be tricked into displaying certain Unicode characters in a way that is essentially impossible to distinguish from their ASCII counterparts.
“From a security perspective, Unicode domains can be problematic because many Unicode characters are difficult to distinguish from common ASCII characters. It is possible to register domains such as ‘xn--pple-43d.com’, which is equivalent to ‘аpple.com’. It may not be obvious at first glance, but ‘аpple.com’ uses the Cyrillic ‘а’ (U+0430) rather than the ASCII ‘a’ (U+0041). This is known as a homograph attack,” researcher Xudong Zheng wrote in a post on the attack.
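The Punycode conversion Zheng describes can be reproduced with Python's standard-library `idna` codec, which applies the same ToASCII transformation to each label of a hostname:

```python
# The spoofed domain uses CYRILLIC SMALL LETTER A (U+0430) in place of
# LATIN SMALL LETTER A (U+0061); the two glyphs render near-identically.
spoofed = "\u0430pple.com"
genuine = "apple.com"

# The strings compare as different even though they look alike on screen.
print(spoofed == genuine)        # False

# Encoding with the stdlib "idna" codec converts each label to its
# ASCII-compatible (Punycode) form, exposing the spoof.
print(spoofed.encode("idna"))    # b'xn--pple-43d.com'
print(genuine.encode("idna"))    # b'apple.com'
```

Browsers that fall back to showing the raw `xn--` form for suspicious labels are applying exactly this transformation before display.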
The biggest risk with this issue is its potential use in phishing attacks. If an attacker can register a domain that is visually indistinguishable from a legitimate one, they can trick users into trusting a malicious site. Google fixed the vulnerability in Chrome 58, released on April 19. Mozilla, however, has decided not to change Firefox to address the problem, and has published a FAQ that explains both the attack and why the company isn't planning a fix.
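One way a client could flag the mixed-script case from Zheng's 'аpple.com' example is to check whether a single label draws letters from more than one script. The sketch below is a crude heuristic for illustration only (the function name is my own, and real browser logic is considerably more involved; notably, a look-alike label built entirely from Cyrillic letters would pass this check, which is one reason not every variant of the attack gets blocked):

```python
import unicodedata

def looks_mixed_script(label: str) -> bool:
    """Crude homograph heuristic: flag a label whose letters come from
    more than one script, judged by the first word of each character's
    Unicode name (e.g. 'LATIN', 'CYRILLIC')."""
    scripts = set()
    for ch in label:
        if ch.isalpha():
            scripts.add(unicodedata.name(ch).split()[0])
    return len(scripts) > 1

print(looks_mixed_script("apple"))        # False: all Latin letters
print(looks_mixed_script("\u0430pple"))   # True: Cyrillic 'а' + Latin 'pple'
```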
“Our response to this issue is that in the end, it is up to registries to make sure that their customers cannot rip each other off. Browsers can put some technical restrictions in place, but we are not in a position to do this job for them while still maintaining a level playing field for non-Latin scripts on the web. The registries are the only people in a position to implement the proper checking here. For our part, we want to make sure we don’t treat non-Latin scripts as second-class citizens,” the document says.