Launch Recap and Q&A with Pindrop CMO Mark Horne
Pindrop has just announced an evolution of Pindrop® Protect: our industry-leading anti-fraud solution now extends its protection into the IVR and finds more fraud by leveraging Trace, our graph analysis technology.
Tomorrow, join Pindrop Chief Marketing Officer Mark Horne as he discusses the new technology and the future of graph analytics for predicting fraud. We will also open up time for Q&A.
What is Pindrop Trace?
Trace connects seemingly unrelated activities to reveal patterns that indicate fraudulent activity and provides:
- Increased accuracy, reduced false positives, and improved cross-channel fraud detection
- A more complete view of your company’s “fraud universe”
- Fraud predictions from analyzing relationships between behaviors, accounts, and other parameters
- Connections between seemingly disconnected activities across time and accounts
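The cross-linking Trace performs can be pictured as clustering call events that share any identifier. The sketch below is a toy union-find pass, not Pindrop's actual graph technology, and the event fields are invented for illustration:

```python
from collections import defaultdict

def link_activities(events):
    """Cluster events that share any identifier value (account, device,
    phone, ...) using union-find; linked-activity clusters emerge."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for i, event in enumerate(events):
        for key, value in event.items():
            union(("event", i), (key, value))  # event <-> identifier edge

    clusters = defaultdict(list)
    for i in range(len(events)):
        clusters[find(("event", i))].append(i)
    return [c for c in clusters.values() if len(c) > 1]

# Calls 0 and 1 share a device; call 2 shares an account with call 1,
# so three seemingly unrelated events collapse into one fraud cluster.
events = [
    {"account": "A1", "device": "D9"},
    {"account": "A2", "device": "D9"},
    {"account": "A2", "phone": "+15550001111"},
    {"account": "A3", "phone": "+15550002222"},
]
print(link_activities(events))  # [[0, 1, 2]]
```

Production graph analysis adds edge weights, time decay, and risk propagation, but the connected-component idea is the core of linking "seemingly disconnected activities."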



5 Insights in 15 Minutes: 2023 Authentication Trends
After extensive research into 2022 data, our experts have identified 5 compelling authentication trends for 2023 and beyond. Whether you’re considering optimizing customer experience, are concerned about recent deepfake news, or are wondering how the rise of voice assistants will affect customer security, this webinar will cover the top trends that consolidated market research says to be mindful of as you shape your authentication strategy.
The 2024 Security Landscape: Your Guide to Modern Fraud Prevention
From data breaches to deepfakes, the current state of cybersecurity and the impact of generative AI on fraud activities has had massive implications on businesses and consumers. Join Pindrop leaders for an exclusive webinar as we dive into Pindrop’s latest findings detailed in our annual Voice Intelligence and Security report, covering the evolving fraud and security trends and solutions in contact centers.
Breakout Leadership in a Cautious New World
Leadership Insights with John Chambers: Excelling in difficult times and doing business in the new normal.
A global health crisis, civil unrest, and rapidly changing societal norms concerning space and capacity are forever changing the way we live, work, lead, and measure performance. Join Pindrop CEO and Co-founder Vijay Balasubramaniyan and John Chambers, CEO of JC2 Ventures, former CEO of Cisco Systems, and newly appointed Global Ambassador of La French Tech, for an engaging 1:1 discussion about leadership during unprecedented times. Learn from proven playbooks how businesses can not only start to move forward but thrive in the new normal.
This webinar will provide leaders inside and outside of contact centers with actionable recommendations and proven strategies to keep your organization and its teams moving forward in our rapidly changing world.




Your expert panel


Mark Horne
Chief Marketing Officer, Pindrop




Users want options for convenience
Multifactor is no longer optional
OTP risks cause push to contact center
Authentication must evolve for deepfakes
Hands-free adoption highlights the need to secure voice
Your expert panel


Nicole Culver
Director of Product Marketing
This paper aims to provide a comprehensive guide to authentication strategies and predict trends for authentication in the coming years, through the following chapters:
- History of authentication
- What is authentication
- The landscape today
- Authentication weaknesses
- The human authentication experience
- Authentication trends in 2023 and beyond
It all started with the invention of Phoneprinting. In 2011, Pindrop’s founder and now CEO, Vijay Balasubramaniyan, patented this unique technology that broke new ground in contact center fraud detection. Pindrop hasn’t looked back since. Pindrop has reached a new milestone: in 2022, a record number of 30 patents were granted, bringing us to 104 total patents awarded. These patents span over 25 jurisdictions around the globe including North America, Europe and Asia. Our patents cover a wide variety of topics in the contact center and IoT space, including audio-based fraud detection, voice authentication, voice spoofing, and device authentication.
But this is about more than just the numbers. This is about how Pindrop is constantly innovating and solving problems that will come to define the security landscape of the future. Specifically, how speech, voice manipulation and non-audio elements will interact to create more security threats and challenges for contact centers. Pindrop is already preparing for this future. Just over the last few months, Pindrop was granted patents in three new patent families:
- Caller verification via carrier metadata
- Keyword spotting
- Voice modification detection
Caller verification – Look for fraud before it happens
Pindrop’s patented technology tracks calls in real time as they come into a contact center and looks at the purported Caller ID of the call, as well as the unaltered carrier metadata, which is then used to generate risk scores backed up by proprietary risk models. With this technology, call centers can start assessing the risk of the call even before the potential fraudster has had the chance to press a button. While we had a previously patented system that analyzed carrier metadata, this new patent provides enhanced caller ID verification and spoof risk detection models that strengthen our ability to detect the fraud risk on live calls.
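As a rough illustration of the idea (not Pindrop's patented risk models), a spoof-risk score can be built from mismatches between the purported Caller ID and carrier-side metadata. Every field name and weight below is invented for this sketch:

```python
def carrier_metadata_risk(claimed_ani: str, metadata: dict) -> float:
    """Toy spoof-risk score from hypothetical carrier metadata fields.
    Weights and fields are illustrative only."""
    score = 0.0
    # Claimed ANI disagreeing with the carrier-asserted number is the
    # strongest spoofing signal in this sketch.
    if metadata.get("carrier_ani") and metadata["carrier_ani"] != claimed_ani:
        score += 0.6
    # A domestic number arriving over an international gateway is odd.
    if metadata.get("route") == "international_gateway" and claimed_ani.startswith("+1"):
        score += 0.25
    # Line-type mismatch: caller ID looks like a mobile, carrier says VoIP.
    if metadata.get("line_type") == "voip":
        score += 0.15
    return round(min(score, 1.0), 2)

print(carrier_metadata_risk(
    "+15551234567",
    {"carrier_ani": "+15559990000", "line_type": "voip"}))  # 0.75
```

The point of scoring on metadata alone is that it is available the moment the call arrives, before the caller presses a button.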
Keyword spotting – Focus on what as well as who
Pindrop’s patented tech leverages unsupervised keyword spotting to identify phrases or words that could connect commonalities across multiple calls or multiple fraudsters acting in unison. As fraud techniques evolve across multiple channels, keyword spotting offers one more weapon to identify concerted fraudster activity across different parts of the organization.
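A text-level toy of the cross-call linking step: flag word n-grams that recur across distinct calls. Real unsupervised keyword spotting operates on audio and learns the units itself; this transcript-based sketch only illustrates how shared phrases connect calls:

```python
from collections import defaultdict

def shared_phrases(transcripts, n=3, min_calls=2):
    """Return word n-grams that appear in at least `min_calls` distinct
    calls, with the indices of the calls they connect."""
    seen = defaultdict(set)  # n-gram -> set of call indices
    for idx, text in enumerate(transcripts):
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            seen[tuple(words[i:i + n])].add(idx)
    return {" ".join(gram): sorted(calls)
            for gram, calls in seen.items() if len(calls) >= min_calls}

calls = [
    "i lost my card please reset my pin",
    "hello i need you to please reset my pin now",
    "what is my balance today",
]
print(shared_phrases(calls))
# {'please reset my': [0, 1], 'reset my pin': [0, 1]}
```

Two otherwise unrelated calls sharing an unusual phrase is exactly the kind of commonality that can reveal fraudsters acting in unison.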
Voice modification detection – The era of deepfakes is upon us
A growing number of people worldwide are concerned about the risk of deepfakes and how they would affect them. Frauds have already been committed with deepfaked audio, which is created with increasingly sophisticated voice modification technologies. Pindrop’s patented tech detects voices that have been modified deliberately, whether using software or through manual means. This innovation is based on models of natural human speech production, so that when a voice is modified beyond what the computational models consider natural, we can identify the anomaly. We demonstrated this ability in a recent publication at ICASSP 2021 [link], where the speech production model detects modified voices with similar or better accuracy than human listeners. Not only will this technology help detect deepfake audio more accurately, but it will also lead to automation and a more passive approach to risk detection.
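One way to picture "modified beyond what the computational models consider natural" is anomaly scoring against reference statistics of natural speech features. The features, reference values, and thresholds below are entirely made up for the sketch; Pindrop's production models are far richer:

```python
# Hypothetical "natural speech" reference (mean, stdev) for two
# production features; the numbers are invented for illustration.
NATURAL = {"pitch_hz": (120.0, 30.0), "jitter_pct": (1.0, 0.4)}

def modification_score(features, natural=NATURAL):
    """Mean absolute z-score of a call's features against the natural
    model; large scores suggest a deliberately modified voice."""
    zs = [abs(features[k] - mu) / sd for k, (mu, sd) in natural.items()]
    return sum(zs) / len(zs)

genuine = {"pitch_hz": 128.0, "jitter_pct": 1.1}
shifted = {"pitch_hz": 260.0, "jitter_pct": 3.0}  # heavy pitch shifting
print(modification_score(genuine) < 1.0)  # True
print(modification_score(shifted) > 3.0)  # True
```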
With our additional 120+ patent applications pending, you can expect Pindrop to continue blazing this trail of innovation and help keep your contact center and your business secure from fraud.
To learn more about Pindrop’s innovations, visit pindrop.com/technologies.
But how exactly did Pindrop achieve this? By attacking these challenges on multiple fronts, leveraging our authentication, anti-fraud, and data intelligence platform.
Vector 1: More effective fraud detection
Pindrop improved fraud detection rates by 15% over and above existing tools and systems used by customers. Pindrop’s multifactor approach and analytic capabilities led to over $5 Million in fraud loss prevention and an increased ability to see when a fraudster is coming into multiple lines of business at the same time.
Vector 2: Streamlined authentication
Customers leveraged our automatic number identification (ANI) validation, voice biometrics, and caller analytics to remove two knowledge-based authentication questions (KBAs) and lower average handle time (AHT) by as much as 90 seconds per customer interaction. They saved $6.8 Million and were able to personalize the call experience for their callers.
Vector 3: Increased self-service
Pindrop’s risk scores and ANI validation allowed customers to trust verified callers, opening the door to more secure self-service within the interactive voice response (IVR) system. Contact centers were able to contain an additional 1.5% of calls within the IVR system in the first year alone, leading to $6 Million in cost savings.
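The arithmetic behind containment savings is simple: every call newly contained in the IVR avoids the cost of an agent-handled call. The volumes and per-call costs below are hypothetical inputs chosen only to land in the study's ballpark; the real figures and model are in the Forrester TEI report:

```python
def ivr_containment_savings(annual_calls, extra_containment,
                            agent_cost, ivr_cost):
    """Savings from calls that move from agent handling to the IVR.
    All inputs are illustrative assumptions, not study figures."""
    moved = annual_calls * extra_containment
    return moved * (agent_cost - ivr_cost)

# e.g. 50M calls/year, 1.5% newly self-served, $8.50/agent call vs $0.50/IVR call
print(ivr_containment_savings(50_000_000, 0.015, 8.50, 0.50))  # 6000000.0
```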
Vector 4: Improved security operations effectiveness
Pindrop’s case management tool helped fraud investigators fine-tune their fraud alerts, work cases faster, and reduce fraud investigation time by up to 25%. These gains contributed to savings of more than $200,000.
In the words of one of our customers,
“It’s better for our customers, better for our agents. It is certainly saving us money on fraud, and it does allow us to adjust faster to new trends and be able to capture them.”
– VP of Authentication and Identity Technology, Banking.
If you are thinking of ways to keep your contact center secure, to delight your customers, and make your call center agents more productive, the Forrester TEI study is a must read.
Download the study to discover how advanced authentication and fraud detection leads to a stronger bottom line.
Identity confirmation is taking center stage for many companies undergoing customer experience initiatives or digital transformations. At Pindrop, we know that the key to a remarkable experience is one that is able to transcend asking about mothers’ maiden names or other security questions and get to the reason why the customer is on the phone.
But not all identity controls are created equal. Smishing, vishing, and phishing attacks (bad actors using texts, calls, or internet links to obtain sensitive data) claim the most victims of any internet crime, according to the FBI¹, and the information gained through those attacks can lead bad actors to contact centers, where they use social engineering to gain access to victims’ accounts. Feeding ill-gotten data to contact center agents to manipulate them into granting account access is a common social engineering tactic.
Breach at a Major Gaming Company
Even sophisticated authentication policies can allow for exploitation. A major gaming company had its online gaming community breached, with attackers taking control of at least 50 customer accounts by socially engineering contact center support reps. The company stated: “Utilizing threats and other ‘social engineering’ methods, individuals acting maliciously were able to exploit human error within our customer experience team and bypass two-factor authentication to gain access to player accounts.” Despite layered authentication, bad actors leveraging social engineering and the contact center targeted these players, and some went on to use the information to attack other institutions, according to an article from CPO Magazine².
Humans are bad at avoiding socially engineered traps. In 2021, Pindrop reported that, on average, one out of every 1,074 calls into a contact center is from a bad actor. Social engineering can be difficult to identify and prevent, even with proper training and some technology controls in place. Assessing the risk of each call can help reduce account takeover attempts in the contact center and alert agents when to flag a call for the fraud team to review. Pindrop fraud detection is capable of flagging over 80% of fraudulent calls, including social engineering attempts, allowing customer experience teams to focus on service, not fraud detection.
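Using just the two rates quoted above (about 1 in 1,074 calls fraudulent, roughly 80% flaggable), a contact center can gauge its rough exposure. The call volume here is hypothetical:

```python
def fraud_exposure(calls_per_year, fraud_rate=1 / 1074, detection=0.80):
    """Back-of-envelope exposure estimate from the quoted rates:
    ~1 in 1,074 calls is fraudulent, and ~80% can be flagged."""
    fraud_calls = calls_per_year * fraud_rate
    return {"expected_fraud_calls": round(fraud_calls),
            "flagged": round(fraud_calls * detection),
            "missed": round(fraud_calls * (1 - detection))}

# A hypothetical 10M-call/year contact center:
print(fraud_exposure(10_000_000))
# {'expected_fraud_calls': 9311, 'flagged': 7449, 'missed': 1862}
```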
Pindrop can help agents get a leg up on social engineers. By using machine learning, phoneprinting, and deep voice analysis, Pindrop can alert contact center agents to social engineers and known bad actors while improving authentication practices and reducing identity verification friction.
For More Information Contact Us Here.
¹Federal Bureau of Investigation Internet Crime Complaint Center, Internet Crime Report 2020,
https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf
²Hope, Alice, EA Confirms Account Takeover Attacks Compromising High-Profile Gamers via Phishing and Social Engineering Attacks, CPO Magazine, January 2022,
https://www.cpomagazine.com/cyber-security/ea-confirms-account-takeover-attacks-compromising-high-profile-gamers-via-phishing-social-engineering-attacks/




Learn battle-tested strategies for leading in challenging times.
Explore the new challenges facing leaders operationally and socially.
Get an inside look at creating breakout opportunities.
Register today for this exclusive webinar focused on recovering and excelling in our cautious new world.
Your expert panel


John Chambers
Chairman Emeritus, Cisco / CEO, JC2 Ventures


Vijay A. Balasubramaniyan
CEO & Co-Founder at Pindrop
What do we mean by the conversational economy?
This is an economy driven by interaction. Currently, that means always-on internet connectivity, access to products and services anytime/anywhere through a plethora of devices, and platforms that allow people to engage directly with businesses and other consumers.
Businesses already participate in the conversational economy when they immediately respond to customer complaints on social media, engage with prospects through chatbots, or provide seamless omnichannel buying experiences for customers across physical stores, the internet, and the phone.
Why has voice become so popular with consumers?
Ease. Voice is the most natural form of communication and the first one we learn to use; ironically, technology has only just now caught up to the rich intricacies of voice. Now that computing resources, internet bandwidth, and technological innovation can handle voice well, we predict that voice applications will become the next gold rush, just as we saw a gold rush with touch-enabled devices (starting with the iPhone), and spawn an entirely new economy.
Voice already dominates customer interactions and grows exponentially each year. Currently, 78 percent of all customer interactions are by voice. One estimate suggests that voice shopping will increase from a $2 billion industry in 2018 to a $40 billion industry in 2022.
How has the adoption of voice assistants grown?
The adoption of voice assistants and voice activities is also starting to accelerate. Over 25 percent of the US population has access to a smart device, and a large percentage of people anticipate more voice interactions going forward, such as on cell phones or through voice assistants for shopping. Voice tasks spanning a variety of situations, inside the home, outside the home, and at work, will grow in adoption over the next 18 months and especially over the next five years.
For more answers – download the full Voice Intelligence Report here.
The makers of the M.E. Doc software that has been at the center of the NotPetya malware story say they have produced an updated version of the application that does not include the backdoor that had been slipped in by attackers several months ago.
“M.E.Doc has created an update that will ensure safe work in the program. This was reported by SEA ‘IT-expert’ Alesya Bilousova. ‘Today, we officially handed the Cyberpolice update 190, which removes the malware (backdoor) from our product. After its inspection, the Department of Cyberpolicians will provide its findings, and I hope tomorrow we will be able to launch it,’ – said the representative of the company. The update mentioned contains enhanced protection from the virus-encryptor,” a statement from the company on Facebook says.
The company has not pushed the update to customers yet, as it is still in the hands of Ukrainian law enforcement officials.
The news of the update comes two days after agents from the Ukraine Cyberpolice went to the offices of Intellect Services, the company that makes the M.E. Doc accounting software, and confiscated a number of servers used to deliver updates to customers. Security researchers and forensic experts working directly with the company said they had found direct evidence that attackers had been able to insert a backdoor into software updates for M.E. Doc that had been pushed to customers over the last several weeks.
Researchers from Cisco’s Talos team worked on site at Intellect Services and said that their findings matched up with those produced independently by researchers at Eset, who found a stealthy backdoor in the M.E. Doc software. The software, which is used by a large number of Ukrainian businesses, was then used as the main propagation mechanism for the NotPetya malware.
“While we didn’t know it at the time, we can now confirm ESET’s research into the backdoor that had been inserted into the M.E.Doc software. The .net code in ZvitPublishedObjects.dll had been modified on multiple occasions to allow for a malicious actor to gather data and download and execute arbitrary code,” David Maynor, Aleksandar Nikolic, Matt Olney, and Yves Younan, of the Talos team said in a post on the investigation.
Experts initially assumed NotPetya was ransomware, because of its infection screen that demanded Bitcoin to decrypt the victim’s files. But they quickly found that the ransom demand was a ruse and that NotPetya was in fact erasing data on infected machines, including the master boot record. Researchers discovered that the attackers behind this campaign likely had access to the compromised PCs for several weeks before they decided to push the NotPetya malware to them, thanks to the backdoor in M.E. Doc. Why the attackers made the decision to burn their access to these organizations for a faux ransomware campaign is unclear.
Intellect Services did not specify when it would be able to issue the clean update.
CC By license image from Tawheed Manzoor
Anthem Inc., the victim of one of the more extensive data breaches in U.S. history, has agreed to pay a settlement of $115 million to consumers affected by the incident.
The settlement is believed to be the largest ever to result from a data breach in the U.S. and would end a class-action lawsuit that followed the 2015 compromise of Anthem, a major health-care provider. That breach affected more than 80 million consumers, and the data taken during the incident included names, birth dates, Social Security numbers, and other sensitive information. As part of the settlement, Anthem admits no fault for the incident.
“Defendants deny any wrongdoing whatsoever, and this Agreement shall in no event be construed or deemed to be evidence of or an admission or concession on the part of any Defendant with respect to any claim of any fault or liability or wrongdoing or damage whatsoever,” the settlement says.
The settlement agreement, announced on Friday, is subject to a judge’s approval, but if it’s finalized the money from the agreement would go to pay for consumers’ credit monitoring and costs from the data breach. The Anthem compromise reportedly resulted from attackers sending spear-phishing emails to Anthem employees, one of whom opened the mail and started a chain that resulted in the installation of malware. The attackers then moved around inside the network and were able to compromise many other computers.
Anthem officials said the settlement does not indicate that the company was at fault for the breach, but the company will be making changes to its security program as part of the agreement.
“Nevertheless, we are pleased to be putting this litigation behind us, and to be providing additional substantial benefits to individuals whose data was or may have been involved in the cyber attack and who will now be members of the settlement class,” the statement says.
“Anthem is determined to do its part to prevent future attacks. To that end, as part of the settlement, Anthem has agreed to continue the significant information security practice changes that we undertook in the wake of the cyber attack, and we have agreed to implement additional protections over the next three years.”
At the time it occurred, the Anthem data breach was one of the larger breaches on record, but since then there have been several other breaches of greater magnitude. The series of compromises that hit Yahoo in the last few years are far larger, with one revealed in 2016 that hit more than a billion accounts.
A report from the California Department of Insurance, which conducted an investigation into the Anthem breach, concluded earlier this year that a foreign government was behind the attack.
“The team determined with a high degree of confidence the identity of the attacker and concluded with a medium degree of confidence that the attacker was acting on behalf of a foreign government. Notably, the exam team also advised that previous attacks associated with this foreign government have not resulted in personal information being transferred to non-state actors,” the insurance report says.
CC By-sa license image from Matthew Hurst
In recent years, as VoLTE (Voice over LTE) services have grown more popular and the nation’s four largest cellular networks have adopted them, security concerns have begun to arise. In a new study presented at the Symposium on Information and Communications Technology Security (SSTIC), three researchers from P1 Security found new vulnerabilities, and confirmed old ones, concerning the information attackers can extract about users from VoLTE calls, including geolocation data.
VoLTE is a service that offers voice communications (a phone call) over an LTE network, typically providing higher call quality. This has proven to be lucrative for telecom companies as VoLTE has taken businesses by storm and is becoming a part of people’s personal devices as well. In their paper, the researchers identify several active and passive vulnerabilities in VoLTE that can be used to enumerate users, spoof numbers, and gather information about users from information leaks.
“A malicious user (UE-attacker) can customize certain header fields (From and P-Preferred-Identity) of a SIP INVITE request in order to trick the different network elements present on the SIP signaling path. This fake information, if left as is, not sanitized and not replaced, could be received by the target (UE-victim) and make calls appear from another (spoofed) identities,” the paper by Patrick Ventuzelo, Olivier Le Moal, and Thomas Coudray says.
Some of the vulnerabilities discussed in the paper have been disclosed publicly before, but the researchers show how they can be combined to get a picture of a network and its users.
“This paper demonstrates different vulnerabilities in the VoLTE networks which can be exploited to figure out the location of the targeted victim. For example, VKB#1468 leaks B-party private information. If an attacker A makes a voice call over VoLTE to a victim B, then ‘some’ un-patched systems/networks can leak ‘utran-cell-id-3gpp’ value, which is contained in P-AccessNetwork-Info header. Once an attacker gets this information about his target he can easily retrieve the victim’s localization using databases of Cell IDs like OpenCellID / Cell ID Finder,” said Payas Gupta, a data scientist at Pindrop.
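The leak described above is concrete enough to sketch: extract the `utran-cell-id-3gpp` value from a P-Access-Network-Info header, then split it into MCC/MNC/LAC/cell ID for a lookup in a database like OpenCellID. The sample header value and the fixed field widths are simplifications (MNC can be two or three digits):

```python
import re

# Example P-Access-Network-Info header of the kind the paper describes;
# the utran-cell-id-3gpp value encodes MCC + MNC + LAC + Cell ID.
HEADER = "P-Access-Network-Info: 3GPP-UTRAN-TDD; utran-cell-id-3gpp=20810012345678"

def extract_cell_id(sip_header):
    """Pull the cell identifier out of a leaked SIP header, if present."""
    m = re.search(r"utran-cell-id-3gpp=([0-9a-fA-F]+)", sip_header)
    if not m:
        return None
    v = m.group(1)
    # Fixed widths are a simplification; real parsing must handle
    # 2- or 3-digit MNCs.
    return {"mcc": v[:3], "mnc": v[3:5], "lac": v[5:9], "cell_id": v[9:]}

print(extract_cell_id(HEADER))
# {'mcc': '208', 'mnc': '10', 'lac': '0123', 'cell_id': '45678'}
```

With the cell ID in hand, an attacker only needs a public cell-location database to approximate the victim's position.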
The researchers also identified a flaw that could allow an attacker to get the IMEI number for a subscriber. An IMEI number is unique to each individual device and can be used as an identifier. In addition, the authors demonstrated both active and passive attacks by modifying the SIP packet and SDP headers.
The major contribution of this paper is showing that, on a rooted Android phone, it is possible to capture packets with tools like Wireshark and inject crafted packets while making a VoLTE call. A malicious app on such a rooted phone could therefore sniff the phone’s traffic and track a victim’s location.
This is just the latest in a series of studies exposing the vulnerabilities built into VoLTE networks. Previously, attackers had found ways to compromise 4G networks using this new technology after Verizon Wireless, the nation’s largest carrier, rolled it out to the public. Currently, the main audience for VoLTE is businesses that rely on programs like Skype for Business as their default voice communications system.
Researchers have discovered a new Android trojan in the Google Play app store that has the ability to root devices and can inject malicious code into system runtime libraries.
The Dvmap trojan is thought to be the first such piece of malware that’s capable of injecting code into system libraries at runtime, and researchers at Kaspersky Lab said the app containing the malware has been downloaded more than 50,000 times. The malware is disguised as a puzzle game and the attackers behind it have used some innovative methods to get past Google’s security roadblocks for Play.
“To bypass Google Play Store security checks, the malware creators used a very interesting method: they uploaded a clean app to the store at the end of March, 2017, and would then update it with a malicious version for short period of time. Usually they would upload a clean version back on Google Play the very same day. They did this at least 5 times between 18 April and 15 May,” Roman Unuchek, a researcher at Kaspersky, wrote in an analysis of the malware.
“All the malicious Dvmap apps had the same functionality. They decrypt several archive files from the assets folder of the installation package, and launch an executable file from them with the name ‘start’.”
Google has removed the Dvmap trojan from the Play store, after Kaspersky reported it to the company. Unuchek said it’s not entirely clear what the endgame was for the attackers who created and uploaded the malware. After installation, the trojan tries to gain root privileges on the infected device and then begins its main process, which begins injecting code into the device’s runtime libraries.
“During patching, the Trojan will overwrite the existing code with malicious code so that all it can do is execute /system/bin/ip. This could be very dangerous and cause some devices to crash following the overwrite. Then the Trojan will put the patched library back into the system directory. After that, the Trojan will replace the original /system/bin/ip with a malicious one from the archive (Game324.res or Game644.res). In doing so, the Trojan can be sure that its malicious module will be executed with system rights. But the malicious ip file does not contain any methods from the original ip file. This means that all apps that were using this file will lose some functionality or even start crashing,” Unuchek said.
The Dvmap malware also will try to turn off the Verify Apps functionality on infected devices. This feature continuously checks installed apps to ensure that they’re not malicious or exhibiting undocumented behavior.
“It looks like its main purpose is to get into the system and execute downloaded files with root rights. But I never received such files from their command and control server,” Unuchek said.
“These malicious modules report to the attackers about every step they are going to make. So I think that the authors are still testing this malware, because they use some techniques which can break the infected devices.”
A long-running, multi-faceted, malvertising campaign has been found using a technique that enables the sites involved to bypass the protections of ad blockers.
Malvertising campaigns can take a lot of different forms and they often involve multiple layers of compromised or malicious sites and lots of redirections. Some campaigns are connected to malware operations and use exploit kits, while others simply use visual or technical tricks to redirect users to sites with malicious or aggressive ads. The ultimate goal is to get the user to click on an ad to either download some piece of software or collect pay-per-click revenue for the group behind the campaign.
The new campaign, identified by researchers at Malwarebytes Labs, is known as RoughTed and it is deeply connected to both exploit kits and the world of sketchy browser extensions. The attackers behind the campaign are using a number of interesting techniques, including detailed fingerprinting of users, and are pushing several different payloads to victims. The scope of the RoughTed campaign is considerable.
“We estimate that the traffic via RoughTed related domains accumulated to over half a billion hits and was responsible for many successful compromises due to effective techniques that triage visitors and bypass ad-blockers,” Jerome Segura of Malwarebytes Labs wrote in a post analyzing the campaign.
“The threat actors behind RoughTed have been leveraging the Amazon cloud infrastructure, in particular, its Content Delivery Network (CDN), while also blending in the noise with multiple ad redirections from several ad exchanges, making it more difficult to identify the source of their malvertising activity.”
Segura said the researchers noticed the RoughTed campaign while looking at traffic associated with the Magnitude exploit kit. They noticed that the RoughTed domain was redirecting traffic, through a series of intermediate domains, to the Magnitude filtering gate, leading users to the exploit kit. Much of the traffic that’s flowing through the RoughTed campaign is coming from streaming sites and file-sharing sites, Segura said, often with the help of URL shorteners.
“These are areas where malicious actors love to lurk because of the sheer volume of traffic but also subpar standards for quality and safety of online advertising,” he said.
The RoughTed campaign also uses detailed browser fingerprinting and a clever trick to evade ad blockers. When a user hits a page that is associated with the campaign and has the requisite code, clicking anywhere on the page will initiate a connection to a tracking site, bypassing the protection of the major ad blockers. The sites associated with this campaign have been seen delivering malware, browser extensions, and fake updates stuffed with adware.
“This malvertising campaign is quite diverse and no matter what your operating system or browser are, you will receive a payload of some kind. Perhaps this should be something for publishers to have a deep hard look at, knowing what they may be subjecting their visitors to if they decide to use those kinds of adverts,” Segura said.
A researcher has released a tool that can recover the decryption key for the WannaCry ransomware on infected Windows XP systems.
The tool, called Wannakey, is the work of Adrien Guinet of Quarkslab, a French security firm. Wannakey takes advantage of a quirk in the way that WannaCry uses the Windows Crypto API on XP machines. The API doesn’t remove the prime numbers used to compute the private key from memory before it frees that memory.
“This is not really a mistake from the ransomware authors, as they properly use the Windows Crypto API. Indeed, for what I’ve tested, under Windows 10, CryptReleaseContext does cleanup the memory (and so this recovery technique won’t work). It can work under Windows XP because, in this version, CryptReleaseContext does not do the cleanup,” Guinet said in the documentation for Wannakey.
“If you are lucky (that is the associated memory hasn’t been reallocated and erased), these prime numbers might still be in memory.”
The tool only works on Windows XP PCs and Guinet said that if the computer was rebooted after it was infected, Wannakey won’t be able to recover the private key.
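The reason the leftover primes are enough: with p and q in hand, the RSA private exponent can be recomputed directly, with no cryptanalysis required. A toy-sized sketch of that step (WannaCry used far larger keys than these illustrative primes):

```python
def rebuild_private_key(p, q, e=65537):
    """Recompute an RSA modulus and private exponent from the two
    primes, as Wannakey does once it finds p and q left in memory."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)  # modular inverse of e mod phi (Python 3.8+)
    return n, d

# Toy primes standing in for the values recovered from process memory.
p, q = 61, 53
n, d = rebuild_private_key(p, q)

message = 42
ciphertext = pow(message, 65537, n)   # "encrypted file"
print(pow(ciphertext, d, n) == message)  # True: decryption works again
```

This is why a reboot defeats the tool: once the memory holding p and q is reused, there is nothing left to rebuild the key from.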
WannaCry has hit the Internet hard in the last week, infecting hundreds of thousands of machines around the world, many of them in Russia. The ransomware is unusual in many respects, particularly its use of exploit code for a vulnerability in Microsoft’s SMB protocol implementation. The NSA reportedly discovered the vulnerability and developed the exploit code for it, a tool known as EternalBlue. Once the WannaCry malware is on a new PC, it encrypts the user’s files and then begins scanning the local network for other vulnerable computers.
WannaCry is seen as the first self-replicating ransomware variant, something that security experts have been warning about for a couple of years.
Although Microsoft no longer officially supports Windows XP, the company last week released a patch for the SMB vulnerability in XP as well as other unsupported, older Windows releases.
“Given the potential impact to customers and their businesses, we made the decision to make the Security Update for platforms in custom support only, Windows XP, Windows 8, and Windows Server 2003, broadly available for download,” Phillip Misner of the Microsoft Security Response Center said.
Image: Show Jian Ming, CC by-nd license.
Google is introducing a new system that scans all of the apps on Android devices continuously, looking for unwanted behavior, malware, and other problems.
The new Play Protect framework is Google’s latest attempt to shore up the security of the Android ecosystem. Any apps that are in the Play store already are subject to a number of reviews and security restrictions. Google checks out every developer who submits apps to the store and also employs a tool called Bouncer that scans submitted apps. Bouncer looks for both outright malicious behavior as well as hidden or undocumented features that could be harmful to the device.
The app store-level checks are designed to prevent users from installing malware or shady apps that somehow make their way into the Play store. But the Android ecosystem allows users to install apps from other sources, including third-party marketplaces, and Google has no control over the security or integrity of those apps. Play Protect is designed to remedy that situation by constantly checking the apps on Android devices and alerting users to any problems.
“Play Protect is built into every device with Google Play, is always updating, and automatically takes action to keep your data and device safe, so you don’t have to lift a finger,” Edward Cunningham, product manager for Android security at Google, said in a post.
“With more than 50 billion apps scanned every day, our machine learning systems are always on the lookout for new risks, identifying potentially harmful apps and keeping them off your device or removing them. All Google Play apps go through a rigorous security analysis even before they’re published on the Play Store—and Play Protect warns you about bad apps that are downloaded from other sources too. Play Protect watches out for any app that might step out of line on your device, keeping you and every other Android user safe.”
Since the advent of smart phones and the app store model, attackers have been trying to sneak malware and other kinds of malicious apps into the stores and onto users’ phones. They had some early successes going after the Android app store, but Google has tightened up the app submission and review process considerably in recent years. Play Protect adds another layer to that protection.
As part of the new system, Google also added a feature similar to Apple’s Find My iPhone feature that allows users to find their devices by signing into their Google accounts.
Google has patched a dangerous issue in Chrome that enabled attackers to spoof legitimate domains in the browser by using unicode characters rather than normal ones.
That vulnerability is the result of the way that Chrome handles some unicode characters and it’s not necessarily a new issue. Security experts have known about the underlying problem for several years, and the browser vendors have made changes along the way to address it. But Chrome and Mozilla Firefox don’t prevent all of the different variations of the attack. Last week, researchers showed that both browsers could still be tricked into displaying some unicode characters in a way that’s essentially impossible to distinguish from the normal ASCII characters.
“From a security perspective, Unicode domains can be problematic because many Unicode characters are difficult to distinguish from common ASCII characters. It is possible to register domains such as ‘xn--pple-43d.com’, which is equivalent to ‘аpple.com’. It may not be obvious at first glance, but ‘аpple.com’ uses the Cyrillic ‘а’ (U+0430) rather than the ASCII ‘a’ (U+0041). This is known as a homograph attack,” researcher Xudong Zheng wrote in a post on the attack.
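The IDN mapping Zheng describes can be reproduced with Python’s built-in punycode codec, and a crude mixed-script check, a heuristic of my own similar in spirit to (but much simpler than) the rules browsers actually apply, flags such labels:

```python
import unicodedata

# The Cyrillic-а spoof from Zheng's example, via Python's punycode codec:
# 'а' here is U+0430 (CYRILLIC SMALL LETTER A), not the ASCII 'a'.
assert "аpple".encode("punycode") == b"pple-43d"

def mixed_scripts(label: str) -> bool:
    """Crude homograph heuristic: does a label mix Latin with Cyrillic?
    (Real browsers use far more elaborate script-mixing rules.)"""
    scripts = set()
    for ch in label:
        name = unicodedata.name(ch, "")
        if name.startswith("CYRILLIC"):
            scripts.add("Cyrillic")
        elif name.startswith("LATIN"):
            scripts.add("Latin")
    return len(scripts) > 1

assert mixed_scripts("аpple")        # Cyrillic а + Latin pple: suspicious
assert not mixed_scripts("apple")    # pure ASCII is fine
```

Note that a whole-script spoof (every character Cyrillic) passes this simple check, which is exactly the variation the researchers showed still worked in Chrome and Firefox.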
The biggest risk with this issue is its potential use in phishing attacks. If an attacker is able to register a domain that is visually indistinguishable from a legitimate one, he would have the ability to trick users into trusting the site. Google fixed this vulnerability in Chrome 58, released on April 19. However, Mozilla has decided not to make a change to Firefox to address the problem. The company has published a FAQ that explains both the attack and why Mozilla isn’t planning to address it in Firefox.
Mirai is no longer the only game in town when it comes to IoT malware.
A new piece of malware known as Hajime is infecting some of the same kinds of embedded devices that Mirai has been targeting for several months. The malware has infected thousands of IoT devices in recent weeks and researchers say it has a modular design that could allow the creator to add functionality in the future. Right now, Hajime isn’t being used for DDoS attacks, but it is targeting IoT devices with open Telnet ports and default usernames and passwords.
While Hajime has a number of functions and traits that line up with Mirai, it also includes several unique capabilities. Most notably, after it establishes a foothold on a new device, Hajime closes several ports that Mirai is known to use for initial infections. It also doesn’t have a central command-and-control server, but instead uses a decentralized architecture that allows the malware’s creator to push messages to all of the infected devices from any of the peers in the network.
“Hajime is also stealthier and more advanced in comparison to Mirai. Once on an infected device, it takes multiple steps to conceal its running processes and hide its files on the file system. The author can open a shell script to any infected machine in the network at any time, and the code is modular, so new capabilities can be added on the fly. It is apparent from the code that a fair amount of development time went into designing this worm,” Waylon Grange of Symantec wrote in an analysis of the Hajime malware.
Grange said there are tens of thousands of devices infected by Hajime right now, and a large portion of them are in Brazil, Iran, and Thailand. Like Mirai, Hajime infects new devices by taking advantage of open Telnet connections that have default credentials. But rather than launching DDoS attacks or taking some other malicious action, Hajime displays a somewhat cryptic message every 10 minutes on infected devices, saying that the author is a white hat trying to secure weak systems.
“However, there is a question around trusting that the author is a true white hat and is only trying to secure these systems, as they are still installing their own backdoor on the system. The modular design of Hajime also means if the author’s intentions change they could potentially turn the infected devices into a massive botnet,” Grange said.
The Mirai botnet has been active for several months and has been involved in a number of enormous DDoS attacks, including one that knocked DNS provider Dyn offline for several hours. The botnet is actually not just one network but several different ones controlled by various attackers. In February, authorities in the U.K. arrested a man in connection with a Mirai attack on Deutsche Telekom home routers.
Image: Eli Christman, CC By license.
Facebook has opened a beta program for its new Delegated Account Recovery system, which is designed to replace traditional email or SMS-based recovery processes.
The Facebook system allows users to connect their Facebook accounts with other services and use that trusted link to recover access to one of the accounts. The company has published an SDK and documentation on the system, which it has been testing for several months with GitHub. Now the program is entering a closed beta with the promise of a public release in the coming months. Delegated Account Recovery is meant to eliminate the use of insecure channels such as email or SMS to verify a user’s ownership of a given account.
“It’s an open protocol. Trust who you want. We’re really excited that GitHub is making the first connection with us,” Brad Hill, a security engineer at Facebook, said in January. “We really don’t want this to be a Facebook-only service, so that we can have that network effect protecting you. The best way for us to address that is to share it.”
On Tuesday, Hill announced the beta program for Delegated Account Recovery and said GitHub also is publishing its own SDK. The hope, he said, is that many other companies will join the program, creating a large ecosystem with a variety of interconnected services and users. The system relies on a trusted relationship between two participants and uses cryptographically signed tokens, rather than emailed links, for account recovery.
Image: Facebook
“Instead of requesting user data at the outset, your business creates a recovery token linked to your identifier for the customer, and sends it to Facebook. We keep it safe and private until that person needs it. Think of it as giving a sealed envelope to a trusted friend. Facebook can’t see what’s inside; we just know we shouldn’t give it back to anyone but you,” Hill said in a post announcing the beta.
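The “sealed envelope” idea can be sketched in a few lines. This is an illustrative sketch only, not Facebook’s actual token format or API (the published protocol defines a binary token layout and countersigning): the issuing service mints an opaque token bound to its own customer record by a MAC, so the holder, Facebook in this case, can store it but can neither read nor forge it.

```python
import hmac, hashlib, os, base64

# Illustrative sketch of the "sealed envelope" idea -- NOT the real protocol's
# token format. All names here are hypothetical.
SERVICE_KEY = os.urandom(32)   # secret held only by the issuing service

def mint_recovery_token(customer_id: str) -> bytes:
    """Opaque token: random nonce + MAC binding it to our customer record."""
    nonce = os.urandom(16)
    mac = hmac.new(SERVICE_KEY, nonce + customer_id.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(nonce + mac)

def verify_recovery_token(token: bytes, customer_id: str) -> bool:
    raw = base64.urlsafe_b64decode(token)
    nonce, mac = raw[:16], raw[16:]
    expected = hmac.new(SERVICE_KEY, nonce + customer_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

token = mint_recovery_token("customer-42")   # handed to the recovery provider
assert verify_recovery_token(token, "customer-42")
assert not verify_recovery_token(token, "someone-else")
```

Because only the issuing service holds the key, the token is useless to anyone who intercepts it, which is the property email links and SMS codes lack.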
Both email and SMS are considered insecure channels, as attackers can intercept the messages in a variety of ways. A system such as Facebook’s helps users avoid the kind of chain reaction that can happen when an attacker is able to compromise one account and then use that as a launching point to go after others. The idea is to make each participating service an equally important part of the system.
“I want these core accounts strongly and redundantly cross-verifiable. None of the services should be more or less important,” Hill said in January. “We’re trying to build a reliable set of steps that anyone can follow without opening it up to attackers.”
Facebook has published extensive documentation on Delegated Account Recovery, as well as some sample applications.
Image: Startbloggingonline.com, CC by license.
By Jonah Berg-Ganzarain
A pair of doctoral students and their advisor, looking for insights into the inner workings of tech support scams, spent eight months collecting data on and studying the tactics and infrastructure of the scammers, using a purpose-built tool. What they uncovered is a complex, technically sophisticated ecosystem supported by malvertising and victimizing people around the world.
The study is the first analysis of its kind on tech support scams, and it’s the work of two PhD candidates at Stony Brook University, and their advisor, Nick Nikiforakis. The team built a custom tool called RoboVic that performed a “systematic analysis of technical support scam pages: identified their techniques, abused infrastructure, and campaigns”. The tool includes a man-in-the-middle proxy that catalogs requests and responses and also will click on pop-up ads, which are key to many tech-support scams.
There are a slew of different versions of these scams, but generally they’re a type of multichannel fraud that occurs when a scammer claims to offer legitimate tech support via the phone or online to unsuspecting users, with the caller usually claiming to work for Microsoft or Apple support. As the Stony Brook study points out, many of these scams begin when a cleverly designed website tricks unsuspecting, vulnerable users into believing they have a virus, and that they need to call the number shown on the site to help them out. Sometimes, the page disguises itself as a Windows “blue screen” so that users find it more believable.
In their study, the researchers found that the source for many of these scams was “malvertisements”: advertisements on legitimate websites, particularly those using ad-based URL shorteners, that led visitors to scam pages. This gives the scammers an opportunity to strike on what would seem like a relatively safe page. Although victims of these scams can be anywhere, the researchers found that 85.4 percent of the IP addresses in these scams were located across different regions of India, with 9.7 percent located in the United States and 4.9 percent in Costa Rica. Scammers typically asked users for an average of $291, with prices ranging from $70 to $1,000.
“This threat is not going to decrease soon.”
“Technical support scam is a multi-channel scam that benefits from both the telephony channel and web channel to spread and perform the attack, and it makes it difficult to track it and take it down,” said study co-author Najmeh Miramirkhani, a computer science PhD student at Stony Brook.
The researchers used RoboVic to collect data over the course of the eight-month study, and then called 60 scammers, posing as naive users, and gathering information on the scammers’ social engineering techniques and demeanor. They saw a wide range of tactics and concluded that while there may be a few individual scammers operating, the vast majority of them are part of large, organized call centers. Many of the scammers also use technical tools to help run their fraudulent operations.
“We discovered that scammers abuse popular remote administration tools (81% of scammers rely on two specific software products), to gain access to user machines where they then patiently attempt to convince users that they are infected with malware. We found that, on average, a scammer takes 17 minutes, using multiple social engineering techniques mostly based on misrepresenting OS messages, to convince users of their infections and then proceeds to request an average of $290.9 for repairing the ‘infected’ machines,” the authors said in their paper.
This sort of scam is showing no signs of decline, Miramirkhani says.
“So far, we collected more than 25K scam domains and thousands of scam phone numbers and we [have] evidence that this threat is not going to decrease soon and it still has an increasing trend,” Miramirkhani said.
The authors stress that it’s important to educate individuals on how to avoid these types of scams, and suggest measures such as a browser extension that warns users about scam sites or a general education program. While older people and individuals unfamiliar with technology are the most vulnerable, everyone can benefit from training in how to spot this multichannel scam.
The FTC has been cracking down on these scams for several years, and just this week shut down an operation in which the alleged scammer was pretending to represent the FTC itself while offering fake tech support.
Image: Greyweed, CC By license.
ST. MAARTEN–Researchers investigating modern cyber espionage operations have found a direct link between the Moonlight Maze attacks that hit a number of United States military and government agencies in the 1990s and operations that are still ongoing today. The connections, through code samples, logs, and other data, show that some of the same tools and infrastructure used 20 years ago are still in use by advanced attackers right now.
The Moonlight Maze attacks were among the first major cyber espionage campaigns to gain public attention, and security researchers often point to the attacks as the beginning of the modern advanced threat era. The attacks went on for years and included highly complex techniques and the exfiltration of a huge amount of data. Researchers at Kaspersky Lab, working with counterparts from King’s College London, recently discovered that a backdoor used by the Moonlight Maze attackers in 1998 also has been used by the Turla APT attack group, possibly as recently as this year. The new details come from a months-long analysis of data and logs from a server that was compromised during the Moonlight Maze attacks and preserved by a systems administrator since then.
The original Moonlight Maze attackers mainly used Unix and had a large set of tools at their disposal. They were targeting Solaris systems for the most part and had a custom backdoor that they used often. One of the systems that they compromised was a server known as HRtest (pictured below), which administrator David Hedges has kept. Hedges allowed the Kaspersky researchers and King’s College London’s Thomas Rid access to the server, including access logs, the attackers’ own logs, and an extensive toolset used by the attackers, including 43 separate binaries. The researchers discovered that the attackers had made a key mistake in some of their operations.
“In the late 1990s, no-one foresaw the reach and persistence of a coordinated cyberespionage campaign.”
“It was their standard behavior to use infected machines to look for further victims on the same network or to relay onto other networks altogether. In more than a dozen cases, the attackers had infected a machine with a sniffer that collected any activity on the victim machine and then proceeded to use these machines to connect to other victims. That meant that the attackers actually created near complete logs of everything they themselves did on these systems—and once they did their routine exfiltration, those self-logs were saved on the HRTest node for posterity. The attackers created their own digital footprint for perpetuity,” Kaspersky said in a post detailing their findings, which were announced at the Security Analyst Summit here Monday.
The information that Hedges gave the researchers, along with other data they had collected over the years during investigations into the Turla APT campaigns, allowed them to dig in and connect the Moonlight Maze operation to the Turla attacks. The analysis shows that the tools used by the attackers have evolved over the years, as needs have dictated, and have used a wide range of tools and techniques.
The connection between the old Moonlight Maze attacks and techniques and the recent Turla campaigns is a solid one, but the researchers stopped short of saying that the attackers themselves are the same.
“An objective view of the investigation would have to admit that a conclusion is simply premature. The unprecedented public visibility into the Moonlight Maze attack provided by David Hedges is fascinating, but far from complete. It spans a window between 1998-1999 as well as samples apparently compiled as far back as late 1996. On the other hand, the Penquin Turla codebase appears to have been primarily developed from 1999-2004 before being leveraged in more modern attacks,” the researchers said.
The data that the researchers had gave them the ability to look at complete sessions between the attackers and proxies, target systems, and other machines from nearly 20 years ago. That gave them detailed insight into the attackers’ techniques.
“This looks nothing like a modern APT,” Kaspersky researcher Juan Andres Guerrero-Saade said during the talk Monday.
Image: Aaron Harmon, CC by license.
Cloudflare, one of the larger content-delivery networks and DNS providers on the Internet, had a critical bug in one of its services that resulted in sensitive customer data such as cookies, authentication tokens, and encryption keys being leaked and cached by servers around the world.
The vulnerability was in an HTML parser that Cloudflare engineers had written several years ago but had recently begun replacing with a newer one. The company was migrating various services from the old parser, written using Ragel, to the new one, and a change made during that process is what caused the bug to activate and begin leaking memory containing private information. The bug was active for several days, and Cloudflare said the most critical period was Feb. 13 to Feb. 18.
“It turned out that the underlying bug that caused the memory leak had been present in our Ragel-based parser for many years but no memory was leaked because of the way the internal NGINX buffers were used. Introducing cf-html subtly changed the buffering which enabled the leakage even though there were no problems in cf-html itself,” John Graham-Cumming of Cloudflare said in a post-mortem on the response to the vulnerability.
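The class of bug involved can be shown with a toy model. This is simplified Python, not Cloudflare’s actual Ragel-generated C, and the buffer contents are invented: an end-of-buffer guard that tests for equality rather than “at or past the end,” combined with a step that can advance two bytes at once, lets a truncated tag at the end of a page jump past the boundary, after which the loop copies whatever lies in adjacent memory into the output.

```python
# Toy model of an over-read like Cloudbleed's -- NOT Cloudflare's parser.
# The buggy guard tests `i != end` instead of `i >= end`, so the 2-byte step
# taken on a lone "<" at the end of the buffer overshoots `end` and the loop
# keeps copying adjacent memory into the response.
memory = b"<b>hell<" + b"SECRET_COOKIE"   # page content + adjacent heap data
end = 8                                   # logical end of the page buffer

def parse(buggy: bool) -> bytes:
    out, i = [], 0
    while (i != end if buggy else i < end) and i < len(memory):
        if memory[i:i + 1] == b"<":
            i += 2                        # skip "<x" -- overshoots on a lone "<"
        else:
            out.append(memory[i])
            i += 1
    return bytes(out)

assert b"SECRET" not in parse(buggy=False)       # correct guard: no leak
assert b"ECRET_COOKIE" in parse(buggy=True)      # buggy guard: leaked memory
```

This also explains Graham-Cumming’s point above: the faulty check can sit dormant for years if the surrounding buffering happens never to produce the overshoot condition.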
Cloudflare has a massive and diverse customer base that includes companies such as Uber, Yelp, OkCupid, Medium, and 1Password. There is a running list being maintained of all of the known customers, including some that are known not to have been affected by the vulnerability. 1Password is among those who have said their data was unaffected.
The bug had a broad potential effect for Cloudflare’s customers, as well as for the company itself. Because of the way the company’s infrastructure is set up, a request to one Cloudflare site affected by the vulnerability could end up revealing private information from a separate site. Also, search engines routinely cache web content for faster serving, and some of the leaked private data from Cloudflare sites had been cached by Google and other engines.
“We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data.”
“The infosec team worked to identify URIs in search engine caches that had leaked memory and get them purged. With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory. Those 770 unique URIs covered 161 unique domains. The leaked memory has been purged with the help of the search engines,” Graham-Cumming said.
“We also undertook other search expeditions looking for potentially leaked information on sites like Pastebin and did not find anything.”
Some of the sensitive data leaked by the vulnerability belonged to Cloudflare itself rather than its customers. Although no customer encryption keys were leaked, an SSL key Cloudflare used to encrypt connections between its own machines did leak, as did some other internal authentication secrets.
A researcher with Google’s Project Zero discovered the memory leak last week while doing unrelated research, and after confirming what he had found, reached out to Cloudflare’s security team immediately.
“It looked like that if an html page hosted behind cloudflare had a specific combination of unbalanced tags, the proxy would intersperse pages of uninitialized memory into the output. My working theory was that this was related to their ‘ScrapeShield’ feature which parses and obfuscates html – but because reverse proxies are shared between customers, it would affect *all* Cloudflare customers,” researcher Tavis Ormandy of Google said in his initial analysis of the flaw.
“We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major cloudflare-hosted sites from other users.”
Cloudflare implemented a partial fix for the memory leak within a few hours of Ormandy’s initial report and fully fixed it earlier this week.
Image: Maarten Van Damme, CC By license.
There is a clever, well-crafted phishing campaign targeting Gmail users that includes a fake login page that exactly mimics the real thing to trick victims into entering their credentials.
The campaign has been going on for some time but it recently began to gain attention after researchers analyzed it and broke down the techniques the attackers are using. The general setup is pretty much the same as most phishing campaigns, with an email coming from what appears to be a familiar address. In this case, the message actually is coming from someone in the victim’s address book, a user whose account has been compromised already. The email has a subject line that has been used in emails between the parties before and includes an attachment that looks like an image.
If the victim clicks on the image to get a preview of the attachment, it opens a new tab in the browser with an exact replica of the Gmail login page. The only giveaway that it’s a fake is the information in the browser’s address bar. Rather than a typical Gmail URL starting with https://, the address has a data URI at the beginning and actually includes a large text file at the end. If the victim doesn’t look closely at the address bar and enters her credentials, it’s game over.
“This phishing technique uses a ‘data URI’ to include a complete file in the browser location bar.”
“The attackers signing into your account happens very quickly. It may be automated or they may have a team standing by to process accounts as they are compromised. Once they have access to your account, the attacker also has full access to all your emails including sent and received at this point and may download the whole lot,” Mark Maunder of WordFence wrote in an analysis of the campaign.
“Now that they control your email address, they could also compromise a wide variety of other services that you use by using the password reset mechanism including other email accounts, any SaaS services you use and much more.”
The key to the attack is the use of the extra information in the address bar. After the data URI at the beginning of the string, there is the actual address for the Gmail login page, “accounts.google.com”. But at the tail end of the address is a huge chunk of text that forces the browser to open the new tab.
“This phishing technique uses something called a ‘data URI’ to include a complete file in the browser location bar. When you glance up at the browser location bar and see ‘data:text/html…..’ that is actually a very long string of text,” Maunder said.
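The giveaway Maunder describes can be checked programmatically. The sketch below is illustrative (the function name and the spoofed string are invented for the example): a “login page” whose address parses to a data: scheme is embedding its own HTML in the address bar rather than loading from a real https:// origin.

```python
from urllib.parse import urlparse

def looks_like_data_uri_phish(address_bar: str) -> bool:
    """Flag the giveaway described above: a 'login page' whose address is a
    data: URI carrying its own HTML rather than an https:// origin.
    (Illustrative heuristic, not a browser's actual defense.)"""
    return urlparse(address_bar).scheme.lower() == "data"

real  = "https://accounts.google.com/ServiceLogin"
# Hypothetical spoof: the legitimate-looking hostname is just text inside
# the data URI, padded with whitespace to push the payload out of view.
spoof = "data:text/html,https://accounts.google.com/ServiceLogin%20%20%20..."
assert not looks_like_data_uri_phish(real)
assert looks_like_data_uri_phish(spoof)
```

The padding trick is why a quick glance fails: the visible prefix of the address bar reads like a Google URL, while the scheme that actually matters sits at the very start of the string.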
The attack is effective for several reasons aside from the URL trickery. Sending the victim an email from a contact’s account with a subject line she’s seen previously from that person is the foot in the door. The use of the attachment, which users are conditioned to click on for a preview, is the next step, and then the deception with the URL structure is the final ingredient.
Google is aware of the problem and advises users to turn on two-step verification to protect against account takeovers like this.
Image: Matteo X, CC By license.
With exploit code publicly available and details of the vulnerability widely known, Netgear has released a beta version of new firmware to fix a bug in several of its routers that attackers can use to execute arbitrary code on the devices.
The Netgear router vulnerability affects several of the company’s home router models, including the R6250, R6400, R6700, and many others. Attackers can exploit the vulnerability by tricking users into clicking on a malicious link. Researchers at the CERT/CC at Carnegie Mellon University disclosed the vulnerability a few days ago and there is exploit code available for the bug. Netgear officials said the company is developing a full firmware update to fix the flaw, but in the meantime have released a beta update for some of the vulnerable models.
“While we are working on the production version of the firmware, we are providing a beta version of this firmware release. This beta firmware has not been fully tested and might not work for all users. NETGEAR is offering this beta firmware release as a temporary solution, but NETGEAR strongly recommends that all users download the production version of the firmware release as soon as it is available,” the company said in its advisory.
The quick release of the beta update is a clear indication of the seriousness of the vulnerability and the high potential for users to be compromised. Netgear hasn’t said when the final updated firmware for the affected routers will be available, and said it is still trying to determine if any other devices are vulnerable.
“NETGEAR is continuing to review our entire portfolio for other routers that might be affected by this vulnerability. If any other routers are affected by the same security vulnerability, we plan to release firmware to fix those as well,” the company said.
Image: Kristy MacPherson, CC By-SA license.
Two models of Netgear home routers contain a vulnerability that can allow a remote attacker to execute arbitrary code. The bug can be exploited with a simple URL and there’s a publicly available exploit for the flaw.
The issue affects the Netgear R7000 and R6400 routers and right now there’s no fix available for the vulnerability. The bug affects firmware version 1.0.7.2_1.1.93 in the R7000 and version 1.0.1.6_1.0.4 in the R6400, and there are reports that some other Netgear models might be vulnerable, as well. An advisory from the CERT/CC at Carnegie Mellon University says the vulnerability can be exploited easily by remote or local attackers.
“Exploiting the vulnerability is trivial.”
“Netgear R7000, firmware version 1.0.7.2_1.1.93 and possibly earlier, and R6400, firmware version 1.0.1.6_1.0.4 and possibly earlier, contain an arbitrary command injection vulnerability. By convincing a user to visit a specially crafted web site, a remote unauthenticated attacker may execute arbitrary commands with root privileges on affected routers. A LAN-based attacker may do the same by issuing a direct request, e.g. by visiting: https://<router_IP>/cgi-bin/;COMMAND,” the advisory says.
“This vulnerability has been confirmed in the R7000 and R6400 models. Community reports also indicate the R8000, firmware version 1.0.3.4_1.1.2, is vulnerable. Other models may also be affected.”
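The advisory’s example URL embeds the shell command directly in the cgi-bin path after a semicolon. A defensive sketch, a heuristic of my own rather than anything from Netgear’s fix, can flag requests shaped that way:

```python
from urllib.parse import urlsplit, unquote

def path_has_command_injection(url: str) -> bool:
    """Flag the pattern from the CERT/CC advisory: a command smuggled into the
    cgi-bin path after a ';'. (Illustrative heuristic, not Netgear's patch.)
    urlsplit is used instead of urlparse so the ';' stays part of the path."""
    path = unquote(urlsplit(url).path)
    return "/cgi-bin/" in path and ";" in path.split("/cgi-bin/", 1)[1]

# The router address and commands here are hypothetical examples.
assert path_has_command_injection("http://192.168.1.1/cgi-bin/;reboot")
assert not path_has_command_injection("http://192.168.1.1/cgi-bin/status")
```

The simplicity of the pattern is why the advisory calls exploitation trivial: a single click on a crafted link, with no authentication, is enough.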
There’s no patch for the vulnerability, and the CERT/CC advisory says users who are running vulnerable versions of the firmware should disable the web server or stop using the router until a patch is released.
“Exploiting the vulnerability is trivial. Users who have the option of doing so should strongly consider discontinuing use of affected devices until a fix is made available,” the advisory says.
Netgear said it is aware of the issue and is investigating the vulnerability. The affected routers are designed for home use.
Google has spent a lot of time and money on security over the last few years, developing new technologies and systems to protect users’ devices. One of the newer technologies the company has come up with is designed to provide security for users themselves rather than their laptops or phones.
On Monday Google launched a new app for Android called Trusted Contacts that allows users to share their locations and some limited other information with a set of close friends and family members. The system is a two-way street, so a user can actively share her location with her Trusted Contacts, and stop sharing it at her discretion. But, when a problem or potential emergency comes up, one of those contacts can request to get that user’s location to see where she is at any moment. The app is designed to give users a way to reassure contacts that they’re safe, or request help if there’s something wrong.
“Once you install the Android app, you can assign “trusted” status to your closest friends and family. Your trusted contacts will be able to see your activity status — whether you’ve moved around recently and are online — to quickly know if you’re OK. If you find yourself in a situation where you feel unsafe, you can share your actual location with your trusted contacts,” Minh T. Nguyen, a software engineer at Google, said.
A key feature of the Trusted Contacts app is that if a contact requests a user’s location and the user doesn’t reply in a given period of time, the app will share the location automatically.
“And if your trusted contacts are really worried about you, they can request to see your location. If everything’s fine, you can deny the request. But if you’re unable to respond within a reasonable timeframe, your location is shared automatically and your loved ones can determine the best way to help you out. Of course, you can stop sharing your location or change your trusted contacts whenever you want,” Nguyen said.
The new app, which is only available for Android right now but may be released for iOS later, also conveys some data other than the user’s location, including battery level. Although there isn’t an iOS app yet, users can track a Trusted Contacts user by signing in to the service in a browser on an iPhone or computer. Users always have the option of removing people from the Trusted Contacts list or to stop receiving notifications about another user’s location, Google said.
Image: Phrawr, CC By 2.0 license.
Mozilla has released a patch for a critical remote code execution vulnerability in Firefox that is being used in active attacks to unmask users of the Tor Browser, which is based on Firefox.
The vulnerability lies in the way that Firefox handles SVG animations, and exploit code for the bug has been posted on a public Tor mailing list. The exploit uses JavaScript on a malicious web site to deliver a payload, which only works against Windows machines at the moment, although the vulnerability exists on Linux and macOS too.
“The exploit took advantage of a bug in Firefox to allow the attacker to execute arbitrary code on the targeted system by having the victim load a web page containing malicious JavaScript and SVG code. It used this capability to collect the IP and MAC address of the targeted system and report them back to a central server,” Daniel Veditz of Mozilla said.
Veditz said Mozilla first got word of the vulnerability on Tuesday morning, a few hours before details of the bug and exploit code were posted on the Tor mailing list. Mozilla released the patch about a day later, and Tor also has released an update for the Tor Browser to address the issue.
“The security flaw responsible for this urgent release is already actively exploited on Windows systems. Even though there is currently, to the best of our knowledge, no similar exploit for OS X or Linux users available the underlying bug affects those platforms as well. Thus we strongly recommend that all users apply the update to their Tor Browser immediately. A restart is required for it to take effect,” the Tor Project said in the release notes for the new version of the browser.
Several security researchers have said that the exploit seen in the wild for this vulnerability is nearly identical to one known to have been used by the FBI in an investigation of a child exploitation site. Mozilla’s Veditz said there’s no direct confirmation that the exploits are the same.
“As of now, we do not know whether this is the case. If this exploit was in fact developed and deployed by a government agency, the fact that it has been published and can now be used by anyone to attack Firefox users is a clear demonstration of how supposedly limited government hacking can become a threat to the broader Web,” he said.
Image: Akamdar, CC By 2.0 license.
A group of academic security researchers has reviewed the security of the Signal protocol, which is used in the Signal encrypted messaging app as well as in many third-party apps, and found that it is both secure and resistant to attack.
The review, conducted by researchers from universities in the U.K., Canada, and Australia, looked at the cryptographic underpinnings of Signal and found no serious security problems, pronouncing the protocol sound and resilient even in the face of compromise. Signal, developed by Open Whisper Systems several years ago, is designed to provide encrypted messaging, and it is used in many high-profile apps, including WhatsApp, Facebook Messenger, and Google Allo.
The researchers from the University of Oxford, Queensland University, and McMaster University took an in-depth look at the intricacies of the Signal protocol, its cryptographic foundation, and the ways in which it is implemented. They came away generally impressed with what they found.
“First, our analysis shows that the cryptographic core of Signal provides useful security properties. These properties, while complex, are encoded in our security model, and which we prove that Signal satisfies under standard cryptographic assumptions. Practically speaking, they imply secrecy and authentication of the message keys which Signal derives, even under a variety of adversarial compromise scenarios such as forward security (and thus ‘future secrecy’). If used correctly, Signal could achieve a form of post-compromise security, which has substantial advantages over forward secrecy,” the researchers say in their paper, “A Formal Security Analysis of the Signal Messaging Protocol”.
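The forward-secrecy property the researchers highlight can be illustrated with a minimal symmetric hash ratchet, in the spirit of Signal's chain-key derivation. This is a toy sketch, not Signal's actual KDF (which is built on HKDF and a Diffie-Hellman ratchet).

```python
import hashlib

# A minimal sketch of the forward-secrecy idea: each message key is
# derived from a chain key, and the chain key is ratcheted forward
# with a one-way hash after every message.
def kdf_chain(chain_key):
    """Derive the next chain key and a one-time message key."""
    next_ck = hashlib.sha256(chain_key + b"\x01").digest()
    msg_key = hashlib.sha256(chain_key + b"\x02").digest()
    return next_ck, msg_key

ck = hashlib.sha256(b"shared-root-secret").digest()  # assumed initial key
keys = []
for _ in range(3):
    ck, mk = kdf_chain(ck)   # the previous chain key is then discarded
    keys.append(mk)

# Forward secrecy: an attacker who captures the *current* chain key
# cannot run SHA-256 backwards to recover earlier message keys.
print(len(set(keys)))  # 3: each step yields a distinct message key
```

The deletion of old chain keys is what makes the property real in practice, which is why the researchers flag effective deletion on flash storage as the hard part.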
This audit is the first full-scale public investigation of the security of Signal, a protocol that many cryptographers and security experts have praised. The researchers conducted the assessment of Signal’s security using the assumption that the network the device is using is hostile and controlled by an adversary. They found Signal’s approach to the protection of keys to be well done.
“Signal’s mechanisms suggest a lot of effort has been invested to protect against the loss of secrets used in specific communications. If the corresponding threat model is an attacker gaining (temporary) access to the device, it becomes crucial if certain previous secrets and decrypted messages can be accessed by the attacker or not: generating new message keys is of no use if the old ones are still recoverable. This, in turn, depends on whether deletion of messages and previous secrets has been effective. This is known to be a hard problem, especially on flash-based storage media [46], which are commonly used on mobile phones,” the paper says.
The team also said that there are some areas in which Signal could improve its security.
“One can imagine strengthening the protocol further. For example, if the random number generator becomes fully predictable, it may be possible to compromise communications with future peers. We have pointed out to the developers that this can be solved at negligible cost by using constructions in the spirit of the NAXOS protocol or including a static-static DH shared secret in the key derivation,” they say.
The researchers who exposed the ways in which ultrasonic signals can be used to track users across devices have released a patch for Android that helps users protect themselves against the silent tracking.
The patch is designed to give users more control over which apps on their devices have access to the ultrasonic spectrum, which is what the tracking systems need. Once installed, the patch allows users to grant or deny access to that spectrum on an app-by-app basis. Patching Android is a simple step toward addressing a highly complex problem that involves advertisers, technology providers, regulators, and users.
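The per-app gating idea can be sketched as a simple policy check. This is an illustration only; the class, app names, and frequency band below are assumptions, not the patch's actual API.

```python
# Hypothetical sketch of the per-app gate the patch adds: before an app
# may read audio in the near-ultrasonic band, the OS consults a
# user-controlled allow list. All names here are illustrative.
ULTRASOUND_BAND_HZ = (18_000, 20_000)  # assumed tracking-beacon band

class UltrasoundPolicy:
    def __init__(self):
        self.allowed = set()

    def grant(self, app):
        self.allowed.add(app)

    def revoke(self, app):
        self.allowed.discard(app)

    def may_capture(self, app, freq_hz):
        """Audible-range audio is unaffected; ultrasonic is per-app."""
        lo, hi = ULTRASOUND_BAND_HZ
        if lo <= freq_hz <= hi:
            return app in self.allowed
        return True

policy = UltrasoundPolicy()
policy.grant("com.example.sonarpay")
print(policy.may_capture("com.example.sonarpay", 19_000))  # True
print(policy.may_capture("com.example.adsdk", 19_000))     # False
print(policy.may_capture("com.example.adsdk", 1_000))      # True
```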
Ultrasonic tracking is a relatively unknown issue, especially among users. A handful of technology companies have developed systems that can use code embedded in mobile apps to receive and interpret inaudible signals emitted by ads on TV. The system is designed to allow marketers to pair users with their various devices and gather data on their activities and, therefore, serve them more accurate ads.
A team of researchers from the University of California at Santa Barbara and University College London last week presented new research that shows the extent of the tracking and how attackers could exploit a tracking framework in order to essentially poison a user’s profile built by an advertiser.
“These profiles are built based on a variety of factors often including the ads that the user has previously seen. Given that the attacker can push beacons to the victim’s device, it can consequently influence the profile corresponding to the user. The degree that the attacker can ‘corrupt’ this profile and what he can do with it, depends on how each company has implemented this mechanism,” Vasilis Mavroudis, a PhD student at UCL and one of the researchers involved in the work, said.
The research team on Monday released the Android patch to the Android Open Source Project and it can be downloaded from the team’s site. However, individual users have to rely on their carriers to include the patch in their Android distributions and then send them out in updates. The researchers also are planning to release a browser extension that will prevent browsers from sending out the ultrasonic tracking signals. But they also say that there are policy level decisions that need to be made on this kind of tracking.
“Decision and policy makers should agree on what’s the next step in terms of regulations and standardization, OS vendors and developers should integrate support for ultrasound beacons to provide a transparent API (e.g., like for other physical and data layers such as Bluetooth), and finally developers should adopt such API,” the researchers said.
Researchers have discovered a pair of serious vulnerabilities in several ICS products made by Schneider Electric that can allow an attacker to freeze the control panel of vulnerable devices and force them to disconnect from a SCADA network.
The vulnerabilities affect seven different Magelis products from Schneider, which are used for remote management and monitoring of ICS devices over the web. Researchers at Critifence discovered the vulnerabilities and reported them to Schneider, but there are no patches available at the moment. Schneider said that some of the products will have software upgrades available in March.
The two flaws that Critifence discovered are classified as denial-of-service conditions, but because they can disrupt SCADA and ICS network functionality, they’re considered serious.
“The timeout value for closing an HTTP client’s requests in the Web Gate service is too long and allows a malicious attacker to open multiple connections to the targeted web server and keep them open for as long as possible by continuously sending partial HTTP requests, none of which are ever completed. The attacked server opens more and more connections, waiting for each of the attack requests to be completed, which enables a single computer to take down the Web Gate Server,” the advisory from Critifence says.
The second vulnerability is similar, but is caused by a different kind of HTTP request.
“The timeout value between chunks for closing an HTTP chunked encoding connection in the Web Gate service is too long and allows a malicious attacker to keep the connection open by exploiting the maximum possible interval between chunks and by using the Content-Length header and buffer the whole result set before calculating the total content size, which keeps the connection alive and enables a single computer to take down the Web Gate Server,” the advisory says.
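Both flaws come down to overly generous timeouts while waiting for request data. The standard defensive pattern, sketched below, is a hard per-connection deadline for receiving a complete request plus a size cap; this is an illustration of that mitigation, not Schneider's code, and the constants are assumptions.

```python
# Defensive sketch: fail fast on slow or oversized requests. Timestamps
# are passed in explicitly to keep the sketch self-contained.
REQUEST_DEADLINE = 10.0    # assumed seconds allowed for full headers
MAX_HEADER_BYTES = 8192    # assumed cap on accumulated header bytes

def read_request(buf, chunk, start_time, now):
    """Accumulate request bytes, enforcing deadline and size limits.

    Returns (new_buffer, done); done is True once the header
    terminator (blank line) has arrived.
    """
    if now - start_time > REQUEST_DEADLINE:
        raise TimeoutError("slow request: closing connection")
    buf += chunk
    if len(buf) > MAX_HEADER_BYTES:
        raise ValueError("oversized headers: closing connection")
    return buf, b"\r\n\r\n" in buf

buf, done = read_request(b"", b"GET / HTTP/1.1\r\n", start_time=0.0, now=1.0)
buf, done = read_request(buf, b"Host: hmi.local\r\n\r\n", start_time=0.0, now=2.0)
print(done)  # True: complete request arrived within the deadline
```

A client that trickles partial requests or spaces out chunks indefinitely hits the deadline and is disconnected, which is exactly what the vulnerable Web Gate service fails to do.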
The researchers have named the bugs PanelShock, and Schneider said in its advisory that it is working on updates to address the vulnerabilities. One key mitigation for the bugs is that the Web Gate Server, which needs to be enabled for the attack to succeed, is disabled by default.
“The use cases identified demonstrate the ability to generate a freeze condition on the HMI, that can lead to a denial of service due to incomplete error management of HTTP requests in the Web Gate Server. While under attack via a malicious HTTP request, the HMI may be rendered unable to manage communications due to high resource consumption. This can lead to a loss of communications with devices such as Programmable Logic Controllers (PLCs), and require reboot of the HMI in order to recover,” the Schneider advisory says.
Image: Seth Stoll, CC By-Sa 2.0 license.
Researchers have known for a long time that acoustic signals from keyboards can be intercepted and used to spy on users, but those attacks rely on grabbing the electronic emanation from the keyboard. New research from the University of California Irvine shows that an attacker, who has not compromised a target’s PC, can record the acoustic emanations of a victim’s keystrokes and later reconstruct the text of what he typed, simply by listening over a VoIP connection.
The researchers found that when connected to a target user on a Skype call, they could record the audio of the user’s keystrokes. With a small amount of knowledge about the victim’s typing style and the keyboard he’s using, the researchers could accurately get 91.7 percent of keystrokes. The attack does not require any malware on the victim’s machine and simply takes advantage of the way that VoIP software acquires acoustic emanations from the machine it’s on.
“Skype is used by a huge number of people worldwide,” said Gene Tsudik, Chancellor’s Professor of computer science at UCI, and one of the authors of the new paper. “We have shown that during a Skype video or audio conference, your keystrokes are subject to recording and analysis by your call partners. They can learn exactly what you type, including confidential information such as passwords and other very personal stuff.”
As the researchers point out, a lot of people who are on Skype calls do other things while they’re connected. They send emails, chat messages, or take notes, and the keystrokes produce sounds that are transmitted to the other parties on the call. While many people use Skype and other VoIP apps to talk to friends around the world, these apps also are used for business meetings and the parties on the calls may not always be friends. If an attacker has a bit of knowledge about what kind of computer the target is using, he would be well on his way.
“It’s possible to build a profile of the acoustic emanation generated by each key on a given keyboard,” Tsudik said. “For example, the T on a MacBook Pro ‘sounds’ different from the same letter on another manufacturer’s product. It also sounds different from the R on the same keyboard, which is right next to T.”
Even without knowing anything about the keyboard the victim is using, or his typing style, an attacker still has about a 42 percent chance of guessing which key the target is pressing. The keyboards on touch screens aren’t vulnerable to this kind of attack, which the researchers call Skype & Type.
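The per-key profiling Tsudik describes amounts to building a reference feature vector for each key and classifying an observed keystroke by similarity. The toy sketch below uses made-up feature values and a nearest-neighbor rule; the real attack uses richer acoustic features and supervised training.

```python
import math

# Toy illustration of keystroke profiling: each key gets a reference
# feature vector (e.g., energy in a few frequency bands; the numbers
# below are invented), and an observed keystroke is matched to the
# closest profile by Euclidean distance.
PROFILES = {
    "T": [0.9, 0.2, 0.1],
    "R": [0.7, 0.5, 0.1],
    "E": [0.2, 0.8, 0.6],
}

def classify(observed):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROFILES, key=lambda k: dist(PROFILES[k], observed))

print(classify([0.85, 0.25, 0.12]))  # "T"
```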
“S&T attack transpires as follows: during a VoIP call between the victim and the attacker, the former types something on target-device, e.g., a password, that we refer to as target-text. Typing target-text causes acoustic emanations from target-device’s keyboard, which are then picked up by the target-device’s microphone and transmitted to the attacker by VoIP. The goal of the attacker is to learn the target-text by taking advantage of these emanations,” the paper, entitled “Don’t Skype & Type! Acoustic Eavesdropping in Voice-Over-IP”, says.
One potential use for this attack would be to record a user typing a password into a given site or application. The researchers say that their attack could greatly reduce the amount of effort an attacker would need to exert in order to get a victim’s password versus a typical brute-force attack. There are some countermeasures to the researchers’ new attack, including adding extra noise to the channel as the user types.
“A simple countermeasure to our attack could be a short ‘ducking’ effect, a technique where we greatly reduce the volume of the microphone and overlap it with a different sound, when a keystroke is detected. However, this could ruin the quality of the voice call, as the voice is removed in its entirety as well. An effective countermeasure should be less intrusive as possible, and disrupt only the sound of the keystrokes, avoiding to ruin the call of the user,” the paper says.
Researchers looking into the Mirai botnet that has been used in two massive DDoS attacks in the last couple of weeks have discovered that many of the compromised IoT devices in the botnet include components from one Chinese manufacturer and have hardcoded credentials that can’t be changed.
The Mirai botnet is made up of a variety of IoT devices such as surveillance cameras and DVRs that have been compromised via Telnet. The malware that’s used in the botnet infects new devices by connecting to them over Telnet with default credentials and then installing itself on the device. Mirai has been used to attack journalist Brian Krebs’s site and also to hit hosting provider OVH. The two attacks were among the largest DDoS attacks ever seen in terms of traffic volume, with the OVH attack being in the range of 1 Tbps. The botnet has been operating for some time, but it has received a lot of attention after the two huge attacks and the subsequent release of the Mirai source code.
Now, researchers at Flashpoint have found that a large percentage of the devices in the Mirai botnet contain components manufactured by XiongMai Technologies, a Chinese company that sells products to many DVR and IP camera makers. The devices that use these components have a default username and password and attackers can log into them remotely.
“The issue with these particular devices is that a user cannot feasibly change this password. The password is hardcoded into the firmware, and the tools necessary to disable it are not present. Even worse, the web interface is not aware that these credentials even exist. Further exacerbating the issue, the Telnet service is also hardcoded into /etc/init.d/rcS (the primary service startup script), which is not easy to edit,” Zach Wikholm of Flashpoint wrote in a report on the company’s findings.
There’s also a separate vulnerability that allows attackers to bypass the web authentication mechanism that devices running XiongMai’s CMS or NetSurveillance software use.
“The login URL for the device, https://<IP_address_of_device>/Login.htm, prompts for a username and password. Once the user logs in, the URL does not change but instead loads a second page: DVR.htm. While researching CVE-2016-1000245, Flashpoint identified a vulnerability that the web authentication can be bypassed by navigating to DVR.htm prior to login. This vulnerability has been assigned CVE-2016-1000246. It should be noted, both vulnerabilities appear in the same devices. Any DVR, NVR or Camera running the web software ‘uc-httpd’, especially version 1.0.0 is potentially vulnerable. Out of those, any that have the ‘Expires: 0’ field in their server header are vulnerable to both,” Wikholm said.
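The fingerprinting indicators Wikholm lists can be turned into a simple header check. The sketch below is illustrative only; the function and its header parsing are assumptions built from the indicators in the quote, not Flashpoint's tooling.

```python
# Illustrative check based on the indicators described above: devices
# serving 'uc-httpd' (especially 1.0.0) are potentially vulnerable,
# and an 'Expires: 0' response header marks devices exposed to both
# CVE-2016-1000245 and CVE-2016-1000246.
def assess(headers):
    server = headers.get("Server", "")
    if "uc-httpd" not in server:
        return "not affected"
    if headers.get("Expires") == "0":
        return "both CVE-2016-1000245 and CVE-2016-1000246"
    return "potentially vulnerable (uc-httpd)"

print(assess({"Server": "uc-httpd 1.0.0", "Expires": "0"}))
print(assess({"Server": "nginx/1.10.0"}))  # not affected
```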
The researchers found 515,000 devices online that have both vulnerabilities.
The Department of Justice has charged two teenagers in connection with a scheme that involved hacking-for-hire activities as well as a service that would make repeated harassing phone calls to victims for a price.
The charges are related to an investigation into the Lizard Squad hacking group, which has been tied to a number of DDoS attacks. The two men charged on Wednesday in Chicago are Zachary Buchta of Maryland and Bradley Jan Willem van Rooy of Leiden, Netherlands, both of whom are 19 years old. The pair are alleged to be involved with the operation of a site called phonebomber.net, through which customers could pay $20 to have victims harassed with repeated phone calls.
“Lizard Squad initially drew the attention of U.S. authorities during an investigation into phonebomber.net, a website that enabled paying customers to select victims to receive repeated harassing phone calls from spoofed numbers, according to the complaint. One of the victims, who resided in Illinois, last fall received a phone call every hour for thirty days,” the Department of Justice said in a statement.
That service was just the beginning, though. The complaint against Buchta and van Rooy alleges that the pair also helped run sites that enabled users to launch DDoS attacks against any target they chose. The sites were used to run thousands of DDoS attacks, and customers also could buy stolen payment card data through the sites, according to the Justice Department’s complaint.
“Zachary Buchta, Bradley Jan Willem van Rooy, Individual A, Individual B, and others have conspired to launch destructive cyber attacks against companies and individuals around the world. They have done so first by promoting and operating the websites ‘shenron.lizardsquad.org’ (Subject Domain 1) and ‘stresser.ru’, through which they provided a cyber-attack-for-hire service and trafficked stolen payment card account information for thousands of victims,” the complaint says.
“Soon after the launch of phonebomber.net, members of Lizard Squad began denial-of-service attacks against various victims and boasted about their attacks on Twitter. In particular, during November and December 2015, the Twitter accounts @LizardLands, @fbiarelosers (i.e., Buchta), and @chippyshell (i.e., Individual A) were used to coordinate and announce a denial-of-service attack committed against Victim A, an international digital media company.”
Authorities arrested Buchta in Maryland in September, and van Rooy was arrested in the Netherlands around the same time and is still in custody there.
Apple seems to have made a curious security choice in iOS 10, one that enables attackers to brute force the password for a user’s local backup 2,500 times faster than was possible on iOS 9.
Researchers at Elcomsoft, a Russian security company, discovered the issue, which is related to the choice of hashing algorithm in iOS 10. In the newest version of the iPhone operating system, Apple uses SHA256 to hash the password for the user’s local backup, which is stored on a computer paired with the phone. In previous versions, Apple used PBKDF2 for this job and ran the password through the algorithm 10,000 times, making password cracking quite difficult.
But iOS 10 uses just one iteration of SHA256 to hash the local backup password, something that the Elcomsoft researchers said made brute-forcing the password far easier. They found that using just a CPU rather than an optimized GPU implementation, they could try as many as six million passwords per second in iOS 10. By comparison, the same setup could try just 2,400 passwords per second against iOS 9. Elcomsoft has a custom tool it sells for this task.
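The per-guess cost gap can be demonstrated with Python's hashlib. The single-SHA-256 construction below is an assumed stand-in for the iOS 10 scheme (the exact input format isn't public); the point is the iteration count, not the precise formula.

```python
import hashlib
import time

# Rough illustration of why iteration count matters for brute force.
password = b"correct horse battery staple"
salt = b"0123456789abcdef"

t0 = time.perf_counter()
for _ in range(100):
    hashlib.pbkdf2_hmac("sha256", password, salt, 10_000)  # iOS 9 style
pbkdf2_time = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(100):
    hashlib.sha256(salt + password).digest()               # iOS 10 style
sha_time = time.perf_counter() - t0

# Each PBKDF2 guess costs ~10,000 HMAC rounds, so its per-guess cost
# is thousands of times higher; that gap is what slows brute force.
print(pbkdf2_time > sha_time)  # True
```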
“When working on an iOS 10 update for Elcomsoft Phone Breaker, we discovered an alternative password verification mechanism added to iOS 10 backups. We looked into it, and found out that the new mechanism skips certain security checks, allowing us to try passwords approximately 2500 times faster compared to the old mechanism used in iOS 9 and older,” Oleg Afonin of Elcomsoft said in a post on the issue.
“This new vector of attack is specific to password-protected local backups produced by iOS 10 devices. The attack itself is only available for iOS 10 backups. Interestingly, the ‘new’ password verification method exists in parallel with the ‘old’ method, which continues to work with the same slow speeds as before.”
The key limitation for this attack, as Afonin said, is that it requires access to the local backup of a target iPhone. Many, if not most, iPhone users eschew local backups on their computers in favor of iCloud storage, and an attacker would need either physical access to the local backup or to compromise the machine in some other way in order to execute this attack.
Per Thorsheim, a security advisor who runs the PasswordsCon conference, said the attack on iOS 10 works and brings up some questions about why Apple made the choice it did.
“The implementation works. Apple has taken us through many betas of iOS 10, so it is easy to say that this didn’t happen by pure error,” Thorsheim wrote in an analysis of the password issue.
Afonin said that once an attacker has cracked the victim’s local backup password, he would have access to the most sensitive data on the device, including the keychain, which is the protected storage built into iOS.
“If you are able to break the password, you’ll be able to decrypt the entire content of the backup including the keychain,” Afonin said.
The word botnet usually conjures images of hordes of compromised PCs being used for DDoS attacks or malware operations, but researchers in the Czech Republic have discovered a large network of compromised CCTV cameras, routers, and other embedded devices that’s growing by tens of thousands of devices per day.
Since the end of May, researchers at CZ.NIC, an association of ISPs in the Czech Republic that also operates the Czech CSIRT, have been seeing a huge increase in the number of attacks on its Telnet honeypot, as well as the number of unique IP addresses conducting the attacks. After looking at the data and conducting some analysis on the type of devices that are connecting to the honeypot, the researchers found that a large percentage of the devices hitting the Telnet honeypot are embedded devices that appear to have been compromised.
“These devices often run outdated software which are known to have security holes and an attacker with such knowledge can easily compromise a large number of hosts by a single exploit,” Bedřich Košata of CZ.NIC said in an analysis of the honeypot data.
Using Shodan, the researchers looked at more than 1.8 million unique IP addresses that had hit the honeypot to determine the kind of devices they were and some other information. They discovered that many of the devices were running older software that was known to have security issues. Among the devices connecting to the Telnet honeypot were IP-enabled security cameras and home routers. The volume of activity from the most commonly seen devices began to increase quickly beginning in May.
“In first place we find the RomPager/4.07 HTTP server, which is an old version of an HTTP server used in many home routers and other embedded devices known for having serious security vulnerabilities in the past. In second place was gSOAP/2.7, which is an older version of a popular toolkit for web services used, again, often in embedded devices. H264DVR 1.0 is an identifier for a RTSP (Real Time Streaming Protocol) server used in online DVR products, such as security cameras, etc.,” Košata said.
“From this we can conclude that the rise of Telnet activity is driven by attacks from compromised embedded devices. We could speculate that an attacker was able to target these devices using some known vulnerability and after taking them over, uses them to spread the botnet even further. What is even worse than the number of attacking devices is the trend.”
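The banner-tallying step Košata describes can be sketched with a counter over Server headers gathered for attacking IPs. The sample data and the outdated-version list below are illustrative, drawn from the banners named in the quote.

```python
from collections import Counter

# Sketch of the classification step: count server banners seen from
# attacking IPs and flag ones on a known-outdated list.
KNOWN_OUTDATED = {"RomPager/4.07", "gSOAP/2.7", "H264DVR 1.0"}

banners = [
    "RomPager/4.07", "RomPager/4.07", "gSOAP/2.7",
    "H264DVR 1.0", "nginx/1.10.0", "RomPager/4.07",
]

counts = Counter(banners)
flagged = {b: n for b, n in counts.most_common() if b in KNOWN_OUTDATED}
print(flagged)  # {'RomPager/4.07': 3, 'gSOAP/2.7': 1, 'H264DVR 1.0': 1}
```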
Embedded devices that are exposed to the Internet often are easy prey for attackers. Many of these devices, including home routers, security cameras, smart TVs, and others, are rarely, if ever, updated. So when a security vulnerability is discovered in the firmware of a given device, attackers can use that information to go after those devices.
“These devices form an easy target as there is usually a “monoculture” of these devices, all having the same setup and same vulnerabilities. It is very likely that an adversary is specifically targeting some of these devices to form a botnet. It even seems that in some cases, a large proportion of online devices of a specific type are already taken over,” Košata said.
“In the course of our investigation, we were able to obtain one “infected” CCTV camera. We were not able to find any obvious malware in its firmware and thus we conclude that the attacks are probably performed remotely without permanent changes to the firmware.”
CZ.NIC has set up a tool that allows people to check the IP addresses of their own devices against the list of devices that hit their honeypot.
The scope of a compromise of Dropbox four years ago that the company initially said only involved customer email addresses being stolen has now expanded, with more than 68 million user passwords dumped online.
The cache comprises passwords that are hashed with either SHA-1 or bcrypt and none of them are in plaintext. When Dropbox first disclosed the breach in 2012, company officials said that the attackers had taken users’ email addresses and some users were receiving spam on those accounts. The compromise was the result of a Dropbox employee reusing an internal password.
“A stolen password was also used to access an employee Dropbox account containing a project document with user email addresses. We believe this improper access is what led to the spam. We’re sorry about this, and have put additional controls in place to help make sure it doesn’t happen again,” the company said at the time.
But now, Dropbox is forcing all of its users who haven’t changed their passwords since mid-2012 to reset them. The company hasn’t provided any further details on why it didn’t detect the theft of the passwords in 2012 or how the passwords were taken.
“Our security teams are always watching out for new threats to our users. As part of these ongoing efforts, we learned about an old set of Dropbox user credentials (email addresses plus hashed and salted passwords) that we believe was obtained in 2012. Our analysis suggests that the credentials relate to an incident we disclosed around that time,” Patrick Heim of Dropbox said of the password dump.
Researchers who have analyzed the Dropbox password files have confirmed that they’re authentic Dropbox credentials. Troy Hunt, who maintains the Have I Been Pwned archive, said half of the passwords were hashed with SHA-1 and half with bcrypt, a much stronger algorithm. He checked a known password that his wife used for Dropbox against a hash he found in the credential dump, and they matched.
Hunt said the way Dropbox handled the passwords makes the credential dump less of a threat to many users.
“As for Dropbox, they seem to have handled this really well. They communicated to all impacted parties via email, my wife did indeed get forced to set a new password on logon and frankly even if she hadn’t, that password was never going to be cracked. Not only was the password itself solid, but the bcrypt hashing algorithm protecting it is very resilient to cracking and frankly, all but the worst possible password choices are going to remain secure even with the breach now out in the public,” Hunt said.
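The two hash formats in the dump are easy to tell apart by shape: SHA-1 digests are 40 hexadecimal characters, while bcrypt strings carry a `$2a$`/`$2b$` prefix followed by a cost factor and salt. A minimal sketch (the example hashes below are illustrative, not from the dump):

```python
import re

def hash_scheme(h):
    """Classify a stored password hash by its format."""
    if re.fullmatch(r"[0-9a-fA-F]{40}", h):
        return "SHA-1"
    if re.match(r"\$2[aby]\$\d{2}\$", h):
        return "bcrypt"
    return "unknown"

print(hash_scheme("5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8"))  # SHA-1
print(hash_scheme("$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy"))  # bcrypt
```

The bcrypt cost factor embedded in the string (10 in the example) is what makes each cracking guess expensive, which is why Hunt considered the bcrypt-hashed half of the dump largely safe.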
A new family of powerful ATM malware is being used in heists around the world, using known techniques, but also employing a card with a malicious EMV chip that allows the thief to control the malware on the machine.
The malware is known as Ripper and researchers have connected it to thefts at ATMs in a variety of countries, including a huge heist in Thailand earlier this summer. Ripper has a number of functions and capabilities, including the ability to count the number of bills in the machine, disable the network interface, and erase logs and other forensic evidence on the ATM. Researchers at FireEye, who have analyzed the malware, say some of the techniques have not been seen before, or are quite uncommon.
ATM malware comes in a number of different forms, and often is delivered to the machines through a USB drive or other portable media. Once on the machine, the malware’s main job is to dispense as much money to the thief as possible in a short period of time. Ripper accomplishes this in two ways, either as a standalone service or as a legitimate process on the ATM.
“Upon execution, RIPPER will kill the processes running in memory for the three targeted ATM Vendors via the native Windows ‘taskkill’ tool. RIPPER will examine the contents of directories associated with the targeted ATM vendors and will replace legitimate executables with itself. This technique allows the malware to maintain the legitimate program name to avoid suspicion,” Daniel Regalado of FireEye wrote in an analysis of the Ripper malware.
“RIPPER will maintain persistence by adding itself to the RunFwLoadPm registry key (that might already exist as part of the vendor installation), passing the “/autorun” parameter that is understood by the malware.”
In order to control the malware on an infected ATM, the thief has to insert a card with a malicious EMV chip into the machine. The Ripper malware will validate the card, and then will wait for instructions from the keypad on the machine. The thief has a variety of commands at his disposal, such as cleaning logs, hiding the malware’s GUI, and shutting down the network interface of the ATM, which prevents it from communicating with the remote bank.
One attack that’s been linked to Ripper is a series of thefts in Thailand that netted thieves about $350,000 earlier this month. That operation hit more than 20 ATMs.
“This malware family can be used to compromise multiple vendor platforms and leverages uncommon technology to access physical devices. In addition to requiring technical sophistication, attacks such as that affecting the ATMs in Thailand require coordination of both the virtual and the physical. This speaks to the formidable nature of the thieves,” Regalado said.
Malware infected the point-of-sale systems in all of Eddie Bauer’s stores in the United States and Canada for more than six months this year, stealing payment card data at the company’s 350 stores.
The attack affects an untold number of customers who shopped in the stores between January and mid-July of 2016, but the company said customers who shopped online are not affected. Eddie Bauer officials said the company is working with the FBI on the breach investigation.
“The security of our customers’ information is a top priority for Eddie Bauer,” said Mike Egeck, Chief Executive Officer of Eddie Bauer, in a statement. “We have been working closely with the FBI, cyber security experts, and payment card organizations, and want to assure our customers that we have fully identified and contained the incident and that no customers will be responsible for any fraudulent charges to their accounts. In addition, we’ve taken steps to strengthen the security of our point of sale systems to prevent this from happening in the future.”
The breach at Eddie Bauer is the latest in a string of very similar incidents at restaurants, hotels, and other retail and hospitality chains. Earlier this week, hotel operator HEI admitted that 20 of the hotels it runs around the U.S. were hit by PoS malware over the course of about 15 months, starting in March 2015. The attack affected Marriott, Sheraton, and other hotels that HEI runs.
“We are treating this matter as a top priority, and took steps to address and contain this incident promptly after it was discovered, including engaging outside data forensic experts to assist us in investigating and remediating the situation and promptly transitioning payment card processing to a stand-alone system that is completely separated from the rest of our network. In addition, we have disabled the malware and are in the process of reconfiguring various components of our network and payment systems to enhance the security of these systems,” HEI said in a notice to customers.
Eddie Bauer officials said that they believe the attack on the company’s stores was part of a “sophisticated attack” that targeted hotels, restaurants, and retailers. Hackers have been going after PoS and payment card systems for several years, particularly those at retailers and hotels that see a high volume of transactions. The malware used in these attacks typically is designed to capture card data on the terminal before it is encrypted and sent to the back end system.
LAS VEGAS–Mobile payments services have become a popular choice for consumers, but security researchers have been finding plenty of vulnerabilities in them, and Venmo is the latest one to take a hit.
A researcher was able to uncover a number of weaknesses in the Venmo mobile payment system recently, some of which enabled him to steal money from users, regardless of whether their devices were locked or open. The vulnerabilities have to do with the way that the system handles SMS notifications, and, combined with Siri commands and other methods, the flaws allow an attacker to force a victim to make a payment through the Venmo app.
Venmo is a service owned by PayPal, and it allows users to send money to one another and also to make payments to outside services. One of the app’s features is that it allows one user to “charge” other users for something, which results in an SMS notification being sent to the person who was charged. When that occurs, the recipient can reply to the SMS with a six-digit code that was sent in the original message, which completes the payment.
Security researcher Martin Vigo, who uses Venmo, noticed the SMS notifications for charges and thought about the fact that he didn’t have to authenticate to the service before replying to the message authorizing the payment. So he began looking at the way that the app handled notifications and how he might be able to mess with that process through Siri.
“I remembered that you can use Siri to send SMS when your device is locked. It is worth noting that this feature is on by default and became especially popular when the ‘Hey Siri’ feature was added in iOS 9,” Vigo wrote in a post explaining the bugs.
“Now that we know we can send SMS on locked devices, we need the code present in the SMS in order to reply and make the payment. Apple introduced the ‘Text Message Preview’ which allows you to see in the lock screen who sent you a text and part of the content. This is also on by default. If we combine these two, I am able to see the SMS with the code and can reply using Siri. All this without unlocking the device. All this out of the box.”
The SMS notification is not enabled by default in Venmo, Vigo said, so he tried to find a method to turn it on. It didn’t take long before he noticed that each SMS response from Venmo included a line that told him to text the command “STOP” to disable notifications. If that worked, why not try sending the command “START” to turn them on?
“You can activate the SMS notification service by sending an SMS to 86753 with the word ‘Start’. 86753 is a short code number owned by Venmo and used for all the SMS notifications. Now, I am able to activate Venmo’s SMS notification service, ask Siri to tell me the secret code and reply to make the payment. All that without unlocking the device!” he wrote.
In an email, Vigo said users might notice an email from Venmo about a payment, but by then the attack has already succeeded.
“When it comes to the Siri attack, the victim will usually receive an email that a payment was made. By then is already to late though,” he wrote.
The attack that Vigo devised isn’t entirely reliant on issues with Venmo’s app. Some of the problems have to do with the way that iPhones display texts and how Siri handles voice commands. An iPhone will display several lines of an incoming text message on the lock screen, which can include the short code that Venmo, or many other apps, send to users.
Vigo said that, given the payment limits Venmo had in place, an attacker could have stolen nearly $3,000 a day with his attack before it was patched. Vigo reported the flaws to Venmo in June and the company deployed fixes for them by mid-July.
Vigo also discovered a method that could possibly allow him to send the same payment request to as many as a million Venmo users at the same time.
“These attacks are theoretical and I did not try them. Venmo payments are known to be monitored and the last thing I want is someone knocking at my door asking why so many people owes me money,” Vigo said.
The methods Vigo described require physical access to the device, but he also found a way to exploit the bugs by brute-forcing the short code Venmo sends to users. He charged his own account in order to receive a short code, and then began replying with incorrect codes. Rather than canceling the payment, Venmo sent him a message saying he would have to wait to try again.
“Anyway, the point is, after 5 tries I had to wait about 5 minutes till I could try another 5 times. The codes are six digits long so we have 1 million possibilities and we can try 5 codes every 5 minutes. Do the math. Possible but not feasible,” Vigo said.
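Vigo's feasibility math is easy to check. A quick back-of-the-envelope calculation, using the rate limit he observed, shows why the brute force is possible but not practical:

```python
# Back-of-the-envelope check of Vigo's brute-force math: a six-digit code
# space, throttled to 5 guesses per 5-minute window.
codes = 10 ** 6               # six decimal digits -> 1,000,000 possibilities
guesses_per_window = 5
window_minutes = 5

windows_needed = codes / guesses_per_window       # 200,000 windows
minutes_needed = windows_needed * window_minutes  # 1,000,000 minutes
days_needed = minutes_needed / (60 * 24)

print(round(days_needed, 1))  # ~694.4 days to sweep the whole keyspace
```

Covering the full keyspace would take the better part of two years, which is why Vigo concluded the attack was possible but not feasible.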
UPDATED–Researchers have identified a serious flaw that could allow an attacker to compromise a number of different devices and networks, including telecommunications networks and mobile phones, as well as a number of other embedded devices.
The vulnerability is in a specific compiler that’s used for software in several programming languages in a number of industries, including aviation, telecom, defense, and networking. The compiler, sold by Objective Systems, is for the ASN.1 standard, and one of the code libraries in the compiler contains a heap overflow vulnerability that could allow a high-level attacker to execute arbitrary code remotely on vulnerable systems. Discovered by researcher Lucas Molas, the vulnerability could affect products from a wide range of vendors who use the compiler. Right now, only products from Qualcomm are known to be affected.
“A vulnerability found in the runtime support libraries of the ASN1C compiler for C/C++ from Objective Systems Inc. could allow an attacker to remotely execute code in software systems, including embeded software and firmware, that use code generated by the ASN1C compiler,” the advisory from Molas says.
“The vulnerability could be triggered remotely without any authentication in scenarios where the vulnerable code receives and processes ASN.1 encoded data from untrusted sources, these may include communications between mobile devices and telecommunication network infrastructure nodes, communications between nodes in a carrier’s network or across carrier boundaries, or communication between mutually untrusted endpoints in a data network.”
Objective Systems has released a new version of the ASN1C compiler for C and C++ that includes a patch for the vulnerability, but that may not completely fix the issue. The company pushed out a hot fix for the ASN1C 7.0.1.x series of the compiler, but there is no release date set for the 7.0.2 version, which will contain the full fix. Although the vulnerability is considered quite serious, Molas said in the advisory that it’s not clear how easy it would be for an attacker to exploit it.
“Due to the fact that the bugs are located in the core runtime support library, it is hard to assess its exploitability in all scenarios but it is safe to assume that it would lead attacker controlled memory corruption of either the system’s heap (if malloc is called) or in the internal memory allocator (if the number of bytes requested is below the aforementioned threshold),” Molas said.
Iván Arce, who leads the research team at Programa STIC of Fundación Sadosky in Argentina, of which Molas is a member, said that any exploitation of the vulnerability would need to be specific to a given target.
“In practice, aka the real world, an exploit would be highly dependent and custom-built for the actual target. Target here should be understood as an specific device brand, model and vulnerable software version. I use ‘software’ a generic term that includes embedded software, firmware, baseband, etc.,” Arce said by email.
“The reason for this is that the bug is in a support library used by the automatically generated code and incorporated as a component into a product’s source tree by a given vendor. The way each vendor chooses to do that and build the resulting software, the hardware on which that will run and the specifics about the (ASN.1 based) protocol that the ASN1C-generated code parses would determine exploitability.”
ASN.1 is one of the foundational standards in many networks, including telecom networks and is used in a variety of places. Arce said a skilled attacker might be able to compromise a mobile device over the air through the use of a fake base station, or compromise a base station with a mobile device, or compromise telecom network equipment with this vulnerability.
“The scenarios are not limited to telco stuff but we do not know how ASN1C is being used in other areas,” Arce said.
Objective Systems has a broad customer list, which includes tech giants such as Cisco and Qualcomm, as well as a number of federal agencies, such as the Federal Aviation Administration and the FBI.
A Qualcomm spokesman said the company is working on a fix, although it doesn’t believe the bug is exploitable.
“The vulnerability is an integer overflow that can cause buffer overflow. However due to the ASN.1 PER encoding rule specified in the cellular standards and implemented in our products, we believe the vulnerability is not exploitable. This is because in order to exploit it, an attacker needs to send a large value in a specially crafted network signaling message; but the encoding rule specified in the 3G/4G Standards and in our products does not allow such a large value to get through,” the spokesman said.
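The bug class Qualcomm describes, an integer overflow that produces an undersized allocation and then a heap overflow, can be pictured with a toy model. This is not the ASN1C code; the field names, item size, and header size below are invented for illustration:

```python
# Toy model of an integer-overflow-to-heap-overflow: if a size computation
# runs in 32-bit arithmetic with no range check, a large attacker-supplied
# length field can wrap, producing a tiny allocation that is then overrun
# when the real data is copied in.
MASK32 = 0xFFFFFFFF  # simulate 32-bit unsigned arithmetic

def alloc_size(num_items, item_size=8, header=16):
    # (num_items * item_size + header) mod 2**32 -- no overflow check
    return (num_items * item_size + header) & MASK32

attacker_count = 0x20000000           # huge count from a crafted length field
print(alloc_size(attacker_count))     # 16 -- room for the header, not ~4 GB of items
```

This also illustrates Qualcomm's argument: if the encoding rules cap the length value before it ever reaches the size arithmetic, the wrap never happens.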
This story was updated on July 25 to add comments from Qualcomm.
Overlay malware has emerged as one of the more pernicious threats on mobile devices, particularly Android phones, and researchers have now discovered a new SMS phishing campaign that uses overlay malware to steal credentials for mobile banking apps and messaging apps.
The attackers behind the campaign are using a wide range of lures and a diverse infrastructure, including a dozen command-and-control servers spread across Europe. Targeting users in a number of European countries, the campaign uses shortened URLs sent via SMS to trick victims into clicking on a malicious link and installing the malware. The SMS messages typically have some version of a notification for a failed shipment and though the campaign originated in Russia, it has now begun targeting users in Denmark, Italy, Germany, Austria, and the U.K.
Overlay malware is a specific form of mobile malware that is designed to mimic the look and feel of a target app. When a user opens her mobile banking app, for example, the installed malware will execute and produce an overlay screen that asks for the user’s credentials and blocks out the legitimate app. The technique has become increasingly popular among attackers as it’s often difficult to distinguish the overlay screen from the real app and it’s a simple method to harvest a large number of credentials quickly.
Researchers at FireEye, who have been tracking the newest SMS phishing campaign, say the attackers also have added new apps to their target list. They began with users of MobilePay and WhatsApp in a couple of countries, and now have expanded to WhatsApp users in many other European countries, as well as customers of other banks. The campaigns have been active since early this year, and FireEye researchers said the malware also has the ability to mimic the official Google Play store app.
The malware and attack infrastructure used in this campaign is typical of professional attacker groups and the campaign involves several steps.
“Threat actors typically first setup the command and control (C2) servers and malware hosting sites, then put the malware apps on the hosting sites and send victims SMS messages with an embedded link that leads to the malware app. After landing on the user’s device, the malware launches a process to monitor which app is running in the foreground on the compromised device,” Wu Zhou, Linhai Song, Jens Monrad, Junyuan Zeng, and Jimmy Su of FireEye wrote in an analysis of the campaign.
“When the user launches a benign app into the foreground that the malware is programmed to target (such as a banking app), the malware overlays a phishing view on top of the benign app. The unwary user, assuming that they are using the benign app, will enter the required account credentials, which are then sent to remote C2 servers controlled by threat actors.”
Overlay malware attacks are particularly effective on Android devices, which allow for the installation of software from third-party sources. Apple iOS only allows installs from the App Store–unless the device is jailbroken–so it is much more difficult to get a malicious app on an iPhone or iPad than on an Android device.
Researchers have discovered a new class of mobile malware that has made its way into the Google Play store and is capable of completely compromising more than 90 percent of existing Android phones.
The malware, which researchers at Trend Micro are calling Godless, contains a number of exploits for known Android vulnerabilities, some of which are a couple of years old. The malware has already hit more than 850,000 devices, the researchers said, and it affects devices running on Android 5.1 or earlier.
“Godless is reminiscent of an exploit kit, in that it uses an open-source rooting framework called android-rooting-tools. The said framework has various exploits in its arsenal that can be used to root various Android-based devices. The two most prominent vulnerabilities targeted by this kit are CVE-2015-3636 (used by the PingPongRoot exploit) and CVE-2014-3153 (used by the Towelroot exploit). The remaining exploits are deprecated and relatively unknown even in the security community,” Veo Zhang, a mobile threat analyst at Trend Micro, said in an analysis of the Godless malware.
The malware is being hidden inside apps in various mobile app stores, and once a user downloads and installs a compromised app, Godless will wait until the device’s screen is turned off before executing. It then installs a payload as a system app that is difficult to remove.
“In addition, with root privilege, the malware can then receive remote instructions on which app to download and silently install on mobile devices. This can then lead to affected users receiving unwanted apps, which may then lead to unwanted ads. Even worse, these threats can also be used to install backdoors and spy on users,” Zhang said.
The newest version of Godless includes a function that will wait until it’s installed on a new device and then contact a remote server and download the exploit and the payload. Zhang said this behavior is likely a method to avoid the security checks that Google has in its Play store to identify malicious apps. The Godless code is typically found in a variety of utility apps, such as a flashlight app.
“We have also seen a large amount of clean apps on Google Play that has corresponding malicious versions—they share the same developer certificate—in the wild. The versions on Google Play do not have the malicious code. Thus, there is a potential risk that users with non-malicious apps will be upgraded to the malicious versions without them knowing about apps’ new malicious behavior,” Zhang said.
The most recent versions of Godless, once they have root privileges, will install a backdoor that then is used to install other malicious apps.
Mozilla is testing a new feature in pre-release versions of its Firefox browser that enables users to employ multiple personas or identities in different contexts at the same time. The feature, known as Containers, is designed to help users separate their various personal, work, and other online activities.
The new feature is currently in the Nightly build of Firefox 50, and it gives users the ability to open separate tabs in multiple different contexts. Containers are an attempt to address one of the more difficult problems in online identity: sectioning off different aspects of a user’s online activities. Many people use the same computer and browser for work and personal activities, and keeping those identities and information separate is notoriously difficult. Companies have tried various approaches over time, including Microsoft’s online ID card concept.
Mozilla’s new effort is not an entirely new construction. The concept of different online identities for different activities has been around for many years, but implementing it in a way that’s easy for people to use has proven to be quite difficult. Mozilla’s interpretation of the idea involves separate Containers for each different context in which the user is browsing.
“Each context has a fully segregated cookie jar, meaning that the cookies, indexeddb, localStorage, and cache that sites have access to in the Work Container are completely different than they are in the Personal Container. That means that the user can login to their work twitter account on twitter.com in their Work Container and also login to their personal twitter on twitter.com in their Personal Container,” Tanvi Vyas, a security engineer at Mozilla, said in a blog post introducing the feature.
“The user can use both mail accounts in side-by-side tabs simultaneously. The user won’t need to use multiple browsers, an account switcher, or constantly log in and out to switch between accounts on the same domain.”
To users, the change won’t have a major effect on normal browsing behavior. They can browse in their own default context and when they want to switch Containers, simply go to the File menu and select the option to open a new Container tab. The Containers feature will segregate any data that a site has the ability to read or write. When a user loads two separate sites in separate containers, the data from those sites are kept separate and neither site can read the other’s data.
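The segregation Vyas describes can be pictured as a cookie jar keyed by both container and site, rather than by site alone. A minimal sketch (the container names and cookie values here are invented for illustration, not Firefox internals):

```python
# Toy model of per-container cookie jars: the same site gets an independent
# jar in each container, so a login in one context never leaks into another.
cookie_jars = {}

def set_cookie(container, site, name, value):
    # Each (container, site) pair owns its own dictionary of cookies.
    cookie_jars.setdefault((container, site), {})[name] = value

def get_cookies(container, site):
    return dict(cookie_jars.get((container, site), {}))

set_cookie("Work", "twitter.com", "session", "work-session")
set_cookie("Personal", "twitter.com", "session", "personal-session")

print(get_cookies("Work", "twitter.com"))      # {'session': 'work-session'}
print(get_cookies("Personal", "twitter.com"))  # {'session': 'personal-session'}
```

Because the lookup key includes the container, twitter.com in the Work Container simply cannot see the session cookie it set in the Personal Container.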
“Assume the user then opens a Shopping Container and opens the History menu option to look for a recently visited site. example.com will still appear in the user’s history, even though they did not visit example.com in the Shopping Container. This is because the site doesn’t have access to the user’s locally stored History. We only segregate data that a site has access to, not data that the user has access to. The Containers feature was designed for a single user who has the need to portray themselves to the web in different ways depending on the context in which they are operating,” Vyas said.
Right now, the Containers feature is only on the Nightly Firefox build and Mozilla is using it as a way to collect users’ feedback. The company also is planning a Test Pilot release of it in a few months. However, Vyas said that there are no plans for Containers to be included in Firefox 50 when it moves to the next stage, which is the Aurora/Developer edition. While the Containers feature offers users an extra layer of privacy and security, Vyas warns that it is not a cure-all and comes with some limitations.
“The first is that all requests by your browser still have the same IP address, user agent, OS, etc. Hence, fingerprinting is still a concern. Containers are meant to help you separate your identities and reduce naive tracking by things like cookies. But more sophisticated trackers can still use your fingerprint to identify your device,” Vyas said.
“The Containers feature is not meant to replace the Tor Browser, which tries to minimize your fingerprint as much as possible, sometimes at the expense of site functionality. With Containers, we attempt to improve privacy while still minimizing breakage.”
An Austrian aerospace manufacturer that lost €50 million in a business email compromise scam earlier this year has fired its CEO over the incident. FACC, which makes components for the aerospace industry, said its board decided last week to fire Walter Stephan for his involvement in the scheme, after previously firing other employees.
In January, officials at FACC said that the company had been targeted by an email scheme run by outside attackers. The scam is believed to have been a version of the business email compromise scheme, in which attackers impersonate an executive or finance official inside a company in order to trick the victim into transferring a large amount of money from the company’s accounts to accounts controlled by the attackers. The fraudsters typically will spoof the domain name of the target company and ask the victim to move the money for an acquisition or other urgent transaction.
“Today, it became evident that FACC AG has become a victim of a crime act using communication and information technologies. The management board has immediately involved the Austrian Criminal Investigation Department and engaged a forensic investigation. The correct amount of damage is under review. The damage can amount to roughly EUR 50 million. The cyberattack activities were executed from outside of the company,” the statement from FACC said in January.
The FACC case is one of the larger examples of this kind of scheme, and has had unusually far-reaching consequences. The company’s board met last week and decided that it was going to remove Stephan, although the CEO’s role in the scam has not been detailed.
“In the supervisory board meeting, held on May 24, 2016, Mr. Walter Stephan (CEO) was revoked by the supervisory board as chairman of the management board of FACC AG with immediate effect for important reason. The supervisory board came to the conclusion, that Mr. Walter Stephan has severely violated his duties, in particular in relation to the ‘Fake President Incident’,” the company’s statement says.
Statistics compiled by the FBI show that the CEO phishing scam cost United States businesses $246 million in 2015. That number is likely well below the actual monetary losses, as it only represents losses that were reported to the FBI. Many companies don’t report these kinds of crimes, as they don’t want the information to become public. The amount of money that FACC lost in the attack in January is unusually high, but not unique. A Belgian bank lost $75 million to a similar scheme around the same time.
Researchers have found that a vulnerability in Android that allows attackers to trick users into granting apps elevated privileges affects more devices than had originally been thought–nearly 96 percent of all Android devices.
The vulnerability is not a typical bug. It relies on some user interaction and lies in the way that Android allows apps to draw over one another. Using that ability, an attacker can overlay an app on top of the Accessibility Services app in Android and trick the user into making a series of clicks that grants the app a broad range of advanced permissions. The attack is a variant of the old clickjacking technique used in desktop browsers, and researchers at Skycure found that 95.4 percent of Android devices are vulnerable to this mobile version of it.
The researchers disclosed the original problem in March during the RSA Conference, but said Tuesday that they’ve now confirmed that it works on devices running Marshmallow, as well as older devices. The target of the attack is the Accessibility Services portion of Android, a feature of the OS that is designed to help users with disabilities interact with a device. Many of those services have very powerful permissions, and can take a variety of actions on behalf of the user.
https://youtu.be/4cSRq7_Z26s
“Recognizing this potential, starting with Lollipop (5.x), Google added additional protection to the final ‘OK’ button that would grant these accessibility permissions. In other words, Android programmers wanted to make sure that if a user was going to turn on Accessibility Services, the OK button could not be covered by an overlay, and the user would be sure to know what they are allowing,” Yair Amit, CTO of Skycure wrote in a post explaining the issue.
However, Skycure found that by overlaying another app on top of the Accessibility Services screen–a behavior that is part of Android’s design–an attacker could guide a victim through the process of granting the malicious app high privileges by clicking on various parts of the app. Those clicks go through the overlaid app and press the OK button in the Accessibility Services app.
“Accessibility Clickjacking can allow malicious applications to access all text-based sensitive information on an infected Android device, as well as take automated actions via other apps or the operating system, without the victim’s consent. This would include all personal and work emails, SMS messages, data from messaging apps, sensitive data on business applications such as CRM software, marketing automation software and more,” Amit said in the original post on the issue.
Skycure disclosed the vulnerability to Google, which controls the Android code base, before its initial public discussion of it in March, but the company is not going to fix it.
“Skycure takes pride in abiding by vendor’s responsible disclosure policy. Per that policy, we notified Google of this issue in March 2016. Following our correspondence with the Google Android Security team, they have decided not to fix this issue and accept this risk as a consequence of its current design,” Amit said.
Developers building bots for Slack are including their personal access tokens in code posted on GitHub, researchers have found, a problem that could give anyone who finds the tokens access to internal Slack conversations and files.
Slack is a team communications app used in many organizations to share information, files, and other data. Developers can write bots that perform specific actions, such as responding to common questions, and researchers at Swedish security firm Detectify discovered that hundreds of developers are including their tokens in code snippets posted publicly on GitHub. Slack tokens are essentially credentials for users and developers, and developers are including their own tokens in their bot code, the researchers found.
Slack tokens are structured in a highly specific way: each begins with a fixed prefix, followed by a hyphen and the rest of the token. The Detectify researchers said that simply searching GitHub for that prefix makes leaked tokens easy to find.
“In the worst case scenario, these tokens can leak production database credentials, source code, files with passwords and highly sensitive information. The Detectify Team have already been able to find thousands of tokens by simply searching GitHub; and new tokens are becoming publicly available every day,” researchers at Detectify Labs wrote in a disclosure of the issue.
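The searchability Detectify describes follows directly from the token format. Slack tokens begin with a well-known prefix (“xoxp-” for user tokens, “xoxb-” for bot tokens, and so on), so a simple pattern match is enough to pull candidates out of public code. A minimal sketch (the sample token below is invented):

```python
import re

# Strings shaped like Slack API tokens: the fixed "xox" prefix plus a type
# letter and a hyphen is what makes leaked tokens easy to locate with a
# code search.
TOKEN_RE = re.compile(r"\bxox[pbas]-[0-9A-Za-z-]+")

def find_slack_tokens(text):
    """Return every Slack-token-shaped string found in the given text."""
    return TOKEN_RE.findall(text)

snippet = 'slack = SlackClient("xoxp-1234-5678-abcdefghij")'
print(find_slack_tokens(snippet))  # ['xoxp-1234-5678-abcdefghij']
```

A scan like this is essentially what GitHub's search surface makes trivial at scale, which is how the Detectify team turned up hundreds of live tokens.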
The consequences of an attacker getting access to a developer’s token could be quite serious.
“Using the tokens it’s possible to eavesdrop on a company. Outsiders can easily gain access to internal chat conversations, shared files, direct messages and even passwords to other services if these have been shared on Slack,” the researchers said.
There are several different kinds of Slack tokens, including a custom bot token and a private token. The private token is the most powerful, and functions like a full username and password combination. With that token, an attacker could get full access to a target Slack channel. Detectify’s researchers said they found 626 private tokens on GitHub.
“Even for a user with two factor authentication enabled, you can still access Slack with nothing else but this token,” the researchers said.
Detectify contacted Slack about the issue, and the company has responded by sending a message to teams with leaked tokens, informing them of the problem and disabling any leaked tokens.
Dutch police have seized servers and other equipment operated by Ennetcom, a communications provider in the Netherlands that operates an encrypted mobile phone service. The Dutch National Police Corps allege that the company was providing encrypted communications for criminal groups.
The police said they have copied the contents of several servers belonging to the company, both in the Netherlands and in Canada. Ennetcom provides a variety of secure communications products, including encrypted BlackBerry handsets. The company offers customized secure devices, with encrypted storage and encrypted email.
“Ennetcom is the only company in the world that offers encrypted BlackBerry devices with 3 layers of encryption. In addition to the BES BlackBerry encryption and standard S/MIME support package, we have built a Mobile Encryption Gateway. This is a closed system. Users can only communicate to other Ennetcom S/MIME MEG BlackBerry devices, not to any other random BlackBerry device nor to other BlackBerry security platforms,” the company’s site says.
In a statement, the Dutch national police said the network had about 19,000 registered users, who were notified of the seizure of the servers and other assets. They also said the network was known to be used by criminals.
In a statement on its site, Ennetcom said that this is not the first time law enforcement agencies have targeted the company. The company has suspended its operations in the face of the new seizure and investigation.
“Tuesday, April 19th, 2016 revealed that judicial research is being done towards Ennetcom. There has been an international collaboration of various government agencies and Interpol in an attempt to put our network down. Previously there have been attempts to put us down, amongst them the Dutch intelligence service, but they never succeeded,” the statement, which appears in a pop-up on the Ennetcom home page, says.
“Regarding the current investigation, Ennetcom is forced to suspend all operations and services for the time being. Ennetcom regrets this course of events and insinuations towards Ennetcom. It should be clear that Ennetcom stands for freedom of privacy! Because of security and privacy reasons Ennetcom chooses to keep all systems offline.”
Police in Toronto also were involved in the operation, seizing a BlackBerry Enterprise Server operating in that city that was part of the Ennetcom network.
“In Canada today, in partnership with the Toronto Police, a server temporarily decommissioned. Also, this server has been copied,” the Dutch National Police Corps statement says.
Rep. Ted Lieu, who has been one of the loudest voices in Congress on security and privacy issues, is urging the House Committee on Oversight and Government Reform to look into the vulnerabilities in the SS7 phone protocol that allowed researchers to track and compromise Lieu’s phone in a demonstration this week.
The letter comes days after the demo, which was done on 60 Minutes over the weekend with Lieu’s cooperation. The attack had less to do with the iPhone Lieu was using in California than the problems with the Signaling System 7 protocol, a system that’s used to connect and help telecom carriers communicate. Security researchers in Germany, knowing only the number of Lieu’s iPhone, were able to take advantage of weaknesses in the SS7 protocol to track Lieu’s movements and listen to and record conversations on the iPhone.
What the researchers showed was that anyone who can access SS7, which includes a lot of employees at hundreds of global telecoms, can find data on subscribers and ultimately do what they did to Lieu. In the piece, Lieu called the demo “creepy”, and in his letter to the leaders of the Oversight and Government Reform committee he said the problems “threaten personal privacy, economic competitiveness and U.S. national security.”
Security researchers have known about the issues with SS7 for several years and there have been talks on the problems and demonstrations of attacks at various conferences. But Lieu hopes that his demonstration and the attention from Congress will bring the problem into the daylight and push carriers to fix it.
“The applications for this vulnerability are seemingly limitless, from criminals monitoring individual targets to foreign entities committing economic espionage on American companies to nation states monitoring U.S. government officials,” Lieu’s letter says. “The vulnerability has serious ramifications not only for individual privacy, but also for American innovation, competitiveness and national security. Many innovations in digital security–such as multi-factor authentication using text messages–may be rendered useless.”
One of the problems with fixing the bugs in SS7 is that no one entity is responsible for it. The system is actually a group of protocols that helps carriers communicate and, like many things in the telecom world, it was designed decades ago and has evolved over the years. But the security of the system hasn’t kept up with the capabilities of researchers and attackers, as Lieu discovered. German researcher Karsten Nohl, who illustrated the problems in the 60 Minutes piece, has spoken publicly about them before, as have other researchers.
Now, Lieu (D-Calif.) is pushing his colleagues in the House of Representatives to look into the problem.
“I strongly believe that the action by the House Committee on Oversight and Government Reform is needed to examine the full scope and implications of the SS7 security flaw,” he wrote.
The FBI says it has seen a huge increase in the volume of business email compromise scams hitting enterprises in the last year, and estimates that losses from the scheme have hit $2.3 billion now.
Like normal phishing scams, these kinds of attacks rely on highly believable messages and a healthy dose of social engineering to get the job done. Typically, an attacker will send an email to a victim inside a target organization, saying that funds need to be transferred immediately to an outside account. The email usually has a spoofed sender address and appears to come from the CEO, CFO, or other top executive inside the target company.
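One common defensive heuristic against this pattern is to flag mail whose display name matches a known executive while the sending address sits outside the company's own domain. This is a rough sketch under stated assumptions, not a production filter: the executive names and `example.com` domain are hypothetical placeholders.

```python
from email.utils import parseaddr

# Hypothetical data for illustration: known executive names and the
# company's legitimate sending domain.
EXECUTIVES = {"Jane Smith", "John Doe"}
COMPANY_DOMAIN = "example.com"

def looks_like_bec(from_header):
    """Flag a From: header whose display name impersonates an executive
    but whose actual address is not on the company's own domain."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    return name in EXECUTIVES and domain != COMPANY_DOMAIN
```

Real deployments layer this with SPF/DKIM/DMARC checks, since attackers also spoof the envelope domain itself.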
The losses from these attacks are staggering, as the FBI’s new numbers show. Since October 2013, when the bureau began tracking the scams, through February of this year, the FBI says it has received more than 17,000 complaints about the attacks, which also are known as CEO email scams. In that time, total losses by businesses have amounted to $2.3 billion, the bureau says.
“The schemers go to great lengths to spoof company e-mail or use social engineering to assume the identity of the CEO, a company attorney, or trusted vendor. They research employees who manage money and use language specific to the company they are targeting, then they request a wire fraud transfer using dollar amounts that lend legitimacy,” the FBI’s alert says.
“There are various versions of the scams. Victims range from large corporations to tech companies to small businesses to non-profit organizations. Many times, the fraud targets businesses that work with foreign suppliers or regularly perform wire transfer payments.”
It’s not just smaller or unsophisticated businesses that fall for these scams, either. Last week, details emerged of an attack on Mattel that nearly cost the company $3 million. A finance executive at the company got an email from what seemed to be the CEO, asking her to send a payment of $3 million to one of the company’s vendors in China. She did, and only after checking with the CEO later did she realize that he hadn’t sent the email. The company worked with law enforcement in the United States and China and got the money back a few days later, something that is a rarity with these attacks.
In January, Crelan Bank in Belgium lost $75 million in a similar scheme, and a manufacturing company in Austria lost about €50 million in a phishing scheme, too.
ORLANDO–One of the few topics that it is relatively easy to get consensus on in the security community is that passwords have outlived their usefulness as a standalone means of authentication. Two-factor authentication, in various forms and factors, has become the main way to fix this, but getting users and management to buy into the idea can be painful, as the security team at Duke University and its associated health system found out amid a number of data breaches that hit the organization.
There are a number of different two-factor authentication or two-step verification systems available now, including hardware tokens, software tokens, SMS verification, phone call verification, and many others. And companies large and small across the spectrum of industries have deployed one or another of these systems, including Twitter, Google, Apple, and Amazon. While these systems provide a more robust level of security than simple usernames and passwords, they can be difficult to roll out, especially if users are resistant to the plan.
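Software token systems of the kind listed above typically generate time-based one-time passwords. As a minimal sketch of how that second factor works, here is the TOTP algorithm from RFC 6238 (HMAC-SHA1 variant) in a few lines; commercial products such as Duo's push verification work differently, so this is illustrative only.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant).

    The shared secret is combined with the current 30-second time step,
    so client and server independently derive the same short-lived code.
    """
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step          # which 30-second window we're in
    msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# With the RFC 6238 test secret and time 59, the 8-digit code is 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Because the code changes every 30 seconds, a phished password alone is not enough to log in, which is the property the Duke teams were after.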
Charles Kesler, the CISO of Duke Medicine, and Richard Biever, CISO of Duke University, found themselves facing this problem in early 2014 after a multi-stage phishing attack hit the organizations. The scheme involved a variety of emails that were sent to faculty and staff members, saying that the recipients had been approved for pay raises and needed to provide bank account details in order for the raises to be processed. Most of the targets didn’t take the bait, but 10 of them did and soon found themselves without their paychecks.
In the wake of the attacks, university and health system leaders asked Kesler and Biever to come up with options for improving authentication in their organizations. Their teams looked at a variety of options, and decided that multi-factor authentication was the right one. Deciding which system to use proved difficult, though, as the organizations have a wide range of apps and users to consider.
“We weren’t going to go out and buy twenty-eight thousand RSA tokens and distribute them,” Kesler said during a talk he and Biever gave on Monday at the InfoSec World conference here.
Instead, they settled on a two-factor system from Duo Security that uses phone calls or push notifications on mobile devices for the second step in the verification process. The organizations were already in the middle of a small 2FA pilot program when the phishing attacks hit, and the incidents accelerated the rollout very quickly.
“We really encouraged people to sign up for the program then, and about nine thousand did that month,” Biever said.
The adoption rate of 2FA in the organization continued to increase in the following months, but Kesler and Biever were still looking for ways to get everyone involved in the program. So in April 2014 Kesler made the program mandatory in his organization. He said the days of being able to rely on passwords for authentication are long past.
“Passwords are just not really sufficient for protecting data these days,” he said. “In order to take care of patients, we have to make data easily accessible to many people. We have to be very conscious of that. There’s a tremendous amount of innovation in health care right now, and that creates complexity. We have hundreds if not thousands of apps, so it’s a complex problem.”
There are vulnerability reports, and there are Vulnerability Reports. The latest and perhaps best entry in the latter category is a disclosure of more than 1,400 vulnerabilities in a variety of medication-supply devices manufactured by CareFusion.
The affected devices are CareFusion’s Pyxis SupplyStation systems, automated cabinets that allow medical personnel to dispense medication and monitor dosages. The devices are used in hospitals and other medical institutions and typically are networked together. Security researchers Billy Rios and Mike Ahmadi discovered the vulnerabilities and reported them through the ICS-CERT. The flaws affect several versions of the Pyxis SupplyStation devices, none of which are supported any longer.
“The affected products, Pyxis SupplyStation systems, are automated supply cabinets used to dispense medical supplies that can document usage in real-time. The Pyxis SupplyStation systems include automated devices that may be deployed using a variety of functional configurations. The Pyxis SupplyStation systems have an architecture that typically includes a network of units, or workstations, located in various patient care areas throughout a facility and managed by the Pyxis SupplyCenter server, which links to the facility’s existing information systems,” the ICS-CERT advisory says.
Rios and Ahmadi tested several different versions of the software running on the SupplyStation devices and found 1,418 separate vulnerabilities in version 8.1.3 of the software.
“Exploitation of these vulnerabilities may allow a remote attacker to compromise the Pyxis SupplyStation system. The SupplyStation system is designed to maintain critical functionality and provide access to supplies in ‘fail-safe mode’ in the event that the cabinet is rendered inoperable. Manual keys can be used to access the cabinet if it is rendered inoperable,” the advisory says.
There are publicly available exploits for these vulnerabilities, and because the affected products are no longer supported, CareFusion is not planning to release patches for them.
“CareFusion has confirmed that the identified vulnerabilities are present in the Pyxis SupplyStation systems that operate on Server 2003/Windows XP, which are at end-of-life, are no longer supported. As a result of the identified vulnerabilities, CareFusion has started reissuing targeted customer communications, advising customers of end-of-life versions with an upgrade path. For customers not pursuing the remediation path of upgrading devices, CareFusion has provided compensating measures to help reduce the risk of exploitation,” the advisory says.
Researchers at Michigan State University have developed a clever hack that allows them to scan and then print a target user’s fingerprint and then use it to unlock a mobile phone via the fingerprint sensor.
The method uses an off-the-shelf inkjet printer equipped with some special cartridges with conductive ink to print the fingerprint image on special paper. That image is then used to unlock a target phone by applying it to the fingerprint sensor on the device. Those sensors rely on the fingerprint to identify the specific user, but also on conductivity to complete the circuit when the user’s finger is placed on the sensor.
There has been previous research on spoofing fingerprints to fool touch sensors on phones, specifically the iPhone 5S, the first mass-market phone to use a fingerprint sensor. That mechanism was bypassed within a few days of its release when researchers in Germany were able to use a fingerprint taken from a glass surface and replay it on the phone after printing it with very thick toner on special paper.
The MSU researchers were able to improve upon the existing methods by reducing the amount of time it takes to create the spoofed fingerprint and making the process simpler.
“This experiment further confirms the urgent need for antispoofing techniques for fingerprint recognition systems, especially for mobile devices which are being increasingly used for unlocking the phone and for payment,” the MSU researchers wrote in their paper.
Biometrics such as fingerprint and voice recognition are becoming increasingly popular as secondary, and sometimes primary, forms of authentication. Because these identifiers are unique to each person, they are considered more secure and reliable for users than passwords, but researchers have found a variety of different methods for bypassing or hacking these mechanisms. A group at the University of Alabama at Birmingham published a method a few months ago for building a model of a user’s voice and using voice-morphing software to impersonate the target.
The method the MSU researchers developed involves using conductive silver ink cartridges from a Japanese manufacturer, along with a normal black ink cartridge. The researchers scanned a target user’s fingerprint at 300 DPI, then reversed the fingerprint horizontally and printed it on special glossy paper. The print could then be used to unlock the user’s phone. The researchers ran the experiment on a Samsung Galaxy S6 and a Huawei Honor 7 and found that it worked on both devices.
“Once the printed 2D fingerprints are ready, we can then use them for spoofing mobile phones. In our spoofing experiment, we selected Samsung Galaxy S6 and Huawei Hornor 7 phones as examples. We enrolled the left index finger of one of the authors and used the printed 2D fingerprint of this left index finger to unlock the fingerprint recognition systems in these phones,” the paper says.
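The horizontal reversal step described above exists because the printed image must present the fingerprint the way the sensor sees a real finger (a mirror image of the scan). As a minimal sketch, with an image represented as a 2D list of pixel rows; the actual pipeline of course works on 300 DPI scans and conductive ink, not toy arrays.

```python
def mirror_horizontal(image):
    """Flip a 2D image (list of pixel rows) left-to-right -- the
    reversal step applied to the scanned fingerprint before printing."""
    return [row[::-1] for row in image]
```

In an image library this is a one-call operation (e.g. a left-right transpose), but the idea is the same: each row is reversed so the print reads as the sensor expects.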
The MSU researchers, Kai Cao and Anil K. Jain, said that their method doesn’t work on all mobile phones with fingerprint sensors, but it is a step forward from what’s been done before.
“As the phone manufactures develop better anti-spoofing techniques, the proposed method may not work for the new models of mobile phones. However, it is only a matter of time before hackers develop improved hacking strategies not just for fingerprints, but other biometric traits as well that are being adopted for mobile phones (e.g., face, iris and voice),” Cao and Jain said.
Image from Flickr stream of Kourepis Aris.
Google is expanding the way that its Safe Browsing API protects users against malicious content by blocking deceptive content on sites that is considered to be social engineering.
The change to Safe Browsing will focus on detecting and warning users about content that tries to trick users into downloading a piece of software or taking some other action that they wouldn’t normally take. A common example of this is a fake or deceptive download button on a site that’s included in a dialogue box warning about out-of-date software.
“You may have encountered social engineering in a deceptive download button, or an image ad that falsely claims your system is out of date. Today, we’re expanding Safe Browsing protection to protect you from such deceptive embedded content, like social engineering ads,” Lucas Ballard of Google’s Safe Browsing team said in a blog post.
Attackers often use malicious or deceptive ads that imitate legitimate download dialogues for software such as Adobe Flash or Microsoft’s Skype in order to trick users into downloading something else. That download could be a browser toolbar, malware, or some other unwanted software. To non-expert users, these ads or dialogue boxes can seem indistinguishable from authentic ones, which is exactly what fraudsters and attackers are counting on.
Google’s new effort to protect against social engineering takes much of that decision making out of users’ hands. The Safe Browsing API is used not just in Google Chrome, but also in many of the other major browsers, so the new protection will benefit those users, as well.
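For a sense of how clients query Safe Browsing for this class of threat, here is a sketch of a request body for the v4 Lookup API's `threatMatches:find` endpoint, which accepts `SOCIAL_ENGINEERING` as a threat type. The client name is a hypothetical placeholder, and the request shape is based on the public v4 API rather than anything specific to this announcement.

```python
def build_lookup_request(urls, client_id="example-client"):
    """Build a Safe Browsing v4 threatMatches:find request body asking
    whether any of the given URLs are flagged, including for the
    SOCIAL_ENGINEERING threat type described in the announcement."""
    return {
        "client": {"clientId": client_id, "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }
```

A browser or security product would POST this body (with an API key) and warn the user if any `matches` come back for a URL about to be loaded.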
Federal officials have indicted more than 50 people, including 15 former prison officials and 19 former inmates, in a long-running vishing and phone fraud scheme that was run through a Georgia prison.
Using cell phones smuggled into Autry State Prison by guards, the inmates would call victims, mostly in the Atlanta metro area, and inform them that there were warrants out for their arrest because they had failed to show up for jury duty. The callers would warn the victims that law enforcement officers were on the way and they were about to be arrested. Unless, of course, the victims could come up with some money to pay a fine and have the warrants erased.
Because that’s how the justice system works.
It’s not how it works, obviously, but when there’s a caller, pretending to be a sheriff’s deputy or other law enforcement agent and threatening jail time, people tend to have irrational reactions. And that’s what the callers were counting on. The inmates researched the names of local law enforcement officers and created fake voice mail greetings on their contraband cell phones so when victims called back, the greetings would seem legitimate.
Once the scammer had a victim on the hook, he would work to convince the victim to pay the imaginary fine immediately.
“For those victims who wanted to pay a fine, the inmates instructed them to purchase pre-paid cash cards and provide the account number of the cash card or wire money directly into a pre-paid debit card account held by the inmates. Based on these false representations, the victims electronically transferred money to the inmates because they believed that the funds would be used to pay the fine for failing to appear for jury duty and would result in the dismissal of the arrest warrant,” a statement from the U.S. Attorney’s Office for the Northern District of Georgia on the case says.
The jury duty scam is a twist on the more familiar IRS phone-fraud scheme in which callers demand immediate payment for supposedly unpaid back taxes. The lure is different, but the mechanics and results are the same. With the scheme allegedly run by the Autry prison inmates, it was effective enough to pull in more than $37,000, some of which allegedly went to the inmates and some went to alleged conspirators outside the prison.
“After a victim provided an inmate with the account number of the pre-paid cash card, the inmates then used their contraband cellular telephones to contact co-conspirators, who were not incarcerated, to have those individuals transfer the money from the cash card purchased by the victims to a pre-paid debit card possessed by the co-conspirators. Next, the co-conspirators withdrew the victim’s money, which had been transferred to the pre-paid debit card they controlled, via an automated teller machine or at a retail store. Typically, the co-conspirators then laundered the stolen money by purchasing a new cash card so that the victims’ funds could be transferred back to the inmates,” the U.S. Attorney’s Office said.
Among those indicted, many of the former correctional officers were charged with conspiring to accept bribes; the inmates and former inmates were charged mainly with conspiring to bribe correctional officials, conspiring to smuggle in contraband, or money laundering; and most of the alleged co-conspirators outside the prison were charged with wire fraud and money laundering.
Image from Flickr stream of Quinn Dombrowski.
A security researcher has developed a phishing attack against the LastPass password manager app that is virtually impossible to detect and has the ability to mimic the LastPass login sequence perfectly.
The technique takes advantage of several weaknesses in the way that LastPass handles user logout notifications and the resulting authentication sequence. Sean Cassidy, the CTO of Seattle-based Praesidio, developed the attack and has released code for the technique, which he calls LostPass. In essence, the technique allows an attacker to copy much of the login sequence for a LastPass user, including the use of identical login dialogs and the ability to capture and replay two-factor authentication codes.
Cassidy discovered the technique after becoming suspicious when he received a message in Chrome telling him that his LastPass session had expired and he needed to log back in.
“When I went to click the notification, I realized something: it was displaying this in the browser viewport. An attacker could have drawn this notification,” Cassidy said in a blog post explaining LostPass.
“Any malicious website could have drawn that notification. Because LastPass trained users to expect notifications in the browser viewport, they would be none the wiser. The LastPass login screen and two-factor prompt are drawn in the viewport as well.”
In order for LostPass to work, an attacker needs to get a victim to visit a malicious site where the LostPass code is deployed. The code will check to see if the victim has LastPass installed, and if so, use a CSRF (cross-site request forgery) weakness in LastPass to force the victim to log out of the app. The attacker using LostPass then will show the victim the notification telling her she’s logged out and when she clicks on it, will bring her to the login page the attacker controls. It will look identical to the authentic one.
Once the victim enters her credentials, they are sent to the attacker’s server, who can use the LastPass API to check their authenticity. If the server says that 2FA is set up on the victim’s account, LostPass will display a screen to enter the 2FA code, which the attacker will capture and use to log in to the victim’s account.
“Once the attacker has the correct username and password (and two-factor token), download all of the victim’s information from the LastPass API. We can install a backdoor in their account via the emergency contact feature, disable two-factor authentication, add the attacker’s server as a “trusted device”. Anything we want, really,” Cassidy wrote.
The attack has serious implications for LastPass users, who have been trained to respond to notifications from the app in the browser window. Cassidy disclosed the LostPass attack to LastPass in November and said the company didn’t respond until December. He spoke about the technique at the ShmooCon security conference last weekend and since then LastPass has begun requiring email confirmation for any new logins.
However, Cassidy said that doesn’t fix the problem, but just mitigates it. A better solution, he said, would be to stop showing user notifications in the main part of the browser window.
“They should not show notifications in the viewport (the part of the browser where the content is shown). They should either move their login page to HTTPS EV or only display it in a pop-up window like they sometimes do in Chrome,” Cassidy said by email.
He said that LastPass has now disputed whether he contacted the company in November, even though he sent proof of the email contact.
“And to imply that I withheld information for my talk is just ridiculous. I told them everything that was going to be in there in advance,” Cassidy said by email.
Image from Flickr stream of Christiaan Colen.
In the face of continued data breaches and an ever-increasing pile of identity thefts, the IRS has released a new piece of guidance that says companies are able to deduct the cost of identity theft protection, even without it being connected to a specific breach.
The new guidance, released Monday, comes as consumers are beset on all sides by identity theft threats stemming from a long list of data breaches at retailers, health-care companies, financial-services firms, and many other organizations. Scammers and crooks–organized and otherwise–use the mountain of available personally identifiable information belonging to consumers as the basis for their schemes. The problem has gotten to the point that the person who doesn’t receive at least one breach notification letter every year can count himself lucky indeed.
Offering free identity theft protection and credit-monitoring services is a standard part of breach responses from compromised organizations, but some organizations have been providing such benefits on their own. The IRS now says the cost of those services is a deductible one for these companies.
“The announcement provides that the IRS will not assert that an individual whose personal information may have been compromised in a data breach must include in gross income the value of the identity protection services provided by the organization that experienced the data breach,” the new guidance from the IRS says.
The agency had released a statement on the topic in August and requested comments on it. There were only four comments, but those who did comment said information security is one of their bigger concerns, resulting from the growing number of data breaches. The new guidance also says that individual employees don’t have to include the value of any identity theft protection services their employers provide in their income.
“Accordingly, the IRS will not assert that an individual must include in gross income the value of identity protection services provided by the individual’s employer or by another organization to which the individual provided personal information (for example, name, social security number, or banking or credit account numbers). Additionally, the IRS will not assert that an employer providing identity protection services to its employees must include the value of the identity protection services in the employees’ gross income and wages,” the IRS guidance says.
Already this year there have been a number of breaches, including one at Time Warner that exposed data belonging to 320,000 people.
Image from Flickr stream of 401(k).
Calling a tech support line can be a fairly miserable experience. Having tech support reps calling you at home to warn you about supposed malware on your PC is even worse. It’s an old scam, but one that’s gotten a vicious new twist of late with scammers who know every detail of a victim’s support history, purchases, and even the model numbers of the machines they’ve bought, lending a high level of authenticity to scams that already dupe millions of people every year.
The fake tech support scam is one that’s been rattling around the tech industry for the better part of a decade now, with the most famous iteration being the Windows malware version. In most of these campaigns, scammers purporting to be from some nebulous “Windows support” organization call a victim directly and inform him that they have detected malware on the victim’s computer. The caller usually tells the victim to download a remote access tool to allow the support team to diagnose the malware infection. The callers will then, of course, find some fake malware on the victim’s PC and offer to remove it for a reasonable fee.
Various other versions of this scam can involve ransomware being installed on victims’ PCs, which can cost the victims quite a bit of money. However, the latest variant involves not random, ill-informed people throwing things against the wall, but rather highly knowledgeable scammers who know highly specific details of each target’s history with the company they’re spoofing. A case in point is a recent rash of calls to Dell customers in which the caller says he is from Dell itself and is able to identify the victim’s PC by model number and provide details of previous warranty and support interactions with the company.
These are details that, it would seem, only Dell or perhaps its contractors would know. One person who was contacted by the scammers wrote a detailed description of the call, and said the caller had personal details that could not have been found online.
“Scammers pretending to be from Dell computers phoned me in November — but these scammers knew things about me. They identified the model number for both my Dell computers, and knew every problem that I’d ever called Dell about. None of this information was ever posted online, so it’s not available anywhere except Dell’s own customer service records,” the post on 10zenmonkeys.com, a tech and culture site, says.
The call is not an isolated incident. There are a number of posts on Dell’s own customer support forum from people who experienced similar calls.
“He claimed to be from the Dell ‘R and D Department’. He claimed that my computer had detected a problem and notified Dell automatically. He knew that Dell recently replaced a battery for me, which was true, so that’s why I believed he was really from Dell. (This means they also hacked Dell!) He had me run come commands on the PC and he told me all devices on my IP address were compromised. He had me install the teamviewer app. He passed us off to his ‘level 5 network support’ person. Then I got really suspicious and I hung up the phone,” one post says.
Many of the posts on Dell’s forums, as well as the post on 10zenmonkeys, mention the possibility that Dell has been compromised, allowing the scammers to access customers’ personal details. A company spokesman said Dell is looking into reports it has received from customers.
“We have an extensive end-user security practice that develops capabilities and best practices to better protect our customers. Further, we have established a process by which customers can report this type of tech-support phone scam,” said David Frink.
“Yes, we are investigating as customers provide us information regarding the calls.”
The FTC has been warning customers about tech support scams for many years and has taken steps to disrupt some of the crews running them. It’s a multimillion dollar business that has ensnared thousands of victims, and the addition of authentic details from the victim’s support and purchase history makes it even more difficult for potential victims to identify and ignore these scams.
Perhaps because smart lightbulbs that refuse firmware updates and refrigerators with blue screens of death aren’t enough fun on their own, a new WiFi protocol designed specifically for IoT devices and appliances is on the horizon, bringing with it all of the potential security challenges you’ve come to know and love in WiFi classic.
The new protocol is based on the 802.11ah standard from the IEEE and is being billed as Wi-Fi HaLow by the Wi-Fi Alliance. Wi-Fi HaLow differs from the wireless signal that most current devices use in a couple of key ways. First, it’s designed as a low-powered protocol and will operate in the range below one gigahertz. Second, the protocol will have a much longer range than traditional Wi-Fi, a feature that will make it attractive for use in applications such as connecting traffic lights and cameras in smart cities.
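The range advantage of sub-gigahertz operation follows from basic radio physics: free-space path loss increases with frequency, so a 900 MHz signal loses less energy over the same distance than a 2.4 GHz one. A quick back-of-the-envelope comparison illustrates the point (illustrative only; real-world range also depends on transmit power, antennas, and obstacles, and the specific frequencies here are assumptions, not HaLow channel assignments):

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Compare a 900 MHz (sub-gigahertz, HaLow-style) link to 2.4 GHz Wi-Fi at 100 m.
loss_900 = free_space_path_loss_db(100, 900e6)
loss_2400 = free_space_path_loss_db(100, 2.4e9)
advantage_db = loss_2400 - loss_900  # roughly 8.5 dB less loss at 900 MHz
```

That ~8.5 dB difference, plus better wall penetration at lower frequencies, is what lets the same transmit power cover a significantly larger area.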
The new version of Wi-Fi also could be useful for connections among smaller, lower-powered devices such as smart watches, fitness bands, and other pieces of wearable technology. The Wi-Fi Alliance, which certifies Wi-Fi compatible devices and is overseeing usage of the proposed new protocol, is touting it as an extension and improvement of the existing protocol.
“Wi-Fi HaLow is well suited to meet the unique needs of the Smart Home, Smart City, and industrial markets because of its ability to operate using very low power, penetrate through walls, and operate at significantly longer ranges than Wi-Fi today,” said Edgar Figueroa, president and CEO of Wi-Fi Alliance.
But, as with any new protocol or system, Wi-Fi HaLow will carry with it new security considerations to face. And one of the main challenges will be securing all of the various implementations of the protocol. Device manufacturers all implement things in their own way and in their own time, a practice that has led to untold security vulnerabilities and innumerable billable hours for security consultants. Security experts don’t expect Wi-Fi HaLow to be the exception.
“While the standard could be good and secure, implementations by different vendors can have weaknesses and security issues. This is common to all protocols,” said Cesar Cerrudo, CTO of IOActive Labs, who has done extensive research on the security of a wide range of smart devices and smart city environments.
Many of the devices that may use the new protocol–which isn’t due for release for a couple of years–are being manufactured by companies that aren’t necessarily accustomed to thinking about threat modeling, potential attacks, and other issues that computer hardware and software makers have had to face for decades. That could lead to simple implementation problems that attackers can take advantage of.
Cerrudo said that the longer range of Wi-Fi HaLow could present an opportunity for attackers, as well.
“Having a longer range also means that attackers can launch attacks from longer distances, your neighbor’s devices three or more houses away will be able to talk to (hack) your devices. What’s more scary is that if this new standard goes mainstream and it’s adopted by smart home, smart city, smart phones technologies then hackers will get in a golden age being able to hack everything from miles away,” Cerrudo said.
“For instance, an attacker in China wants to hack smart homes and cities in the US he will just need to hack some smart phones in the US and from there launch attacks that will affect homes and cities technologies.”
Each new iteration in technology brings with it fresh security and privacy considerations, and the proliferation of connected non-computing devices is no different. The concept of a voice-enabled hub that controls your home’s climate, entertainment, and other systems is now a reality, as is the ability to send an email from your refrigerator. That’s all well and good, until these smart devices start doing really dumb things.
“This is nothing new but until now we have different technologies (protocols) used for communications on smart home and smart cities devices, etc. When all these converge and use the same technology then the attack surface grows significantly and opens the door for attacks,” Cerrudo said.
Few, if any, companies or government agencies store more sensitive personal information than the IRS, and consumers have virtually no insight into how that data is used and secured. But, as the results of a recent Justice Department investigation show, when you start poking around in those dark corners, you sometimes find very ugly things.
Beginning in 2008, a small group of people–including an IRS employee who worked in the Taxpayer Advocate Service section–worked a simple and effective scam that involved fake tax returns, phony refunds, dozens of pre-loaded debit cards, and a web of lies. The scheme relied upon one key ingredient for its success: access to taxpayers’ personal information. And it brought the alleged perpetrators more than $1 million.
The scam’s particulars are not unique. There have been a variety of similar operations that have come to light over the last few years, with IRS employees improperly accessing taxpayer records as part of a financial fraud or out of curiosity over what an athlete or actor makes. What sets this case apart is that the accused IRS employee, Nakeisha Hall, was tasked specifically with helping people who had been affected by some kind of tax-related identity theft or fraud.
From that position, Hall allegedly tapped into the personal files of an untold number of taxpayers and used the data she found there to file false tax returns in those victims’ names. The returns would be set up in such a way that the “taxpayers” would be due refunds. Hall typically would request that refunds be put on debit cards issued by Bancorp Bank or another bank, according to an indictment issued by the Department of Justice in December. The debit cards would be mailed to addresses that Hall had access to, and then Hall’s alleged co-conspirators Jimmie Goodman and Abdullah Coleman would pick up the cards.
From there, the crew would take cards to ATMs and withdraw money, or use them in stores, the DoJ said. Hall, Goodman, and Coleman were arrested last month on a number of charges related to the scam, including mail fraud and conspiracy to commit bank fraud.
“Taxpayers trust, and expect, that IRS employees, as a whole, will safeguard their most sensitive personal information. Taxpayers also must trust that IRS employees in the Taxpayer Advocate Service will not only protect their sensitive information but will actively assist them when it has been compromised by others,” said Joyce White Vance, U.S. Attorney for the Northern District of Alabama. “An IRS taxpayer advocate who exploits that trust, and with full knowledge of the significant impacts of identity theft, uses her IRS access to compromise taxpayers’ identities and steal a million dollars from the U.S. Treasury is committing a particularly egregious crime that will not go unpunished.”
IRS tax fraud schemes have become a scourge for consumers and businesses both. The ploys take a variety of forms, often involving repeated phone calls from fake IRS agents warning the victim that he is about to be arrested for unpaid taxes and demanding payment. In those cases, the victims have a chance to defeat the scheme and are aware that it’s going on. But in the case of the scam allegedly run by Hall, the victims had no idea what was going on and were betrayed by someone who worked in the IRS, the organization that is meant to be protecting their personal tax information.
Hall was arrested in Mississippi, and Goodman was arrested in Birmingham, Ala., in December. Coleman already was in prison for a separate charge in Wisconsin.
Prosecutors in the United Kingdom have convicted four men in connection with a widespread phone fraud scam that saw them steal more than £600,000 (more than $912,000) from a number of elderly victims over the course of a year.
The men ran a version of a common scam in which they posed as police officers and called victims, informing them that their bank accounts were being targeted by a fraud campaign. The victims were told that they needed to transfer funds out of their accounts and into other accounts, which were, of course, controlled by the fraudsters. This attack sometimes comprised numerous phone calls to a single victim, and one of the known victims lost £130,000.
The suspects who were convicted include Mohamed Dahir, Sakaria Aden, Yasser Abukar, and Mohammed Sharif Abokar. Authorities in the U.K. say that the arrests were the result of an investigation into terrorist financing and involved a number of victims.
“This callous group of criminals stole vast sums of money from extremely vulnerable and elderly people from across the country. Their despicable actions have had a terrible and devastating impact on their victims with some losing their life savings. The targeting of vulnerable men and women in their 80s and 90s is quite simply beyond belief,” said Richard Walton, head of the Counter-Terrorism Command of the Metropolitan Police in London.
“This was a scam on a huge national scale detected by specialist financial investigators who have stopped the targeting of even more victims. Our investigation remains ongoing and we will continue to arrest and prosecute anyone involved in this fraud.”
The convictions come at a time when similar phone fraud scams are proliferating in a number of countries, especially the United States and U.K. In a similar scheme, a couple in Surrey recently lost £104,000 to scammers.
How did a self-described “pot smoking teenager” manage such a high-profile attack? He used a technique known as “social engineering,” which is a fancy way of saying he tricked a few call center agents. Wired reporter Kim Zetter described the process in an article posted Monday night:
- The hacker started with Brennan’s mobile phone number. After looking it up online, they found that he was a Verizon customer.
- The hacker called Verizon, pretending to be another Verizon employee having technical issues. Verizon call center agents helped the hacker access Brennan’s account number, PIN, backup mobile number, AOL email address and the last four digits on his bank card.
- Working down the daisy chain, the hacker next called AOL, impersonating Brennan himself. He claimed he was locked out of his email account and needed the password reset. AOL customer service reps asked security questions, but the hacker was able to answer correctly using information collected from the earlier Verizon call.
- The hacker reset the password to Brennan’s AOL email and downloaded several years’ worth of information, including Agency-related documents, a log of Brennan’s phone calls, and his contact lists.
The hacker used a similar method to break into Jeh Johnson’s Comcast email account. And we’ve seen this kind of high-profile, call-center-based attack before. In 2012 hackers called Apple to reset reporter Mat Honan’s accounts and take over his Twitter. Earlier this year, novelist Andy Weir was the target, with the hackers calling Comcast to get access to his social media accounts.
Hackers today use the phone channel as a way to quickly and easily gain access to online accounts. They work across industries, gathering information on their targets from different organizations to build a profile before their final attack.
The message is clear: Call centers are the weakest link. All organizations are vulnerable when it comes to the phone channel, because the main line of defense for most call centers is little more than a friendly customer service agent asking a caller for their mother’s maiden name.
Call centers must find better ways to authenticate callers, before agents are able to give away valuable personal information. Organizations that rely on a call center should follow the lead of some of the largest US financial institutions, which are now implementing solutions based on Phoneprinting™ and voice biometrics to authenticate callers based on risk.
Phoneprinting analyzes 147 characteristics of the background audio of a call to determine the caller’s location, device type, and other characteristics, creating a unique identifier for each caller. Within the first 30 seconds of a call, Phoneprinting can tell a call center agent whether the call is suspicious, whether the phone number is being spoofed, or whether the caller is a known fraudster.
Gartner vice president and distinguished analyst Avivah Litan addressed the issue in an article for Forbes last year, writing: “The best security is always layered security, and this principle holds true when securing the telephony channel… Phoneprinting combined with voice biometrics provides the strongest method for detecting fraudsters who call into enterprises.”
Pindrop co-founder and CEO Vijay Balasubramaniyan invented phoneprinting in 2010, while he was working on his PhD at Georgia Tech. Vijay noticed that there were subtle differences in the audio characteristics of phone calls coming from different countries. In his thesis, “Using Single-Ended Audio Features to Determine Call Provenance,” Vijay created an algorithm to analyze call audio and identify anomalies in Caller ID information.
Five years later, Vijay’s thesis has become the core of Pindrop’s Phoneprinting technology. Today, phoneprints analyze 147 audio characteristics to determine the risk level associated with each call. Phoneprinting has gotten more sophisticated as we’ve integrated voice biometrics and caller reputation information to create the Fraud Detection System (FDS).
Many of the largest banks, insurers, brokerages, and retailers in the US are using FDS phoneprinting to protect their call centers. They’re using Pindrop’s solution because it is the only one on the market that flags fraudulent callers on the first call, and functions regardless of call type or quality. Phoneprinting is the only technology that can detect and track voice distortion, Caller ID spoofing, gateway hijacking, and other fraud techniques.
With the new patent, we’re ready to continue to tackle the world of phone fraud and authentication.
Read more about how phoneprinting works in our latest Technology Brief.