Playtech Open Platform (POP) has now expanded its fraud prevention offering to operators around the globe by integrating iovation’s FraudForce technology. The agreement is part of Playtech’s policy to expand its extensive operator toolset via carefully selected strategic partnerships, alongside its significant internal research and development.
Shimon Akad, Chief Operating Officer of Playtech, said, “Everyone at Playtech is delighted to be working with iovation to bring FraudForce to our licensees. IMS is already the most comprehensive player management platform in the industry, but with the Playtech Open Platform, we can enhance its capabilities even further.”
Akad continued, “We are passionate about equipping our operators with world-class fraud prevention tools, and our partnership with iovation is a key part of our strategy to deliver this. FraudForce is a powerful weapon in the war on cybercrime in online gambling, with its integration into IMS providing a seamless boost to the arsenal of our licensees.”
Our FraudForce solution provides unique device recognition technology allowing Playtech to leverage our “Privacy by Design” methodology to ensure ongoing compliance with global privacy laws, including the EU’s new General Data Protection Regulation (GDPR).
Check out the press release.
Pre-register for the 2019 Gambling Report (available January 2019) to learn about the latest fraud trends, as well as the expansion and convergence of digital and physical channels in the gambling marketplace.
iovation has seen a 220% increase in confirmed reports of account takeover (ATO) from our e-commerce customers in the past twelve months. What is driving this rise? And how can you combat it without losing good customers?
In response to customer preferences, many e-commerce sites have launched dedicated apps or optimized their sites for mobile. This move has paid off with the increasing numbers of consumers who want fast, easy checkout and an optimized shopping experience. Retailers that have both mobile sites and apps see, on average, two-thirds of their online sales coming from mobile devices, according to a recent report, which also found that conversion rates are 3x higher for mobile apps than for the mobile web.1
While this move has been lucrative for merchants, it’s also created opportunities for fraudsters. The switch to dedicated accounts and applications, combined with the flood of breached credentials and personal data available on the dark web, has had the unintended consequence of opening the door to ATO attacks.
The Cost of ATO Attacks Goes Far Beyond Revenue
To complicate matters, shoppers are very sensitive to any added friction in their shopping experience. This leaves merchants in the precarious position of having to balance the need to prevent account takeover against preserving a positive customer experience.
Speed Good Customers to Checkout, Stop ATO
Legacy authentication solutions that rely on usernames and passwords cannot protect against ATO. Yet most businesses don’t have the time or resources to completely revamp their systems.
Enter transparent, device-based authentication.
Device-based authentication can easily be layered on top of existing systems without the need for personal data. It adds a second, invisible layer of authentication that can drive step-ups if new or suspicious devices try to access an account, enhancing your existing authentication procedures without heavy lifting or intense coding.
Customers simply register their devices to their accounts and then on subsequent visits a device check is done in the background without any further inputs needed. You receive powerful risk insight that allows you to assess risk factors indicative of ATO, including device anomalies, spoofing, and evasion; and because you have verified that the device belongs to your customer, they enjoy a secure and seamless shopping experience.
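As a rough illustration, the device check and step-up flow described above can be sketched in a few lines. All names and risk signals here are hypothetical stand-ins, not iovation's actual API; a real integration would call the vendor's device-recognition service instead of these stubs:

```python
# Sketch of transparent device-based authentication with risk-driven
# step-ups. KNOWN_DEVICES stands in for the devices a customer has
# registered to their account; RISK_SIGNALS for vendor-supplied signals.

KNOWN_DEVICES = {"alice": {"device-7f3a"}}   # registered devices per account
RISK_SIGNALS = {"device_anomaly", "spoofing", "evasion"}

def assess_login(account, device_id, signals):
    """Return 'deny', 'allow', or 'step_up' for a login attempt."""
    if signals & RISK_SIGNALS:
        return "deny"        # device shows spoofing or evasion behavior
    if device_id in KNOWN_DEVICES.get(account, set()):
        return "allow"       # recognized device: frictionless experience
    return "step_up"         # new device: trigger additional authentication

print(assess_login("alice", "device-7f3a", set()))         # allow
print(assess_login("alice", "device-9b21", set()))         # step_up
print(assess_login("alice", "device-9b21", {"spoofing"}))  # deny
```

The point of the design is that the happy path (a returning customer on a known device) requires no extra input at all; friction is reserved for the unknown or suspicious cases.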
If you’d like to learn more, check out our recent webinar on ATO in E-Commerce.
1 Marketing Land: https://marketingland.com/retailers-shopping-apps-now-see-majority-e-commerce-sales-mobile-234931
2 Gartner, "Market Guide for Online Fraud Detection," January 2018
3 TotalRetail, "5 Ways E-Commerce Merchants Can Combat Identity Fraud"
4 Javelin, "2018 Identity Fraud: Fraud Enters a New Era of Complexity"
5 TransUnion, "2018 Retail Consumer Survey Insights"
In the fight against fraud, analysts maintain a delicate balance. Whilst stronger regulations and policies protect customers’ data, fraudsters become more aggressive and sophisticated.
Even so, one of our premier UK insurance clients collaborated with another insurer to detect a ghost broking ring, which led to several prosecutions.
Over a period of 25 months, someone, or some group, had incepted and then cancelled 83 new motor insurance policies as soon as they had converted.
Whoever was opening and closing these policies was meticulous. Among those 83 applications, very little key data was repeated. The applicants incepted only four or five motor insurance policies per month.
This level of diligence would normally have slipped by undetected. But by using device intelligence our client could see that only two devices (both with true IP addresses in London) were submitting the applications, some of which listed residences hundreds of miles away.
The extra detail framed the core question: Why were these policies being incepted and cancelled so quickly?
Our client used unique, persistent identification numbers assigned to the two suspicious devices to query peers in a global fraud consortium. Sure enough, analysts at another insurer had seen the same devices applying for policies with them also.
Whilst adhering to UK privacy laws and their companies’ privacy policies, the two insurers pooled their observations. Confident that they were discussing the same devices, they shared information such as the number of policies incepted, the average time before the policies were cancelled, and the reasons given for those cancellations.
They concluded that the fraudsters were using the first insurer’s cancellation letters and no-claims bonuses to get cheaper policies with the second insurer. The cheaper policies (their premiums lowered further with false information about the drivers) were then sold to unsuspecting victims.
Equipped with this insight, our client supplied New Scotland Yard with concrete device data that proved essential in the successful prosecution of the ghost broking ring.
The consistent device ID allowed the fraud analysts to keep their balance and obey privacy regulations in their fight against fraud.
It’s no secret that financial institutions are battling against a rising tide of credit write-offs. Investigating the root cause, however, can leave us without satisfying answers, since current economic trends and user behavior patterns don’t line up to account for the steep rise. New research from Gartner may have found a cause, suggesting that “by 2021, first-party fraud and synthetic identity fraud will account for 40% of credit write-offs, up from an estimated 25% today.”
If we look at the evolving fraud landscape for answers, the picture starts to become clearer. Synthetic identity fraud and first-party fraud are evolving faster than most current identity proofing tools and some of the older bust-out models can detect. In fact, most of these models were never designed to detect fraud; they were designed to establish creditworthiness, approve new lines of credit, and verify identity. So when fraudsters get smarter with their synthetic identity practices, they are able to bypass these systems. The result is an increase in credit write-offs and a pool of miscategorized fraud that is never appropriately solved at the source. Once this “hidden” fraud is separated out from the chargeback and credit write-off categories, we see a true picture of the scope of the fraud landscape, which allows for appropriate solutions. Financial institutions can only begin to recover these losses by combatting the correct source of fraud, and that means first identifying it.
In fact, iovation’s own customer data shows that synthetic identity fraud is an ever-growing problem. 2018 customer polling shows that it is the third most common type of fraud they face.
To see exactly how much synthetic identity fraud might be contributing to your inflated chargeback losses, we must first define the terms that contribute to the problem. A synthetic identity, at its core, is either an entirely fabricated identity or an altered version of a real one, created by combining otherwise genuine identity elements from multiple separate identities. This is different from stolen identity fraud, since some or all of the elements of the identity are not real, i.e., synthetic. These synthetic identities are evolved enough that they pass most identity proofing models and count as “real” accounts.
Most institutions only measure and provide checks against direct first-party or third-party fraud losses. These losses vary by institution, but encompass anything that results in a loss where the fraudster is directly using real identity information, whether their own or stolen, with malicious intent. First- or third-party fraud losses can include collusion, bad debt, policy abuse, stolen identities used to open new lines of credit, bust-out schemes, and frivolous chargebacks filed to game the system. Too often, all these types of fraud are lumped together into one category and treated as the entirety of “first-party” or “third-party” fraud.
When all these types of fraud are tracked together in legacy systems, it becomes easy for synthetic identity fraud to masquerade as either first- or third-party fraud. In fact, it’s possible for synthetic identity fraud to be masked entirely, as legitimate chargebacks or credit losses. When current models don’t categorize the different types of fraud, you can’t quantify which types are increasing or where they originate. Most systems identify chargebacks as a typical credit failure. And if an institution isn’t measuring chargebacks as their own category of fraud, it’s difficult to see the dramatic rise in chargebacks and its originating causes. As Gartner’s latest report states, “If you can't accurately name it, you can't measure it, determine what an acceptable level is, and watch how these types of losses grow and change, and you can't justify investments in technology solutions and internal resources to fight it.” The key is to appropriately differentiate the types of first- and third-party losses, their originating causes, and methods of fraud prevention. To prepare and get a clear scope of your own synthetic identity fraud, view our on-demand presentation, which gives you strategies for identifying synthetic identity fraud and battling credit write-offs.
Many businesses are still grappling with how best to satisfy Strong Customer Authentication (SCA) requirements under the EU Payment Services Directive (PSD2) without losing customers. Currently, merchants can choose to opt lower-risk transactions out of strong authentication (mostly implemented as 3-D Secure), which shifts liability back to the merchant. This will no longer be allowed under PSD2, creating a paradigm shift. Leveraging Transaction Risk Analysis will be vital for merchants to retain control over the buyer’s journey, but that will require close collaboration with payment processors.
Consumers are increasingly sensitive to any added friction and are voting with their feet. An estimated 70% of consumers abandon online forms due to a poor experience. So the question is, how do you balance risk and compliance without compromising the customer experience? I suggest by combining next-generation fraud prevention with risk-based consumer authentication, but more on that later.
There is good news!
Paragraph 21 of the European Banking Authority’s (EBA) Regulatory Technical Standards (RTS) responded to industry concerns by conceding that some exemptions should be allowed for risk-based SCA. The relevant section reads:
21. [...] the EBA agrees with the view expressed by these respondents that a risk-based approach, including the ability to conduct detailed Transaction Risk Analysis and fraud monitoring, is essential to achieve the objective under PSD2 of reducing overall fraud.
Consequently the EBA arrived at the view that, in accordance with Article 98(2)(a) PSD2, an exemption based on such an analysis should be added in a new Article 16 RTS. The RTS also reiterate the importance of risk and fraud monitoring in general as a necessary complement to the principle of SCA laid out in PSD2 as stated in a new Article 2 RTS.
Transaction Risk Analysis
Essentially, the EBA has agreed that payment service providers (PSPs) and merchants should be able to request exemptions to SCA if they can attain target fraud rates. To be allowed the exemption based on Transaction Risk Analysis, the solution must operate in real-time and must verify a transaction against anomalies in user behavior. Checkpoints include the following:
The table of exemptions is as follows:
| Exemption threshold value | Reference fraud rate (%) for remote card-based payments |
|---------------------------|---------------------------------------------------------|
| €250                      | 0.01 – 0.06                                             |
| €100                      | 0.06 – 0.13                                             |
Reference fraud rate formula:

Reference Fraud Rate (%) = (Total value of successful fraudulent transactions ÷ Total value of all successful transactions, including SCA-authenticated and exempted) × 100
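To make the arithmetic concrete, the fraud-rate calculation and the exemption bands from the table above can be sketched as follows. The two threshold bands are taken from the table; the figures in the usage example are invented for illustration, and this is of course a sketch, not compliance advice:

```python
# Sketch of the Transaction Risk Analysis (TRA) exemption check implied
# by the reference fraud rate formula and the exemption table above.

def reference_fraud_rate(fraud_value, total_value):
    """Reference fraud rate as a percentage: total value of successful
    fraudulent transactions over total value of all successful
    transactions (SCA-authenticated and exempted alike)."""
    return 100.0 * fraud_value / total_value

def max_exemption_threshold(rate_pct):
    """Highest exemption threshold value (EUR) available to a PSP for
    remote card-based payments at a given reference fraud rate,
    using only the two bands listed in the table."""
    if rate_pct <= 0.06:
        return 250
    if rate_pct <= 0.13:
        return 100
    return 0  # fraud rate too high: no TRA exemption available

# Hypothetical quarter: EUR 12,000 of fraud on EUR 25M of transactions.
rate = reference_fraud_rate(fraud_value=12_000, total_value=25_000_000)
print(f"{rate:.3f}%")                 # 0.048%
print(max_exemption_threshold(rate))  # 250
```

The incentive structure is visible in the code: shaving the fraud rate below 0.06% doubles the value of transactions a PSP can exempt from SCA.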
Competitive Advantages to Achieving Lowest Reference Fraud Rate
PSPs and merchants will have to work much more collaboratively to reduce fraud in order to reach the highest exemption thresholds, but this could provide a major competitive advantage on a number of fronts:
iovation is uniquely suited to help businesses drive down their fraud rates to maximize Transaction Risk Analysis exemptions, while also providing an elegant, risk-based authentication solution to satisfy SCA requirements, helping you strike the balance between compliance and customer experience.
We combine ClearKey, our lightweight and transparent customer authentication, with FraudForce, a real-time risk insight and fraud prevention solution, to confidently identify returning devices and check for signals that could indicate fraud.
iovation’s deep device intelligence allows us to provide real-time data on the location of the payer and payee at time of payment, and to determine previous use of the access device provided to the payment service user for SCA. This intelligence coupled with your data on previous spending patterns of the payer will allow your business to confidently decide to accept, reject or review each transaction. This allows you to reduce your fraud rate, reduce the overall number of transactions subject to SCA, and increase customer satisfaction.
PSD2 requires that SCA use two or more of the following independent factors: knowledge (something only the user knows, such as a password or PIN), possession (something only the user has, such as a registered device), and inherence (something the user is, such as a fingerprint).
For those transactions that are subject to SCA, you can layer transparent authentication onto your existing authentication system to utilize the device (possession factor) as a second authentication factor. This allows you to satisfy SCA requirements without adding additional steps to the checkout process unnecessarily.
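As a minimal illustration of the "two or more independent factors" rule, the check below verifies that the presented factors span at least two distinct PSD2 categories. The factor names are illustrative, not a real compliance check:

```python
# Sketch: PSD2's SCA requires factors from at least two independent
# categories (knowledge, possession, inherence). Factor names are
# hypothetical examples.

FACTOR_CATEGORY = {
    "password": "knowledge",
    "pin": "knowledge",
    "registered_device": "possession",  # e.g. a transparent device check
    "fingerprint": "inherence",
}

def satisfies_sca(factors):
    """True if the presented factors span two or more categories."""
    categories = {FACTOR_CATEGORY[f] for f in factors}
    return len(categories) >= 2

print(satisfies_sca(["password", "registered_device"]))  # True
print(satisfies_sca(["password", "pin"]))                # False (both knowledge)
```

Note the second case: a password plus a PIN is still only one category, which is exactly why layering a transparent device check (a possession factor) onto an existing password login can satisfy SCA without extra checkout steps.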
MFA for SCA, Hip Hip Hooray
Another option is utilizing iovation’s mobile multifactor authentication (MFA) solution, LaunchKey. There are some key advantages to this approach.
Compliance doesn’t have to come at the cost of degrading the customer experience. To achieve market differentiation in the age of PSD2, PSPs and merchants will need to closely collaborate to optimize their fraud prevention strategies while also elegantly solving for SCA requirements. Learn how iovation’s solutions can help your business.
Authentication capabilities have evolved vastly in recent years. In the beginning, a combination of username and password was all that was needed to authenticate an end user’s identity online. If more stringent authentication was required, a second “factor,” such as a passcode or knowledge-based question, could be requested. A popular second-factor method was, and still is, to send a one-time passcode (OTP) to a user’s mobile device.
But the OTP is often sent over SMS, which is potentially vulnerable to hacking. So all a fraudster needs to get access to your account is your username/password and your mobile phone number. The username/password is easy enough: since 2013, nearly 10 billion data records have been exposed and are available to cybercriminals (Gartner Report: Market Guide for Online Fraud Detection, January 2018), and two of the most popular passwords in use today are, you guessed it, “123456” and “password.” Then all one needs is your mobile phone number, which is not at all difficult for a cunning cybercriminal to obtain, perhaps from a social web site, and a SIM swap makes it easy to impersonate you and intercept the passcode. And once access has been granted to your account, depending on your level of authorization or security clearance, everything else is accessible. The fox is in the henhouse, so to speak.
What about knowledge-based questions, you might ask? Perfectly legitimate question. Knowledge based questions can certainly add a layer of assurance. These higher-friction authentication methods can definitely be effective, and are valuable for high-risk transaction requests. In the case of social networks, those transactions are not always seen as vulnerable, so knowledge-based questions are not always employed at every login point.
The recently announced data breach at Reddit is just one example of the weakness of two-factor authentication. The breached account was protected by two-factor authentication, which a fraudster defeated by intercepting the SMS authentication code. And since the breached account had access to customer and company information, including database backups, all of it was available to the fraudster. Reddit is by no means alone. Yahoo and LinkedIn are just two more examples of massive data breaches at sites that were protected by two-factor authentication. And, as evidenced by these examples, it’s not only e-commerce sites that need this protection. It’s any site that stores user or other sensitive information.
So what can you do to protect your site and your users and still give them a satisfactory visit? Simple. Employ a dynamic, context-aware multi-factor authentication solution.
ClearKey has the ability to match a user account to a device, or multiple devices, and recognize that pairing at login. This authentication is done transparently; the user doesn’t have to do anything. ClearKey performs a deep analysis of the login device to make sure it is one that is registered to the account. If a fraudster is attempting to evade detection or spoof the device characteristics, ClearKey will detect it. Then, even if they are able to intercept an SMS message, it won’t do them any good.
And ClearKey can be used to authenticate at any step in the customer journey. For relatively harmless transactions, like checking a balance, perhaps ClearKey authentication is all the business requires. For more risky transactions, like transferring funds, LaunchKey can be employed as an additional authentication factor. With LaunchKey the transaction can be authorized on your mobile phone by entering a PIN code, or circle code or with a simple fingerprint.
The bottom line is that fraudsters are getting better at their jobs, and two-factor authentication brings new risks when it comes to securing websites and their users. Today’s websites need to deploy a true dynamic multifactor authentication solution to protect against fraud while providing users with a satisfactory online experience.
Machine learning or artificial intelligence – which holds the most potential for securing transactions in the future? In practice, the differences in the disciplines’ predictive capabilities blend together amid the hype. What really matters for today’s security pros is the ability to take trust beyond basic scoring and leverage everything we know from our Internet-scale experience.
Moving Beyond the Device
At first, we at iovation focused on uncovering patterns for device reputation. Our systems recognized users’ machines over time with high assurance. (And still do.) As devices took actions on our subscribers’ systems, our users catalogued evidence of fraudulent and abusive (i.e., ‘bad’) behavior. We augmented those data with the relationships we detected across multiple devices and accounts. Administrators could then write rules to flag devices showing associations with other bad devices or accounts, records of past bad behavior, or other risk signals.
In this early period, we used some machine learning techniques and simple predictive analytics to refine device recognition patterns with user input and device-account associations.
Also, we encountered a persistent challenge for machine learning: data quality. Some of our users were flagging devices even without confirmed evidence of fraud or abuse. That led to a self-fulfilling prophecy. When the flagged devices reappeared, it made things look that much worse in the user’s system. In those instances, we would explain what was happening and recommend that evidence be reserved for confirmed cases of fraud or abuse.
By 2009, we had 100 million devices in our data set (a number that has grown to more than 5 billion today). We could look closer at millions of transactions, analyzing every aspect of devices involved in fraud for patterns. As our database grew, customers expressed interest in additional axes of risk, such as anomalies in velocity, geo-location, suspicious combinations of device attributes, and other variables. They also wanted to be able to identify trustworthy devices.
Using all the data
Our application of machine learning sharpened our subscribers’ fraud detection, but only tapped into the 2% of transactions that were fraudulent. TrustScore changed that. It was a predictive trust model that looked out for devices with good reputations.
We applied machine learning to the 98% of transactions coming from legitimate customers. TrustScore provided some value, but it was a little too narrow in its approach. It could only comment on devices with tenure in the system, not new devices, and it only predicted trustworthiness, not risk. Our subscribers really wanted a complete predictive score that would call out both trustworthy and risky transactions. That is SureScore.
Instead of focusing solely on the trustworthiness of a device, this approach allows us to make real-time predictions without any knowledge about the particular user involved in the transaction. We analyze clues that the device alone doesn’t offer, such as transactional, contextual and behavioral indicators. That level of nuance exposed more opportunity.
We found a sizeable gray area in the space between clear threats and good customers. Some trustworthy users trigger fraud-prevention measures by happenstance but are otherwise harmless. Identifying the characteristics of honest customers – instead of scanning only for the bad ones – helped minimize this group’s time spent in the review queue. This brought a measurable benefit to the business, too. Beyond catching fraud, our modelling improved the efficiency of our customers’ workflow by reducing the number of cases that require manual intervention.
Machine learning needs human judgement
As I mentioned earlier, the potential of machine learning models is influenced by the quality of the data and the decisions based upon them. For example, in banking, thousands of rapid log-ins and transactions from a single source are a hallmark of fraud rings. Or they could come from popular finance software like Yodlee or Mint. When writing policies and setting rules, institutions have to make judgments that go beyond what a predictive algorithm is capable of learning from transaction data alone.
You need contextual insight to recognize the difference between an aggregator and a fraud ring, even when they exhibit the same behavior. A lot of data cleansing goes into quality machine learning. Neglect that, and it will directly impact the algorithm’s efficacy.
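The aggregator-versus-fraud-ring distinction can be made concrete with a toy rule. The source IP and the velocity threshold below are invented for illustration; the point is that the whitelist of known aggregators encodes human judgement that no model could learn from transaction volume alone:

```python
# Sketch: high-velocity logins from one source look identical whether
# they come from a fraud ring or from a legitimate account aggregator
# (e.g. personal-finance software). Contextual knowledge, here a
# hypothetical whitelist of aggregator IPs, disambiguates them.

KNOWN_AGGREGATORS = {"203.0.113.10"}  # hypothetical aggregator source IPs

def classify_source(ip, logins_per_minute):
    """Classify a traffic source given its login velocity and context."""
    if logins_per_minute > 100:       # illustrative velocity threshold
        if ip in KNOWN_AGGREGATORS:
            return "aggregator"       # same behavior, benign context
        return "suspected_fraud_ring"
    return "normal"

print(classify_source("203.0.113.10", 500))  # aggregator
print(classify_source("198.51.100.7", 500))  # suspected_fraud_ring
```

Drop the whitelist and the two sources become indistinguishable, which is the data-quality trap described above: the model would learn to punish aggregators and their customers.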
This reminds me of a short and relevant story. While working at a different company, I once evaluated an intrusion-detection system. The system’s algorithm recognized 100% of intrusion attempts on what was the industry’s standard sample dataset. However, as we dug into the results, we realized that almost all of the attacks in the sample data included automated, scripted elements. The intrusion-detection system’s machine learning had decided to focus on network sessions with short durations.
When we simply slowed down the scripts, the same attacks went through undetected. The algorithm didn't have the extra context to understand how trivially session duration could be defeated as a signal.
This is all to say that you can't blindly trust machine learning to solve your problems. It's going to help. It's going to catch problems that static rules might not. But predictive models still need tuning and oversight from experts in the systems at stake.
Mr. Daniel served as Cybersecurity Advisor to President Obama during the second term of the administration, and now leads the Cyber Threat Alliance (CTA). He’s the keynote speaker for Fraud Force Chicago. Register for the event to hear him speak, and benefit in these nine other ways.
During his presentation – A 360 Degree Outlook On The Global Security Landscape – Mr. Daniel will share insight distilled from his service during the Obama years, and since refined at the Cyber Threat Alliance.
If you’re like me, and struggle to list six major cybersecurity incidents that occurred between 2012 and 2016, let alone describe them in any detail, then you might benefit from this short primer.
Fraudsters succeed because – in part – they’re willing to share sensitive information (albeit for a profit). Mr. Daniel has been a strong advocate for information sharing in the name of defense, too.
The private and public sectors shouldn’t have to reinvent the mechanisms and processes for producing and acting on threat intelligence. Cybersecurity firms such as Fortinet, Intel Security, Palo Alto Networks, and Symantec have already done a fine job. That’s where the CTA fits in.
In 2014 those four firms agreed to embark on an experiment. Even though they competed for some of the same contracts and clients, they would share threat intelligence in the name of greater security.
Members upload packages of normalized data that conform to the CTA’s platform. (They anonymize data identifying the parties under observation.) The platform validates members’ submissions by correlating information uploaded by other members. As long as members continue to upload enough information, they’re allowed to download other members’ submissions for ingestion into their respective platforms.
(We at iovation love the underlying principle at work here. On the iovation Intelligence Center, our users have shared over 50M confirmed incidents of fraud and abuse. This device intelligence helps all members detect and prevent fraud on known devices, while still defining their own business rules.)
Instead of competing with different, incomplete pools of information, the CTA’s members (16 as of publication) compete on the value they create with those pooled data; better integrating with clients’ tech stacks, or better fitting with clients’ business models. Sharing data makes all of these competitors better at securing their clients.
In spite of the technical nature of the domain, Mr. Daniel espouses a holistic risk management approach to cybersecurity. He points out that the challenge encompasses political, economic, psychological and behavioral factors.
For example, Mr. Daniel wrote in the Harvard Business Review: “Sharing information among people at human speed may work in many physical contexts, but it clearly falls short in cyberspace. As long [as] we continue to try to map physical-world models onto cyberspace, they will fall short in some fashion.”
Mr. Daniel has openly acknowledged some challenges to the CTA’s process:
We applaud his candor and creative thinking.
Let’s close with a smart analogy: look to natural disaster preparedness as a model. If the event overwhelms local responders, then surrounding groups and the state can bring extra help. And so on all the way up to the Federal Emergency Management Agency (FEMA).
Mr. Daniel has suggested a similar fluid approach might be appropriate for cyber threats, but we need to address some important questions first; “How do we do the handoff, and decide whether something is the kind of thing the private sector can and should handle on its own, versus something that calls for feds to help? We don’t yet have the policy language to talk about what that relationship is.”
You can be sure Mr. Daniel and the CTA’s members are working on preliminary answers. Come to Fraud Force Chicago for a preview in greater detail.
A botnet typically consists of a network of devices that have weak security and have been infected by malware, allowing them to be remotely controlled from another location. These could be computers, phones, game consoles, baby monitors, children’s toys – basically anything that connects to the internet or the internet of things (IoT).
Why would criminals spend time researching or socially engineering a fake user profile when they have a quicker and easier tool available? Botnets can run through extensive lists of username and password variations to hijack accounts in seconds. With the huge amounts of leaked data available for criminals to take advantage of, the use of a botnet is a logical next step.
Botnets aren’t even that expensive; you can rent or purchase one for as little as £5. The Mirai botnet of 400,000+ devices (recently seen attacking the Finance sector) can be rented for as little as £2,000 a week.
Botnets are used to incept fraudulent policies en masse, to take over accounts and access documentation, and to make policy changes in order to later commit “crash for cash” fraud.
How many of your users have the same username and password spread across multiple accounts? I’m willing to bet that this is the rule, not the exception.
When it comes to protecting yourself, you need to be able to recognise the devices that want to enter your secured portals. Has that device accessed this account before? If yes, you can assume they are a lower level of risk. If not, they need to jump through a few more hoops.
After all, why would someone be logging in from a children’s toy or be able to complete a lengthy form in milliseconds?
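Those two questions, "is the device known to this account?" and "was the form completed implausibly fast?", translate directly into a simple risk rule. The threshold and names below are illustrative assumptions, not a real product's logic:

```python
# Sketch: scoring a login/form submission using device recognition plus
# a bot tell (millisecond form completion). Threshold is hypothetical.

MIN_HUMAN_FORM_MS = 2_000  # humans rarely complete a lengthy form in <2s

def login_risk(device_known, form_fill_ms):
    """Return 'high', 'elevated', or 'low' risk for a submission."""
    if form_fill_ms < MIN_HUMAN_FORM_MS:
        return "high"      # implausibly fast completion suggests automation
    if not device_known:
        return "elevated"  # unrecognized device: add authentication hoops
    return "low"           # known device at human speed: low friction

print(login_risk(device_known=True, form_fill_ms=45_000))   # low
print(login_risk(device_known=False, form_fill_ms=30_000))  # elevated
print(login_risk(device_known=False, form_fill_ms=80))      # high
```

Only the "elevated" and "high" outcomes would trigger extra hoops, which is the asymmetry that stops botnets while leaving trusted customers alone.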
The device intelligence you gain here can be fed into an authentication process to help you assess the risk a device poses and step up the authorisation with multi-factor authentication.
A botnet can’t provide the extra levels of authentication needed but your trusted customers can.
Ghost Brokers sell fake or fraudulently obtained insurance policies to individuals at what appear to be very cheap premiums. Unfortunately, these policies are not worth the electronic mail they’re written on.
This can work in two ways: either the policy is incepted using falsified or stolen identity data and sold on to unwitting customers, or fake policy documents are produced to impersonate a legitimate insurer. We’re going to focus on how the former is evolving and how you can combat it.
In our last editorial we wrote about the rise in True Identity Theft, Account Takeover and 3rd Party Application Fraud reported by our insurance clients, all fraud types prevalent in Ghost Broking operations. The threat from Ghost Brokers is not a new one, but the techniques they use to avoid detection are constantly evolving.
Fraudsters know they are being watched; we see this when investigating the origin of questionable policies. They have started to create smaller numbers of falsified accounts per combination of deceptive details. They will actively change not only an identity, but also the device in use, to avoid detection. They are able to change the device profile (is it a laptop? tablet? mobile phone?), the location (are they based in the UK or further afield?), the MAC address, local language, operating system, and screen resolution, and this is just scratching the surface.
Fraudsters have hundreds of variables to play with to enable their behaviour, so how is the insurance industry keeping up?
Firstly, there are offerings on the market that can help you recognise these signs and combat fraud (disclaimer: iovation is one of them!). Secondly, insurers can collaborate by sharing fraud insights with other insurers, or even other industries, to get a better picture of evolving threats. This is a powerful tool that allows insurers to connect the dots on ghost broking rings before falling victim to them. After all, insurance is not the only defrauded industry, and these sophisticated fraud techniques are being used and perfected elsewhere.
Recognition and collaboration are the biggest weapons in fighting fraud, because businesses cannot solve such large issues in isolation. With these tools you can evolve your fraud prevention, and help ensure that your policies ARE worth the electronic mail they’re written on.