In the fight against fraud, analysts maintain a delicate balance. Whilst stronger regulations and policies protect customers’ data, fraudsters become more aggressive and sophisticated.
Nevertheless, one of our premier UK insurance clients managed to collaborate with another insurer to detect a ghost broking ring which led to several prosecutions.
Over a period of 25 months, someone, or some group, had incepted and then cancelled 83 new motor insurance policies as soon as they converted.
Whoever was opening and closing these policies was meticulous. Among those 83 applications, very little key data was repeated. The applicants incepted only four or five motor insurance policies per month.
This level of diligence would normally have slipped by undetected. But by using device intelligence our client could see that only two devices (both with true IP addresses in London) were submitting the applications, some of which listed residences hundreds of miles away.
The extra detail framed the core question: Why were these policies being incepted and cancelled so quickly?
Our client used unique, persistent identification numbers assigned to the two suspicious devices to query peers in a global fraud consortium. Sure enough, analysts at another insurer had seen the same devices applying for policies with them also.
Whilst adhering to UK privacy laws and their companies’ privacy policies, the two insurers pooled their observations. Confident that they were discussing the same devices, they shared information such as the number of policies incepted, the average time before the policies were cancelled, and the reasons given for those cancellations.
They concluded that the fraudsters were using the first insurer’s cancellation letters and no-claims bonuses to get cheaper policies with the second insurer. The cheaper policies (their premiums lowered further with false information about the drivers) were then sold to unsuspecting victims.
Equipped with this insight, our client supplied New Scotland Yard with concrete device data that proved essential in the successful prosecution of the ghost broking ring.
The consistent device ID allowed the fraud analysts to keep their balance and obey privacy regulations in their fight against fraud.
It’s no secret that financial institutions are battling a rising tide of credit write-offs. Investigating the root cause, however, can leave us without satisfying answers, since current economic trends and user behavior patterns don’t line up to account for the steep rise. New research from Gartner may have found a cause. Their latest research suggests that “by 2021, first-party fraud and synthetic identity fraud will account for 40% of credit write-offs, up from an estimated 25% today.”
If we look at the evolving fraud landscape for answers, the picture starts to become clearer. Synthetic identity fraud and first-party fraud are evolving at a rate that most current identity-proofing tools, and some of the older bust-out models, are unable to detect. In fact, most of these models were never designed to detect fraud; they were designed to establish creditworthiness, approve new lines of credit, and verify identity. So when fraudsters get smarter with their synthetic identity practices, they are able to bypass these systems. The result is an increase in credit write-offs and a pool of miscategorized fraud that is never appropriately solved at the source. Once this “hidden” fraud is removed from the chargeback and credit write-off categories, we see a true picture of the scope of the fraudscape, which allows for appropriate solutions. Financial institutions can only begin to recover these losses by combatting the correct source of fraud, and that starts with identifying it.
In fact, iovation’s own customer data shows that synthetic identity fraud is an ever-growing problem. 2018 customer polling shows that it is the third most common type of fraud they face.
To see exactly how much synthetic identity fraud might be contributing to your inflated chargeback losses, we first must define the terms that contribute to the problem. A synthetic identity, at its core, is either an entirely fabricated identity or an altered version of a real identity, created by combining otherwise genuine identity elements from multiple separate identities. This is different from stolen-identity fraud, since some or all of the elements of the identity are not real: they are synthetic. These synthetic identities are evolved enough that they pass most identity-proofing models and count as “real” accounts.
Most institutions only measure and provide checks against direct first-party or third-party fraud losses. These losses vary by institution, but encompass anything that results in a loss where the fraudster is directly using real identity information, whether their own or stolen, with malicious intent. First- or third-party fraud losses can include collusion, bad debt, policy abuse, stolen identities used to open new lines of credit, bust-out schemes, and frivolous chargebacks intended to game the system. Too often, all these types of fraud are lumped together into one category and treated as the entirety of “first-party” or “third-party” fraud.
When all these types of fraud are tracked together in legacy systems, it becomes easy for synthetic identity fraud to masquerade as either first- or third-party fraud. In fact, it’s possible for synthetic identity fraud to be masked entirely, as legitimate chargebacks or credit losses. When current models don’t categorize the different types of fraud, you can’t quantify which types are increasing or where they originate. Most systems identify chargebacks as a typical credit failure, and if an institution isn’t measuring chargebacks as their own category of fraud, it’s difficult to see the dramatic rise in chargebacks and its originating causes. As Gartner’s latest report states, “If you can't accurately name it, you can't measure it, determine what an acceptable level is, and watch how these types of losses grow and change, and you can't justify investments in technology solutions and internal resources to fight it.” The key is to appropriately differentiate the types of first- and third-party losses, their originating causes, and the methods of fraud prevention. To get a clear scope of your own synthetic identity fraud, view our on-demand presentation, which gives you strategies for identifying synthetic identities and battling credit write-offs.
Many businesses are still grappling with how best to satisfy Strong Customer Authentication (SCA) requirements under the EU Payment Services Directive (PSD2) without losing customers. Currently, merchants can choose to opt lower-risk transactions out of SCA requirements, mostly in the form of 3-D Secure, which shifts liability back to the merchant. This will no longer be allowed under PSD2, creating a paradigm shift. Leveraging Transaction Risk Analysis will be vital for merchants to retain control over the buyer’s journey, but that will require close collaboration with payment processors.
Consumers are increasingly sensitive to any added friction and are voting with their feet. An estimated 70% of consumers abandon online forms due to a poor experience. So the question is, how do you balance risk and compliance without compromising the customer experience? I suggest by combining next-generation fraud prevention with risk-based consumer authentication, but more on that later.
There is good news!
Paragraph 21 of the European Banking Authority’s (EBA) Regulatory Technical Standards (RTS) document responded to industry concerns, conceding that some exemptions should be allowed for risk-based SCA. The relevant section reads:
21. [...] the EBA agrees with the view expressed by these respondents that a risk-based approach, including the ability to conduct detailed Transaction Risk Analysis and fraud monitoring, is essential to achieve the objective under PSD2 of reducing overall fraud.
Consequently the EBA arrived at the view that, in accordance with Article 98(2)(a) PSD2, an exemption based on such an analysis should be added in a new Article 16 RTS. The RTS also reiterate the importance of risk and fraud monitoring in general as a necessary complement to the principle of SCA laid out in PSD2 as stated in a new Article 2 RTS.
Transaction Risk Analysis
Essentially, the EBA has agreed that payment service providers (PSPs) and merchants should be able to request exemptions to SCA if they can attain target fraud rates. To be allowed the exemption based on Transaction Risk Analysis, the solution must operate in real time and must check each transaction for anomalies in user behavior, among other risk signals.
The table of exemptions is as follows:
| Exemption threshold value | Reference fraud rate (%) for remote card-based payments |
| --- | --- |
| €250 | 0.01 - 0.06 |
| €100 | 0.06 - 0.13 |
Reference fraud rate formula:
Reference Fraud Rate % = (Total value of successful fraudulent transactions ÷ Total value of all successful transactions, including both SCA-authenticated and exempted) × 100
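As a minimal sketch, the fraud rate calculation and exemption lookup above might look like the following. The transaction values are hypothetical, and the tier list simply encodes the table above under the reading that a fraud rate at or below a tier’s upper bound qualifies for that tier’s threshold:

```python
# Sketch of the reference fraud rate calculation and the TRA exemption
# tiers from the table above. All values are hypothetical illustrations.

def reference_fraud_rate(fraud_value: float, total_value: float) -> float:
    """Fraud rate as a percentage of the total value of successful transactions."""
    return 100.0 * fraud_value / total_value

# (exemption threshold in EUR, maximum reference fraud rate %)
EXEMPTION_TIERS = [(250, 0.06), (100, 0.13)]

def max_exemption_threshold(rate_pct: float) -> int:
    """Highest transaction value exemptible from SCA under TRA; 0 if none."""
    for threshold, max_rate in EXEMPTION_TIERS:
        if rate_pct <= max_rate:
            return threshold
    return 0

# Example: EUR 40,000 of fraud across EUR 80M of successful transactions.
rate = reference_fraud_rate(40_000, 80_000_000)
print(rate)                           # 0.05
print(max_exemption_threshold(rate))  # 250
```

Under this reading, a provider keeping its reference fraud rate at or below 0.06% could exempt remote card payments up to €250, while a rate above 0.13% would leave no TRA exemption available.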
Competitive Advantages to Achieving Lowest Reference Fraud Rate
PSPs and merchants will have to work much more collaboratively to reduce fraud in order to reach the highest exemption thresholds, but this could provide a major competitive advantage on a number of fronts:
iovation is uniquely suited to help businesses drive down their fraud rates to maximize Transaction Risk Analysis exemptions, while also providing an elegant, risk-based authentication solution to satisfy SCA requirements, helping you strike the balance between compliance and customer experience.
We combine ClearKey, our lightweight and transparent customer authentication, with FraudForce, a real-time risk insight and fraud prevention solution, to confidently identify returning devices and check for signals that could indicate fraud.
iovation’s deep device intelligence allows us to provide real-time data on the location of the payer and payee at time of payment, and to determine previous use of the access device provided to the payment service user for SCA. This intelligence coupled with your data on previous spending patterns of the payer will allow your business to confidently decide to accept, reject or review each transaction. This allows you to reduce your fraud rate, reduce the overall number of transactions subject to SCA, and increase customer satisfaction.
PSD2 requires that SCA use two or more of the following independent factors:

- Knowledge: something only the user knows, such as a password or PIN
- Possession: something only the user possesses, such as a registered device
- Inherence: something the user is, such as a fingerprint
For those transactions that are subject to SCA, you can layer transparent authentication onto your existing authentication system to utilize the device (possession factor) as a second authentication factor. This allows you to satisfy SCA requirements without adding additional steps to the checkout process unnecessarily.
MFA for SCA, Hip Hip Hooray
Another option is utilizing iovation’s mobile multifactor authentication (MFA) solution, LaunchKey. There are some key advantages to this approach.
Compliance doesn’t have to come at the cost of degrading the customer experience. To achieve market differentiation in the age of PSD2, PSPs and merchants will need to closely collaborate to optimize their fraud prevention strategies while also elegantly solving for SCA requirements. Learn how iovation’s solutions can help your business.
Authentication capabilities have evolved vastly in recent years. In the beginning, a combination of username and password was all that was needed to authenticate an end user’s identity online. If more stringent authentication was required, a second “factor,” such as a passcode or knowledge-based question, could be requested. A popular second-factor method was, and still is, to send a one-time passcode (OTP) to a user’s mobile device.
But the OTP is often sent over SMS, which is potentially vulnerable to hacking. So all a fraudster needs to access your account is your username/password and your mobile phone number. The username/password is easy enough: since 2013, nearly 10 billion data records have been exposed and made available to cybercriminals (Gartner Report: Market Guide for Online Fraud Detection, January 2018), and two of the most popular passwords in use today are, you guessed it, “123456” and “password.” Then all one needs is your mobile phone number, which is not at all difficult for a cunning cybercriminal to obtain: it can be pulled from a social website, and a SIM swap makes impersonating you easy. And once access has been granted to your account, depending on your level of authorization or security clearance, everything else is accessible. The fox is in the henhouse, so to speak.
What about knowledge-based questions, you might ask? Perfectly legitimate question. Knowledge based questions can certainly add a layer of assurance. These higher-friction authentication methods can definitely be effective, and are valuable for high-risk transaction requests. In the case of social networks, those transactions are not always seen as vulnerable, so knowledge-based questions are not always employed at every login point.
The recently announced data breach at Reddit is just one example of the weakness of two-factor authentication. The compromised account was protected by two-factor authentication, which a fraudster was able to defeat by intercepting the SMS authentication code. And since the breached account had access to customer and company information, including database backups, all of it was available to the fraudster. Reddit is by no means alone: Yahoo and LinkedIn are just two more examples of massive data breaches at sites that were protected by two-factor authentication. And, as these examples show, it’s not only ecommerce sites that need this protection; it’s any site that stores user or other sensitive information.
So what can you do to protect your site and your users and still give them a satisfactory visit? Simple. Employ a dynamic, context-aware multi-factor authentication solution.
ClearKey has the ability to match a user account to a device, or multiple devices, and recognize that pairing at login. This authentication is done transparently; the user doesn’t have to do anything. ClearKey performs a deep analysis of the login device to make sure it is one that is registered to the account. If a fraudster is attempting to evade detection or spoof the device characteristics, ClearKey will detect it. Then, even if they are able to intercept an SMS message, it won’t do them any good.
And ClearKey can be used to authenticate at any step in the customer journey. For relatively harmless transactions, like checking a balance, perhaps ClearKey authentication is all the business requires. For riskier transactions, like transferring funds, LaunchKey can be employed as an additional authentication factor. With LaunchKey, the transaction can be authorized on your mobile phone by entering a PIN code or circle code, or with a simple fingerprint.
The bottom line is that fraudsters are getting better at their jobs, and two-factor authentication carries new risks when it comes to securing websites and users. Today’s websites need to deploy a true, dynamic multifactor authentication solution to protect against fraud while providing users with a satisfactory online experience.
Machine learning or artificial intelligence – which holds the most potential for securing transactions in the future? In practice, the differences in the disciplines’ predictive capabilities blend together amid the hype. What really matters for today’s security pros is the ability to take trust beyond basic scoring and leverage everything we know from our Internet-scale experience.
Moving Beyond the Device
At first, we at iovation focused on uncovering patterns for device reputation. Our systems recognized users’ machines over time with high assurance. (And still do.) As devices took actions on our subscribers’ systems, our users catalogued evidence of fraudulent and abusive (i.e., ‘bad’) behavior. We augmented those data with the relationships we detected across multiple devices and accounts. Administrators could then write rules to flag devices showing associations with other bad devices or accounts, records of past bad behavior, or other risk signals.
In this early period, we used some machine learning techniques and simple predictive analytics to refine device recognition patterns with user input and device-account associations.
Also, we encountered a persistent challenge for machine learning: data quality. Some of our users were flagging devices even without confirmed evidence of fraud or abuse. That led to a self-fulfilling prophecy. When the flagged devices reappeared, it made things look that much worse in the user’s system. In those instances, we would explain what was happening and recommend that evidence be reserved for confirmed cases of fraud or abuse.
By 2009, we had 100 million devices in our data set (a number that has grown to more than 5 billion today). We could look closer at millions of transactions, analyzing every aspect of devices involved in fraud for patterns. As our database grew, customers expressed interest in additional axes of risk, such as anomalies in velocity, geo-location, suspicious combinations of device attributes, and other variables. They also wanted to be able to identify trustworthy devices.
Using all the data
Our application of machine learning sharpened our subscribers’ fraud detection, but only tapped into the 2% of transactions that were fraudulent. TrustScore changed that. It was a predictive trust model that looked out for devices with good reputations.
We applied machine learning to the 98% of transactions coming from legitimate customers. TrustScore provided some value, but it was a little too narrow in its approach. It could only comment on devices with tenure in the system, not new devices, and it only predicted trustworthiness, not risk. Our subscribers really wanted a complete predictive score that would call out both trustworthy and risky transactions. That is SureScore.
Instead of focusing solely on the trustworthiness of a device, this approach allows us to make real-time predictions without any knowledge about the particular user involved in the transaction. We analyze clues that the device alone doesn’t offer, such as transactional, contextual and behavioral indicators. That level of nuance exposed more opportunity.
We found a sizeable gray area in the space between clear threats and good customers. Some trustworthy users trigger fraud-prevention measures by happenstance but are otherwise harmless. Identifying the characteristics of honest customers – instead of scanning only for the bad ones – helped minimize this group’s time spent in the review queue. This brought a measurable benefit to the business, too. Beyond catching fraud, our modelling improved the efficiency of our customers’ workflow by reducing the number of cases that require manual intervention.
Machine learning needs human judgement
As I mentioned earlier, the potential of machine learning models is influenced by the quality of the data and the decisions based upon them. For example, in banking, thousands of rapid log-ins and transactions from a single source are a hallmark of fraud rings. Or they could come from popular finance software like Yodlee or Mint. When writing policies and setting rules, institutions have to make judgments that go beyond what a predictive algorithm is capable of learning from transaction data alone.
You need contextual insight to recognize the difference between an aggregator and a fraud ring, even when they exhibit the same behavior. A lot of data cleansing goes into quality machine learning. Neglect that, and it will directly impact the algorithm’s efficacy.
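The aggregator-versus-fraud-ring distinction can be illustrated with a toy rule. The aggregator list, threshold, and source names below are hypothetical placeholders, not a real detection policy:

```python
# Sketch of a velocity rule with contextual allow-listing, as described
# above. Names and thresholds are hypothetical, for illustration only.

KNOWN_AGGREGATORS = {"yodlee", "mint"}   # sources the institution has vetted
VELOCITY_LIMIT = 50                      # logins per hour from one source

def flag_source(source: str, logins_last_hour: int) -> str:
    if logins_last_hour <= VELOCITY_LIMIT:
        return "ok"
    # High velocity alone is ambiguous: apply contextual insight.
    if source in KNOWN_AGGREGATORS:
        return "ok"          # legitimate account-aggregator traffic
    return "review"          # possible fraud ring: route to analysts

print(flag_source("mint", 5_000))      # ok
print(flag_source("unknown", 5_000))   # review
```

The allow-list is exactly the kind of contextual judgment a predictive model cannot learn from transaction data alone; it has to be supplied by humans who know what an aggregator is.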
This reminds me of a short and relevant story. While working at a different company, I once evaluated an intrusion-detection system. The system’s algorithm recognized 100% of intrusion attempts on what was the industry’s standard sample dataset. However, as we dug into the results, we realized that almost all of the attacks in the sample data included automated, scripted elements. The intrusion-detection system’s machine learning had decided to focus on network sessions with short durations.
When we simply slowed down the scripts, the same attacks went through undetected. The algorithm had no way to understand how trivial session duration was to manipulate.
This is all to say that you can't blindly trust machine learning to solve your problems. It's going to help. It's going to catch problems that static rules might not. But predictive models still need tuning and oversight from experts in the systems at stake.
Mr. Daniel served as Cybersecurity Advisor to President Obama during the second term of the administration, and now leads the Cyber Threat Alliance (CTA). He’s the keynote speaker for Fraud Force Chicago. Register for the event to hear him speak, and benefit in these nine other ways.
During his presentation – A 360 Degree Outlook On The Global Security Landscape – Mr. Daniel will share insight distilled from his service during the Obama years, and since refined at the Cyber Threat Alliance.
If you’re like me, and struggle to list six major cybersecurity incidents that occurred between 2012 and 2016, let alone describe them in any detail, then you might benefit from this short primer.
Fraudsters succeed because – in part – they’re willing to share sensitive information (albeit for a profit). Mr. Daniel has been a strong advocate for information sharing in the name of defense, too.
The private and public sectors shouldn’t have to reinvent the mechanisms and processes for producing and acting on threat intelligence. Cybersecurity firms such as Fortinet, Intel Security, Palo Alto Networks, and Symantec have already done a fine job. That’s where the CTA fits in.
In 2014 those four firms agreed to embark on an experiment. Even though they competed for some of the same contracts and clients, they would share threat intelligence in the name of greater security.
Members upload packages of normalized data that conform to the CTA’s platform. (They anonymize data identifying the parties under observation.) The platform validates members’ submissions by correlating information uploaded by other members. As long as members continue to upload enough information, they’re allowed to download other members’ submissions for ingestion into their respective platforms.
(We at iovation love the underlying principle at work here. On the iovation Intelligence Center, our users have shared over 50M confirmed incidents of fraud and abuse. This device intelligence helps all members detect and prevent fraud on known devices, while still defining their own business rules.)
Instead of competing with different, incomplete pools of information, the CTA’s members (16 as of publication) compete on the value they create with those pooled data: better integrating with clients’ tech stacks, or better fitting with clients’ business models. Sharing data makes all of these competitors better at securing their clients.
In spite of the technical nature of the domain, Mr. Daniel espouses a holistic risk-management approach to cybersecurity. He points out that the challenge encompasses political, economic, psychological, and behavioral factors.
For example, Mr. Daniel wrote in the Harvard Business Review: “Sharing information among people at human speed may work in many physical contexts, but it clearly falls short in cyberspace. As long [as] we continue to try to map physical-world models onto cyberspace, they will fall short in some fashion.”
Mr. Daniel has openly acknowledged some challenges to the CTA’s process:
We applaud his candor and creative thinking.
Let’s close with a smart analogy: look to natural disaster preparedness as a model. If the event overwhelms local responders, then surrounding groups and the state can bring extra help. And so on all the way up to the Federal Emergency Management Agency (FEMA).
Mr. Daniel has suggested a similar fluid approach might be appropriate for cyber threats, but we need to address some important questions first: “How do we do the handoff, and decide whether something is the kind of thing the private sector can and should handle on its own, versus something that calls for feds to help? We don’t yet have the policy language to talk about what that relationship is.”
You can be sure Mr. Daniel and the CTA’s members are working on preliminary answers. Come to Fraud Force Chicago for a preview in greater detail.
A botnet typically consists of a network of internet-connected devices that have weak security and have been infected by malware, allowing them to be remotely controlled from another location. These could be computers, phones, game platforms, baby speakers, children’s toys: basically anything that connects to the internet or the internet of things (IoT).
Why would criminals spend time researching or socially engineering a fake user profile when they have a quicker and easier tool available? Botnets can run through extensive lists of username and password variations to hijack accounts in seconds. With the huge amounts of leaked data available for criminals to take advantage of, the use of a botnet is a logical next step.
Botnets aren’t even that expensive; you can rent or purchase one for as little as £5. The Mirai botnet of 400,000+ devices (recently seen attacking the Finance sector) can be rented for as little as £2,000 a week.
Botnets are used to incept fraudulent policies en masse, to take over accounts and access documentation, and to make policy changes in order to later commit “crash for cash” fraud.
How many of your users have the same username and password spread across multiple accounts? I’m willing to bet that this is the rule, not the exception.
When it comes to protecting yourself, you need to be able to recognise the devices that want to enter your secured portals. Has that device accessed this account before? If so, you can treat it as a lower risk. If not, it needs to jump through a few more hoops.
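The decision above can be sketched in a few lines. The account and device identifiers are hypothetical placeholders; a real deployment would query a device-recognition service rather than a plain lookup table:

```python
# Simplified sketch of the recognise-the-device check described above.

known_devices = {
    # account id -> device ids previously seen on that account
    "acct-1001": {"dev-aa41", "dev-b2c3"},
}

def assess_login(account_id: str, device_id: str) -> str:
    """Return 'low-risk' for a previously seen device, else 'step-up'."""
    if device_id in known_devices.get(account_id, set()):
        return "low-risk"   # device has accessed this account before
    return "step-up"        # unknown device: add friction (e.g., MFA)

print(assess_login("acct-1001", "dev-aa41"))  # low-risk
print(assess_login("acct-1001", "dev-9f9f"))  # step-up
```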
After all, why would someone be logging in from a children’s toy or be able to complete a lengthy form in milliseconds?
The device intelligence you gain here can be fed into an authentication process to help you assess the risk that a device poses and step up the authorisation with multi-factor authentication.
A botnet can’t provide the extra levels of authentication needed but your trusted customers can.
Ghost Brokers sell fake or fraudulently obtained insurance policies to individuals for what appear to be very cheap premiums. Unfortunately, these policies are not worth the electronic mail they’re written on.
This can work in two ways: either the policy is incepted using falsified or stolen identity data and sold on to unwitting customers, or fake policy documents are produced to impersonate a legitimate insurer. We’re going to focus on how the former is evolving and how you can combat it.
In our last editorial we wrote about the rise in True Identity Theft, Account Takeover, and 3rd Party Application Fraud reported by our insurance clients, all fraud types that are prevalent in Ghost Broking operations. The threat from Ghost Brokers is not a new one, but the techniques they use to avoid detection are constantly evolving.
Fraudsters know they are being watched; we see this when investigating the origin of questionable policies. Fraudsters have started to create fewer falsified accounts per combination of deception. They will actively change not only an identity, but also the device in use, to avoid detection. They are able to change the device profile (is it a laptop? a tablet? a mobile phone?), the location (are they based in the UK or further afield?), the MAC address, local language, operating system, and screen resolution, and this is just scratching the surface.
Fraudsters have hundreds of variables to play with to enable their behaviour, so how is the insurance industry keeping up?
Firstly, there are offerings on the market that can help you recognise these signs and combat fraud (disclaimer: iovation is one of them!). Secondly, insurers can collaborate by sharing fraud insights with other insurers, or even other industries, to get a better picture of evolving threats. This is a powerful tool that allows insurers to connect the dots on ghost brokering rings before falling victim to them. After all, insurance is not the only defrauded industry, and these sophisticated fraud techniques are being used and perfected elsewhere.
Recognition and collaboration are the biggest weapons in fighting fraud, because businesses cannot solve such large issues in isolation. With these tools you can evolve your fraud prevention, and help ensure that your policies ARE worth the electronic mail they’re written on.
Previously, on the token blog, we pondered various approaches to protecting personal data from exploitation. The strategies include perimeter maintenance (the “secure data environment”), well-defined and -enforced access policies, encryption of data in transit and at rest, and tokenization. The wide deployment of firewalls and network partitions, authentication and authorization services, TLS and encrypted databases address the first three of these categories. The subtleties of tokenization, on the other hand, merit deeper attention.
Tokenization substitutes a “token” for a single value. Take a user profile, for example. A service might require a username and password for authentication, an email address for password resets, and a brief user bio. Each field contains personal or potentially-personal information. A single record might look like this:
bio: VP Engineering at Supercool Analyticz, LLC. Wife of eleanorrigby. Celica afficionato. Go Fighting Ducks!
The sensitivity of the email address, password, and phone number is self-evident. But even the bio can be revealing: identity thieves can use this information to spoof location based on company name and guess answers to typical KBA questions (“first car”, “college attended”). Some services make this information public, but for a site that requires mutual consent to share a bio, the contents might be more sensitive. Loss of such data would violate the trust between user and service provider.
Data processors and controllers have a responsibility to take reasonable steps to protect personal data entrusted to them, not only from external breaches, but also from internal exposure. Employees mustn’t see such data in the clear, not only to prevent leakage, but also to reduce bias. Disclosure of the barest of personal information may play into the blind spots of even the most responsible of people, potentially leading to unfair treatment and outcomes. Imagine a support person uncomfortable about gay marriage seeing the above bio. Might it cause them to treat the user differently than others?
Far better to expose as little information as possible. However, the requirements for each field vary depending on usage. Let’s leave aside the password field, for now; we expect that the self-evident secrecy requirements long ago led most organizations to adopt password hashing to protect passwords. Focus instead on the other fields. They’re not inherently secret, but personal, still deserving of protection from unnecessary disclosure. What are the requirements for tokenizing the username, email address, and bio?
Let’s lay them out:
These requirements demonstrate the two dimensions of tokenization: reversibility and determinism. Reversible tokens may be detokenized to recover their original values. Deterministic tokens are always the same given the same inputs. For example, the phone number +1-503-987-3456 might be tokenized as OIGM09jeWSEz_yNN-oXMrQ, and must yield exactly that token every time. This contrasts with non-deterministic tokens, where each tokenization of +1-503-987-3456 returns a different token.
A quadrant graph nicely illustrates the options created by these dimensions:
In truth, the top-left quadrant, non-reversible and deterministic, is traditionally filled by cryptographic hash functions, such as SHA-256. Similarly, the bottom-right quadrant, reversible and non-deterministic, corresponds to symmetric encryption in modes such as CBC with a random IV. Of course the bottom-left quadrant, non-reversible and non-deterministic, isn’t useful at all. It’s that top-right corner, requiring deterministic, reversible values, that’s the sweet spot for tokenization.
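One common way to land in that top-right quadrant is a vault-backed tokenizer: hand out a random token the first time a value appears, then reuse it. The sketch below is a toy, in-memory stand-in for a real tokenization service, purely to illustrate the two properties.

```python
import secrets

class TokenVault:
    """Toy in-memory vault: tokens that are deterministic AND reversible.

    A real tokenization service would persist and protect these mappings;
    this sketch only demonstrates the behavior.
    """

    def __init__(self):
        self._token_for = {}  # plaintext -> token
        self._value_for = {}  # token -> plaintext

    def tokenize(self, value: str) -> str:
        # Deterministic: reuse the token already issued for this value.
        if value not in self._token_for:
            token = secrets.token_urlsafe(16)
            self._token_for[value] = token
            self._value_for[token] = value
        return self._token_for[value]

    def detokenize(self, token: str) -> str:
        # Reversible: recover the original value from the token.
        return self._value_for[token]

vault = TokenVault()
token = vault.tokenize("+1-503-987-3456")
print(token == vault.tokenize("+1-503-987-3456"))  # True
print(vault.detokenize(token))                     # +1-503-987-3456
```

The tokens themselves carry no information about the original values; all the sensitivity moves into the vault, which can then be locked down independently.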
Returning to the user profile, the fields map to the dimensions as follows:
Although one could rely on a cryptographic hash function to tokenize the username, and a crypto library to protect the bio, we find it useful to adopt a tokenization strategy that covers all three use-cases. A consistent interface ensures consistent treatment of values, making it easy to protect data with different requirements. When the goal is to protect all personal data, it’s easiest to adopt a solution that properly protects all personal data.
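To make the single-interface idea concrete, here is a minimal sketch; the function names, per-field policies, and vault are ours for illustration, not any vendor’s API. One `tokenize()` entry point dispatches on the two dimensions:

```python
import hashlib
import secrets

_vault = {}  # token -> plaintext, for reversible tokens (toy, in-memory)

def tokenize(value: str, *, reversible: bool, deterministic: bool) -> str:
    """One entry point covering the three useful quadrants (sketch only)."""
    if not reversible:
        # Non-reversible, deterministic: a hash. A real system would use a
        # keyed or salted hash so low-entropy values can't be guessed.
        return hashlib.sha256(value.encode()).hexdigest()[:22]
    if deterministic:
        # Reversible, deterministic: reuse a previously issued token.
        for token, stored in _vault.items():
            if stored == value:
                return token
    # Reversible, non-deterministic: issue a fresh random token.
    token = secrets.token_urlsafe(16)
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]

user = {"username": "pat_92", "email": "pat@example.com", "bio": "Married."}
safe = {
    "username": tokenize(user["username"], reversible=False, deterministic=True),
    "email":    tokenize(user["email"],    reversible=True,  deterministic=True),
    "bio":      tokenize(user["bio"],      reversible=True,  deterministic=False),
}
```

Every call site looks the same; only the per-field policy changes, which is exactly the consistency-of-interface benefit described above.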
We believe it worthwhile to evaluate tokenization solutions that encompass both of these dimensions, encapsulating fully vetted and audited, industry-standard tokenization, encryption, and hashing algorithms behind a single interface. Applying the patterns to our example user profile, the data becomes safe to show to employees, and useless to identity thieves:
Alas, reversibility and determinism cover only a subset of the considerations when it comes to tokenization. Other variables to weigh include data type preservation, data storage strategies, and regulatory compliance vetting. We’ll cover those topics in future posts.
Last week, the U.S. Supreme Court paved the way for states to legalize sports betting. This ruling could create a multibillion-dollar market for businesses and states — provided they can manage two key elements.
Six states — New Jersey, Connecticut, Mississippi, New York, Pennsylvania, and West Virginia — already have sports betting laws in place. They’re expected to tap into the new source of tax revenue quickly. Other states, established U.S. casinos and fantasy sports websites, and European operators are sure to follow.
The profit potential has already made a few early winners: Reuters reported that gaming stocks rallied on the news, with those specializing in gaming technology gaining 10 percent or more.
As the market fills with operators, player experience will quickly become a competitive differentiator. And as new players are welcomed, bad actors will attempt to take advantage. To succeed, online gaming operators will need to create an exceptional player experience while preventing fraud.
As online sports betting has grown in popularity throughout the world, players have come to expect the same seamless user experience in-play that they enjoy elsewhere online.
For example, users expect to be able to access their favorite online games on any device. In our 2018 Gambling Report, we found that mobile devices accounted for 62% of the gambling transactions we processed in 2017, up from 6% in 2012, an annual growth rate of 116% in mobile transaction volume. What’s more, many users access their favorite online gaming sites from more than one device, making it imperative to provide omnichannel authentication that doesn’t interfere with their play.
Mobile market share isn’t the only factor on the rise. From 2013 to 2017, reports of cheating-related fraud from online gaming operators who use our technology increased more than tenfold. Along with more generalized fraud such as account takeover and credit card fraud, this includes fraud unique to the online gaming industry: chip dumping, player collusion, all-in abuse, and bonus abuse.
Left unchecked, online fraud can quickly deflate the bottom line of the most enthusiastic entrant into the new markets. Cheaters, and complaints about them on social media, can cause long-term damage to an operator’s reputation.
If those two challenges weren’t enough, U.S. sports betting operators will also have to meet the challenge of self-exclusion. The U.S. doesn’t yet mandate helping players addicted to gambling exclude themselves, but offering this service demonstrates social accountability and an ability to self-regulate. Operators entering each state’s new market would do well to integrate strong self-exclusion measures from the beginning.
iovation’s device-recognition technology offers powerful tools to enter the market safely.
To shut out fraudsters and cheaters, online gambling operators around the world have come to rely on FraudForce and SureScore. The former evaluates the user’s device by considering context, behavior, and reputation, stopping fraud in real time with help from a global network of over 50M confirmed fraud cases. The latter, SureScore, applies machine learning to dozens of indicators to predict the outcome of any given online transaction, even without any prior history with the particular player involved.
Similar technology has been improving millions of players’ experiences across the Internet. ClearKey helps to welcome players with an invisible, hassle-free authentication experience by recognizing and using their device as a second factor of authentication. ClearKey works alongside existing systems to reduce the number of challenges put between players and their gaming experiences. LaunchKey leverages adaptable, risk-based multifactor authentication to secure players’ accounts. It strengthens security for operators while delivering authentication that players actually like to use.
All of these tools will be invaluable to operators’ pursuit of VIPs. These high-value customers will receive plenty of incentives to move their play from operator to operator. Identifying and retaining them quickly and confidently will have an outsized impact.
I’m just scratching the surface with this post. iovation has a long, proud history serving online gambling operators and platforms. Watch our webinar on our 2018 iovation Gambling Report for more insights on the industry’s opportunities with fraud prevention and user authentication.