Mr. Daniel served as Cybersecurity Advisor to President Obama during the second term of the administration, and now leads the Cyber Threat Alliance (CTA). He’s the keynote speaker for Fraud Force Chicago. Register for the event to hear him speak, and benefit in these nine other ways.
During his presentation – A 360 Degree Outlook On The Global Security Landscape – Mr. Daniel will share insight distilled from his service during the Obama years, and since refined at the Cyber Threat Alliance.
If you’re like me, and struggle to list six major cybersecurity incidents that occurred between 2012 and 2016, let alone describe them in any detail, then you might benefit from this short primer.
Fraudsters succeed in part because they’re willing to share sensitive information (albeit for a profit). Mr. Daniel has been a strong advocate for information sharing in the name of defense, too.
The private and public sectors shouldn’t have to reinvent the mechanisms and processes for producing and acting on threat intelligence. Cybersecurity firms such as Fortinet, Intel Security, Palo Alto Networks, and Symantec have already done a fine job. That’s where the CTA fits in.
In 2014 those four firms agreed to embark on an experiment. Even though they competed for some of the same contracts and clients, they would share threat intelligence in the name of greater security.
Members upload packages of normalized data that conform to the CTA’s platform. (They anonymize data identifying the parties under observation.) The platform validates members’ submissions by correlating information uploaded by other members. As long as members continue to upload enough information, they’re allowed to download other members’ submissions for ingestion into their respective platforms.
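The give-to-get mechanic described above can be sketched roughly as follows. This is an illustrative toy, not the CTA platform’s actual implementation; all class names, fields, and thresholds are invented:

```python
class SharingPlatform:
    """Toy sketch of a give-to-get threat-intel sharing gate."""

    def __init__(self, min_submissions=2):
        self.min_submissions = min_submissions
        self.submissions = {}  # member -> list of anonymized indicator packages

    def upload(self, member, package):
        """Accept a normalized, anonymized threat-intel package."""
        self.submissions.setdefault(member, []).append(package)

    def corroborated(self, package):
        """Validate a submission by correlating it with other members' data."""
        reporters = {m for m, pkgs in self.submissions.items()
                     if any(p["indicator"] == package["indicator"] for p in pkgs)}
        return len(reporters) >= 2

    def download(self, member):
        """Only members who keep contributing may ingest everyone else's data."""
        if len(self.submissions.get(member, [])) < self.min_submissions:
            raise PermissionError("insufficient contributions to download")
        return [p for m, pkgs in self.submissions.items()
                if m != member for p in pkgs]
```

The key design choice is that the download gate depends on continued uploads, which keeps every member contributing rather than free-riding on the pool.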
(We at iovation love the underlying principle at work here. On the iovation Intelligence Center, our users have shared over 50M confirmed incidents of fraud and abuse. This device intelligence helps all members detect and prevent fraud on known devices, while still defining their own business rules.)
Instead of competing with different, incomplete pools of information, the CTA’s members (16 as of publication) compete on the value they create with those pooled data: better integration with clients’ tech stacks, or a better fit with clients’ business models. Sharing data makes all of these competitors better at securing their clients.
In spite of the technical nature of the domain, Mr. Daniel espouses a holistic risk management approach to cybersecurity. He points out that the challenge encompasses political, economic, psychological, and behavioral factors.
For example, Mr. Daniel wrote in the Harvard Business Review: “Sharing information among people at human speed may work in many physical contexts, but it clearly falls short in cyberspace. As long [as] we continue to try to map physical-world models onto cyberspace, they will fall short in some fashion.”
Mr. Daniel has openly acknowledged some challenges to the CTA’s process:
We applaud his candor and creative thinking.
Let’s close with a smart analogy: look to natural disaster preparedness as a model. If the event overwhelms local responders, then surrounding groups and the state can bring extra help. And so on all the way up to the Federal Emergency Management Agency (FEMA).
Mr. Daniel has suggested a similar fluid approach might be appropriate for cyber threats, but we need to address some important questions first: “How do we do the handoff, and decide whether something is the kind of thing the private sector can and should handle on its own, versus something that calls for feds to help? We don’t yet have the policy language to talk about what that relationship is.”
You can be sure Mr. Daniel and the CTA’s members are working on preliminary answers. Come to Fraud Force Chicago for a preview in greater detail.
A botnet is a network of devices with weak security that have been infected by malware and are remotely controlled from another location. These could be computers, phones, game platforms, baby speakers, children’s toys: basically anything that connects to the internet or Internet of Things (IoT).
Why would criminals spend time researching or socially engineering a fake user profile when they have a quicker and easier tool available? Botnets can run through extensive lists of username and password variations to hijack accounts in seconds. With the huge amounts of leaked data available for criminals to take advantage of, the use of a botnet is a logical next step.
Botnets aren’t even that expensive; you can rent or purchase one for as little as £5. The Mirai botnet of 400,000+ devices (recently seen attacking the finance sector) can be rented for as little as £2,000 a week.
Botnets are used to incept fraudulent policies en masse, to take over accounts and access documentation, and to make policy changes to later commit “crash for cash” fraud.
How many of your users have the same username and password spread across multiple accounts? I’m willing to bet that this is the rule, not the exception.
When it comes to protecting yourself, you need to be able to recognise the devices that want to enter your secured portals. Has that device accessed this account before? If yes, you can assume they are a lower level of risk. If not, they need to jump through a few more hoops.
After all, why would someone be logging in from a children’s toy or be able to complete a lengthy form in milliseconds?
The device intelligence you gain here can be fed into an authentication process to help you assess the risk that a device poses, and step up authorisation with multi-factor authentication when needed.
A botnet can’t provide the extra levels of authentication needed but your trusted customers can.
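The device checks described above can be sketched in a few lines. This is a hedged illustration only; the field names, thresholds, and scoring are assumptions, not iovation’s actual rules:

```python
def assess_login(account_id, device, known_devices, form_fill_ms):
    """Return an action based on device familiarity and bot-like behavior."""
    risk = 0
    if device not in known_devices.get(account_id, set()):
        risk += 2  # first time we've seen this device on this account
    if form_fill_ms < 500:
        risk += 3  # a lengthy form completed in milliseconds looks bot-driven
    if risk == 0:
        return "allow"  # recognized device, human-paced input
    return "step_up_mfa" if risk <= 2 else "deny"

known = {"acct-42": {"laptop-abc"}}
assess_login("acct-42", "laptop-abc", known, form_fill_ms=8000)  # "allow"
assess_login("acct-42", "toy-xyz", known, form_fill_ms=120)      # "deny"
```

A device that fails only the familiarity check lands in the middle tier and is asked to jump through extra hoops, which is exactly the behavior a botnet can’t fake.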
Ghost Brokers sell fake or fraudulently incepted insurance policies to individuals at what appear to be very cheap premiums. Unfortunately, these policies are not worth the electronic mail they’re written on.
This can work in two ways: either the policy is incepted using falsified or stolen identity data and sold on to unwitting customers, or fake policy documents are produced to impersonate a legitimate insurer. We’re going to focus on how the former is evolving and how you can combat it.
In our last editorial we wrote about the rise in reported True Identity Theft, Account Takeover and reported 3rd Party Application Fraud by our insurance clients, all fraud instances that are prevalent in Ghost Broking operations. The threat from Ghost Brokers is not a new one, but the techniques they use to avoid detection are constantly evolving.
Fraudsters know they are being watched; we see this when investigating the origin of questionable policies. They have started to create smaller numbers of falsified accounts per combination of deceptive details, actively changing not only the identity but also the device in use to avoid detection. They can change the device profile (is it a laptop? tablet? mobile phone?), the location (are they based in the UK or further afield?), the MAC address, local language, operating system, and screen resolution, and this is just scratching the surface.
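One way insurers catch spoofed devices is to check whether a device profile’s attributes contradict each other. A minimal sketch, with attribute names and rules that are illustrative assumptions only:

```python
def inconsistencies(profile):
    """Return a list of flags where device attributes contradict each other."""
    flags = []
    ua = profile.get("user_agent", "")
    if profile.get("os") == "iOS" and "Windows" in ua:
        flags.append("OS does not match user agent")
    if profile.get("device_type") == "mobile" and \
            profile.get("screen_resolution", (0, 0))[0] > 2560:
        flags.append("implausible screen resolution for a phone")
    if profile.get("locale", "").startswith("en-GB") and \
            profile.get("timezone") not in ("Europe/London",):
        flags.append("locale/timezone mismatch")
    return flags

spoofed = {"os": "iOS", "user_agent": "Mozilla/5.0 (Windows NT 10.0)",
           "device_type": "mobile", "screen_resolution": (3840, 2160),
           "locale": "en-GB", "timezone": "America/New_York"}
inconsistencies(spoofed)  # three flags: treat this quote as high risk
```

Each variable a fraudster changes is one more opportunity for their story to contradict itself, which is why the breadth of attributes matters more than any single one.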
Fraudsters have hundreds of variables to play with to enable their behaviour, so how is the insurance industry keeping up?
Firstly, there are offerings on the market that can help you recognise these signs and combat fraud (disclaimer: iovation is one of them!). Secondly, insurers can collaborate by sharing fraud insights with other insurers, or even other industries, to get a better picture of evolving threats. This is a powerful tool that allows insurers to connect the dots on ghost brokering rings before running afoul of them. After all, insurance is not the only defrauded industry, and these sophisticated fraud techniques are being used and perfected elsewhere.
Recognition and collaboration are the biggest weapons in fighting fraud, because businesses cannot solve such large issues in isolation. With these tools you can evolve your fraud prevention, and help ensure that your policies ARE worth the electronic mail they’re written on.
Previously, on the token blog, we pondered various approaches to protecting personal data from exploitation. The strategies include perimeter maintenance (the “secure data environment”), well-defined and -enforced access policies, encryption of data in transit and at rest, and tokenization. The wide deployment of firewalls and network partitions, authentication and authorization services, TLS, and encrypted databases addresses the first three of these categories. The subtleties of tokenization, on the other hand, merit deeper attention.
Tokenization substitutes a “token” for a single value. Take a user profile, for example. A service might require a username and password for authentication, an email address for password resets, and a brief user bio. Each field contains personal or potentially-personal information. A single record might look like this:
bio: VP Engineering at Supercool Analyticz, LLC. Wife of eleanorrigby. Celica aficionado. Go Fighting Ducks!
The sensitivity of the email address, password, and phone number is self-evident. But even the bio can be revealing: identity thieves can use this information to spoof location based on company name and guess answers to typical KBA questions (“first car”, “college attended”). Some services make this information public, but for a site that requires mutual consent to share a bio, the contents might be more sensitive. Loss of such data would violate the trust between user and service provider.
Data processors and controllers have a responsibility to take reasonable steps to protect personal data entrusted to them, not only from external breaches, but also from internal exposure. Employees mustn’t see such data in the clear, not only to prevent leakage, but also to reduce bias. Disclosure of the barest of personal information may play into the blind spots of even the most responsible of people, potentially leading to unfair treatment and outcomes. Imagine a support person uncomfortable about gay marriage seeing the above bio. Might it cause them to treat the user differently than others?
Far better to expose as little information as possible. However, the requirements for each field vary depending on usage. Let’s leave aside the password field for now; we expect that its self-evident secrecy requirements long ago led most organizations to adopt password hashing. Focus instead on the other fields. They’re not inherently secret, but they are personal, and still deserve protection from unnecessary disclosure. What are the requirements for tokenizing the username, email address, and bio?
Let’s lay them out:
These requirements demonstrate the two dimensions of tokenization: reversibility and determinism. Reversible tokens may be detokenized to recover their original values. Deterministic tokens are always the same given the same inputs. For example, the phone number +1-503-987-3456 might be tokenized as OIGM09jeWSEz_yNN-oXMrQ, and must map to exactly that token every time. This contrasts with non-deterministic tokens, where each tokenization of +1-503-987-3456 returns a different token.
A quadrant graph nicely illustrates the options created by these dimensions:
In practice, the top-left quadrant, non-reversible and deterministic, is traditionally filled by cryptographic hash functions, such as SHA-256. Similarly, the bottom-right quadrant, reversible and non-deterministic, corresponds to symmetric encryption with a random IV, such as AES in CBC mode. The bottom-left quadrant, non-reversible and non-deterministic, isn’t useful at all. It’s that top-right corner, requiring deterministic, reversible values, that’s the sweet spot for tokenization.
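That deterministic-and-reversible sweet spot can be illustrated with a toy vault-based tokenizer. This is a sketch only; real tokenization products back the vault with an encrypted, audited datastore rather than in-memory dictionaries:

```python
import base64
import secrets

class TokenVault:
    """Toy vault-based tokenizer: deterministic AND reversible.

    Determinism comes from the lookup (the same value always maps to its
    stored token); reversibility comes from the reverse map.
    """

    def __init__(self):
        self._token_for = {}  # value -> token
        self._value_for = {}  # token -> value

    def tokenize(self, value: str) -> str:
        if value in self._token_for:          # deterministic: reuse the token
            return self._token_for[value]
        raw = secrets.token_bytes(16)         # token carries no trace of value
        token = base64.urlsafe_b64encode(raw).rstrip(b"=").decode()
        self._token_for[value] = token
        self._value_for[token] = value
        return token

    def detokenize(self, token: str) -> str:  # reversible: recover original
        return self._value_for[token]

vault = TokenVault()
t1 = vault.tokenize("+1-503-987-3456")
t2 = vault.tokenize("+1-503-987-3456")
assert t1 == t2                                   # same token every time
assert vault.detokenize(t1) == "+1-503-987-3456"  # original recoverable
```

Note that the token itself is pure randomness, so unlike a hash it leaks nothing about the value; the mapping, not the math, carries the secret.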
Returning to the user profile, the fields map to the dimensions as follows:
Although one could rely on a cryptographic hash function to tokenize the username and a crypto library to protect the bio, we find it useful to adopt a tokenization strategy that covers all three use cases. A consistent interface ensures consistent treatment of values, protecting data with different requirements without additional effort. When the goal is to protect all personal data, it’s easiest to adopt a solution that properly protects all personal data.
We believe it worthwhile to evaluate tokenization solutions that encompass both of these dimensions, encapsulating fully-vetted and -audited, industry-standard tokenization, encryption, and hashing algorithms in a single offering. Applying the patterns to our example user profile, the data becomes safe to show to employees, and useless to identity thieves:
Alas, reversibility and determinism cover only a subset of the considerations when it comes to tokenization. Other variables to weigh include data type preservation, data storage strategies, and regulatory compliance vetting. We’ll cover those topics in future posts.
Last week, the U.S. Supreme Court paved the way for states to legalize sports betting. This ruling could create a multibillion-dollar market for businesses and states — provided they can manage two key elements.
Six states — New Jersey, Connecticut, Mississippi, New York, Pennsylvania, and West Virginia — already have sports betting laws in place. They’re expected to tap into the new source of tax revenue quickly. Other states, established U.S. casinos and fantasy sports websites, and European operators are sure to follow.
The profit potential has already made a few early winners. Reuters reported gaming stocks rallied around the news. Those specializing in gaming technology gained 10 percent or more.
As the market fills with operators, player experience will become a competitive differentiator quickly. And as new players are welcomed, bad actors will attempt to take advantage. To succeed, online gaming operators will need to create an exceptional player experience while preventing fraud.
As online sports betting has grown in popularity throughout the world, players have come to expect the same seamless user experience in-play that they enjoy elsewhere online.
For example, users expect to be able to access their favorite online games on any device. In our 2018 Gambling Report, we found that the number of gambling transactions we processed from mobile devices in 2017 had grown to 62%, up from 6% in 2012. That’s an annual growth rate of 116%. What’s more, many users are accessing their favorite online gaming sites from more than one device, making it imperative to provide omnichannel authentication that doesn’t interfere with their play.
Mobile market share isn’t the only factor on the rise. From 2013 until 2017, reports of cheating fraud from online gaming operators who use our technology increased by more than a factor of 10. Along with more generalized fraud such as account takeover and credit card fraud, this includes fraud unique to the online gaming industry: chip dumping, player collusion, all-in abuse, and bonus abuse.
Left unchecked, online fraud can quickly deflate the bottom line of the most enthusiastic entrant into the new markets. Cheaters, and complaints about them on social media, can cause long-term damage to an operator’s reputation.
If those two challenges weren’t enough, U.S. sports betting operators will have to meet the challenge of self-exclusion. In the U.S. it isn’t yet mandatory to help players addicted to gambling exclude themselves, but offering this service demonstrates social accountability and an ability to self-regulate. Operators entering each state’s new market would do well to integrate strong self-exclusion measures from the beginning.
iovation’s device-recognition technology offers powerful tools to enter the market safely.
To shut out fraudsters and cheaters, online gambling operators around the world have come to rely on FraudForce and SureScore. The former evaluates the user’s device by considering its context, behavior, and reputation, and stops fraud in real time with help from a global network of over 50M confirmed fraud cases. The latter, SureScore, applies machine learning to dozens of indicators to predict the outcome of any given online transaction, even without any prior history with the particular player involved.
Similar technology has been improving millions of players’ experiences across the Internet. ClearKey helps to welcome players with an invisible, hassle-free authentication experience by recognizing and using their device as a second factor of authentication. ClearKey works alongside existing systems to reduce the number of challenges put between players and their gaming experiences. LaunchKey leverages adaptable, risk-based multifactor authentication to secure players’ accounts. It strengthens security for operators while delivering authentication that players actually like to use.
All of these tools will be invaluable to operators’ pursuit of VIPs. These high-value customers will receive plenty of incentives to move their play from operator to operator. Identifying and retaining them quickly and confidently will have an outsized impact.
I’m just scratching the surface with this post. iovation has a long, proud history serving online gambling operators and platforms. Watch our webinar on our 2018 iovation Gambling Report for more insights on the industry’s opportunities with fraud prevention and user authentication.
Without context, how can you be sure of your users’ identities?
Continuous identity assurance improves security posture and user experience, but it’s only possible when the identity proofing and user authentication teams incorporate signals from their colleagues in online fraud detection.
So far, we’ve explored the converging nature of identity assurance across three business functions that have, historically, been isolated from each other: identity proofing, user authentication, and online fraud detection.
Once a linear process, identity assurance now demands continuity. The three teams will have to collaborate as one, not as discrete points in a one-way sequence.
Continuity implies the passage of time and changes in circumstance: users move across cities and continents, acquire new devices and give up old ones, grow their social-media footprints, and otherwise go about their digital lives.
In the linear model of identity assurance, the identity proofing and user authentication teams haven’t had to account for this change. Their roles were executed with finite snapshots of the user. That’s incompatible with the nature of continuous identity assurance and its driving imperatives.
Now, to determine if an identity can be trusted with some account privileges, identity proofing and user authentication leaders need context.
According to Gartner’s new report on establishing and sustaining trust in digital identities, context brings two benefits to the enterprise. First, it helps establish an appropriate level of trust in the user’s identity, as defined by the use case and the enterprise's risk appetite. Second, context removes the "burden of proof" from the user by taking an adaptive approach that minimizes the use of intrusive authentication methods. (We call this dynamic authentication.)
If you’re responsible for identity and access management, fraud prevention, or identity proofing in your organization, this report is a must-read. Get your free copy here.
Gartner clarifies the composition of context in their Trusted Identity Capabilities Model. They describe four types of ‘signals’ as contributing to the context necessary to maintain trust in an identity:
| Attack signals | Familiarity signals | Risk signals | Anomalies |
| --- | --- | --- | --- |
| Device, location spoofing | Trusted device, location | Malware/jailbreak detection | Other deviations from normal behaviors |
| Nonhuman behavior | Entity link analysis | Short phone/email lifetime | |
| Human-farm behavior | Social footprint (“Internet life”) | Anonymity | |
| Attacker-like behavior | Normal behaviors | Location mismatch | |
| Probing | Passive biometric modes | | |
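One way to read these four signal types is as inputs to a running trust score. The following sketch is our own illustration, not Gartner’s model; the weights, thresholds, and names are assumptions:

```python
# Negative weights erode trust; familiarity signals rebuild it.
WEIGHTS = {"attack": -3, "risk": -2, "anomaly": -1, "familiarity": 2}

def trust_score(signals):
    """signals: list of (signal_type, name) pairs observed in the session."""
    return sum(WEIGHTS[kind] for kind, _ in signals)

def decide(signals, allow_at=2, challenge_at=0):
    """Map the session's trust score to an authentication decision."""
    score = trust_score(signals)
    if score >= allow_at:
        return "allow"
    return "challenge" if score >= challenge_at else "deny"

session = [("familiarity", "trusted device"),
           ("familiarity", "normal behaviors"),
           ("anomaly", "new location")]
decide(session)  # 2 + 2 - 1 = 3, so "allow"
```

The point of the exercise: a single anomaly doesn’t force a challenge when familiarity signals outweigh it, which is what makes the approach adaptive rather than binary.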
In the linear model of identity assurance, these signals haven’t been useful to the identity proofing and user authentication teams. A binary worldview was sufficient:
Can the user provide valid credentials? If so, create an account and assign privileges.
Can the user provide the correct username and password? If so, grant access to the account.
In that model, the enterprise just needed a hard perimeter and an ability to identify and repair damage to that perimeter quickly.
But the world isn’t binary. Users are tired of this treatment. Fraudsters have shown they can pass through the hard perimeter. It doesn’t make sense to challenge all users with the same authentication methods for all tasks.
Risk is relative and slippery. How do you assess and address the risk of the transaction, user and device in real-time? With flexibility rooted in context.
Identity proofing and user authentication don’t have to look far for the context they need. For years now, the online fraud detection team has been monitoring in real time for anomalies and for attack, familiarity, and risk signals, reaching informed decisions about suspicious users, transactions, and accounts.
If you can combine continuous identity assurance at any point in the user journey with context, you're on your way to dynamic authentication. That enables you to calibrate the level of authentication to the circumstances. Using risk-appropriate authentication only when it’s needed allows you to preserve a smooth user experience for longer periods, which improves brand image and results in greater user acceptance. It also yields a superior position to prevent fraud and maintain information security.
For more on this, and what it means for you and your organization, get your copy of Gartner’s new report and watch for our upcoming webinars, where we’ll continue this conversation.
The GDPR goes into effect on Friday. If your authentication tools and processes aren’t in compliance by now, you need a solution that balances strong security and simple implementation with the regulation’s mandate for digital privacy. But don’t forget user experience in the process. Keep reading to discover how to satisfy all of those requirements by the deadline.
At iovation, we welcome the GDPR’s assertion of privacy as a consumer right and a corporate social responsibility. We have embraced the challenge of designing fraud prevention and authentication solutions that achieve these goals without sacrificing the customer experience. The question though is, how can we help you with GDPR compliance?
Consider ClearKey, our lightweight two-factor authentication solution. ClearKey uses iovation’s patented device-recognition technology to authenticate visitors without adding customer friction. The result is an easy-to-implement solution that brings you closer to GDPR compliance — fast.
Let’s review five reasons to use ClearKey for last-minute GDPR compliance.
The GDPR’s requirement for “data minimization” means that organizations should only collect the data necessary for a specific purpose. This reduces the amount of personal information your organization is responsible for protecting. Less data to protect means less impact in the event of a breach.
ClearKey supports this principle by default. Our device-recognition technology uses hundreds of device attributes, and the unique ways they combine, to instantly identify over 5B devices in our database without requiring directly identifying information from users.
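The data-minimization idea, recognizing a device from its attributes rather than from personal data, can be illustrated with a toy sketch. Real device recognition uses far more attributes plus fuzzy matching; this stdlib-only version is illustrative only:

```python
import hashlib
import json

def device_id(attrs: dict) -> str:
    """Derive a stable identifier from device attributes alone (no PII)."""
    canonical = json.dumps(attrs, sort_keys=True)  # order-independent encoding
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

a = device_id({"os": "Android 8.1", "screen": "1080x2160", "tz": "UTC-8"})
b = device_id({"tz": "UTC-8", "os": "Android 8.1", "screen": "1080x2160"})
assert a == b  # same attributes in any order yield the same identifier
```

Because the identifier is derived rather than collected, there is simply less personal data on hand to protect, which is the heart of the data-minimization principle.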
In the past, stronger security has come at the cost of increased customer friction. Today’s users expect a seamless experience even as the GDPR raises standards for greater data security.
ClearKey satisfies both imperatives. It recognizes and uses the customer’s device as a second factor of authentication. Your customers may choose which devices to associate with their accounts, or you can register accounts and devices on their behalf.
However you choose to implement ClearKey, you’ll improve your customers’ security and experience.
You need something you can implement quickly. We designed ClearKey for quick and easy integration into your user-authentication stack. We’ve made sure it’s compatible with existing authentication solutions, so you can layer it on top of your existing infrastructure quickly.
ClearKey’s lightweight, easy-to-implement SDK can be easily integrated into Apple and Android applications, with white labeling that allows you to completely brand the authentication experience. We support a complete range of web and desktop SDKs to help you improve the security of your desktop and web applications. No need to rip and replace.
By adding the second layer of authentication to your legacy system you can achieve strong customer authentication, gaining the security benefits of multifactor authentication without the friction. ClearKey creates a strong bond with the device, and then allows regular and transparent evaluation of risk factors such as:
Customer experience improves because strong, transparent authentication operates as a proxy for active authentication, so active challenges can be issued less often. This allows you to dynamically adapt your authentication in real time in response to new threat vectors. If the risk is low, transparently authenticate customers without adding friction. If additional security is needed, step up to a more robust MFA solution. (Read more about this in our free ebook MFA for Dummies.)
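The device-as-second-factor flow described above can be sketched in a few lines. This is a hedged illustration of the pattern, not ClearKey’s internals; the function and risk-flag names are assumptions:

```python
def authenticate(password_ok, device_registered, risk_flags):
    """Decide between transparent and active second-factor authentication."""
    if not password_ok:
        return "reject"
    if device_registered and not risk_flags:
        return "allow"  # the recognized device acts as an invisible 2nd factor
    return "active_mfa_challenge"  # new device or risk signals: step up

authenticate(True, True, [])                  # "allow"
authenticate(True, False, [])                 # "active_mfa_challenge"
authenticate(True, True, ["device_anomaly"])  # "active_mfa_challenge"
```

Trusted customers on registered devices never see the extra step, while anything unfamiliar or risky triggers a challenge a botnet can’t satisfy.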
Most authentication solutions continue to rely on usernames and passwords. Yet time and again we’ve seen how easy it is for criminals to steal, buy, or brute force these credentials — raising the possibility of account takeover (ATO).
The spirit of the GDPR seeks to preserve users’ privacy and the security of their accounts. ClearKey provides powerful risk insight that allows you to assess risk factors indicative of ATO, including device anomalies, spoofing, and evasion. ClearKey adds a second, invisible layer of authentication that drives step-up authentication if new or suspicious devices try to access an account, enhancing your existing authentication procedures without heavy lifting or intense coding.
The regulation represents a huge advance for consumer data privacy. At iovation, we’re excited for the improvements in user experience that can be made in tandem with compliance.
Learn more about how we can help you become compliant with the GDPR while delighting your customers on our GDPR compliance resources page.
At its core, the announcement represents a great opportunity for iovation and TransUnion to combine technology strategies and stay ahead of the ever-evolving fraud and consumer authentication landscapes.
TransUnion is a sophisticated, global risk information provider that helps companies make smarter decisions. And for years iovation has been making the internet a safer place by recognizing any online device and understanding its risk and reputation. Their common enemy: fraudsters.
By combining the strengths of iovation and TransUnion, they are positioned to create a safer digital world for all. TransUnion and iovation’s complementary assets will provide increased global scale, and accelerate new innovations in the areas of fraud prevention, consumer authentication, and the convergence of online and offline identities.
So, what happens now? Pending approval by regulatory agencies, this transaction is expected to close in the next four to six weeks. For iovation’s employees, there will be no layoffs, no relocations, no changes to leadership. The only change will be an improved ability to fight fraudsters and help businesses stay ahead of evolving threats.
Following the ratification of the GDPR, some feared that certain tenets — such as the right to restrict data processing, the right to object to data collection, and the right to be forgotten — would give fraudsters a newfound advantage. Fortunately, the GDPR specifically calls out fraud prevention in the legitimate interest clause governing how subject companies process data and handle customer requests.
When the goal is to prevent fraud – a ‘legitimate interest,’ as defined by the GDPR – companies aren’t required to proactively gain consent to collect customers’ data, nor honor all requests for deletion of data.
This is bad news for fraudsters, but it could also be problematic for regular citizens. Some fraud-detection vendors might see the argument of legitimate interest as justification to sidestep consent requirements altogether — whether intentionally choosing to work around the regulation, or unintentionally ignoring the spirit of the GDPR.
Using legitimate interest as a basis for data processing brings extra responsibility for considering and protecting data subjects’ rights.
There are three key considerations when applying the legitimate interests clause:
There are many different ways to prevent fraud. Some fraud prevention solutions require more personal data than others to perform.
Research the amount of personal data that your fraud-prevention vendor requires. They may fail the second part of the above test. If so, they could expose you to liability for non-compliance.
Adhering to the core principles of the EU GDPR and preventing fraud can go hand-in-hand. Minimizing the amount of personal data collected, pseudonymizing that data, and embracing privacy by design principles will not only ensure that your customers’ right to data privacy is upheld, but also help mitigate your risks under the GDPR.
And don’t forget authentication! As we discussed in an earlier post, breached and stolen credentials are a real threat to your users’ data security. That threat vector makes stronger authentication an essential component in the fight against fraud and in the defense of your users’ right to data privacy.
The GDPR is revealing opportunities to make user experience a significant differentiator among competitors. Learn more about how you can turn GDPR compliance into an opportunity by checking out our webtalk, 4 Hacks to Mitigate Breach Risks Post GDPR.
Your customers’ expectations for their experiences with your brand are rising rapidly. The conventional way to establish and maintain assurance about their identities can’t keep up. Insight from your fraud department can.
In my last post, I wrote that ‘the conventional practice of issuing and managing user accounts and credentials is becoming optional.’ To appreciate what will take the place of that practice, you have to understand the old, linear model of identity assurance.
The identity proofing or compliance team set the level of assurance necessary to trust a visitor’s identity. This filtered the pool of account holders for the user authentication and fraud detection teams. By the time an account came to the fraud department’s attention, identity proofing was long over.
Under this model, teams working in fraud or authentication could trust the initial identity assigned to the user.
In this linear path through identity assurance, each siloed team was responsible for calibrating its own level of friction for users. The team responsible for building market share might be a little more lenient. The fraud team might be comparatively strict.
Due to stolen, synthetic and rented identities, even the inaugural step of identity assurance is under attack. The dynamic nature of security requires the right level of authentication for the current level of risk at any time during the user session.
If that balance weren’t delicate enough, it must be maintained within the acceptable limits of users’ rising expectations for a smoother experience from the moment they create an account.
Combined, these trends are upending the typical identity assurance process, and making the assumptions I’ve described not only obsolete, but dangerous to the organization.
Fortunately, the fraud department already monitors a trove of signals in real time that can help at account creation, at login, and after login. (What sorts of signals? I’ll describe them in my next post.)
Today’s volatile threat vectors have rendered obsolete the legacy ‘one and done’ approach. You need to be able to continuously assess the level of trust that you can assign to a specific identity, to the riskiness of each step in the current transaction, to the risk signals present in that session, and to the reputation of the device involved.
In this world, fraud professionals can’t make assumptions about the accounts they review. That’s the price for providing a better user experience. But it also presents an opportunity.
It’s now the obligation of IAM and fraud leaders to understand the ongoing process of assigning and validating trust in users’ identities: what's been done, where your risk signals and assurance information come from, and how those signals are applied.
Sound like a lot of work? Well, I did mention ‘opportunity’ a moment ago. Continuous identity assurance allows for dynamic decisioning so that friction is added only when appropriate and minimized when it isn’t.
If you know you can trust a user, then you can lower the barrier for them. If you detect signals indicating increased risk, then you re-authenticate or use stronger authentication methods.
In this new model, insight from fraud feeds into the other parts of the identity-assurance cycle continuously, not retroactively. It requires a whole new set of real-time, on-demand signals. Fortunately, these signals have long been a cornerstone of online fraud detection.
If this idea sounds appealing, you may be wondering how to put it into practice. How can you make sense of all the variables in such a sensitive part of your business, and make the right decisions to navigate from the old model (based on assumptions) to the new model (based on continuous assurance)?
Glad you asked.
Gartner explored this question through the lens of its Trusted Identity Capabilities Model in its new report, “Take a New Approach to Establishing and Sustaining Trust in Digital Identities.”
This model defines the six complementary criteria for identifying and aligning the capabilities you need to continuously validate a level of confidence in a visitor’s claimed identity. Get your copy of the report for Gartner’s guidance on the transition to continuous identity assurance.
Businesses can’t afford to let their fraud departments continue to operate in the background. They need to leverage the kind of analytics and signals that a modern fraud-detection stack generates in real time.
For more insight into fraud’s rising importance to continuous identity assurance, register for our webinars, where we’ll continue this conversation.