<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Kenna Blog</title>
	<atom:link href="http://blog.kennasecurity.com/feed/" rel="self" type="application/rss+xml" />
	<link>http://blog.kennasecurity.com</link>
	<description>Vulnerability Management &#38; Threat Intelligence</description>
	<lastBuildDate>Mon, 01 Aug 2016 14:22:44 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.5.3</generator>
	<item>
		<title>New Zero-Day Exploit Intelligence – Introducing Exodus</title>
		<link>http://blog.kennasecurity.com/2016/08/new-zero-day-exploit-intelligence-introducing-exodus/</link>
		<comments>http://blog.kennasecurity.com/2016/08/new-zero-day-exploit-intelligence-introducing-exodus/#respond</comments>
		<pubDate>Mon, 01 Aug 2016 11:17:17 +0000</pubDate>
		<dc:creator><![CDATA[Greg Howard]]></dc:creator>
				<category><![CDATA[Data Analysis]]></category>
		<category><![CDATA[Remediation]]></category>
		<category><![CDATA[Threats and Attacks]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4179</guid>
		<description><![CDATA[One of Kenna’s primary differentiators is its use of external exploit intelligence. It’s that real-time context, informed by Kenna’s own proprietary, patented algorithm, which makes our customers’ vulnerability scan data tell a story. We’re able to provide a “headline news” of what’s happening in our customer’s environments and what threats they need to remediate quickly. (And by the way, when... <a href="http://blog.kennasecurity.com/2016/08/new-zero-day-exploit-intelligence-introducing-exodus/">Read more &#187;</a>]]></description>
				<content:encoded><![CDATA[<p>One of Kenna’s primary differentiators is its use of external exploit intelligence. It’s that real-time context, informed by Kenna’s own proprietary, patented algorithm, which makes our customers’ vulnerability scan data tell a story. We’re able to provide a “headline news” of what’s happening in our customer’s environments and what threats they need to remediate quickly.</p>
<p>(And by the way, when you meet other organizations claiming to do what Kenna does, ask them to list out their exploit intelligence partners. All of them. And how they integrate that intelligence with your vulnerability scans. Then watch their faces go white as they stutter their way through an answer. It’s a fun game!)</p>
<p>Kenna is proud to introduce our new intelligence partner: Exodus. They deliver zero-day exploit intelligence that is unmatched in the industry.</p>
<p>We believe that Exodus has a unique view of the world; they share Kenna’s belief that it’s imperative to know which corners of your network house vulnerabilities for which there are no current patches.</p>
<p>Addressing risks from zero days requires you to change your security practice to be faster than the rate of disclosure. It requires you to do research faster than MITRE or NVD and to correlate to assets based on imperfect data. Addressing zero day risks also requires the centralization of decision making. If you’re looking at a vulnerability scan and making decisions about it, then looking at a second scanner and making those same decisions about it, and then a third engineer looks at a threat intel feed and raises his hand &#8211; you are in disarray. Conversely, if all of your intel and vulnerability data is in one place, you can make and measure decisions about overall risk.</p>
<p>With Exodus, Kenna continues to offer its customers a detailed knowledge of unpatched vulnerabilities and helps them measure, monitor, and reduce their overall exposure to risk.</p>
<p>Within the Kenna platform, environments that are susceptible to zero-day vulnerabilities identified by Exodus will be flagged on the primary dashboard, making it easy for users to take appropriate action. If users do encounter a zero-day, they will be able to obtain extremely detailed reports and exploit details on the specific vulnerability from Exodus Intelligence.</p>
<p><a href="http://blog.kennasecurity.com/wp-content/uploads/2016/07/Screen-Shot-2016-07-29-at-1.10.11-PM.png"><img class="alignnone size-full wp-image-4181" src="http://blog.kennasecurity.com/wp-content/uploads/2016/07/Screen-Shot-2016-07-29-at-1.10.11-PM.png" alt="Screen Shot 2016-07-29 at 1.10.11 PM" width="670" height="335" /></a></p>
<p>Learn more about <a href="https://www.exodusintel.com/">Exodus Intelligence</a>!</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/08/new-zero-day-exploit-intelligence-introducing-exodus/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Celebrity Treatment: How Vulns are Being Hyped, and When to Pay Attention</title>
		<link>http://blog.kennasecurity.com/2016/07/celebrity-treatment-how-vulns-are-being-hyped-and-when-to-pay-attention/</link>
		<comments>http://blog.kennasecurity.com/2016/07/celebrity-treatment-how-vulns-are-being-hyped-and-when-to-pay-attention/#respond</comments>
		<pubDate>Fri, 15 Jul 2016 17:25:50 +0000</pubDate>
		<dc:creator><![CDATA[Ed Bellis]]></dc:creator>
				<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4172</guid>
		<description><![CDATA[Like it or not, we live in an era of manufactured celebrity and large-scale hype creation. While this can make it easy to keep tabs on movie stars’ relationships, it doesn’t help security teams stay on top of what’s really important. To prioritize their efforts, there are four factors security teams should look at in assessing the true risk of... <a href="http://blog.kennasecurity.com/2016/07/celebrity-treatment-how-vulns-are-being-hyped-and-when-to-pay-attention/">Read more &#187;</a>]]></description>
				<content:encoded><![CDATA[<p>Like it or not, we live in an era of manufactured celebrity and large-scale hype creation. While this can make it easy to keep tabs on movie stars’ relationships, it doesn’t help security teams stay on top of what’s really important. To prioritize their efforts, there are four factors security teams should look at in assessing the true risk of vulnerabilities. Read on to find out what these factors are and why they matter.</p>
<h1>The Urgency of Vulnerability Remediation</h1>
<p>If security is a race, it’s a sprint. This is particularly true when vulnerabilities are discovered. The race begins, with the good guys figuring out where they’re exposed and how to address the gaps, while the bad guys start building their modes of attack. The problem is that bad guys have an unfair advantage, leveraging automation to wage broad-based, indiscriminate attacks.</p>
<p>Security teams are falling behind. The reality is that vulnerabilities are most likely to be exploited within 40 to 60 days of their release, while security teams typically take 100 to 120 days to remediate vulnerabilities—according to Kenna Security’s proprietary research.</p>
<p>This is a race that also calls for precision. In an enterprise, security teams have tens of thousands of vulnerabilities to remediate. There’s no way they’ll ever address them all—let alone anywhere close to that 60-day window. Therefore, it’s critical that administrators don’t waste time remediating vulnerabilities that aren’t being exploited. Security staff needs to identify the vulnerabilities that pose the biggest risk—and the key is knowing which those are.</p>
<h1>The Emergence of Vulnerability Marketing</h1>
<p>As security teams struggle to stay current with the threat landscape and determine how to prioritize their efforts, they have to wade through a dizzying amount of news, hype, opinions, and other noise. In recent years, that noise has only been amplified. For that you can blame a lot of factors, but one that may not come immediately to mind is Heartbleed.</p>
<p>Back when Heartbleed was announced, a new precedent was set. Here was a vulnerability with its own marketing engine, featuring a logo, a web site, and more. Heartbleed was a critical vulnerability affecting the most popular open source cryptographic protocol, one relied upon by millions of sites. This was a vulnerability worthy of widespread attention, and using marketing to increase awareness was a good thing.</p>
<h1>The Problem: Distinguishing Between Hype and Risk</h1>
<p>The problem is that, since that time, around ten vulnerabilities have received similar “star” treatment. Some were critical and worthy of attention, others weren’t. It’s almost starting to feel like this kind of promotion is the only way to get folks to pay any attention to vulnerabilities.  (The recent ImageTragick campaign was a great illustration of this phenomenon.)</p>
<p>When security teams are relying on media, forums, and other public resources to assess their vulnerabilities, they’re in trouble. The process is simply way too arbitrary. Clever marketing can hype a vulnerability, and get staff chasing a vulnerability that doesn’t pose any real danger. At the same time, the increased noise is very likely to drown out the real risks that should be getting addressed.</p>
<p>Consider just a couple examples of some non-marketed vulnerabilities that are still being exploited:</p>
<ul>
<li><strong>CVE-2010-3055</strong>. This vulnerability was exploited 121,000 times in one year. It allows attackers to run malicious code in phpMyAdmin, which is used to administer millions of sites worldwide. It received a Common Vulnerability Scoring System (CVSS) score of only 7.5 out of 10, so it has remained under the radar of many teams—but it continues to be exploited.</li>
<li><strong>CVE-2002-0649</strong>. Disclosed in 2002 and famously exploited by the SQL Slammer worm in January 2003, this vulnerability affects Microsoft SQL Server and Microsoft Desktop Engine. One might think that, given how long ago it was discovered, this is a vulnerability we could forget—but one would be wrong. Many thousands of exploits continue to be seen.</li>
</ul>
<p>Publishing these kinds of exploits won’t get you press or notoriety, so much as a stifled yawn. If you’re waiting for some high profile news stories to appear on these types of threats, you will be waiting a while. A vulnerability shouldn’t have to be new and buzz-worthy to get attention. The quality of logo design or a firm’s social marketing prowess shouldn’t determine which vulnerabilities get addressed and which don’t.</p>
<p><a href="http://blog.kennasecurity.com/wp-content/uploads/2016/07/Screen-Shot-2016-07-15-at-12.27.47-PM.png"><img class="alignnone size-full wp-image-4175" src="http://blog.kennasecurity.com/wp-content/uploads/2016/07/Screen-Shot-2016-07-15-at-12.27.47-PM.png" alt="Screen Shot 2016-07-15 at 12.27.47 PM" width="761" height="594" /></a></p>
<p><i>This graph shows the significant delta between successful hits on “Logo’d vulns” versus ones that go mostly unnoticed. It’s the quiet ones that have remote code execution, an exploit, and an active Internet breach.</i></p>
<h1>The Top 4 Factors to Consider</h1>
<p>These recent developments point to a bigger problem: Many security teams lack an objective, consistent way to find out about vulnerabilities, and to determine how to prioritize them. Security teams need a way to be alerted to threats and prioritize their efforts based on factors that really matter.</p>
<p>Based on the extensive intelligence Kenna has collected, we have identified common factors that distinguish between vulnerabilities that are critical, and likely to pose real threats to businesses, and those that aren’t. In the following sections, I’ll outline some of the most important factors. These factors are all important; they’re not listed in any kind of priority order. Spoiler alert: The quality of a vulnerability’s logo design, the catchiness of its name, and the number of its Twitter followers are nowhere to be found.</p>
<h2>#1. Allows Remote Code Execution</h2>
<p>Remote code execution is ultimately what the bad guys are after. Once bad guys have established a way to run their code on a remote system, they can inflict all kinds of chaos, whether setting up botnets, stealing data, or infiltrating networks. If a vulnerability doesn’t permit remote code execution, it will typically pose less risk than a vulnerability that does.</p>
<h2>#2. Has a module in Metasploit</h2>
<p>Metasploit has emerged as the de facto standard for exploit development. Many enterprise security teams and security firms use Metasploit to do penetration testing of an organization’s defenses and identify weaknesses. The problem is that the bad guys can also use Metasploit, and these aren’t just tests—they’re real attacks. When modules appear in Metasploit, you can be assured that a lot of bad guys are, or will soon be, leveraging them in their attacks.</p>
<h2>#3. Is network accessible</h2>
<p>Whether or not a vulnerability is network accessible can play a major role in the severity of the threat and the likelihood of its being exploited. Today’s bad guys are all about automation and scale in waging their attacks. The only way to achieve these ends is through network-accessible vulnerabilities that form the basis of botnets, command-and-control communications, and so on.</p>
<h2>#4. Is included in Exploit Database</h2>
<p>The Exploit Database is a comprehensive repository of exploits and proof-of-concept attacks. Like Metasploit, Exploit Database is invaluable for good guys and bad guys alike. Until a vulnerability appears in the Exploit Database, it remains less likely to emerge as a significant, broad-based threat for organizations.</p>
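<p><i>To make the idea concrete, here is a minimal sketch (in Python) of how one might rank vulnerabilities on these four factors. The field names and equal weighting are illustrative assumptions for demonstration only—not Kenna’s actual patented algorithm.</i></p>

```python
# Hypothetical sketch: score vulnerabilities by the four factors above.
# Equal weights are an assumption; a real model would weight each factor
# using observed exploitation data.
from dataclasses import dataclass


@dataclass
class Vuln:
    cve: str
    remote_code_execution: bool  # factor 1
    in_metasploit: bool          # factor 2
    network_accessible: bool     # factor 3
    in_exploit_db: bool          # factor 4


def priority_score(v: Vuln) -> int:
    # Each True factor adds one point (booleans sum as 0/1).
    return sum([v.remote_code_execution, v.in_metasploit,
                v.network_accessible, v.in_exploit_db])


vulns = [
    Vuln("CVE-2010-3055", True, True, True, True),
    Vuln("CVE-XXXX-0001", False, False, True, False),  # hypothetical entry
]

# Remediate highest-scoring vulnerabilities first.
ranked = sorted(vulns, key=priority_score, reverse=True)
```

<p><i>Under these assumptions, CVE-2010-3055 (all four factors present) sorts ahead of a vulnerability that is merely network accessible.</i></p>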
<h1>How Kenna Can Help</h1>
<p>Assessing all these factors represents one of the most critical efforts for security teams. However, it also represents a significant effort that has to be sustained. If security teams are going to address the growing gap being created by automated, broad-based attacks, they must get actionable intelligence and they must respond quickly and at scale. This means streamlining and accelerating efforts wherever possible. Quite simply, security teams need to fight automation with automation.</p>
<p>With Kenna solutions, that’s exactly what security teams can do. With Kenna, security teams don’t have to do all the work of manually collecting and analyzing vulnerabilities that get discovered. Kenna solutions automate and streamline the intelligence gathering process. These solutions make it easy for security administrators to distinguish between what’s really being exploited versus what’s being effectively hyped. With Kenna, security teams can more intelligently manage efforts, including remediation and security investments, so they can apply their efforts to the activities that matter most: those that significantly reduce risk.</p>
<h1>Summary</h1>
<p>Reading security news is a great way to keep tabs on what’s happening, but it’s not a suitable basis for formulating your remediation strategies. Don’t let the cleverness of a vulnerability’s marketing campaign dictate how you prioritize your efforts. Make sure you’re looking at the most critical aspects that figure to predict how much danger a vulnerability really poses. When you do that, you can align your limited time and resources with those efforts that will yield the biggest dividends in reducing risk to your organization.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/07/celebrity-treatment-how-vulns-are-being-hyped-and-when-to-pay-attention/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How to Budget for Vulnerability Management in 2017</title>
		<link>http://blog.kennasecurity.com/2016/06/how-to-budget-for-vulnerability-management-in-2017/</link>
		<comments>http://blog.kennasecurity.com/2016/06/how-to-budget-for-vulnerability-management-in-2017/#respond</comments>
		<pubDate>Thu, 30 Jun 2016 15:51:06 +0000</pubDate>
		<dc:creator><![CDATA[Greg Howard]]></dc:creator>
				<category><![CDATA[Vulnerability Assessment]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4158</guid>
		<description><![CDATA[It’s almost budgeting season! (Yes, try to restrain your excitement.) At Kenna, we thought we’d offer a few (admittedly biased) thoughts on how to approach your vulnerability management budgeting process. Here&#8217;s a hint: it&#8217;s not just about the scanner anymore. It&#8217;s about automating the tedious and error-prone processes of prioritization and reporting. Read the full infographic below:]]></description>
				<content:encoded><![CDATA[<p>It’s almost budgeting season! (Yes, try to restrain your excitement.) At Kenna, we thought we’d offer a few (admittedly biased) thoughts on how to approach your vulnerability management budgeting process.</p>
<p dir="ltr">Here&#8217;s a hint: it&#8217;s not just about the scanner anymore. It&#8217;s about automating the tedious and error-prone processes of prioritization and reporting. Read the full infographic below:</p>
<p><a href="http://blog.kennasecurity.com/wp-content/uploads/2016/06/2017-Infographic.jpg"><img class="alignleft size-full wp-image-4159" src="http://blog.kennasecurity.com/wp-content/uploads/2016/06/2017-Infographic.jpg" alt="2017-Infographic" width="1280" height="4229" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/06/how-to-budget-for-vulnerability-management-in-2017/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Moving from Vulnerability Remediation to Risk Measurement</title>
		<link>http://blog.kennasecurity.com/2016/06/moving-from-vulnerability-remediation-to-risk-measurement/</link>
		<comments>http://blog.kennasecurity.com/2016/06/moving-from-vulnerability-remediation-to-risk-measurement/#respond</comments>
		<pubDate>Mon, 06 Jun 2016 20:15:31 +0000</pubDate>
		<dc:creator><![CDATA[Ed Bellis]]></dc:creator>
				<category><![CDATA[Vulnerability Assessment]]></category>
		<category><![CDATA[Vulnerability Intelligence]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4155</guid>
		<description><![CDATA[Fighting security threats is hard enough, but it’s pretty much impossible if you’re fighting wrong battles. However, that’s what you’re doing if you’re focused on vulnerability remediation. I see it all the time: Security teams live by their spreadsheets. They have lists of vulnerabilities. They stack rank them by severity, start with the most critical, and commence to work through... <a href="http://blog.kennasecurity.com/2016/06/moving-from-vulnerability-remediation-to-risk-measurement/">Read more &#187;</a>]]></description>
				<content:encoded><![CDATA[<p>Fighting security threats is hard enough, but it’s pretty much impossible if you’re fighting the wrong battles. However, that’s what you’re doing if you’re focused on vulnerability remediation.</p>
<p>I see it all the time: Security teams live by their spreadsheets. They have lists of vulnerabilities. They stack-rank them by severity, start with the most critical, and work through the list.</p>
<p>The problem? All this effort is being expended without a clear understanding of real business risk. Therefore, critical questions remain unanswerable: Where is risk now and where was it before? Where does the risk level need to be? Are we making progress?</p>
<p>In this post, I’ll detail some common problems I’ve seen security teams encounter when they focus on vulnerability remediation. These cases provide a very vivid illustration of the limitations and risks of focusing on vulnerability counts—and they underscore how much better things can be for organizations that take a holistic risk measurement approach.</p>
<h1>Problem #1: Focusing on the Wrong Metrics</h1>
<p>The security team at Acme Corporation runs their monthly vulnerability scan report and sees the environment contains 500,000 vulnerabilities. Through an “all-hands-on-deck” effort of late nights and concerted focus, they close out 50,000 vulnerabilities before the next month is out. Great work, right? Not so much. The next monthly report comes out and they see there are now 525,000 vulnerabilities.</p>
<p>What’s next, more coffee and even later nights? The reality is that without making significant additions to headcount, it will most likely be impossible for Acme to make a meaningful dent in vulnerability numbers each month.</p>
<p>While these numbers are depressing in their own right, I’d argue they’re the least of Acme’s problems. Ultimately, the focus is all wrong. Even if they could make a significant dent in vulnerability counts, a critical question remains: Has risk been reduced at all, and, if so, to what degree?</p>
<p>Contrast this reality with Acme after they start taking a risk measurement approach.</p>
<p>The team starts with a focus on assets, and works back from them to identify threats and determine how to guard against them. Every day, their efforts are dedicated to reducing risk, and they’re clear on the steps that will be most effective in realizing that objective. They may still have 500,000 vulnerabilities open. Even if total vulnerabilities rise, risk will tend to go down because staff is focusing on the most critical areas of exposure. The team can prioritize efforts and track progress against meaningful objectives. Ultimately, staff can be more effective at what matters most: reducing risk to the business.</p>
<h1>Problem #2: Ignoring Assets</h1>
<p>The reality for any security professional is that there are always more tasks than hours in the day. The trick is trying to figure out which tasks to prioritize, and which to ignore.</p>
<p>Historically, when staff members were weighing priorities, not much attention was given to Acme’s public-facing web site, which delivers publicly available technical resources to electronics consumers. While Acme requires visitors to register to access the site, they don’t charge site users or collect any payment information.</p>
<p>Acme’s leadership team doesn’t feel that their site and assets are likely targets, so they don’t want to allocate a lot of their limited IT budget and staff time to securing these online resources. Acme’s security team had plenty of other tasks to juggle, so they didn’t argue.</p>
<p>The problem is that given their site’s popularity, they’ve ultimately amassed a registered user base of 100 million, including email addresses and other personal information. Further, their site is routinely visited by thousands of unique visitors every day. Because they’re focusing on vulnerability remediation, the security team and executive leadership haven’t recognized the value of the assets associated with their online properties.</p>
<p>While the site’s value may be overlooked internally, it represents an appealing target for those who are constantly looking for user machines to compromise. The bad guys may launch a series of exploits to gain access to user emails, so they can launch a series of phishing attacks. Or, lured by the large volume of visitors, the criminals may exploit a persistent cross-site scripting vulnerability on Acme’s web servers that enables them to infect site visitors’ systems. Once they’ve infected enough machines, they may start creating botnets to launch DDoS attacks. Staff members don’t take the steps needed to guard against these types of attacks, and they’re incapable of stopping them when they happen.</p>
<p>By contrast, when Acme’s security team takes a risk measurement approach, they look at risk holistically, employing techniques like threat modeling. That means factoring in not only vulnerabilities, but Acme’s assets, and the types of threats they’re exposed to. Through threat modeling, they’ll do an objective analysis of the value that data, systems, and users may have to an attacker. They’ll also assess how sites and applications are used, the number of users, the types of attacks being waged against other organizations, critical vulnerabilities discovered on specific systems, and a number of other factors.</p>
<p>Through risk measurement, the security team can gain a much more complete picture of the most critical threats facing their business. Perhaps even more importantly, they’ll be able to gather objective analysis of threats and overall risk, so they can report effectively to executive leadership, and ensure investments are aligned with security priorities.</p>
<h1>Problem #3: Investing Time and Money on the Wrong Efforts</h1>
<p>After running a scan, John, an administrator at Acme, finds there are 10 vulnerabilities on a given system: one critical, two high, and seven low. Because John and his management team have a vulnerability-count mindset, he focuses on closing those seven low-priority issues and claims victory to his superiors: Vulnerabilities were reduced by 70 percent.</p>
<p>The problem is that John may not have reduced risk much or at all—and a lot of time, effort, and money went into that lack of accomplishment.</p>
<p>Once the team at Acme employs a risk measurement approach, things change substantially. By factoring in exploit threat intelligence, for example, they may find that, of the 10 vulnerabilities referenced earlier, one of the high-priority vulnerabilities is the one that’s currently being exploited most frequently by a sophisticated adversary. By closing that one vulnerability, John may reduce total <em>risk</em> by 70%, while spending far less time and money on remediation.</p>
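<p><i>The arithmetic behind John’s two outcomes can be sketched with hypothetical risk weights. The numbers below are illustrative assumptions chosen to mirror the scenario, not output from any real scoring system.</i></p>

```python
# Hypothetical risk weights for John's ten vulnerabilities: exploit
# intelligence reveals that one high-severity vuln carries most of the risk.
risks = {
    "critical": [3],
    "high": [28, 2],   # 28 = the one being actively exploited
    "low": [1] * 7,    # seven lows, little real-world risk each
}

all_risks = [r for sev in risks.values() for r in sev]
total_risk = sum(all_risks)

# Option A: close the seven low vulns — 70% of the count...
count_reduction = 7 / len(all_risks)
# ...but a small fraction of the risk.
risk_cut_lows = sum(risks["low"]) / total_risk

# Option B: close the one actively exploited high vuln — 70% of the risk.
risk_cut_exploited = 28 / total_risk
```

<p><i>Under these assumed weights, closing seven vulnerabilities removes far less risk than closing the single one that attackers are actually exploiting.</i></p>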
<h1>Conclusion: The Risk Measurement Payoff</h1>
<p>By moving from a focus on vulnerability remediation to risk measurement, organizations can reduce business exposure in two ways. First, they can reduce the likelihood of a breach. Second, they can minimize the potential impact of a breach should one occur.</p>
<p>I once heard a CISO say, “We can’t work any harder, we have to work smarter.” Fundamentally, risk measurement provides a way for security teams to work smarter. They can focus their time, budget, and resources on what matters most: reducing risk. Risk measurement also provides teams with a centralized way to accumulate, analyze, and report on risk, which helps significantly improve operational efficiency.</p>
<h1>Risk Measurement Can Take a Lot of Effort—Kenna Can Help</h1>
<p>While it’s easy to see how risk measurement can help, that doesn’t make it easy to do. A lot of intelligence needs to be correlated to do risk measurement right, and Kenna can help. Kenna automates much of the effort associated with aggregating and parsing intelligence, so security teams can much more easily measure risk and determine the most significant and efficient ways to reduce risk. By combining Kenna with best practice threat modeling approaches, organizations can employ their resources most intelligently to reduce the likelihood and potential impact of breaches.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/06/moving-from-vulnerability-remediation-to-risk-measurement/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The 2016 DBIR</title>
		<link>http://blog.kennasecurity.com/2016/05/the-2016-dbir/</link>
		<comments>http://blog.kennasecurity.com/2016/05/the-2016-dbir/#respond</comments>
		<pubDate>Wed, 11 May 2016 20:52:41 +0000</pubDate>
		<dc:creator><![CDATA[Karim Toubba]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4145</guid>
		<description><![CDATA[This month Kenna Security participated in the Verizon data breach report, and for the second year running we used our data to drive the perspective of the vulnerability section. Since then there have been some questions and criticisms of a specific subset of the data referenced in a footnote in the vulnerability section – namely the top 10 vulnerability list.... <a href="http://blog.kennasecurity.com/2016/05/the-2016-dbir/">Read more &#187;</a>]]></description>
				<content:encoded><![CDATA[<p>This month Kenna Security participated in the Verizon data breach report, and for the second year running we used our data to drive the perspective of the vulnerability section. Since then there have been some questions and criticisms of a specific subset of the data referenced in a footnote in the vulnerability section – namely the top 10 vulnerability list. I want to be clear that the criticism of the top 10 vulnerability list is fair and warranted, and we acknowledge the fact that we made a mistake. To put it simply, in an attempt to maintain vendor neutrality, the data and analysis used to generate the Top 10 list in the Verizon DBIR were very different from the analysis that was used to prioritize remediation within the Kenna platform.</p>
<p>The Kenna platform processes billions of pieces of vulnerability and exploit data daily for our customers, helping contextualize vulnerabilities so that security teams know what to prioritize and fix in their own environment. The data we submitted to the Verizon top 10 used only a raw subset of third-party exploitation data without taking any of the contextual data or our prioritization algorithms into consideration. As one of our customers constantly reminds me, “We can’t work harder anymore than we do today – we have to work smarter – and that is what your platform allows us to do.”</p>
<p>Looking at the much-discussed FREAK vulnerability as an example, if we had actually run the data through our platform and algorithms, it would not have risen to the level of a significant vulnerability. The Kenna platform is designed to ensure that our customers don’t prioritize a patch that could be a false positive or outlier by taking into account many variables including: volume and velocity of the exploit, exploit availability, weaponization of the exploit, whether or not that exploit has been observed as part of a greater campaign, relative priority of the asset on which the vulnerabilities sit, and over a dozen external sources. We looked at FREAK within the Kenna platform and saw that CVE-2015-0204 had a Kenna score of 25.0372 (out of 100). This is nowhere close to even a top 10,000 vulnerability, or even in the top 70% of all vulnerabilities.</p>
<p>I have always believed that you need to be clear about and uphold your values and this experience only underscores this belief. We at Kenna deeply value integrity in all of its forms, but especially of our data as it helps our customers “work smarter.” As we clearly did not exhibit that integrity with the top 10 results, we felt it was important to set the record straight.</p>
<p>Karim Toubba<br />
CEO – Kenna Security</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/05/the-2016-dbir/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Collaborative Data Science &#8211; Inside the 2016 Verizon DBIR Vulnerability Section.</title>
		<link>http://blog.kennasecurity.com/2016/05/collaborative-data-science-inside-the-2016-verizon-dbir-vulnerability-section/</link>
		<comments>http://blog.kennasecurity.com/2016/05/collaborative-data-science-inside-the-2016-verizon-dbir-vulnerability-section/#respond</comments>
		<pubDate>Mon, 02 May 2016 00:29:21 +0000</pubDate>
		<dc:creator><![CDATA[Michael Roytman]]></dc:creator>
				<category><![CDATA[Data Analysis]]></category>
		<category><![CDATA[Vulnerability Management]]></category>
		<category><![CDATA[Verizon DBIR]]></category>
		<category><![CDATA[vulnerability analysis]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4134</guid>
		<description><![CDATA[The best part about working in a nascent, yet-unsolved-perhaps-never-to-be-solved industry is that the smartest minds are often struggling with the same problems, and are only a tweet or a phone call away if you need help. I’ve had help from fellow data scientists, NIST and MITRE folk, competitors, practitioners, professors and the like. While rock-star-syndromes are surely out there and... <a href="http://blog.kennasecurity.com/2016/05/collaborative-data-science-inside-the-2016-verizon-dbir-vulnerability-section/">Read more &#187;</a>]]></description>
				<content:encoded><![CDATA[<p>The best part about working in a nascent, yet-unsolved-perhaps-never-to-be-solved industry is that the smartest minds are often struggling with the same problems, and are only a tweet or a phone call away if you need help. I’ve had help from fellow data scientists, NIST and MITRE folk, competitors, practitioners, professors and the like. While <a href="https://mumble.org.uk/blog/2016/04/30/malory-isnt-the-only-imposter-in-infosec/">rock-star-syndromes</a> are surely out there and aren’t very helpful, I want to point out that the industry is full of brilliant people without much ego, ready to help throughout their busy days.</p>
<p>Recently, Adrian Sanabria from 451 Research called me to question some of the assumptions I used to write the 2016 Verizon DBIR vulnerability section. We had a two-hour chat and some follow-up emails, and he led me to a new way of looking at this data. That kind of exchange is absolutely indispensable when we defenders are working together against a sentient attacker. It’s also thrilling! Where else does one get to collaborate with competitors, researchers, and clients alike as part of their day-to-day job?</p>
<p>Adrian’s big insight was that the top 10 list of vulnerabilities is confusing because its applicability to the enterprise is lost amid these arcane, working exploits, and the gap between people’s expectations and reality creates friction. We had an excellent offline discussion in which he dove deeply into the assumptions of my work and asked thoughtful, probing questions, and together we came up with a better metric for generating a top 10 vulnerabilities list.</p>
<p>To address these issues, I scaled the total successful exploitation count for every vulnerability in 2015 by the number of observed occurrences of that vulnerability in Kenna’s aggregate dataset. Sifting through 265 million vulnerabilities gives us a top 10 list perhaps more in line with what was expected – but equally unexpected! The takeaway here is that datasets like the one explored in the DBIR may be noisy and may contain false positives, but, carefully applied to your enterprise, the additional context that successful exploitation data lends to vulnerability management is priceless. Generating signal <em>is science</em>. So here’s a new top 10 list (biased, because occurrence rates are measured across Kenna Security customers – a very convenient convenience sample).</p>
<div id="attachment_4135" style="width: 1695px" class="wp-caption alignnone"><a href="http://blog.kennasecurity.com/wp-content/uploads/2016/05/Screen-Shot-2016-04-28-at-6.39.27-PM.png"><img class="size-full wp-image-4135" src="http://blog.kennasecurity.com/wp-content/uploads/2016/05/Screen-Shot-2016-04-28-at-6.39.27-PM.png" alt="Prevalence Scaled Top 10 Vulnerabilities" width="1685" height="1228" /></a><p class="wp-caption-text">Prevalence Scaled Top 10 Vulnerabilities</p></div>
<p><strong>1. CVE-2002-0013</strong> &#8211; Vulnerabilities in the SNMPv1 request handling of a large number of SNMP implementations allow remote attackers to cause a denial of service or gain privileges via (1) GetRequest, (2) GetNextRequest, and (3) SetRequest messages<br />
<strong>2. CVE-2002-0012</strong> &#8211; Vulnerabilities in a large number of SNMP implementations allow remote attackers to cause a denial of service or gain privileges via SNMPv1 trap handling<br />
<strong>3. CVE-2015-0204</strong> &#8211; The ssl3_get_key_exchange function in s3_clnt.c in OpenSSL before 0.9.8zd, 1.0.0 before 1.0.0p, and 1.0.1 before 1.0.1k allows remote SSL servers to conduct RSA-to-EXPORT_RSA downgrade attacks and facilitate brute-force decryption by offering a weak ephemeral RSA key in a noncompliant role, related to the &#8220;FREAK&#8221; issue. NOTE: the scope of this CVE is only client code based on OpenSSL, not EXPORT_RSA issues associated with servers or other TLS implementations.<br />
<strong>4. CVE-2001-0540</strong> &#8211; Memory leak in Terminal servers in Windows NT and Windows 2000 allows remote attackers to cause a denial of service (memory exhaustion) via a large number of malformed Remote Desktop Protocol (RDP) requests to port 3389.<br />
<strong>5. CVE-2015-1637</strong> &#8211; Schannel (aka Secure Channel) in Microsoft Windows Server 2003 SP2, Windows Vista SP2, Windows Server 2008 SP2 and R2 SP1, Windows 7 SP1, Windows 8, Windows 8.1, Windows Server 2012 Gold and R2, and Windows RT Gold and 8.1 does not properly restrict TLS state transitions, which makes it easier for remote attackers to conduct cipher-downgrade attacks to EXPORT_RSA ciphers via crafted TLS traffic, related to the &#8220;FREAK&#8221; issue, a different vulnerability than CVE-2015-0204 and CVE-2015-1067.<br />
<strong> 6. CVE-2012-0152</strong> &#8211; The Remote Desktop Protocol (RDP) service in Microsoft Windows Server 2008 R2 and R2 SP1 and Windows 7 Gold and SP1 allows remote attackers to cause a denial of service (application hang) via a series of crafted packets, aka &#8220;Terminal Server Denial of Service Vulnerability.&#8221;<br />
<strong>7. CVE-2001-0877</strong> &#8211; Universal Plug and Play (UPnP) on Windows 98, 98SE, ME, and XP allows remote attackers to cause a denial of service via (1) a spoofed SSDP advertisement that causes the client to connect to a service on another machine that generates a large amount of traffic (e.g., chargen), or (2) via a spoofed SSDP announcement to broadcast or multicast addresses, which could cause all UPnP clients to send traffic to a single target system.<br />
<strong>8. CVE-2001-0876</strong> &#8211; Buffer overflow in Universal Plug and Play (UPnP) on Windows 98, 98SE, ME, and XP allows remote attackers to execute arbitrary code via a NOTIFY directive with a long Location URL.<br />
<strong>9. CVE-2013-0229</strong> &#8211; The ProcessSSDPRequest function in minissdp.c in the SSDP handler in MiniUPnP MiniUPnPd before 1.4 allows remote attackers to cause a denial of service (service crash) via a crafted request that triggers a buffer over-read.<br />
<strong>10. CVE-2014-0160</strong> &#8211; The (1) TLS and (2) DTLS implementations in OpenSSL 1.0.1 before 1.0.1g do not properly handle Heartbeat Extension packets, which allows remote attackers to obtain sensitive information from process memory via crafted packets that trigger a buffer over-read, as demonstrated by reading private keys, related to d1_both.c and t1_lib.c, aka the Heartbleed bug.</p>
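<p>The prevalence scaling behind this list can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the DBIR pipeline, and the counts below are invented for the example:</p>

```python
def prevalence_scaled_top(exploitations, occurrences, n=10):
    """Rank CVEs by successful exploitations per observed occurrence.

    exploitations: dict mapping CVE id -> count of successful exploitation events
    occurrences:   dict mapping CVE id -> count of open instances seen in scan data
    """
    scaled = {
        cve: hits / occurrences[cve]
        for cve, hits in exploitations.items()
        if occurrences.get(cve)  # skip CVEs never observed in the scan dataset
    }
    return sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Illustrative numbers only -- not from the actual dataset.
exploitations = {"CVE-2002-0013": 90000, "CVE-2015-0204": 400000, "CVE-2014-0160": 12000}
occurrences = {"CVE-2002-0013": 30000, "CVE-2015-0204": 2000000, "CVE-2014-0160": 50000}
print(prevalence_scaled_top(exploitations, occurrences, n=3))
```

<p>Note how the raw exploitation leader in this toy data (CVE-2015-0204) drops to the bottom once you divide by how widespread the vulnerability is.</p>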
<p>What we set out to build at Kenna Security is a new way of thinking about information security vulnerabilities – <strong>a framework for efficiently measuring remediation,</strong> a way to stay ahead of attackers by making sure that every action was a useful one.</p>
<p>At first it seemed like an impossible task, precisely because of the issues outlined in @attritionorg’s incredibly thorough and well-thought-out <a href="https://blog.osvdb.org/2016/04/27/a-note-on-the-verizon-dbir-2016-vulnerabilities-claims/">criticism</a> of the <a href="https://www.kennasecurity.com/wp-content/uploads/data-breach-investigation-report-2016.pdf">Verizon DBIR’s vulnerability chapter.</a> On the whole, Brian is correct: IDS alerts generate a ton of false positives, vulnerability scanners often don’t revisit signatures, and CVE is not a complete list of vulnerability definitions.</p>
<p>But those are just the trees, and we’ll get to them later. The forest is that this somewhat stark status quo is <strong>exactly the situation faced by thousands of enterprise practitioners</strong>, who are armed with nothing but their vulnerability scans and some logs, and must wade through millions of vulnerabilities and determine what must be done next. We seek to combine these datasets, in some inventive ways – not to say “this somewhat noisy data indicates you must fix an FTP vulnerability or perish”, but rather <strong>to use exploitation data to add context to vulnerability scans</strong>, and to confirm some assumptions about automated attacks.</p>
<p>But most importantly, we seek to give the reader of the Verizon DBIR cold, hard data to take back to management and be able to say <strong>“Yes, I know it’s Patch Tuesday, but today I shall fix a CVE from 2006, because it’s likely actively exploited, and 273 of our servers are vulnerable to it.”</strong> That is the reason the Verizon DBIR is useful and esteemed. Insights generated from data, no matter how counter-intuitive, are the whole reason data science exists and can help security practitioners.</p>
<p>Dan Geer and Jay Jacobs’ article on <a href="https://www.usenix.org/system/files/login/articles/11_geer.pdf">how to properly describe convenience samples </a>(make no mistake, this is one of them), borrows a few excellent ideas from medicine and lays out a few guidelines, so I will follow them here (better late than never):</p>
<p>The data used by the Verizon DBIR Vulnerability section is comprised of two datasets.</p>
<p>The first is a convenience sample that includes 2,442,792 assets (defined as workstations, servers, databases, IPs, mobile devices, etc.) and 264,912,235 vulnerabilities associated with those assets. The vulnerabilities are generated by eight different scanners: Beyond Security, Tripwire, McAfee VM, Qualys, Tenable, BeyondTrust, Rapid7, and OpenVAS. This dataset is used in determining remediation rates and the normalized open rate of vulnerabilities.</p>
<p>The second is a convenience sample that includes 3,615,706,022 successful exploitation events, all of which took place in 2015 and come from partners such as AlienVault’s Open Threat Exchange.</p>
<p>Please note the methodology of data collection: successful exploitation is defined as one successful technical exploitation of a vulnerability on one machine at a particular timestamp. The event is defined as: 1. an asset has a known CVE open; 2. an attack comes in that matches the signature for that CVE on that asset; and 3. one or more IOCs are detected/correlated post-attack. It is not necessarily a loss of data, or even root on a machine. It is just the successful use of whatever condition is outlined in a CVE. If any readers would like to see a sample of the dataset, feel free to reach out to me at my Kenna Security email, michael@kennasecurity.com. Below are descriptive statistics for every CVE in the original top 5. The data comes from internally instrumented dashboards which update live as new data rolls in (every hour).</p>
<ol>
<li>CVE-2015-1637 http://www.stathat.com/s/4813HmgKhyPk</li>
<li>CVE-2015-0204 http://www.stathat.com/s/YSHQp2H6OAin</li>
<li>CVE-2003-0818 http://www.stathat.com/s/4qHxrmPDWum3</li>
<li>CVE-2002-1054 http://www.stathat.com/s/8TzXxTnyYyJ3</li>
<li>CVE-2002-0126 http://www.stathat.com/s/14wGpFQFpe6w</li>
</ol>
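<p>The three-part event definition above can be expressed as a simple predicate. This is a hypothetical sketch (the field names are mine, not Kenna’s schema); real correlation happens across scanner, IDS, and IOC feeds:</p>

```python
from dataclasses import dataclass

@dataclass
class Asset:
    open_cves: set  # CVEs a scanner reports as open on this asset

def is_successful_exploitation(asset, attack_cve, post_attack_iocs):
    """All three conditions from the definition must hold:
    1. the asset has the CVE open,
    2. the incoming attack matches that CVE's signature, and
    3. one or more IOCs are detected/correlated post-attack."""
    return attack_cve in asset.open_cves and len(post_attack_iocs) > 0

asset = Asset(open_cves={"CVE-2015-0204"})
print(is_successful_exploitation(asset, "CVE-2015-0204", ["beacon-to-known-c2"]))  # True
print(is_successful_exploitation(asset, "CVE-2014-0160", ["beacon-to-known-c2"]))  # False: CVE not open
```

<p>Nothing in the predicate requires data loss or root access, which is exactly the caveat above: it counts the successful use of the condition a CVE describes, nothing more.</p>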
<p>Enterprises use vulnerability scanners to manage vulnerabilities, so the lowest-hanging fruit is to answer the question “Of these vulnerabilities, where should I focus my remediation efforts?” In fact, the entire vulnerabilities section of this year’s DBIR focuses on what can be done given the status quo of vulnerability management, not what should be done in a perfect world. The underlying distributions indicate that (1) attackers often target old vulnerabilities and (2) attackers automate their campaigns and spray across the internet. Those implications can then be used to formulate a more effective strategy. Essentially, Patch Tuesday rolls along. Everyone scrambles to fix the new MS vulnerabilities. The data says “No. Be better.”</p>
<div id="attachment_4136" style="width: 392px" class="wp-caption aligncenter"><a href="http://blog.kennasecurity.com/wp-content/uploads/2016/05/NO.png"><img class="size-full wp-image-4136" src="http://blog.kennasecurity.com/wp-content/uploads/2016/05/NO.png" alt="data data data everywhere" width="382" height="393" /></a><p class="wp-caption-text">Data is your voice of reason.</p></div>
<p>Our point is not “If you don’t have BlackMoon FTP you’re safe.” Our point is that fat-tailed statistical distributions necessitate a different approach to vulnerability management. It would be a shame if we lost the forest for the exploit signatures.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/05/collaborative-data-science-inside-the-2016-verizon-dbir-vulnerability-section/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Must-Have Metrics for Vulnerability Management: Part 3</title>
		<link>http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-3/</link>
		<comments>http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-3/#respond</comments>
		<pubDate>Wed, 30 Mar 2016 15:50:56 +0000</pubDate>
		<dc:creator><![CDATA[Ed Bellis]]></dc:creator>
				<category><![CDATA[Vulnerability Assessment]]></category>
		<category><![CDATA[Vulnerability Intelligence]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4126</guid>
		<description><![CDATA[This is part 3 of a 3-part series on Must-Haves for Vulnerability Management. Read Part 1 here and Part 2 here. Must Have #4: Know Your Resources Once you have a good handle on your business, your assets, and what security risks are currently affecting your environment, you’ll need to understand your resources. What do you have at your disposal... <a href="http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-3/">Read more &#187;</a>]]></description>
				<content:encoded><![CDATA[<p>This is part 3 of a 3-part series on Must-Haves for Vulnerability Management. Read Part 1 <a href="http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-i/">here </a>and Part 2 <a href="http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-2/">here</a>.</p>
<p><strong>Must-Have #4: Know Your Resources</strong></p>
<p>Once you have a good handle on your business, your assets, and what security risks are currently affecting your environment, you’ll need to understand your resources. What do you have at your disposal to eliminate risks going forward—including people, money, time or a combination thereof?</p>
<p>If you have identified what resources you have at your disposal, you can set reasonable or stretch goals to eliminate security risk targeting the environments and processes that matter most. You, of course, could turn this approach on its head and use the first 3 “must haves” to identify and budget the appropriate resources to manage risk down to an acceptable level for your business.</p>
<p>What areas are most crucial for risk reduction within your business? What security risks are you carrying within those areas? What types and how many resources do you need to remove risk?</p>
<p>“Risk” here should be thought of as <em>uncertainty</em>, specifically within a security context. By knowing your resources and targeting the greatest “bang for your buck” risks, you’ll be able to reduce risk in a very efficient manner.</p>
<p>Some useful metrics here include:</p>
<ol>
<li>Budget spent on security remediation</li>
<li>Risk carried above tolerance level</li>
<li>Hours per security solution</li>
</ol>
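<p>Metric 2, risk carried above your tolerance level, is straightforward to compute once each asset group has a risk score. A minimal sketch, with hypothetical group names and a made-up tolerance line:</p>

```python
def risk_above_tolerance(group_scores, tolerance):
    """For each asset group, how far its risk score sits above the agreed tolerance.

    group_scores: dict mapping asset-group name -> current risk score (0-100)
    tolerance:    the score the organization has agreed to accept
    """
    return {group: score - tolerance
            for group, score in group_scores.items()
            if score > tolerance}

# Hypothetical scores -- groups at or below tolerance drop out of the report.
scores = {"DMZ": 82, "internal-workstations": 55, "PCI-servers": 91}
print(risk_above_tolerance(scores, tolerance=60))  # {'DMZ': 22, 'PCI-servers': 31}
```

<p>The groups that remain are exactly where additional remediation resources would buy the most risk reduction.</p>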
<p><strong>Keep in mind:</strong> When you’re tracking your progress in reducing risk, it’s important not to fall into the trap of simply counting vulnerabilities. While the number of closed vulnerabilities is a metric that all teams will want to keep tracking, what matters more is the ability to reduce overall risk, and to do so effectively with the resources at hand. It’s even possible that, as the most critical vulnerabilities—a relatively small number—are closed, the trend line of total vulnerabilities rises while the risk line goes down.</p>
<p><strong>Must Have #5: Know Your Direction</strong></p>
<p>You’re now in a state where you understand where your assets are, how to discover new assets as they pop up, what risk these assets carry based on your business and your security weaknesses, and what resources you have at your disposal to reduce that risk in an efficient manner.</p>
<p>It’s now time to continuously measure these metrics to understand your direction and set meaningful goals to reduce risk over time while continuing to support business objectives. As baselines are established, you can then work with the organization to target the areas of risk that are not within an acceptable range.</p>
<p>With knowledge of the resources you have at your disposal, you can quantitatively demonstrate what a reasonable risk reduction goal could be or make a case for additional resources based on your organization’s risk tolerance.</p>
<p>Some useful metrics here include:</p>
<ol>
<li>Risk reduction by asset group over time</li>
<li>Risk goal by asset group</li>
<li>Cumulative risk accepted over time</li>
</ol>
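<p>Metric 1, risk reduction by asset group over time, only needs a history of scores per group. A sketch under the assumption that you record one score per group per period (the data below is invented):</p>

```python
def risk_reduction(history):
    """Risk reduction per asset group between the first and latest measurement.

    history: dict mapping asset-group name -> list of (period, risk_score)
             tuples, ordered oldest to newest. Positive results mean risk fell.
    """
    return {group: series[0][1] - series[-1][1]
            for group, series in history.items()
            if len(series) >= 2}

history = {
    "DMZ": [("2016-01", 80), ("2016-02", 72), ("2016-03", 61)],
    "PCI-servers": [("2016-01", 90), ("2016-03", 88)],
}
print(risk_reduction(history))  # {'DMZ': 19, 'PCI-servers': 2}
```

<p>Comparing these reductions against the per-group goals from metric 2 shows at a glance which teams are on track.</p>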
<p>And finally, it should be possible to preview how much your risk will be adjusted by any activity. This ensures that you are fixing the most critical issues first and spending your resources wisely.</p>
<p><strong>Summin&#8217; Up&#8230;<br />
</strong></p>
<p>Vulnerability Management is a function that needs to be driven by the right metrics, with the goal of identifying and reducing the organization’s overall exposure to risk. Having the appropriate set of numbers, metrics, and measurements will facilitate that process.</p>
<p>The ultimate objective is to have an accurate overall picture of risk—in terms of overall asset counts, the threats particular to the business, the resources you have available, and the trend line of your overall risk. In doing so, vulnerability management becomes as clear and precise as possible in an environment where the threats themselves are murky, ever-changing, and increasingly difficult to identify.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-3/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Must-Have Metrics for Vulnerability Management: Part 2</title>
		<link>http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-2/</link>
		<comments>http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-2/#respond</comments>
		<pubDate>Wed, 30 Mar 2016 15:41:32 +0000</pubDate>
		<dc:creator><![CDATA[Ed Bellis]]></dc:creator>
				<category><![CDATA[Vulnerability Assessment]]></category>
		<category><![CDATA[Vulnerability Intelligence]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4124</guid>
		<description><![CDATA[This blog is Part 2 in a 3-part series on Must-Have Metrics for Vuln Management. Read Part 1 here. Must-Have #2: Know Your Business In order to understand the most pertinent threats and measure the likelihood of exploits, you really need to understand these factors within the context of your business. A great way to apply this knowledge to security... <a href="http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-2/">Read more &#187;</a>]]></description>
				<content:encoded><![CDATA[<p>This blog is Part 2 in a 3-part series on Must-Have Metrics for Vuln Management. <a href="http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-i/">Read Part 1 here</a>.</p>
<p><strong>Must-Have #2: Know Your Business</strong></p>
<p>In order to understand the most pertinent threats and measure the likelihood of exploits, you really need to understand these factors within the context of your business. A great way to apply this knowledge to security is through threat modeling. There are numerous methodologies for threat modeling out there, ranging from the heavyweight to the “back of the napkin.”</p>
<p>But don’t worry about starting with something highly sophisticated: there are significant advantages to be realized just by going from nothing to something. By knowing even some of the most basic attributes of your business and applications, you can begin to understand what the most likely threats are. Some simple examples include:</p>
<ol>
<li>What broad metadata do you have about the organization? Your industry vertical, size, and geography all play a role here.</li>
<li>What information is processed by your application or assets, and what are the value and/or regulations around such information?</li>
<li>How many people use an application? This is an often overlooked but critical attribute.</li>
<li>What and where are the key assets to your business? Are there critical controls for these assets to protect confidentiality, integrity, or availability?</li>
<li>Who are your adversaries and what are their capabilities? Occam’s Razor applies here.</li>
</ol>
<p>Some useful metrics here include:</p>
<ol>
<li>System Susceptibility
<ol>
<li>Value to Attackers</li>
<li>Vulnerabilities</li>
</ol>
</li>
<li>Time to Compromise (Hacker economics): How long would it take to compromise any of the key controls for these assets and applications?</li>
<li>Threat Accessibility
<ol>
<li>Access Points and Attack Surface</li>
</ol>
</li>
<li>Threat Capability
<ol>
<li>Tools</li>
<li>Resources</li>
</ol>
</li>
</ol>
<p><strong>Does your threat model include Alexa ratings? </strong>As an example of this, take two applications. One is used by your internal employees for HR and contains sensitive information such as social security numbers, health care information, etc. The other is a public application that contains no sensitive information and all data is public.</p>
<p>If you don’t evaluate the size of your user base, it seems obvious that the risk of the application processing sensitive data far outweighs that of the public application. However, if the public application has 100 million users—and those users can be attacked through a persistent cross-site scripting vulnerability in the application—does that change things?</p>
<p><strong>Must-Have #3: Know Your Risk</strong></p>
<p>Counting vulnerabilities and relying on static scores no longer works; these methods don’t account for the fact that threats change constantly, so there needs to be a tried-and-true methodology in place for measuring real security risk. If you build your process on top of a broken risk model, you end up in a riskier position faster. Taking a risk-based approach over raw quantities and an “effort complete” method is a must.</p>
<p>In order to understand your risk, at a minimum you’ll need a handle on both likelihood and impact. There are many factors that go into such a methodology. Some questions to ask include:</p>
<ol>
<li><strong>Asset metadata</strong>: Fed by the first two “must haves” areas, do you understand who owns the asset, what the function of the asset is, and how it’s used? What’s the impact of losing the confidentiality, integrity, or availability of the asset? Which is the most important based on its function?</li>
<li><strong>Vulnerabilities</strong>: What are the weaknesses and vulnerabilities tied to this asset or group of assets? How easy or difficult is it to exploit these weaknesses?</li>
<li><strong>Threats</strong>: What are the threats associated with the security holes as well as to your business? How skilled is your adversary and what skills are required to exploit your weaknesses? How prevalent are these vulnerabilities being exploited in the wild? Are you likely to be hit by a “drive by”?</li>
</ol>
<p>Most importantly, there should be a single score that unifies the entire environment, based on real-time exposure to risk. From there, it’s possible to move down into other important asset groups and categories.</p>
<p>You also need to be able to track your progress over time. Think of exposure to risk like a stock report. What was your exposure last week, and last month? Has the risk line trended down or up? What are your “high” and “low” points over the past 52 weeks?</p>
<p>Showing your team’s ability to reduce exposure to risk over time is a critical component of arguing for a team’s efficiency, ability and—at the end of each year—the need for additional budget and headcount.</p>
<p>Some useful metrics here include:</p>
<ol>
<li>Risk by asset group both current and trending over time</li>
<li>Mean time to risk reduction, where risk reduction is a target or goal</li>
<li>Time to remediate high risks broken down by asset groups</li>
</ol>
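<p>Metric 3 can be implemented as a mean open-to-close time per asset group. A sketch under the assumption that you can export (group, opened, closed) records for vulnerabilities already judged high-risk; the records below are hypothetical:</p>

```python
from datetime import date

def mean_days_to_remediate(closed_vulns):
    """Mean number of days from open to close, per asset group.

    closed_vulns: iterable of (asset_group, opened_date, closed_date) tuples.
    """
    per_group = {}
    for group, opened, closed in closed_vulns:
        per_group.setdefault(group, []).append((closed - opened).days)
    return {group: sum(days) / len(days) for group, days in per_group.items()}

# Hypothetical records for high-risk vulnerabilities.
records = [
    ("DMZ", date(2016, 1, 4), date(2016, 1, 18)),
    ("DMZ", date(2016, 2, 1), date(2016, 2, 11)),
    ("internal", date(2016, 1, 10), date(2016, 3, 10)),
]
print(mean_days_to_remediate(records))  # {'DMZ': 12.0, 'internal': 60.0}
```

<p>Trending this number per group, rather than globally, is what surfaces the teams that need help.</p>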
<p>We still have two more Must-Haves coming up. Stay tuned for Part 3&#8230;</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Must-Have Metrics for Vulnerability Management: Part I</title>
		<link>http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-i/</link>
		<comments>http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-i/#respond</comments>
		<pubDate>Wed, 30 Mar 2016 00:23:15 +0000</pubDate>
		<dc:creator><![CDATA[Ed Bellis]]></dc:creator>
				<category><![CDATA[Vulnerability Assessment]]></category>
		<category><![CDATA[Vulnerability Intelligence]]></category>
		<category><![CDATA[Vulnerability Management]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4122</guid>
<description><![CDATA[In this series of blog posts, we&#8217;ll cover the must-have metrics for vulnerability management. The rising cadence of automated attacks means that security teams need to strive to make their own practices as precise and metric-driven as possible. Poring over spreadsheets and creating 500-page PDFs is no longer enough to ensure that critical vulnerabilities are remediated in time. But what’s... <a href="http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-i/">Read more &#187;</a>]]></description>
				<content:encoded><![CDATA[<p>In this series of blog posts, we&#8217;ll cover the must-have metrics for vulnerability management.</p>
<p>The rising cadence of automated attacks means that security teams need to strive to make their own practices as precise and metric-driven as possible. Poring over spreadsheets and creating 500-page PDFs is no longer enough to ensure that critical vulnerabilities are remediated in time. But what’s the best way to ensure that the right metrics are applied to the practice of vulnerability management—a security function that has occasionally been seen as directionless in the past?</p>
<p>Here are a few key areas where you’ll need to apply must-have metrics for vulnerability management:</p>
<p><strong>Know Your Assets</strong>: Do you know where all your assets and applications are? What is your current assessment coverage? How do you discover new assets?</p>
<p><strong>Know Your Business</strong>: Are you performing threat modeling? What threats exist to your business? Are you a target?</p>
<p><strong>Know Your Risk</strong>: Where are your security weaknesses and vulnerabilities, and which ones are the most likely to be exploited? How do you determine likelihood and impact?</p>
<p><strong>Know Your Resources</strong>: What can you get done with the resources you have? Are you accounting for budget, time, and people?</p>
<p><strong>Know Your Direction</strong>: Are you getting better or worse over time? Given the other “must haves” above, what is an achievable goal for risk reduction?</p>
<p>The reality is, the old methods of vulnerability management—using spreadsheets and counting vulnerabilities—still have their place. But for a new world of rising threats and attacks, a new set of metrics is necessary in order to keep pace with the rising cadence of critical vulnerabilities.</p>
<p><strong>Must-Have #1: Know Your Assets</strong></p>
<p>Before you can even begin to assess your security risk or posture, you first need to know what you have. This includes all of your assets—whether in the data center, on your corporate network, part of remote access, or part of your applications.</p>
<p>Of course, that’s easier said than done; knowing where ALL of your assets are is a daunting task for any sizeable organization. Identifying 100% of your assets is often dismissed by practitioners as something only vendors (who’ve never had to do it themselves) say is possible.</p>
<p>Rather than writing this off as an impossible task, though, treat this objective as a metric with specific goals and progress tracking. There’s a certain Rumsfeldian aspect to asset tracking: there are “known knowns, known unknowns, and unknown unknowns” on your networks and managed by your organization.</p>
<p>In order to manage and measure this metric, you’ll need an automated discovery process. Starting from the outside in, you’ll need to understand your DNS and WHOIS records. What IP address ranges and domains do you own? What ports, applications, and services are running on them? What is your process for discovering new assets, services, and DNS records—and is that process automated? How do you feed these assets into your assessment and scanning processes?</p>
<p>Some useful metrics here include:</p>
<ol>
<li>External scanner coverage (scanned assets / known assets)</li>
<li>Internal scanner coverage (scanned assets / known assets)</li>
<li>Time to discover (lower is better)</li>
</ol>
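<p>The two coverage metrics are simple ratios; the catch is that the known-asset denominator must come from your discovery process, not from the scanner itself. A minimal sketch with invented asset identifiers:</p>

```python
def scanner_coverage(known_assets, scanned_assets):
    """Fraction of known assets that appear in scan results (higher is better).

    known_assets:   set of asset identifiers from the discovery process
    scanned_assets: set of asset identifiers the scanner actually assessed
    """
    if not known_assets:
        return 0.0
    return len(known_assets & scanned_assets) / len(known_assets)

# Hypothetical inventory vs. scan results.
known = {"10.0.0.1", "10.0.0.2", "10.0.0.3", "app.example.com"}
scanned = {"10.0.0.1", "10.0.0.3", "app.example.com"}
print(scanner_coverage(known, scanned))  # 0.75
```

<p>Run the same calculation separately for external and internal scans; the gap between the two ratios is itself a useful signal about where discovery is weakest.</p>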
<p>More metrics to come in the next post&#8230;</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/03/must-have-metrics-for-vulnerability-management-part-i/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Enhanced Reporting Capabilities in Kenna: It&#8217;s All About Risk</title>
		<link>http://blog.kennasecurity.com/2016/03/enhanced-reporting-capabilities-in-kenna-its-all-about-risk/</link>
		<comments>http://blog.kennasecurity.com/2016/03/enhanced-reporting-capabilities-in-kenna-its-all-about-risk/#respond</comments>
		<pubDate>Wed, 09 Mar 2016 16:25:07 +0000</pubDate>
		<dc:creator><![CDATA[Greg Howard]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://blog.kennasecurity.com/?p=4115</guid>
		<description><![CDATA[We&#8217;re thrilled to announce our new reporting capabilities today. Kenna has always been an unparalleled platform for vulnerability prioritization&#8211;enabling security teams to identify their most critical vulnerabilities and take the right actions to help remediate them. But with the introduction of our new reports, Kenna becomes something else: a security analytics platform that helps organizations measure, monitor, and track their... <a href="http://blog.kennasecurity.com/2016/03/enhanced-reporting-capabilities-in-kenna-its-all-about-risk/">Read more &#187;</a>]]></description>
				<content:encoded><![CDATA[<p>We&#8217;re thrilled to announce our new reporting capabilities today. Kenna has always been an unparalleled platform for vulnerability prioritization&#8211;enabling security teams to identify their most critical vulnerabilities and take the right actions to help remediate them.</p>
<p>But with the introduction of our new reports, Kenna becomes something else: a security analytics platform that helps organizations measure, monitor, and track their overall exposure to risk. By framing the discussion in terms of risk&#8211;as opposed to, say, the number of CVSS-ranked 8s, 9s, or 10s that exist in an environment&#8211;Kenna allows its customers to center the conversation around a single risk metric that even a non-technical businessperson can understand.</p>
<p><a href="http://blog.kennasecurity.com/wp-content/uploads/2016/03/reporting2-slider-img-5.png" rel="attachment wp-att-4117"><img class="alignnone size-full wp-image-4117" src="http://blog.kennasecurity.com/wp-content/uploads/2016/03/reporting2-slider-img-5.png" alt="reporting2-slider-img-5" width="2066" height="720" /></a></p>
<p>Kenna&#8217;s new reporting enables you to:</p>
<p><strong>Track your progress like a stock report.</strong> What was your highest and lowest risk score over time? What&#8217;s your exposure to risk right this second?</p>
<p><strong>Measure your risk by asset group or vulns.</strong> Look at associated risk in both asset groups and vulnerabilities&#8211;then immediately drill down to see what&#8217;s at the highest risk.</p>
<p><strong>Track the vuln numbers that matter most. </strong>Even though Kenna keeps your focus on risk, it still gives you the vulnerability numbers that your stakeholders need&#8211;including total number of closed vulns, vulnerability aging, mean time to remediate, and more.</p>
<p><strong>Know the work you have left to do.</strong> Kenna’s unique “fixes” feature tells you not only what vulnerabilities you have, but what patch will close them. You can report on the effectiveness of our Fixes by seeing how many you still need to implement and the number of CVEs affected by them.</p>
<p><strong>Take snapshots of risk with custom ranges. </strong>Track your progress in reducing risk using whatever date range you want. See how effective you&#8217;ve been over time, and prove the impact of your teams&#8217; remediation efforts.</p>
<p><a href="http://blog.kennasecurity.com/wp-content/uploads/2016/03/reporting2-hero-ss.jpg" rel="attachment wp-att-4116"><img class="alignnone size-full wp-image-4116" src="http://blog.kennasecurity.com/wp-content/uploads/2016/03/reporting2-hero-ss.jpg" alt="reporting2-hero-ss" width="1800" height="1865" /></a></p>
<p><strong>What are all these reports tracking?</strong><br />
Kenna takes your vulnerability scan data and integrates it with real-time exploit intelligence, using a patented algorithm that takes into account more than 220 billion vulnerabilities processed daily.</p>
<p>From this data, we’re able to tell which of your assets are most at risk—based on active Internet breaches, popular targets, and easily exploitable vulnerabilities. We’re able to put a score on your groups of assets and let you know exactly which assets are most at risk of a breach.</p>
<p>The Kenna risk score is used and trusted by major Fortune 500 organizations; their CISOs use this reporting to report on risk to their boards. Kenna Reporting is data you can depend on—and it offers unmatched insight into your true risk posture that you can use to communicate with your entire organization.</p>
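<p>Kenna&#8217;s scoring algorithm itself is proprietary, but the general principle&#8211;weighting a vulnerability&#8217;s severity by live exploit activity&#8211;can be sketched in a few lines. Every field name, weight, and cap below is invented purely for illustration:</p>

```python
# Toy prioritization sketch: boost severity when exploit intelligence
# reports active exploitation. Not Kenna's actual model.
vulns = [
    {"cve": "CVE-2016-0001", "cvss": 9.8, "actively_exploited": False},
    {"cve": "CVE-2016-0002", "cvss": 7.2, "actively_exploited": True},
]

def toy_risk(vuln):
    """Scale CVSS (0-10) onto a 0-100 score, boosting vulns with
    observed exploitation and capping the result at 100."""
    score = vuln["cvss"] * 10
    if vuln["actively_exploited"]:
        score = min(100, score * 1.5)
    return score

# The actively exploited vuln outranks the one with the higher CVSS.
for v in sorted(vulns, key=toy_risk, reverse=True):
    print(f'{v["cve"]}: {toy_risk(v):.1f}')
```

<p>The point of the sketch is the ordering: a lower-severity vulnerability under active attack can rank above a higher-severity one that nobody is exploiting, which is exactly the conversation a risk score moves you toward.</p>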
<p><a href="http://www.kennasecurity.com/signup">Try it for yourself here</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.kennasecurity.com/2016/03/enhanced-reporting-capabilities-in-kenna-its-all-about-risk/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
