<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>
	<title type="text">Eli Grey</title>
	<subtitle type="text"></subtitle>

	<updated>2024-01-30T02:17:43Z</updated>

	<link rel="alternate" type="text/html" href="https://eligrey.com/blog" />
	<id>https://eligrey.com/blog/feed/atom/</id>
	<link rel="self" type="application/atom+xml" href="https://eligrey.com/blog/feed/atom/" />

	<generator uri="https://wordpress.org/" version="6.4.2">WordPress</generator>
	<entry>
		<author>
			<name>Eli Grey</name>
					</author>

		<title type="html"><![CDATA[Big Tech&#8217;s role in enabling link fraud]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/link-fraud/" />

		<id>https://eligrey.com/blog/?p=735</id>
		<updated>2024-01-30T02:17:43Z</updated>
		<published>2024-01-09T03:20:00Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" />
		<summary type="html"><![CDATA[Link fraud is increasingly undermining trust in major online platforms, including ad-supported websites like Google, Bing, and Twitter.com. These platforms allow advertisers to spoof links with unverified &#8216;vanity URLs&#8217;, laundering trust in their systems, while simultaneously deflecting blame onto advertisers when these mechanisms are exploited for fraudulent purposes.  I believe that this status quo must [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/link-fraud/"><![CDATA[
<p>Link fraud is increasingly undermining trust in major online platforms, including ad-supported websites like Google, Bing, and Twitter.com. These platforms <a href="https://support.google.com/google-ads/answer/6246601">allow advertisers to spoof links</a> with <em>unverified</em> &#8216;vanity URLs&#8217;, laundering trust in their systems, while simultaneously deflecting blame onto advertisers when these mechanisms are exploited for fraudulent purposes. </p>
<p>I believe that this status quo must be abolished. Commercial entities that maintain advertising systems that systemically enable link fraud must contend with their net-negative impact on society.</p>
<h2>What are vanity URLs and what is link fraud?</h2>
<p>URL spoofing is the act of presenting an internet address that appears to lead to one destination but actually leads to another, unexpected location. The adtech industry commonly refers to spoofed URLs as &#8220;vanity URLs&#8221; when URL spoofing is provided as a first-class feature on adtech platforms.</p>
<p>Link fraud is the use of URL spoofing to achieve financial gain or other illicit objectives. It is a staple practice in spam emails and scam websites, where links may appear legitimate but lead to harmful content.</p>
<!--
<p>Misuse of vanity URLs is so prevalent across the web that professionals can find it difficult to be aware of links' corresponding destinations, even outside of vanity URL applications. For example, the current top Google search result for <q title="(searched from the San Francisco Bay Area)">ads vanity URL </q> <a href="https://neilpatel.com/blog/vanity-url/#:~:text=here%E2%80%99s%20the%20actual%20URL%20it%20redirects%20you%20to">confuses link attribution in its own content</a> due to the verbosity of the links, without even realizing it. The result is an article titled "How to Use Vanity URLs for Paid Ads" on neilpatel.com. In the first example of a vanity URL, the blog post presents a screenshot of a Kickstarter ad on Facebook. The post describes the ad and displays a link that does not correspond with the example, instead showing a link that is for the next vanity URL example. This shows just how difficult it can be to detect spoofed vanity URLs.</p>-->
<h2>Examples of link fraud</h2>
<p>Google Search:</p>
<ul>
<li>See <a href="https://twitter.com/1ZRR4H/status/1735798531921698853">this compilation of reports on Twitter.com by Germán Fernández</a>.</li>
<li>See <a href="https://twitter.com/ericlaw/status/1712531148356661494">this post on Twitter.com by Eric Lawrence</a>.</li>
</ul>
<p>Microsoft Bing: See <a href="https://www.forbes.com/sites/jasonevangelho/2018/10/27/stop-using-microsoft-edge-to-download-chrome-unless-you-want-malware/?sh=6fd791f513ea">this Forbes article</a> and <a href="https://twitter.com/sephr/status/1055751684146655232">my analysis</a>. I reproduced the issue at the time and triaged the root cause to be uncontrolled use of vanity URLs.</p>
<p>Twitter.com: I&#8217;ve personally witnessed it but didn&#8217;t screenshot it. It&#8217;s relatively common.</p>
<h2>Culprits</h2>
<p>Google, Microsoft, and X all have similarly ineffective enforcement methods and policies for vanity URL control. I suspect that Reddit also has similar limitations but haven&#8217;t confirmed this.</p>
<h2>Implicit regulatory capture</h2>
<p>Adtech companies play the victim by claiming that fraudsters and scammers are &#8216;abusing&#8217; their unverified vanity URL systems. These companies should not be able to get away with creating systems that enable link fraud and then pretend to tie their hands behind their back when asked to combat the issue. They have created systems for trust-laundered URL spoofing, and then disclaimed ethical or legal responsibility for the fundamental technical failures of these systems.</p>
<p>It is not possible to automatically prevent link fraud in systems that allow for unverified URL spoofing to occur. If adtech providers do not perform domain ownership verification on vanity URLs, advertisers are technically free to commit fraud as they please.</p>
<h2>How did we get here?</h2>
<p>The adtech industry may excuse these practices as an unavoidable consequence of the complexity of online advertising. However, this overlooks the responsibility that these companies bear for prioritizing profit over user safety and the integrity of their platforms.</p>
<p>Corporate greed has gotten so out-of-control that companies such as Google, Microsoft, and Brave now all deeply integrate advertising technologies at the browser-level, with some effects ranging from battery drain to <a href="https://developers.google.com/privacy-sandbox/relevance/protected-audience">personal interest tracking</a>, and even <a href="https://brave.com/brave-rewards/#terms:~:text=70%25%20of%20the%20revenue%20Brave%20earns%20through%20these%20unobtrusive%2C%20privacy%2Dpreserving%20ads%20is%20shared%20directly%20back%20with%20users%20as%20Brave%20Rewards.">taking a cut of the value of your attention</a>.</p>
<h2>National security risks</h2>
<p>The risk of malvertising and fraud through adtech platforms has become so concerning and prevalent that <a href="https://www.ic3.gov/Media/Y2022/PSA221221#:~:text=Use%20an%20ad%20blocking%20extension%20when%20performing%20internet%20searches.">the FBI now recommends all citizens install ad blockers</a>. Interestingly, some of the FBI&#8217;s advice for checking ad authenticity is inadequate in practice. The FBI suggests &#8220;Before clicking on an advertisement, check the URL to make sure the site is authentic. A malicious domain name may be similar to the intended URL but with typos or a misplaced letter.&#8221; — this is useless advice in the face of unverified vanity URLs. Instead of asking private citizens to block an entire &#8216;legal&#8217; industry, the FBI should be investigating adtech platforms for systemically enabling link fraud.</p>
<p>Intelligence agencies such as <a href="https://www.vice.com/en/article/93ypke/the-nsa-and-cia-use-ad-blockers-because-online-advertising-is-so-dangerous">the NSA and CIA also use adblockers</a> in order to keep their personnel safe from malware threats. I anticipate that the US federal government may start requiring adblockers on all federal employee devices at some point in the future.</p>
<h2>What can be done? Verification &amp; enforcement</h2>
<p>Companies are generally mandated by law to provide true statements to consumers where technically possible. Offering unverified vanity URLs as a first-class feature flies in the face of these requirements.</p>
<p>Adtech providers should validate ownership of the domain names used within vanity URLs, or alternatively vanity URLs should be banned entirely. Validating domain ownership can easily be done through automated or manual processes where domain name owners place unique keys in their domain name&#8217;s DNS records.</p>
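<p>As an illustrative sketch of what such verification could amount to (this is a hypothetical check, not any platform&#8217;s actual implementation; the record prefix and function names are made up), the provider issues a unique token and later confirms it appears in the domain&#8217;s DNS TXT records:</p>

```javascript
// Hypothetical sketch: verify vanity-URL domain ownership by checking
// that the domain's DNS TXT records contain a previously issued token.
// txtRecords would come from a DNS lookup (e.g. dns.resolveTxt in Node).
function isDomainVerified(txtRecords, issuedToken) {
  const expected = "adtech-site-verification=" + issuedToken;
  return txtRecords.some(function (record) {
    return record.trim() === expected;
  });
}
```

<p>Only advertisers who can actually edit the domain&#8217;s DNS records can pass this check, which is exactly the property unverified vanity URLs lack.</p>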
<p>A common, yet fundamentally flawed verification mechanism that adtech platforms such as Google Ads employ is the use of sampled URL resolution, which involves visiting a website at given points in time from one or more given computers. This technique can easily be bypassed with <a href="https://eligrey.com/blog/zerodrop/">dynamic redirection software that can hide fraud and malware from URL scanning servers</a>.</p>
<p>Petition your elected government officials to let them know that big tech is willingly ignoring its role in the rise of effective link fraud, spurred by its support of unverified vanity URLs. The United States Federal Trade Commission should investigate and seek to prosecute companies that knowingly enable link fraud through unverified vanity URL systems that are fundamentally impossible to audit.</p>
<p>On a personal level, you can install an adblocker such as uBlock Origin to block advertising, which has a nice added side effect of increasing web browsing privacy and performance.</p>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/link-fraud/#comments" thr:count="0" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/link-fraud/feed/atom/" thr:count="0" />
			<thr:total>0</thr:total>
			</entry>
		<entry>
		<author>
			<name>Ben Pevsner</name>
							<uri>https://twitter.com/ivebencrazy</uri>
						</author>

		<title type="html"><![CDATA[Favioli]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/favioli/" />

		<id>https://eligrey.com/blog/?p=542</id>
		<updated>2019-05-09T02:13:04Z</updated>
		<published>2018-08-18T01:00:02Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" />
		<summary type="html"><![CDATA[🤯 Favioli is a productivity extension that makes it easier to recognize tabs within Chrome. Favioli was originally inspired by two things. The first thing that spurred the idea was Eli Grey&#8217;s personal site and its use of Emoji Favicon Toolkit to make randomized emoji favicons. The favicon of his site shows different emoji on-load [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/favioli/"><![CDATA[<p><a href="https://favioli.com"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f92f.png" alt="🤯" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Favioli</a> is a productivity extension that makes it easier to recognize tabs within Chrome.</p>
<p>Favioli was originally inspired by two things. The first thing that spurred the idea was Eli Grey&#8217;s personal site and its use of <a href="https://github.com/eligrey/emoji-favicon-toolkit">Emoji Favicon Toolkit</a> to make randomized emoji favicons. The favicon of his site shows different emoji on-load that persist within a session. It&#8217;s pretty creative and fun! This was the creative inspiration for Favioli.</p>
<p>There is an <a href="https://eligrey.com/blog/wp-content/themes/eligrey.com/js/favicon.js">Emoji Favicon Toolkit usage example</a> here on eligrey.com.</p>
<p>The practical inspiration for Favioli came from my day job. We have a lot of internal tools and sites, and they tend to either not have favicons, or have the standard Sony logo. For me this was a bit of a pain, because I love to pin my tabs. I couldn&#8217;t tell these sites apart.</p>
<p>I could use a Chrome extension that lets me set custom favicons, but then I&#8217;d have to go through and specify each one. If I were to use emojis, I wouldn&#8217;t have to deal with finding art for each individual site, and I could make something that could automatically make all of these pages recognizable at a glance.</p>
<p><img fetchpriority="high" decoding="async" class="wp-image-600 aligncenter" src="https://eligrey.com/blog/wp-content/uploads/2018/08/comparison-300x188.png" alt="" width="673" height="422" srcset="https://eligrey.com/blog/wp-content/uploads/2018/08/comparison-300x188.png 300w, https://eligrey.com/blog/wp-content/uploads/2018/08/comparison-768x480.png 768w, https://eligrey.com/blog/wp-content/uploads/2018/08/comparison-1024x640.png 1024w" sizes="(max-width: 673px) 100vw, 673px" /></p>
<p>This project has been a fun exploration of JavaScript strings, Chrome extensions, and browser/OS string support.</p>
<p>As we look through everything, feel free to follow along by looking at Favioli&#8217;s <a href="https://github.com/ivebencrazy/favioli">source code</a>! All the Favioli code that is my own is licensed with the <a href="https://unlicense.org/">Unlicense</a>, so feel free to go crazy with it.</p>
<p><span id="more-542"></span></p>
<h2>Structure of a Browser Extension</h2>
<p>There are essentially 4 pieces of any fully-local web extension that modifies a web page:</p>
<ul>
<li>Options: The internal website that extensions use for full-screen setting updates</li>
<li>Popup: The mini-website that pops up when you click the extension icon</li>
<li>Background: The background extension process that does stuff&#8230; in the background&#8230;</li>
<li>ContentScript: The script that gets added to websites you visit in your web browser.</li>
</ul>
<p>We use all of these except popup for Favioli. The general structure of our extension looks like this:</p>
<p><img decoding="async" class=" aligncenter" src="https://eligrey.com/blog/wp-content/favioli/favioli-structure.png" width="649" height="458"></p>
<h3>Settings: Options and Popup Pages</h3>
<p>This is the simplest piece of Favioli, and for a good reason; it doesn&#8217;t really do much. All we do here is run a basic website, &#8220;options.html&#8221;, which saves data to a browser-provided storage mechanism. It&#8217;s similar to using localStorage, the key differences being that it can sync between browsers on multiple computers and is available to other areas of our Chrome extension that don&#8217;t interact with a webpage. In Favioli, this boils down to simply saving custom overrides and, in the future, things like icon packs and other settings.</p>
<p><img decoding="async" class=" wp-image-597 aligncenter" src="https://eligrey.com/blog/wp-content/uploads/2018/08/Options-300x204.png" alt="" width="643" height="437" srcset="https://eligrey.com/blog/wp-content/uploads/2018/08/Options-300x204.png 300w, https://eligrey.com/blog/wp-content/uploads/2018/08/Options-768x522.png 768w, https://eligrey.com/blog/wp-content/uploads/2018/08/Options-1024x696.png 1024w, https://eligrey.com/blog/wp-content/uploads/2018/08/Options.png 1980w" sizes="(max-width: 643px) 100vw, 643px" /></p>
<p>The most complicated piece of the options page is the emoji selector, adapted from Dominic Valenicana&#8217;s <a href="https://github.com/Kiricon/emoji-selector">Emoji Selector</a>. We use this to define a custom HTML element that we can use for the options page.</p>
<p>One thing we haven&#8217;t implemented in Favioli is a popup page. In Chrome extension land, the &#8220;popup&#8221; is the mini-site that shows up when you click the extension icon. We currently don&#8217;t use this for Favioli, but it would be extremely useful for quick-pinning specific emojis to sites we are currently visiting. It would essentially follow the same formula as our Options page, though; featuring a user interface that simply feeds information to our background process.</p>
<h3>Background: The Decision Engine</h3>
<p>The Background process is Favioli&#8217;s primary decision engine. It takes information from our content script and our saved settings, and sends the correct emoji to each page. Our decision process is a simple priority list: we check each rule in order and apply the first one that matches:</p>
<table style="font-size: medium;">
<thead>
<tr>
<th>Priority</th>
<th>Rule</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td><strong>User-set Overrides</strong>: If a user has specified a favicon match via the options page, then the site will use that favicon.</td>
</tr>
<tr>
<td>2</td>
<td><strong>Site Default</strong>: If the site has a favicon, then use that.</td>
</tr>
<tr>
<td>3</td>
<td><strong>Random Emoji</strong>: If the site has no natural favicon, we generate a random one via a hashing algorithm.</td>
</tr>
</tbody>
</table>
<p>The user-set overrides and site defaults are pretty straightforward; we can match pieces of a URL, or match a regex. If neither of those applies, we fall back to the site&#8217;s default favicon. The more interesting case is when a site doesn&#8217;t have a native favicon.</p>
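<p>The three-rule priority list can be sketched as a small function (illustrative only; the names and shapes here are hypothetical, not Favioli&#8217;s actual code):</p>

```javascript
// Illustrative sketch of the three-rule priority list:
// 1. user-set override, 2. the site's own favicon, 3. a random emoji.
// `overrides` is a list of { matcher: RegExp, emoji: string } entries.
function pickFavicon(host, overrides, siteFavicon, randomEmojiFor) {
  const override = overrides.find(function (o) {
    return o.matcher.test(host);
  });
  if (override) return override.emoji; // Rule 1: user-set override
  if (siteFavicon) return siteFavicon; // Rule 2: site default
  return randomEmojiFor(host);         // Rule 3: random emoji hash
}
```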
<h3>Random Emoji Hash</h3>
<p>The reasoning for using a non-cryptographic hash to determine emojis is based on one idea: there&#8217;s no way in hell we&#8217;re storing the settings of each site a person visits. THAT would be a pain in the ass. Instead, we hash the website&#8217;s host and map it to an emoji from a custom set of char codes (we don&#8217;t really want to randomly apply flag and symbol emojis to all the sites. It&#8217;s just not as fun). The result is a function that returns a random emoji that is always the same for a given website host, without storing any data.</p>
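<p>A minimal sketch of this host-to-emoji mapping (illustrative; this is not Favioli&#8217;s exact hash function or emoji set) might look like:</p>

```javascript
// Illustrative sketch: a small curated emoji set (no flags or symbols).
const EMOJI_SET = ["🍉", "🎈", "🐙", "🌵", "🚀", "🧀", "🦊", "🎷"];

// Simple non-cryptographic 32-bit hash of the host string.
function hashHost(host) {
  let hash = 0;
  for (let i = 0; i < host.length; i++) {
    hash = (hash * 31 + host.charCodeAt(i)) >>> 0; // keep it a 32-bit uint
  }
  return hash;
}

// Same host in, same emoji out — with no per-site storage.
function emojiForHost(host) {
  return EMOJI_SET[hashHost(host) % EMOJI_SET.length];
}
```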
<h3>Background: Applying the Favicon</h3>
<p>Our decision process runs in 3 cases, all in background.js:</p>
<pre lang="javascript">// After we fetch our settings, start listening for url updates
init().then(function() {
  // If a tab updates, check to see whether we should set a favicon
  chrome.tabs.onUpdated.addListener(function(tabId, opts, tab) {
    tryToSetFavicon(tabId, tab)
  })
})

// Manually sent Chrome messages
chrome.runtime.onMessage.addListener(function(message, details) {
  // If we manually say a tab has been updated, try to set favicon
  // This happens when contentScript loads before settings are ready
  if (message === "updated:tab") tryToSetFavicon(details.tab.id, details.tab)

  // If our settings change, re-run init to fetch new settings
  if (message === "updated:settings") init()
})</pre>
<p><code>tryToSetFavicon</code> decides what emoji we want to use and sends it to our content script as an emoji message string, to render our emoji as a favicon. Our content script has a few additional checks for whether we should show our favicon, because there are some checks that can only be done on each individual site.</p>
<p>We can think of our background process as determining <strong>WHEN</strong> to set a favicon, and <strong>WHAT</strong> to set it as. The content script will determine <strong>IF</strong> a favicon truly gets set.</p>
<h3>Content Script: Building Text Favicons</h3>
<p>Our content script is the script that runs on each website we visit, appending or replacing the favicon when our background process tells it to. This script could be quite a bit simpler than we made it. The reason? Essentially, it boils down to one thing:</p>
<blockquote><p>Favicons are images. Emojis are not images.</p></blockquote>
<p>Unfortunately, favicons still must be images, so we have to do a bit of hackery magic in order to make our native text emojis show up as favicons. This was the clever bit of code Eli wrote that I borrowed to make Favioli an image-less experience.</p>
<p>I should probably mention that there&#8217;s definitely some over-engineering in Favioli. Practically, using the same clever method as Eli for creating legitimate emoji text favicons is not reeeeally necessary, and makes for some complications when it comes to multi-platform support. So this script coooould be &#8220;use a pre-rendered emoji image and add it as a favicon.&#8221; That script would be easy, but WHAT FUN WOULD THAT BE?!?!</p>
<p>Let&#8217;s jump into a condensed version of <a href="https://github.com/ivebencrazy/favioli/blob/master/source/utilities/faviconHelpers.js#L60">the code we use</a>&nbsp;to create our favicons:</p>
<pre lang="javascript">// Initialize canvas and context to render emojis

const PIXEL_GRID = 16 // Standard favicon size is 16x16
const EMOJI_SIZE = 256 // 16 * 16

const canvas = document.createElement("canvas")
canvas.width = canvas.height = EMOJI_SIZE

const context = canvas.getContext("2d")
context.font = `normal normal normal ${EMOJI_SIZE}px/${EMOJI_SIZE}px sans-serif`
context.textAlign = "center"
context.textBaseline = "middle"

function createEmojiUrl(char) {
  const { width } = context.measureText(char)

  // Center of the emoji (where we start drawing on canvas)
  // Since favicons are square, we can use the same number for both axes
  const center = (EMOJI_SIZE + EMOJI_SIZE / PIXEL_GRID) / 2
  const scale = Math.min(EMOJI_SIZE / width, 1) // Shrink to fit the canvas
  const center_scaled = center / scale

  // Scale the canvas to adjust for the width of the emoji
  context.clearRect(0, 0, EMOJI_SIZE, EMOJI_SIZE)
  context.save()
  context.scale(scale, scale)
  context.fillText(char, center_scaled, center_scaled)
  context.restore()

  // We need it to be an image
  return canvas.toDataURL("image/png")
}</pre>
<p>We make a canvas, draw a centered piece of favicon text, then convert that canvas drawing into a png <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs">data url</a>. We can set favicons with data urls, so at this point, we just need to add our favicon to the site!</p>
<h3>Content Script: Appending Favicons</h3>
<p>The last step of Favioli is appending a favicon to the site we visit. Appending a favicon to an existing site can have a few complications, mainly stemming from the fact that different sites apply their favicons in different ways. We have a few different cases that affect how Favioli adds favicons:</p>
<h4>1. Custom Favicon Path</h4>
<p>This is the easiest to deal with; when a site has a custom favicon path, we can be assured that it has a favicon, and we can either override it or leave it be, depending on our settings.</p>
<h4>2. Weird path changes</h4>
<p>Sometimes a site changes path and expects the favicon to persist across the site. To maintain consistency, and to avoid unnecessary work, Favioli memoizes the decision of whether a site has a custom favicon within the context of a session.</p>
<h4>3. No Favicon in the HTML</h4>
<p>When there is no specified favicon, it could mean one of two things: the website is either using the default <code>favicon.ico</code>, or it doesn&#8217;t have a favicon at all. Trying to determine which is the case would be error-prone. So instead, in cases where we shouldn&#8217;t override the site&#8217;s favicon, Favioli simply appends a <code>favicon.ico</code> link after it appends our emoji favicon. This way, if a default favicon exists, it overrides our emoji one.</p>
<pre lang="javascript">const href = memoizedEmojiUrl(name);

if (existingFavicon) {
  existingFavicon.setAttribute("href", href);
} else {
  const link = createLink(href, EMOJI_SIZE, "image/png");
  existingFavicon = documentHead.appendChild(link);

  if (!shouldOverride) {
    const defaultLink = createLink("/favicon.ico");
    documentHead.appendChild(defaultLink);
  }
}</pre>
<h2>Now we have done it!</h2>
<p>At this point, we have appended a text emoji as a favicon, so we&#8217;ve successfully completed our mission to emoji-fy the universe! With Favioli, we no longer need to worry about the dreaded favicon-less existence that some people still somehow call life.</p>
<h2>Where do we go from here?</h2>
<p>There are a myriad of ways we can extend Favioli in the future. To give you an idea, here are some ideas we have been thinking about:</p>
<h3>Custom sets for Randomly Selected Emojis</h3>
<p>This would be the easiest way to deal with cross-platform compatibility; just change the set that we randomly select from. This will let people better customize their experience.</p>
<h3>Custom pngs as Favicons</h3>
<p>This would create more utility for Favioli. I feel our default offering is better than most favicon replacement extensions, but not being able to set custom pngs keeps it from being the best one out there. Also, though&#8230; gif favicons. Imagine how hilarious (and technically dumb) that could be. Pngs could also serve as a fallback for browsers and operating systems that are insufficient for life and don&#8217;t have the newest emojis.</p>
<h3>Custom Application Overrides</h3>
<p>It would be cool to be able to override some aspects of page load in a smarter and more fun way. Imagine <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f62d.png" alt="😭" class="wp-smiley" style="height: 1em; max-height: 1em;" /> emojis for 404/500 page responses, or <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f440.png" alt="👀" class="wp-smiley" style="height: 1em; max-height: 1em;" /> for non-https sites. These would be configured in settings, but could be a fun way to interact with the web.</p>
<h3>Popup Page</h3>
<p>This is a pretty obvious one; a useful UI update.</p>
<h3>Stats for nerds</h3>
<p>Being able to apply Favioli to browser history would be fun. We&#8217;d have to play with permission settings so that Favioli is only enabled on the history page, but it could be interesting&#8230;</p>
<p>Anywhoooooo</p>
<p><a href="https://favioli.com">This is Favioli</a>! Check it out! Edit the <a href="https://github.com/ivebencrazy/favioli">source code</a> and send a PR if you want to help, or just use it!</p>
<p><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f618.png" alt="😘" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/2764.png" alt="❤" class="wp-smiley" style="height: 1em; max-height: 1em;" /><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f92f.png" alt="🤯" class="wp-smiley" style="height: 1em; max-height: 1em;" />,<br />
<a href="https://bpev.me">Ben Pevsner</a><!--

<img loading="lazy" decoding="async" class=" wp-image-599 aligncenter" src="https://eligrey.com/blog/wp-content/uploads/2018/08/29526742_10215362185271619_136158773_o-300x225.png" alt="" width="563" height="422" srcset="https://eligrey.com/blog/wp-content/uploads/2018/08/29526742_10215362185271619_136158773_o-300x225.png 300w, https://eligrey.com/blog/wp-content/uploads/2018/08/29526742_10215362185271619_136158773_o-768x577.png 768w, https://eligrey.com/blog/wp-content/uploads/2018/08/29526742_10215362185271619_136158773_o-1024x769.png 1024w, https://eligrey.com/blog/wp-content/uploads/2018/08/29526742_10215362185271619_136158773_o.png 1065w" sizes="(max-width: 563px) 100vw, 563px" />--></p>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/favioli/#comments" thr:count="4" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/favioli/feed/atom/" thr:count="4" />
			<thr:total>4</thr:total>
			</entry>
		<entry>
		<author>
			<name>Devin Samarin</name>
							<uri>https://dsamar.in/</uri>
						</author>

		<title type="html"><![CDATA[Zerodrop]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/zerodrop/" />

		<id>https://eligrey.com/blog/?p=572</id>
		<updated>2024-01-09T03:00:05Z</updated>
		<published>2018-05-23T12:00:35Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" />
		<summary type="html"><![CDATA[We are announcing Zerodrop, an open-source stealth URL toolkit optimized for bypassing censorship filters and dropping malware. Zerodrop is written in Go and features a powerful web UI that supports geofencing, datacenter IP filtering, blocklist training, manual blocklisting/allowlisting, and advanced payload configuration! Zerodrop can help you elude the detection of the automatic URL scanners used on [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/zerodrop/"><![CDATA[<p>We are announcing <a href="https://go.eligrey.com/zerodrop">Zerodrop</a>, an open-source stealth URL toolkit optimized for bypassing censorship filters and dropping malware. Zerodrop is written in <a href="https://golang.org/">Go</a> and features a powerful web UI that supports geofencing, datacenter IP filtering, blocklist training, manual blocklisting/allowlisting, and advanced payload configuration!</p>
<p>Zerodrop can help you elude the detection of the automatic URL scanners used on popular social media platforms. You can easily blocklist traffic from the datacenters and public Tor exit nodes commonly used by URL scanners. For scanners not included in our default blocklists, you can activate blocklist training mode to automatically log the IP addresses of subsequent requests to a blocklist.</p>
<p>When used for anti-forensic malware distribution, Zerodrop is most effective paired with a server-side compromise of a popular trusted domain. This further complicates incident analysis and breach detection.</p>
<h2>Live demo</h2>
<p>A live demo is available at <a href="https://dangerous.link">dangerous.link</a>. Please keep your usage legal. Infrastructural self-destruct has been disabled for the demo. To prevent automated abuse, users may be required to complete CAPTCHA challenges in order to create new entries.</p>
<p><div id="attachment_563" style="width: 797px" class="wp-caption aligncenter"><a class="wp-caption aligncenter" style="display: inline-block; max-width: 80vw; overflow: auto;" title="Try the Zerodrop public demo" href="https://dangerous.link"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-563" class="size-full wp-image-563" src="https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-training-actions-no-header.png" alt="" width="787" height="461" srcset="https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-training-actions-no-header.png 1969w, https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-training-actions-no-header-300x176.png 300w, https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-training-actions-no-header-768x450.png 768w, https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-training-actions-no-header-1024x600.png 1024w" sizes="(max-width: 787px) 100vw, 787px" /></a><p id="caption-attachment-563" class="wp-caption-text">Zerodrop geofencing &amp; blocklist training</p></div></p>
<p>
<a href='https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-new-entry-page.png'><img loading="lazy" decoding="async" width="150" height="150" src="https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-new-entry-page-150x150.png" class="attachment-thumbnail size-thumbnail" alt="Screenshot of Zerodrop&#039;s new entry page" /></a>
<a href='https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-dash.png'><img loading="lazy" decoding="async" width="150" height="150" src="https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-dash-150x150.png" class="attachment-thumbnail size-thumbnail" alt="" /></a>
<a href='https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-training-actions.png'><img loading="lazy" decoding="async" width="150" height="150" src="https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-training-actions-150x150.png" class="attachment-thumbnail size-thumbnail" alt="" /></a>
</p>
<p><span id="more-572"></span></p>
<h2>Auto-expiration</h2>
<p>Entries may expire after reaching an optional request limit. After an entry expires, all requests to it trigger the denial condition, resulting in a 404 or a redirect.</p>
<h2>Blocklisting, allowlisting, and geofencing</h2>
<p>We support gitignore-style blocklists processed line-by-line, top to bottom. Blocklists consist of allowlist inversions, IP address ranges, geofences, and ipcat queries, interspersed with comments. We added IPv6 support to ipcat to make its datacenter traffic detection more reliable.</p>
<p>Geofencing is implemented using MaxMind&#8217;s GeoIP databases and configured inside an entry&#8217;s blocklist and allowlist. Geofencing entries are specified in the form <code>@ lat, long (radius)</code> for blocklisting and the inverted form <code>!@ lat, long (radius)</code> for allowlisting. Currently we only support radial geofences. A graphical geofencing UI is planned for a future release.</p>
<p>Traffic from datacenters and public Tor exit nodes is blocked using a new version of ipcat, <a href="https://github.com/client9/ipcat/pull/139">which now includes IPv6 support</a>. The syntax to block each is <code>db datacenters</code> and <code>db tor</code>.</p>
<p>Redirects to other Zerodrop payloads may optionally be specified in the &#8220;Redirect On Deny&#8221; field under the blocklist. Payloads can be redirects, proxies, uploaded files, or plain text with a MIME media type.</p>
<h3>Example blocklist</h3>
<p>The following example blocklist blocks datacenters, public Tor exit nodes, and everyone outside of San Francisco.</p>
<pre><code># Block all
*
# Allow San Francisco
!@ 37.7749, -122.4194 (24140m)
# Block datacenters
db datacenters
# Block public Tor exit nodes
db tor</code></pre>
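<p>One plausible reading of the top-to-bottom processing is last-match-wins, where each matching line overrides the verdict of the lines above it. The following sketch illustrates that idea in JavaScript; the rule structure and names are invented for illustration and are not Zerodrop&#8217;s actual implementation.</p>
<pre lang="javascript">// Illustrative last-match-wins evaluation of a blocklist.
// Each rule pairs a predicate on the request with a deny/allow verdict.
function evaluate(rules, request) {
  let denied = false;             // default verdict: allowed
  for (const rule of rules) {
    if (rule.matches(request)) {
      denied = rule.deny;         // later matching lines override earlier ones
    }
  }
  return denied;
}

// Rules mirroring the example blocklist above:
const rules = [
  { matches: () => true,            deny: true  }, // *
  { matches: r => r.inSanFrancisco, deny: false }, // !@ 37.7749, -122.4194 (24140m)
  { matches: r => r.isDatacenter,   deny: true  }, // db datacenters
  { matches: r => r.isTorExit,      deny: true  }, // db tor
];</pre>
<p>Under this reading, a San Francisco visitor is allowed, but a San Francisco datacenter or Tor exit node is still blocked, since those lines come after the allowlist inversion.</p>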
<h2>Anti-censorship</h2>
<p>This tool is useful for evading the automatic censorship filters in use on popular social media websites. With blocklist training and ipcat, it&#8217;s easy to build up a blocklist that blocks these filters and continue sharing content that would otherwise be automatically censored on most sites. Zerodrop also includes Cloudflare integration to help hide your server&#8217;s IP address and avoid further blocklisting by censorship filters.</p>
<h2>Self-destruct</h2>
<p>Complete infrastructure self-destruct can be triggered with blocklist redirects to the &#8220;<img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f4a3.png" alt="💣" class="wp-smiley" style="height: 1em; max-height: 1em;" />&#8221; internal identifier. When triggered, Zerodrop will attempt to delete all traces of itself from the host system. External navigation to &#8220;<code>/&#x1f4a3;</code>&#8221; will not trigger self-destruct.</p>
<p><div id="attachment_585" style="width: 729px" class="wp-caption aligncenter"><a style="display: inline-block; max-width: 80vw; overflow: auto;" href="https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-self-destruct.png"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-585" class="wp-image-585 size-full" src="https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-self-destruct.png" alt="" width="719" height="375" srcset="https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-self-destruct.png 1799w, https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-self-destruct-300x156.png 300w, https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-self-destruct-768x400.png 768w, https://eligrey.com/blog/wp-content/uploads/2018/05/zerodrop-self-destruct-1024x534.png 1024w" sizes="(max-width: 719px) 100vw, 719px" /></a><p id="caption-attachment-585" class="wp-caption-text">Example usage of Zerodrop&#8217;s self-destruct functionality</p></div></p>
<h2>What&#8217;s next?</h2>
<p>There are many areas where the current release of Zerodrop can be improved. Over the coming months we hope to implement some of the following changes. This is an open source project, so feel free to contribute yourself by <a href="https://github.com/oftn-oswg/zerodrop/issues">reporting issues and submitting pull requests</a>.</p>
<h3>Blocklist groups &amp; machine learning</h3>
<p>Blocklists will get an improved UI and reusable blocklist groups. Currently, you must copy and paste blocklists and allowlists to reuse list information in new entries. We can also improve training mechanisms with paid IP address services and machine learning techniques.</p>
<h3>Geofencing improvements</h3>
<p>In addition to the currently supported radial geofences, we plan to implement polygonal geofencing and a graphical geofence creation widget for the new entry UI. It will probably be based on Google Maps and require an API key for deployment. For now, the text-based blocklist should be just as powerful, albeit less visually accessible.</p>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/zerodrop/#comments" thr:count="0" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/zerodrop/feed/atom/" thr:count="0" />
			<thr:total>0</thr:total>
			</entry>
		<entry>
		<author>
			<name>Eli Grey</name>
					</author>

		<title type="html"><![CDATA[Google Inbox spoofing vulnerability]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/google-inbox-spoofing-vulnerability/" />

		<id>https://eligrey.com/blog/?p=541</id>
		<updated>2018-11-21T08:56:43Z</updated>
		<published>2018-04-27T22:55:32Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" />
		<summary type="html"><![CDATA[On May 4th, 2017 I discovered and privately reported a recipient spoofing vulnerability in Google Inbox. I noticed that the composition box always hid the email addresses of named recipients without providing a way to inspect the actual email address, and figured out how to abuse this with mailto: links containing named recipients. The link [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/google-inbox-spoofing-vulnerability/"><![CDATA[<p>On May 4th, 2017 I discovered and privately reported a recipient spoofing vulnerability in Google Inbox. I noticed that the composition box always hid the email addresses of named recipients without providing a way to inspect the actual email address, and figured out how to abuse this with mailto: links containing named recipients.</p>
<p>The link <a href="https://go.eligrey.com/security/google-inbox-spoofing-vulnerability-exploit-direct.poc">mailto:​&#8221;support@paypal.com&#8221;​&lt;scam@phisher.example&gt;</a> shows up as &#8220;support@paypal.com&#8221; in the Google Inbox composition window, visually identical to any email actually sent to PayPal. </p>
<p>In order to exploit this vulnerability, the target user only needs to click on a malicious mailto: link. It can also be triggered by clicking on a direct link to Inbox&#8217;s mailto: handler page, as shown in <a href="https://go.eligrey.com/security/google-inbox-spoofing-vulnerability-exploit.poc" target="_blank" rel="nofollow noopener">this example exploit link</a>.</p>
<p>This vulnerability was still unfixed in all Google Inbox apps as of May 4th, 2018, a year after private disclosure.</p>
<p><strong>Update</strong>: <a href="https://www.xda-developers.com/google-fixes-flaw-spoof-inbox-by-gmail/">This vulnerability has been fixed in the Google Inbox webapp</a> as of May 18, 2018. The Android app still remains vulnerable.</p>
<div id="attachment_549" class="wp-caption aligncenter" style="max-width: 575px; overflow: auto;">
<p><a href="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-poc.png"><img loading="lazy" decoding="async" class="wp-image-549 size-full" src="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-poc.png" sizes="(max-width: 565px) 100vw, 565px" srcset="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-poc.png 1130w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-poc-300x247.png 300w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-poc-768x633.png 768w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-poc-1024x844.png 1024w" alt="" width="565" height="466" /></a></p>
<p class="wp-caption-text">The recipient “support@paypal.com” being spoofed in the Google Inbox composition window. The actual recipient is “scam@phisher.example”.</p>
</div>
<p><span id="more-541"></span><br />
On July 3rd, 2017 I noticed that Google had added hover tooltips to this field in Inbox, which made it possible for users to manually confirm the recipient email address. The default presentation of the email address was still vulnerable to spoofing, so I sent another email to Google.</p>
<p><a href="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update.png"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-546" src="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update.png" alt="" width="706" height="423" srcset="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update.png 1412w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-300x180.png 300w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-768x460.png 768w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-1024x613.png 1024w" sizes="(max-width: 706px) 100vw, 706px" /></a></p>
<p>I received no response for over 8 months, so I sent yet another email on March 16th, 2018.</p>
<p><a href="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-2.png"><img loading="lazy" decoding="async" class="aligncenter wp-image-547 size-full" title="I decided to disclose this 1 week early ¯\_(ツ)_/¯" src="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-2.png" alt="" width="706" height="354" srcset="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-2.png 1412w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-2-300x151.png 300w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-2-768x386.png 768w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-2-1024x514.png 1024w" sizes="(max-width: 706px) 100vw, 706px" /></a></p>
<p>Nine months after sending my emails I received this response, which doesn&#8217;t lead me to believe that Google is serious about fixing this vulnerability.</p>
<p><a href="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-3.png"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-548" src="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-3.png" alt="" width="706" height="245" srcset="https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-3.png 1412w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-3-300x104.png 300w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-3-768x267.png 768w, https://eligrey.com/blog/wp-content/uploads/2018/04/google-inbox-vulnerability-update-3-1024x355.png 1024w" sizes="(max-width: 706px) 100vw, 706px" /></a></p>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/google-inbox-spoofing-vulnerability/#comments" thr:count="1" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/google-inbox-spoofing-vulnerability/feed/atom/" thr:count="1" />
			<thr:total>1</thr:total>
			</entry>
		<entry>
		<author>
			<name>Eli Grey</name>
					</author>

		<title type="html"><![CDATA[Opera UXSS vulnerability regression]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/opera-uxss-vulnerability-regression/" />

		<id>https://eligrey.com/blog/?p=534</id>
		<updated>2019-02-12T02:52:25Z</updated>
		<published>2018-01-12T04:06:35Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" />
		<summary type="html"><![CDATA[Opera users were vulnerable to a publicly-disclosed UXSS exploit for most of 2010-2012. I privately disclosed a UXSS vulnerability (complete SOP bypass) to Opera Software in April 2010, and recently discovered that Opera suffered a regression of this issue and continued to be vulnerable for over two years after disclosure. The vulnerability was that data: [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/opera-uxss-vulnerability-regression/"><![CDATA[<p>Opera users were vulnerable to a publicly-disclosed UXSS exploit for most of 2010-2012.</p>
<p>I privately disclosed a <abbr title="universal cross-site scripting">UXSS</abbr> vulnerability (complete <abbr title="same-origin policy">SOP</abbr> bypass) to Opera Software in April 2010, and recently discovered that Opera suffered a regression of this issue and continued to be vulnerable for over two years after disclosure. The vulnerability was that data: URIs could attain&nbsp;same-origin privileges to non-opening origins across multiple redirects.</p>
<p>I asked for a status update 50 days after disclosing the vulnerability, as another Opera beta release was about to be published. Opera responded by saying that they were pushing back the fix.</p>
<p><a href="https://twitter.com/sephr/status/16445082479">I publicly disclosed the vulnerability with a <abbr title="proof-of-concept">PoC</abbr> exploit on Twitter</a> on June 15, 2010<!-- and reddit (https://www.reddit.com/r/netsec/comments/cgu93/opera_data_uri_xss_reddit_upvote_exploit/), but poor reading comprehension from people who don't understand the words "PoC exploit" resulted in a flurry of downvotes :/ -->. This was slightly irresponsible of me (at least I&nbsp;included a <a href="https://github.com/eligrey/code.eligrey.com-archive/blob/master/sec/opera/data-uri-xss/twitter-poc.js#L58">kill switch</a>), but please keep in mind that I was 16 at the time. The next week, Opera published new mainline releases (10.54 for Windows/Mac and 10.11 for Linux) and said that those releases should fix the vulnerability. I tested my PoC and it seemed to be fixed.</p>
<p>Shortly after, this vulnerability regressed back into Opera without me noticing. I suspect that this was due to the rush to fix their mainline branch, and lack of coordination between their security and release teams. The regression was caught two years later by&nbsp;<a href="https://twitter.com/M_script">M_script</a>&nbsp;on the <a href="https://rdot.org/forum/showthread.php?t=2444">RDot Forums</a>, and documented in English by <a href="https://labs.detectify.com/2012/10/05/universal-xss-in-opera/">Detectify Labs</a>.</p>
<p>Opera Software&#8217;s management should not have allowed this major flaw to regress for so long.</p>
<h2>Security advisories</h2>
<ul>
<li>Initial fix (June 22, 2010):&nbsp;<a href="https://web.archive.org/web/20130807105806/http://www.opera.com/security/advisory/955">http://www.opera.com/security/advisory/955/</a> (CVE-2010-2665)</li>
<li>Regression fix (November 11, 2012):&nbsp;<a href="https://web.archive.org/web/20121111095052/http://www.opera.com/support/kb/view/1031/">http://www.opera.com/support/kb/view/1031/</a> (CVE-2012-6463)</li>
</ul>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/opera-uxss-vulnerability-regression/#comments" thr:count="1" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/opera-uxss-vulnerability-regression/feed/atom/" thr:count="1" />
			<thr:total>1</thr:total>
			</entry>
		<entry>
		<author>
			<name>Eli Grey</name>
					</author>

		<title type="html"><![CDATA[Rainpaper 2.0]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/rainpaper-2-0/" />

		<id>https://eligrey.com/blog/?p=523</id>
		<updated>2017-07-16T00:09:41Z</updated>
		<published>2017-07-16T00:06:17Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" />
		<summary type="html"><![CDATA[Version 2.0 of Rainpaper is now available. What&#8217;s new: Wallpaper scrolling Muzei extension support Cycle through multiple of your own images from your gallery Performance and stability improvements In order to change the refresh interval to cycle through your own images, long press on the &#8220;My images&#8221; image source and tap &#8220;Settings&#8221;. There will be another update (2.1) [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/rainpaper-2-0/"><![CDATA[<p>Version 2.0 of <a href="https://play.google.com/store/apps/details?id=org.oftn.rainpaper">Rainpaper</a> is now available.</p>
<p>What&#8217;s new:</p>
<ul>
<li>Wallpaper scrolling</li>
<li><a href="https://play.google.com/store/search?q=Muzei+extension">Muzei extension</a> support</li>
<li>Cycle through multiple images from your gallery</li>
<li>Performance and stability improvements</li>
</ul>
<p>To change the refresh interval for cycling through your own images, long press on the &#8220;My images&#8221; image source and tap &#8220;Settings&#8221;. There will be another update (2.1) with support for looping GIF/video wallpapers and additional memory and performance improvements.</p>
<p>I will also be launching a pair of Android and Windows apps later this year named Soundmesh. It enables wireless synchronization of multiple devices&#8217; audio outputs and inputs with low-latency, high-quality audio.</p>
<p>You can use Soundmesh to listen to your PC audio output on multiple Android phones and PCs, forward your Android microphone to your PC, and listen to your Android phone&#8217;s audio output on your PC.</p>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/rainpaper-2-0/#comments" thr:count="0" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/rainpaper-2-0/feed/atom/" thr:count="0" />
			<thr:total>0</thr:total>
			</entry>
		<entry>
		<author>
			<name>Eli Grey</name>
					</author>

		<title type="html"><![CDATA[Bedford/St. Martin&#8217;s data breach]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/bedford-st-martins-data-breach/" />

		<id>https://eligrey.com/blog/?p=515</id>
		<updated>2017-05-20T22:14:29Z</updated>
		<published>2017-05-19T12:01:56Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" />
		<summary type="html"><![CDATA[Some time between Aug 27, 2012 and May 3, 2014, the Macmillan Publishers subsidiary Bedford/St. Martin&#8217;s suffered a data breach that leaked the unique email address that I provided to them. I have previously informed them of the breach and it appears that they do not care to investigate. I don&#8217;t appreciate large companies getting [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/bedford-st-martins-data-breach/"><![CDATA[<p>Some time between Aug 27, 2012 and May 3, 2014, the <a href="https://us.macmillan.com/">Macmillan Publishers</a> subsidiary <a href="https://en.wikipedia.org/wiki/Bedford-St._Martin%27s">Bedford/St. Martin&#8217;s</a> suffered a data breach that leaked the unique email address that I provided to them. I have previously informed them of the breach and it appears that they do not care to investigate.</p>
<p>I don&#8217;t appreciate large companies getting away with not disclosing or investigating data breaches, so I&#8217;m disclosing it for them.</p>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/bedford-st-martins-data-breach/#comments" thr:count="0" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/bedford-st-martins-data-breach/feed/atom/" thr:count="0" />
			<thr:total>0</thr:total>
			</entry>
		<entry>
		<author>
			<name>Eli Grey</name>
					</author>

		<title type="html"><![CDATA[Rainpaper]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/rainpaper/" />

		<id>https://eligrey.com/blog/?p=506</id>
		<updated>2016-05-05T09:51:21Z</updated>
		<published>2016-05-05T09:51:21Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" />
		<summary type="html"><![CDATA[I just released an Android live wallpaper called Rainpaper on Google Play. Check it out! Rainpaper features simulated rain, popular images from reddit, and synchronization with your local weather. Also stay tuned for a new open source project that I&#8217;ve been working on called subscribe.js. Soon you will be able to easily retrofit push-like notifications onto any website that has a syndication [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/rainpaper/"><![CDATA[<p>I just released an Android live wallpaper called <a href="https://play.google.com/store/apps/details?id=org.oftn.rainpaper">Rainpaper</a> on Google Play. Check it out!</p>
<p>Rainpaper features simulated rain, popular images from reddit, and synchronization with your local weather.</p>
<p>Also stay tuned for a new open source project that I&#8217;ve been working on called subscribe.js. Soon you will be able to easily retrofit push-like notifications onto any website that has a syndication feed. subscribe.js will be powered by <a href="https://slightlyoff.github.io/ServiceWorker/spec/service_worker/">Service Workers</a> and run locally in your browser.</p>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/rainpaper/#comments" thr:count="0" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/rainpaper/feed/atom/" thr:count="0" />
			<thr:total>0</thr:total>
			</entry>
		<entry>
		<author>
			<name>Eli Grey</name>
					</author>

		<title type="html"><![CDATA[CPU core estimation with JavaScript]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/cpu-core-estimation-with-javascript/" />

		<id>http://eligrey.com/blog/?p=444</id>
		<updated>2017-07-11T08:44:52Z</updated>
		<published>2013-05-24T17:32:10Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" />
		<summary type="html"><![CDATA[(Update) Standardization I have standardized navigator.cores as navigator.hardwareConcurrency, and it is now supported natively in Chrome, Safari, Firefox, and Opera. Our polyfill has renamed the APIs accordingly. Since the initial blog post, Core Estimator has been updated to estimate much faster and now has instant estimation in Chrome through PNaCl. navigator.cores So you just built some [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/cpu-core-estimation-with-javascript/"><![CDATA[<h2>(Update) Standardization</h2>
<p>I have standardized navigator.cores as <a href="https://html.spec.whatwg.org/multipage/workers.html#navigator.hardwareconcurrency">navigator.hardwareConcurrency</a>, and it is now supported natively in Chrome, Safari, Firefox, and Opera. Our polyfill has renamed the APIs accordingly. Since the initial blog post, Core Estimator has been updated to estimate much faster and now has instant estimation in Chrome through PNaCl.</p>
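<p>With the standardized API, picking a worker count is now a one-liner. Here is a minimal sketch; the fallback of 4 and the cap of 16 are arbitrary illustrative choices, not part of any spec.</p>
<pre lang="javascript">// Choose a worker-pool size from navigator.hardwareConcurrency,
// falling back to a default when the API is unavailable.
function pickWorkerCount(nav, fallback = 4, cap = 16) {
  const cores = nav.hardwareConcurrency || fallback;
  return Math.min(cores, cap);
}

// In the browser: pickWorkerCount(navigator)</pre>
<p>In browsers without native support, the polyfill fills in <code>navigator.hardwareConcurrency</code> with its estimate.</p>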
<h2>navigator.cores</h2>
<p>So you just built some cool scalable multithreaded feature into your webapp with web workers. Maybe it&#8217;s machine learning-based webcam object recognition—or a compression algorithm like LZMA2 that runs faster the more cores you have. Now, all you have to do is set the number of worker threads to use the user&#8217;s CPU as efficiently as possible&#8230;</p>
<p>You might be thinking &#8220;Easy, there&#8217;s probably a <code>navigator.cores</code> API that will tell me how many cores the user&#8217;s CPU has.&#8221; That was our thought while porting xz to JavaScript (which will be released in the future as xz.js), and we were amazed to find no such API, or any equivalent whatsoever, in any browser! With all the new HTML5 features that give more control over native resources, there must be a way to find out how many cores a user possesses.</p>
<p>I immediately envisioned a timing attack that could attempt to estimate a user&#8217;s CPU cores to provide the optimal number of workers to spawn in parallel. It would scale from one to thousands of cores. With the help of <a href="https://github.com/dsamarin">Devin Samarin</a>, <a href="http://jon-carlos.com/">Jon-Carlos Rivera</a>, and <a href="http://devyn.me/">Devyn Cairns</a>, we created the open source library, <a href="https://github.com/oftn/core-estimator">Core Estimator</a>. It implements a <code>navigator.cores</code> value that will only be computed once it is accessed. Hopefully in the future, this will be added to the HTML5 specification.</p>
<h3>Live demo</h3>
<p>Try out Core Estimator with the <a href="https://oswg.oftn.org/projects/core-estimator/demo/">live demo</a> on our website.</p>
<p><a title="Core Estimator demo run on an i7 3930k" href="https://oswg.oftn.org/projects/core-estimator/demo/"><img loading="lazy" decoding="async" class="alignnone wp-image-448 size-full" style="border: 1px solid rgba(0, 0, 0, 0.2);" src="https://eligrey.com/blog/wp-content/uploads/2016/02/core-estimator-demo.png" alt="screenshot of the demo being run on an i7 3930k" width="681" height="324" /></a></p>
<h3>How the timing attack works and scales</h3>
<p>The estimator works by timing runs of different numbers of simultaneous web workers. It measures the time it takes to run a single worker and compares this to the time it takes to run different numbers of workers simultaneously. As soon as this measurement starts to increase excessively, it has found the maximum number of web workers that can be run in parallel without degrading performance.</p>
<p><a href="https://eligrey.com/blog/wp-content/uploads/2016/02/core-estimator-graph.png"><img loading="lazy" decoding="async" class="alignnone wp-image-447 size-full" style="border: 1px solid rgba(0, 0, 0, 0.2);" src="https://eligrey.com/blog/wp-content/uploads/2016/02/core-estimator-graph.png" alt="" width="617" height="280" /></a></p>
<p>In the early stages of testing whether this would work, we did a few experiments on various desktops to visualize the data being produced. The resulting graphs clearly showed that the approach was feasible on the average machine. Pictured are the results of running an early version of Core Estimator on Google Chrome 26 on an Intel Core i5-3570K 3.4GHz Quad-Core Processor, with 1,000 time samples taken for each core test. We used 1,000 samples just to be able to see the spread of the data, but it took over 15 minutes to collect. For Core Estimator, 5 samples seem to be sufficient.</p>
<p>The astute observer will note that it doesn&#8217;t test each number of simultaneous workers by simply counting up. Instead, Core Estimator performs a binary search. This way the running time is logarithmic in the number of cores—O(log n) instead of O(n). At most, 2 * floor(log2(n)) + 1 tests will be done to find the number of cores.</p>
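<p>The search strategy can be sketched as follows. This is a simplified illustration, not Core Estimator&#8217;s actual code: <code>timeWorkload(n)</code> stands in for the real benchmark that times <code>n</code> simultaneous workers, and the 1.25x slowdown tolerance is an arbitrary threshold chosen for the example.</p>
<pre lang="javascript">// Binary search for the largest worker count that runs without slowdown.
function estimateCores(timeWorkload, maxCores = 1024, tolerance = 1.25) {
  const baseline = timeWorkload(1);
  let lo = 1, hi = maxCores;
  while (hi > lo) {
    // Bias the midpoint upward so the loop converges on the largest
    // worker count that still passes the timing test.
    const mid = lo + Math.ceil((hi - lo) / 2);
    if (baseline * tolerance >= timeWorkload(mid)) {
      lo = mid;      // no slowdown at mid workers: at least mid cores
    } else {
      hi = mid - 1;  // slowdown observed: fewer than mid cores
    }
  }
  return lo;
}</pre>
<p>On a simulated machine where each batch of four simultaneous workers completes in one round, this search converges on 4 after about eleven timing tests, even with a search range of 1,024.</p>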
<h3>Benefits</h3>
<p>Previously, you had to either manually code in an amount of threads or ask the user how many cores they have, which can be pretty difficult for less tech savvy users. This can even be a problem with tech savvy users—few people know how many cores their phone has. Core Estimator helps you simplify your APIs so thread count parameters can be optional. The xz.js API will be as simple as <code>xz.compress(Blob data, callback(Blob compressed), optional int preset=6, optional int threads=navigator.cores)</code>, making it this easy to implement a &#8220;save .xz&#8221; button for your webapp (in conjunction with <a href="https://github.com/eligrey/FileSaver.js">FileSaver.js</a>):</p>
<pre lang="javascript">save_button.addEventListener("click", function() {
    xz.compress(serializeDB(), function(compressed) {
        saveAs(compressed, "db.xz");
    });
});</pre>
<h3>Supported browsers and platforms</h3>
<p>Early Core Estimator has been tested to support all current release versions of IE, Firefox, Chrome, and Safari on ARM and x86 (as of May 2013). The accuracy of Core Estimator on systems with Intel hyper-threading and Turbo Boost technology is somewhat lower, as the time to complete a workload is less predictable. In any case, it tends to err toward estimating more cores than are actually available, which still provides a reasonable number.</p>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/cpu-core-estimation-with-javascript/#comments" thr:count="6" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/cpu-core-estimation-with-javascript/feed/atom/" thr:count="6" />
			<thr:total>6</thr:total>
			</entry>
		<entry>
		<author>
			<name>Eli Grey</name>
					</author>

		<title type="html"><![CDATA[Saving generated files on the client-side]]></title>
		<link rel="alternate" type="text/html" href="https://eligrey.com/blog/saving-generated-files-on-the-client-side/" />

		<id>http://eligrey.com/blog/?p=440</id>
		<updated>2011-07-15T20:53:14Z</updated>
		<published>2011-07-15T20:53:14Z</published>
		<category scheme="https://eligrey.com/blog" term="Uncategorized" /><category scheme="https://eligrey.com/blog" term="File API" /><category scheme="https://eligrey.com/blog" term="HTML5" /><category scheme="https://eligrey.com/blog" term="JavaScript" />
		<summary type="html"><![CDATA[Have you ever wanted to add a Save as&#8230; button to a webapp? Whether you&#8217;re making an advanced WebGL-powered CAD webapp and want to save 3D object files or you just want to save plain text files in a simple Markdown text editor, saving files in the browser has always been a tricky business. Usually [&#8230;]]]></summary>

					<content type="html" xml:base="https://eligrey.com/blog/saving-generated-files-on-the-client-side/"><![CDATA[<p>Have you ever wanted to add a <q>Save as&hellip;</q> button to a webapp? Whether you&#8217;re making an advanced <a href="https://developer.mozilla.org/en/WebGL">WebGL</a>-powered <abbr title="computer-aided design">CAD</abbr> webapp and want to save 3D object files or you just want to save plain text files in a simple Markdown text editor, saving files in the browser has always been a tricky business.</p>
<p>Usually when you want to save a file generated with JavaScript, you have to send the data to your server and then return the data right back with a <code>Content-disposition: attachment</code> header. This is less than ideal for webapps that need to work offline. The W3C File API includes a <a href="http://www.w3.org/TR/file-writer-api/#the-filesaver-interface"><code>FileSaver</code> interface</a>, which makes saving generated data as easy as <code>saveAs(data, filename)</code>, though unfortunately it will eventually be removed from the spec.</p>
<p>I have written a JavaScript library called <a href="https://github.com/eligrey/FileSaver.js">FileSaver.js</a>, which implements <code>FileSaver</code> in all modern browsers. Now that it&#8217;s possible to generate any type of file you want right in the browser, document editors can have an instant save button that doesn&#8217;t rely on an online connection. When paired with the standard HTML5 <a href="http://www.w3.org/TR/html5/the-canvas-element.html"><code>canvas.toBlob()</code></a> method, FileSaver.js lets you save canvases instantly and give them filenames, which is very useful for HTML5 image editing webapps. For browsers that don&#8217;t yet support <code>canvas.toBlob()</code>, <a href="https://github.com/eboyjr">Devin Samarin</a> and I wrote <a href="https://github.com/eligrey/canvas-toBlob.js">canvas-toBlob.js</a>. Saving a canvas is as simple as running the following code:</p>
<pre lang="javascript">canvas.toBlob(function(blob) {
    saveAs(blob, filename);
});
</pre>
<p>I have created a <a href="http://eligrey.com/demos/FileSaver.js/">demo</a> of FileSaver.js in action that demonstrates saving a canvas doodle, plain text, and rich text. Please note that saving with custom filenames is only supported in browsers that natively support <code>FileSaver</code>, or in browsers like <a href="http://www.chromium.org/getting-involved/dev-channel">Google Chrome 14 dev</a> and <a href="http://tools.google.com/dlpage/chromesxs">Google Chrome Canary</a> that support <a href="http://developers.whatwg.org/links.html#downloading-resources">&lt;a&gt;.download</a> or web filesystems via <a href="http://www.w3.org/TR/file-system-api/#using-localfilesystem"><code>LocalFileSystem</code></a>.</p>
<h2>How to construct files for saving</h2>
<p>First off, you want to instantiate a <a href="https://developer.mozilla.org/en/DOM/Blob"><code>Blob</code></a>. The <code>Blob</code> API isn&#8217;t supported in all current browsers, so I made <a href="https://github.com/eligrey/Blob.js">Blob.js</a>, which implements it. The following example illustrates how to save an XHTML document with <code>saveAs()</code>.</p>
<pre lang="javascript">saveAs(
      new Blob(
          [(new XMLSerializer).serializeToString(document)]
        , {type: "application/xhtml+xml;charset=" + document.characterSet}
    )
    , "document.xhtml"
);
</pre>
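<p>The same pattern works for plain text. A minimal sketch (the greeting and filename are made up):</p>
<pre lang="javascript">var textBlob = new Blob(
      ["Hello, world!"]
    , {type: "text/plain;charset=utf-8"}
);
// saveAs comes from FileSaver.js; guarded so this snippet is harmless where FileSaver.js is not loaded
if (typeof saveAs === "function") {
    saveAs(textBlob, "hello.txt");
}
</pre>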
<p>Not saving textual data? You can put multiple binary <code>Blob</code>s and <a href="https://developer.mozilla.org/en/JavaScript_typed_arrays"><code>ArrayBuffer</code>s</a> into a <code>Blob</code> as well! The following example generates some binary data and saves it.</p>
<pre lang="javascript" escaped="true">var
      buffer = new ArrayBuffer(8) // allocates 8 bytes
    , data = new DataView(buffer)
;
// You can write uint8/16/32s and float32/64s to dataviews
data.setUint8 (0, 0x01);
data.setUint16(1, 0x2345);
data.setUint32(3, 0x6789ABCD);
data.setUint8 (7, 0xEF);

saveAs(new Blob([buffer], {type: "example/binary"}), "data.dat");
// The contents of data.dat are &lt;01 23 45 67 89 AB CD EF&gt;</pre>
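<p>Note that the parts array passed to the <code>Blob</code> constructor can mix strings, <code>ArrayBuffer</code>s, and other <code>Blob</code>s; they are simply concatenated in order. A sketch with made-up contents:</p>
<pre lang="javascript">var
      header = "HDR:" // 4 ASCII bytes
    , payload = new Uint8Array([0xDE, 0xAD, 0xBE, 0xEF]).buffer // 4 binary bytes
    , mixed = new Blob([header, payload], {type: "application/octet-stream"})
;
// mixed.size is 8: the string part followed by the buffer part
if (typeof saveAs === "function") { // saveAs is provided by FileSaver.js
    saveAs(mixed, "mixed.bin");
}
</pre>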
<p>If you&#8217;re generating large files, you can implement an abort button that aborts the <code>FileSaver</code>.</p>
<pre lang="javascript">var filesaver = saveAs(blob, "video.webm");
abort_button.addEventListener("click", function() {
    filesaver.abort();
}, false);</pre>
]]></content>
		
					<link rel="replies" type="text/html" href="https://eligrey.com/blog/saving-generated-files-on-the-client-side/#comments" thr:count="81" />
			<link rel="replies" type="application/atom+xml" href="https://eligrey.com/blog/saving-generated-files-on-the-client-side/feed/atom/" thr:count="81" />
			<thr:total>81</thr:total>
			</entry>
	</feed>
