<?xml version="1.0"?>
<rss version="2.0" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/" xmlns:yt="http://gdata.youtube.com/schemas/2007" xmlns:atom="http://www.w3.org/2005/Atom">
   <channel>
      <title>Elasticvapor RSS Combo</title>
      <description>ElasticVapor + Digital Provocateur Blog Feeds</description>
      <link>http://pipes.yahoo.com/pipes/pipe.info?_id=8136012b4ca727e17e85ecd8f0e72f9a</link>
      <atom:link rel="next" href="http://pipes.yahoo.com/pipes/pipe.run?_id=8136012b4ca727e17e85ecd8f0e72f9a&amp;_render=rss&amp;page=2"/>
      <pubDate>Thu, 01 Oct 2015 23:06:48 +0000</pubDate>
      <generator>http://pipes.yahoo.com/pipes/</generator>
      <item>
         <title>The Difference Between Google, Facebook And Microsoft Summed Up In Two Words: Augmentation Or Immersion</title>
         <link>http://www.forbes.com/sites/reuvencohen/2014/04/03/the-difference-between-google-facebook-and-microsoft-summed-up-in-two-words-augmentation-or-immersion/</link>
         <description>Over the last week the question of why Facebook would spend $2 billion buying Oculus VR, the maker of the Rift virtual reality headset, has been asked repeatedly. In a world where wearable technology is generally seen as the next big thing, a pair of rather large VR goggles appears to run opposite to the approach taken by [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=3266</guid>
         <pubDate>Thu, 03 Apr 2014 14:14:00 +0000</pubDate>
         <content:encoded><![CDATA[<p>Over the last week the question of why <a rel="nofollow" target="_blank" href='http://www.forbes.com/facebook-ipo/'>Facebook</a> would spend $2 billion buying Oculus VR, the maker of the Rift virtual reality headset, has been asked repeatedly. In a world where wearable technology is generally seen as the next big thing, a pair of rather large VR goggles appears to run opposite to the approach taken by Google and, more recently, Microsoft.</p>
<p>Simply put, Google has taken a much more contextual approach to how it believes you and I will consume its services. It’s a strategy that sees a combination of ubiquitous mobile phones, wearable technology and globally available Internet, built upon a collection of web-connected things. These things include Nest, a web-connected thermostat; Google Glass, a wearable heads-up display of information; and its recent announcement of Android Wear, a version of the popular mobile OS tailored specifically for wearable tech products. Adding to the mix are some of its ambitious R&amp;D efforts, like “Project Loon,” which looks to use a global network of high-altitude balloons to connect people in rural and remote areas who have no Internet access.</p>
<p>Through these activities it seems Google’s strategy is to create contextual elements that augment your existing reality with data specifically tailored to you as you live your life. In other words, Google is not looking to immerse you in its world so much as to adapt and improve your existing world by adding to it. Combined with Google Now, it’s a strategy that tries to anticipate what you need to know before you ask, or even know what to ask.</p>
<p>At the other end of the spectrum is the immersive approach championed by Facebook. It’s a strategy that seems to be based upon the belief that you are at the center of its world: an experience whereby Facebook recreates an all-encompassing, self-contained virtual society where the lines between physical and virtual become blurred. It’s a kind of walled garden where the outside world is no longer a key component.</p>
<div style="width:1290px;" class="wp-caption"><img alt="Oculus Rift" src="http://blogs-images.forbes.com/reuvencohen/files/2014/04/oculusrift1.jpg" width="1280" height="720"/><p class="wp-caption-text">Oculus Rift VR Headset</p></div>
<p>Think I’m crazy? Maybe not. &#8220;This is just the start. After games, we&#8217;re going to make Oculus a platform for many other experiences,&#8221; <a rel="nofollow" title="Mark Zuckerberg announces Oculus Rift deal" target="_blank" href="https://www.facebook.com/zuck/posts/10101319050523971?stream_ref=10">wrote Zuckerberg as he announced the deal</a>. &#8220;Imagine enjoying a courtside seat at a game, studying in a classroom of students and teachers all over the world or consulting a doctor face-to-face – just by putting on goggles in your home.&#8221;</p>
<p>It seems that Zuckerberg isn’t alone in this way of thinking. Oculus founder Palmer Luckey made similar comments in a 2013 <a rel="nofollow" title="Palmer Luckey Fast Company interview" target="_blank" href="http://www.fastcolabs.com/3013011/inventor-of-oculus-rift-the-future-of-virtual-reality-is-social-networking"><i>Fast Company</i> magazine article</a>: &#8220;Right now you have very abstract social networks. So it will be really interesting to see what happens if virtual reality ever progresses to the point where you can have a very realistic way of interacting,&#8221; said Luckey. &#8220;The only difference is that you can be whoever you want to be, instead of whatever cards you got dealt in real life. It&#8217;s the stuff of science fiction, but we are not too far away. People already spend hours a day on Facebook. What if it was truly engaging and immersive, rather than a filtered version of your real self?&#8221;</p>
<p>Although historically I would have said Microsoft had more in common with an immersive approach, recently it seems to be following Google in opting for augmentation over immersion. <a rel="nofollow" target="_blank" href="http://techcrunch.com/2014/03/27/microsoft-paid-up-to-150m-to-buy-wearable-computing-ip-from-the-osterhout-design-group/">According to <i>TechCrunch</i></a>, Microsoft recently purchased the rights to its own head-mounted device, paying between $100 million and $150 million for intellectual property assets of the Osterhout Design Group (ODG), a firm that has spent years developing augmented reality devices for the military and other organizations. Although Microsoft has not issued a statement of its own, ODG founder Ralph Osterhout spoke with <i>TechCrunch</i>, confirming the deal.</p>
<p>At the end of the day both Google and Facebook are in essentially the same business, selling ads. But their approach to how you and I will ultimately consume those ads is fundamentally different. I suppose time will tell which tactic ultimately wins.</p>
<p><em>Find Reuven on <a rel="nofollow" target="_blank" href="http://twitter.com/ruv">Twitter @rUv</a> | <a rel="nofollow" target="_blank" href="http://ca.linkedin.com/in/reuvencohen">Linkedin</a> | <a rel="nofollow" target="_blank" href="https://plus.google.com/112393776166946876479/posts?rel=author">Google+</a> | <a rel="nofollow" target="_blank" href="http://www.facebook.com/ruvnet">Facebook</a> | <a rel="nofollow" target="_blank" href="http://digitalnibbles.com/">Podcast</a></em></p>]]></content:encoded>
      </item>
      <item>
         <title>Design Thinking: A Unified Framework For Innovation</title>
         <link>http://www.forbes.com/sites/reuvencohen/2014/03/31/design-thinking-a-unified-framework-for-innovation/</link>
         <description>Over the years the question of what makes some companies, and the people within them, more or less creative than others has been studied ad nauseam. The idea of innovation within business has long been thrown around; it’s a kind of catchall term used for everything a company must do to remain relevant. We are [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=3223</guid>
         <pubDate>Mon, 31 Mar 2014 15:30:00 +0000</pubDate>
         <content:encoded><![CDATA[<p>Over the years the question of what makes some companies, and the people within them, more or less creative than others has been studied ad nauseam. The idea of innovation within business has long been thrown around; it’s a kind of catchall term used for everything a company must do to remain relevant. We are led to believe that without a steady stream of “innovation,” your company is surely doomed. But the realities of innovation and creativity are much more complicated than a simple willingness to be more creative.</p>
<p>To help better understand where creativity comes from and, more importantly, whether it can be taught, I recently had the opportunity to attend <a rel="nofollow" target="_blank" href='http://www.forbes.com/colleges/stanford-university/'>Stanford University</a>’s “<a rel="nofollow" target="_blank" href="http://www.gsb.stanford.edu/exed/dtbc/">Design Thinking Boot Camp: From Insights to Innovation</a>.” The program is held at the <a rel="nofollow" target="_blank" href="http://dschool.stanford.edu/"><i>Hasso Plattner Institute of Design</i></a>, affectionately called &#8220;<a rel="nofollow" target="_blank" href="https://dschool.stanford.edu">the d.school</a>.&#8221; It’s a three-day immersive program tailored to executives, providing the opportunity to learn the concept of “design thinking” — a human-centered, prototype-driven process for innovation that can be applied to product, service, and business design.</p>
<div style="width:560px;" class="wp-caption"><img class=" " alt="" src="http://blogs-images.forbes.com/reuvencohen/files/2014/04/p946396122-4-730x486.jpg" width="550"/><p class="wp-caption-text">d.school</p></div>
<p>Before I get into the program itself, first a little bit of history. Over the last 40 years or so, a number of strategies have been devised to foster what can best be described as a formalized methodology for applying a systematic form of creativity and innovation within business. Among the more popular are a group of cognitive processes for creativity that arose from <a rel="nofollow" title="Herbert A. Simon" target="_blank" href="http://en.wikipedia.org/wiki/Herbert_A._Simon">Herbert A. Simon</a>&#8216;s 1969 book <i>The Sciences of the Artificial</i>. Simon was one of the most influential social scientists of the twentieth century; among his many claims to fame is his work as a founding father of several of today&#8217;s important scientific domains, including artificial intelligence, information processing, attention economics, organization theory, complex systems, and computer simulation of scientific discovery. One area of particular interest in terms of creativity and innovation is his research into decision-making and problem solving, where he devised three stages of rational decision-making: Intelligence, Design, Choice (IDC).</p>
<p><img alt="" src="http://blogs-images.forbes.com/reuvencohen/files/2014/04/simons_3_stages_in_decision_making.gif" width="300"/></p>
<p>Expanding on Herbert A. Simon’s work, in 1973 Robert McKim wrote the book <i>Experiences in Visual Thinking</i>. The book focused on the ways in which perceptual thinking skills can be observed, utilized and improved, and how powerful these skills are in their &#8220;capacity to change your world of ideas and things.”</p>
<p>Finally, in the 1980s, Stanford’s <a rel="nofollow" title="Rolf Faste" target="_blank" href="http://en.wikipedia.org/wiki/Rolf_Faste">Rolf Faste</a> expanded on McKim&#8217;s work, defining and popularizing the concept of &#8220;Design Thinking&#8221; as a method of creative action. In the simplest terms, Design Thinking is “a formal method for practical, creative resolution of problems or issues, with the intent of an improved future result.” It’s a methodology for actualizing your concepts and ideas.</p>
<p>Design Thinking attempts to inspire the essential element of creativity, the ability to take an abstract idea and create something with it. It’s based upon the fundamental belief that an unexecuted idea, one that is never realized, is a worthless proposition, and that doing is as valuable as thinking.</p>
<p>A big part of the Design Thinking concept involves empathy for those you are designing for. It’s often manifested through a series of activities that attempt to create an experience of how your idea will ultimately be consumed. During the d.school bootcamp, this was done through a series of role-playing exercises where we played out different characters developed through joint brainstorming sessions. These role-playing games allowed for rapid ideation (idea generation) with the ability to visualize and adapt the results in near real-time.</p>
<div style="width:560px;" class="wp-caption"><img class="  " alt="Design Thinking Stages" src="http://blogs-images.forbes.com/reuvencohen/files/2014/04/steps-730x345.png" width="550"/><p class="wp-caption-text">Design Thinking Stages</p></div>
<p>The interesting part of Design Thinking is that, like the creativity it attempts to foster, the concept itself is continually evolving. One example of a design thinking process could have several stages: <i>Empathize, Define, Ideate, Prototype and Test</i>. Within these steps, problems can be framed, the right questions can be asked, more ideas can be created, and the best answers can be chosen. The steps aren&#8217;t linear; they can occur simultaneously and can be repeated. The d.school offers a free <a rel="nofollow" target="_blank" href="https://dschool.stanford.edu/dgift/">90-minute video-led cruise</a> through their methodology for anyone interested.</p>
<p>I admit I entered this program a bit skeptical. Since joining Citrix last year, I’ve heard the term Design Thinking a lot, but I can’t say I fully understood it until now. Design Thinking is a mantra that’s been championed by our senior vice president of customer experience, <a rel="nofollow" target="_blank" href="http://www.forbes.com/pictures/ffgh45edjf/catherine-courage/">Catherine Courage</a>. Under her encouragement (pun intended), I’m told more than 7,500 Citrix employees have gone through our internal Design Thinking courses.</p>
<p>Citrix isn’t alone in applying this way of thinking to inspire innovation. The Design Thinking bootcamp included executives (both to learn and to teach) from a selection of the largest global corporations. Among the more interesting instructors was <a rel="nofollow" target="_blank" href="https://www.linkedin.com/profile/view?id=834399">Evelyn Huang</a>, Director of Design Thinking and Strategy at <a rel="nofollow" target="_blank" href="https://capitalonelabs.com/">Capital One Labs</a>. A Stanford d.school alum, Huang’s mission at Capital One is to “reimagine the way 60 million people interact with their money.” She’s part of a growing trend within companies to reimagine what creativity is and how it can be fostered.</p>
<p>“We believe progress starts with a deep understanding of our customers. That&#8217;s why Design Thinking is our go-to method for building the products and experiences that our customers need. This human-centered methodology, coupled with a &#8220;fail fast&#8221; attitude, allows us to quickly identify, build, and test our way to success. We spend less time planning, more time doing, and, above all else, challenge ourselves to see the world through the eyes of our customers every step of the way,” says Huang.</p>
<p>The Design Thinking bootcamp is led by <a rel="nofollow" target="_blank" href="https://www.linkedin.com/profile/view?id=4383988">Perry Klebahn</a>, whose claims to fame include inventing the modern snowshoe at Atlas Snowshoes and serving as CEO of Timbuk2, the original messenger bag company. The consensus of Klebahn and the d.school team is that without a systematic approach to innovation, your business faces the certain risk of being disrupted, or potentially worse.</p>
<p>The risk associated with a lack of innovation within some businesses has been deemed so dangerous that it has recently led to the creation of a new executive role, the “Chief Innovation Officer”: a leader responsible for managing the process of innovation in an organization.</p>]]></content:encoded>
      </item>
      <item>
         <title>The Future Of The Web Is Audible</title>
         <link>http://www.forbes.com/sites/reuvencohen/2014/03/06/the-future-of-the-web-is-audible/</link>
         <description>Like it or not, the web has mostly been designed for those who can see it. The very nature of HTML and CSS is focused on how a web page looks, mostly disregarding our other senses. With the increasing popularity of wearable technology combined with advancements in machine learning, a newfound emphasis is being placed [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=3204</guid>
         <pubDate>Thu, 06 Mar 2014 20:20:00 +0000</pubDate>
         <content:encoded><![CDATA[<p>Like it or not, the web has mostly been designed for those who can see it. The very nature of HTML and CSS is focused on how a web page looks, mostly disregarding our other senses. With the increasing popularity of wearable technology combined with advancements in machine learning, a newfound emphasis is being placed on not only how a website looks, but also how it can be more naturally interacted with.</p>
<p>As it turns out, sound, or the ability to speak and hear commands, might be the key to the next generation of web development. New audible interaction methods and API standards could be poised to usher in a new generation of web technology: technology specifically tailored to interact with us as individuals, rather than having us adapt to interact with the web. At the heart of this transformation is a new crop of technologies focused on natural language interaction through the use of verbal commands.</p>
<p>In its most simple form, speech recognition is the ability to translate spoken words into text. The technology is certainly not a new concept; it has been around for almost 60 years. In 1954, the so-called “<a rel="nofollow" target="_blank" href="http://en.wikipedia.org/wiki/Georgetown-IBM_experiment">Georgetown-IBM experiment</a>” was an influential demonstration of the first machine-based translation program. Developed jointly by Georgetown University and IBM, it successfully used early computing technology to automate the translation of more than sixty Russian sentences into English. Although primitive, with only six grammar rules and 250 items in its vocabulary, all written by hand, the system proved the potential for a more natural interaction with technology and drew a significant amount of early attention to the field. At the time, the researchers claimed that within three to five years machine translation would be a solved problem. Unfortunately, it took another 30 years for it to become a reality.</p>
<p>The next major advancement came in the 1980s, when modern machine learning algorithms specifically tailored for language processing were introduced. These early algorithms made speech recognition technology commercially feasible for the first time. Yet for the average software developer the technology remained out of reach.</p>
<p>Fast-forward another 30 years and things are beginning to change. Thanks in part to Moore’s Law (the ever-increasing power of computers) and the work of several standards bodies to create open and accessible APIs, we are now beginning to see the emergence of a new generation of web-focused speech systems that are as easy to implement and use as building a simple HTML web page.</p>
<p>Leading the charge is the World Wide Web Consortium, or W3C, an international community that develops open standards to ensure the long-term growth of the Web. The W3C is the group behind the HTML standard, along with a host of others. One of the group’s more recent standards is the “<a rel="nofollow" target="_blank" href="https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html#introduction">Web Speech API Specification</a>,” which aims to “enable web developers to provide, in a web browser, speech-input and text-to-speech output features that are typically not available when using standard speech-recognition or screen-reader software. The API itself is agnostic of the underlying speech recognition and synthesis implementation and can support both server-based and client-based/embedded recognition and synthesis. It is designed to enable both brief (one-shot) speech input and continuous speech input.&#8221;</p>
<p>In the simplest terms, the API allows web developers to easily add the ability for web site users to <a rel="nofollow" target="_blank" href="http://updates.html5rocks.com/2013/01/Voice-Driven-Web-Apps-Introduction-to-the-Web-Speech-API">speak to a webpage</a> and get a responding result. If you’ve ever used Google Now or Apple’s Siri, you’ve seen this in action. Google is among the first to implement the W3C Web Speech API, shipping it in Chrome (version 25 or greater) and using it on its own homepage, allowing anyone to easily build speech-enabled web pages.</p>
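<p>To make that concrete, here is a minimal sketch of the kind of page the specification enables, assuming a browser that exposes the prefixed <code>webkitSpeechRecognition</code> constructor (Chrome 25 or later). The “output” element and the handler wiring are illustrative, not part of the standard.</p>
<pre><code>// Minimal sketch: one-shot voice input on a web page.
// Assumes Chrome 25+ and a hypothetical &lt;div id="output"&gt; on the page.
var recognition = new webkitSpeechRecognition();
recognition.continuous = false;     // brief (one-shot) input, per the spec
recognition.interimResults = true;  // stream partial guesses while the user speaks

recognition.onresult = function (event) {
  var transcript = '';
  for (var i = event.resultIndex; i &lt; event.results.length; i++) {
    transcript += event.results[i][0].transcript;
  }
  document.getElementById('output').textContent = transcript;
};

recognition.start(); // prompts the user for microphone permission

// The text-to-speech half of the same specification, where supported:
speechSynthesis.speak(new SpeechSynthesisUtterance('I heard you.'));
</code></pre>
<p>Note that the specification deliberately leaves open where the recognition itself happens; Chrome’s implementation sends the captured audio to Google’s servers for processing, so the feature works only while online.</p>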
<p>Google Now, Apple’s Siri and IBM’s Watson are part of a new form of intelligent personal assistants and knowledge navigators that allow for a more natural interaction with information. At the center of this technology is an area known as “Natural Language Processing,” or the ability to speak to a computer as easily as you speak to another person. The challenge is that language is far easier for humans to learn and speak than it is for computers to comprehend. But what computers lack in comprehension, they more than make up for in distributed computation. With more than <a rel="nofollow" target="_blank" href="http://news.softpedia.com/news/Google-Chrome-Hits-750-Million-Monthly-Users-by-Far-the-Most-Popular-Browser-in-the-World-353657.shtml">750 million monthly users</a> of the Chrome browser, Google has essentially given anyone the ability to build their very own Siri, or at the very least, the speech recognition portion. As more web developers adopt speech-based interfaces, the way we interact with the internet may quickly evolve into something that looks more like <a rel="nofollow" target="_blank" href="http://en.wikipedia.org/wiki/LCARS">the computer on Star Trek</a>: The Next Generation.</p>
<p>Leading the charge at Google is <a rel="nofollow" target="_blank" href="http://www.kurzweilai.net/ray-kurzweil-biography">Ray Kurzweil</a>, who in 2012 was appointed to head up a team developing machine intelligence and natural language understanding for the company. In a recent article on <a rel="nofollow" target="_blank" href="http://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence">The Guardian website</a>, Kurzweil outlines the opportunity saying, “language is the key to everything.” He goes on to state the objectives for his team: “my project is ultimately to base search on really understanding what the language means. When you write an article you&#8217;re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organizing and processing the world&#8217;s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.&#8221;</p>
<p>Google isn’t alone in seeing an opportunity to interact with computers in a more human and natural way. IBM has invested upward of $1 billion and employs roughly 2,000 people in its IBM Watson Group, devoted to commercializing its Watson platform. Watson is a computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering. According to IBM, &#8220;more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses.&#8221;</p>
<p>Although Watson is probably most famous for winning the quiz show Jeopardy!, its potential uses are far broader. In February 2013, <a rel="nofollow" target="_blank" href="http://www.forbes.com/sites/bruceupbin/2013/02/08/ibms-watson-gets-its-first-piece-of-business-in-healthcare/">IBM announced</a> that the Watson software system&#8217;s first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center in conjunction with health insurance company WellPoint. WellPoint’s chief medical officer Samuel Nussbaum said at the press event “that health care pros make accurate treatment decisions only 50% of the time. Watson has shown the capability of being accurate in its decisions 90% of the time.”</p>
<p>Watson doesn’t tell a doctor what to do; it provides several options with degrees of confidence for each, along with the supporting evidence it used to arrive at the optimal treatment. Doctors can enter a new bit of information on an iPad in plain text, such as “my patient has blood in her phlegm,” and within half a minute Watson will come back with an entirely different drug regimen that suits the individual. IBM Watson’s business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance.</p>
<p>Google’s Kurzweil sees a similarly opportunistic future where computers have the ability to read and understand the semantic content of a language. A future where computers know the answer to your question before you have even asked it, because they will have already read every email you&#8217;ve ever written, every document, every idle thought you&#8217;ve ever tapped into a search-engine box or, in the case of healthcare, every patient medical profile. It will know you better than you potentially know yourself. And if you are to believe Kurzweil, this all starts with the ability to interact with computers in a natural way, with your voice.</p>
<p><em>Find Reuven on <a rel="nofollow" target="_blank" href="http://twitter.com/ruv">Twitter @rUv</a> | <a rel="nofollow" target="_blank" href="http://ca.linkedin.com/in/reuvencohen">Linkedin</a> | <a rel="nofollow" target="_blank" href="https://plus.google.com/112393776166946876479/posts?rel=author">Google+</a> | <a rel="nofollow" target="_blank" href="http://www.facebook.com/ruvnet">Facebook</a> | <a rel="nofollow" target="_blank" href="http://digitalnibbles.com/">Podcast</a></em></p>]]></content:encoded>
      </item>
      <item>
         <title>How Facebook Is Spending Billions Buying Your Attention</title>
         <link>http://www.forbes.com/sites/reuvencohen/2014/02/24/how-facebook-is-spending-billions-buying-your-attention/</link>
         <description>Much has been blogged, tweeted and written about Facebook’s massive $19 billion acquisition of WhatsApp. It’s a fascinating story that hits at the heart of the new American dream. A dream that imagines anyone with a simple idea can not only find success, but success on a gargantuan scale. Yet as a story of rags to [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=3193</guid>
         <pubDate>Mon, 24 Feb 2014 13:09:00 +0000</pubDate>
         <content:encoded><![CDATA[<p>Much has been blogged, tweeted and written about <a rel="nofollow" target="_blank" href='http://www.forbes.com/facebook-ipo/'>Facebook</a>’s massive $19 billion acquisition of WhatsApp. It’s a fascinating story that hits at the heart of the new American dream: a dream that imagines anyone with a simple idea can not only find success, but success on a gargantuan scale. Yet the rags-to-riches story aside, a core question still remains. Why?</p>
<p>Many will point to a massive disruption occurring within the traditional telecom world. The research firm Informa stated in a recent report that global annual SMS revenues will fall by $23 billion by 2018 from $120 billion in 2013 mainly due to &#8220;continuing adoption and use of over-the-top messaging applications in both developed and emerging markets.&#8221; Yet these stats speak to a decreasing revenue stream, not exactly a compelling reason in itself.</p>
<p>Others may point to the massive growth within the mobile world. In announcing the deal, Facebook’s CEO <a rel="nofollow" target="_blank" href='http://www.forbes.com/profile/mark-zuckerberg/'>Mark Zuckerberg</a> stated, &#8220;WhatsApp is on a path to connect one billion people. The services that reach that milestone are all incredibly valuable.&#8221;</p>
<p>But does massive user growth alone make apps like Instagram and WhatsApp so valuable? After all, Facebook already has a significant portion of the global instant messaging market through its own messaging service, Facebook Messenger. In November 2013, a survey of smartphone owners found that WhatsApp was the leading social messaging app in countries including Spain, Switzerland, Germany and Japan. Yet at 450 million users and growing, there is a strong likelihood that Facebook and WhatsApp share the majority of the same user base. So what’s driving the massive valuation? One answer might be users’ attention. Unlike many other mobile apps, WhatsApp is something its users actually use on an ongoing daily or even hourly basis.</p>
<p>&#8220;Attention,&#8221; write Thomas Mandel and Gerard Van der Leun in their 1996 book <i>Rules of the Net,</i> &#8220;is the hard currency of cyberspace.&#8221; This has never been truer.</p>
<p>WhatsApp&#8217;s value may not have as much to do with the disruption of the telecom world as with a looming battle for Internet users’ rapidly decreasing attention spans. <a rel="nofollow" target="_blank" href="http://www.localytics.com/blog/2011/first-impressions-26-percent-of-apps-downloaded-used-just-once/">A study back in 2011 uncovered</a> the reality for most mobile apps: according to the study, 26% of apps are downloaded, used once and never given a second try. With an ever-increasing number of apps competing for users’ attention, the only real metric that matters is whether or not people actually use them. Your attention may very well be the fundamental value behind Facebook’s purchase.</p>
<p>In <a rel="nofollow" target="_blank" href="http://www.wired.com/wired/archive/5.12/es_attention.html">a 1997 wired article</a>, author Michael H. Goldhaber describes the shift towards the so called Attention Economy; “Attention has its own behavior, its own dynamics, its own consequences. An economy built on it will be different than the familiar material-based one.” writes Goldhaber.</p>
<p>His thesis is that as the Internet becomes an increasingly strong presence in the overall economy and our daily lives, the flow of attention will not only anticipate the flow of money, but also eventually replace it altogether. Fast-forward 17 years and his thesis has never been more true.</p>
<p>As we become ever more bombarded with information, the value of that information decreases. Just look at the improvements made to Facebook’s news feed over the years. In an attempt to make its news feed more useful, the company has implemented advanced algorithms that attempt to tailor the flow of information to your specific interests. The better Facebook gets at keeping your attention, the more valuable you become. Yes, you are the product.</p>
<p>Google has implemented a similar strategy around its <a rel="nofollow" target="_blank" href="http://www.google.ca/landing/now/">Google Now service</a>. Google Now is an intelligent personal assistant that attempts to anticipate your information needs before you even realize you need or want something. It’s contextual, it adapts to you, it’s omnipresent and it’s tuned to your particular needs. Yet its true value is that it has your attention, just when it thinks you need it.</p>
<p>Yet Goldhaber, Facebook and Google weren’t the first to notice this trend toward an attention-centric economy. Back in 1971, Herbert A. Simon was among the first to write about the concept of attention economics in his essay &#8220;Designing Organizations for an Information-Rich World,&#8221; where he wrote that &#8220;&#8230;in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it&#8221; (<a rel="nofollow" target="_blank" href="http://en.wikipedia.org/wiki/Attention_economy#CITEREFSimon1971">Simon 1971</a>, pp. 40–41).</p>
<p>In a nutshell, this is what I believe Facebook is buying. They are buying an application that people use on a daily basis. They are buying your attention.</p>
<p><i>Find Reuven on <a rel="nofollow" target="_blank" href="http://twitter.com/ruv">Twitter @rUv</a> |  <a rel="nofollow" target="_blank" href="http://ca.linkedin.com/in/reuvencohen">Linkedin</a> |  <a rel="nofollow" target="_blank" href="https://plus.google.com/112393776166946876479/posts?rel=author">Google+</a> | <a rel="nofollow" target="_blank" href="http://www.facebook.com/ruvnet">Facebook</a> | <a rel="nofollow" target="_blank" href="http://digitalnibbles.com/">Podcast</a></i></p>]]></content:encoded>
      </item>
      <item>
         <title>What’s Driving Google’s Obsession With Artificial Intelligence And Robots?</title>
         <link>http://www.forbes.com/sites/reuvencohen/2014/01/28/whats-driving-googles-obsession-with-artificial-intelligence-and-robots/</link>
         <description>Google is without question one of the most innovative companies on the planet. It’s a company that is known mostly for its amazingly successful search and advertising businesses, and will probably be known for this for the foreseeable future. But lately it’s also quickly becoming known for its rather unorthodox array of secondary business efforts. These [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=3126</guid>
         <pubDate>Tue, 28 Jan 2014 15:54:00 +0000</pubDate>
         <content:encoded><![CDATA[<p>Google is without question one of the most innovative companies on the planet.  It’s a company that is known mostly for its amazingly successful search and advertising businesses, and will probably be known for this for the foreseeable future. But lately it’s also quickly becoming known for its rather unorthodox array of secondary business efforts. These efforts include things like driverless cars, wearable technology (Google Glass), human-like robotics, high-altitude Internet broadcasting balloons, contact lenses that monitor glucose in tears, and even an effort to <a rel="nofollow" target="_blank" href="http://en.wikipedia.org/wiki/Calico_(company)">potentially solve death</a>.</p>
<p>Within all these various and sometimes bizarre efforts is a common guiding principle. Google doesn’t just attempt to take incremental steps when it comes to technology. It takes what a recent <a rel="nofollow" target="_blank" href="http://content.time.com/time/magazine/article/0,9171,2152422,00.html">Time Magazine profile</a> described as “Moon Shots.” Yet within these Moon Shots lies a method to its apparent madness. I decided to do a little digging to see if I could find out what that is.</p>
<p><img alt="" src="http://blogs-images.forbes.com/reuvencohen/files/2014/06/19dumqgta589ogif.gif" width="640" height="360"/></p>
<p>My question is simple: why is a company built on finding information and serving up ads spending vast amounts on a variety of outlandish projects?</p>
<p>Adding to Google’s mixture of eccentric acquisitions is word that this week it has acquired artificial intelligence (AI) startup DeepMind, a London-based company the tech giant <a rel="nofollow" target="_blank" href="http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/">bought up for an estimated minimum of $400 million</a>. <a rel="nofollow" target="_blank" href="http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/">According to Re/code</a>, which broke the story, the purchase &#8220;is in large part an artificial intelligence talent acquisition.&#8221; Re/code notes that DeepMind has a team of at least 50 people and has secured more than $50 million in funding, calling it “the last large independent company with a strong focus on artificial intelligence.” DeepMind was founded by 37-year-old former child chess prodigy Demis Hassabis, who was once called “probably the best games player in history” by the <a rel="nofollow" target="_blank" href="http://www.boardability.com/profile.php?id=demis_hassabis">Mind Sports Olympiad</a>. Interestingly, <a rel="nofollow" target="_blank" href='http://www.forbes.com/facebook-ipo/'>Facebook</a> <a rel="nofollow" target="_blank" href="https://www.theinformation.com/google-beat-facebook-for-deepmind-creates-ethics-board">was reportedly</a> also attempting to buy the company.</p>
<p>DeepMind joins a growing list of robotics and AI companies recently purchased by Google, including Boston Dynamics, its eighth robotics acquisition in the past few months. The robots manufactured by Boston Dynamics replace conventional wheel-based locomotion with designs that look and move more like humans or even certain kinds of animals. Boston Dynamics is also a leading provider of human simulation software. Two of its bipedal robots, named Atlas and Petman, have a degree of freedom that can only be matched by human beings. Its primary customers are the US Army, Navy and Marine Corps. Other recent Google acquisitions include Flutter, which specializes in gesture recognition, and, most recently, Nest, which it bought for $3.2 billion and which provides smart household items like thermostats and smoke detectors for the Internet of Things.</p>
<p>Google&#8217;s DeepMind acquisition led it to establish, upon the smaller company&#8217;s insistence, a DeepMind-Google ethics board that will set standards for use of the AI technology within Google, assuring it does “no evil.” Actually, this ethics board sounds a lot like the famous “<a rel="nofollow" target="_blank" href="http://en.wikipedia.org/wiki/Three_Laws_of_Robotics">The Three Laws of Robotics</a>” from the 1942 short story &#8220;<a rel="nofollow" title="Runaround (story)" target="_blank" href="http://en.wikipedia.org/wiki/Runaround_(story)">Runaround</a>&#8221; by the science fiction author Isaac Asimov.</p>
<p>Asimov’s three laws of robotics are as follows:</p>
<ol start="1">
<li>A robot may not injure a human being or, through inaction, allow a human being to come to harm.</li>
<li>A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.</li>
<li>A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.</li>
</ol>
<p>But I digress. Besides adding deep talent to an already deep talent pool, the broader question is why Google would spend an estimated half a billion dollars on a “talent acquisition,” and what’s behind this obsession with artificial intelligence and robots. All of the companies it has acquired in the AI and robotics space currently sit within its Google X division, a semi-secret facility dedicated to making major technological advancements. Work at the lab is overseen by <a rel="nofollow" title="Sergey Brin" target="_blank" href="http://en.wikipedia.org/wiki/Sergey_Brin">Sergey Brin</a>, one of <a rel="nofollow" title="Google" target="_blank" href="http://en.wikipedia.org/wiki/Google">Google</a>&#8216;s co-founders, and by scientist and entrepreneur <a rel="nofollow" title="Astro Teller" target="_blank" href="http://en.wikipedia.org/wiki/Astro_Teller">Astro Teller</a>. <a rel="nofollow" target="_blank" href="http://www.bbc.co.uk/news/technology-25883016">Teller says</a> that they aim to improve technologies by a factor of 10, and to think of &#8220;science fiction-sounding solutions.&#8221;</p>
<p>A recent post on <a rel="nofollow" target="_blank" href="http://www.theguardian.com/technology/2013/dec/29/google-robotics-us-military-boston-dynamics">The Guardian</a> sheds light on the potential rationale; “What drives the Google founders is an acute understanding of the possibilities that long-term developments in information technology have deposited in mankind&#8217;s lap. Computing power has been doubling every 18 months since 1956. Bandwidth has been tripling and electronic storage capacity has been quadrupling every year. Put those trends together and the only reasonable inference is that our assumptions about what networked machines can and cannot do need urgently to be updated.”</p>
<p>According to the <a rel="nofollow" target="_blank" href="http://research.google.com/pubs/ArtificialIntelligenceandMachineLearning.html">company’s research portal</a>, the answer is simple. Much of the fundamental infrastructure within Google is based on language, speech, translation, and visual processing. All of this depends upon the use of so called Machine Learning and AI. A common thread among all of these tasks and many others at Google is that it gathers unimaginably large volumes of direct or indirect data. This data provides what the company calls “evidence of relationships of interest” which they then apply to adaptive learning algorithms. In turn these smart algorithms create new potential opportunities in areas that the rest of us have yet to grasp. In short, they might very well be attempting to predict the future based on the search/web surfing habits of the millions who visit the company’s products and services every day. They know what we want, before we do.</p>
<p>Along with the billions of dollars Google is spending on various cutting edge companies, in May of 2013 it launched a Quantum Artificial Intelligence Lab to study how quantum computing might advance machine learning and artificial intelligence. For Google, this obsession with artificial intelligence and robotics may very well be about building better models of the world so it can make more accurate predictions of future outcomes. If Google wants to cure diseases, it needs better models of how they develop. If it wants cars to drive by themselves, it needs better models of how transportation networks operate. If it wants to create effective environmental policies, it needs better models of what’s happening to our climate. And if Google wants to build a more useful search engine, it needs to better understand you and how you interact with what’s on the web, so you get the best answer tailored specifically for you. Or maybe it just wants to create an autonomous robot army, but that sounds crazy. Or does it?</p>
<p><i>Find Reuven on <a rel="nofollow" target="_blank" href="http://twitter.com/ruv">Twitter @rUv</a> |  <a rel="nofollow" target="_blank" href="http://ca.linkedin.com/in/reuvencohen">Linkedin</a> |  <a rel="nofollow" target="_blank" href="https://plus.google.com/112393776166946876479/posts?rel=author">Google+</a> | <a rel="nofollow" target="_blank" href="http://www.facebook.com/ruvnet">Facebook</a> | <a rel="nofollow" target="_blank" href="http://digitalnibbles.com/">Podcast</a></i></p>
<embed src="http://www.youtube.com/v/J17Qgc4a8xY&amp;rel=0" type="application/x-shockwave-flash" width="485" height="365"></iframe>]]></content:encoded>
      </item>
      <item>
         <title>The Rise Of The Biobot: Mixing Biology And Technology</title>
         <link>http://www.forbes.com/sites/reuvencohen/2014/01/10/the-rise-of-the-biobot-mixing-biology-and-technology/</link>
         <description>In a recent article posted on The Guardian website, author and new-age guru Deepak Chopra made an interesting observation. “A cyborg future is coming. Man&amp;#8217;s relationship with machine is merging and machines are an extension of our own intelligence. I&amp;#8217;m so into it. I wear all kinds of bio-sensors to tell me what&amp;#8217;s going [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=3084</guid>
         <pubDate>Fri, 10 Jan 2014 20:13:00 +0000</pubDate>
         <content:encoded><![CDATA[<p>In a recent article posted on <a rel="nofollow" target="_blank" href="http://www.theguardian.com/lifeandstyle/2014/jan/04/deepak-chopra-this-much-i-know">The Guardian website</a>, author and new-age guru Deepak Chopra made an interesting observation.</p>
<p><i>“A cyborg future is coming. Man&#8217;s relationship with machine is merging and machines are an extension of our own intelligence. I&#8217;m so into it. I wear all kinds of bio-sensors to tell me what&#8217;s going on inside me. It&#8217;s the future,”</i> said Chopra.</p>
<p>Anyone who has read my posts lately will know that I’ve been going through a bit of an obsession, <a rel="nofollow" target="_blank" href="http://www.forbes.com/sites/reuvencohen/2013/11/29/bitcoin-mania-how-to-create-your-very-own-crypto-currency-for-free/">not just with bitcoin</a>, but with biologically inspired technology. From wearable tech, to medical implants, to complex interfaces between brain, mind and machine, recent developments in combining machines and organisms of various types are a fascinating subject, but they also give rise to some major ethical concerns.</p>
<p>In <a rel="nofollow" target="_blank" href="http://onlinelibrary.wiley.com/doi/10.1002/ange.201307495/full#bib1">a recent paper published</a> in the renowned journal <i>Angewandte Chemie <a rel="nofollow" target="_blank" href='http://www.forbes.com/international/'>International</a> Edition,</i> German scientists discuss the state of the art of research, opportunities, and risks facing so called “Cyborgs.” Although published in German, the paper explores the latest developments at the interface between technical systems and living organisms.</p>
<p>First, a bit of background: “cyborg” is short for “cybernetic organism.” More simply, it describes a kind of chimera, a living organism combined with a machine. For many this may sound like something out of a far-fetched sci-fi novel, but today many people use intracorporeal (occurring within the body) medical systems such as pacemakers, complex prostheses or cochlear and retinal implants. In a technical sense, many humans can already be considered cyborgs.</p>
<p>The report’s authors note that in recent years, the needs of the field of biomedicine and the enormous advances in micro- and nanotechnology have driven the original idea of the cybernetic organism to new levels. They describe functional interactions between living tissue and technical systems that have reached an astonishing level of complexity. Modern man-made systems are now able to interact with, or even replace, central body functions; one common example is the frequently implanted cardiac pacemaker. Implants can also help to compensate for diminished sensory abilities, as cochlear implants do for hearing. Often they can complement nonfunctional body structures, such as arms or legs, which can be partially or completely replaced by technical prostheses that interact directly with your brain.</p>
<p>The use of prostheses or implants certainly isn’t a new idea. Humans have been using implanted technical aids of various types for thousands of years to compensate for defects and impairments caused by traumatic events, illnesses or just vanity. As far back as Roman times, artificial dentures made of forged iron were used as dental implants to replace lost teeth.</p>
<p>Today, when a technical system or machine is used to replace a complex function of the body, such as the grip of a hand, it is essential that the system be closely integrated with the living organism. Ideally, the system itself should be capable of receiving and sending the appropriate signals for movement and control directly from the central nervous system, and especially the brain itself. Such &#8220;hardware/wetware interfaces&#8221; are typically referred to as brain-machine interfaces. They are the interface through which technical systems receive control commands and through which they may return feedback or stimulation.</p>
<p><a rel="nofollow" target="_blank" href="http://www.forbes.com/sites/reuvencohen/2014/01/03/new-open-source-platform-allows-anyone-to-hack-brain-waves/">Low-cost brain-machine</a> interfaces make interfacing with our central nervous systems more accessible then ever before even for laymen. One example is the <a rel="nofollow" target="_blank" href="https://backyardbrains.com/products/spikerbox">SpikerBox</a> that is commercially sold by Backyard Brains. The company describes the product as “a great way to get introduced to hands-on neuroscience.”  Technically it is a&#8221; bioamplifier&#8221; that allows you to hear and see spikes (i.e. action potentials) of real living neurons in invertebrates  (cricket, earthworm, or cockroach) which you can order from us or pick up in a local pet store or backyard. The company even offers a <a rel="nofollow" target="_blank" href="https://backyardbrains.com/products/smartphonecable.aspx">Smartphone Cable</a> to plug your SpikerBox into your smartphone or tablet to look at the neurons firing in real time.</p>
<p>Needless to say, there are some pretty serious ethical concerns when you start talking about experimenting on backyard invertebrates. Those concerns aside, interfacing directly with lower forms of life opens up the potential for a variety of interesting uses. The brains of lower organisms, such as insects, are much less complex than ours, allowing us to more easily understand how certain movements, such as running or flying, are programmed. Autonomous electronics implanted within insects have enabled researchers to remotely control the insects for up to three hours. In many ways, insects provide the gold standard in terms of aerodynamics, sustainability, energy efficiency and biochemical sensing capabilities.</p>
<p>By understanding these core biological processes, so-called biobots (i.e. large insects with implanted electronic and microfluidic control units) could be used in a new generation of tools, such as small flying objects for monitoring or even autonomous drones modeled on processes found within real organisms. Moreover, these systems could be powered by the organism’s own thermal, kinetic, electric or chemical energy, making them extremely energy efficient.</p>
<p>Grasping the fundamental way our biological processes work offers a huge potential to tap into some of the efficiencies we as humans enjoy. One such example is the energy efficiency of the human brain, both the most powerful and most efficient computer ever created. The brain makes up 2% of a person&#8217;s weight, yet even at rest it consumes 20% of the body&#8217;s energy while running on just <a rel="nofollow" target="_blank" href="http://hypertextbook.com/facts/2001/JacquelineLing.shtml">23.3 watts</a>; per gram of tissue, it consumes energy at 10 times the rate of the rest of the body. Even though your brain is the most energy-intensive organ in your body, by computing standards it uses an extremely low amount of energy for an estimated 1 exaFLOP (exascale) of computing capability. Theoretically, an exascale computing system – 100 times more computing capability than today’s fastest systems – could be built with common x86 processors, but it would require as much as 2 gigawatts of power, or roughly the peak power generation of the <a rel="nofollow" title="Hoover Dam" target="_blank" href="http://en.wikipedia.org/wiki/Hoover_Dam">Hoover Dam</a>. In terms of bang for your computing buck, your brain is by far the winner, at the rate of about <a rel="nofollow" target="_blank" href="https://www.google.com/search?q=23+watts+%2F+2+gigawatts&amp;oq=23+watts+%2F+2+gigawatts&amp;aqs=chrome..69i57j0.10915j0j4&amp;sourceid=chrome&amp;espv=210&amp;es_sm=91&amp;ie=UTF-8#es_sm=91&amp;espv=210&amp;q=23+watts+*+86956521">86,956,521 times more power</a> efficient than conventional computing systems.</p>
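<p>That last figure is just the ratio of the two power draws quoted above. A quick back-of-the-envelope check (the variable names are mine, purely illustrative):</p>
<pre><code>// Sanity-checking the brain-vs-exascale efficiency claim above.
var brainWatts    = 23;   // approximate resting power draw of the human brain
var exascaleWatts = 2e9;  // ~2 gigawatts for a hypothetical x86 exascale system

// If both deliver roughly 1 exaFLOP, the efficiency gap is the power ratio:
console.log(Math.round(exascaleWatts / brainWatts)); // ~86,956,522, i.e. ~87 million
</code></pre>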
<p>Some believe that the relationship between technology and biology may provide the next step in our evolution. For me this is both fascinating and terrifying.</p>
<p><em>Find Reuven on <a rel="nofollow" target="_blank" href="http://twitter.com/ruv">Twitter @rUv</a> |  <a rel="nofollow" target="_blank" href="http://ca.linkedin.com/in/reuvencohen">Linkedin</a> |  <a rel="nofollow" target="_blank" href="https://plus.google.com/112393776166946876479/posts?rel=author">Google+</a> | <a rel="nofollow" target="_blank" href="http://www.facebook.com/ruvnet">Facebook</a> | <a rel="nofollow" target="_blank" href="http://digitalnibbles.com/">Podcast</a></em></p>]]></content:encoded>
      </item>
      <item>
         <title>New Open Source Platform Allows Anyone To Hack Brain Waves</title>
         <link>http://www.forbes.com/sites/reuvencohen/2014/01/03/new-open-source-platform-allows-anyone-to-hack-brain-waves/</link>
         <description>For most people how the human brain works remains a mystery, let alone how to hack it.  A new Kickstarter campaign created by engineers Joel Murphy and Conor Russomanno aims to change this by putting an affordable, open-source brain-computer interface kit in the hands &amp;#38; minds of anyone. Brain-computer interfacing (BCI), sometimes called a mind-machine [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=3077</guid>
         <pubDate>Fri, 03 Jan 2014 16:35:00 +0000</pubDate>
         <content:encoded><![CDATA[<embed src="http://www.youtube.com/v/a55_EfFtysc&amp;rel=0" type="application/x-shockwave-flash" width="485" height="365"></embed>
<p>For most people how the human brain works remains a mystery, let alone how to hack it.  A new <a rel="nofollow" target="_blank" href="http://www.kickstarter.com/projects/openbci/openbci-an-open-source-brain-computer-interface-fo">Kickstarter campaign</a> created by engineers Joel Murphy and Conor Russomanno aims to change this by putting an affordable, open-source brain-computer interface kit in the hands &amp; minds of anyone.</p>
<p>Brain-computer interfacing (BCI), sometimes called a mind-machine interface, is one of those areas of technology that has long been viewed as mostly science fiction. The kind of technology you might see in a low-budget sci-fi movie. Remember Luke Skywalker’s prosthetic limb in The Empire Strikes Back? More recently, advancements in the BCI field have opened up opportunities for a seemingly limitless range of applications powered by nothing more than your thoughts.</p>
<p>In healthcare, medical-grade BCIs are often used to assist people with damage to their cognitive or sensory-motor functions; however, more and more we are seeing affordable BCIs emerge in neurotherapy applications that assist people with ADHD, anxiety, phobia, depression, and other common psychological ailments.</p>
<p>Over the last few years this type of BCI technology has begun to improve. In the United States, the Food and Drug Administration recently approved a new artificial retina technology known as the <a rel="nofollow" target="_blank" href="http://2-sight.eu/en/product-en">Argus II</a> that can restore partial sight to people suffering from a specific type of blindness known as retinitis pigmentosa. Elsewhere, scientists at the <a rel="nofollow" target="_blank" href="http://news.usc.edu/#!/article/29100/Restoring-Memory-Repairing-Damaged-Brains">University of Southern California believe</a> they are close to being able to restore a person’s memory capabilities with microchips inserted in the brain. The researchers used an electronic system that duplicates the neural signals associated with memory. They then managed to replicate the brain function in rats associated with long-term learned behavior, even when the rats had been drugged to forget.</p>
<p>“Flip the switch on, and the rats remember. Flip it off, and the rats forget,” said Theodore Berger of the USC Viterbi School of Engineering, who holds the David Packard Chair in Engineering and is director of the USC Center for Neural Engineering.</p>
<p>The tools for reading brainwaves have been around since at least 1912, when Russian physiologist Vladimir Vladimirovich Pravdich-Neminsky published the first use of electroencephalography (EEG). EEG measures the faint electrical signals that our brains emit when we think, day-dream, sleep, move around, or meditate. Whenever we use our brains, electrical impulses are moving and potentials are flowing all around inside our heads; EEG is a technique for recording these signals.</p>
<p>According to the project creators “An EEG system has three basic parts to measure these signals: electrodes which are placed on the scalp; an electronic amplifier that can sense and relay the tiny electrical changes that your brain makes; and a signal processing computer used to make sense of the data and map it to some type of output. After that, the possibilities are endless! One of the most important links in the chain is the amplifier. It is the goal of our Kickstarter to make an open-source, affordable, high-quality EEG amplifier available to everyone so that those possibilities we are talking about can be realized by anyone.”</p>
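<p>To make that chain concrete, here is a minimal sketch of the “signal processing computer” stage in Python. The 250 Hz sample rate and the simulated one-channel signal are assumptions for illustration; on real hardware the board, not the simulator, would supply the samples.</p>
<pre>
# A minimal sketch of the signal-processing stage: take a window of raw
# EEG samples and estimate alpha-band (8-12 Hz) power with an FFT.
# The 250 Hz rate and the simulated input are illustrative assumptions.
import numpy as np

FS = 250                        # assumed sample rate in Hz
t = np.arange(0, 4, 1.0 / FS)   # four seconds of data

# Simulated single-channel signal: a 10 Hz "alpha" rhythm plus noise.
sig = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

# FFT-based power spectrum of the window.
spectrum = np.abs(np.fft.rfft(sig)) ** 2
freqs = np.fft.rfftfreq(sig.size, d=1.0 / FS)

# Sum the power that falls inside the alpha band.
alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
print("alpha fraction of total power: %.2f" % (alpha / spectrum.sum()))
</pre>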
<p>Technically, OpenBCI is built around Texas Instruments’ <a rel="nofollow" target="_blank" href="http://www.ti.com/product/ads1299">ADS1299</a> IC. The ADS1299 is an 8-channel, low-noise, 24-bit analog-to-digital converter designed specifically for measuring EEG signals.  The creators say,  “The great thing about OpenBCI is that it’s totally open source. At this point in time, building on top of the OpenBCI Brainwave Visualizer or building unique applications does require some basic programming knowledge. With that said, our mission is to lower the barrier of entry so that even amateur developers can get up and running right away.”</p>
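<p>For a sense of the numbers involved, here is a hedged sketch of turning a raw ADS1299 reading into volts. The 4.5 V reference and gain of 24 are typical datasheet values, assumed here for illustration.</p>
<pre>
# Convert a 24-bit two's-complement ADS1299 sample to volts.
# VREF = 4.5 V and GAIN = 24 are assumed, datasheet-typical values.
VREF = 4.5
GAIN = 24

def ads1299_counts_to_volts(raw24):
    if raw24 >= 0x800000:       # sign bit set: value is negative
        raw24 -= 2**24
    return raw24 * VREF / GAIN / (2**23 - 1)

# Example reading, printed in microvolts (EEG signals live at this scale).
print("%.1f uV" % (ads1299_counts_to_volts(0x0001FF) * 1e6))
</pre>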
<p>A minimum pledge of US$269 will get you the signal capture system, without electrodes; for those, you&#8217;ll need to bump it up to US$294. To add the OpenBCI Board, you&#8217;ll need to pledge US$314. All have an estimated delivery date of March 2014.</p>
<p>The consumer BCI sector is still nascent, with several companies having jumped into the space in recent years, including the <a rel="nofollow" title="Neural Impulse Actuator" target="_blank" href="http://en.wikipedia.org/wiki/Neural_Impulse_Actuator">Neural Impulse Actuator</a> (April 2008), <a rel="nofollow" title="Emotiv Systems" target="_blank" href="http://en.wikipedia.org/wiki/Emotiv_Systems">Emotiv Systems</a> (December 2009), and <a rel="nofollow" title="NeuroSky" target="_blank" href="http://en.wikipedia.org/wiki/NeuroSky">NeuroSky</a> (June 2009).</p>
<p>The OpenBCI project has already raised $86,082 (as of writing), at the very least proving there is potential demand for DIY mind hacking technology.  Looking forward, technologies like OpenBCI could provide the tools to enable a kind of bionic scientific revolution that may some day help the blind to see, enable amputees to walk again and maybe even restore memories to those affected with serious brain illnesses.</p>
<p><a rel="nofollow" target="_blank" href="http://www.kickstarter.com/projects/openbci/openbci-an-open-source-brain-computer-interface-fo?ref=live">See the project here.</a></p>
<p><em>-<br />
Find Reuven on <a rel="nofollow" target="_blank" href="http://twitter.com/ruv">Twitter @rUv</a> |  <a rel="nofollow" target="_blank" href="http://ca.linkedin.com/in/reuvencohen">Linkedin</a> |  <a rel="nofollow" target="_blank" href="https://plus.google.com/112393776166946876479/posts?rel=author">Google+</a> | <a rel="nofollow" target="_blank" href="http://www.facebook.com/ruvnet">Facebook</a> | <a rel="nofollow" target="_blank" href="http://digitalnibbles.com/">Podcast</a></em></p>]]></content:encoded>
      </item>
      <item>
         <title>Beautiful Supercomputer Visualization Of Global Weather Conditions Updated Every 3 Hours</title>
         <link>http://www.forbes.com/sites/reuvencohen/2013/12/18/beautiful-supercomputer-visualization-of-global-weather-conditions-updated-every-3-hours/</link>
<description>Sometimes you stumble upon a site that is truly amazing. This is one of those times. A new project simply called &amp;#8220;earth&amp;#8221; aims to provide a visualization of global weather conditions forecast by supercomputers and is updated every three hours. The weather data is produced by the Global Forecast System (GFS), operated by the US National Weather Service. [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=3053</guid>
         <pubDate>Wed, 18 Dec 2013 20:04:00 +0000</pubDate>
         <content:encoded><![CDATA[<p><a rel="nofollow" target="_blank" href="http://earth.nullschool.net/#current/wind/isobaric/850hPa/orthographic=-60.20,23.92,417"><img class="size-medium wp-image-3054 alignright" alt="Screen Shot 2013-12-18 at 2.57.09 PM" src="http://b-i.forbesimg.com/reuvencohen/files/2013/12/Screen-Shot-2013-12-18-at-2.57.09-PM-300x279.png" width="300" height="279"/></a></p>
<p>Sometimes you stumble upon a site that is truly amazing. This is one of those times. A new project simply called &#8220;<a rel="nofollow" target="_blank" href="http://earth.nullschool.net/#current/wind/isobaric/1000hPa/orthographic=-60.24,36.19,384">earth</a>&#8221; aims to provide a visualization of global weather conditions forecast by supercomputers, updated every three hours.</p>
<p>The weather data is produced by the Global Forecast System (GFS), operated by the US National Weather Service.  According to the project creator, Cameron Beccario (@<a rel="nofollow" target="_blank" href="http://www.twitter.com/cambecc">cambecc</a>), &#8220;We need only a few of these records to visualize wind data at a particular isobar.&#8221;</p>
<p>The project is <a rel="nofollow" target="_blank" href="https://github.com/cambecc/earth">open source</a> with the main components consisting of:</p>
<ul>
<li>a script to download and process <a rel="nofollow" target="_blank" href="http://www.emc.ncep.noaa.gov/index.php?branch=GFS">Global Forecast System</a> weather data in GRIB2 format from the National Centers for Environmental Prediction, NOAA / National Weather Service.</li>
<li>a GRIB2 to JSON converter (see the <a rel="nofollow" target="_blank" href="https://github.com/cambecc/grib2json">grib2json</a> project).</li>
<li>scripts to push site files to <a rel="nofollow" target="_blank" href="http://aws.amazon.com/s3/">Amazon S3</a> for static hosting.</li>
<li>a browser app that interpolates the data and renders an animated wind map (the interpolation step is sketched just after this list).</li>
</ul>
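<p>The interpolation the browser app performs is easy to sketch. Below is a minimal bilinear interpolation of one wind component in Python; the one-degree grid and toy values are assumptions standing in for the real grib2json output.</p>
<pre>
# Bilinear interpolation of a gridded wind component at an arbitrary
# point. The 1-degree grid and random values are illustrative; the real
# app reads grid geometry and data from the grib2json output instead.
import numpy as np

lats = np.arange(40, 45)            # 40..44 degrees north
lons = np.arange(-75, -70)          # 75..71 degrees west
u = np.random.uniform(-10, 10, size=(lats.size, lons.size))  # u-wind, m/s

def bilinear(lat, lon):
    """Blend the four grid points surrounding (lat, lon)."""
    i, j = int(lat - lats[0]), int(lon - lons[0])
    fy, fx = (lat - lats[0]) - i, (lon - lons[0]) - j
    return ((1 - fy) * ((1 - fx) * u[i, j] + fx * u[i, j + 1]) +
            fy * ((1 - fx) * u[i + 1, j] + fx * u[i + 1, j + 1]))

print("%.2f m/s" % bilinear(41.3, -73.6))   # wind between grid points
</pre>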
<p>See the stunning project at <a rel="nofollow" target="_blank" href="http://earth.nullschool.net">http://earth.nullschool.net</a></p>
<p><em>Find Reuven on <a rel="nofollow" target="_blank" href="http://twitter.com/ruv">Twitter @rUv</a> |  <a rel="nofollow" target="_blank" href="http://ca.linkedin.com/in/reuvencohen">Linkedin</a> |  <a rel="nofollow" target="_blank" href="https://plus.google.com/112393776166946876479/posts?rel=author">Google+</a> | <a rel="nofollow" target="_blank" href="http://www.facebook.com/ruvnet">Facebook</a> | <a rel="nofollow" target="_blank" href="http://digitalnibbles.com/">Podcast</a></em></p>]]></content:encoded>
      </item>
      <item>
         <title>The Age of Surprise: Predicting The Future Of Technology</title>
         <link>http://www.forbes.com/sites/reuvencohen/2013/12/18/the-age-of-surprise-predicting-the-future-of-technology/</link>
         <description>It’s that time of the year again. You know, that time of year when technologists, pundits and bloggers get into the festive spirit and share technology predictions for the coming year. Being partially curious and possibly not wanting to be left out of the fun, I thought I’d throw my hat into the ring with [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=3017</guid>
         <pubDate>Wed, 18 Dec 2013 19:04:00 +0000</pubDate>
         <content:encoded><![CDATA[<div class="zemanta-img">
<div style="width:229px;" class="wp-caption alignright"><a rel="nofollow" target="_blank" href="http://www.amazon.com/Back-Future-Michael-J-Fox/dp/B001LXIDVI%3FSubscriptionId%3D0G81C5DAZ03ZR9WH9X82%26tag%3Dzemanta-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB001LXIDVI"><img class="zemanta-img-configured " title="Cover of &quot;Back to the Future&quot;" alt="Cover of &quot;Back to the Future&quot;" src="http://b-i.forbesimg.com/reuvencohen/files/2013/12/51-ZZEIR0YL._SL300_.jpg" width="219" height="300"/></a><p class="wp-caption-text">Cover of Back to the Future</p></div>
</div>
<p>It’s that time of the year again. You know, that time of year when technologists, pundits and bloggers get into the festive spirit and share technology predictions for the coming year. Partly out of curiosity, and possibly not wanting to be left out of the fun, I thought I’d throw my hat into the ring with my own set of prognoses. As for the timeframe, whether it&#8217;s 2014 or 2050 is another story. Alas, this is a story about intersecting trends, asking the simple yet infinitely complex question: where is technology taking us?</p>
<p>The computer scientist <a rel="nofollow" target="_blank" href="http://www.smalltalk.org/alankay.html">Alan Kay</a> best summed up my opinion on technology predictions in his famous 1971 quote: “Don&#8217;t worry about what anybody else is going to do… The best way to predict the future is to invent it. Really smart people with reasonable funding can do just about anything that doesn&#8217;t violate too many of Newton&#8217;s Laws!&#8221;</p>
<p>Alan Kay may have been right. Among the most amazing recent technological advancements has been what some describe as the &#8220;Age of Surprise,&#8221; a concept originally described by the <a rel="nofollow" target="_blank" href="http://csat.au.af.mil/">U.S. Air Force Center for Strategy and Technology</a> at <a rel="nofollow" target="_blank" href="http://www.au.af.mil/au/">The Air University</a> as part of a project known as <a rel="nofollow" target="_blank" href="http://csat.au.af.mil/blue_horizon/index.htm">Blue Horizons</a>, a multi-year future study conducted for the Air Force Chief of Staff. According to the study’s authors, the exponential advancement of technology has reached a critical point where not even governments can project the direction humanity is headed. Some researchers have even forecast an eventual singularity where the lines between humans and machines are blurred.</p>
<p>The Air Force determined that “We can predict broad outlines, but we don’t know the ramifications. Information travels everywhere; anyone can access everything — the collective intelligence of humanity drives innovation in every direction while enabling new threats from super-empowered individuals with new domains, interconnecting faster than ever before. Unlimited combinations create unforeseen consequences.”</p>
<p>In the simplest terms, the Age of Surprise may form the basis for the emergence of powerful new forms of technology that are practically impossible to predict.  A recent example of this is <a rel="nofollow" target="_blank" href="http://www.snapchat.com/">SnapChat</a>, a photo messaging application developed in 2011 by Evan Spiegel and Robert Murphy, then <a rel="nofollow" target="_blank" href='http://www.forbes.com/colleges/stanford-university/'>Stanford University</a> students.  The service has grown so quickly that in November, Google reportedly <a rel="nofollow" target="_blank" href="http://www.technobuffalo.com/2013/11/15/snapchat-4-billion-google-offer/">offered $4B</a> for the company, an offer the founders declined.</p>
<p>The Age of Surprise makes determining what will be the next big thing more difficult than ever. But thanks in part to the emergence of cloud computing, big data and advanced analytics, we can now attempt to make predictions based upon the macro trends driving us toward a future where technology plays an ever-increasing role in our everyday lives.</p>
<p>Over the last few years several trends have begun to take shape, among them the transformation of the Internet from a means of basic content and information delivery into a fundamental social system at the heart of modern society. SnapChat is just one example among many recent viral startup success stories. A quick stroll down any street, in any major city, and you’ll see the Internet has already become woven into modern life. Yet this convergence of our physical and digital realities barely scratches the surface of the opportunities it holds for us.</p>
<p>Although viewed by some as “science fiction,” the concept of a technological singularity, more commonly known simply as the singularity, may hold the key. The singularity is the theoretical moment when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization and perhaps human nature. The first step may be contextual computing, which is rapidly evolving from basic forms like Google’s Now service to more complex integrated experiences such as wearable tech. As we move forward, this technology may begin to mix our digital and physical realities into a singular unified experience tailored to our particular needs; technology as unique as we are.</p>
<p>A major driving factor has been Moore’s law, which observes that the number of transistors on integrated circuits has doubled approximately every two years over the history of computing hardware. Combined with a global network of connected people and things, our future may lead to a period where progress in technology occurs almost instantly via a global nervous system of interconnected devices. What’s more, this point of technical achievement may not be as far off in the future as you think, thanks in part to the ever-increasing speed of computing collectively around the globe. What Moore&#8217;s Law started, the Internet has supercharged.</p>
<p>At a supercomputing conference in 2009, <a rel="nofollow" target="_blank" href="http://www.computerworld.com/s/article/345800/Scientists_IT_Community_Await_Exascale_Computers">Computerworld projected exascale implementation by 2018</a>. Exascale computing refers to <a rel="nofollow" title="Computer system" target="_blank" href="http://en.wikipedia.org/wiki/Computer_system">computing systems</a> capable of at least one <a rel="nofollow" title="FLOPS" target="_blank" href="http://en.wikipedia.org/wiki/FLOPS">exaFLOPS</a>. (One exaFLOPS is a thousand petaFLOPS, or a <a rel="nofollow" title="Quintillion" target="_blank" href="http://en.wikipedia.org/wiki/Quintillion">quintillion</a>, 10<sup>18</sup>, floating point operations per second.) Exascale computing holds significance in the technology world because it is believed to be roughly the order of the processing power of the human brain. It is, for instance, the target power of the <a rel="nofollow" title="Human Brain Project" target="_blank" href="http://en.wikipedia.org/wiki/Human_Brain_Project">Human Brain Project</a>, which aims to simulate the complete human brain on supercomputers.</p>
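<p>The arithmetic behind that projection is worth a quick back-of-the-envelope check. One exaFLOPS is 1,000 petaFLOPS, roughly ten doublings beyond a circa-2009 petaflop machine; the doubling periods below are assumptions for illustration only.</p>
<pre>
# Years to exascale from a ~1 petaFLOPS system in 2009, under two
# assumed doubling periods: Moore's-law pace (2 years) and a faster
# supercomputing trend (roughly 1.1 years). Illustrative only.
import math

doublings = math.log2(1000.0 / 1.0)     # 1 EF = 1000 PF, so ~10 doublings
for period in (2.0, 1.1):
    year = 2009 + round(doublings * period)
    print("doubling every %.1f years: exascale around %d" % (period, year))
</pre>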
<p>But even if we are able to build computers as fast as human brains, writing the complex software that actually simulates the brain, and possibly consciousness, is probably decades away. By understanding the fundamental programming of our biology, however, we may be able to develop applications and systems specifically tuned for us as individuals. Imagine being able to download the ability to speak a new language as easily as downloading an app for your phone, or the ability to learn something or do something new with the ease of one click.</p>
<p>Which brings me back to my original question: what is the future of technology?</p>
<p>The future of technology may very well lie in our ability to understand the world around us, and how we fit into it. But more importantly, how these things are actually implemented will probably remain a surprise. I guess you can call it the Age of Surprise.</p>
<p><em>Find Reuven on <a rel="nofollow" target="_blank" href="http://twitter.com/ruv">Twitter @rUv</a> |  <a rel="nofollow" target="_blank" href="http://ca.linkedin.com/in/reuvencohen">Linkedin</a> |  <a rel="nofollow" target="_blank" href="https://plus.google.com/112393776166946876479/posts?rel=author">Google+</a> | <a rel="nofollow" target="_blank" href="http://www.facebook.com/ruvnet">Facebook</a> | <a rel="nofollow" target="_blank" href="http://digitalnibbles.com/">Podcast</a></em></p>]]></content:encoded>
      </item>
      <item>
         <title>Bitcoin Mania: How To Create Your Very Own Crypto-Currency, For Free</title>
         <link>http://www.forbes.com/sites/reuvencohen/2013/11/29/bitcoin-mania-how-to-create-your-very-own-crypto-currency-for-free/</link>
         <description>With Bitcoin now worth potentially more than an ounce of gold, I’m capping off my series of Bitcoin posts with an attempt to answer a recurring question. How to go about creating your very own crypto-currency. When looking at the various crypto-currencies that have emerged over the last few months, most, if not all of [&amp;#8230;]</description>
         <guid isPermaLink="false">http://blogs.forbes.com/reuvencohen/?p=2981</guid>
         <pubDate>Fri, 29 Nov 2013 18:34:00 +0000</pubDate>
         <content:encoded><![CDATA[<div class="zemanta-img"><a rel="nofollow" target="_blank" href="http://www.flickr.com/photos/36495803@N05/8453271596"><img class="zemanta-img-configured zemanta-img-inserted " title="International Currency Money for Forex Trading" alt="International Currency Money for Forex Trading" src="http://blogs-images.forbes.com/reuvencohen/files/2014/02/8453271596_313471af73_m.jpg" width="300" height="246"/></a></div>
<p>With Bitcoin now worth potentially more than an ounce of gold, I’m capping off <a rel="nofollow" target="_blank" href="http://www.forbes.com/sites/reuvencohen/2013/11/28/global-bitcoin-computing-power-now-256-times-faster-than-top-500-supercomputers-combined/">my series of Bitcoin posts</a> with an attempt to answer a recurring question: how do you go about creating your very own crypto-currency?</p>
<p>When looking at the various crypto-currencies that have emerged over the last few months, most, if not all, of them have had one thing in common: they are essentially cloned versions of Bitcoin. My question isn’t how to clone Bitcoin, but rather how you can go about creating a completely new virtual currency, one based on varied asset backings. The currency could be like a Bitcoin, based on an algorithm, or based upon more traditional assets like US dollars, gold, or even a basket of mixed existing asset types.</p>
<p>As it turns out, there is a free open source project that aims to do exactly this. Called <a rel="nofollow" target="_blank" href="http://opentransactions.org/">Open-Transactions</a> or OT, the project itself is a <a rel="nofollow" title="Transactions" target="_blank" href="http://opentransactions.org/wiki/index.php?title=Transactions">transaction processor</a> in the <a rel="nofollow" target="_blank" href="http://en.wikipedia.org/wiki/Cypherpunk">cypherpunk</a> tradition.</p>
<p>Not to be confused with <em>Cyberpunk</em>, Cypherpunk is a concept that originally emerged in the late 1980s. Early cypherpunks communicated through <a rel="nofollow" title="Electronic mailing list" target="_blank" href="http://en.wikipedia.org/wiki/Electronic_mailing_list">electronic mailing lists</a>, where an informal group of cyber activists aimed to achieve privacy and security through proactive use of cryptography. With the recent NSA scandal and related electronic spying, the concepts of the cypherpunk movement have become popular once again, especially within the communities involved in crypto-currencies like Bitcoin.</p>
<div style="padding:20px 0pt;margin:20px 0pt;border-bottom:1px solid #DDDDDD;border-top:1px solid #DDDDDD;"><a rel="nofollow" style="text-decoration:none;display:block;color:#900;font:bold 20px/26px Georgia;" target="_blank" href="http://www.forbes.com/ebooks/secret-money-living-on-bitcoin/">The Forbes E-book On Bitcoin</a><a rel="nofollow" style="text-decoration:none;display:block;font:15px/18px Georgia;color:#000;" target="_blank" href="http://www.forbes.com/ebooks/secret-money-living-on-bitcoin/"><em>Secret Money: Living on Bitcoin in the Real World</em>, by Forbes staff writer Kashmir Hill, can be bought in Bitcoin or legal tender.</a></div>
<p>Provided as a free software library, the Open-Transactions platform is a collection of financial cryptography components used for implementing cryptographically secure financial transactions. The author, Chris Odom, also known as “Fellow Traveler” and co-founder of <a rel="nofollow" target="_blank" href="http://monetas.net/">Monetas</a>, the company behind the project, <a rel="nofollow" target="_blank" href="http://bitcoin.stackexchange.com/questions/12858/what-is-open-transactions">describes</a> it this way: “It&#8217;s like PGP FOR MONEY. The idea is to have many cash algorithms. So that, just like PGP, the software should support as many of the top algorithms as possible, and make it easy to swap them out when necessary.”</p>
<p>Pretty Good Privacy (PGP), created by Phil Zimmermann in 1991, is a data encryption and decryption method that provides cryptographic privacy and authentication for data communication. PGP encryption uses a serial combination of hashing, data compression, symmetric-key cryptography, and public-key cryptography; each step uses one of several supported algorithms. Each public key is bound to a user name and/or an e-mail address.</p>
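<p>The layered approach PGP popularized is simple to sketch. Here is a minimal hybrid-encryption example in Python using the open source cryptography package; it shows the idea of wrapping a fast symmetric key with a public key, not PGP’s actual message format.</p>
<pre>
# Hybrid encryption in the spirit of PGP (not PGP itself): encrypt the
# message with a one-off symmetric key, then encrypt that key with the
# recipient's public key. Requires: pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair (in PGP the public half is bound to a name/e-mail).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 1. Symmetric layer: encrypt the message with a session key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"like PGP, for money")

# 2. Asymmetric layer: encrypt the session key for the recipient.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = public_key.encrypt(session_key, oaep)

# The recipient reverses both layers.
recovered = Fernet(private_key.decrypt(wrapped, oaep)).decrypt(ciphertext)
assert recovered == b"like PGP, for money"
</pre>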
<p>Similar to PGP, Open-Transaction’s user accounts are pseudonymous (they can be held under an assumed name). A user account is simply a public key, and users can open as many accounts as they want. Unlike Bitcoin, the system can be configured to enable true anonymity, though in that mode it is limited to &#8220;cash-only&#8221; transactions. Alternatively, it can be set up to offer <i>pseudonymity</i>: transactions that can be linked to the key that signed them. While the real-life identity of the owner is hidden, continuity of reputation becomes possible, and the system can support potentially millions of users.</p>
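<p>The account-as-public-key idea is also easy to demonstrate. In the sketch below a pseudonym is nothing more than a key pair, and a receipt is linkable only to the key that signed it; Ed25519 is an illustrative choice, since the source doesn’t specify which signature scheme Open-Transactions uses.</p>
<pre>
# A pseudonymous account sketch: the "account name" is a public key,
# and signed receipts are linkable to that key but not to a person.
# Ed25519 here is an illustrative choice, not necessarily OT's scheme.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

account = Ed25519PrivateKey.generate()           # secret half of the pseudonym
pseudonym = account.public_key().public_bytes(   # the public "account name"
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

receipt = b"transfer 5 units from A to B"
signature = account.sign(receipt)

# Anyone can check the receipt against the pseudonym; verify() raises
# InvalidSignature if the receipt or signature was altered.
account.public_key().verify(signature, receipt)
print("receipt signed by pseudonym %s..." % pseudonym.hex()[:16])
</pre>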
<p>An interesting aspect of the system is that it isn’t limited to any one specific asset or currency (virtual or otherwise). Basically, any user can issue new digital currencies and digital asset types by uploading new currency contracts. Want to create a gold-, silver-, Bitcoin-, Litecoin-, or even USD-backed currency? Not a problem on OT. Users are able to conduct transactions, verify instruments, <i>and</i> agree on current holdings via signed receipts, all without the need to store any transaction history.</p>
<p>Open-Transactions can be used for a broad variety of purposes, including issuing currencies and stock, paying dividends, creating asset accounts, sending and receiving digital cash, writing and depositing cheques and cashier&#8217;s cheques, creating basket currencies, trading on markets, scripting custom agreements, recurring payments, and escrow services. The project uses what it calls “strong crypto”: account balances are unchangeable (even by a malicious server), receipts are destructible and redundant, transactions are unforgeable, cash is untraceable, and cheques are non-repudiable.</p>
<p>There are some potential limitations. For example, what if a transaction server attempts to inflate the currency? According to the developers, this is prevented through auditing, which must be utilized either by the issuer directly or by the other members of the voting pool. While the transaction server cannot lie on your receipts, it can potentially inflate the currency itself by using dummy accounts. But the inflated funds cannot be spent without flowing into other accounts, where they will show up on an audit.</p>
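<p>The audit logic is simple enough to illustrate: the sum of all outstanding balances must equal what the issuer actually issued, so inflated funds betray themselves as soon as they land in an account. The names and figures below are invented.</p>
<pre>
# Toy audit: compare recorded issuance with the sum of all balances.
# Account names and amounts are invented for illustration.
issued_total = 1000
balances = {"alice": 400, "bob": 350, "carol": 250}

def audit(issued, accounts):
    outstanding = sum(accounts.values())
    if outstanding == issued:
        return "clean"
    return "inflated by %d units" % (outstanding - issued)

print(audit(issued_total, balances))   # clean

# A dishonest server inflates the currency via a dummy account...
balances["mallory"] = 120
print(audit(issued_total, balances))   # inflated by 120 units
</pre>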
<p>Recently, the creators of the project formed a company, <a rel="nofollow" target="_blank" href="http://monetas.net/">Monetas</a> to provide commercial services around the OT platform. The company describes its mission as “to empower people to live and do business with greater freedom than ever before.” Monetas is building the world’s first decentralized system for financial and legal transactions. They claim the system has <i>no single point of control or failure</i>—making it immune to abuses of power and resilient to failure. The solution requires only a mobile phone, and makes transactions easy, cheap, instant, global, secure, and private. It is globally available to individuals, merchants, and entrepreneurs everywhere, for free.</p>
<p><a rel="nofollow" target="_blank" href="http://opentransactions.org/">The project is worth a closer look.</a></p>
<p><em>Find Reuven on <a rel="nofollow" target="_blank" href="http://twitter.com/ruv">Twitter @rUv</a> |  <a rel="nofollow" target="_blank" href="http://ca.linkedin.com/in/reuvencohen">Linkedin</a> |  <a rel="nofollow" target="_blank" href="https://plus.google.com/112393776166946876479/posts?rel=author">Google+</a> | <a rel="nofollow" target="_blank" href="http://www.facebook.com/ruvnet">Facebook</a> | <a rel="nofollow" target="_blank" href="http://digitalnibbles.com/">Podcast</a></em></p>]]></content:encoded>
      </item>
   </channel>
</rss>