Hacker News Links for the intellectually curious, ranked by readers. Introduction to TLA+ Model Checking in the Command Line <div readability="7.3061224489796"><p name="e520" id="e520" class="graf graf--p graf-after--h3"><em class="markup--em markup--p-em">Otherwise known as “wait, you mean I can use the TLA toolbox from the command line?”</em></p></div><div readability="210.96655942283"><p name="ac60" id="ac60" class="graf graf--p graf-after--figure">I started playing around with TLA+ and its more programmer friendly variant PlusCal about a year ago. At that time the amount of documentation that was understandable to people like me who do not enjoy reading white papers full of mathematical notation was slim to none. 
(Today you can buy <a href="" data-href="" class="markup--anchor markup--p-anchor" rel="nofollow noopener" target="_blank">Hillel Wayne’s excellent book Practical TLA+</a>) So I started reading other people’s code (errr…specs) and began piecing together some idea of how things were done.</p><p name="1cf1" id="1cf1" class="graf graf--p graf-after--p">TLA+ is a tool to model and verify concurrent systems, finding bugs in them before you have written any code by testing every possible combination of inputs. You might say “well how is that different from fuzzing inputs in more conventional functional tests?” and the answer is that TLA+ also simulates the kind of timing issues commonplace in distributed or concurrent systems. As microservices continue to dominate conversations about modern architecture, the question of what happens when a service sends a request that never arrives, or arrives before something else has finished, or gets repeated becomes more and more relevant. TLA+ allows you to test for these scenarios.</p><p name="0250" id="0250" class="graf graf--p graf-after--p">Beyond that it’s a radically different way of thinking about computer programs. It’s not a programming language, so one has to abandon certain mental crutches and really think about how we represent what we mean. You can write an excellent model and then write code that doesn’t actually do things the way the model describes. So you have to think about how your favorite frameworks and libraries actually do what they do.</p><p name="9f9a" id="9f9a" class="graf graf--p graf-after--p">I’m a huge fan of anything that might bring unvocalized assumptions up to the surface where people can explore them. It’s hugely difficult to build a complex system of services if two teams are not in sync about their assumptions.</p><p name="73cd" id="73cd" class="graf graf--p graf-after--p">The best way to think of TLA+ is like drawing a blueprint for software you’re going to build. 
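</p><p class="graf graf--p graf-after--p">For a rough taste of what that blueprint looks like, here is a toy PlusCal algorithm embedded in a spec (the module, variable and label names are my own invention, not from any real project):</p><pre class="graf graf--pre graf-after--p">---- MODULE counter ----<br/>EXTENDS Integers<br/>(* --algorithm counter<br/>variables x = 0;<br/>begin<br/>  Inc:<br/>    x := x + 1;<br/>end algorithm; *)<br/>====</pre><p class="graf graf--p graf-after--pre">The algorithm lives inside a TLA+ comment; the translator turns it into the TLA+ that the model checker actually runs. 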
I’ve seen people use it to test algorithms, but I’ve also seen people use it to model user flows throughout an application.</p><h3 name="5a3f" id="5a3f" class="graf graf--h3 graf-after--p">However the Toolbox is Awful</h3><p name="8144" id="8144" class="graf graf--p graf-after--h3">When you’re ready to test your models in TLA+, virtually every resource will direct you to the TLA+ Toolbox. The toolbox is a Java application that looks a bit like it’s supposed to be an IDE and allows you to translate PlusCal into TLA+, configure models, test them and explore errors. Its UX is awful, simply awful. How you open a TLA file in the toolbox is not intuitive; things will often fail without any errors (or rather the toolbox will do what you’ve told it to do, which is nothing, because you needed to click some buttons you didn’t click first). It’s literally exhibit A for why software developers should respect the skills of designers on their teams.</p><p name="6920" id="6920" class="graf graf--p graf-after--p">On top of that I find it a little clunky. I generally don’t work with full scale IDEs if I can avoid it. I prefer the stripped down elegance of TextMate, Sublime or Atom. Sometimes this means I miss out on valuable features, but I’m a person who currently has over thirty tabs open in Chrome, three projects open in TextMate and at least one WiP up in Scrivener…. I like to use my computer’s memory, okay? This is a light day.</p><p name="29b0" id="29b0" class="graf graf--p graf-after--p">At some point while I was nosing around other people’s TLA+ specs to try to understand the language I realized that there seemed to be command line versions of the key tools from the toolbox. 
Rather than open up the toolkit I could compile PlusCal and test my models from my terminal and even more exciting … from other scripts.</p><h3 name="94af" id="94af" class="graf graf--h3 graf-after--p">Wait… there’s a command line version?</h3><p name="7c08" id="7c08" class="graf graf--p graf-after--h3">Yup! If you’ve downloaded the TLA+ toolbox, you already have the command line tools installed on your computer. To configure them there’s a helpful set of shell scripts you can download <a href="" data-href="" class="markup--anchor markup--p-anchor" rel="nofollow noopener" target="_blank">here</a>.</p><p name="8575" id="8575" class="graf graf--p graf-after--p">One of the first things I did with this knowledge was build a little proof of concept python application that parsed other python programs and tried to write models in PlusCal based on what they were going to do. I wanted to learn more about bytecode and AST and this seemed like a fun way to do it. It was partially successful in that I could write a few simple programs that became working models, but eventually I hit a wall trying to learn two new things at the same time.</p><p name="1daf" id="1daf" class="graf graf--p graf-after--p">But the potential of command line interfaces is still the same. If model checking can be executed via command line, then it can be executed by other programs. 
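</p><p class="graf graf--p graf-after--p">For instance, a continuous-integration step could shell out to the model checker and fail the build on any violated invariant. Here is a minimal Python sketch (the helper functions are my own invention; it assumes a <code class="markup--code markup--p-code">tlc</code> wrapper script is on your PATH, and you should confirm flag names against your install's help output):</p>

```python
import subprocess

def build_tlc_command(spec, config=None, keep_going=False):
    """Assemble a tlc invocation; flags must come before the filename."""
    cmd = ["tlc"]
    if keep_going:
        cmd.append("-continue")     # keep checking past the first invariant error
    if config is not None:
        cmd += ["-config", config]  # point tlc at an alternate .cfg file
    cmd.append(spec)                # the spec file always comes last
    return cmd

def check_model(spec, **kwargs):
    """Run the model checker; returns (passed, output) for a CI gate."""
    result = subprocess.run(build_tlc_command(spec, **kwargs),
                            capture_output=True, text=True)
    return result.returncode == 0, result.stdout
```

<p class="graf graf--p graf-after--p">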
You can update your models and test them automatically the same way you might do with other forms of testing in continuous integration.</p><h3 name="fd98" id="fd98" class="graf graf--h3 graf-after--p">pcal: Translating PlusCal to TLA+</h3><p name="0cf3" id="0cf3" class="graf graf--p graf-after--h3">Once you have installed and configured the command line toolbox, you can start translating PlusCal comments in your specs to TLA+ using the <code class="markup--code markup--p-code">pcal</code> command.</p><p name="16b8" id="16b8" class="graf graf--p graf-after--p">Note that <code class="markup--code markup--p-code">pcal</code> will assume the extension <code class="markup--code markup--p-code">.tla</code> so you can leave it off if you want. For the sake of clarity I will include it.</p><p name="a31e" id="a31e" class="graf graf--p graf-after--p">The basic command is:</p><pre name="652d" id="652d" class="graf graf--pre graf-after--p">pcal myspec.tla</pre><p name="11c1" id="11c1" class="graf graf--p graf-after--pre">Unlike command line tools you might be used to, <code class="markup--code markup--p-code">pcal</code> is very particular about the order of its arguments. The name of the file you want to translate should always come last; any flags should come before it.</p><p name="0b79" id="0b79" class="graf graf--p graf-after--p"><code class="markup--code markup--p-code">-version</code> does something that might be unexpected. 
Rather than telling you the version of <code class="markup--code markup--p-code">pcal</code> you have installed, it lets you pick which version of PlusCal the translator will assume it’s translating (this currently goes up to 1.8).</p><p name="a875" id="a875" class="graf graf--p graf-after--p">If you want to see the AST for your PlusCal, you can use the <code class="markup--code markup--p-code">-writeAST</code> flag to skip the rest of the translation and output just the AST.</p><p name="9d81" id="9d81" class="graf graf--p graf-after--p">This is a super useful tool but not without its bugs. For example, I understand that <code class="markup--code markup--p-code">-spec</code> and <code class="markup--code markup--p-code">-spec2</code> are supposed to let you choose between version 1 and 2 of TLA+, but I’ve never been able to get them to work.</p><h3 name="d0ac" id="d0ac" class="graf graf--h3 graf-after--p">Defining Models with Configuration files</h3><p name="351a" id="351a" class="graf graf--p graf-after--h3">If you’re used to the Toolbox interface, you’re probably wondering how you can configure your invariants and other test cases. The answer is the <code class="markup--code markup--p-code">.cfg</code> file automatically generated for you by the <code class="markup--code markup--p-code">pcal</code> command.</p><p name="5e1d" id="5e1d" class="graf graf--p graf-after--p">Normally it will look like this:</p><pre name="8f6e" id="8f6e" class="graf graf--pre graf-after--p">SPECIFICATION Spec<br/>\* Add statements after this line.</pre><p name="3cae" id="3cae" class="graf graf--p graf-after--pre">But there are a variety of parameters you can set before checking your models. 
The most useful of these are <code class="markup--code markup--p-code">INVARIANT</code> and <code class="markup--code markup--p-code">PROPERTY</code>, which specify which invariants and which temporal properties to test, respectively.</p><p name="7ab8" id="7ab8" class="graf graf--p graf-after--p">Here’s a full list of all the possible configuration settings:</p><pre name="57cf" id="57cf" class="graf graf--pre graf-after--p">INVARIANT<br/>INVARIANTS<br/>PROPERTY<br/>PROPERTIES<br/>CONSTANT<br/>CONSTANTS<br/>CONSTRAINT<br/>CONSTRAINTS<br/>ACTION_CONSTRAINT<br/>ACTION_CONSTRAINTS<br/>INIT<br/>NEXT<br/>VIEW<br/>SYMMETRY<br/>TYPE<br/>TYPE_CONSTRAINT</pre><p name="a77d" id="a77d" class="graf graf--p graf-after--pre">I have not been able to figure out the difference between the singular form of a parameter (eg <code class="markup--code markup--p-code">CONSTANT</code>) and the plural form (eg <code class="markup--code markup--p-code">CONSTANTS</code>). Both can take multiple values if separated by a space. It might be that there’s some backwards compatibility issue involved here but…. ¯\_(ツ)_/¯</p><p name="64db" id="64db" class="graf graf--p graf-after--p">Make sure you add your custom settings below the comment line. If you put them above it, <code class="markup--code markup--p-code">pcal</code> will overwrite them every time you translate.</p><h3 name="a455" id="a455" class="graf graf--h3 graf-after--p">tlc: Checking the Models</h3><p name="0dfa" id="0dfa" class="graf graf--p graf-after--h3">Now it’s time to run the model checker! As with <code class="markup--code markup--p-code">pcal</code>, the <code class="markup--code markup--p-code">.tla</code> extension is optional. 
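</p><p class="graf graf--p graf-after--p">Before you run it, a filled-in configuration file might look something like this (the invariant and property names here are hypothetical; they would need to be defined in your spec):</p><pre class="graf graf--pre graf-after--p">SPECIFICATION Spec<br/>\* Add statements after this line.<br/>INVARIANT TypeOK<br/>PROPERTY EventuallyConsistent</pre><p class="graf graf--p graf-after--pre">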
The basic command looks like this:</p><pre name="5ba8" id="5ba8" class="graf graf--pre graf-after--p">tlc myspec.tla</pre><p name="7035" id="7035" class="graf graf--p graf-after--pre">And this will produce the following output:</p><pre name="f47a" id="f47a" class="graf graf--pre graf-after--p">TLC2 Version 2.10 of 28 September 2017<br/>Running breadth-first search Model-Checking with 1 worker on 4 cores with 910MB heap and 64MB offheap memory (Mac OS X x.x.x x86_64, Oracle Corporation x.x.x_x x86_64).<br/>Parsing file myspec.tla<br/>Parsing file /var/folders/sx/tb9t1_890xd0hs9y0xb1k5fr0000gn/T/Integers.tla<br/>Parsing file /var/folders/sx/tb9t1_890xd0hs9y0xb1k5fr0000gn/T/Naturals.tla<br/>Semantic processing of module Naturals<br/>Semantic processing of module Integers<br/>Semantic processing of module myspec<br/>Starting... (2019-01-17 23:06:59)<br/>Implied-temporal checking--satisfiability problem has 1 branches.<br/>Computing initial states...<br/>Finished computing initial states: 25 distinct states generated.<br/>Progress(5) at 2019-01-17 23:07:00: 332 states generated, 222 distinct states found, 0 states left on queue.<br/>Checking temporal properties for the complete state space with 222 total distinct states at (2019-01-17 23:07:00)</pre><p name="d166" id="d166" class="graf graf--p graf-after--pre">This tells you what modules were imported and how many states the model has. If you’re curious the actual spec we’re running here is the wire transfer example in Hillel Wayne’s book. That particular spec has a temporal property that we know will be violated by the algorithm. 
This is what that looks like:</p><pre name="9cab" id="9cab" class="graf graf--pre graf-after--p">Error: Temporal properties were violated.</pre><pre name="56ea" id="56ea" class="graf graf--pre graf-after--pre">Error: The following behavior constitutes a counter-example:</pre><pre name="fec8" id="fec8" class="graf graf--pre graf-after--pre">State 1: &lt;Initial predicate&gt;<br/>/\ people = {"alice", "bob"}<br/>/\ sender = &lt;&lt;"alice", "alice"&gt;&gt;<br/>/\ receiver = &lt;&lt;"bob", "bob"&gt;&gt;<br/>/\ acc = [alice |-&gt; 5, bob |-&gt; 5]<br/>/\ amount = &lt;&lt;1, 1&gt;&gt;<br/>/\ pc = &lt;&lt;"CheckAndWithdraw", "CheckAndWithdraw"&gt;&gt;</pre><pre name="3f77" id="3f77" class="graf graf--pre graf-after--pre">State 2: &lt;Action line 55, col 27 to line 61, col 77 of module wire&gt;<br/>/\ people = {"alice", "bob"}<br/>/\ sender = &lt;&lt;"alice", "alice"&gt;&gt;<br/>/\ receiver = &lt;&lt;"bob", "bob"&gt;&gt;<br/>/\ acc = [alice |-&gt; 4, bob |-&gt; 5]<br/>/\ amount = &lt;&lt;1, 1&gt;&gt;<br/>/\ pc = &lt;&lt;"CheckAndWithdraw", "Deposit"&gt;&gt;</pre><pre name="0abe" id="0abe" class="graf graf--pre graf-after--pre">State 3: &lt;Action line 55, col 27 to line 61, col 77 of module wire&gt;<br/>/\ people = {"alice", "bob"}<br/>/\ sender = &lt;&lt;"alice", "alice"&gt;&gt;<br/>/\ receiver = &lt;&lt;"bob", "bob"&gt;&gt;<br/>/\ acc = [alice |-&gt; 3, bob |-&gt; 5]<br/>/\ amount = &lt;&lt;1, 1&gt;&gt;<br/>/\ pc = &lt;&lt;"Deposit", "Deposit"&gt;&gt;</pre><pre name="052d" id="052d" class="graf graf--pre graf-after--pre">State 4: &lt;Action line 63, col 18 to line 66, col 68 of module wire&gt;<br/>/\ people = {"alice", "bob"}<br/>/\ sender = &lt;&lt;"alice", "alice"&gt;&gt;<br/>/\ receiver = &lt;&lt;"bob", "bob"&gt;&gt;<br/>/\ acc = [alice |-&gt; 3, bob |-&gt; 6]<br/>/\ amount = &lt;&lt;1, 1&gt;&gt;<br/>/\ pc = &lt;&lt;"Done", "Deposit"&gt;&gt;</pre><pre name="d5ee" id="d5ee" class="graf graf--pre graf-after--pre">State 5: Stuttering<br/>Finished checking temporal 
properties in 00s at 2019-01-17 23:07:00<br/>332 states generated, 222 distinct states found, 0 states left on queue.<br/>Finished in 01s at (2019-01-17 23:07:00)</pre><p name="8b81" id="8b81" class="graf graf--p graf-after--pre">If you are already familiar with the Toolbox, this looks pretty much like the output you’d see before, except without the highlighting.</p><p name="03d2" id="03d2" class="graf graf--p graf-after--p">By default <code class="markup--code markup--p-code">tlc</code> stops at the first invariant error. If you’d like it to continue, use the flag <code class="markup--code markup--p-code">-continue</code>. In general <code class="markup--code markup--p-code">tlc</code>'s options are split into two types: MODE-SWITCHES and GENERAL-SWITCHES. Mode switches change how the models are run, from switching between breadth-first and depth-first search to picking a seed number for randomization. General switches allow you to specify things like a different configuration file or how much detail is printed to stdout.</p><p name="97b5" id="97b5" class="graf graf--p graf-after--p graf--trailing">I hope this is enough to get you started. I will update this post from time to time as I figure out more about how the tools work. <strong class="markup--strong markup--p-strong">Happy Model Checking!</strong></p></div> Fri, 18 Jan 2019 05:18:35 +0000 Marianne Bellotti Dbeaver – Multi-platform database tool <p>Free multi-platform database tool for developers, SQL programmers, database administrators and analysts. 
Supports all popular databases: MySQL, PostgreSQL, MariaDB, SQLite, Oracle, DB2, SQL Server, Sybase, MS Access, Teradata, Firebird, Derby, etc.</p> <div class="call-to-action"> <a href="" class="blue button"> Download </a> </div><!-- end of .call-to-action --> Thu, 17 Jan 2019 22:08:35 +0000 5G: if you build it, we will fill it <div class="row sqs-row"><div class="col sqs-col-12 span-12"><div class="sqs-block html-block sqs-block-html" data-block-type="2" id="block-f32a1c09e1a1bd72c9c1"><div class="sqs-block-content"><p>In early 2000, right at the top of the dotcom bubble, the mobile bubble and the broadband bubble, European mobile operators spent €110bn on licenses for 3G spectrum. Now, almost 20 years later, I’ve just got back from CES, and 5G is a Topic. Many of my friends at big companies tell me that ‘what is 5G?’ floats around a lot of corporate headquarters almost as much as ‘what is machine learning?’ does. </p><p>There are a bunch of different ways to answer this. If I was still a telecoms analyst, I would be spending a lot of time thinking about spectrum, deployment schedules and capex - mobile operators around the world spend several hundred billion dollars a year on network capex, and 5G will become a big part of that. I’d talk about network efficiencies, refarming, vendors, Huawei, chipsets, and maybe NFV. But I’m not a telecoms analyst anymore - I work in Silicon Valley. So, seen from Silicon Valley, I think there are maybe four things to talk about.</p><h3>First, what actual changes should we expect?</h3><p>Without going into the technical details (any more than absolutely necessary), what do we actually get from this? A fatter pipe.</p><ul data-rte-list="default"><li><p>As with each previous generational change, 5G makes it cheaper and easier for mobile operators to build more capacity. 
So, they can continue to accommodate growing usage.</p></li><li><p>5G will be deployed on existing cellular radio frequencies, but also lets operators address much higher radio frequencies (over 20 GHz, AKA millimeter wave or ‘mmWave’) that have never previously been usable for mobile services. (This will also require the installation of many short range base stations.)</p></li><li><p>Mainly because of this new spectrum, mobile 5G speeds in good conditions could be well over 100 megabits/sec and potentially several hundred megabits/sec (mobile speeds of over a gigabit/sec are technically possible but unlikely in the real world).</p></li><li><p>However, deployment at these frequencies, and hence these eye-catching new speeds, will be possible only in pretty constrained areas and will happen pretty slowly. Signals at such frequencies have worse range and don’t go through walls (to simplify hugely), so don’t expect these speeds in rural areas or indeed inside buildings (this is why they have not previously been used for mobile at all). 5G deployment on more conventional mobile spectrum will have speeds (and coverage) closer to 4G.</p></li><li><p>5G is promised to have much better latency than 4G - perhaps 20-30ms in the real world, down from 50-60ms for LTE (4G). It’s not clear how visible this will be to users.</p></li><li><p>Some people (eg Verizon) think that you can also use 5G in these higher frequencies for a home broadband service (which would mean an antenna on the outside of your house, or in a window), offering up to a gigabit/sec. There is also a fair bit of skepticism about both the economic and technical cases for this. Of course, the newest version of DOCSIS (cable internet) offers much the same - something around a third of the US population could have access to these speeds anyway.</p></li></ul><p><strong>So, mobile gets better latency and the mobile pipe keeps getting fatter. 
Fixed broadband will get more competition, in some places.</strong></p><h3>Second, what does it mean to have steadily fatter pipes?</h3><p>The internet first took off with dial-up, and the first consumer mobile internet service to get mass adoption, NTT DoCoMo’s i-mode, had 2G data speeds - both of these gave us tens of Kbits/sec at best. DSL and the first deployments of 3G gave us a couple of hundred Kbits/sec. Then improvements to 3G (‘3.5G’) and then 4G gave us tens of Mbits/sec (and also much better latency) and improvements to DSL and DOCSIS gave us fixed home broadband speeds in the tens or low hundreds of Mbits/sec.</p><p>With each of these surges in speed, two things happen. First, the things we’re already doing get smoother and easier and quicker, and also get more capable (or bloated). Pages get more images and become more dynamic. Second, new things become possible. You could not have done Flickr or Google Maps on dialup, and you could not have done Netflix (or at least not well) on the broadband of 2003. In the following generation, Snapchat only worked when you could presume that all of your users can connect at tens of Mbits/sec (when they’re not on home WiFi, of course). That in turn means networks with the overall capacity to give that speed not just to one person at a time but to lots of people, and network infrastructure that can do that at a vaguely reasonable price. If you’d shown Snapchat to a mobile network executive in the early 2000s, their hair would have gone white - there was just no way the early 3G network could have supported that kind of load. </p><p>In the same way, then, 5G speeds, and ever-faster home broadband, will mean that existing applications will get richer, and also that new applications will emerge - new Flickrs, YouTubes or Snapchats. We don’t know what yet, exactly, though we can make some early guesses, but the creativity of entrepreneurs and platforms and the choices of consumers will decide. 
This is the great thing about the decentralized, permissionless innovation of the internet - telcos don’t need to decide in advance what the use cases are, any more than Intel had to decide what the use cases for faster CPUs would be.</p><h3>Third, AR and VR, and cars.</h3><p>Having said that we don’t know what the use cases will be, there are a couple that do repeatedly come up in conversations around 5G: AR and VR, and autonomous cars. It’s worth spending a little time talking about each of these. </p><p>I think of VR as fundamentally an indoor product - you will not use it walking down the street or pop it open for 20 seconds while you’re waiting for the bus. That means that the connectivity is whatever your home broadband is - DSL, fiber, cable or, perhaps, 5G, plus however you connect to that (i.e. WiFi, mostly). 5G here means two different possibilities. It might mean fixed 5G (with very limited coverage) at up to a gigabit/sec to an antenna outside your home and then WiFi to the headset, or else it means a cellular, ‘mobile’ modem in your device, in which case you will get speeds much closer to today’s 4G LTE (again, 20 GHz signals do not go through walls). You’ll also get 5G latency, which is better than 4G. Is 5G better than the existing connection? Will you notice? Is that what VR is waiting for? For how many applications? </p><p>5G seems rather more interesting for AR. To clarify first, ‘AR’ today is used to describe three different things:</p><ol data-rte-list="default"><li><p>Waving your phone at something and seeing things on the screen</p></li><li><p>A wearable heads-up display (Google Glass) with no awareness of the world around you</p></li><li><p>A transparent, immersive, fully 3D color display with a sensing suite that allows it to map the room around you and recognise things and people. 
A bunch of companies (including Magic Leap, in which a16z is an investor) are working on this - it’s still a few years away from being a mass-market consumer product.</p></li></ol><p>The third of these seems much the most interesting to me. If you could put on a pair of reading glasses that could look at the world around you and show you things in response, that could be pretty useful, in much the same way that, say, having the internet in your pocket turned out to be useful, and to enable all sorts of new and unpredictable things (imagine pitching Snapchat when our only internet experience was on a PC over dialup). This would work on 4G, but continuous low power high speed low latency connections from 5G would make it a lot better. </p><p>At the other extreme, I also hear a fair bit about autonomy as a 5G application. I’m not sure about this one. Autonomous cars will certainly use a great deal of data. They will be downloading ‘maps’ (really, very high definition 3D models of the streets they’re driving along) and also updating those maps with data from their own sensors, and they will be downloading updates to their driving systems and uploading more data about how real people drive. However, very little of this needs to happen in real time - it can happen every night or even every week. No mainstream autonomous car project requires continuous connectivity at all, let alone through 5G - the car has to work with no cellular service.</p><p>As we look further out, into a world with lots of autonomous cars, there is widespread interest in those cars being able to talk to each other, so that for example they can speed through a junction without slowing down. This might be done with 5G, or perhaps with another wireless technology. 
However, most of these use-cases only make sense in a world where there are no human drivers to worry about - this is, obviously, a long way away.</p><p>More imminent is remote operation as a back-stop for when a vehicle gets confused and stops, or when you shift from manually-driven to autonomous - for when autonomy doesn’t work. This is particularly relevant to trucking: 90% or so of the mileage of a long-haul truck in the USA happens on highways, and highways are much easier for autonomy than suburban streets, so a number of companies are exploring a model in which the truck drives itself on the freeway and a human takes over when it leaves - either by physically getting into the vehicle or by connecting remotely. This would be a good 5G use case. But to begin with, you can do it with 4G. It doesn’t <em>presume </em>5G. The advantage of 5G comes partly with latency, but also with things like network slicing, which takes me onto the next section. </p><h3>Fourth, network slicing and industrial uses.</h3><p>One of the cooler features of 5G is that it lets you split out dedicated capacity for particular use cases - so-called ‘network slicing’. Today (to simplify hugely), although network operators try to do traffic management, all traffic in the cell is fundamentally using the same capacity. 5G lets you create dedicated private capacity in the radio network with specific characteristics. So, you could sell a truck operator dedicated capacity on the two miles between a specific freeway exit and a specific warehouse. Or, you could offer an IoT operator (or alarm company) much lower bandwidth but over a wider area. </p><p>Hence, you could theoretically customise any mix of data speed, coverage, quality, latency, and reliability, or even more narrow things like power consumption, and there will probably be a layer of resellers that emerges to aggregate and implement these kinds of services on behalf of MNOs. 
This seems interesting - it also seems likely to be an enterprise and vertical application story, not a consumer story. On this theme, we are already seeing a trend for large industrial organizations to use private 4G networks instead of WiFi or even Ethernet. The same trend will also apply to 5G.</p><h3>So, what’s the killer app for 5G?</h3><p>In 2000 or so, when I was a baby telecoms analyst, it seemed as though every single telecoms investor was asking ‘what’s the killer app for 3G?’ People said ‘video calling’ a lot. But 3G video calls never happened, and it turned out that the killer app for having the internet in your pocket was, well, having the internet in your pocket. Over time, video turned out to be one part of that, but not as a telco service billed by the second. Equally, the killer app for 5G is probably, well, ‘faster 4G’. Over time, that will mean new Snapchats and new YouTubes - new ways to fill the pipe that wouldn’t work today, and new entrepreneurs. It probably isn’t a revolution - or rather, it means that the revolution that’s been going on since 1995 or so keeps going for another decade or more, until we get to 6G. </p></div></div></div></div> Fri, 18 Jan 2019 05:28:37 +0000 Problems plagued U.S. Navy destroyer Fitzgerald before fatal collision – report <div class="mco-body-item mco-body-type-text" readability="29.763779527559"> <p class="element element-paragraph"> A scathing internal Navy probe into the 2017 collision that drowned seven sailors on the guided-missile destroyer Fitzgerald details a far longer list of problems plaguing the vessel, its crew and superior commands <a href="" target="_blank">than the service has publicly admitted</a>.</p> </div> <div class="mco-body-item mco-body-type-text" readability="36"> <p class="element element-paragraph"> Obtained by Navy Times, the “dual-purpose investigation” was overseen by Rear Adm. 
Brian Fort and submitted 41 days after the June 17, 2017, tragedy.</p> </div> <div class="mco-body-item mco-body-type-text" readability="33"> <p class="element element-paragraph"> It was kept secret from the public in part because it was designed to prep the Navy for potential lawsuits in the aftermath of the accident.</p> </div> <div class="mco-body-item mco-body-type-text" readability="35"> <p class="element element-paragraph"> Unsparingly, Fort and his team of investigators outlined critical lapses by bridge watchstanders on the night of the collision with the Philippine-flagged container vessel ACX Crystal in a bustling maritime corridor off the coast of Japan.</p> </div> <div class="mco-body-item mco-body-type-text" readability="37"> <p class="element element-paragraph"> Their report documents the routine, almost casual, violations of standing orders on a Fitz bridge that often lacked skippers and executive officers, even during potentially dangerous voyages at night through busy waterways.</p> </div> <div class="mco-body-item mco-body-type-text" readability="36.174721189591"> <p class="element element-paragraph"> The probe exposes how personal distrust led the officer of the deck, Lt. j.g. Sarah Coppock, to avoid communicating with the destroyer’s electronic nerve center — the combat information center, <a href="" target="_blank">or CIC</a> — while the Fitzgerald tried to cross a shipping superhighway.</p> </div> <div class="mco-body-item mco-body-type-text" readability="35"> <p class="element element-paragraph"> When Fort walked into the trash-strewn CIC in the wake of the disaster, he was hit with the acrid smell of urine. He saw kettlebells on the floor and bottles filled with pee. 
Some radar controls didn’t work and he soon discovered crew members who didn’t know how to use them anyway.</p> </div> <div class="mco-body-item mco-body-type-text" readability="36"> <p class="element element-paragraph"> Fort found a Voyage Management System that generated more “trouble calls” than any other key piece of electronic navigational equipment. Designed to help watchstanders navigate without paper charts, the VMS station in the skipper’s quarters was broken so sailors cannibalized it for parts to help keep the rickety system working.</p> </div> <div class="mco-body-item mco-body-type-text" readability="32.23046875"> <p class="element element-paragraph"> Since 2015, the Fitz had lacked a <a href="" target="_blank">quartermaster chief petty officer</a>, a crucial leader who helps safely navigate a warship and trains its sailors — a shortcoming known to both the destroyer’s squadron and Navy officials in the United States, Fort wrote.</p> </div> <div class="mco-body-item mco-body-type-text" readability="34"> <p class="element element-paragraph"> Fort determined that Fitz’s crew was plagued by low morale; overseen by a dysfunctional chiefs mess; and dogged by a bruising tempo of operations in the Japan-based 7th Fleet that left exhausted sailors with little time to train or complete critical certifications.</p> </div> <div class="mco-body-item mco-body-type-text" readability="36.223728813559"> <p class="element element-paragraph"> To Fort, they also appeared to be led by officers who appeared indifferent to potentially life-saving lessons that should’ve been learned from other near-misses at sea, including a similar incident near Sasebo, Japan, that occurred only five weeks before the <a href="" target="_blank">ACX Crystal collision</a>, Fort wrote.</p> </div> <div class="mco-body-item mco-body-type-text" readability="32"> <p class="element element-paragraph"> <b>‘Significant progress’</b></p> </div> <div class="mco-body-item mco-body-type-text" 
readability="35.272727272727"> <p class="element element-paragraph"> Fort’s work took on added urgency after another destroyer assigned to the 7th Fleet, the <a href="" target="_blank">John S. McCain, </a>collided with the Liberian-flagged tanker Alnic MC on Aug. 21, 2017, killing 10 more American sailors.</p> </div> <div class="mco-body-item mco-body-type-text" readability="32"> <p class="element element-paragraph"> But it remained an internal file never to be shared with the public.</p> </div> <div class="mco-body-item mco-body-type-text" readability="33"> <p class="element element-paragraph"> Pentagon officials declined to answer specific questions sent by Navy Times about the Fort report and instead defended the decision to keep the contents of the report hidden from public scrutiny.</p> </div> <div class="mco-body-item mco-body-type-text" readability="39"> <p class="element element-paragraph"> “The Navy determined to retain the legal privilege in order to protect the legal interests of the United States, but provided information regarding the causes and lessons learned to families of those sailors, the Congress and the American people, again to make every effort to ensure these types of tragedies to not happen again,” said Navy spokesman Capt. 
Gregory Hicks in a prepared written statement to Navy Times.</p> </div> <div class="mco-body-item mco-body-type-text" readability="33.440233236152"> <p class="element element-paragraph"> In the 19 months since the fatal collision, the Navy’s Readiness Reform Oversight Council has made “significant progress” in implementing reforms called for<a href="" target="_blank"> in several top-level Navy reviews </a>of the Fitzgerald and McCain collisions — nearly 75 percent of the 111 recommendations slated to be implemented by the end of 2018, Hicks added.</p> </div> <div class="mco-body-item mco-body-type-text" readability="33"> <p class="element element-paragraph"> Navy Times withheld publication of the Fort report’s details until Pentagon officials could brief the families of the dead Fitz sailors about the grim findings.</p> </div> <div class="mco-body-item mco-body-type-text" readability="38"> <p class="element element-paragraph"> Sailors Xavier Martin, Dakota Rigsby, Shingo Douglass, Tan Huynh, Noe Hernandez, Carlos Sibayan and Gary Rehm drowned in the disaster.</p> </div> <div class="mco-body-item mco-body-type-text" readability="32"> <p class="element element-paragraph"> Coppock pleaded guilty to a dereliction of duty charge at court-martial last year.</p> </div> <div class="mco-body-item mco-body-type-text" readability="32.0625"> <p class="element element-paragraph"> The Fitz’s commanding officer,<a href="" target="_blank"> Cmdr. Bryce Benson</a>, and <a href="" target="_blank">Lt. 
Natalie Combs</a>, who ran the CIC, are battling similar charges in court but contend unlawful command influence by senior leaders scuttled any chance for fair trials.</p> </div> <div class="mco-body-item mco-body-type-text" readability="36"> <p class="element element-paragraph"> When Fort arrived at her CIC desk, he found a stack of paperwork Combs abandoned: “She was most likely consumed and distracted by a review of Operations Department paperwork for the three and a half hours of her watch prior to the collision,” Fort wrote.</p> </div> <div class="mco-body-item mco-body-type-text" readability="36"> <p class="element element-paragraph"> Although Fort’s report drew parallels to a 2012 non-fatal accident involving the destroyer Porter and the supertanker M/V Otowasan in the Strait of Hormuz, his investigation focused on a near-miss by the Fitzgerald near Sasebo on May 10, 2017.</p> </div> <div class="mco-body-item mco-body-type-text" readability="38"> <p class="element element-paragraph"> During that incident, an unnamed junior officer “became confused by the surface contact picture” of vessels surrounding the destroyer and summoned the warship’s then-commanding officer, Cmdr. Robert Shu, to the bridge, according to Fort.</p> </div> <div class="mco-body-item mco-body-type-text" readability="32"> <p class="element element-paragraph"> Shu set the course to steer the Fitz behind the merchant vessel and then left the bridge.</p> </div> <div class="mco-body-item mco-body-type-text" readability="35"> <p class="element element-paragraph"> But once the officer in charge had cleared the other ship’s stern, he “became immediately aware that another vessel was on the opposite side” of the ship they had just dodged, Fort wrote.</p> </div> <div class="mco-body-item mco-body-type-text" readability="36"> <p class="element element-paragraph"> “(The officer) sounded five short blasts and ordered all back full emergency to avoid collision,” something Lt. j.g. 
Coppock failed to do weeks later when the ACX Crystal loomed out of the darkness, the report states.</p> </div> <div class="mco-body-item mco-body-type-text" readability="38"> <p class="element element-paragraph"> To Fort, the earlier incident should’ve been a wakeup call for both Shu and Cmdr. Benson, his executive officer who would soon “fleet up” to replace him as skipper, plus Benson’s future second-in-command, Cmdr. Sean Babbitt.</p> </div> <div class="mco-body-item mco-body-type-text" readability="38"> <p class="element element-paragraph"> “FTZ’s command leadership was unaware of just how far below standards their command had drifted,” wrote Fort, a surface warfare officer with more than a quarter-century of experience. “Had the (commanding officer) and (executive officer) critiqued the near-collision, they may have identified the root causes uncovered by this investigation.”</p> </div> <div class="mco-body-item mco-body-type-text" readability="35"> <p class="element element-paragraph"> When contacted by Navy Times, Shu recalled the incident that took place just east of the Tsushima Strait, “a normally busy and recognized waterway.”</p> </div> <div class="mco-body-item mco-body-type-text" readability="36"> <p class="element element-paragraph"> “As I was heading down the ladderwell to my cabin, I heard five short blasts and felt the ship back,” Shu said. 
“I ran back up to the bridge and there was another vessel behind the one we had just maneuvered for.”</p> </div> <div class="mco-body-item mco-body-type-text" readability="35"> <p class="element element-paragraph"> Although Shu couldn’t recall how close the two vessels got to each other, he insisted that the incident wasn’t a near-collision and that his bridge team “reacted appropriately” and later assured him that they had a good picture of the vessels around their destroyer.</p> </div> <div class="mco-body-item mco-body-type-text" readability="35"> <p class="element element-paragraph"> But Fort’s investigation pointed to a disturbing pattern of watchstanders failing to follow standing orders from a skipper and XO who often were inexplicably absent from the bridge, even when the warship was transiting potentially dangerous waters at night.</p> </div> <div class="mco-body-item mco-body-type-text" readability="37"> <p class="element element-paragraph"> One junior officer spoke of a similar near-collision during low visibility, when a watch team finishing their shift failed to identify a vessel that was closing on them and wasn’t being tracked, according to the report. 
The oncoming officer of the deck maneuvered out of the vessel’s way but never notified the commanding officer.</p> </div> <div class="mco-body-item mco-body-type-text" readability="35"> <p class="element element-paragraph"> Watchstanders admitted to knowing of other instances when ships got close enough to trigger a call to the CO, but they never made it, according to the report.</p> </div> <div class="mco-body-item mco-body-type-text" readability="39"> <p class="element element-paragraph"> “Procedural compliance by Bridge watchstanders is not the norm onboard FTZ, as evidenced by numerous, almost routine, violations of the CO’s standing orders,” not to mention radio transmissions laced with profanity and “unprofessional humor,” Fort found.</p> </div> <div class="mco-body-item mco-body-type-text" readability="35"> <p class="element element-paragraph"> Benson and predecessor Shu spent little time on the bridge during nighttime transits and Benson was asleep in his quarters on the fateful night the Fitzgerald collided with the ACX Crystal, Fort wrote.</p> </div> <div class="mco-body-item mco-body-type-text" readability="37"> <p class="element element-paragraph"> Some of Benson’s bridge team had never transited the busy waterway before, or had only done so during the day, and “his watchstanders were at least as fatigued as he was from a long day of operations without sufficient rest,” Fort found.</p> </div> <div class="mco-body-item mco-body-type-text" readability="34"> <p class="element element-paragraph"> It also was Benson’s first transit from Sagami Bay to the open sea as the warship’s skipper, a command he assumed just a few days after the near-collision off Sasebo.</p> </div> <div class="mco-body-item mco-body-type-text" readability="38"> <p class="element element-paragraph"> “It is inexplicable that neither Benson nor (executive officer Cmdr. 
Babbitt) were on the bridge for his first outbound Yokosuka transit as CO, at night, in close proximity to land, and expecting moderately dense fishing and merchant traffic,” Fort wrote.</p> </div> <div class="mco-body-item mco-body-type-text" readability="38"> <p class="element element-paragraph"> Ship travel is governed by the “rules of the road,” a set of guidelines regarding speed, lookouts and other best practices to avoid collisions, but Fort’s report casts doubt on whether watchstanders on board the Fitz and sister warships in the 7th Fleet had sufficient knowledge of them to safely navigate at sea.</p> </div> <div class="mco-body-item mco-body-type-text" readability="34"> <p class="element element-paragraph"> About three weeks after the ACX Crystal disaster, Fort’s investigators sprang a rules of the road pop quiz on Fitz’s officers.</p> </div> <div class="mco-body-item mco-body-type-text" readability="33"> <p class="element element-paragraph"> It didn’t go well. The 22 who took the test averaged a score of 59 percent, Fort wrote.</p> </div> <div class="mco-body-item mco-body-type-text" readability="35"> <p class="element element-paragraph"> “Only 3 of 22 Officers achieved a score over 80%,” he added, with seven officers scoring below 50 percent.</p> </div> <div class="mco-body-item mco-body-type-text" readability="34"> <p class="element element-paragraph"> The same exam was administered to the wardroom of another unnamed destroyer as a control group, and those officers scored similarly dismal marks.</p> </div> <div class="mco-body-item mco-body-type-text" readability="34"> <p class="element element-paragraph"> The XO Babbitt, Coppock and two other officers refused to take the test, according to the report.</p> </div> <div class="mco-body-item mco-body-type-text" readability="34"> <p class="element element-paragraph"> Reached by email, Babbitt told Navy Times that he declined because of the investigation and the fact that Fort had read him his rights.</p> 
</div> <div class="mco-body-item mco-body-type-text" readability="36"> <p class="element element-paragraph"> “The exam was also given weeks after the collision when the wardroom had not been concentrating on the rules of the road,” he said. “The crew had been pulled from event to event to include the memorial service and the dignified send off and the last thing anyone had been thinking about was how many lights a 50 meter towing vessel on inland waterways should have.”</p> </div> <div class="mco-body-item mco-body-type-text" readability="33"> <p class="element element-paragraph"> Speaking through his defense attorney, Benson declined to comment on the Fort report’s findings.</p> </div> <div class="mco-body-item mco-body-type-text" readability="36"> <p class="element element-paragraph"> In an email to Navy Times, Lt. Cmdr. Justin Henderson said Benson “has never declined or avoided the responsibility that is the burden of command at sea” and remains “accountable for the Fitzgerald and her crew, who remain at the forefront of his thoughts.”</p> </div> Fri, 18 Jan 2019 04:22:00 +0000 Judge unseals trove of internal Facebook documents <div id="republish-content"> <p class="author-date"> <span class="module-article-meta">By <a href="" rel="author">Nathan Halverson</a> / <span class="date" itemprop="datePublished">January 17, 2019 </span></span> </p> <p>This story was originally published by Reveal from The Center for Investigative Reporting, a nonprofit news organization based in the San Francisco Bay Area. Learn more at <a href=""></a> and subscribe to the Reveal podcast, produced with PRX, at <a href=""></a>.</p><p>A trove of hidden documents detailing how Facebook made money off children will be made public, a federal judge ruled late Monday in response to requests from Reveal.</p> <p>A glimpse into the soon-to-be-released records shows Facebook’s own employees worried they were bamboozling children who racked up hundreds, and sometimes even thousands, of dollars in game charges.
And the company failed to provide an effective way for unsuspecting parents to dispute the massive charges, according to internal Facebook records.</p> <p>The documents are part of a 2012 class-action lawsuit against the social media giant that claimed it inappropriately profited from business transactions with children.</p> <p>The lead plaintiff in the case was a child who used his mother’s credit card to pay $20 while playing a game on Facebook. The child, referred to as “I.B.” in the case, did not know the social media giant had stored his mom’s payment information. As he continued to play the game, Ninja Saga, Facebook continued to charge his mom’s credit card, racking up several hundred dollars in just a few weeks.</p> <p>The child “believed these purchases were being made with virtual currency, and that his mother’s credit card was not being charged for these purchases,” according to a previous ruling by U.S. District Court Judge Beth Freeman.</p> <p>When the bill came, his mom requested Facebook refund the money, saying she never authorized any charges beyond the original $20. But the company never refunded any money, forcing the family to file a lawsuit in pursuit of a refund.</p> <p>The court documents, which have remained hidden for years, came to light after Reveal from The Center for Investigative Reporting intervened last year to request the records be unsealed. There is increased public interest in Facebook’s business practices in the wake of high-profile scandals, including fake news published on the site and the leaking of user data. On Monday, the court agreed to unseal some of the records.</p> <p>Facebook has 10 days to make the bulk of the documents – more than a hundred pages – available to the public, according to the order. </p> <p>Reveal’s legal effort in the case has already uncovered some of the previously sealed information. Four documents that were either originally sealed or redacted were made partially available to Reveal in October. 
The documents show widespread confusion by children and their parents, who didn’t understand Facebook continued to charge them as they played games.</p> <p>Facebook employees began voicing their concerns that people were being charged without their knowledge. The social media company decided to analyze one of the most popular games of the time, Angry Birds, and discovered the average age of people playing it on Facebook was 5 years old, according to newly revealed information.</p> <p>“In nearly all cases the parents knew their child was playing Angry Birds, but didn’t think the child would be allowed to buy anything without their password or authorization first,” according to an internal Facebook memo. The memo noted that on other platforms, such as Apple’s iPhone, people were required to reauthorize additional purchases, such as by re-entering a password.</p> <p>A Facebook employee noted that children were likely to be confused by the in-game purchases because it “doesn’t necessarily look like real money to a minor.”</p> <p>Yet the company continued to deny refunds to children, profiting from their confusion. </p> <p>In one of the unsealed documents, two Facebook employees deny a refund request from a child whom they refer to as a “whale” – a term coined by the casino industry to describe profligate spenders. The child had entered a credit card number to play a game, and in about two weeks racked up thousands of dollars in charges, according to an excerpt of messages between two employees at the social media giant.</p> <p>Gillian: Would you refund this whale ticket? User is disputing ALL charges…</p> <p>Michael: What’s the users total lifetime spend?</p> <p>Gillian: It’s $6,545 – but card was just added on Sept. 2. They are disputing all of it I believe. That user looks underage as well. Well, maybe not under 13.</p> <p>Michael: Is the user writing in a parent, or is this user a 13ish year old</p> <p>Gillian: It’s a 13ish yr old. says its 15. looks a bit younger. 
she* not its. Lol.</p> <p>Michael: … I wouldn’t refund</p> <p>Gillian: Oh that’s fine. cool. agreed. just double checking</p> <p>Facebook often failed to send receipts for these purchases, and links on the company’s website to dispute charges frequently failed to work, according to court records. A Facebook employee is quoted describing their attempt to dispute a charge.</p> <p>“I was stuck in an infinite-loop of questions just today,” the employee wrote. “It feels like the form is this Frankenstein beast that we’ve bolted together.”</p> <p>In unsealing some of the documents in the case, the judge wrote, “this information would be of great public interest, particularly since it relates specifically to Facebook’s transactions with minors.”</p> <p>“I’m glad to hear the public will get to learn more about this matter,” said attorney Ben Edelman, who represented the children and parents. </p> <p>The two sides settled the lawsuit in 2016. Edelman declined to say more, citing a confidentiality clause in the settlement. </p> <p>The judge agreed with Facebook’s request to keep some of the records sealed, saying certain records contained information that would cause the social media giant harm, outweighing the public benefit.</p> <p>In response to a request for an interview, Facebook provided a one-sentence statement: “We appreciate the court’s careful review of these materials.”</p> <p>We’ll continue to cover this story as the Facebook documents are made public in the coming days.</p> <p>Nathan Halverson can be reached at <a href=""></a>. Follow him on Twitter: <a href="">@eWords</a>.</p></div> Fri, 18 Jan 2019 07:35:13 +0000 Attention Economy Is a Malthusian Trap <p>Tech’s death in this case is really a sign of two different kinds of success.
First, tech died by conquering the world. Netflix is leading a global transition from linear to streaming television. Tesla accelerated an electric awakening among auto companies. But if Netflix and Walt Disney both use technology to stream video, why is only Netflix trading at infinity-times earnings? And if Tesla and BMW “both use battery technology to power luxury cars,” Deluard writes, “why should the former trade at 42 times forward earnings when the latter fetches 5.6 times trailing earnings?” Good question.</p><p>Second, some of the largest tech companies have exhausted their main markets. Apple and Samsung may have reached <a href="" data-omni-click="r'article',r'',d,r'intext',r'7',r'None'">the smartphone plateau</a>, as phone sales seem to have peaked. Facebook and Google have grown to dominate digital advertising. But in the U.S., overall ad spending has historically averaged no more than 3 percent of GDP. How do you grow forever in a sector that isn’t growing? That’s easy: You don’t. There may be a Malthusian trap in the attention economy. Eventually, revenue growth bumps up against the natural limitations of population and waking hours.</p><hr class="c-section-divider"/><p>But here’s another interpretation of the past 12 months in tech: Perhaps it’s not the end of tech, or even the beginning of the end. It’s “<a href="" data-omni-click="r'article',r'',d,r'intext',r'8',r'None'">the end of the beginning</a>,” says Benedict Evans, a partner at Andreessen Horowitz.</p><p>In the first era, tech companies mostly solved media problems. Need a hardware platform for media consumption? Apple and Samsung did it, with several billion smartphone sales. Need a software portal for the world’s information? Google did it. Need a global village to talk about the world’s media? Facebook did it. Monopolize media consumption in the world’s largest country? 
Tencent did it.</p><p><a href="" data-omni-click="r'article',r'',d,r'intext',r'10',r'None'">Software ate media</a>, and media went down pretty smoothly. Now it has to gnaw through the harder, crunchier parts of the global economy. Software eating life sciences? Software eating elderly care? Software eating household construction? Software eating <a href="" data-omni-click="r'article',r'',d,r'intext',r'11',r'None'">money</a>? Good luck.</p><p>Look closer at one big sector where tech companies have already started chewing: e-commerce. Online shopping is a $500 billion industry in the U.S., which sounds like quite a lot. But really, it’s no more than Americans spend each year at gas stations. Yep, <em>gas stations</em>.</p><p>E-commerce started with the easy stuff. The OG Amazon model sold books, which are among the world’s most reliable, durable units. When you buy <em>Lolita</em> on Amazon, you’re not worried that it will arrive missing the first page, or smelling like fish, or saying “by Dan Brown” on the cover. But with its Whole Foods acquisition, Amazon expanded into groceries, like fruit and meat, which can spoil, sour, and squish in transit. This is a much harder problem. Before millions of people will trust Amazon to deliver with equal reliability hardcovers and heirloom tomatoes, the company will have to invest more money in more warehouses and more transportation equipment.</p> Wed, 16 Jan 2019 23:47:24 +0000 The Scientist and Engineer's Guide to Digital Signal Processing (1999) Fri, 18 Jan 2019 01:21:54 +0000 Let’s talk about open-source sustainability <p><span><i><span>Do you contribute to open source software (OSS)? If you’re interested in sharing your perspective, </span></i><b><i>please consider filling out the form at the bottom of this post.</i></b></span></p> <p><span>Hello, my name is Devon! I just joined GitHub as the open source product manager. I’m here to support maintainers in cultivating vital, productive communities.</span></p> <p><span>This is my dream job. I’m a developer with a passion for governance and economics, and I joined GitHub with the specific mission of supporting OSS. I also spend a lot of time thinking about how cities work. That might seem like an irrelevant nerdy fact, but cities and OSS share deep parallels. My favorite urban economist Alain Bertaud said, “Close proximity, which is so essential to the creativity of cities, requires special rules, shared investments, and common services.” It’s simply not enough to bring people into the same space—whether it’s virtual or physical—and expect everything to go smoothly. </span></p> <p><span>As the OSS community has grown in scale and importance, the way we think about working together has to evolve, too. What works in a village or a town needs to evolve to serve a metropolis. Open source has grown from a small, academic sharing network to a giant, global web of dependencies. It now forms the backbone of the internet and technology in general. Just like any growing city, we have to coordinate the knowledge, infrastructure, and tools for the good of the whole community. </span></p> <p><span>OSS is an essential and special part of software development. OSS has also been the heart of GitHub since the beginning. However, there is so much more we could do to support the people behind it.
I have many ideas, but first I want to hear from you.</span></p> <p><span>OSS makes world-class tools available to everyone. It feels so routine now, but this is such a special part of software. Every <code>import</code> or <code>include</code> statement is the contribution of a team of experts who, together, have devoted immense energy to the problem so that each developer importing their work doesn’t have to. OSS is an extraordinary version of “standing on the shoulders of giants”. </span></p> <p><span>OSS maintainers and contributors build tools for the rest of us, yet they don’t have all the tools, support, and environment they need to succeed. For example:</span></p> <ul><li><b>Lack of communication resources:</b><span> As projects grow, communicating with users becomes increasingly challenging. Many OSS teams find themselves building project and community management tools from scratch, sapping energy from the core project.</span></li> <li><b>Work overload:</b><span> Teams often find themselves exhausted when their user base grows faster than their bandwidth. Solving a big problem for many people is satisfying, but it gets more difficult over time and creates long-term sustainability issues. In many cases, the author never planned to be responsible for a critical piece of digital infrastructure. They were trying to solve their own problem, and it turned out to also be useful to many others.</span></li> <li><b>Abuse:</b><span> No one deserves abuse. OSS contributors are often on the receiving end of harassment, demands, and general disrespect, even as they volunteer their time to the community.</span></li> <li><b>Inadequate resources:</b><span> OSS is everywhere, but it still faces a lack of resources. Developers and companies benefit from a vibrant OSS ecosystem, but individually they lack proportionate incentive to contribute time and money to creating and maintaining projects. This dramatically limits the value of OSS despite its enormous potential. 
</span></li> <li><b>Sparse analytics:</b><span> Beyond download statistics, maintainers have limited visibility into how their software is used. They have a pulse on the day-to-day needs of the community through hands-on interaction with contributors and users, but the tools to do this could be better, and there are only a few tools that give a larger view of what’s going on.</span></li> <li><b>Asymmetric recognition:</b><span> Many types of contributions go into an OSS project beyond code. Unfortunately, hard work, including project maintenance, can go unnoticed and unrecognized when it isn’t legible to the project’s users.</span></li> <li><b>Insufficient mentorship:</b><span> OSS can be a challenging environment to find mentorship and learn best practices around building and running a project, and newcomers far outnumber experienced folks.</span></li> <li><b>Inadequate governance:</b><span> As a project evolves, the framework by which the team makes, delegates, and communicates decisions must also evolve. Communities are not always well-equipped to guide that evolution.</span></li> </ul><p>… and I’m sure there are more, which is why I want to hear from you!</p> <p><span>I want you to be part of the conversation and our roadmap. These challenges are nuanced, and they are unique to each project and community, so it’s crucial that we have an open dialogue as we focus on helping you address them.</span></p> <p><span>If you’re an open source contributor or maintainer, please join the conversation by filling out the contact form below!
I can’t wait to talk to you.</span></p> Thu, 17 Jan 2019 23:27:07 +0000 Animating CSS Grid <figure><video width="100%" controls=""><source src="/animating-css-grid-7203ce119cbda3023100525bcd88ce93.mp4" type="video/mp4"/></video><figcaption>Animated CSS Grid properties in action (Firefox Nightly)</figcaption></figure><p>Soooo, <a href="">Jen Simmons</a> just dropped a surprise bombshell on Twitter – CSS Grid <code class="language-text">grid-template-columns</code> and <code class="language-text">grid-template-rows</code> properties are now <em>animatable</em> in Firefox Nightly! Naturally I had to jump in and have a go right away!</p> <p>Here’s the demo if you want to have a play:</p> Fri, 18 Jan 2019 06:38:46 +0000 My dog was killed on a walk with a walker ordered through Wag Fri, 18 Jan 2019 02:29:52 +0000 Google is buying Fossil’s smartwatch tech for $40M <p class="p1"><span class="s1">Rumors about a Pixel Watch have abounded for years. Such a device would certainly make sense as Google attempts to prove the viability of its struggling wearable operating system, Wear OS. Seems the company is finally getting serious about the prospect. Today Fossil announced plans to sell its smartwatch IP to the software giant for $40 million.</span></p> <p class="p1"><span class="s1">Sounds like Google will be getting a nice head start here as well.
The deal pertains to “a smartwatch technology currently under development” and involves the transfer of a number of Fossil employees to team Google. </span></p> <p class="p1"><span class="s1">“Wearables, built for wellness, simplicity, personalization and helpfulness, have the opportunity to improve lives by bringing users the information and insights they need quickly, at a glance,” Wear OS VP Stacey Burr said in a statement. “The addition of Fossil Group’s technology and team to Google demonstrates our commitment to the wearables industry by enabling a diverse portfolio of smartwatches and supporting the ever-evolving needs of the vitality-seeking, on-the-go consumer.”</span></p> <p id="speakable-summary">Like the Pixel before it, a Google-created smartwatch could ultimately serve as a proving ground for the company’s open operating system. Wearables in general have struggled recently, and Wear OS is certainly not an exception. A <a href="">rebrand and redesign</a> haven’t done much to shake loose the cobwebs. In fact, Fossil has remained a <a href="">rare constant</a>, developing reasonably priced, fitness-focused products sporting the software.</p> <p>The smartwatch category continues to be dominated by Apple’s offerings, and top competitors Fitbit and Samsung have opted to go different routes, supporting the Pebble-based Fitbit OS and Tizen, respectively. All of this has left Google struggling to differentiate itself and its partners’ offerings.
Fossil’s team certainly has the know-how to build solid watch hardware, so this could prove a solid match.</p> <p>Fossil is quick to note, of course, that it’s still got a team of 200 working on R&amp;D, and while the company is no doubt losing some quality employees, it’s still committed to wearable tech.</p> <p>“Fossil Group has experienced significant success in its wearables business by focusing on product design and development informed by our strong understanding of consumers’ needs and style preferences,” Fossil EVP Greg McKelvey said in a statement. “We’ve built and advanced a technology that has the potential to improve upon our existing platform of smartwatches. Together with Google, our innovation partner, we’ll continue to unlock growth in wearables.”</p> <p>From the outside, at least, this looks to be a similar (albeit much smaller scale) deal to the one <a href="">Google struck with HTC</a> to help bolster its smartphone offerings.</p> Thu, 17 Jan 2019 18:36:09 +0000 Stanford Researchers Launch Free TV Service to Improve Live Streaming Using AI <p>A team of Stanford researchers led by <a href="" rel="noopener" target="_blank">Francis Yan</a>, a doctoral student in computer science, has launched a new free <a href="">Live TV Streaming Service</a> website called <a href="" rel="noopener" target="_blank">Puffer</a>. It’s part of a nonprofit academic research study in the computer science department at Stanford University, working to use AI to improve Internet transmission and video-streaming algorithms. The project is advised by professors <a href="" rel="noopener" target="_blank">Keith Winstein</a> and <a href="" rel="noopener" target="_blank">Philip Levis</a>.</p> <p>Users can stream six Bay Area local TV stations, including CBS (KPIX 5), NBC (KNTV 11), ABC (KGO 7), FOX (KTVU 2), PBS (KQED 9), and Univision (KDTV 14).
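To give a sense of the class of algorithm the study is working on, here is a minimal buffer-based adaptive-bitrate (ABR) selector sketch. This is purely illustrative and is not Puffer’s actual algorithm; the bitrate ladder and thresholds are hypothetical values chosen for the example.

```python
# Illustrative sketch of a buffer-based ABR selector: pick a higher
# bitrate when the playback buffer is comfortably full, a lower one
# when it is close to empty. Ladder values are hypothetical.
BITRATE_LADDER_KBPS = [400, 1200, 2500, 5000]

def pick_bitrate(buffer_seconds, low=5.0, high=15.0):
    """Map current buffer occupancy (seconds of video) to a bitrate rung.

    Below `low` seconds, play it safe with the lowest rung; above
    `high`, request the highest; interpolate linearly in between."""
    ladder = BITRATE_LADDER_KBPS
    if buffer_seconds <= low:
        return ladder[0]
    if buffer_seconds >= high:
        return ladder[-1]
    frac = (buffer_seconds - low) / (high - low)
    idx = 1 + int(frac * (len(ladder) - 2))
    return ladder[min(idx, len(ladder) - 1)]
```

Real players combine buffer occupancy with throughput estimates; the point of studies like Puffer is to learn better versions of exactly this decision function.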
Since it is part of a nonprofit academic study, it is 100% FREE and doesn’t have ads.</p> <p>The ultimate goal of the study is to discover new algorithms to reduce stalls, improve picture quality, reduce startup and channel-switching delays, and improve adaptive streaming. In our initial tests, we were able to switch channels nearly instantly, and streams quickly reached full quality.</p> <p>The streaming service is limited to just 500 participants at a time, so if you’re interested in trying it out, we’d suggest signing up for a <a href="" rel="noopener" target="_blank">free account</a>.</p> <p>The streams are all in 1080p 60 fps and work on Chrome, Firefox, and Edge on your computer or through the browser on Android phones and tablets. Unfortunately, due to resource constraints, there are no apps for streaming media players like Roku or Apple TV.</p> <p>All the work is open-source and can be <a href="" rel="noopener" target="_blank">viewed</a> on GitHub.</p> <nav class="buy-block"><a href="" target="_blank" rel="noopener" class="buy-block-cta no-underline">Watch Now</a></nav> Fri, 18 Jan 2019 01:33:29 +0000 GitHub dashboard UI refresh <div class="post__content markdown-body col-12 col-md-10 mb-2 mb-md-4" readability="39.838759689922"> <p>Logged in lately? You may notice some changes to your GitHub dashboard. We’ve updated dashboards to surface personalized repository suggestions, featuring a third column, new styling, and a full-width layout. GitHub is slowly moving to more full-width layouts and using one on the dashboard gave us the opportunity to highlight more projects for you to discover.</p> <p>As part of this release, the “Discover repositories” page has found a new home under Explore. Repository suggestions appear on Explore and in the new dashboard sidebar.</p> <p>Have feedback on the new dashboard?
<a href="">Let us know</a>.</p> </div> Fri, 18 Jan 2019 05:44:33 +0000 Going old school: how I replaced Facebook with email <p>In November 2017, I deactivated my account on Facebook. I didn’t leave Facebook for moral reasons back then but more because it was starting to feel like a waste of time and valuable brain cycles that I wanted to focus elsewhere. (I realize some people can’t leave Facebook completely for work or other personal reasons.) There were aspects of Facebook that I thought I would miss — the relative ease of use, keeping up with what is going on in lots of people’s lives, etc — so I decided to work out a new way of communicating that was completely Facebook-free after using Facebook heavily for many years. I haven’t missed it at all. This post is about what I did and what I learned.</p> <p><strong>My history with Facebook and why I left</strong></p> <p>I had been on Facebook for a very long time (11 years) and had accumulated hundreds of “friends” on the platform. In the early days, it was fun and I enjoyed keeping up with people. But I kept noticing a great paradox in my life: I felt like I didn’t have enough time for the people I cared about (including myself) yet I found myself scrolling through Facebook for hours each week peering into the lives of hundreds of people, some of whom I honestly didn’t know very well and never knew very well. My brain got unwittingly wrapped up in their dramas, their political arguments, their triumphs and tragedies. I saw children fighting with their parents in the comments, political battles, people working out places to meet up — activities usually reserved for the private sphere. When I really thought about it, observing all of this seemed like a really odd way to spend significant time and energy. 
There are many people out there who I like and would love to get to know better but it doesn’t mean I have to keep up with all of them at that level of granularity.</p> <p>The Facebook “privacy” model is also maddening and can be surprisingly dehumanizing. I remember once commenting on an old high school friend’s post to gently point out a factual error on a topic of which I had first-hand knowledge (note to self: <em>never</em> worth it on social media) and got attacked by someone I didn’t know for being a “fancy New York CEO.” I had developed a thick skin at that point (um, from being a CEO) so that specific incident didn’t bother me so much — but it reinforced something I had been thinking: “This environment is incredibly WEIRD. It’s supposed to be about human connection yet so much of what occurs is dehumanizing. Why do we <em>do this</em> to ourselves? This whole thing is very unhealthy.” So I decided to step away. (I stayed on Twitter because I find it fun, I learn a lot from smart people, following doesn’t have to be reciprocal, and there is zero pretense of intimacy, but that’s another story.)</p> <p><strong>Keeping up with close friends and family post-Facebook: a simple email list</strong></p> <p>People in my life didn’t have much to say about me leaving Facebook but I did get a few<br/>plaintive emails. <em>How will we keep up with you? How will we see photos of your child?</em> The implication was that without Facebook, all would be lost and we would lose contact forever. I’m exaggerating a little but I was legitimately surprised at the sense of finality that some people seemed to feel, as if there would be no other possibilities for us to connect to each other once I left.
Sure, Facebook might be the most <em>convenient</em> way to connect but I never thought of convenience as the hallmark of good relationships. That said, there were people I did want to stay in touch with so I came up with a plan: start a very small mailing list via <a href="">Mailchimp’s Forever Free plan</a> to stay in touch with very close friends and my family. I’ve sent three emails this year and it’s been a great overall experience. Here’s what I learned:</p> <p><strong>Lesson 1: Quite a few of the people who mattered most to me were not on Facebook.</strong></p> <p>My mailing list had to start somewhere. To assemble it, I looked at three things: 1) my list of Facebook friends, 2) my personal address book, and 3) the people I emailed over the past couple of years (by looking at my Sent folder). I have been obsessive about keeping contact info over the past 20+ years so my address book has about 3000 entries in it. As I looked at all of these sources, I ran across names of people with whom I had had meaningful relationships at some point in my life but who had never been on Facebook.
One example was one of my aunts, who is in her 80s, lives in Durham, NC, and would bring me homemade sausage biscuits at my dorm when I went to Duke (as a native Southerner, there is literally nothing more comforting than biscuits from a family member). Using Facebook had given me a false sense that “everyone is there” but she wasn’t. I didn’t realize until I asked around in the family that she had an iPad and was a regular email user. There were more like her than I thought. Some of the entries I had in my address book were outdated but I emailed some of those people with the address I had and heard back nearly every time. I had to track down a couple of people through mutual friends. All of this took more time than clicking “yes” on a friend request on Facebook but the effort was its own reward as it led me to very deliberately reconnect with people along the way. Put simply, using Facebook skews your contact towards other people who use Facebook and that can leave a lot of people you really care about out of your life.</p> <p><strong>Lesson 2: Email is more intimate and leads to better conversations.</strong></p> <p>On Facebook, I think most people realize that their posts can be seen by many people so a lot of thought goes into what they post. We’ve gotten so used to it that it seems unremarkable but there’s definitely a performative aspect when you are constantly communicating in front of all of your friends at the same time (seriously, isn’t this a weird way to communicate when you think about it?) Also, you never <em>really</em> know who is going to read what you write given the wacky permissions on Facebook (see my “fancy New York CEO” anecdote above). There is very little performative aspect to writing an email to a known list of people since you’re not (consciously or subconsciously) fishing for “likes” or other comments. My email list is broadcast-only but any replies go directly to me.
The replies I get are much more personal and informal than what I used to see on Facebook. There are no unwanted ads shoving themselves into the conversation. It’s more like old-school letter-writing: intimate, no outside observers, letting your guard down. I don’t sit there and think about what other people might think about what I’m writing — just the person who emailed me. To me, this is closer to what true friendship is like.</p> <p><strong>Lesson 3: You control the narrative completely in email which provides a much better opportunity for story-telling.</strong></p> <p>Social media platforms have algorithms that control what you see and the order in which you see it. As I put my emails together, I didn’t realize just how much control I had given up on Facebook until I experienced the absolute control of a personal email. Facebook pushes onto you the cognitive overhead of piecing together the specifics of your friends’ lives from a constant stream of posts, news, and ads. There is no beginning, middle, and end with Facebook. If, as Shakespeare wrote, all the world’s a stage and we are the players in the story, then Facebook is a play where the actors are constantly interrupted by the blare of news headlines or the urgency of advertising messages. Every word and pixel in the content of my email is controlled by me (with the minor exception of a few items in the Mailchimp footer, but no big deal). No ads, no news headlines. It’s hard to read things out of context because the email itself <em>is</em> the context.</p> <p><strong>Lesson 4: With email, I’m completely free to switch platforms and have lots of choices</strong></p> <p>Mailchimp is a great platform and a company I trust but if something changed, it would be very little hassle to migrate my list to another platform and company. Email has been around a long time and exporting and importing a list are very easy.
I never need to export the content I put into the system because it’s in my email inbox.</p> <p><strong>Lesson 5: Occasionally, I didn’t know what people were talking about in social situations</strong></p> <p>I’m occasionally in group conversations at parties and gatherings where people are talking knowingly about some experience most people in the group saw on Facebook already. I can usually figure things out by listening or asking questions. It’s also more fun <em>not to know</em> sometimes so you can, you know, talk about it in person like people used to do.</p> <p><strong>Lesson 6: It’s somewhat complicated to do it this way, but ultimately worth it</strong></p> <p>The overall setup and technical aspects are definitely more involved. I don’t assume that anyone wants email from me so I initially sent out an email to people with a link inviting them to the list. Following that link led to a double opt-in process that was difficult for some less tech-savvy people so I had to do some tech support along the way. (Some people either didn’t get the invite email or didn’t want to get my emails, and that’s cool, too!) I’m very comfortable with tech issues but I hadn’t done a lot of hands-on email work in a long time so I had to learn how to use the various features of Mailchimp, which is “easy” but still work. I had to have some sort of design and that required some work, but Mailchimp has reasonable templates you can use to start (besides, great design isn’t that important in this context). Writing the emails takes a lot of work in an absolute sense but pales in comparison to the time I was wasting on Facebook. I’m not subject to aggressive “growth hacking” from Mailchimp to send out my email and I completely forget about it for months at a time (as opposed to Facebook, which was always trying to burrow into my lizard brain to try to make me think about it).</p> <p>It may not be for everyone but I’m really happy with this new setup. 
If you have any questions about it, feel <a href="">free to ask in this thread on Twitter</a> and I’ll do my best to answer.</p> Thu, 17 Jan 2019 18:22:46 +0000 Oklahoma Department of Securities Leaked Millions of Files <p>The UpGuard Data Breach Research team can now disclose that it has discovered, reported, and secured a storage server with exposed data belonging to the Oklahoma Department of Securities, preventing any future malicious exploitation of this data. While file size and file count are imprecise tools for gauging the significance of an exposure, they at least provide familiar yardsticks for a sense of scale, and in this case, the publicly accessible data totalled three terabytes and millions of files. The contents of those files ran the gamut from personal information to system credentials to internal documentation and communications intended for the Oklahoma Securities Commission.</p> <p>The volume and reach of the exposed administrative and staff credentials pose a significant threat to the Oklahoma Department of Securities’ network integrity.</p> <h2>The Discovery</h2> <p>It is uncertain exactly how long this data store was configured for public access, but Shodan, a search engine for internet-facing IP addresses, first registered it being publicly accessible on November 30th, 2018. UpGuard analysts identified the server's potential for sensitive content on December 7 and notified Oklahoma on December 8. Public access was removed that day, preventing any further downloads by the means used by the UpGuard analysts.</p> <p>By the best available measures of the files’ contents and metadata, the data was generated over decades, with the oldest data originating in 1986 and the most recent modified in 2016.
The data was exposed via an unsecured rsync service at an IP address registered to the Oklahoma Office of Management and Enterprise Services, allowing any user from any IP address to download all the files stored on the server.</p> <p>The <a href="">Oklahoma Securities Commission</a> is part of the state’s Department of Securities. Like the federal Securities and Exchange Commission, they ensure that individuals and corporate entities trading securities are certified to do so and follow the regulations that protect citizens from fraud. The website for the Securities Commission has an UpGuard Cyber Risk score of 171 out of 950, indicating severe risk of breach. Among the issues lowering the website’s score is the use of the web server<a href="" rel=" noopener" target="_blank"> IIS 6.0, which reached end of life in July 2015,</a> meaning no updates to address any newly discovered vulnerabilities have been released in the last three and a half years. Of all the sites on the domain, <a href="" rel=" noopener" target="_blank"></a> has the worst risk score.</p> <h2>The Significance</h2> <p>In each report from the Data Breach Research team, we have to make decisions about how to present the findings to best convey their significance. In some cases, that structure is inherent in the data storage itself, as file directory or database schemas contain the organizational logic designed for the meaning of the data. In other cases, the files do not have a strong organizing logic or are heterogeneous over many different directories. For example, when there are directories for each of a business’ customers, the contents of those documents can vary widely.</p> <p>In this case, the scale of the data makes it impractical to perform any kind of exhaustive documentation of the exposed information. 
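The exposure mechanism described above, an rsync daemon that accepts connections from any IP address, is simple to reproduce conceptually. The sketch below builds the standard rsync commands an anonymous client could use against such a server; the host address, module name, and destination directory are hypothetical placeholders, not details from the incident.

```python
# Sketch: commands an anonymous client can run against an unsecured
# rsync daemon. Host/module/destination values below are hypothetical.

def list_modules_cmd(host):
    """`rsync rsync://host/` asks the daemon to list its exported modules."""
    return ["rsync", f"rsync://{host}/"]

def mirror_module_cmd(host, module, dest):
    """Recursively download a module's entire tree -- no credentials are
    required when the daemon is exposed without authentication."""
    return ["rsync", "-av", f"rsync://{host}/{module}/", dest]

# e.g. subprocess.run(mirror_module_cmd("203.0.113.5", "data", "./dump"))
```

Restricting the daemon with `hosts allow` in rsyncd.conf, or requiring authenticated users, prevents exactly this kind of anonymous bulk download.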
To achieve the research team’s goal of showing how cyber risk results from misconfigurations and digital supply chains, this report will approach the dataset from two cross-cutting angles: the types of digital artifacts and the types of data stored in them.</p> <h2>Artifact Types</h2> <p>One classification method is to sort the files by file type, with the file extension providing a straightforward method for identifying file type. This method doesn’t tell us much about the significance of the data – it’s quite possible to have database dumps with no sensitive data and jpgs with protected PII – but reviewing some of these file types helps serve the research team’s goal of highlighting the risk related to different artifact types. Awareness of the types of risk attached to different artifact types can help inform processes and procedures for handling those files to reduce cyber risk associated with their storage.</p> <p><img src=";name=ok%20sec%20files2.png" alt="ok sec files2" width="455" srcset=";name=ok%20sec%20files2.png 228w,;name=ok%20sec%20files2.png 455w,;name=ok%20sec%20files2.png 683w,;name=ok%20sec%20files2.png 910w,;name=ok%20sec%20files2.png 1138w,;name=ok%20sec%20files2.png 1365w" sizes="(max-width: 455px) 100vw, 455px"/></p> <p><em>Example of file types, file count, and total byte size for the files in the "archive" directory. The total file count is much larger in part because of the many files contained in compressed formats like the five Virtual Machine Disk files second from the top of the list.</em></p> <h3>Personal Storage Table (.pst) Archives</h3> <p>Storing backups of email mailboxes is a common practice required by data retention policies. The contents of those backups rarely include concentrated sensitive data, like in a user database, but over the course of thousands of emails people invariably reveal information intended to be private.
Plaintext passwords, images of identification cards, tax documents, and internal strategic deliberations – like in the <a href="">Facebook emails released to the public by the DCMS committee</a> – are all commonly found in .pst files. In the case of the OK Securities Commission exposure, email backups from 1999 to 2016 were present, with the largest and most recent reaching 16GB in size.</p> <p><img src=";name=e.png" alt="e" width="525" srcset=";name=e.png 263w,;name=e.png 525w,;name=e.png 788w,;name=e.png 1050w,;name=e.png 1313w,;name=e.png 1575w" sizes="(max-width: 525px) 100vw, 525px"/></p> <h3>Virtual Machine Disk Images</h3> <p>Sometimes the entire state of a machine needs to be stored as part of processes like employee offboarding, disaster recovery, or inventory cycling. When restored, virtual machine files can include all kinds of data. Files related to the business can include system credentials, personal information, and financial documents. Employees can also be personally exposed; people very commonly store some personal files on their work computers, and browser caches can include credentials for their personal accounts and services. The OK collection contained virtual machine backups of systems used within the Department of Securities.</p> <h2>Data Types</h2> <p>While file types govern how we interact with data in digital formats, the contents of the files are what is actually sensitive. In the course of our research we have developed a data taxonomy based on the types of entities affected by breaches. In this case we found examples of many of the types of data that might be leaked in a breach.</p> <h3>Personal Information</h3> <p>The rsync server contained multiple accounting, administration, and investigatory directory trees along with a few virtual machine backup drive files containing personal information.
Much of the exposed information was for individuals involved in the exchange of financial securities, sometimes operating under larger organizations, and sometimes acting as individuals. The documents varied in the number of individuals and the types of information describing them.</p> <ul readability="7.5"><li readability="1"> <p>One Microsoft Access database contained information on approximately ten thousand brokers, including their social security numbers.</p> </li> <li readability="8"> <p>A CSV with the partial name “IdentifyingInformation.csv” contained the date of birth, state of birth, country of birth, gender, height, weight, hair color, and eye color for over a hundred thousand brokers.</p> </li> <li readability="3"> <p>A database related to viators, a financial vehicle through which terminally ill patients can sell their life insurance benefits, contained information related to people with AIDS, including patient names and T cell counts.</p> <img src=";name=d.png" alt="d" width="1401" srcset=";name=d.png 701w,;name=d.png 1401w,;name=d.png 2102w,;name=d.png 2802w,;name=d.png 3503w,;name=d.png 4203w" sizes="(max-width: 1401px) 100vw, 1401px"/></li> </ul><em>Database of agents certified by Department of Securities identified by Social Security Number.</em><ul><li> <p><em><img src=";name=b.png" alt="b" width="1103" srcset=";name=b.png 552w,;name=b.png 1103w,;name=b.png 1655w,;name=b.png 2206w,;name=b.png 2758w,;name=b.png 3309w" sizes="(max-width: 1103px) 100vw, 1103px"/></em></p> <em>Database containing names and Social Security Numbers.</em></li> </ul> <h3><strong>System Credentials</strong></h3> <p>Exposed system credentials can carry the highest risk for large-scale abuse. Not only can credentials be used to gather PII, but in offering access to systems themselves they may be used to modify files – for example, to further distribute malware – or to gather information that is intentionally obscured in its storage format.
Passwords should be stored in a hashed or encrypted format, but access to the systems where users input those passwords could allow attackers to intercept them in plaintext. While exposed system credentials do not immediately impinge on individuals’ privacy in the same way that exposed personal information does, they carry systemic risk that may result in secondary breaches. </p> <ul readability="2"><li readability="-1"> <p>VNC credentials for remote access to OK Department of Securities workstations.</p> </li> <li readability="-1"> <p>A BlueExpress database of credentials for third parties submitting securities filings.</p> </li> <li readability="3"> <p>Spreadsheet of IT services with the usernames and passwords for accounts with Thawte, Symantec Protection Suite, Tivoli, and others.</p> </li> </ul><p><img src=";name=c.png" alt="c" width="1521" srcset=";name=c.png 761w,;name=c.png 1521w,;name=c.png 2282w,;name=c.png 3042w,;name=c.png 3803w,;name=c.png 4563w" sizes="(max-width: 1521px) 100vw, 1521px"/></p> <h2>Business Information</h2> <p>Like personally identifiable information, business documents can reveal more than intended about the interior of a corporate organization. Just as personal information can increase the risk of individuals being defrauded or deceived, business information can provide insight that attackers might use to fool employees by demonstrating familiarity with knowledge that only authorized persons would have. 
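The hashed storage the report calls for can be sketched with Python’s standard library. This is a minimal illustration of the practice, not anything used by the affected systems; the iteration count and salt size are reasonable example values.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=200_000):
    """Derive a PBKDF2-HMAC-SHA256 digest; store (salt, digest), never plaintext."""
    salt = salt if salt is not None else os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, digest, rounds=200_000):
    """Re-derive and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)
```

Note the caveat in the paragraph above still applies: hashing protects the stored credential, but an attacker with access to the system where users type the password can still capture it in plaintext.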
The Oklahoma rsync server contained an abundance of business information.</p> <ul readability="1"><li readability="-1"> <p>Training documents for personnel working on the Securities Commission.</p> </li> <li readability="-1"> <p>Commissioners’ email histories.</p> </li> <li readability="-1"> <p>Supporting files for Department of Securities investigations.</p> </li> <li readability="-1"> <p>Spreadsheets documenting the timeline for investigations by the FBI and the people they interviewed.</p> </li> </ul> <p><img src=";name=a.png" alt="a" width="1179" srcset=";name=a.png 590w,;name=a.png 1179w,;name=a.png 1769w,;name=a.png 2358w,;name=a.png 2948w,;name=a.png 3537w" sizes="(max-width: 1179px) 100vw, 1179px"/></p> <p><em>A message in one of the mailbox backups containing sensitive information</em></p> <h2>Conclusion</h2> <p>Businesses and organizations naturally accumulate stores of data, both because of the value of that data and to comply with retention policies. Creating backups is a good practice to increase resilience in the face of attacks like ransomware. Backups are also necessary for migrations to ensure data can be recovered as businesses adopt newer and more secure technologies. But as this case highlights, the final crucial step is to maintain control over every copy of those data stores. </p> <p>The good news is that, while the contents of the server extended over years, the known period of exposure was quite short. Thanks to the Data Breach Research team's techniques for quickly identifying risks, the exposure was identified only one week after it showed up in Shodan's catalogue of global IP addresses.
Shortening the window of exposure reduces the likelihood of other parties accessing the data and enables its owners to take responsive measures before the data is used maliciously.</p> Thu, 17 Jan 2019 01:23:10 +0000 New Ethereum Dev Tools from 0x <p name="ef62" id="ef62" class="graf graf--p graf-after--figure">Today we’re releasing four new tools to help Ethereum developers working on smart contracts. We’re excited to finally release these tools publicly, as we have been using them for a while internally at 0x and have shared them with a few other projects in the community.</p><p name="1535" id="1535" class="graf graf--p graf-after--p"><em class="markup--em markup--p-em">Check out </em><a href="" data-href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank"><strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">sol-compiler</em></strong></a><em class="markup--em markup--p-em">, </em><a href="" data-href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank"><strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">sol-trace</em></strong></a><em class="markup--em markup--p-em">, </em><a href="" data-href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank"><strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">sol-coverage</em></strong></a><em class="markup--em markup--p-em">, and </em><a href="" data-href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank"><strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">sol-profiler</em></strong></a><em class="markup--em markup--p-em"> to get started!</em></p><h3 name="93be" id="93be" class="graf graf--h3 graf-after--p">Ethereum tooling</h3><p name="d0e0" id="d0e0" class="graf graf--p graf-after--h3">At 0x, we embrace the UNIX tools philosophy: “each tool should only do one thing, and one thing well.” When we started building 0x 
protocol, Truffle was the de facto framework for projects. It worked reasonably well and we’re glad that it helped us get started. However, as we grew, our requirements evolved and we kept pushing the boundaries of what was possible with the framework. Truffle is the Ruby on Rails of Solidity development, and we found it hard to extend in ways its authors had not intended. That’s why we decided to build a set of modular tools that can be combined and used by others. None of the tools we are releasing today are 0x-specific. You can start using them in your project independently of your current stack (even with Truffle projects!).</p><p name="c5e8" id="c5e8" class="graf graf--p graf-after--p">By evaluating the existing toolset and breaking up our workflow into logical pieces, we’ve realized that Ethereum development consists of two main stages: compiling contracts into artifacts, and then doing things with those artifacts (e.g. running tests, debugging, verifying, or analyzing metrics). Unfortunately, there is no single artifact format used by all tools. Some that have reasonable adoption are Truffle’s artifacts and <a href="" data-href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">Solidity’s standard JSON output</a> (SSJO from now on). Truffle artifacts include a lot of compiler metadata that we never used and lack other pertinent pieces of data, and there is no way to customize which compiler metadata Truffle includes. At the same time, the SSJO format is configurable and can contain only the parts you want.</p><p name="8385" id="8385" class="graf graf--p graf-after--p">Want just the ABI? You got it! Need source maps, bytecode, and AST for your fancy debugger? We got you. And it’s what the compiler already outputs — so why reinvent the wheel? The only problem is that SSJO is based around compilation units, and one compilation unit can contain multiple smart contracts. 
We found it more convenient to work with artifacts that correspond 1–1 with a single contract (and its dependencies). So the artifact format we use is heavily inspired by SSJO and contains SSJO sections, but only the sections related to a specific smart contract. It also includes some additional information, such as deployment addresses keyed by network.</p><p name="20bb" id="20bb" class="graf graf--p graf-after--p">The good news is that <strong class="markup--strong markup--p-strong">you don’t need to adopt our artifact format in order to use three of our tools! </strong>Check out this <a href="" data-href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">Truffle example project</a> that uses three of the tools.</p><h4 name="fcc5" id="fcc5" class="graf graf--h4 graf-after--p"><a href="" data-href="" class="markup--anchor markup--h4-anchor" rel="noopener" target="_blank">@0x/sol-compiler</a></h4><p name="1e82" id="1e82" class="graf graf--p graf-after--h4">Sol-compiler is a wrapper around the Solidity compiler that makes it easier to compile entire projects. It produces the artifacts that can be used by all the other tools with zero configuration. Learn more <a href="" data-href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">here</a>. 
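The per-contract artifact described above might look roughly like this. This is a hypothetical sketch of the shape, not the exact @0x/sol-compiler schema; the field names (`compilerOutput`, `networks`) are illustrative assumptions.

```javascript
// Hypothetical per-contract artifact: the SSJO sections relevant to one
// contract, plus extra per-network deployment info that SSJO itself lacks.
const artifact = {
  contractName: 'Exchange',
  compilerOutput: {
    // Only the compiler output sections this contract's tooling needs.
    abi: [{ type: 'function', name: 'fillOrder', inputs: [], outputs: [] }],
    evm: {
      bytecode: { object: '0x6080...' },
      deployedBytecode: { object: '0x6080...' },
    },
  },
  // Deployment addresses keyed by network id (1 = mainnet, 42 = Kovan).
  networks: {
    1: { address: '0x0000000000000000000000000000000000000001' },
    42: { address: '0x0000000000000000000000000000000000000002' },
  },
};

// A consumer (debugger, coverage tool, deployer) pulls just what it needs:
const abi = artifact.compilerOutput.abi;
const mainnetAddress = artifact.networks[1].address;
console.log(abi.length, mainnetAddress);
```

Because everything here is a subset of what the compiler already emits, such an artifact stays small while remaining usable by any tool that understands SSJO sections.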
I’ll just tease you with the fact that it’s the only Solidity compiler I’m aware of that has a “watch mode”.</p><h4 name="178a" id="178a" class="graf graf--h4 graf-after--p"><a href="" data-href="" class="markup--anchor markup--h4-anchor" rel="noopener" target="_blank">@0x/sol-trace</a></h4><p name="c3bf" id="c3bf" class="graf graf--p graf-after--h4">I’m sure some of you will be familiar with this error message: <code class="markup--code markup--p-code">Error: VM Exception while processing transaction: revert.</code> Instead of that error, you can now get a proper stack trace, like in other modern languages.</p><figure name="9073" id="9073" class="graf graf--figure graf-after--p"/><h4 name="f940" id="f940" class="graf graf--h4 graf-after--figure"><a href="" data-href="" class="markup--anchor markup--h4-anchor" rel="noopener" target="_blank">@0x/sol-coverage</a></h4><p name="fd29" id="fd29" class="graf graf--p graf-after--h4">If it’s not tested, it’s broken. And how do you know if you’ve tested it? By measuring code coverage with sol-coverage.</p><figure name="26ab" id="26ab" class="graf graf--figure graf-after--p"/><h4 name="a602" id="a602" class="graf graf--h4 graf-after--figure"><a href="" data-href="" class="markup--anchor markup--h4-anchor" rel="noopener" target="_blank">@0x/sol-profiler</a></h4><p name="86c7" id="86c7" class="graf graf--p graf-after--h4">Gas is money. Both your money and your users’ money. So why not spend it efficiently and be aware of the bottlenecks and optimization opportunities? Sol-profiler lets you do just that.</p><figure name="0fa2" id="0fa2" class="graf graf--figure graf-after--p"/><p name="0235" id="0235" class="graf graf--p graf-after--figure">The last three tools need some information about your smart contracts to work. They can retrieve this information from Truffle artifacts, @0x/sol-compiler artifacts, or any other source with the help of a small and customizable adapter. 
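The adapter idea can be sketched as follows: each tool consumes one normalized contract-data shape, and a small function maps any artifact source onto it. This is a conceptual sketch only; the interface and field names here are hypothetical, not the actual @0x adapter API.

```javascript
// Conceptual artifact adapters: normalize different artifact layouts into
// one shape the downstream tools consume. Field names are illustrative.
function truffleAdapter(truffleArtifact) {
  // Truffle-style artifacts keep these fields at the top level.
  return {
    name: truffleArtifact.contractName,
    abi: truffleArtifact.abi,
    sourceMap: truffleArtifact.sourceMap,
  };
}

function solCompilerAdapter(artifact) {
  // A per-contract SSJO-style artifact nests them under compilerOutput.
  return {
    name: artifact.contractName,
    abi: artifact.compilerOutput.abi,
    sourceMap: artifact.compilerOutput.evm.bytecode.sourceMap,
  };
}

// Either source now feeds the same downstream tooling:
const normalized = truffleAdapter({
  contractName: 'Token',
  abi: [],
  sourceMap: '0:10:0',
});
console.log(normalized.name); // 'Token'
```

Writing a new adapter for a custom build pipeline is then just one more small mapping function, with no changes to the tools themselves.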
Check out any of the docs for more info.</p><h3 name="85d7" id="85d7" class="graf graf--h3 graf-after--p">Nitty-gritty technical details</h3><p name="6dab" id="6dab" class="graf graf--p graf-after--h3 graf--trailing">Some of you might be interested to know how these tools work, especially the revert traces, since all of these tools are trace-based. Each tool is written as a subprovider that you inject into your provider stack. It then eavesdrops on all your transaction calls and gas estimate requests. When a transaction executes, it fetches its trace and uses the source maps to map the data back to lines of Solidity source code or assembly. The traces also contain gas information, which is used by the profiler. If you simply ignore the gas information, you end up with code coverage (i.e. which lines of the source were executed). <br/> <br/>We have abstracted away most of the technical complexity so that developers can easily use these tools without knowing how they work under the hood. Contributions are always welcome and we hope you find these tools useful!</p> Fri, 18 Jan 2019 01:07:39 +0000 Leonid Logvinov Digitally cloning a 1914 Delage Type S engine block <aside class="aside" readability="0.62"><nav class="contextual-navigation"/> <div class="section--small" readability="7"> <p class="meta-heading">Last updated: 10 December 2018</p> </div> </aside><main class="section__list" readability="30.939695162359"><p class="summary ">The French-manufactured Delage Type S was a revolutionary racing car for its time. Only one vehicle exists today, and it has been carefully restored by its Australian owner. 
A failed engine block, however, looked set to hamper the project, until a clever additive manufacturing technique was employed to replicate the unique part.</p> <div class="section--med section--remove-top" readability="14"> <h2 class="display__title">The challenge</h2> <p class="lead">How to clone an engine?</p> <p>After 100 years, the unique engine block in the only surviving 1914 Delage Type S failed. The car's owner, who was undertaking a painstaking renovation to restore the old car to its former glory, had only the original damaged engine to work with. Remanufacturing the part without detailed plans was going to be difficult, and using traditional manufacturing methods was not an option.</p> </div> <div class="section--med section--remove-top" readability="24"> <h2 class="display__title">Our response</h2> <p class="lead">Rapid prototyping technology</p> <p>Not to be thwarted by traditional manufacturing limitations, the restoration team looked to new manufacturing techniques to solve their problem, and discovered that producing this unique part was indeed possible. </p> <p>Working with our partners at WYSIWYG 3D and Keech, a digital copy of the existing engine was made by scanning the original block to accurately capture its every detail. CAD modelling was then used to provide a virtual block that could be checked, simulated and modified as required. </p> <p> Once finished, the CAD designs were sent to Lab22 at CSIRO's Clayton site, where a sand mould was 3D printed on Australia's only sand printer – the Voxeljet VX1000. </p> </div> <div class="section--med section--remove-top" readability="27"> <h2 class="display__title">The results</h2> <p class="lead">Getting the old girl back on the road</p> <p> Back at Keech, the mould set was assembled and liquid iron was poured in a trial casting. 
Areas for improvement were identified and minor design modifications were made to ensure a near-perfect cast. The resulting reproduction casting, which was close to a perfect clone of the original, was approved for machining, and the final machined block was assembled with original parts before being fitted and tuned by Up the Creek Workshop. </p> <p>Thanks to additive manufacturing, the last surviving 1914 GP Delage is back in the race! </p> <div class="inline--full-width inline-image full-width-image larger-version"> <p>Delage restored 1978 © Phil Guilfoyle</p> </div> <p><em>The project was led by Phil Guilfoyle on behalf of the owner Stuart Murdoch. Phil assembled and led the team of WYSIWYG 3D (scanning the original engine block and preparing the 3D file for machining and casting), Keech 3D (mould design, CAD), Keech foundry (mould design and casting of the cast-iron block), Up The Creek Workshop (mechanical work on the Delage and machining of the reproduction engine) and CSIRO (advice on 3D sand printing and printing of the sand moulds using the Voxeljet VX1000). 
</em></p> </div> </main> Fri, 18 Jan 2019 03:20:29 +0000 Adding new DNA letters makes novel proteins possible <p><span data-caps="initial">T</span><small>HE FUZZY</small> specks growing on discs of jelly in Floyd Romesberg’s lab at Scripps Research in La Jolla look much like any other culture of <em class="Italic">E. coli</em>. But appearances deceive—for the <small>DNA</small> of these bacteria is written in an alphabet that has six chemical letters instead of the usual four.</p><p>Every other organism on Earth relies on a quartet of genetic bases: <small>A</small> (adenine), <small>C</small> (cytosine), <small>T</small> (thymine) and <small>G</small> (guanine). These fit together in pairs inside a double-stranded <small>DNA</small> molecule, <small>A</small> matching <small>T</small> and <small>C</small>, <small>G</small>. But in 2014 Dr Romesberg announced that he had synthesised a new, unnatural, base pair, dubbed <small>X</small> and <small>Y</small>, and slipped them into the genome of <em class="Italic">E. coli</em> as well.</p><p>Kept supplied with sufficient quantities of <small>X</small> and <small>Y</small>, the new cells faithfully replicated the enhanced <small>DNA</small>—and, crucially, their descendants continued to do so, too. 
Since then, Dr Romesberg and his colleagues have been encouraging their new, “semisynthetic” cells to use the expanded alphabet to make proteins that could not previously have existed, and which might have properties that are both novel and useful. Now they think they have found one. In collaboration with a spin-off firm called Synthorx, they hope to create a less toxic and more effective version of a cancer drug called interleukin-2.</p><p>In a normal cell, protein-making is a factory-like operation. <small>DNA</small> is first transcribed into <small>RNA</small>—also a string of bases, but a single, rather than a double strand. The <small>RNA</small>’s bases are then read, in groups of three known as codons, by a molecular machine called a ribosome. Sixty-one of the 64 possible codons correspond to one of 20 versions of a type of molecule called an amino acid. The other three act as “stop” signals. When a ribosome reads a codon, it links it with another molecule that carries the appropriate amino acid. The resulting string of amino acids is a protein.</p><p>This arrangement has long been exploited to make natural proteins for use as drugs. The potential of semisynthetic cells is to do something similar, but with an unnatural protein as the result. That would permit a wider range of properties.</p><p>Others have tried to achieve this by repurposing superfluous “stop” codons to encode novel amino acids, and one firm, Ambrx, has succeeded in doing so industrially. But this approach can add a maximum of only two amino acids to the existing set. Dr Romesberg’s process has already beaten that, with two published successes and another eight awaiting publication. His system could, in principle, provide 152 extra codons on top of the existing 64.</p><p>Dr Romesberg and Laura Shawver, Synthorx’s boss, picked interleukin-2 in particular to work on because of the mismatch between its potential and its reality. 
Though it is useless at low doses—actually suppressing the immune response to tumours rather than enhancing it—at high doses it is extremely effective at promoting such an anti-tumour response. Unfortunately, a side-effect is that it damages the walls of blood vessels, causing plasma to leak out. When this happens in the lungs, the patient may drown. As Dr Shawver puts it, some people have been cured of their cancers thanks to interleukin-2, “but they have to live to tell the tale”.</p><p>Interleukin-2 works by binding to, and stimulating the activity of, immune-system cells called lymphocytes. The receptor it attaches itself to on a lymphocyte’s surface is made of three units: alpha, beta and gamma. Immune cells with all three form a strong bond to interleukin-2, and it is this which triggers the toxic effect. If interleukin-2 can be induced to bind only to the beta and gamma units, however, the toxicity goes away. And that, experiments have shown, can be done by attaching polyethylene glycol (<small>PEG</small>) molecules to it.</p><p>The trick is to make the <small>PEG</small>s stick. This is where the extended genetic alphabet comes in. Using it, Synthorx has created versions of interleukin-2 to which <small>PEG</small>s attach themselves spontaneously in just the right place to stop them linking to the alpha unit. Tested on mice, the modified molecule has exactly the desired anti-tumour effects. Synthorx plans to ask permission for human trials later this year.</p><p>Dr Shawver sees <small>THOR</small>-707, as the new interleukin is known, as just the beginning. Synthorx already has synthetic versions of several others in the pipeline. And the wider possibilities are endless. The beauty of Dr Romesberg’s system is that it works without disrupting a cell’s normal function, making it possible to hijack cells’ factory-like properties to produce almost any “designer” protein. 
These might have properties not normally seen in organic molecules—semi-conductor proteins that can be woven into soft materials, perhaps.</p><p>Nor need those who worry about genetically modified organisms escaping from the lab fret about this particular system. Without a steady supply of <small>X</small> and <small>Y</small>, any escapee would not get far in the wild.</p> Thu, 17 Jan 2019 18:45:07 +0000