If you’re an expert on docker buildx, you may know all of this already, but if you’re not, hopefully you’ll find this useful.
We’ll make some assumptions in this tutorial:

- We want to build images for both the linux/amd64 and linux/arm64 architectures.
- We want to build the linux/arm64 image on the local machine, and linux/amd64 on a remote machine (accessible via SSH).

We’re going to accomplish this by creating a buildx builder instance for the local machine and architecture, then appending a configuration for another machine, and then activating that instance.
This is easy.
$ docker buildx create \
--name my-builder \
--platform linux/arm64
This will create our my-builder instance, defaulting it to using our local Docker setup for linux/arm64.

If we wanted, we could provide a comma-separated list of platforms that the local Docker should be handling (e.g., --platform linux/arm64,darwin/arm64).
(This doesn’t have to be arm64. I’m just using this as an example.)
$ docker buildx create \
--name my-builder \
--append \
--platform linux/amd64 \
ssh://<user>@<remotehost>
This will update our my-builder instance, informing it that linux/amd64 builds are supported and must go through the Docker service over SSH.

Note that we could easily add additional builder nodes if we wanted (whether for the same architectures or others) by repeating this command and choosing new --platform values and remote hosts.
Let’s take a look and make sure we have the builder setup we expect:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
my-builder * docker-container
my-builder0 desktop-linux inactive linux/arm64*
my-builder1 ssh://myuser@example.com inactive linux/amd64*
Yours may look different, but it should look something like that. You’ll also see default and any other builders you’ve set up.
Now we’re ready to use it:
$ docker buildx use my-builder
Just that easy.
If all went well, we can now safely build our image:
$ docker buildx build --platform linux/arm64,linux/amd64 .
You should see build output for each architecture stream by.
If you want to make sure the right builder is doing the right thing, you can re-run docker buildx ls in another terminal while the build runs. You should see running as the status for each node, along with an inferred list of other architectures that host can now build (pretty much anything it natively supports that you didn’t explicitly configure above).
You probably want to test your newly-built image locally, don’t you? When you run the build, you might notice this message:
WARNING: No output specified with docker-container driver. Build
result will only remain in the build cache. To push result image
into registry use --push or to load image into docker use --load
And if you try to start it up, you might notice it’s missing (or that you’re running a pre-buildx version of your image).
What you need to do is re-run docker buildx build with --load and a single platform, like so:
$ docker buildx build --load --platform linux/arm64 .
That’ll rebuild it (it’ll likely just reuse what it built before) and then make it available in your local Docker image store.
Hope that helps!
(I’ll talk about why we still choose to use Backbone-based code another time.)
We basically had this situation:
class BaseClass {
summary: string | (() => string) = 'BaseClass thing doer';
description: string | (() => string);
}
class MySubclass extends BaseClass {
get summary(): string {
return 'MySubclass thing doer';
}
// We'll just make this a standard function, for demo purposes.
description(): string {
return 'MySubclass does a thing!';
}
}
TypeScript doesn’t like that so much:
Class 'BaseClass' defines instance member property 'summary', but extended class 'MySubclass' defines it as an accessor.
Class 'BaseClass' defines instance member property 'description', but extended class 'MySubclass' defines it as instance member function.
Clearly it doesn’t want me to override these members, even though one of the allowed values is a callable returning a string! Which is what we wrote, darnit!!
So what’s going on here?
If you’re coming from another language, you might expect members defined on the class to be class members. For example, you might think you could access BaseClass.summary directly, but you’d be wrong, because these are instance members.
In other words, the above code actually executes as:
class BaseClass {
constructor() {
this.summary = 'BaseClass thing doer';
/*
* Note that description isn't set here, because we only
* typed it above. We didn't give it a value.
*/
}
}
So our types above say “this can be a string or a function, but it belongs to the instance and will be set when constructing the instance.”
Our MySubclass.summary and MySubclass.description don’t really exist on the class either. They exist on the prototype, as MySubclass.prototype.summary and MySubclass.prototype.description, and this is where we get into trouble.

Members on the instance take precedence over members on the prototype.
We can test this:
const instance = new MySubclass();
// This will print "BaseClass thing doer"
console.log(instance.summary);
// This will print 'MySubclass does a thing!'
console.log(instance.description())
Wait, hold up a sec. Why did that work for description()? Well, because BaseClass may have typed it, but it didn’t set it. So there was no instance property overriding our method.
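To see the precedence rule in isolation, here’s a small standalone example (the names here are hypothetical, not from our actual code) that manually adds members to a prototype the way Backbone-era code does:

```typescript
class Example {
    // Assigned per-instance (in the compiled constructor).
    label = 'instance value';
}

// Add members directly to the prototype, Backbone-style.
(Example.prototype as any).label = 'prototype value';
(Example.prototype as any).other = 'prototype only';

const e = new Example();

// The instance's own 'label' shadows the prototype's copy...
console.log(e.label);

// ...but 'other' exists only on the prototype, so lookup falls through to it.
console.log((e as any).other);
```

This prints "instance value" and then "prototype only" — which is exactly what bit us with summary, and why description (typed but never assigned) stayed overridable.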
Sadly, there’s no way to say “I’m typing this as a prototype member.” We can define a method in the base class, but we can’t say “prototype method or value.” So what is one to do?
Before we continue: Our situation’s a bit more complex than all this, and we’re doing some magic behind the scenes to make everything play nice. Some of you will be cringing a little here, but bear in mind, a lot of this is best applied when marrying the ES6 class design, prototype design, and TypeScript, in an effort to create a stable foundation.
Okay, let’s make some rules: any member we want to be overridable on the prototype must only be typed in the base class, never assigned a default value there (which is why the final version of BaseClass below no longer gives summary a default value).

We can’t just define a new type on the subclass. We have to first get rid of the typing on the parent, dynamically.
Ideally we would do this:
class MySubclass extends Omit<BaseClass, 'summary' | 'description'> {
...
}
Ah, but that gives us:
'Omit' only refers to a type, but is being used as a value here.
We can’t apply types to a parent class name. We have to give it a value, in the form of a typed function. Here’s how we’ll do that:
type Class<T = {}> = new (...args: any[]) => T;
function OmitClass<
T extends Class,
K extends keyof InstanceType<T>
>(
ParentClass: T,
...keys: [K, ...K[]]
): Class<Omit<InstanceType<T>, K>> {
return ParentClass as any;
}
// Which compiles down to:
function OmitClass(ParentClass, ...keys) {
return ParentClass;
}
We’re defining a function that takes in the class type and variable arguments of key names as parameters. It then returns the provided class with those keys omitted.
Here’s how we use it:
class MySubclass extends OmitClass(BaseClass, 'summary', 'description') {
get summary(): string {
return 'MySubclass thing doer';
}
description(): string {
return 'MySubclass does a thing!';
}
}
And hey, that works! No type errors.
TypeScript will auto-fill in those generics based on what we provided, as well, which keeps this from being too lengthy (though… that depends on how much you have to override).
class BaseClass {
summary: string | (() => string);
description: string | (() => string);
}
type Class<T = {}> = new (...args: any[]) => T;
function OmitClass<
T extends Class,
K extends keyof InstanceType<T>
>(
ParentClass: T,
...keys: [K, ...K[]]
): Class<Omit<InstanceType<T>, K>> {
return ParentClass as any;
}
class MySubclass extends OmitClass(BaseClass, 'summary', 'description') {
get summary(): string {
return 'MySubclass thing doer';
}
description(): string {
return 'MySubclass does a thing!';
}
}
const instance = new MySubclass();
// This prints "MySubclass thing doer"
console.log(_.result(instance, 'summary'))
// This prints "MySubclass does a thing!"
console.log(_.result(instance, 'description'))
It works!
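For reference, _.result comes from Underscore/Lodash: given an object and a key, it returns the value at that key, invoking it first if it’s a function. That’s why it reads both our getter and our method uniformly. A minimal sketch of the idea (not the real library implementation):

```typescript
function result(obj: any, key: string): any {
    const value = obj[key];

    // If the member is callable, invoke it with the right 'this';
    // otherwise, return the value as-is.
    return typeof value === 'function' ? value.call(obj) : value;
}

// A quick demonstration with a plain object:
const demo = {
    summary: 'plain value',
    description() {
        return 'computed value';
    },
};

console.log(result(demo, 'summary'));      // 'plain value'
console.log(result(demo, 'description'));  // 'computed value'
```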
Yeah and it is. There are some compromises here, and some design choices that have to be made. But until/unless JavaScript or TypeScript gains formal support for setting and typing prototype members inside the class definition, interfacing with existing prototype-based code can require tradeoffs.
At least now you know some rules and tricks to help keep this manageable. And, hopefully, create a stronger foundation for new code to grow.
That excitement ramped up this week with the release of ChatGPT, an extremely impressive chat-based AI system leveraging the best GPT has to offer.
I decided last night to take ChatGPT for a spin, to test its code-generation capabilities. And I was astonished by the experience.
Together, we built a simulation of bugs foraging for food in a 100×100 grid world, tracking essentials like hunger and life, reproducing, and dealing with hardships involving seasonal changes, natural disasters, and predators. All graphically represented.
We’re going to explore this in detail, but I want to start off by showing you what we built:
Also, you can find out more on my GitHub repository.
Before we dive into the collaborative sessions that resulted in a working simulation, let me share a few thoughts and tidbits about my experience:
With that, let me take you on a journey.
Please note: What follows is long. Feel free to skip around.
There’s a lot of text here in image form, with code. I’m trying to transcribe it from screenshots using OCR and editing the results, but I may have missed some syntax errors here or there.
I’m also including both screenshots and text descriptions. I want this to be accessible, but I think some people might want to see the actual interactions.
This was the start of my experiment, before I even knew what I wanted to do. I gave it one prompt:
Prompt: “write some Python code for emulating hunger”
Answer:
Here is an example of Python code that could be used to simulate hunger:
import time

hunger_level = 10

while hunger_level > 0:
    print("I'm feeling hungry!")
    hunger_level -= 1
    time.sleep(1)

print("I'm not hungry anymore.")

In this code, we define a variable called hunger_level that represents the current hunger level of the person. We then use a while loop to repeatedly print a message and decrement the hunger_level variable until it reaches 0. This simulates the process of feeling hungry and then gradually feeling less hungry over time. The time.sleep() function is used to pause the program for one second between each iteration of the loop, so that the messages are printed at a more realistic pace. Of course, this is just a simple example, and there are many other ways that you could approach this problem in Python.
I next wanted a concept of eating. The program was simple at this point, so I didn’t know how it would approach this, but I decided to be vague in giving it direction. I wanted to see what it came up with.
Prompt: “update that code to let the character eat to decrease hunger”
Answer:
Here is an updated version of the Python code that includes the ability for the character to eat and decrease their hunger level:
import time

hunger_level = 10

while hunger_level > 0:
    print("I'm feeling hungry!")
    choice = input("Do you want to eat? (y/n)")

    if choice == 'y':
        print("Eating...")
        hunger_level -= 1
    else:
        print("I'll wait.")

    time.sleep(1)

print("I'm not hungry anymore.")

In this updated version of the code, we use an if statement inside the while loop to check whether the user wants to eat. If the user inputs ‘y’, the character will eat and their hunger level will decrease. Otherwise, the character will wait and their hunger level will remain the same. This allows the user to control the character’s hunger level by deciding when to eat. As before, the time.sleep() function is used to pause the program for one second between each iteration of the loop, so that the messages are printed at a more realistic pace.
Human input wasn’t good enough. Let’s make this a creature, a bug, that has to go find food.
I gave it the following prompt:
Prompt: “now update that to have hunger be part of an entity class for a bug, and allow the bug to move and find food”
Answer:
Here is an updated version of the Python code that includes a Bug class with a hunger attribute and methods for moving and finding food:

import time

class Bug:
    def __init__(self):
        self.hunger = 10
        self.position = (0, 0)

    def move(self):
        # Move the bug to a new random position
        self.position = (self.position[0] + randint(-1, 1),
                         self.position[1] + randint(-1, 1))

    def find_food(self):
        # Decrease the bug's hunger level if it finds food
        if random() < 0.1:
            self.hunger -= 1

bug = Bug()

while bug.hunger > 0:
    print("I'm feeling hungry!")
    bug.move()
    bug.find_food()
    time.sleep(1)

print("I'm not hungry anymore.")

In this code, we define a Bug class that has a hunger attribute to represent the bug’s hunger level, and a position attribute to represent its current location. The __init__() method is used to initialize these attributes when a new Bug object is created. The move() method is used to move the bug to a new random position, and the find_food() method is used to decrease the bug’s hunger level if it finds food. The while loop is used to simulate the bug’s search for food. Each iteration of the loop, the bug moves to a new position and checks for food. If the bug finds food, its hunger level is decreased. If it doesn’t find food, its hunger level remains the same. This continues until the bug’s hunger level reaches 0, at which point it is no longer hungry.
The above code made the act of finding food a random thing. On each random movement, it’d look for food, and based on the roll of a die, it’d find it.
I wanted more, so I told it to make a grid for the bug to live on and spawn food on it.
Prompt: “update the code to spawn food on a 100×100 grid for the bug to find”
Answer:
Here is an updated version of the Python code that includes a spawn_food() function that generates food items on a 100×100 grid for the bug to find:

import time
from random import randint, random

class Bug:
    def __init__(self):
        self.hunger = 10
        self.position = (0, 0)

    def move(self):
        # Move the bug to a new random position
        self.position = (self.position[0] + randint(-1, 1),
                         self.position[1] + randint(-1, 1))

    def find_food(self, food_grid):
        # Decrease the bug's hunger level if it finds food
        x, y = self.position

        if food_grid[x][y] > 0:
            self.hunger -= 1
            food_grid[x][y] -= 1

def spawn_food(food_grid):
    # Spawn food on a 100x100 grid
    for i in range(100):
        for j in range(100):
            food_grid[i][j] += randint(0, 10)

bug = Bug()
food_grid = [[0] * 100 for _ in range(100)]

while bug.hunger > 0:
    print("I'm feeling hungry!")
    bug.move()
    bug.find_food(food_grid)
    spawn_food(food_grid)
    time.sleep(1)

print("I'm not hungry anymore.")

In this updated version of the code, we define a spawn_food() function that generates food items on a 100×100 grid. The spawn_food() function uses two for loops to iterate over the grid and randomly assign food values to each cell. The find_food() method of the Bug class is updated to check whether there is food at the bug’s current position, and to decrease the bug’s hunger level if there is. The while loop is updated to call the spawn_food() function at each iteration, so that new food items are generated on the grid as the bug moves around. This allows the bug to search for food and find it as it moves around the grid.
That worked. Now I wanted it to actually search for the food. So I gave it a very simple request.
Prompt: “give it path finding to find the nearest food”
Answer:
Here is an updated version of the Python code that includes path finding for the bug to find the nearest food on the grid:
import time
from random import randint, random
from heapq import heappop, heappush

class Bug:
    def __init__(self):
        self.hunger = 10
        self.position = (0, 0)

    def move(self):
        # Use path finding to move to the nearest food
        x, y = self.position
        distances = [(0, 1), (0, -1), (1, 0), (-1, 0)]
        heap = [(0, self.position)]
        visited = set()

        while heap:
            dist, pos = heappop(heap)

            if pos in visited:
                continue

            visited.add(pos)

            if food_grid[pos[0]][pos[1]] > 0:
                # Found food, move to this position
                self.position = pos
                return

            for dx, dy in distances:
                new_pos = (pos[0] + dx, pos[1] + dy)

                if 0 <= new_pos[0] < 100 and 0 <= new_pos[1] < 100:
                    heappush(heap, (dist + 1, new_pos))

    def find_food(self, food_grid):
        # Decrease the bug's hunger level if it finds food
        x, y = self.position

        if food_grid[x][y] > 0:
            self.hunger -= 1
            food_grid[x][y] -= 1

def spawn_food(food_grid):
    # Spawn food on a 100x100 grid
    for i in range(100):
        for j in range(100):
            food_grid[i][j] += randint(0, 10)

bug = Bug()
food_grid = [[0] * 100 for _ in range(100)]

while bug.hunger > 0:
    print("I'm feeling hungry!")
    bug.move()
    bug.find_food(food_grid)
    spawn_food(food_grid)
    time.sleep(1)

print("I'm not hungry anymore.")

In this updated version of the code, we use a path finding algorithm to help the bug find the nearest food on the grid. The move() method of the Bug class is updated to use a breadth-first search algorithm to find the nearest food. The algorithm uses a heap to store the distances and positions of the cells that the bug has visited, and it expands the search in increasing order of distance. When the algorithm finds a cell with food, the bug moves to that position and the search is terminated. This allows the bug to efficiently find the nearest food on the grid, even if the food is not directly adjacent to its current position. The while loop and the other parts of the code remain the same, so that the bug continues to move, find food, and spawn new food on the grid as before.
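As an aside: despite the description, what ChatGPT wrote here is really a uniform-cost (Dijkstra-style) search, which behaves like breadth-first search on a unit-cost grid. The core idea can be isolated into a small, testable helper (find_nearest is my name for it, not part of the transcript):

```python
from heapq import heappop, heappush

def find_nearest(grid, start):
    # Search outward from start in increasing order of distance,
    # returning the position of the nearest non-zero cell (or None).
    size = len(grid)
    heap = [(0, start)]
    visited = set()

    while heap:
        dist, pos = heappop(heap)

        if pos in visited:
            continue

        visited.add(pos)

        if grid[pos[0]][pos[1]] > 0:
            return pos

        for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            new_pos = (pos[0] + dx, pos[1] + dy)

            if 0 <= new_pos[0] < size and 0 <= new_pos[1] < size:
                heappush(heap, (dist + 1, new_pos))

    return None

grid = [[0] * 5 for _ in range(5)]
grid[0][3] = 1  # 3 steps away from (0, 0)
grid[4][4] = 1  # 8 steps away

print(find_nearest(grid, (0, 0)))  # (0, 3)
```

Because the heap pops cells in increasing order of distance, the first food cell popped is guaranteed to be the nearest one.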
Editor’s note: I’m going to simplify the code samples from here on out, only showing the changed areas instead of the whole files. Later, ChatGPT got better at doing this in our interactions. Also note, ChatGPT started dropping the descriptions of its changes for many of these interactions.
One bug wasn’t enough. We need more bugs!
Prompt: “add multiple bugs, all competing for the same food”
Answer:
Here is an updated version of the Python code that includes multiple bugs competing for the same food on the grid:
# ...

num_bugs = 5
bugs = [Bug() for _ in range(num_bugs)]
food_grid = [[0] * 100 for _ in range(100)]

while any(bug.hunger > 0 for bug in bugs):
    for bug in bugs:
        if bug.hunger > 0:
            print("I'm feeling hungry!")
            bug.move()
            bug.find_food(food_grid)

    spawn_food(food_grid)
    time.sleep(1)

print("I'm not hungry anymore.")
The bugs were immortal, and nobody likes immortal bugs. We had to deal with this.
Now at this point, my session was acting up. Answers were being truncated prematurely. I eventually realized I needed to start a new session, so I copy/pasted what we had into a new AI session and gave it my new prompt. It picked up right where we left off.
Prompt: “Update this to add a concept of “life” to the bugs, decreasing if they can’t find food, and killing them if they hit 0”
Answer:
To add a concept of “life” to the bugs, we can add a new life attribute to the Bug class and decrease it by 1 every time the bug doesn’t find food. We can also check if the bug’s life has reached 0 before moving and finding food, and stop the simulation if all bugs are dead.

Here is one way to update the code to add “life” to the bugs:

# ...

class Bug:
    def __init__(self):
        self.hunger = 10
        self.life = 10
        self.position = (0, 0)

    # ...

    def find_food(self, food_grid):
        # Decrease the bug's hunger and life levels if it finds food
        x, y = self.position

        if food_grid[x][y] > 0:
            self.hunger -= 1
            self.life -= 1
            food_grid[x][y] -= 1

# ...

while any(bug.life > 0 for bug in bugs):
    for bug in bugs:
        if bug.life > 0:
            print("I'm feeling hungry!")
            bug.move()
            bug.find_food(food_grid)
        else:
            print("I'm dead :(")

        # Decrease the bug's life if it didn't find food
        if bug.hunger == 10:
            bug.life -= 1

    spawn_food(food_grid)
    time.sleep(1)

print("All bugs are dead.")

Note that in the find_food() method, we decrease the bug’s hunger and increase its life by 1 if it finds food. In the main simulation loop, we decrease the bug’s life by 1 if it didn’t find food (i.e. if its hunger level is still 10 after moving and finding food). This means that
(At this point, we hit a limit on the response, and I never got to find out what that ultimately meant. Oh well.)
How do I know any of this actually works? It could be full of lies. I needed to see what the bugs were doing. So I asked it for graphics, using Python’s turtle module.
Prompt: “Update the code to provide a visual reproduction of the simulation using Python’s turtle module.”
Answer:
# ...
import turtle

# ...

num_bugs = 5
bugs = [Bug() for _ in range(num_bugs)]
food_grid = [[0] * 100 for _ in range(100)]

# Initialize the turtle screen
screen = turtle.Screen()
screen.title("Bug Simulation")

# Create turtle objects for the bugs
bug_turtles = [turtle.Turtle() for _ in range(num_bugs)]

for t in bug_turtles:
    t.color("red")
    t.shape("turtle")

while any(bug.life > 0 for bug in bugs):
    for bug, t in zip(bugs, bug_turtles):
        if bug.life > 0:
            print("I'm feeling hungry!")
            bug.move()
            bug.find_food(food_grid)

            # Update the turtle's position on the screen
            t.goto(bug.position[0] * 10, bug.position[1] * 10)
        else:
            print("I'm dead :(")

    # ...
That actually worked! But all the bugs looked the same. They were all red, and all started at the same location, in the center of the screen at 0, 0. So let’s fix that.
Prompt: “Give each turtle a different color. Make sure they start at random locations.”
Answer:
# ...

class Bug:
    def __init__(self):
        # ...
        self.position = (randint(0, 99), randint(0, 99))

# ...

# Create turtle objects for the bugs
bug_turtles = [turtle.Turtle() for _ in range(num_bugs)]
colors = ["red", "blue", "green", "orange", "purple"]

for t, color in zip(bug_turtles, colors):
    t.color(color)
    t.shape("turtle")

# ...
It ran and it worked! Uhhhh kind of. It uncovered bugs. And not the ones I wanted. So we entered everyone’s favorite phase of software development: debugging.
First: The coordinate system was wrong. In Turtle, (0, 0) is the center of the screen, not the top-left. The code wasn’t accounting for this. So I simply informed ChatGPT.
Note that at this point, the AI decided giving me more contextual code snippets was the way to go.
Prompt: “Fix the coordinate system. Turtle uses (0, 0) as the center of the screen, not the top-left.”
Answer:
To fix the coordinate system, we need to adjust the code so that the origin is at the center of the screen. This can be done by changing the values of the x and y coordinates in the goto method of the turtle objects.

Here is how the code should look after making the changes:

# Update the turtle's position on the screen
t.goto(bug.position[0] * 10 - 500, bug.position[1] * 10 - 500)

In the above code, the x and y coordinates are multiplied by 10 to scale them up, and then shifted by 500 units in the negative direction to move them to the center of the screen.
I hand-applied this to our code and verified the fix.
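The transformation is easy to get wrong, so it’s worth spelling out. With 10-pixel cells and a 100×100 grid, the grid-to-screen mapping looks like this (to_screen is a hypothetical helper, not part of the actual simulation code):

```python
def to_screen(x, y, cell_size=10, offset=500):
    # Map a grid cell to turtle screen coordinates, where (0, 0)
    # is the center of the window rather than a corner.
    return (x * cell_size - offset, y * cell_size - offset)

print(to_screen(0, 0))    # (-500, -500) -- the grid origin
print(to_screen(50, 50))  # (0, 0) -- the middle of the grid lands at center
print(to_screen(99, 99))  # (490, 490)
```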
I noticed a new problem: Food was spawning every tick! Bugs had no incentive to forage. I suggested we only spawn food every 200 ticks.
Prompt: Update the code to only spawn food every 200 cycles of the main loop.
Answer:
To update the code so that food is only spawned every 200 cycles of the main loop, we can add a counter variable that keeps track of the number of iterations of the loop, and then use the modulo operator % to determine if food should be spawned.

Here is how the updated code should look:

# ...

# Add a counter variable to keep track of the number of iterations
counter = 0

while any(bug.life > 0 for bug in bugs):
    # ...

    # Only spawn food every 200 iterations
    if counter % 200 == 0:
        spawn_food(food_grid)

    counter += 1
    time.sleep(1)

# ...

In the above code, the counter variable is initialized to 0 and then incremented by 1 at the end of each iteration of the main loop. The spawn_food function is only called if the counter variable is divisible by 200 (i.e., if counter % 200 == 0 is True). This ensures that food is only spawned every 200 iterations of the main loop.
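The counter-plus-modulo pattern is easy to sanity-check on its own:

```python
# Ticks on which food would spawn during the first 1,000 iterations.
spawn_ticks = [tick for tick in range(1000) if tick % 200 == 0]

print(spawn_ticks)  # [0, 200, 400, 600, 800]
```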
I realized we had visuals for the bugs, but not the food. I could see the bugs moving around now, but couldn’t see what the food situation looked like. So, I asked for visuals.
Prompt: “Update the code to show a visual indicator of each food. Give it to me as a set of changes I should make.”
Answer:
To update the code to show a visual indicator of each food, we can use the turtle module to draw a small circle at the location of each food item on the grid. Here is how the updated code should look:

# ...

# Create a turtle object for the food
food_turtle = turtle.Turtle()
food_turtle.color("yellow")
food_turtle.shape("circle")

# ...

while any(bug.life > 0 for bug in bugs):
    # ...

    # Only spawn food every 200 iterations
    if counter % 200 == 0:
        spawn_food(food_grid)

    # Draw a circle at the location of each food item
    for i in range(100):
        for j in range(100):
            if food_grid[i][j] > 0:
                food_turtle.goto(i * 10 - 500, j * 10 - 500)
                food_turtle.stamp()

    # ...

# ...

In the above code, a new turtle object called food_turtle is created and initialized with the circle shape and the yellow color. Then, in each iteration of the main loop, the for loops iterate over the entire food grid and draw a circle at the location of each food item using the stamp method of the food_turtle object. This shows a visual indicator of each food item on the screen.
Oh no, food was everywhere! That random chance of food spawn was just too much. So I had it tone that down a bit. Also, I didn’t like the yellow circles for food, so I asked for a change there as well.
Prompt: “Update food spawning to make food more rare. At most, only about 2% of the board should ever have food. Also, make food pink squares instead of yellow circles. You can show me the lines to modify, instead of the full file.”
Answer:
To update the code to make food more rare and change its appearance, we can modify the spawn_food function and the food_turtle object. Here are the changes to make:

def spawn_food(food_grid):
    # Spawn food on a 100x100 grid
    for i in range(100):
        for j in range(100):
            # Spawn food with probability 0.02
            if random() < 0.02:
                food_grid[i][j] += 1

# Create a turtle object for the food
food_turtle = turtle.Turtle()
food_turtle.color("pink")
food_turtle.shape("square")

In the updated spawn_food function, food is only spawned with probability 0.02 (i.e., 2%) instead of a random number between 0 and 10. This makes food much more rare on the grid.

In the updated food_turtle object, the color attribute is set to "pink" and the shape attribute is set to "square" instead of "circle". This changes the appearance of the food from yellow circles to pink squares.
Turtle defaults to showing lines when you move, and also animates everything. This made the simulation both very visually noisy, and far too slow to really work with.
Now, this took a bit of doing. We had to iterate a few times on some stuff. Turn off a few drawing methods, draw food less often. This section has two prompts.
Prompt: “Remove line drawing when moving bug turtles. Update the drawing of the food turtle to not animate or have any delays. It must be fast. Don’t show me the whole file. Just relevant lines of new code.”
Answer:
for bug, t in zip(bugs, bug_turtles):
    if bug.life > 0:
        print("I'm feeling hungry!")
        bug.move()
        bug.find_food(food_grid)

        # Update the turtle's position on the screen without drawing a line
        t.penup()
        t.goto(bug.position[0] * 10 - 500, bug.position[1] * 10 - 500)
        t.pendown()

# Only spawn food every 200 iterations
if counter % 200 == 0:
    spawn_food(food_grid)

# Draw a square at the location of each food item without animation or delay
food_turtle.speed(0)

for i in range(100):
    for j in range(100):
        if food_grid[i][j] > 0:
            food_turtle.goto(i * 10 - 500, j * 10 - 500)
            food_turtle.stamp()
Prompt: “Update to only draw food when respawning food or when starting the simulation. Also, don’t show lines when drawing food.”
Answer:
# Only draw food when respawning or when starting the simulation
if counter % 200 == 0 or counter == 0:
    # Only spawn food every 200 iterations
    spawn_food(food_grid)

    # Draw a square at the location of each food item without animation or delay
    food_turtle.speed(0)
    food_turtle.penup()

    for i in range(100):
        for j in range(100):
            if food_grid[i][j] > 0:
                food_turtle.goto(i * 10 - 500, j * 10 - 500)
                food_turtle.stamp()

    food_turtle.pendown()
This was starting to look pretty good! But I noticed that food stuck around after being eaten. I pointed this out, and got some code back, but it had some of the same performance problems we just fixed. So I gave it some feedback about optimizations — just like in a code review! — and in the end, we had some workable code.
Prompt: “Hide food once it’s been fully eaten.”
Answer:
# Only draw food when respawning or when starting the simulation
if counter % 200 == 0 or counter == 0:
    # ...
else:
    # Hide food once it's been fully eaten
    for i in range(100):
        for j in range(100):
            if food_grid[i][j] == 0:
                food_turtle.goto(i * 10 - 500, j * 10 - 500)
                food_turtle.clearstamp(0)
Prompt: “Apply the same pen and speed optimizations used when drawing turtles to hiding them.”
Answer:
# Only draw food when respawning or when starting the simulation
if counter % 200 == 0 or counter == 0:
    # ...
else:
    # Hide food once it's been fully eaten
    food_turtle.speed(0)
    food_turtle.penup()

    for i in range(100):
        for j in range(100):
            if food_grid[i][j] == 0:
                food_turtle.goto(i * 10 - 500, j * 10 - 500)
                food_turtle.clearstamp(0)

    food_turtle.pendown()
Prompt: “That’s slow. Only hide food if it’s 0 but wasn’t previously 0.”
Answer:
# Only draw food when respawning or when starting the simulation
if counter % 200 == 0 or counter == 0:
    # ...
else:
    # Hide food once it's been fully eaten
    food_turtle.speed(0)
    food_turtle.penup()

    for i in range(100):
        for j in range(100):
            if food_grid[i][j] == 0 and prev_food_grid[i][j] > 0:
                food_turtle.goto(i * 10 - 500, j * 10 - 500)
                food_turtle.clearstamp(0)

    food_turtle.pendown()

prev_food_grid = food_grid.copy()
Hiding of food wasn’t working. The AI made some bad assumptions about that clearstamp() method. I wasn’t sure what was going wrong at first (I was starting to get a bit cross-eyed doing this), but we tried alternate drawings at the food location, and we tried avoiding over-stamping of food.
These didn’t work, but some other issues were found in the process. And, in step 17, we’ll work on an actual fix.
Prompt: “clearstamp isn’t working. Try something else to clear that square.”
Answer:
# ...
if food_grid[i][j] == 0 and prev_food_grid[i][j] > 0:
    food_turtle.goto(i * 10 - 500, j * 10 - 500)
    food_turtle.shape("blank")
    food_turtle.stamp()
    food_turtle.shape("square")
# ...
Prompt: “The food stamp is being drawn too many times on the same position. Update food drawing to only stamp if there wasn’t previously food before.”
Answer:
# ...

if counter % 200 == 0 or counter == 0:
    # ...
    for i in range(100):
        for j in range(100):
            if food_grid[i][j] > 0 and prev_food_grid[i][j] == 0:
                # ...

# ...
Prompt: “I’m getting an undefined variable error for prev_food_grid in the food drawing loop. I think this should be initialized before the main loop.”
Answer:
# Initialize prev_food_grid with the initial food grid
prev_food_grid = food_grid.copy()

# Main simulation loop
# ...
Prompt: “I found a problem. prev_food_grid is a shallow copy, so the internal arrays aren’t being copied.”
Answer:
# Initialize prev_food_grid with the initial food grid
prev_food_grid = [row.copy() for row in food_grid]

# Main simulation loop
while any(bug.life > 0 for bug in bugs):
    # ...

    # Make a deep copy of food_grid to prev_food_grid
    prev_food_grid = [row.copy() for row in food_grid]

# ...
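This is a classic Python pitfall, and it’s easy to demonstrate in isolation: list.copy() duplicates only the outer list, so both grids end up sharing the same row objects.

```python
grid = [[0] * 3 for _ in range(3)]

shallow = grid.copy()                # new outer list, same inner rows
deep = [row.copy() for row in grid]  # new outer list, new inner rows

grid[0][0] = 5

print(shallow[0][0])  # 5 -- the shallow copy shares the mutated row
print(deep[0][0])     # 0 -- the per-row copy is unaffected
```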
By this point, my AI session started to act up again, so I started a new one, provided the current code and explained where we left off, and then began prompting again to try to fix this properly.
This new session used different syntax highlighting, and described the locations of changes differently. Interesting.
Prompt: “There’s a problem with clearing food. I think we need to store the locations of stamp IDs when we draw new food, and then clear the stamps when hiding food. Can you show me the relevant changes for that? I don’t need to see the whole file.”
Answer:
To fix the problem with clearing food in the simulation, you can store the locations of the food stamps in a list, and then iterate over this list and clear the stamps when hiding food. Here is an example of how this could be implemented:
# Add a list to store the locations of the food stamps
food_stamp_locations = []

# Inside the main simulation loop:

# Only draw food when respawning or when starting the simulation
if counter % 200 == 0 or counter == 0:
    # ...
    for i in range(100):
        for j in range(100):
            if food_grid[i][j] > 0 and prev_food_grid[i][j] == 0:
                food_turtle.goto(i * 10 - 500, j * 10 - 500)

                # Stamp the food at the current location
                stamp_id = food_turtle.stamp()

                # Add the stamp location to the list
                food_stamp_locations.append((i, j, stamp_id))

# Clear the food stamps that are no longer needed
food_turtle.speed(0)
food_turtle.penup()

for i, j, stamp_id in food_stamp_locations:
    if food_grid[i][j] == 0 and prev_food_grid[i][j] > 0:
        food_turtle.clearstamp(stamp_id)

This code will store the location and stamp ID of each food item when it is drawn on the screen, and then clear the stamps when the food disappears from the grid.
Prompt: “It needs to clear every tick of the main loop, not just when respawning food or in the first tick.”
Answer:
To make the code clear the food stamps every tick of the main simulation loop, you can move the code that clears the stamps to the end of the loop, after all of the bugs have moved and found food. Here is an example of how this could be implemented:
# Main simulation loop
while any(bug.life > 0 for bug in bugs):
    # ...

    # Clear the food stamps that are no longer needed
    food_turtle.speed(0)
    food_turtle.penup()

    for i, j, stamp_id in food_stamp_locations:
        if food_grid[i][j] == 0 and prev_food_grid[i][j] > 0:
            food_turtle.clearstamp(stamp_id)

    # Only draw food when respawning or when starting the simulation
    # ...

In this code, the food stamps are cleared at the end of each iteration of the main simulation loop, after all of the bugs have moved and found food. The code that spawns and draws food is only executed every 200 iterations, or at the beginning of the simulation. This will ensure that the food stamps are cleared every tick of the main loop, while only spawning and drawing food at regular intervals.
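One detail the snippets above gloss over: cleared stamps are never removed from food_stamp_locations, so every later pass tries to clear them again. If I were polishing this by hand, I'd prune the list as it's cleared. Here's a runnable sketch of that idea, using a hypothetical FakeTurtle stand-in (and made-up grid values) so it works without a display:

```python
# A stand-in for the turtle, so the bookkeeping can run headless.
# FakeTurtle and the sample values below are hypothetical.
class FakeTurtle:
    def __init__(self):
        self.cleared = []

    def clearstamp(self, stamp_id):
        self.cleared.append(stamp_id)


food_turtle = FakeTurtle()

# 2x2 grid: only cell (0, 1) still has food.
food_grid = [[0, 5], [0, 0]]
food_stamp_locations = [(0, 0, 101), (0, 1, 102), (1, 0, 103)]

# Clear stamps for empty cells, and prune them from the list so we
# never try to clear the same stamp twice.
still_stamped = []

for i, j, stamp_id in food_stamp_locations:
    if food_grid[i][j] == 0:
        food_turtle.clearstamp(stamp_id)
    else:
        still_stamped.append((i, j, stamp_id))

food_stamp_locations = still_stamped

print(food_turtle.cleared)       # [101, 103]
print(food_stamp_locations)      # [(0, 1, 102)]
```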
I was feeling pretty good, and basically done at this point. So I just asked the AI if it had any other ideas. It listed a few, one being competing for food (avoiding eating food that other bugs have eaten).
Prompt: “do you have any ideas for this simulation?”
Answer:
There are many ways that you could extend or modify this simulation. Here are a few ideas:
These are just a few examples of the many ways that you could extend or modify this simulation. I’m sure you can come up with even more creative ideas!
I thought these were good, and we explored the following:
This post is already too long, so I’m not going to dive into all of those. Instead, you can find all the screenshots in the GitHub repository.
Super cool stuff.
I can understand why people may be concerned about their jobs, their livelihood. But I’m not concerned, not at this point anyway.
This tool was a really helpful creative aid, something that let me prototype and debug with nothing but words, the way I might with another human.
But it very clearly wasn’t another human. It didn’t go back and rethink its approach to the code. It didn’t design with an eye toward future maintainability. It didn’t push back on ideas. It couldn’t test what it wrote.
A human still needed to be in the driver’s seat.
I’m excited to use this to talk through ideas, to get insights into code, to rapidly prototype.
But I’ve also worked on million line codebases, and this ain’t gonna cut it. Certainly not yet.
I would recommend that everyone spend some time with this tool. Make it real and not theoretical. Play with it, find the boundaries of the experience. Reaffirm what you yourself are capable of, and how much you bring to the table.
Because the human element still matters. The AI is just a tool, but one that you might find to be a useful ally.
Thanks for reading this far.
In 2020, as the pandemic began to ramp up, the state and counties began to launch dashboards and datasets, quickly making knowledge available for anyone who wanted to work with it. State dashboards tracked state-wide and some county-wide metrics, while local dashboards focused on hyper-local information and trends.
Not just county dashboards, but schools, hospitals, and newspapers began to share information. Individuals, like myself, got involved and began to consolidate data, compute new data, and make that available to anyone who wanted it.
California was open with most of their data, providing CSV files, spreadsheets, and Tableau dashboards on the California Open Data portal. We lacked open access to the state’s CalREDIE system, but we still had a lot to work with.
It was a treasure trove that let us see how the pandemic was evolving and helped inform decisions.
But things have changed.
The last 6 months or so, this data has begun to dry up. Counties have shut down or limited dashboards. The state’s moved to once-a-week case information. Vaccine stats have stopped being updated with new boosters.
This was inevitable. Much of this requires coordination between humans, real solid effort. Funding is drying up for COVID-related data work. People are burnt out and moving on from their jobs. New diseases and flu seasons have taken precedence.
But this leaves us in a bad position.
Trends are no longer fine-grained. We can no longer see day-by-day case numbers. This means trends now have to be seen over a wide range of time.
Due to testing times, we were always looking in the past before, but even more so now.
Rapid testing caused a significant reduction in reported cases. We were always seeing an under-count, but even more so now.
Datasets that still report are based on different reporting frequencies, making all this even harder.
And as data became increasingly inconsistent and time-lagged, so did our understanding of what that data was telling us. A problem, since what we do have points to challenges ahead as we continue to try to co-exist with a virus that leaves so many with immune deficiencies and prolonged symptoms.
I used to report daily on COVID data for Butte County, CA, on Twitter, on Facebook, and through my dashboard. I could explain to people what they were seeing, help them understand. Based on feedback, this was invaluable to many, helping people make smart decisions for themselves and their loved ones.
Once-a-day became twice-a-week, matching data releases.
Then the schools stopped reporting altogether. This was a big loss for our community. We learned so much from the spread of COVID in schools. It was a canary in the coal mine. As school data dried up, so did a big chunk of our understanding of where cases were spreading and amongst which age groups.
Then the local jail stopped. Just before they did, there was data showing activity amongst inmates and staff. And now… nothing.
State hospital information became less frequent. Maybe a couple times a week. Sometimes missing days.
Local sequencing data is expected to disappear, meaning we’ll no longer know which variants we’re dealing with.
So now, what data I have is weekly, and limited, and getting worse all the time.
Again, this was inevitable, and there are good reasons for it. But it is such a shame, and a loss to our understanding of where we are in this still on-going pandemic.
The data aspects of the pandemic were fascinating in that we had such a wealth of useful information, with sites popping up around the Internet to visualize the spread and trends. Average people developed an understanding of viral spread like never before. Data experts found new ways to model and represent what was happening at every level of scale, and showed the impacts that mitigations (masks, cleaner air, and vaccines) can have.
For a while there, it felt like this availability of data would go on forever. But those days are nearly behind us.
Feb 28, 2023 will mark the end of the pandemic emergency in California. At this rate, I suspect the data will stop soon after.
And then we’ll be blind once more.
While still in a pandemic.
AI image generation has been a controversial, but exciting, topic in the news as of late. I’ve been following it with interest, but thought I was still years off from being able to actually play with it on my own hardware. That all changed this week.
I’m on day two now with Stable Diffusion, having successfully installed the M1 support via a fork. And my topic to get my feet wet has been…
Chickens.
Why not.
So let’s begin our tour. I’ll provide prompts and pictures, but please note that I do not have the seeds (due to a bug with seed stability in the M1 fork).
I recently became a fan of the late Leonid Afremov. His paintings really called to me, and I’m in danger of buying a whole lot of prints. I was curious what I could make Stable Diffusion do in his art style, and got quite a lot of good images as a result.
Prompt: chickens, leonid afremov
Decided to continue with some fine art, trying other artists in my prompt. Most didn’t go the way I had hoped, but I loved this particular image.
Prompt: paintings of chickens in tuscany, eugene j paproski
Next I decided to try watercolor. Honestly, not as many good results as I liked, but this one I thought was fun. What are they doing at night in the park?
Prompt: watercolor chickens on a grassy hillside
Let me tell you, this thing can do sculptures. Try it. I got so many good results out of this one, with this being my absolute favorite of the bunch.
Prompt: roman white marble of bronze elegant chicken
Yeah, maybe don’t feed this one after midnight. I went through a lot of variations of this prompt, generated a lot of images, but this was the first to really make me laugh. That poor momma chicken.
Prompt: 3d pixar illustration of baby chicks hatching in a nest
Back to another artist. I actually got some good results from Thomas Kinkade, many centered around farmlands, but then I tried to mix it up a bit. This one was my favorite of the bunch.
I’d love to be on the beach right now.
Prompt: painting of two chickens gazing over the ocean at sunset, thomas kinkade
I decided to see if this could make Lego chickens. Now, making models out of Lego didn’t work very well at all, but I got some good minifigs. Some were nightmare fuel, but these I liked. With a little refinement, they’d be fun blind bags to collect.
Prompt: chicken lego minifigs, soft light, highly detailed, sharp focus
I really liked the vibe in this one, but I have no idea what’s happening in the bottom-left. That’s the funny thing about the AI images. You can get some amazing results, but you also get some strange missing parts, parts grafted onto each other, all kinds of nightmare fuel.
Prompt: marvel comic drawing of chickens
This one is reminiscent of Studio Ghibli films. I like to imagine this chicken’s excited to begin her quest.
Prompt: anime illustration of chickens by a waterfall, studio ghibli, highly detailed, mystery, caves
This wasn’t what I was aiming for in the prompt, but I like how this turned out. Where can I buy these?
Prompt: chicken rubber ducks in a swimming pool
I got a lot of really interesting robot pieces. These were the two I really liked.
Prompt: steampunk mechanical chicken robot, insanely detailed, sharp focus
I tried to get some anime versions of robots and chickens. They were all a variation on a theme of red robots. This wasn’t at all what I expected, but it was so weird I had to keep it.
Prompt: anime chicken fighting a giant robot
Got some weird stuff with this prompt. Most of it wasn’t great, but I liked this one. Did you know that ancient Egyptian chickens had 4 legs? Neither did I!
Prompt: egyptian drawing of chickens, hieroglyphics
I had a surprisingly hard time getting chickens in baskets the way I wanted, and then this adorable puddle of a chicken appeared. It was too cute.
Prompt: cute clay chicken in an easter basket with colorful eggs
How about some Horizon Zero Dawn with Alloy riding a giant chicken? I’d play this.
Prompt: drawing of horizon zero dawn with giant chicken robots
My goal was to get a clean ice sculpture of a chicken. It didn’t happen. I got ice sculptures, but nothing pretty to look at. Sometimes it got confused with snow. I was quite happy with this one, though.
Prompt: sculpture of cute chicken made of ice
I managed to get quite a lot of sketch drawings of poker games, but few of the beings around the table resembled chickens. This one was the best of them, but even here you can see that parts aren’t exactly right. Still, I liked the vibe.
Prompt: black-and-white cartoon sketch of chickens sitting around a poker table playing poker
If you want an image in a certain style, it helps to be explicit about the source of the artwork you’re mentally aiming for. In this case, “Ori and the Blind Forest.” I’ve noticed that Stable Diffusion tends to turn up pretty good “digital illustration” results, too.
Prompt: digital illustration of a chicken in a dark forest, twilight, ori and the blind forest
Remember this book series? The stories were fine, but the drawings were traumatizing. A shared culture of terror instilled in so many. So I tried to reference it for some chicken drawings.
Prompt: charcoal sketch of scary monster chicken, scary stories to tell in the dark
I made probably 50 of these before getting the prompt right. So many of the chickens lacked heads. Oh well. Finally, after some tweaking, this image came out, and I loved it. Look how happy they are!
Prompt: high resolution photo of a miniature Disneyland with two cute chickens, soft light
I wanted to end this on a happy, warm note. A nice grassy field with flowers, soft light, a waterfall, and happy chickens.
Prompt: digital portrait of two chickens on a blanket on the side of a hill at sunset, mountain with waterfall, green grass, blue flowers, purple flowers, highly detailed, soft, smooth
The challenge was providing a test suite that could test against real-world tools, but not require them. An ever-increasing list of compatible tools has threatened to become an ever-increasing burden on contributors. We wanted to solve that.
So here’s how we’re doing it.
First off, this is all Python code, which you can find on the Review Bot repository on GitHub.
We make heavy use of kgb, a package we’ve written to add function spies to Python unit tests. This goes far beyond Mock, allowing nearly any function to be spied on without having to be replaced. This module is a key component to our solution, given our codebase and our needs, but it’s an implementation detail — it isn’t a requirement for the overall approach.
Still, if you’re writing complex Python test suites, check out kgb.
Review Bot can talk to many command line tools, which are used to perform checks and audits on code. Some are harder than others to install, or at least annoying to install.
We decided there are two types of tests we need:

- Integration tests, which run against real, installed command line tools
- Simulation tests, which fake the tools’ output, and don’t require the tools to be installed

Being that our goal is to ease contribution, we have to keep in mind that we can’t err too far on that side at the expense of a reliable test suite.
We decided to make these the same tests.
The strategy, therefore, would be to write each test once, running it against the real tool when available (an integration test) and against faked output otherwise (a simulation test).
This would be done without any code duplication between integration or simulation tests. There would be only one test function per expectation (e.g., a successful result or the handling of an error). We don’t want to worry about tests getting out of sync.
Regression in our code? Both types of tests should catch it.
Regression or change in behavior in an integrated tool? Any fixes we apply would update or build upon the simulation.
Regression in the simulation? Something went wrong, and we caught it early without having to run the integration test.
We introduced three core testing components:
- @integration_test() — a decorator that defines and provides dependencies and input for an integration test
- @simulation_test() — a decorator that defines and provides output and results for a simulation test
- ToolTestCaseMetaClass — a metaclass that ties it all together

Any test class that needs to run integration and simulation tests will use ToolTestCaseMetaClass and then apply either or both @integration_test/@simulation_test decorators to the necessary test functions.
When a decorator is applied, the test function is opted into that type of test. Data can be passed into the decorator, which is then passed into the parent test class’s setup_integration_test() or setup_simulation_test().
These can do whatever they need to set up that particular type of test: an integration test can check for and skip on missing dependencies, while a simulation test can spy on the subprocess.Popen() call to fake output.

For example:
class MyTests(kgb.SpyAgency, TestCase,
              metaclass=ToolTestCaseMetaClass):
    def setup_simulation_test(self, output):
        self.spy_on(execute, op=kgb.SpyOpReturn(output))

    def setup_integration_test(self, exe_deps):
        if not are_deps_found(exe_deps):
            raise SkipTest('Missing one or more dependencies')

    @integration_test(exe_deps=['mytool'])
    @simulation_test(output=(
        b'MyTool 1.2.3\n'
        b'Scanning code...\n'
        b'0 errors, 0 warnings, 1 file(s) checked\n'
    ))
    def test_execute(self):
        """Testing MyTool.execute"""
        ...
When applied, ToolTestCaseMetaClass will loop through each of the test_*() functions with these decorators applied and split them up:

- @integration_test will be split out into a test_integration_<name>() function, with a [integration test] suffix appended to the docstring.
- @simulation_test will be split out into test_simulation_<name>(), with a [simulation test] suffix appended.

The above code ends up being equivalent to:
class MyTests(kgb.SpyAgency, TestCase):
    def setup_simulation_test(self, output):
        self.spy_on(execute, op=kgb.SpyOpReturn(output))

    def setup_integration_test(self, exe_deps):
        if not are_deps_found(exe_deps):
            raise SkipTest('Missing one or more dependencies')

    def test_integration_execute(self):
        """Testing MyTool.execute [integration test]"""
        self.setup_integration_test(exe_deps=['mytool'])
        self._test_common_execute()

    def test_simulation_execute(self):
        """Testing MyTool.execute [simulation test]"""
        self.setup_simulation_test(output=(
            b'MyTool 1.2.3\n'
            b'Scanning code...\n'
            b'0 errors, 0 warnings, 1 file(s) checked\n'
        ))
        self._test_common_execute()

    def _test_common_execute(self):
        ...
Pretty similar, but less to maintain in the end, especially as tests pile up.
And when we run it, we get something like:
Testing MyTool.execute [integration test] ... ok
Testing MyTool.execute [simulation test] ... ok
...
Or, you know, with a horrible, messy error.
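For the curious, the splitting itself doesn't take much machinery. Here's a stripped-down, runnable sketch of how a metaclass like this might work; the names (SplitTestsMetaClass, the decorator internals, the sample MyTests class) are hypothetical and differ from the real ToolTestCaseMetaClass:

```python
import unittest


def integration_test(**setup_kwargs):
    """Tag a test function with setup data for an integration run."""
    def decorator(func):
        func.integration_setup = setup_kwargs
        return func

    return decorator


def simulation_test(**setup_kwargs):
    """Tag a test function with setup data for a simulation run."""
    def decorator(func):
        func.simulation_setup = setup_kwargs
        return func

    return decorator


class SplitTestsMetaClass(type):
    """Split tagged test_*() functions into integration/simulation pairs."""

    def __new__(mcs, name, bases, attrs):
        for attr_name, func in list(attrs.items()):
            if not attr_name.startswith('test_') or not callable(func):
                continue

            split = False

            for mode in ('integration', 'simulation'):
                setup_kwargs = getattr(func, '%s_setup' % mode, None)

                if setup_kwargs is None:
                    continue

                split = True
                new_name = 'test_%s_%s' % (mode, attr_name[len('test_'):])
                attrs[new_name] = mcs._make_test(func, mode, setup_kwargs)

            if split:
                # The original combined test is replaced by the pair.
                del attrs[attr_name]

        return super().__new__(mcs, name, bases, attrs)

    @staticmethod
    def _make_test(func, mode, setup_kwargs):
        def new_test(self):
            # Run the class's setup_<mode>_test() with the decorator's
            # data, then the shared test body.
            getattr(self, 'setup_%s_test' % mode)(**setup_kwargs)
            func(self)

        new_test.__doc__ = '%s [%s test]' % (func.__doc__, mode)
        return new_test


class MyTests(unittest.TestCase, metaclass=SplitTestsMetaClass):
    def setup_integration_test(self, exe_deps):
        print('integration setup:', exe_deps)

    def setup_simulation_test(self, output):
        print('simulation setup:', output)

    @integration_test(exe_deps=['mytool'])
    @simulation_test(output=b'ok\n')
    def test_execute(self):
        """Testing MyTool.execute"""


print(sorted(a for a in dir(MyTests) if a.startswith('test_')))
# ['test_integration_execute', 'test_simulation_execute']
```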
It’s become really easy to maintain and run these tests.
We can now start by writing the integration test, modify the code to log any data that might be produced by the command line tool, and then fake-fail the test to see that output.
class MyTests(kgb.SpyAgency, TestCase,
              metaclass=ToolTestCaseMetaClass):
    ...

    @integration_test(exe_deps=['mytool'])
    def test_process_results(self):
        """Testing MyTool.process_results"""
        self.setup_files({
            'filename': 'test.c',
            'content': b'int main() {return "test";}\n',
        })

        tool = MyTool()
        payload = tool.run(files=['test.c'])

        # XXX
        print(repr(payload))

        results = MyTool().process_results(payload)

        self.assertEqual(results, {
            ...
        })

        # XXX Fake-fail the test
        assert False
I can run that and get the results I’ve printed:
======================================================================
ERROR: Testing MyTool.process_results [integration test]
----------------------------------------------------------------------
Traceback (most recent call last):
...
-------------------- >> begin captured stdout << ---------------------
{"errors": [{"code": 123, "column": 13, "filename": "test.c", "line": 1, "message": "Expected return type: int"}]}
Now that I have that, and I know it’s all working right, I can feed that output into the simulation test and clean things up:
class MyTests(kgb.SpyAgency, TestCase,
              metaclass=ToolTestCaseMetaClass):
    ...

    @integration_test(exe_deps=['mytool'])
    @simulation_test(output=json.dumps({
        'errors': [
            {
                'filename': 'test.c',
                'code': 123,
                'line': 1,
                'column': 13,
                'message': 'Expected return type: int',
            },
        ],
    }).encode('utf-8'))
    def test_process_results(self):
        """Testing MyTool.process_results"""
        self.setup_files({
            'filename': 'test.c',
            'content': b'int main() {return "test";}\n',
        })

        tool = MyTool()
        payload = tool.run(files=['test.c'])

        results = MyTool().process_results(payload)

        self.assertEqual(results, {
            ...
        })
Once it’s running correctly in both tests, our job is done.
From then on, anyone working on this code can just simply run the test suite and make sure their change hasn’t broken any simulation tests. If it has, and it wasn’t intentional, they’ll have a great starting point in diagnosing their issue, without having to install anything.
Anything that passes simulation tests can be considered a valid contribution. We can then test against the real tools ourselves before landing a change.
Development is made simpler, and there’s no worry about regressions.
We’re planning to apply this same approach to both Review Board and RBTools. Both currently require contributors to install a handful of command line tools or optional Python modules to make sure they haven’t broken anything, and that’s a bottleneck.
In the future, we’re looking at making use of python-nose’s attrib plugin, tagging integration and simulation tests and making it trivially easy to run just the suites you want.
We’re also considering pulling out the metaclass and decorators into a small, reusable Python package, making it easy for others to make use of this pattern.
I’ve been thinking about this discussion on a National ID and the end of using Social Security Numbers. We’re used to having these 9 digit numbers represent us for loans, credit card transactions, etc., but in the modern age one would think we could do better.
Any replacement for Social Security Numbers would need to be secure, reduce the chances of identity theft, be able to withstand fraud/theft, and must not be scannable without knowledge (to avoid being able to track a person without their knowledge as they go from place to place). The ACLU has a list of 5 problems with National ID cards, which I largely agree with (though some — namely the database of all Americans — already exist in some forms (SSN, DMV, Facebook) and are probably inevitable).
In an ideal world, we’d have a solution in place that offered a degree of security, and there are technical ways we could accomplish this. The problem with technical solutions is that not every person would necessarily benefit (there are still plenty of Americans without easy access to computers), and technical solutions tend to lead to complexity for many. However, generations are getting more technically comfortable (maybe not literate, but at least accustomed to being around smartphones and gadgets), and it should be possible to design solutions that require zero technical expertise, so let’s imagine what could be for a moment.
Every year we have to renew our registration on our cars, and every so many years we have to renew our drivers license cards. So we’re used to that sort of a thing. What if we had just one more thing to renew, a Personal ID Key that went on our physical keychain, next to the car keys. Not an ID number to remember or a card that can be read by any passing security guard or police officer or device with a RFID scanner, but a single physical key with a safe, private crypto key inside, a USB port on the outside, that’s always with us.
I’m thinking something like a Yubikey, a simple physical key without any identifiable information on the outside that can always be carried with you. It would have one USB port on the outside and a single button (more on this in a minute). You’d receive one along with a PIN. People already have to remember PINs for bank accounts and mobile phones, so it’s a familiar concept.
Under the hood, this might be based around PGP or a similar private/public key cryptography system, but for the purpose of this “What if,” we’re going to leave that as an implementation detail and focus on the user experience. (Though an advantage of using PGP is that a central government database of all keys is not needed for all this to work.)
When you receive your Personal ID Key and your PIN (which could be changed through your computer, DMV, or some other place), it’s all set up for you, ready to be used. So how is it used? What benefits does this really give? Well, there’s a few I can think of.
When applying for a home loan or credit card agreement, or when otherwise digitally signing a contract online, you’d use your Personal ID Key. Simply place it in the USB port and press the activation button on the key. You’ll have a short period of time to type your PIN on the screen. That’s it, you’re done. A digital signature is attached to the document, identifying you, the date, and the time. That can be verified later, and can’t be impersonated by anyone else, whether by a malicious employee in the company or a hacker half-way across the world.
People are terrible when it comes to passwords. They’ll use their birthdates or their pet’s name on their computer and every site on the Internet. More technical people try to solve this with password management products, but good luck getting the average person to do this. I’ve tried.
This can be largely addressed with a Personal ID Key and the necessary browser infrastructure. Imagine logging into your GMail account by typing your username, placing your key in the USB port on any computer, pressing the activation button, and typing your PIN. No simple passwords that can be cracked, and no complex passwords that you’d have to write down somewhere. No passwords!
Actually, for some sites, this is possible today with Yubikeys (to some degree). Modern browsers and sites supporting a standard called U2F (such as any service by Google) allow the usage of keys like this to help authenticate you securely into accounts. It’s wonderful, and it should be everywhere. Granted, in these cases they’re used as a form of two-factor authentication, instead of as a replacement for a password. However, server administrators using Yubikeys can set things up to log into remote servers using nothing but the key and a PIN, and this is the model I’d envision for websites of the future. It’s safe, it’s secure, it’s easy.
Inevitably, someone’s going to lose their key, and that’s bad. You don’t want someone else to have access to it, especially if they can guess your PIN. So there needs to be a process for replacing your key at a place like the DMV. This is just one idea of how this would work:
Immediately upon discovering your key is gone, you can go online or call a toll-free number to indicate your key is lost. This would lead to an appointment at the DMV (or some other place) to get a new key, but in the meantime your old key would be flagged as lost, which would prevent documents from being signed and prevent logging into systems.
Marking your key as lost would give you a special, lengthy, time-limited PIN that could be used to re-activate your key (in case you found out you left it in your other pants).
The owner of the key would need to arrive at the DMV (or wherever) and prove they are who they say they are and fill out a form for a new key. This would result in a new private key, and would require going through a recovery process for any online accounts. It’s important here that another person cannot pretend to be someone else and claim a new key.
Once officially requested at the DMV, the old key would be revoked and could no longer be used for anything.
Technology changes, and a Personal ID Key inevitably will be out-of-date. We’ve gone through this with credit cards, though. Every so often, the credit card company will send out a new card with new information, and sites would have to be updated. Personal ID Keys wouldn’t have to be much different. Get a new one in the mail, and go through converting your accounts. Sites would need to know about the new key, so there’d need to be a key replacement process, but that’s doable.
This all could work, but in reality we have infrastructure problems. I don’t mean standards support in browsers or websites. That’s all fixable. I mean the processes by which people actually apply for loans, open bank accounts, etc. These are all still very heavily paper-based, and there’s not always going to be a USB port to plug into.
Standards on tablets and phones (in terms of port connectors and capabilities) would have to be worked out. iPads and iPhones currently use Lightning, whereas most phones use a form of USB. Who knows, in a year even Apple’s devices might be on USB 3, but then we’re still dealing with different types of USB ports across the market, with no idea what a future USB 4 or 5 would look like. So this complicates matters.
Some of this will surely evolve. Just as Square made it easy for anyone to start accepting credit card payments, someone will build a device that makes it trivial to accept and verify signatures, portably. If the country moved to a Personal ID Key, and there was demand for supporting it, devices would adapt. Software and services would adapt.
As someone in the technology space, the encryption fight has been simultaneously interesting and concerning to me, as any precedent set could cause serious problems for the privacy and security of all those on the Internet.
The concern by the authorities is that technology-based encryption (which can be impossible to intercept and crack) makes it extraordinarily difficult to stop the next impending attack. Banning encryption, on the other hand, would mean making the average phone and Internet communication less secure, opening the door to other types of threats.
This is an important topic, but what few in the media talk about is that terrorists have been using an alternative method for years before encryption was available to the masses. They don’t talk about it because it hits maybe too close to home.
They don’t talk about the dangers of your local donut shop.
Passing a message between conspirators is nothing new. Just as little Tommy might write a coded note in class to Sally so the teacher couldn’t find out, terrorists, crime syndicates, and spy agencies have been using all manner of coded messages for thousands of years to keep their communication secure. Such messages could be passed right in front of others’ noses, and none would be the wiser.
These have been used all throughout history. The German Enigma Code is perhaps one of the most famous examples.
Such messages often entail combinations of letters, numbers, symbols, or may contain specialized words (“The monkey flaps in the twilight,” for instance) that appear as gibberish to most, but have very specific meaning to others. The more combinations of letters, numbers, symbols, or words, the more information you can communicate, and the less likely it is that someone will crack it.
That said, many of these have been cracked or intercepted over time, causing such organizations to become even more creative with how they communicate.
Donuts have a long history, and their origins are in dispute, but it’s clear that donut shops have been operating for quite some time now. They’re a staple in American culture, and you don’t have to drive too far to find one. Donuts also come in all shapes, sizes, and with all sorts of glazes and toppings, and it’s considered normal to order a dozen or so at once.
In other words, it’s a perfect delivery tool for discreet communication.
When one walks into a donut shop, they’re presented with rows upon rows of dozens of styles of donuts, from the Maple Bar to the Chocolate Old Fashioned to the infamous Rainbow Sprinkle.
While most will simply order their donuts and go, those with something to hide can use these as a tool, a message delivery vehicle, simply by ordering just the right donuts in the right order to communicate information.
“I’ll have a dozen donuts: 2 maple bars, 1 chocolate bar, 2 rainbow sprinkles, 3 chocolate old fashioned, 1 glazed jelly, and 2 apple fritters. How many do I have? … Okay, 1 more maple bar.”
If top code breakers were sitting in the room, they might mistake that for a typical donut order. Exactly as intended. How could you even tell?
Well, that depends on the group and the code, but here’s a hypothetical example.
The first and last items may represent the message type and a confirmation of the coded message. By starting with “I’ll have a dozen donuts: 2 maple bars,” the message may communicate “I have a message to communicate about <thing>”. Both the initial donut type and number may be used to set up the formulation for the rest of the message.
Finishing with “How many do I have? … Okay, 1 more maple bar.” may be a confirmation that, yes, this is an encoded message, and the type of message was correct, and that the information is considered sent and delivered.
So the above may easily translate to:
I have a message to communicate about the birthday party on Tuesday.
We will order a bounce house and 2 clowns. It will take place at 3PM. There will be cake. Please bring two presents each.
To confirm, this is Tuesday.
Except way more nefarious.
The other donut types, the numbers, and the ordering of donuts may all present specific information for the receiver, communicating people, schedules, events, merchandise, finances, or anything else. Simply change the number, the type of donut, or the order, and it may communicate an entirely different message.
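As a purely hypothetical sketch (every donut name, count, and meaning below is invented for illustration, since, obviously, I don’t have a real codebook handy), a receiver’s decoding might look something like this:

```python
# A purely hypothetical codebook. Every donut type and meaning here is
# invented for illustration; a real one would presumably be less obvious.
CODEBOOK = {
    "maple bar": "meeting",
    "chocolate old fashioned": "time: {n}PM",
    "rainbow sprinkle": "bring {n} items",
    "glazed jelly": "confirmed",
}


def decode_order(order):
    """Translate an ordered list of (count, donut) pairs into message parts."""
    parts = []
    for count, donut in order:
        meaning = CODEBOOK.get(donut)
        if meaning is not None:
            parts.append(meaning.format(n=count))
    return parts


# "2 maple bars, 3 chocolate old fashioned, 2 rainbow sprinkles, 1 glazed jelly"
print(decode_order([
    (2, "maple bar"),
    (3, "chocolate old fashioned"),
    (2, "rainbow sprinkle"),
    (1, "glazed jelly"),
]))
```

Change a single count or swap two items, and the decoded message changes entirely. Which is exactly the point.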
If a donut shop offers just 20 different types of donuts, and a message is composed of 12 donuts in a specific order, then we’re talking more combinations than you could count in a lifetime! Not to mention other possibilities, like ordering a coffee or asking about donuts not on the menu, which could have significance as well.
Basically, there’s a lot of possible ways to encode a message.
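To put a rough number on the dozen-donut case (assuming order matters and repeats are allowed, and ignoring side channels like the coffee):

```python
# 20 donut types, 12 donuts ordered in sequence, repeats allowed: 20 ** 12.
combinations = 20 ** 12
print(f"{combinations:,}")  # 4,096,000,000,000,000 -- about 4 quadrillion

# Counting one combination per second, nonstop, for an 80-year lifetime:
seconds_per_lifetime = 80 * 365 * 24 * 60 * 60
print(combinations // seconds_per_lifetime)  # over 1.6 million lifetimes' worth
```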
The recipient of the message may be behind the register, or may simply be enjoying his coffee at a nearby table. How would one even know? They wouldn’t, that’s how.
It’s all too easy to be afraid these days, with the news heavily focused on terrorism and school shootings, with the Internet turning every local story global.
Statistically, it’s unlikely that you will die due to a terrorist attack or another tragic event, particularly one related to donuts. The odds are in your favor.
As for the donut shop, just because a coded message may be delivered while you’re munching on a bear claw doesn’t mean that you’re in danger. The donut shop would be an asset, not a target. It may even be the safest place you can be.
So sit down, order a dozen donuts, maybe a cup of coffee, and enjoy your day. And please, leave the donut crackin’ to the authorities. They’re professionals.
(I am available to write for The Onion or Fox News.)
I shared my point of view on just a few of the things that made the team great. I tried to keep it short, and didn’t know who would read it, but as I write this, over 57,000 people have read my tribute, and many have shared their thoughts on the team.
Since then, our team, which we’ve been referring to as Ghosted-UI, has been out for a couple of dinners and drinks, trying to figure out where everyone will end up, where poker’s going to be next, when the next good movies will be out, and, actually, still discussing bugs and thoughts around Workstation and Fusion. Okay, maybe we haven’t let go yet.
So here’s what I’d like to do now. I’d like to share some thoughts and pictures from a few people in our team about what made our team great, and share some comments from some of our users. Keeping it positive here.
Before Jocelyn left for Facebook in 2010, and then got to branch out on her own, she was the manager of Hosted UI. In fact, she’s the one who hired me back in 2004, and remained a mentor to me. My first job out of school (actually, hadn’t finished yet), and she took a chance on me, bringing me on board and helping me learn the ropes, learn to build myself up. Even paid for some driving lessons (I didn’t have a license back then).
There was a part of that that I never thought she’d have remembered, which she shared with us on Facebook:
Ok, I can add *one* thing. What Christian doesn’t mention about his aforementioned first day of work is that he anxiously showed up at 8am, in a collared shirt and slacks, nervous for his first, grown-up, corporate job.
He then got to cool his heels in the lobby for an hour waiting for the next person on his team to arrive (which was probably me around 9 or 9:30).
He immediately bonded with the team, came back the next day in jeans at a reasonable hour like 10am, and the rest, as we say, is history.
Jocelyn, with a real family of her own, shared her thoughts on what made us a family.
I 100% experienced the team as family.
To me, video games were the least of it. We were united by our shared sense of mission and care for what we were building and the community of developers and admins who used it. Our commitment to the kind of software we wanted to build and the way we would build it. Gamely celebrating each other’s lifecycle events whether that was a 21st birthday or a surprise baby shower for me which was the first event of its kind for most of the attendees. While I haven’t stayed in close touch with everyone in the 7-8 years since I became less involved w/ the team, I’ve attended weddings, doled out career advice, helped with job hunting, new parent advice, you name it. IOW, I haven’t been there all the time, but I’ll absolutely be there when I’m needed.
For me, this team emphatically represents the possibility that you *can* form very close and lasting relationships with work colleagues, without them HAVING to also be social connections. For Christian it was both, but I don’t think I was the only one for whom it was not friendship… but still family.
When I asked for additional thoughts for this post, she brought up part of what made our products so consistently high quality, even 12 major versions in:
I feel like some of our long term rearchitecture/cleanup efforts deserve highlighting. It’s hard to have the discipline to do those. It’s not sexy or fun for the engineers, and marketing could care less b/c it doesn’t drive sales. But we pulled off some big ones b/c we had the team commitment and will to do it.
James (LinkedIn) joined Hosted UI in 2007, and spent most of his years since working on Fusion. He’s still with the company, just in another role, but is very much a part of our team. In a Facebook post, he shared some of his thoughts that kind of summed up our work days:
The simultaneous technical breadth and depth of this team was always stunning. You could walk past an office where 3 people were having a design discussion about how to do some complex asynchronous task while respecting the quirks of the OS X and GTK run loops and Win32 message loop. Or have a debate with someone about how best to model and manage modal dialogs in a generic fashion while still having the application “feel” like it was supposed to on each respective platform. I can’t stress that enough, so much love and care was put into having each application “feel” like a Windows app or “feel” like a Mac app despite having so much shared code. And as much ribbing as we gave each other about the other platforms, there was always so much respect.
And then you’d all go out to dinner and talk about video games.
Lee Ann (LinkedIn) also joined Hosted UI in 2007, working on Fusion. I remember giving her an interview, poorly (that is, I sucked at it — I was pretty new to interviewing). She was a fantastic member of the team, really knew her stuff.
When I asked for thoughts on the team, she shared why she stayed with this product so long: Our users.
This is why I did it – because our users appreciate it. I dropped a line to the blind Fusion user [who she heard from after the news broke last week — Christian] and got this answer:
> “Hi Lee Ann. Thanks for taking the time to write, and I’m sorry to learn that you and others who have done such a good job with Fusion have been let go.
>
> It sounds like VMWare has lost a lot of institutional knowledge, and Fusion is the only accessible VM solution there is.
>
> Thanks for thinking of your blind users, many of us really appreciate all you have done.”
If you’ve used Unity on Linux or Mac, you’ve used Jason’s (LinkedIn) work. He was a remote employee, so we didn’t get to see him as often as we liked, but we chatted on IRC daily.
I did not grow up with nerd friends who were like me. I spent the first period of my life working whatever I could find in retail stores and then in corporate IT, and I definitely didn’t fit in there. But you guys… you’re all like me! Like, there’s no pretense and there’s no trying to fit in. It’s just always felt like home and family when I’ve been able to spend time with you guys.
I know it’s going to sound sad and sappy and whatever, but I just wanted to tell you guys how much you have meant to me and how much you continue to mean to me. You all have been the best 8 years of my life, personally and professionally. I love each and every one of you. And I’m going to miss being able to hang out with you in person immensely. *hug* =:)
We’ll miss it too, Jason, but are going to drag you out here kicking and screaming, one way or another.
Sujit (LinkedIn) was a newer member of the team, and moved to the team after I left. I’ve gotten to know him through team outings, and am glad I had that opportunity.
My tenure in the Hosted UI team is significantly less than many of you guys, but I share the pain. By far, this is the best team I have ever worked with – amazing people and products. I would have completed 4 years at VMware on April 30, which is just about a month after my termination date. I was looking forward to this 4-year milestone. Oh well…
Richard started off as an intern in Hosted UI, and then became a full member of the team. He left a few years back, but like many from Hosted UI, we still keep in touch on IRC, Facebook, and Twitter. He had some really nice thoughts to share with us:
The Hosted-UI Group set the gold standard by which I have judged all the teams I’ve worked with since I left (2012). We built an open environment that celebrated individuals’ strengths and supported each other’s weaknesses. We weren’t all hanging out on the weekend together (though many were) but it didn’t matter because we all cared for the product and wanted to see it succeed.
There was a dramatic breadth of technical skill (from deep kernel hacking to amazing UX intuition and user focus) and very little ego. It was an amazing place to learn post college and the best introduction to industry I could have imagined. I’m devastated to see the team disbanded but hope that the core of how that environment functioned follows each of the team as they spread out to whatever amazing things they decide to pursue.
On a more personal level I owe a lot of my adult life to the situations that arose from taking the full-time offer. Without coming to HUG I would have not met many of those who are now my closest friends, would not have started rock climbing (which was key for me to *finally* get healthy), and would not have made the connections necessary for my subsequent jobs which I have also loved. It would be an understatement to say that this team, and the internship that pulled me in, significantly altered the course of my life.
For those that were laid off you can pretty much call on me for anything and I’ll do what I can to help regardless of whether or not we overlap: that’s how family works. I’m sorry this happened but I trust you’ll be okay.
James (LinkedIn) was one of the first people I met when I joined VMware. He’s a legend. The guy knew the codebase inside-and-out, probably better than most of us, and plowed through bugs and features like nobody’s business. He worked on the Windows side on Workstation and Player. He saw a lot of change in the company and even in the team, and I always pictured him single-handedly holding the products together until the very end, if it came down to it.
He shared his view of what made the team great and how he saw his work over his time at VMware.
I’ve been in VMware’s Hosted UI group (“HUG”; could there be a more appropriate name?) working on Workstation for almost 12 years. I’ve seen a lot of people in HUG come and go (although I think not quite as many as in other groups), and while some of them tried to pull me away to join other companies, I never really wanted to leave. I loved our product. Even after nearly 12 years, I never got tired of fixing bugs; I saw each bug as a usability problem for customers. Repetitive bugs challenged me to try to prevent future recurrences. At VMware, there was always something for me to work on and always something new for me to learn, and it never got boring.
And I loved my colleagues too. I tried my best to help them when possible (by answering questions, offloading bugs, reviewing their code, implementing helper functions they needed, writing scripts to simplify drudgery, buying unhealthy snacks for them from Costco) to make their lives a little bit easier and so that they’d have more time to work on things that they found interesting. I never wanted them to leave (and jokingly threatened to kill some of them if they ever tried).
People in HUG helped me buy a car for the first time. HUG filled two tables at my wedding. HUG was a family, and our products were our babies. I don’t know how I’m going to bear seeing them in the foster care of complete strangers.
Tony (LinkedIn) was a member of our amazing QA team (a team that suffered its own layoffs a couple of years ago). He had both QA engineering and management roles during his time with us. QA was important to us on a personal and professional level. They kept the quality of our products high, and knew the products and their history inside-and-out.
Tony shared his memories and thoughts with us:
When I look back at my time at VMware the one word that always comes to mind is family. Beyond all of the incredible technical feats the teams achieved it is all dwarfed by the sheer strength of the bonds that I formed and saw formed with the people that I worked with at VMware.
Even as the company grew and changed HUG, Desktop QA and the greater Desktop Business Unit retained much of its core identity because of the people who worked within it. The passions that were shared for the products was equally shared for the people. In the valley where it is the norm for people to change jobs every 2 years we had a team that clearly pushed against this. Between the Dev and QA teams we had some of the most tenured members in the entire company. This happened for many reasons, a shared passion of quality, love and dedication to the products we worked on and the close bonds we had with each other.
I am thankful to VMware for bringing us all together in the way that it did, regardless of how things ended. The fact is that the strengths of the bonds that we formed are far greater and are something that will always exist.
Family
I was surprised by the outpouring of love from our users. I want to share a few select comments from my earlier blog post:
I just wanted to congratulate you and all the team on the fabulous work you did. I bought my first license for VMware Workstation Linux 2.0, back in 2001 ! And use Workstation and Fusion on a daily basis (See my latest blog article on http://éé.net/ak6), it’s just a critical part of my infrastructure!
So I’m sad for you. And I just wanted to say I love you for what you made available for so many people worldwide
I’m a VMware vExpert because I did my Home Labs using Workstation, or even Fusion sometimes, you helped so many Companies out there, not just power users, I saw some environments using Workstation at really high scale, insane but working!
You guys did just amazing job all this time. I just can say, thank you and good luck!
Its really no wonder now, why apps like VMware Player and Fusion just worked so well despite doing really complicated things. Kudos to your team for really being the best champions of your product and making the computing world a much better place (this is what happens, for anyone else interested, when keeping developers happy and engaged takes precedence over keeping salesmen happy and engaged).
Big thanks to you all I was one of those early users. This software changed life of many people for better. Sorry to hear bad news. it’s bad move to shut down this products and your team. This software is living ad for whole company, for many of us first step into virtualisation.
Great product that changed the way the world works – testing and development was different before VMWare. So slow. This article was great insight into the team that made our lives better. A sad announcement but best wishes to a talented group of people.
Thanks for this tribute. I was also at VMware for 9 years, starting on the Vmkernel team that built one of the first releases of ESX, and saw it grow from a team of 200 in Stanford Research Park, with personal introductions of every new employee, and pool dunkings for folks getting married, to a big corporation of over 10k. Your team was one of the most dedicated and legendary teams at VMware. So sad to see it go.
I’ve been a loyal Fusion user since 2008. Fusion is what convinced several colleagues of mine to go to the Mac when they got fed up with Windows machines. I proudly buy each and every new-release license(s) because of the phenomenal quality and support that was given.
When I had problems with Fusion/Windows, the engineers actually invited me to their labs in Silicon Valley and sit next to them to work the problem out. That was support (to me) that was unheard of. I was in awe at their commitment and pride in what they did.
Absolute legends , such sad / infuriating / inexplicable / perplexing news. I’ve used VMware since day 1 and never ever looked back. Without this software I’d never have progressed anywhere near as far in my career as I did.
Ooooh nooo! Say it isn’t so. I was jaw droppingly amazed when I saw 1.0 and could see the VM of everything was the future of everything. Its been one of my mainstay tools ever since. Many similar free and not free tools have followed since, but none compares to the feature qualities and reliability of Workstation. Its always been ahead of its time and now we know why. So sad its been stripped of its culture and I fear its future. Thanks for telling us the rest of the story.
Thank you, everyone.
We dug around and found a bunch of pictures from our time at VMware that I thought would be fun to share.
We liked food. We had our own “Unhealthy Snack Program,” where we’d keep our group stocked with candy bars, beef jerky, sodas, etc. Sometimes you need a little sugar and caffeine when you’re battling some crazy bug. I wish I had a picture of this, but it was glorious.
We once won a waffle maker at Dave & Busters, during a group outing. Here’s Keith, making some yummy waffles for breakfast. He never made me any…
Picnics were always a fun way to bring the team together. Often, former members of Hosted UI would take the opportunity to show up, eat some hot dogs and catch up with the rest of us. Great way to spend a day, though we ended up talking shop more than we probably should (except for that one time we climbed trees for hours, just because we could).
We’ve been playing poker for years. Just casual games, nothing fancy. We’d order a pizza and play for a few hours, share some laughs. Really, we were just like the pro poker players, except a 2/7 won way more often than it should have, and things like this kept happening:
Not all of us were gamers, but a bunch of us got together most weeks to play games of some sort. Video games, card games, board games, what have you. Smash Bros, Mario Kart, Mario Wii U, Kirby, and the Rayman games were personal favorites of mine.
This was actually on the day we IPO’d! We just got a Wii and had set up the projector for some tennis action. Man, that was a long time ago…
If you were getting married, or just got married, you were going in the pond. It was an old VMware tradition that we fully embraced. We amped it up a bit, though, with the introduction of costumes. You know, because your clothes were all wet, so we helpfully provided new clothes!
Birthdays are something to be celebrated! Back in the day, we’d trick people by inviting them to a meeting and surprising them with cake. Eventually people came to expect it, so then it just became cake. Oh, and an amazing birthday candle that would shoot up fire like a torch for a minute, spin around, and sing.
True story: I had my first drink at VMware, in a surprise birthday meeting. And then my second. Jocelyn insisted. I’d never been so much as buzzed before. I remember Jocelyn coming in, asking me to do something, I don’t even remember, thoroughly enjoying watching me struggle to even understand what was going on. Good times.
There was that one time we all got dressed up, just because. It was kind of an inverse Casual Friday. To start off, here are some great group photos of Lee Ann on Fusion Engineering, Roshini on Fusion Performance, and Jessica on Docs.
In order from left to right: Surendra, Roshini, Steve, Lee Ann, David, Michael, and James. (Shame we didn’t have the whole team in this shot.)
Okay, terrible pun alert, straight from the Facebook post: “At VMware, our managers go APE for new releases!” (Another awesome win from Dave & Busters!)
Pets were always welcome in our office. This is Bodie (as a puppy — it’s been a while). We had other dogs, sometimes cats. A duck followed me into the building one day.
Can you ever truly leave the team? Might come at a price… DUN DUN DUN. (I found my entire office covered in this stuff, shortly before my last day at VMware.)
We once got these plasticky bookshelf things, made from (I think) recycled milk jugs. Someone realized that they could be disassembled and reassembled, so our team, always eager to end the day on a productive note, set off to build Tetris bookshelves.
There was that time when we were trying to get into a company-owned pinball machine that accidentally got reset from free mode to pay mode. We weren’t about to pay $0.25! So, we spent about 3 or 4 hours trying to pick the lock with instructions from the Internet and good ol’ Hosted UI ingenuity! With the lights off. Using flashlights. Inside an office room. I swear, we’re usually smart people :/ (P.S., it did not work. We gave up and went home after 1AM. The following contraption is what we built to try to pick the lock.)
Oh, and that time we decided our IRC channel could really benefit from Microsoft Comic Chat. We had this up and running on a dedicated screen 24/7.
I’ll probably update this over time with more thoughts and pictures as we find them.
Thanks for walking down memory lane with me, and for all the support you’ve shown Hosted UI over the past week.
Yesterday morning, the Hosted UI team, responsible for VMware’s Workstation and Fusion products, woke up to find themselves out of a job. These products, despite being award-winning and profitable, are probably not long for this world.
I was not directly affected, in this way at least, as I had already left VMware in 2013 to work on Review Board full-time. However, many of my closest friends were, and a product I spent 9 years of my life on may have seen its last feature.
I could talk all day about how I think we got here, losing this amazing team and these fantastic products. I could point fingers and lash out at those I blame. I could talk about how furious this all makes me.
Instead, I’m going to talk about the team and what we built — and I don’t just mean our products.
I began working in Hosted UI on August 23rd, 2004, as a bright-eyed 20-year-old freshly dropped out of college. Back then, it was a small team full of amazingly bright and passionate people, working days and nights to build a product they believed in.
The culture at that time within VMware was just so fun and energizing. People wanted to be there, and were proud of their work. Features were brainstormed over games of foosball or DDR, and designs discussed over free lunches and beer bashes. In the evenings, we’d order dinner in and watch The Simpsons, or whatever was on.
Company culture changed over the years, becoming more corporate and stiff, but not Hosted UI. We’d work all day, with the occasional interruption for YouTube videos or some laughs, and at night we went out and had some more. Poker nights, movie nights, video game nights. Dinners out together, sometimes several times a week.
Many people came and went over those years, and the team changed. Though, for a software company, a surprising number remained until the very end. Even those that left kept in touch, joining for poker nights or dinners here and there, coming to the dunkings (if you were getting married, you were going in the pond), birthday celebrations, and reunions. We formed alumni lists and kept in touch. We hung out on IRC outside of work.
Through deadlines and downtimes, stresses and celebrations, our team worked and played hard. We were dedicated, passionate, and if you’ll allow me, we were damn good at what we did.
I left this team two years ago, but it hasn’t really felt that way. I still saw them almost every week. Our team didn’t have to be in the same building or even the same company to stay a team.
Hosted UI may no longer exist at VMware, but that’s really VMware’s loss. They lost one of the most dedicated teams they could ever hope for, the kind of team you can’t just hire again.
VMware Workstation was the first VMware product (back then, it was known simply as “VMware”). At a time when dot-coms dominated the Super Bowl and Amazon was all about books, VMware Workstation was letting pioneers in the Linux world virtualize their Windows desktop so they could run Microsoft Office instead of StarOffice.
This product evolved over the years with over 15 major releases, and more features than I can count, running on every flavor of Linux and Windows. It did this without falling prey to the bloat of most long-running products, as we focused not only on making it a more powerful product but also a more usable product.
Workstation made it easy to run complex development and testing scenarios, creating and working with several virtual environments all at once across any number of host computers. It integrated your virtual desktops with your host desktop. It let you take snapshots at different moments in the lifetime of your VM, and jump between them at will. It helped you catch defects in your software through remote debugging and CPU/memory record/replay capabilities, it helped you test complex network setups with virtual LAN devices, and it worked as a powerful front-end for VMware’s Server, ESXi, and vSphere products. And, in the end, it also helped you simply run your Windows programs on Linux, your Linux programs on Windows, or whatever you wanted.
Internally at VMware, Workstation was also seen as an indispensable product, helping other teams test features and devices that would eventually become selling points on the more high-end vSphere product releases. With Workstation’s ease-of-install and ease-of-use, people could get set up in minutes and get right to work.
We loved our product. This was our baby. We took input from marketing, management, sales, customers, and so on, but in the end, we were given a lot of creative liberty over the features and design. We were also given time to address technical debt, helping to get our codebase in shape for future challenges.
I don’t know how many awards we received, but I think it was a lot. I do know that we had so many users who loved the product we poured our souls into. That meant a lot, and kept us motivated.
It was, let’s say, a challenge getting some parts of the company to really care about the product. Workstation made a lot of money, but not the hundreds of millions the company would have preferred. This, I believe, ultimately led to yesterday’s sad outcome… Still, I’m very proud of what we built.
Workstation was a power user product built for Linux and Windows. In 2007, its sister product, Fusion for Mac, was released. This focused more on consumer usage, helping people run Office and other Windows apps on their Mac.
At the time, Apple had just moved to Intel processors and was touting the ability to dual-boot between Windows and Mac OS X using a feature called Boot Camp. Fusion offered a better way, letting you run Windows and Mac OS X at the same time. It was popular amongst students who needed to run Windows software for class on their shiny new MacBooks. It was popular amongst developers who needed to run or test Windows or Linux environments while on the go.
Fusion was a very different product in some ways than Workstation, but it was also very closely related. While it didn’t focus on many of the power user features that Workstation offered, it did take many of those features and reimagine them for more casual users. It also shared much of the core code that Workstation used, meaning that features could more easily be ported across and bugs fixed just once.
Fusion was a reimagining of what Workstation could have been, built for a different time and a different audience. Like Workstation, it was also built by a group of very loyal, dedicated, brilliant people, the Fusion segment of Hosted UI.
While I never worked directly on Fusion, I did get to see features I built for Workstation make their way there, and watched as our users got to try them for the first time on the Mac. It wasn’t the product I devoted my time to, but it was one I loved, and one I still use today.
Our small team has built quite a lot over the years. Along with Workstation and Fusion, we’ve also built:
I’m pretty sure there’s more, but those are the highlights.
These, along with Workstation and Fusion, were built by a team typically no larger than about 20 people (at any given point in time).
We did good.
VMware lost a lot of amazing people, and will be feeling that for some time to come, once they realize what they’ve done. It’s a shame. As for our team, well, I think everyone will do just fine. Some of the best companies in Silicon Valley are full of ex-VMware members, many former Hosted UI, who would probably welcome the chance to work with their teammates again.
Workstation, Fusion, and our other products may survive in maintenance mode, or they may disappear. They may continue under a new team. It’s hard to say at this point what will happen. What I can say is that no matter what happens to them, they had an amazing run, and are something every one of us can be proud of the rest of our careers.
And we can be proud of the team, the friendships, and the strong bonds we built, now and through our next adventures.
Updated 27-January-2016 at 23:31: Wow, this went viral. As of right now, we’re looking at around 40,000 unique viewers. I wrote this as a tribute to our team, and am amazed by the reaction it provoked. To everyone who loved our products and reached out to us to show your love: thank you. It means so much to us. Keep them coming!
I want to be clear that I have not worked there in years and do not have inside knowledge on what will happen to these products. I updated part of the post to make that a little more clear. VMware claims they’ll continue to exist, and I really hope that’s the case. I like to think what we built will continue to live on, and I hope VMware does it justice.