Is Google Pursuing AGI?


where it says

"But some people think they detect an even more grandiose design. Google is already working on a massive and global computing grid. Eventually, says Mr Saffo, 'they're trying to build the machine that will pass the Turing test' -- in other words, an artificial intelligence that can pass as a human in written conversations. Wisely or not, Google wants to be a new sort of deus ex machina."

Peter Norvig (one of Google's AI leaders) shed some light onto this at his talk at the ACC05 conference last September.

What he alluded to there was a goal, for 5+ years from now, of having a system that can answer any natural language query whose answer exists somewhere on the Internet.

E.g., if asked "Who was the first President of the US?" it would answer "George Washington", because somewhere there is a web page with a sentence such as "George Washington, the first President of the United States, blah blah."
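
To make the idea concrete, here is a toy sketch of this kind of pattern-based answer extraction (my own illustration, not Google's actual method): rewrite the question into a declarative template, then scan retrieved text for a match.

```python
import re

def extract_answer(question, documents):
    # Rewrite "Who was the first President of the US" into a declarative
    # template, then look for "<Name>, the first President of the United States"
    m = re.match(r"Who was (the .+?) of the US\??$", question)
    if not m:
        return None
    role = m.group(1)
    pattern = re.compile(
        r"([A-Z][a-z]+(?: [A-Z][a-z]+)+), " + re.escape(role)
        + r" of the United States")
    for doc in documents:
        hit = pattern.search(doc)
        if hit:
            return hit.group(1)
    return None

docs = ["George Washington, the first President of the United States, "
        "took office in 1789."]
answer = extract_answer("Who was the first President of the US", docs)
print(answer)  # George Washington
```

Of course, the real problem is doing this robustly across millions of phrasings, which is where the hard NLP research lies.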

This would be Step 1. He didn't talk about it, but it's obvious Step 2 would be something that could answer questions whose answers are not contained on any single Web page.

To see how far off we are from this Step 2 now, peruse the results of the Pascal Challenge on "Recognizing Textual Entailment", from last year:

Anyway, I suspect what Norvig described reflects Google's intentions; and IMO is not exactly a direct approach at AGI in the sense that it has no focus on self-understanding, creativity, and so forth. However, I can see how proceeding in this direction could in time create a system that could (with appropriate expenditure of additional effort) be turned into an AGI.

How Infant Language Processing Builds on Physical Inference

It's a rare occurrence, but I have just read an AI research paper that is of nontrivial interest...

N. L. Cassimatis (2004). Grammatical Processing Using the Mechanisms of Physical Inferences. In Proceedings of the Twenty-Sixth Annual Conference of the Cognitive Science Society.

available at

As the author describes it,

" A model of syntactic parsing based almost entirely on the mechanisms in the physical reasoning model, making the case for the cognitive substrate principle. "

The author Nick Cassimatis, who is specifically oriented toward creating human-level intelligence, has

- articulated an explanation of infant-level physical learning in terms of his logic-based AI framework, PolyScheme (in which multiple reasoning algorithms interact using a common predicate-logic language)
- then shown how the same mechanisms and representations used for infant physical learning can be used for language learning

I first became aware of Nick's PolyScheme approach to AGI when we both presented at the AAAI workshop on Achieving Human-Level Intelligence Through Integrated Systems and Research, in late 2004.

I think PolyScheme is a sensible approach at heart, though as currently articulated it seems to me a long way from constituting a fully-developed architecture for AGI.

Genetics of Aging in Humans and Flies

In spite of the lack of focus placed on the subject by the "research funding powers that be", insights into the genetics of aging keep rolling out, month by month and year by year.

Some recent research discusses specific alleles that seem correlated with survival to age 90:

In a similar vein I just read biologist Michael Rose's very excellent book The Long Tomorrow

which recounts his work studying aging in flies. One of his more striking results is that after a certain age is reached, the mortality rate in flies stops increasing and stays constant. (The book is easy to read by anyone who understands high school biology, yet presents and describes important research without significant dumbing-down. It also does a pretty good job of getting across the flavor of modern experimental biology research ... and of emphasizing the point that a lot more progress toward curing aging could be made if society chose to devote resources toward this goal. Many very good scientists, such as Rose himself and Aubrey de Grey and many others, have promising ideas regarding how to better understand and potentially alleviate the aging process, but our society is more interested in spending money blowing people up and inventing new forms of fabric softener. Bummer, huh.)

Also, in reviewing the work of some other researchers, Rose notes that there seem to be 300-400 genes that behave differentially in old versus young flies. He disputes the ideas of some other researchers who claim that there may just be 1 or 2 genes serving as master control genes for the aging process. I tend to agree with him, yet, I wonder if a careful analysis of gene expression data from old and young flies might indicate that a few dozen of these 300-400 genes are more "central" and in a sense drive the behavior of the other differentially-behaving genes. This is something I'd be interested to work on myself if I could find some good data freely available. To oversimplify slightly, it's something that could be addressed by simply plugging some relevant microarray data into our Biomind software.
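
To sketch what I mean (on hypothetical data, not the actual fly dataset): correlate each differentially-expressed gene's profile with the others, then rank genes by how many strong correlations they participate in, a crude proxy for "centrality" in the underlying regulatory network.

```python
import random, math

def pearson(x, y):
    # Pearson correlation between two expression profiles
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_central_genes(expression, threshold=0.8):
    # expression: {gene_name: [level in sample 1, sample 2, ...]}
    genes = list(expression)
    degree = {g: 0 for g in genes}
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if abs(pearson(expression[g], expression[h])) >= threshold:
                degree[g] += 1
                degree[h] += 1
    return sorted(genes, key=degree.get, reverse=True)

# Synthetic example: one "driver" gene, five genes tracking it, five unrelated
random.seed(0)
driver = [random.gauss(0, 1) for _ in range(10)]
data = {"driver": driver}
for k in range(5):
    data["follower%d" % k] = [v + random.gauss(0, 0.2) for v in driver]
for k in range(5):
    data["noise%d" % k] = [random.gauss(0, 1) for _ in range(10)]
top = rank_central_genes(data)[0]
print(top)
```

Real microarray analysis would of course need proper normalization and multiple-testing correction, but the basic question -- which of the 300-400 genes sit at the hubs of the correlation network -- is exactly this simple to pose.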

The data used in this paper

would seem to be adequate for this kind of study, but, I have not found the raw data available online. I'm going to see if this research group is interested in collaboration.

On the theme of foolish allocation of resources, my contacts at the Centers for Disease Control suggest to me that the budget cuts hinted at in

are likely actually going to happen. Bush seems to be cutting the CDC's budget by 7% or so (to fund his war in Iraq and his differential tax cuts for the wealthy, I suppose). Of course, this doesn't directly affect aging research because in its immense foolishness the government does not consider aging a disease. But it affects aging research indirectly because there are commonalities between aging and other diseases. Fortunately, real scientific progress continues in spite of this sort of idiocy.

Robot Recognizes Its Own Mirror Image

Though substantially overhyped in this article

" Robot Demonstrates Self Awareness By Tracy Staedter, Discovery News "

this robot that recognizes its own mirror image and in this sense can distinguish "self from other" is still an interesting achievement, since it was done via adaptive hierarchical neural nets rather than any kind of total "cheating" methodology.

Still, my bet is that the way this little bot achieves self-recognition bears fairly little resemblance to how humans, apes or dolphins do it (these are the only species that seem to be able to consistently carry out this cognitive feat: )

Perhaps this line of research will lead to a branch of cognitive robotics focused on "distributed cognition" (cf. )

This is the sort of thing my colleagues and I will experiment with using our Novamente AI system in the AGISIM simulation world, as the NM/AGISIM connection matures. (But AGISIM doesn't support mirrors yet!)


I recently read a well-thought-out and elegantly written (though rather dense) article, Metaptation: The Product of Selection at the Second Tier, by David King. The concept is not a particularly new one; King riffs on the evolution of evolvability, learning how to learn, and Hofstadter's metaphor of knob-twiddling vs. knob-creation. The article is worth reading for King's eloquence and careful reasoning even if you're already familiar with all of the material (I'd estimate about two-thirds familiarity myself).

Briefly, metaptations are adaptations which are selected for in an evolutionary system not because of their direct effects on fitness, but because they tend to lead to the appearance of meaningful adaptations. An example would be a reorganization of an organism's genome that had no effect on its phenotype, but made deleterious mutations less common and/or "helpful" mutations more common (e.g., varying the size of an entire organism is more likely to succeed than varying the size of an individual organ). This kind of second-tier organization is particularly dear to me because it forms the intellectual grounding for my current research efforts, designing evolutionary-probabilistic learning algorithms that incorporate explicit metaptive mechanisms for on-the-fly creation of meaningful variables (i.e., Hofstadterian knobs).
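
A minimal computational instance of this second-tier selection (my illustration, not King's) is the classic self-adaptive evolution strategy: each genome carries its own mutation step size, which is itself mutated and hitchhikes on selection -- the "knob-twiddling knob" evolves alongside the knobs.

```python
import random, math

def fitness(x):
    return -sum(v * v for v in x)  # maximize: the optimum is at the origin

def evolve(generations=200, dim=5, seed=1):
    # A (1+1) evolution strategy with self-adaptive step size
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma = 1.0  # the second-tier, "metaptive" parameter
    for _ in range(generations):
        new_sigma = sigma * math.exp(0.2 * rng.gauss(0, 1))
        child = [v + new_sigma * rng.gauss(0, 1) for v in x]
        if fitness(child) >= fitness(x):   # selection acts on the genome
            x, sigma = child, new_sigma    # AND on its mutation step size
    return x, sigma

x, sigma = evolve()
print(fitness(x))  # should end up near 0 (the optimum)
```

Nothing ever selects for sigma directly; good step sizes spread only because they tend to produce fitter offspring -- which is exactly King's point about selection at the second tier.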

Note that this is a kind of group-level selection effect (which I am generally leery of), but carefully reasoned and circumscribed. The selfish gene paradigm is not negated but extended to recognize a hierarchy of selective levels. I could go on, but if you're still interested at this point, just go read the article :).


Cancer Genome Atlas project launched

Biology, one of the best-funded branches of science these days, marches rapidly ahead....

According to the Washington Post,

" Federal health officials yesterday launched the biggest genetic research endeavor since the landmark human genome project: an ambitious effort to categorize all of the hundreds of molecular glitches that turn normal healthy cells into cancers. The Cancer Genome Atlas, whose total cost could reach $1 billion or more, will for the first time direct the full force of today's sophisticated genetic technologies to the thorough understanding of a single disease -- one that will eventually strike nearly half of all Americans alive. "

Semi-relatedly, I had a meeting at the National Cancer Institute recently where I discussed with some researchers the need to use advanced pattern-recognition technology to find combinations of genes and proteins that contribute to cancer (rather than just studying the effects of individual genes in isolation, which is the default paradigm now). They understood the need, but I got the impression that their progress toward actually adopting a "radical" approach like this will be fairly slow.

However, if the NCI is going to put big bucks into trying to obtain a "complete" understanding of cancer, then they are going to run up against this problem pretty quickly.

At some point in the not too distant future of biology, genomics is going to meet systems biology -- i.e., enabled by sophisticated informatics, it will supply sufficient data to make simulation models of the interactions inside cells and organisms. At this point we will see really fast progress toward a fuller understanding of biological systems. Perhaps the study of cancer (since it's so popular with funding sources) will be the domain in which this transition occurs....

At dinner with a group of biologists in Melbourne earlier this year, I asked how long they thought it would be till human biology was basically finished (in the sense that we pretty much fully understand the human organism in its original unaugmented condition). No one wanted to venture a guess, but when I speculated "50 years", a couple folks were brave enough to agree with me... (I didn't get into the Singularity; it wasn't that kind of crowd ;-)

The management of uncertainty in the human brain: new experimental insights

When someone talks to me about using neuroscience to inspire AI theory, I always complain that we simply don't understand the brain well enough for this to be feasible yet.

I definitely stand by this statement -- but, I'm always excited when some neuroscience results come out that seem to have some connection with ideas I've encountered in my AI work.

Along these lines: Some recent neuroscience results, pointed out to me by Pei Wang, appear to qualitatively validate the approach taken in my Novamente AI system and Pei's NARS AI system (and some other AI approaches such as Walley's imprecise probability theory), in which numbers measuring frequency are augmented by additional numbers measuring the uncertainty in these frequency measures.

In other words, some of us maverick AI theorists have been saying for a while that using just ONE number (typically probability) to measure uncertainty is not enough. Two numbers -- e.g. a probability and another number measuring the "weight of evidence" in favor of this probability (or to put it differently, the "confidence" one has in the probability) -- are needed to make a cognitively meaningful algebra of uncertainty.
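
A rough numerical sketch, in the spirit of NARS-style two-component truth values (simplified; see Pei Wang's writings for the actual definitions), shows why one number isn't enough:

```python
K = 1.0  # "evidential horizon" parameter: how much evidence counts as a lot

def truth_value(positive, total):
    # frequency: the observed proportion of positive evidence
    # confidence: approaches 1 as the total amount of evidence grows
    frequency = positive / total if total else 0.5
    confidence = total / (total + K)
    return frequency, confidence

f1, c1 = truth_value(2, 2)     # 2 ravens observed, both black
f2, c2 = truth_value(90, 100)  # 100 ravens observed, 90 black

print(f1, c1)  # high frequency, but low confidence
print(f2, c2)  # slightly lower frequency, much higher confidence
```

A system tracking only the frequency would judge the first statement *stronger* than the second; the confidence component is what captures the intuition that 100 observations should count for far more than 2.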

Well, this paper suggests that the brain also reckons in terms of uncertainties-of-probabilities as well as probabilities themselves:

Neural Systems Responding to Degrees of Uncertainty in Human Decision-Making Ming Hsu, Meghana Bhatt, Ralph Adolphs, Daniel Tranel, and Colin F. Camerer, Science 9 December 2005: 1680-1683

Much is known about how people make decisions under varying levels of probability (risk). Less is known about the neural basis of decision-making when probabilities are uncertain because of missing information (ambiguity). In decision theory, ambiguity about probabilities should not affect choices. Using functional brain imaging, we show that the level of ambiguity in choices correlates positively with activation in the amygdala and orbitofrontal cortex, and negatively with a striatal system. Moreover, striatal activity correlates positively with expected reward. Neurological subjects with orbitofrontal lesions were insensitive to the level of ambiguity and risk in behavioral choices. These data suggest a general neural circuit responding to degrees of uncertainty, contrary to decision theory.

Of course, these results contradict aspects of traditional statistical decision theory, but they don't contradict mathematical probability theory in general -- just some particular, conventional ways of using it to study decisions. The way probability theory is used in Novamente, and the way it's used by imprecise-probabilities-theorists like Peter Walley, is actually somewhat validated by these findings.

The article can be obtained (for money) at

and some related journalistic discussion is at

Transvision 2006: Helsinki, Finland ... August

Another interesting conference coming up: Transvision 2006, the conference of the World Transhumanist Association, will be held in Helsinki, Finland this August.

This looks to be a really interesting one!

The conference organizer, Ari Heljakka, is a collaborator on the Novamente AGI project and the leader of the Finnish Transhumanist Association, and I know he is making a big effort to get a wide variety of interesting speakers with deep knowledge of, and a high level of current activity in, their subject areas.

If you're a researcher in a Singularity-relevant area of science or technology, please consider coming to Helsinki to give a presentation on your work. The call for papers is here:

Conference Session on Human-Level AI, Vancouver, July 2006

It is interesting to see that the academic AI community is finally, slowly, waking up to the notion that artificial general intelligence is valuable and viable and worth thinking seriously about.

For instance, in Vancouver in 2006, at this conference

there will be a panel session called “A Roadmap to Human-Level Intelligence” at which I, among other AGI-oriented AI researchers, will be presenting.

To quote the conference website:

Building intelligent systems with the human level of competence is the ultimate grand challenge for science and technology in general, and the computational intelligence community in particular. How are we going to achieve it? Several exciting projects aimed at reaching human-level intelligence have been formulated recently. Some of these projects start from low-level neuromorphic brain simulations, some focus on mesoscopic brain simulators, some are based on hybrid architectures and some try to develop higher-level cognitive functions at purely symbolic level. What are the merits, what are the limitations, and what can we expect at the end of each road? Potential applications span across areas of basic brain research and medicine to cognitive robotics and space research.

At the WCCI 2006 congress we plan to have a special session and a panel discussion aimed at defining a roadmap to building systems with human-level intelligence. It is a multi disciplinary subject demanding concentrated effort of experts from various fields. The emphasis will be on the scalability of the proposed models, defining the series of challenges that should be solved by these models, evolutionary and bootstrap approaches that may bring us there faster. Before the panel we shall have a special session where position papers will be presented. An edited book containing expanded versions of these papers will be published after the conference. Such organization should allow us to concentrate more on intensive discussion during the panel.

Hacking Sleep (or, too much to do before the singularity strikes)

Greetings, post-interesting readers! One of the banes of pre-singularity existence is time-crunch. Arguably this is getting worse as things accelerate (certainly so for doctoral-student-AGI-developer-wannabes), and drastic measures such as hacking one's sleep cycle might be called for.

The basic idea is to train your body to go directly to REM sleep, bypassing (presumed) less essential stages. This is known as Polyphasic Sleep. In the "Uberman Sleep Schedule" (no, I didn't invent the name, and neither did Ben, AFAIK) this is accomplished by replacing nightly "monophasic" sleep with six evenly spaced 20-30 minute naps.

Research seems sparse, but it sounds plausible at first glance. Evolutionary psychology might suggest that it's not wise to chronically meddle with something this... On the other hand, we moderns certainly meddle with plenty of other things and, to an extent, seem to get away with it.