Big Blue Descendant May Be Future Nobel Winner

April 4th, 2009

Posted by: admin

Computers and robots have helped automate science dramatically over the last 70 years.  In most cases this has involved so-called ‘brute force’ applications, where the sheer scope of the calculations demanded enormous amounts of time and labor.  Computers displaced the legions of human calculators – usually women – employed to do astronomical or ballistic calculations by hand (read When Computers Were Human).  Computers and robotics helped make genome mapping something that can be done in months rather than decades.

In what could prompt another shift in scientific human resources, there are reports (H/T Wired Science) of a robot actually forming and testing hypotheses.  Working on the baker’s yeast genome, the robot Adam set out to fill in gaps in our understanding of the yeast’s metabolism.  By scanning a database of similar genes and enzymes from other organisms, Adam used algorithms to identify genes in the yeast genome that might correspond to orphan enzymes – enzymes with no known gene coding for them.  After forming a hypothesis, Adam would conduct the experiment, analyze the data, and refine the hypothesis.  The robot’s designers recently confirmed by hand Adam’s discovery of three genes that code for an orphan enzyme.
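The hypothesize–test–refine cycle described above can be sketched in a few lines of code.  This is a toy illustration, not Adam’s actual software: the gene names, the similarity scores, and the simulated “experiment” are all invented stand-ins for the real database scan and automated wet-lab assay.

```python
# Toy sketch of a hypothesize-test-refine loop (all data here is invented).

# In the real system, a wet-lab experiment determines which candidate gene
# actually encodes the orphan enzyme; here we hard-code the "ground truth".
TRUE_CODING_GENES = {"YER152C"}

# Candidate yeast genes ranked by similarity to genes coding for the same
# enzyme in other organisms (hypothetical scores).
candidates = {"YER152C": 0.92, "YGL202W": 0.87, "YLR089C": 0.41}

def run_experiment(gene):
    """Simulate a knockout/growth assay: does this gene account for the
    enzyme activity?  (Stands in for the robot's automated experiments.)"""
    return gene in TRUE_CODING_GENES

def hypothesize_and_test(candidates):
    """Test candidates in order of similarity, keeping hypotheses the data
    support and discarding the rest."""
    confirmed, rejected = [], []
    for gene in sorted(candidates, key=candidates.get, reverse=True):
        if run_experiment(gene):   # hypothesis supported by the data
            confirmed.append(gene)
        else:                      # refine: drop this hypothesis
            rejected.append(gene)
    return confirmed, rejected

confirmed, rejected = hypothesize_and_test(candidates)
print(confirmed)  # ['YER152C']
```

The point of the sketch is only the control flow: candidate hypotheses are generated from prior knowledge, ranked, and then winnowed by experiment rather than by human judgment.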

At the moment, designing these robots will continue to be a specialized affair – one robot for a particular area of research will likely be different from a robot for a different area of research.  But if there is the potential of standardizing, or at least spreading, such robot development, how and where science can be conducted could change dramatically.

Once again, some of the routine tasks of science will not require people.  There are plenty of potential consequences of this shift, especially since many of those affected would hold Ph.D.s.  Lab heads won’t need as many people on staff, reducing the need to import scientific talent and shrinking the number of positions available in the traditional, apprentice-like, guild structure of scientific training.  Given the near-obsessive focus on numbers in the decades-long arguments over the scientific pipeline, one potential policy outcome of such a shift would be to declare the problem of scientific talent solved.  That may not reflect reality, but it wouldn’t be the first time.

On the other hand, this kind of automation has the potential to dramatically decrease the costs of research.  If certain kinds of research no longer need as many people, the costs of their training, salary, and benefits decline or disappear.  Start-up costs for junior researchers and professors decline.  This would allow for research to be conducted at institutions and in places where resources are not as flush.  Running a small lab out of a state school or in an emerging market may not carry as many challenges (or as much stigma) as it does now.

This is, of course, speculative.  But if the policy challenges of emerging science and technology are to be effectively addressed, some speculation will be required to meet the changes as they happen, rather than after it is too late to address them.

4 Responses to “Big Blue Descendant May Be Future Nobel Winner”

  1. Mark Bahner Says:


    This is one more reason why anyone who is really interested in predicting future events should read a whole lot of what Ray Kurzweil has written.

    For instance, how many mammograms does a world-class radiologist review in a career? Well, why not have a whole team of world-class radiologists judge a set of mammograms, then tie in their judgements with the image of the mammogram? This is all tremendously complicated, I’m sure.

    But the end product is that one has a computer that has the knowledge of a whole team of world-class radiologists, and can read tens of thousands of mammograms an hour, and also learn and remember all the outcomes of all the cases that were related to those mammograms.

    That eventually means anyone in the world has access to a computer “expert” that has read tens of millions of mammograms. And never sleeps or blinks, or gets distracted. Amazing.

  2. Maurice Garoutte Says:

    While robots for different fields will certainly require different knowledge bases, some sub-systems could be common. For example, standardized visual perception and natural-language interface modules could be the same for many special-purpose robots.

    My personal contribution toward this goal was a low-level module that transforms pixel-based images into symbolic data, similar to the function of the human retina. Although implemented in a security system, the software was structured to allow use in a generic visual perception system.

    One more item of interest, at least to me: I believe that new information is processed in the context of prior trusted knowledge. If that is true, could we achieve intellectual diversity in the laboratory by loading different robots with different memories, even if they all have the same logic inference engine?

  3. Approaching the Singularity from Two Points « The Emergent Fool Says:

    [...] via Prometheus, Wired reports on a robot-software combination that was able to generate, test, and refine [...]

  4. Mark Bahner Says:

    This webpage has info on automated warehousing:

    …higher productivity, expected energy savings, increased security (less pilferage of products).

    There’s the future. In fact, I expect that in less than 2-3 decades it will be possible to “virtual grocery shop,” with the products taken from a completely automated warehouse (incredibly densely packed, and dark, with no parking lot) and delivered to one’s door by an autonomous vehicle.