Some people see a future populated by billions of mechanical micromachines, robots no bigger than a speck of dust that are programmed to do our bidding. UCSF School of Pharmacy researcher Christopher Voigt, PhD, sees a different future. He sees living micromachines that can be engineered into a new kind of pharmaceutical. His micromachines are bacteria.

In collaboration with scientists at the United States Department of Energy's Lawrence Berkeley National Laboratory, Voigt is making bacteria that are modified genetically to perform a variety of jobs. "They might be called upon to clean up toxic chemicals or to seek out and attack cancer cells," Voigt says.

Voigt's work is one of the many applications of a field called systems biology.

Well-informed linkages between genotype and individual phenotypes will require a completely new generation of simulation models that are suitable for rigorous experimentation.

Rational development of new therapeutic interventions, including new drugs, requires understanding the functional interactions between subcellular networks, the functional units of cells, organs and systems, and how the emergence of disease alters them. If the required information is somehow encoded in the genome, it is currently invisible. Some of it may be hidden within the protocols used by macromolecules to interact within networks and modules (Csete & Doyle 2002). Absent the needed information, our current best alternative is to “copy” nature (build models that reflect our current state of knowledge) and compute these interactions (i.e., simulate) to determine the logic of healthy and diseased states. The impressive growth in bioinformatics databases and the relentless growth in computer power have helped open a door to new methods to explore functionality hierarchically from genes to individual patients. In the meantime, the rapid accumulation of biological information and data is overwhelming our ability to understand it. Many of the newer ideas that we have cannot be easily tested.

Genomics has provided us with a massive “parts catalogue.” The details about the individual “parts” and their structures are emerging from proteomics. That is a good start. However, there are very few entries in our “user's guide” for how these parts interact to sustain life or cause disease. Too frequently, the cellular, organ and system functions of these parts are unknown. Clues are emerging from homologies in the gene sequences and elsewhere, but that is not enough. Successful development of rational therapeutic interventions will require knowledge of how the parts behave in context, as they interact with the rest of the relevant cellular machinery to generate functions that only become evident at higher levels. However, there is simply not enough time, and there are not enough people or resources, to get the needed information using the traditional approach of well-controlled experiments that carefully and systematically manipulate one or a few variables at a time. Without this integrative knowledge, we will likely be left in the dark as to which parts are most relevant in disease states.

Searching for patterns in genome and gene expression databases alone will not get us very far in addressing these particular problems. There is a fundamental reason. Genes code for RNA and protein sequences. They do not explicitly code for the blueprint of interactions between macromolecules and other cell components. Nor do they indicate which proteins occupy critical nodes in the hierarchical web of events supporting cell, organelle, and system function in health and disease. Much of the logic within the dynamic network of interactions in living systems is implicit. Wherever possible, nature leaves much of the detail to be “engineered” by context-specific designs, by the refinement of the molecules themselves, and by the exceedingly complex way in which their properties have been influenced and exploited during evolution. There is no genetic code for the properties and roles played by water, yet these properties, like many other naturally occurring physicochemical properties, are essential to life on earth. As Noble observed (Noble 2002a), “It is as though the function of the genetic code, viewed as a program, is to build the components of a computer, which then self-assembles [in precise sequences] to run programs about which the genetic code knows nothing.” Similarly, Sydney Brenner (1998) observed that “Genes can only specify the properties of the proteins they code for, and any integrative properties of the system [of which they are a part] must be ‘computed’ by their interactions.” In order to discover and understand these interactions we need to compute them, and so Brenner concluded, “this provides a framework for analysis by simulation.”

Important lessons have been learned from over a decade of research in metabolic engineering. They clearly teach that when the complexity of a system is too great to grasp intuitively, we must use computer models to make further real progress (Bailey 1999). Depending on time and place, individual macromolecules may participate in multiple pathways. Individual differences in sex, age, disease, and even internal and external environmental factors can dynamically alter the background against which macromolecular function is expressed. In such a context, an important added value of modeling and simulation is that they can be used to hypothesize new approaches, and to identify where gaps in knowledge exist. One can determine whether or not existing data are sufficient to generate the system output under study. When they are not, the models can be used to suggest possible directions for further study and to offer predictions about possible results. For such a process to be successful it is essential to have an iterative interaction between modeling, simulation, and experimentation. We already know that computational modeling and simulation of biological systems can add significant value in the discovery and development of new therapeutic agents (Noble & Colatsky 2000, Noble 2002b). Within such virtual environments the researcher may also conduct experiments to systematically test the possible impact of different conditions. The results allow one to select the best overall design principles in advance of real-life studies. They may also be used to help the researcher conduct virtual genetic studies in which cellular components are 'knocked out' or 'knocked in'. The resulting information may then be used to design new drugs, to carry out a more advanced research plan, or to define the optimal therapeutic profile of a new drug prior to chemical synthesis. The researcher can even explore in advance, in a rational and systematic way, whether the most effective treatment is a drug that acts specifically on a single target or one that acts at multiple targets (as is the case for the potent antibiotic pristinamycin), and in what relative proportion these additional activities might be expected to occur. Finally, one can envisage that by combining multiple models one can prospectively investigate issues of clinical safety and efficacy to answer questions about toxicology and pharmacodynamics at the level of the individual.
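
To make the idea of computing interactions and running virtual knock-out experiments concrete, here is a minimal sketch in Python. It is only an illustration, not a model from the literature: the three-protein cascade, its equations, and every rate constant are invented. A knock-out is simulated simply by zeroing the relevant rate constant and re-running the simulation.

    # Minimal sketch of an in-silico "knock-out" experiment of the kind
    # described above. The three-protein cascade (A activates B, B activates C)
    # and all rate constants are hypothetical, chosen only for illustration.
    import numpy as np
    from scipy.integrate import solve_ivp

    def cascade(t, y, k):
        """Toy ODE model: A is made constitutively; A drives B; B drives C."""
        A, B, C = y
        dA = k["syn_A"] - k["deg_A"] * A
        dB = k["act_B"] * A - k["deg_B"] * B
        dC = k["act_C"] * B - k["deg_C"] * C
        return [dA, dB, dC]

    # Baseline ("wild-type") parameter set; every value is made up.
    k_wt = {"syn_A": 1.0, "deg_A": 0.5,
            "act_B": 0.8, "deg_B": 0.3,
            "act_C": 0.6, "deg_C": 0.2}

    # Virtual knock-out: eliminate B's activation, as if its gene were deleted.
    k_ko = dict(k_wt, act_B=0.0)

    t_eval = np.linspace(0, 50, 200)
    for label, k in [("wild-type", k_wt), ("B knock-out", k_ko)]:
        sol = solve_ivp(cascade, (0, 50), [0.0, 0.0, 0.0], t_eval=t_eval, args=(k,))
        print(f"{label}: downstream output C at t = 50 is {sol.y[2, -1]:.2f}")

Comparing the downstream output C under the two parameter sets shows, in miniature, how simulation lets one ask which components matter most, and what a proposed intervention might do, before any wet-lab experiment is run.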

C. Anthony Hunt, PhD
The University of California, San Francisco
© 2003

References

  • Bailey JE, 1999. Lessons from metabolic engineering for functional genomics and drug discovery. Nat. Biotechnol. 17:616-618.
  • Brenner S, 1998. “Biological computation.” In: The limits of reductionism in biology. Wiley, Chichester (Novartis Found. Symp. 213), pp. 106-116.
  • Csete ME, Doyle JC, 2002. Reverse engineering of biological complexity. Science 295:1664-1669.
  • Noble D, Colatsky TJ, 2000. A return to rational drug discovery: computer-based models of cells, organs and systems in drug target identification. Emerg. Therap. Targets 4:39-49.
  • Noble D, 2002a. Modeling the heart—from genes to cells to the whole organ. Science 295:1678-1682.
  • Noble D, 2002b. The rise of computational biology. Nat. Rev. Mol. Cell Biol. 3:460-463.

In the near future, within the larger sphere of therapeutics, where will one find the more exciting, more rewarding career opportunities and options?

Pharmaceutical and biotechnology companies are increasing the productivity of their research organizations on a massive scale in order to deliver new, safer, and more effective drugs, and to maintain competitive growth. There is also a pressing need to decrease the time (and cost) from therapeutic concept validation to new drug approval. During the past 40 years, the median cost to market a drug has increased from $50 million to $500 million. The time required for drug development and approval has increased from eight to thirteen or more years, while the failure rate of new drug lead compounds has stayed constant at about 90%. The needed productivity improvements become even more important with the drug market's anticipated increasing segmentation, which is driven in part by the 'omics' revolution (as in genomics, proteomics, etc.). The growing pharmacogenomic expectation is that new technology can be used to identify population subsets that are better targets for new drug development than the entire population. Industry's response, thus far, has been to expand and fortify technology domains along the discovery/development pipeline. The most visible are combinatorial chemistry, high-throughput screening, in vivo screening, analytical chemistry, drug safety, ADME (absorption, distribution, metabolism, and excretion), toxicology, pharmacokinetics, genomics, proteomics, etc.

I maintain that over the next 20 years new diagnostic technologies will allow more new drugs to be targeted to specific subsets of patient populations. Such therapeutic "optimization" and individualization will require that the pharmaceutical and biotechnology private sector increase research and development efficiency in order to take advantage of the new, smaller market sectors for subset-specific drugs. The pharmaceutical industry will face additional challenges with the continuing climate of mergers and acquisitions, which is certain to put extra burdens on the industry's resources due to the increasing need for research by cross-functional teams working across remote geographical locations, diverse scientific and cultural backgrounds, and the diverse processes of the entrepreneurial segment of the business.

To meet this challenge it is imperative that companies leverage all the available knowledge, particularly at the intersection of experimentation, bioengineering, simulation, modeling, and informatics. The entire pharmaceutical R&D process is characterized by people making decisions. Yet to make better decisions, people require rapid access to intelligently organized knowledge and the emerging tools to mine it for informational gems. A growing need exists to capture all available information and knowledge, including a company's past experimental data, corporate successes and failures, and so on, and to disseminate that knowledge in easily accessed form throughout the company. A critical factor for future success will be to find an efficient way to provide all decision makers (bench scientists, engineers, R&D managers, business development and licensing executives, etc.) with access to semi-intelligent decision support tools that have direct access to the relevant knowledge and information.

Within the decade, as the genomics waves advance, the number of potential molecular therapeutic targets will increase by a factor of 10 to 60. The companies that will benefit most will be those that adopt a more systematic, computationally enhanced approach to drug discovery supported by an underlying, integrated engineering and informatics framework. Moreover, the implementation of such an integrated, enterprise-wide framework is expected to leverage the improvements and innovations being made in each of the above technology domains that contribute to new drug approvals. That leveraging process will have a fundamental impact on the pharmaceutical and biotechnology industry, reducing the time and cost required to bring a new drug to market.

The future of the healthcare enterprise and its impact on human health will depend on how well the next generation of scientists is able to integrate all the technology domains across the R&D cycle into a coherent, informed strategy for drug discovery and realization. The critically important incorporation of clinical data into this strategy is expected to dramatically increase the probability that a lead compound will become a marketable drug, and so will further reduce the time and cost required for drug discovery and development. Such change, such evolution, affords tremendous opportunities for the new generation of scientists and bioengineers who will be joining the profession during the coming decade and beyond. Realizing that opportunity, however, will require changes in both the philosophies and the processes of research and graduate education. Those individuals who understand this biological and medical informatics imperative and are willing to adopt new technologies and mind-sets to help bring it to fruition will become the new leaders. They will help produce a fundamental, positive impact on human health and quality of life.

C. Anthony Hunt, PhD
The University of California, San Francisco
© 2001

How will the daily life of a research scientist or research engineer in 2015 differ from that of today or of ten years ago?

There is a growing consensus that an increasing fraction of the more exciting, pivotal, biomedical research both in academe and industry will be interdisciplinary and will be undertaken by multidisciplinary teams. Answers to important biomedical questions are increasingly within the reach of such team efforts, and beyond the resources of even the most accomplished individual investigator.

Advancing technology drives science, and in so doing provides an expanding menu of relatively lower cost experimental options. They, in turn, expand the range of questions that scientists and research engineers can ask. Today, a majority of the important research in physics is done by large teams. That pattern is beginning to pervade other fields, and is already evident in biomedical research. The independent individual scientist with several dedicated minions has served as the icon of science, especially academic science, for a couple of hundred years. That model has served us well and will continue to do so. In the future, however, it will no longer be the norm. The most exciting opportunities, in industry and soon in academe, will be for scientists who have interdisciplinary training and who are comfortable and efficient working in teams. They will have the skills, including critically needed communication skills, to fill a variety of problem solving roles.

Completing an interdisciplinary Ph.D. program within five years, so that it seamlessly launches one into the professional arena in the desired direction (e.g., industry, academe) is a complex, multistep process. It needs to be planned and coordinated with other activities. Milestones need to be set and achieved in a timely fashion if the target is to be reached within the allotted time. The process is not unlike a decathlon that has been designed to span several years. As with the athlete in a decathlon, the emerging scientist needs a coach. The Ph.D. Mentor (defined below) fills that role.

Unfortunately, most biomedical science and engineering Ph.D. programs today are still funded and organized to train graduate students to be classical, independent scientists. Typically, the new graduate student in such a program must select a single faculty member to be his or her 'research advisor.' S/he then develops a highly specialized research project around an important question selected by or in consultation with that research advisor. After the thesis is completed and accepted, the new Ph.D. scientist typically migrates to a new laboratory at a new location, a university, a company, or a research center, to serve and learn under the direction of another individual scientist.

Prof. Hunt and others are implementing a new, quite different research training opportunity, one that builds multidisciplinary expertise along with essential collaborative and communication skills. We aim to foster scientific vision and independence rather than dependence. We aim to sharpen the trainee's creative and problem solving skills. This evolving plan envisions a logical multi-step process:

1. The student, aided and supported by his or her Ph.D. Mentor, selects an initial multidisciplinary area as a starting point, likely one that fits with the student's emerging interests. The area may be represented by aspects of current science and engineering that the student finds most appealing. Or it may be the multidisciplinary area represented by aspects of the research and interests of three or more faculty members. For example, the areas of interest may be molecular bioengineering, computer science and pharmacology, or drug metabolism, genetics and cancer.

2. Once the overlap among the areas is clear, the student identifies the questions or problems that are most interesting, most appealing, or most worthy of answering. Specifically, the student builds, revises, and narrows a list of interesting research questions that are expected to require knowledge and expertise in the three designated areas. Faculty members, fellow graduate students, and postdocs are all typically willing to help. Rotation projects facilitate this process. Experience shows that from such a process at least one broad, exciting research area will emerge.

3. In parallel with the above, one must identify faculty collaborators. During the preceding process, two, three, or more faculty collaborators will likely emerge. One will be the Ph.D. Mentor. Those identified may be within the same institution (e.g., all UCSF faculty members). One, however, may be from outside: a scientist working at a local biotech company or a faculty member at another university.

4. The next step is to transform the identified, broad problem area into a research proposal that will become the research focus for one's oral qualifying examination, and ideally for one's thesis project as well. Typically that transformation requires undertaking some preliminary research. As soon as one begins the research, one is immediately faced with the realities of being a project manager. Even though the graduate student is the junior partner in the emerging collaboration, s/he must become the project manager. For that role the student needs a degree of independence from each senior collaborator. The successful project is one where each faculty collaborator sees the merits of the project and is thus willing to materially contribute resources (space, reagents, computer hardware and/or software, stipend support, etc.). The student's Ph.D. Mentor will be the science management tutor, and will help ensure that all necessary resources are available and that a reasonable, achievable timeline is in place.

No scientist today can have the depth of knowledge and expertise in all of the specialized sub-fields that are relevant to his or her research. For the graduate student following the plan outlined above, it is the research, the science, and the emerging questions that will dictate the areas of specialized knowledge that the student will need to acquire. The Ph.D. Mentor will assist, guide, and advise. Collaborators will contribute what is needed but is missing. In so doing they become a partner, a stakeholder, in the project. Clearly, the faculty collaborators will have been selected because they are expected to bring considerable additional knowledge and/or resources to the project. At times, additional specialized expertise may be needed.

Traditionally, the new graduate student selects a research advisor toward the end of his or her first year. Next, s/he becomes a member of the new research advisor's laboratory, selects one of the on-going research projects within the lab, and then begins doing research, usually following the directions of a senior graduate student, a postdoc, or the new research advisor. The thesis project typically emerges from research already underway within the laboratory. It is common for the student to become completely dependent on his or her research advisor. As the end of the Ph.D. process approaches, it is that advisor who determines when the student's research is finished. The Ph.D. Mentor plan, on the other hand, strives to foster a much different learning and training experience. The Ph.D. Mentor's goal is to work with the trainee (and the trainee's academic program advisor) and the collaborating scientists to develop a research and training plan that extends beyond the Ph.D. program to include the first, critically important, post-Ph.D. career move. The Ph.D. Mentor's role is thus larger than that of the traditional Ph.D. research advisor.

C. Anthony Hunt, PhD
The University of California, San Francisco
© 2001

Research and development (R&D) are interconnected, cyclic, evolving, adaptive processes. The three major stages of R&D, as defined by the National Science Foundation, are as follows. The separations are not sharp. We prefer to draw an analogy to the three primary colors, yellow, red, and blue, and to think of R&D in the not-for-profit and for-profit sectors as representing a rainbow of activities. The value of each activity is dependent on the level of creative activity in the others.

Basic research: The objective of basic research is to gain more comprehensive knowledge or understanding of the subject under study, without specific applications in mind. In industry, basic research is defined as research that advances scientific knowledge but does not have specific immediate commercial objectives, although it may be in fields of present or potential commercial interest. Understanding how a protein folds, or how a specific molecule elicits a particular biological response, is an example of basic research.

Translational research (also called applied research): Translational research is aimed at gaining the knowledge or understanding needed to meet a specific, recognized need or to solve a specific problem. Translational research includes investigations oriented to discovering new scientific knowledge that has specific objectives, for example with respect to systems, products, processes, or services. Finding a better treatment or diagnostic for a disease is an example of translational research.

Development: Development is the systematic use of the knowledge or understanding gained from basic and translational research directed toward the eventual production of useful materials, devices, systems, or methods, including the design and development of prototypes and processes. Making a new vaccine against AIDS and testing it in animals is an example of development.

It is frequently said that technology drives basic and translational research and that basic research fuels technological development. The truth is that this is not a linear process. Disconnected, the value to society of the three phases of R&D is drastically discounted. R&D is cyclic and iterative, and frequently combinatorial. Frequently, forward progress in basic research requires a shift in focus to translational research and to development in order to devise a method that enables the next step along the original basic research path.

C. Anthony Hunt, PhD
The University of California, San Francisco
©2001

Bill Hewlett (cofounder of Hewlett-Packard) always told his employees to "attack the undefended hill": to do the research that leads to the creation of products, services, and processes that no other company makes or provides. History shows that by creating unique innovations, HP moved into markets with no competition and generated very fast sales and profit growth as clients learned about its new solutions to previously unmet, sometimes unnoticed needs. Hewlett told his people to discover and occupy such hills, rather than fight the defenses on high hills that other companies had already discovered and occupied. Intuitively, he knew that the R&D landscape (or knowledgescape) offers many undiscovered high hills, and that his people would find them if they looked for them. That lesson has been well learned and is now taught in all of the better business schools.

One can easily draw an analogy between Hewlett's undefended hills and the landscape of cutting edge science and engineering research. Beyond the cutting edge, one finds an uncharted knowledgescape of undefended hills, large and small. Behind the cutting edge the knowledgescape is populated by "established" scientists and researchers representing new and established fields. Occupied and often well defended hills are visible everywhere, especially within the traditional disciplinary domains. As a graduate student, as an emerging research professional, where do you look to find the best research projects, the best career opportunities? The natural inclination is to follow those ahead of you and to seek the safety of the (apparently friendly) crowd. Furthermore, when looking for a research project or when deciding on a research direction, one's initial impulse is to look toward the apparent safety of occupied hills.

Realistically, however, the researcher newly arrived on a highly populated tall hill is simply not as visible (to the world of potential employers) as the person climbing a newly discovered, unoccupied hill on the frontier side of the cutting edge. The history of science and engineering departments within major research universities, as well as all the rapidly growing segments of the high technology private sector, clearly shows that more doors, more opportunities, are opened for the more visible person climbing the previously unoccupied hill.

There are risks and there are problems with seeking undefended hills that fit your talents and interests. The cutting edge is risky territory. Portions of it are unexplored and much of it is shrouded in "fog" (the fog of lack of knowledge and information). To see the local landscape—the local unoccupied hills—through this fog, you need a compound lens of knowledge, the instincts that come with experience, and some luck. The visibility is far greater where established fields have cleared away the "fog." These areas are easy to spot; they are behind—they trail—the leading edge. Not only is the visibility there greater; the level of activity is greater as well. Remember that the cutting edge is moving. It is advancing left, right, and forward into the fog-shrouded places, and the excitement and activity that characterize the cutting edge move with it.

Frequently, the explorers who discover and first occupy new hills choose to stay. They become scientific "settlers." They need and welcome the flood of colonizers that follows (including graduate students, postdocs, and other scientists), and begin "empire building."

The above "territory," "explorer," "defended hill" analogy is useful, within limits. In a real landscape the size of a hill is fixed. It changes little. That's not the case on the corresponding, imagined knowledgescape. A new knowledge hill can grow and expand as the occupant(s) of that hill generate and discover additional new knowledge. The opposite can also occur. A poorly assembled "compound lens of knowledge" can cause optical illusions that make a mound initially seem like a promising hill. As the newly arrived occupant(s) work(s) to generate and discover additional new knowledge, that promising hill may soon be seen as nothing more than a bump in the knowledgescape, or possibly just a mirage.

At the cutting edge there are vast, unexplored territories between the populated domains, the established fields. The courageous graduate student will choose a research direction and will seek out research projects within the fog-shrouded territory between, and, ideally, slightly ahead of the established fields. At that point, a knowledgeable guide can be incredibly helpful.


C. Anthony Hunt, PhD
The University of California, San Francisco
© 2001