The Future of the Web, Intelligent Devices, and Education


EDUCAUSE Review, vol. 42, no. 1 (January/February 2007): 32–47.

Howard J. Strauss received the 2006 EDUCAUSE Leadership Award (sponsored by SunGard Higher Education, an EDUCAUSE Platinum Partner) for "visionary, entertaining, and provocative thought leadership in the world of higher education information technology." At the time of his death in September 2005, Howard was Manager of Academic Outreach for Princeton University, providing education and outreach services for faculty and charged with raising the level of academic IT use and sophistication on campus. During almost thirty-five years at Princeton, he held a variety of positions in administrative, systems, and academic computing. As manager of Princeton's Advanced Applications group, he found ways to turn the latest information technology into practical applications.

Howard was probably best known to his professional colleagues for his quirky and insightful writings and presentations. He published dozens of articles and was a regular contributor to several computer journals. He was an institution at Educom, CAUSE, EDUCAUSE, and other higher education conferences, where he regularly spoke to standing-room-only audiences. Attendees could count on him to highlight important new technologies, offer a unique vision of the future, and articulate that vision with humor and creativity. Iconoclastic and often contrarian, he was able to make computing approachable and understandable while he also challenged the most technically knowledgeable members of his audience.

Dan Oberst, a friend and former colleague, said of Howard:

For years, Howard was a mainstay at the Educom, CAUSE, and EDUCAUSE conferences, filling his workshops and earning high praise from the participants. Through his many presentations, lectures, articles, and CREN TechTalks Webcast series, he presented technology in ways that informed and amused, but always caused his audiences to think.

Howard was not shy about peering into the future and describing what he saw. In the following article, first published in 1999, we see hints of Service Oriented Architecture before it became an acronym, the iPod, Google's and Microsoft's "software as Web service" offerings, and even a description of Wikipedia. Yet his futuristic talks and articles, filled with "gee-whiz" gizmos and projections, were aimed also at changing how people thought about what was coming next. If you thought the future was going to look like more of the present, or like a better or faster version of the present, you were looking in the wrong direction, he'd remind you. I suspect he would also be happy to remind us, today, that there is much more we can do, especially in higher education. SMILE and NOAH, described in this article, remain only ideals and ideas of how computing and education might intersect. The question for us is whether the gamers and social networks will deliver them before we do.

The idea of reprinting a seven-year-old article with the word "future" in the title would give most people pause. But I think it's just the kind of challenge that Howard Strauss would gladly have taken up.

Editor's Note: Dan Oberst, Director of Enterprise Infrastructure Services in the Office of Information Technology at Princeton, passed away on November 9, 2006, shortly after contributing these comments about Howard. Like Howard, Dan will be missed by numerous friends, family, and colleagues in the higher education IT field.

The following article is reprinted from Educom Review, vol. 34, no. 4 (July/August 1999). © 1999 Howard Strauss and Educom (now EDUCAUSE). Reprinted by permission of the estate of Howard Strauss.

Predicting the future cannot be done by anyone with any certainty. Most of us would be hard-pressed to predict with any accuracy what we will be eating for dinner a week from now. Predicting what twists and turns technology will take is far more difficult than ascertaining our future menus, but it is a task we all must undertake if we are to plan for tomorrow. And planning for tomorrow is something we all must do. The future has a way of arriving a little before we are ready to give up the present. Having some idea of what is coming won't take all the surprise and mystery out of the future, but at least we will be a bit better prepared.

To predict the future, we often look at trends and make reasoned guesses about which ones will continue and where they will lead. In essence, we accept that the past contains the seeds of the future, which we can predict if we can correctly read the DNA of those seeds. That is never an easy task.

For example, the history of U.S. commercial aviation, from its inception until 1958, reveals a steady increase in the speed of aircraft. Someone in 1958, looking at that trend and realizing that the "new" Boeing 707 jetliner could fly at nearly 600 miles per hour, might have concluded that fifty years later we all would be flying three to ten times faster. But of course we are not. Except for the Concorde, on which few of us travel, commercial aircraft fly just a few percent faster than they did in 1958. It was not that the speed of sound was a technological barrier we couldn't cross. Instead, an economic barrier and a sociological barrier (the United States would not allow sonic booms over its landmass) stopped us. These were barriers we could not have easily predicted.

In the 1930s, automobiles were traveling at 70 miles per hour on roads ill equipped to accommodate them. In the 1970s, cars had slowed down to 55 miles per hour on modern, limited-access highways that could have accepted much higher speeds. Once again, social and economic issues conspired to stop technological advances.

Since the invention of the vacuum tube in the early 1900s, great efforts have been spent to make it both smaller and more efficient. By the late 1940s, tiny low-power tubes were commonly available. Not realizing the implications of the invention of the transistor in 1947 at Bell Labs, a person might have predicted that one day soon, vacuum tubes would be the size of a pencil eraser and would consume just a fraction of a watt. It is a common trap to look at what we have today and assume that tomorrow will have the same thing, only better. But in the history of the world, dinosaurs died out completely. They did not continue to evolve indefinitely into better dinosaurs.

In this article I will look to the past for trends in hardware, software, networking, and education and attempt to extrapolate where they are going and what their broad implications might be. But readers should be wary. There are many different ways that trends can be interpreted, and it is easy to pick trends that support one's thesis and ignore ones that don't. This article may prove to be a road map for the future; it will at least provide some new visions of what could have been and what still might be.

Where We Were

The recent past has been characterized by the explosive growth of the World Wide Web, often called simply the Web. In August 1981 about 200 computers were connected to the Internet; by July 1998 there were over 36 million Internet hosts. Today there are so many URLs (Web addresses) that examining them all at the rate of one per second (an impossibly fast clip) would take over eight years. But in truth, no one knows how many URLs there are. Lucent Technologies estimates the number at over 830 million. If Lucent is right, our examination of all URLs would take closer to twenty-seven years.
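
The arithmetic behind those estimates is straightforward to check. The short Python sketch below is not part of the original article; it simply reproduces the back-of-the-envelope figures quoted above.

    # Back-of-envelope check of the URL-counting figures quoted above.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365          # about 31.5 million seconds

    # "Over eight years" at one URL per second implies at least this many URLs:
    implied_urls = 8 * SECONDS_PER_YEAR
    print(f"Eight years at one URL per second covers about {implied_urls / 1e6:.0f} million URLs")

    # Lucent's estimate of 830 million URLs, examined at one per second:
    years_needed = 830_000_000 / SECONDS_PER_YEAR
    print(f"830 million URLs at one per second takes about {years_needed:.1f} years")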

Today, over 150 million people use the Web. Here again, estimates are at best fuzzy. The user count is expected to more than double each year. Elementary math quickly shows that this trend cannot continue for very long: starting from 150 million and doubling annually, the number of Web users would exceed the Earth's population of roughly six billion within about six years. However, I expect this trend to continue and in fact to accelerate far into the future.

Today almost all Web users are people. Tomorrow, intelligent devices operating on behalf of people and institutions will dominate the Web. This will come about because of the accelerating decrease in the cost, weight, size, and power requirements of computer hardware, the increase in the speed of networks, and the growing wireless infrastructure now being put in place.

In the past the Web was used to display documents and images. Today it is being used for education, research, software distribution, audio and video conferencing, and electronic commerce. Commercial uses of the Web are the fastest-growing Web sector. The .com domains make up about half of all Web domains, with the .edu domains far behind. With billions of dollars in Web commerce already being conducted, the Web seems to threaten the current mode of business. How long will local bookstores remain in business when online bookstore Web sites are doing over $2 million per day of business? Will the layoffs at brokerage houses accelerate as more stocks are traded online? Will any travel agents survive the ease of booking travel on the Web?

Yet the Web is in its infancy and has yet to show us what it will do when it matures. Today you must go to a desktop or laptop computer to use the Web. Tomorrow, common intelligent devices that will number in the billions will routinely use the Web, with little intervention from us.

What Will Replace the Web?

There was a time not too long ago when it seemed that everything of any importance was in Gopher space. For those of you who don't remember Gopher, from the University of Minnesota, it is what the Web replaced. Likewise, Gopher had replaced dozens of CWISes (Campus-Wide Information Systems). So, as hard as it is to believe, the Web could disappear. It hasn't so far because it has managed to evolve, in effect replacing itself. In fact, it has done this twice so far.

At first the Web was just a place to display one's corporate presence. It was a glassed-in bulletin board where everyone could read about your university or company. You certainly couldn't interact with it, and no business of any sort was done there. In the first reinvention of the Web, the Web became able to support the mainline businesses of a university or company. Universities distributed forms, allowed for electronic admissions applications, and let students view their grades via the Web. Companies, such as UPS and FedEx, let users track their packages via the Web. These activities could have been done in some other way, but the Web was more convenient.

In the second reinvention of the Web, the Web offered services that were not part of the mainline business of a university or company. UPS, for example, now offers a variety of secure e-mail delivery services. These are services that did not exist before the Web and could not reasonably exist without it—or something like it. How should universities participate in this second reinvention of the Web? That will be left for you to ponder, but universities must be part of this new wave.

Some Evolutionary Signs

We will not make the leap to the Web of the future in one giant step. There are already evolutionary signs of what is to come, if we are able to read them correctly. But we should be careful. Some of the most wonderful developments are actually dinosaurs that will not in fact make it to the future.

One evolutionary sign is a phone by BellSouth. This is a regular cordless phone, but when it is moved too far away from its base station it becomes a cellular phone. In the future, all intelligent devices will be wireless but will choose the most economical way to be so, just like this phone. The Seiko Message Watch has its own phone number and Internet Protocol (IP) address and can monitor news, sports, weather, stocks, or whatever you'd like. It is a tiny, wireless, low-powered, specialized, intelligent device that takes care of itself—all characteristics of the devices we will use in the future.

The CrossPad, from A. T. Cross Company, uses as its input device a pad on which you place a piece of paper. You write on the piece of paper, and the pad converts your writing to digital ink or attempts handwriting recognition. Like devices of the future, this is a specialized intelligent device that uses something you use every day (a piece of paper) as its input device. Unfortunately, you need the pad to interpret what you write. A similar device in the design stage at Carnegie Mellon University eliminates the pad. In fact it lets you draw 3-D objects in the air.

The Toshiba Tecra 8000 laptop has everything you could wish for. It is fast. It has tons of memory. It has a scanner. It has a camera. Its screen measures more than 14 inches diagonally. Its battery lasts for seven hours. But it is a mature technology, and mature technologies, like dinosaurs, get replaced. Of course anyone would be happy to have this laptop today. Even though it is definitely a dinosaur, it is the very best dinosaur there is, and the critters that are evolutionarily superior to dinosaurs have yet to appear. But they soon will, and we need to prepare for their arrival.

Hardware Trends

In 1965 Gordon Moore suggested, in what's known as Moore's law, that the density of transistors on a chip would double every eighteen to twenty-four months. This implied that the cost per transistor should halve every eighteen to twenty-four months. For over three decades, Moore's law has proved remarkably accurate, but during that entire period many have predicted that the trend could not continue. They said that, in effect, Moore's law would be repealed when doubling the density of transistors on a chip became impossible.

In fact it appears that Moore's law has been repealed—not because the density of transistors has failed to keep up with it but because developments are moving faster than Moore had imagined, due to unpredicted technological advances. Intel, for example, has found a way to store two bits per transistor and thinks it may be able to store four bits per transistor. This will immediately double or quadruple the effective density of transistors and cut the cost of a memory chip by a factor of two or four. IBM has found a way to use copper instead of aluminum wiring on a chip, making chips 40 percent faster, smaller, and cheaper. One advance after another has continued to accelerate Moore's law, pushing this doubling time frame toward twelve or even nine months.

The effect of this is to make every hardware element smaller, faster, and cheaper. Disk storage, which today costs about a dime a megabyte, will cost less than a penny a megabyte in 2004. RAM, which costs about $4 a megabyte today, will cost less than 40¢ a megabyte in 2004. One megahertz of computational speed, which costs about 80¢ today, is likely to cost less than a dime in 2004.
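
Those 2004 figures follow directly from the halving assumption. The sketch below (Python, added here only as an illustration) applies a roughly eighteen-month cost-halving period to the 1999 prices quoted above; the exact halving period is an assumption, not a figure from the article.

    # Project 1999 component prices five years ahead, assuming cost halves
    # every 18 months (the Moore's-law pace discussed above). The halving
    # period is an illustrative assumption.
    def projected_cost(cost_today, years_ahead, halving_period_years=1.5):
        return cost_today * 0.5 ** (years_ahead / halving_period_years)

    prices_1999 = [("disk ($/MB)", 0.10), ("RAM ($/MB)", 4.00), ("CPU ($/MHz)", 0.80)]
    for item, price in prices_1999:
        print(f"{item}: {price:.2f} now, about {projected_cost(price, 5):.3f} in 2004")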

By implication, any device that can tolerate an increase in cost of a dime could become an intelligent device. A toy, a pen, a pair of eyeglasses, a shirt, a doorknob, a paperback book, or any common object you can imagine will, in the near future, be an intelligent device that has memory and computational abilities. Of course none of these devices will be like the general-purpose PCs of today. They will continue to be toys, pens, shirts, or whatever they were meant to be, but they will be "smart" versions of what they were, and they will be able to communicate with each other.

Software Trends

Today most software is bought in shrink-wrapped packages from stores that stock zillions of titles. Egghead Software realized that having physical stores and stocking software made no sense. It now has a virtual store on the Web. But Egghead needs to go a step further. No one wants or needs the shrink-wrapped packages and CD-ROMs. It is silly to send software via UPS or the U.S. Mail when all the buyer really wants is the bits that are on the CD-ROMs. In the future, all software will be obtained via networks. No physical store will sell software.

As software delivery shifts to the network, the paradigm for paying for software will begin to look like that for television. Today commercial TV is free, paid for by advertising; public TV is also free, paid for by government and donations; and cable TV and pay-per-view TV are paid for by subscribers. A similar model will emerge for software. First, much software will be free, paid for by advertising. This is how most of the search engines on the Web are paid for; likewise, the costs of numerous free e-mail services are absorbed by marketers, who want to get their messages out to captive e-mail users.

Second, the government too offers some free software today. Much more will be coming in the future. Why should we buy software to do our taxes? How long will it be before the Internal Revenue Service offers it to us on the Web?

Third, like cable TV, companies will soon offer software by subscription. Pay a company a few dollars a month, and you will get the use of a basic package of software. The software you subscribe to will always be kept up-to-date, and new offerings will be frequently added to keep your interest (and your monthly fees) flowing. For users who want more than the basic software package, premium packages will offer products in vertical areas such as finance, architecture, engineering, manufacturing, music, and so forth. And some software packages will be pay-per-use. This will allow you to try a package (though free trials will be the rule) or to pay for a software package for only the actual time you need it.

New Applications and Telematons

With fast, cheap, intelligent processors and inexpensive software, many new applications will become possible. Real-time speech recognition and translation and conversions among speech, text, image, and audio formats will be possible. Vision systems that allow gas pumps to fill your gas tank without human intervention (Shell Oil is installing these now) or vans that can follow the edges of roads (Carnegie Mellon University has an experimental van that does this) will become commonplace. Software will also become worldly. It will know, for example, that "Don't drink and drive" does not refer to drinking milk.

In 1987 a group of five Princeton students and I entered a contest, sponsored by Apple Computer, to design the computer of the year 2000. After just a bit of design work we decided that for Apple to succeed as a business, it had to get out of the personal computer field and invent a new industry. By 2000, we reasoned, the personal computer industry would be flooded with mature technology devices that would compete on paper-thin profit margins, nonessential gadgets, and cosmetic changes. We saw the personal computer industry fading and a new industry, which we called the Telemation industry, emerging. The Telemation industry would build specialized, intelligent, communicating devices that we called Telematons.

Our design for the computer of the year 2000 was not a computer at all. It was the first of the many Telematons that we envisioned would be built. We called our first Telematon the Apple PIE (Personal Information Environment). Although the PIE was not a computer, it had great computational power. It used its computational power to manage an information environment appropriate for its users. It took all of the information sources—radio, TV, CDs, the Web, books, telephones, newspapers, other PIEs—and managed the information. It drew no line between entertainment and education. It focused on integrating and managing information. Although our design won second prize, Apple never built anything like it. Instead it built Performas, PowerMacs, and iMacs, which have garnered perhaps 5 percent of the PC market. While Apple battles for a tiny share of a mature commodity market, other companies are introducing Telematon precursors.

Why are Telematons now being developed? A long, long time ago, memory, CPU cycles, hardware, and software were expensive. Because of that, general-purpose devices such as conventional personal computers were built. But that was a long time ago. Every day, intelligent devices become cheaper to build, and general-purpose devices make less sense. Telematons are special-purpose devices that are now possible. They often have specialized input and output devices and might not even be recognizable as computers.

An example of a Telematon precursor is a FATS, Inc., system that is used to train law-enforcement agents, hunters, military units, and fire-fighters. In its fire-fighting mode, the FATS (see www.fats.com) system presents to its user a large screen that covers an entire wall. On the screen is projected one of many fire-fighting scenarios. Faced with one of these simulations, a fire-fighter grabs a hose, ax, or whatever tools and equipment are appropriate and goes to work on solving the problem.

The hose, which has the correct weight and feel of a real hose because it is a real hose, is actually an input device, reporting back to the system its location, aiming point, and so forth. The screen, rather than being static, responds to the fire-fighter's actions. Voice-detection and voice-recognition systems also respond to his or her commands.

The fire-fighter is placed in a realistic, interactive, real-time environment in which to hone fire-fighting skills. No computer is ever in sight. No keyboard, screen, or mouse is ever part of what the fire-fighter sees. The heavy hose, not a mouse, is the pointing device. The fire-fighter's shouted commands and movements, not a keyboard, form the input device.

Likewise, in the future we'll use dozens of computers that a visitor from today probably wouldn't recognize as such. For example, here are a few of the numerous other Telematons imagined by the Apple PIE group.

Togamatons

Twenty-five years ago it seemed crazy to imagine anyone wearing a radio or cassette player. Today it is hard to imagine a jogger not wearing one of these or a CD player. Tomorrow it will be hard to imagine people, whatever they are doing, not wearing at least one Togamaton.

A Togamaton is a Telematon that is with you all the time. You might wear it as a watch or jewelry or carry it with you in your wallet, purse, or pocket. Togamatons might also be part of your clothes. For input, you might talk to them, scribble on them, or keyboard on them. Output devices would be their own displays—voice and video output—or they might use your eyeglasses as a display device. Some airline maintenance crews already use wearable computers that display output on a lens of a pair of special eyeglasses.

Refridgermatons

What do members of your family use as a message center? Your PC, your phone, your TV? Of course not. They leave notes on the refrigerator. Why not have a Refridgermaton built into the door of every refrigerator? It would be a bidirectional e-mail, voice mail, Web page, speakerphone, and message center located right where it would be most useful. It would also include a UPC scanner that could keep track of what you put into your fridge, and with Web access, your Refridgermaton could tell you what items you are short of, produce shopping lists (and order over the Web), and create recipes with the food you have on hand. You'd also be able to access your Refridgermaton remotely (should I pick up milk on the way home?), or it might call one of your Togamatons to tell you to pick up the order the Refridgermaton placed at the supermarket. A Refridgermaton could page your kids, remind you about birthdays, and keep to-do lists for family members, and of course, your fridge would still keep food cold.

Automatons

"Cars may visit [the] Internet while on the interstate" (New York Times, September 19, 1997). Your car already has many computers and will have many more in the near future. A few cars already have Global Positioning System (GPS) navigators, cell phones, and other connections to wireless networks. An Automaton would put your car on the Web, communicate with service personnel, and optimize your driving experience. With an Automaton, if you lock your keys in your car (will there still be keys?), your Automaton—in response to your voice—can open your car for you.

The Future of Education

With the profusion of Telematons, or whatever the cheap, fast, wireless, intelligent devices of the future are called, education will be fundamentally changed.

There are three variables in instruction: the material covered; the level of mastery of the material; and the time allocated to cover the material. Any two of these can be constrained. The other will always be variable. For example, if the material covered and the time allocated are fixed, then the level of mastery will vary from student to student. This is the model used in most classrooms today and the one that makes the least sense. Why bother to teach if one is not interested in mastery of the material? What does it mean when someone gets a C in Chemistry 101? Does it mean that the person knows only 75 percent of what he or she should have learned? If so, how do we expect this student to survive in Chemistry 201 with students who got an A in Chemistry 101? Can this person still be a chemist while having no knowledge of 25 percent of basic chemistry? Do you want a doctor operating on you who got straight C's? Making the mastery of material a variable is a compromise caused by the economics of teaching large numbers of people. The decision that most people should be educated makes it too expensive to ensure that all of them are well educated.

In Plato's day, in contrast, the material covered and the level of mastery were fixed. The time to cover the material was variable. The only acceptable level of mastery was complete mastery. A student received personalized, highly interactive instruction and moved ahead at his or her own pace. One didn't move on to the next level until the previous level was mastered. This was accomplished using a one-on-one tutor system, with the material and presentation tailored to each student. But this model simply isn't affordable when hundreds of thousands of students need to be taught thousands of different subjects. Also, whereas Plato had a good mastery of most of what an educated person needed to know in his day, no one tutor could do that in even a single specialized field today.

Some have thought that technology might be able to allow us to afford to get back to the Plato model of education, but so far these efforts have not been very successful. TV didn't do it. Movies and slides didn't either. Even Web courses and distance learning mostly only reduce the costs of delivering the same old "talk and chalk" kind of instruction that was delivered without all the new technology. Students still get C's in courses and go on to struggle in the next course in the sequence.

While universities continue to struggle with technology in education, TV stations have mastered the art of delivering the evening news. Universities present students with scholars who often have no teaching skills and use all the multimedia features that a blackboard, awful handwriting, and chalk can deliver. TV stations have trained actors and newsreaders backed by worldwide news teams, graphic artists, animators, and a studio full of support people, all ready to present the evening news to us. A five-minute evening news piece is almost always more memorable than any number of one-hour lectures in a classroom.

Thus, a first essential step for universities is to adopt the "evening news" model of education, deleting the bad features and taking the good features a step further. Evening news plusses include the following:

  • The evening news takes you live to where the action is. At some point, students need to see a diagram of a nuclear reactor (if that is what is being studied), but getting a live tour of a nuclear reactor before seeing the details makes learning more meaningful.
  • The evening news uses multimedia. Using as many of your senses as possible improves learning. And making material compelling and entertaining improves retention.
  • The evening news brings in the specialists and pros. Doctors, lawyers, pilots, and a host of consultants, along with numerous staff specialists such as weathermen, sportscasters, financial analysts, and so forth, are always on hand to provide in-depth inside information.
  • The evening news is scripted, rehearsed, polished, and checked for correctness and good pedagogy. News is delivered by professionals who know how to present information.

Some evening news minuses include the following:

  • There is no interactivity.
  • Many newscasters are just newsreaders and have no in-depth knowledge of the subject they are presenting.
  • There is a tendency to oversimplify and to do sound bites.
  • The evening news is linear.
  • Attempts at humor and levity often fall flat or are simply distracting.

Adopting the good features and avoiding the bad, the evening news model should throw away grading entirely and instead insist on mastery. Everyone either gets an A+ or gets no credit. And all students can take as long as they are willing to take to master the material. This model should also throw away the awarding of degrees. Students would simply get credit for whatever work they mastered. A transcript would consist of only a list of the material a student had mastered.

Of course to implement this model would require a sea of Platos skilled in every discipline offered by every college and university. Even if we could afford such a group of Platos, no such group exists. What we need instead, and what will soon be possible, is a software aid (running on Telematons, of course) that I call SMILE, for Software-Managed Instruction, Learning, and Education.

The SMILE paradigm creates a personal software mentor model. In essence, SMILE presents a student with a personal, Plato-like tutor for all areas the student is studying. SMILE lets every student go at his or her own pace. It soon learns what pedagogy works best for a particular student and learns a student's weak and strong points. It cajoles, urges, entertains, and manages a student's education. Its presentations use the modified evening news model described above.

Although SMILE exists today only as an idea, much of what is needed to produce it is already in place. SMILE would run on a variety of Telematons—or whatever we want to call specialized, portable, wireless, intelligent electronic devices. The Sony PlayStation II, for example, is a specialized device with a 6.2-gigaflop processor (a gigaflop is one billion floating-point operations per second), about three times the speed of an Intel Pentium III. With some wireless additions and a few accessories, it would make a fine platform for SMILE.

Since SMILE would use the modified evening news model, it would separate content from presentation. Experts in a field would create and submit their content material using a standard interface. They would worry little or not at all about presentation. Pedagogical experts and graphic designers would then work on the material, which would be presented interactively over the Internet and Internet2, to make it suitable for SMILE. SMILE would provide a high-level API (Application Program Interface) so that common tutor-like functions could be invoked. For example, SMILE would adjust the difficulty of each new step depending on whether a student's progress in some area was lagging or racing ahead. Since SMILE would include common facilities to evaluate and track a student's progress, along with pedagogically sound ways to hold a student's interest, neither content specialists nor presentation experts would need to fuss much with these aspects.
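
SMILE was never specified in any detail, so the following Python sketch is purely hypothetical: the StudentModel and Module types and the next_step function are invented names, meant only to suggest what one tutor-like call in such an API, the adaptive-difficulty step described above, might look like.

    # Hypothetical sketch of one tutor-like function a SMILE-style API might
    # expose; none of these names comes from the article.
    from dataclasses import dataclass, field

    @dataclass
    class StudentModel:
        mastery: dict = field(default_factory=dict)   # topic -> estimated mastery, 0.0 to 1.0
        pace: dict = field(default_factory=dict)      # topic -> recent rate of progress

    @dataclass
    class Module:
        topic: str
        steps: list                                   # ordered (difficulty, content) pairs

    def next_step(student, module):
        """Ease off when a student is lagging in a topic, push ahead when
        mastery and pace are high, then pick the closest-matching step."""
        mastery = student.mastery.get(module.topic, 0.0)
        pace = student.pace.get(module.topic, 0.0)
        target = mastery + (0.1 if pace > 0 else -0.1)
        return min(module.steps, key=lambda step: abs(step[0] - target))

In this sketch, a content author supplies only the modules; the student model, evaluation, and pacing are SMILE's job, which is exactly the separation of content from presentation described above.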

After a time, SMILE would accumulate material on many subjects. Since SMILE's unit of instruction is not a course but a "module," students or student advisers could link several modules into customized courses. Even if the material comes from many different sources and content providers (professors, graduate students, researchers, business professionals, factory workers, etc.), students would see a common, familiar format. Students would be able to interact electronically with content experts, but more often they would use the resources of SMILE itself or would use NOAH.

NOAH (Network-Optimized Allocation of Help) is an idea for a network service that would allow network-connected peers and specialists to provide live, interactive help to supplement what SMILE can do. Many people on the Internet today already field questions from colleagues. While I was writing this article, for example, I received a question about Netscape Mail. I paused what I was doing, ran an experiment or two, sent off my answer, and returned to this document. NOAH formalizes and enhances this common occurrence. Users who want to participate in NOAH register by indicating areas in which they have expertise, their degree of expertise, and some biographical information. They also may indicate times when they do not want to be contacted and limits to how often they are willing to provide free help. Any registered person can indicate availability or unavailability to NOAH on the fly; users working on tasks from which they do not want to be interrupted, for example, can make themselves temporarily unavailable.

When a question is sent to NOAH, it is analyzed by NOAH, and the best-available registered NOAH participants are sent the question. NOAH effectively chooses the Internet experts best qualified to answer the question. NOAH might choose to send the question to just one person or to a few. Answers flow back to NOAH, where they are catalogued and passed on to the person asking the question. By cataloging questions along with answers and the authors of the answers, NOAH is often able to provide an answer from an expert without contacting anyone at all.
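
Nothing about NOAH's internals was ever specified either; the sketch below invents a minimal routing step (the Expert record and route_question function are hypothetical) to make the matching idea concrete.

    # Hypothetical sketch of NOAH's routing step: filter registered experts by
    # availability and topic overlap, rank them, and send the question to a few.
    from dataclasses import dataclass

    @dataclass
    class Expert:
        name: str
        topics: set        # areas of registered expertise
        rating: float      # expertise estimate, refined by user feedback, 0.0 to 1.0
        available: bool    # can be toggled on the fly

    def route_question(question_topics, experts, max_recipients=3):
        scored = [(len(question_topics & e.topics) * e.rating, e)
                  for e in experts
                  if e.available and question_topics & e.topics]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [expert for _, expert in scored[:max_recipients]]

    # A chemistry question goes only to the available expert who knows chemistry.
    experts = [Expert("A", {"chemistry", "physics"}, 0.9, True),
               Expert("B", {"chemistry"}, 0.6, False),
               Expert("C", {"biology"}, 0.8, True)]
    print([e.name for e in route_question({"chemistry"}, experts)])   # prints ['A']

The cataloguing of answers, rating of experts, and policing described next would layer on top of a routing core like this.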

NOAH requires policing to ensure that it is not abused and that experts are in fact real experts. Much of this can be done by users themselves, who rate the quality of the help they receive, and by NOAH itself, which refuses to do homework for sixth-graders. Experts who rarely provide good help, for example, will no longer be sent questions.

Can NOAH run with all-volunteer experts? Maybe not. A number of paid professionals could be added to the NOAH system in specialized areas. Some of these paid online experts could be restricted to questions from specific SMILE modules. For example, Sony, running a software engineering SMILE course, could provide NOAH experts as an extra incentive to purchase its course instead of the one from an Ivy League university. Using this scheme, companies and university help desks could use NOAH to replace or supplement their existing user support.

Can we really build a SMILE system today? Of course we can. The more troublesome problem is the difficulty of giving up the present and forging forward into the uncertain future.