Creating the Future with Human Factors

The best way to predict the future is to create it. --attributed to Abraham Lincoln (likely apocryphally)
[Image: Earth from space, via Wikimedia Commons: http://commons.wikimedia.org/wiki/File:AnotherEarth.jpg]
The Earth we have and the world we want

The future will arrive, new technology included. Do we want whatever future happens to happen, or do we want to guide the process? If the latter, here I explain why the field of human factors (HF) is the discipline responsible for planning and building a sustainable future human civilization.

It was not my idea. I heard it from Waldemar Karwowski at the Past Presidents’ Forum at the 2013 meeting of the Human Factors and Ergonomics Society. But I am running with it.

At first it was startling, at least to me, that HF would have as a goal to build a sustainable human civilization. It was not the size or difficulty of the goal that bothered me: a goal of physics is to explain everything physical, a goal of psychology is to explain everything mental, and a goal of philosophy is to understand existence itself. What was startling to me was that this purported goal of human factors, to create a sustainable human civilization going indefinitely into the future, was not already claimed by another scholarly field.

As I thought through field after field, I could not find a better contender for guiding the future of human civilization than human factors. The basic sciences have no direction but to understand reality (though this is part of their beauty). The applied sciences do have goals, but those goals are not about building future civilization (e.g., the goal of oncology is to cure cancer). And engineering provides no input into what to build, even as engineers build really cool things.

It was a compelling idea. After all, shouldn’t some field have the long-term future of mankind in mind? And wouldn’t it be nice if that future had the properties that the human factors field has figured out how to instill? For example, wouldn’t it be nice if the future were an ergonomic one, surprisingly pleasant to use even when its design is unexpected, and one that doesn’t hurt your back? Wouldn’t it be nice if future technology were, as a rule, well tested for human usability and vetted to be safe prior to adoption?

It seemed to me both prudent and exciting to think broadly about engineering the future from a human factors perspective, like some combination of science fiction and compliance with safety regulations.

One Step at a Time

Our future is not going to result from the invention and implementation of one or two or even ten great ideas. We are not waiting for a transporter and a warp drive and then we’re there. The future is arriving in a million million tiny steps: a more energy-efficient refrigerator today, a laptop that doesn’t break so easily when you drop it tomorrow, voice recognition that works a little better, a lighter shoe, a more efficient filter for clean water, a hospital policy that reduces hospital infections by five percent, the repeal of a law that treated juveniles unfairly, an even more energy-efficient refrigerator, voice recognition that works better still … for better or worse, these are the sorts of steps that are the nuts and bolts of the future.

The next step in medical devices is not “a tricorder.” It is “a better blood glucose meter,” where “better” functionally stands in for “more tricorder-like.” How do you get to a tricorder? Use the methods discovered and refined in the field of human factors to engineer for safety and usability, then repeat; that is where our tricorders will come from.

A major contribution of a human factors approach is that each small but real step can receive, as appropriate, a layer of contemplation, a level of risk analysis, some time to build in risk mitigations, and some amount of small-scale testing, all prior to full-scale implementation. Would healthcare.gov, or even Skynet, have undergone a bit more testing first if our culture around changes to technology and society were enmeshed with human factors analysis?
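To make that risk-analysis layer concrete, below is a minimal sketch in the style of a failure modes and effects analysis (FMEA), one structured way to rank what could go wrong before full-scale implementation. The device, hazards, and ratings are hypothetical, invented here for illustration; a real analysis would use the rating scales and acceptance criteria of the project’s own risk-management plan.

    # FMEA-style risk prioritization sketch (illustrative only).
    # The hazards and 1-10 ratings below are hypothetical, for a
    # fictional blood glucose meter.
    from dataclasses import dataclass

    @dataclass
    class Hazard:
        description: str
        severity: int    # 1 (negligible) .. 10 (catastrophic)
        occurrence: int  # 1 (rare) .. 10 (frequent)
        detection: int   # 1 (certain to be caught) .. 10 (undetectable)

        @property
        def rpn(self) -> int:
            """Risk Priority Number: higher means mitigate sooner."""
            return self.severity * self.occurrence * self.detection

    hazards = [
        Hazard("User misreads units (mmol/L vs. mg/dL)", 9, 4, 6),
        Hazard("Battery dies mid-measurement", 3, 5, 2),
        Hazard("Test strip inserted backwards", 2, 6, 1),
    ]

    # Review the riskiest failure modes first, before anything ships.
    for h in sorted(hazards, key=lambda h: h.rpn, reverse=True):
        print(f"RPN {h.rpn:4d}  {h.description}")

The arithmetic is not the point; the point is that each small step gets an explicit, reviewable accounting of what could go wrong before it reaches users.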

This week at Core Human Factors, Inc., we are working on an artificial organ, an insulin pump, a health-related mobile app, two new injection devices, and a new method for breast imaging. The future is not just 500 years from now, the future is also next year, and we all want an optimized near-future, too. Incremental progress. One day a tricorder.

Robots Run Amok

Engineers, as decent people, are well-intentioned. However, we know from science fiction that in striving to create a better future, we run the risk of robots run amok. (Wait, we know this from science fiction? Yes: if a risk is plausible, and if it is described first or brought to our attention by science fiction, then we can know about it from science fiction.)

So, isn’t it risky to attempt to create the future? Isn’t it safer to step back and not get involved, and at least avoid being responsible for amok robots?

I’d say it’s risky not to strive. See if you don’t agree after the next three paragraphs.

We have robots, and we will have more in the future. Are we confident that we have adequate risk controls designed into our future robots? Only if we step up and make sure of it. Of course, it’s not just robots: the future needs new hospitals, new methods of agriculture, new cities, new laws…. Given that all of this is happening, it is surely better to aim for new technologies and systems that interact safely with people. Aiming well for safety requires understanding how people will interact with the new technology; in other words, human factors.

Just as physicians first and foremost attempt to “do no harm,” in future world engineering we might first strive to avoid disasters and dystopias. The lesson of robots run amok is not to cease building robots; people are building robots anyway, and you cannot stop them. The more helpful lesson is to focus on risks and to brainstorm and promote effective methods for avoiding the risks we identify.

Risk-averse methods that discover the parameters and limitations of human perception, cognition, and action are among the central contributions of the field of human factors, as are methods for incorporating those human parameters into designs. It is in large part because of decades of human factors work in aviation that it is safer, mile for mile, to fly in a plane than to walk across the street. From this perspective, it seems riskier to sit back and see what future comes than to contribute proactively, wherever possible, to design for safety (i.e., human factors for a sustainable civilization).
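As one concrete example of such a measured human parameter, consider Fitts’s law, a classic human factors finding that predicts how long a person needs to move a pointer (a hand, a cursor) to a target of a given size at a given distance. The sketch below uses the common Shannon formulation; the two regression constants are hypothetical placeholders, since real values are fitted from data gathered with actual users and devices.

    # Fitts's law (Shannon formulation): movement time grows with the
    # index of difficulty, ID = log2(distance/width + 1), measured in bits.
    import math

    A_INTERCEPT = 0.10  # seconds; hypothetical fitted intercept
    B_SLOPE = 0.15      # seconds per bit; hypothetical fitted slope

    def movement_time(distance_mm: float, width_mm: float) -> float:
        """Predicted seconds to acquire a target."""
        index_of_difficulty = math.log2(distance_mm / width_mm + 1)
        return A_INTERCEPT + B_SLOPE * index_of_difficulty

    # Design question: how much faster is a bigger emergency-stop button?
    for width in (5, 10, 20, 40):  # button widths in mm, 300 mm away
        print(f"width {width:2d} mm -> {movement_time(300, width):.2f} s")

Making a critical control larger is not mere aesthetics; it is a quantifiable reduction in the time needed to reach it, and that is the kind of human parameter that can be designed in on purpose.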

Safe and Effective

Going forward safely is half of the benefit of having human factors lead the STEM (Science, Technology, Engineering, and Math) fields. The other half is in channeling the creativity of scientists and engineers through hard-won knowledge of human perception, cognition, action, and desires.

Forgive the seeming stereotyping, but engineers have been found lacking specifically in their understanding of psychology. And to a large degree, applied science progresses according to what applied scientists themselves find impressive, without requesting input from end users of any sort. But imagine that market research, desirability studies, usability testing, and research on basic human performance and psychology were fully integrated into the cultures of applied science and engineering. The problems they work on, and the problems they direct their creativity toward solving, would be that much more relevant.
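To illustrate how lightweight such integration can be, here is a sketch of scoring the System Usability Scale (SUS), a standard ten-item questionnaire from the usability testing toolkit; the participant’s responses below are invented for illustration.

    # Scoring the System Usability Scale (SUS). Each of the ten items is
    # answered 1 (strongly disagree) .. 5 (strongly agree). Odd items are
    # worded positively and even items negatively, so they are normalized
    # differently; the final score runs from 0 to 100.
    def sus_score(responses: list[int]) -> float:
        assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
        total = 0
        for i, r in enumerate(responses, start=1):
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * 2.5

    # Hypothetical responses from one usability test participant:
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0

A single number does not replace watching real users struggle, but it gives engineering teams a tracked, comparable usability signal to sit alongside their performance benchmarks.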

"Science faction"

The field of human factors can be seen as science fiction’s empirical cousin. In a human factors usability analysis, we say, “Here is a possible design improvement. What will happen when people use it? What will go wrong?” This is what science fiction writers do, too.

Just as telephony naturally scales from engineering around two telephones to worldwide communication, human factors naturally scales from optimizing an individual’s interactions with a single piece of technology to worldwide optimization of human interaction with technological systems. In other words, even if the goal of HF did not start out as future world engineering, future world engineering can now be the overarching end goal.

Positioning human factors as the science of optimizing future human civilization gives human factors a new definition among the disciplines. It is a specific, unique, and worthy goal that demands unique theory and that leads to unique and worthwhile practical consequences. By designing and adopting innovations through the orientations of safety and usability, we decide more responsibly which technology will make up our future.

As implausible as it sounded, Waldemar was right, and then some. Human factors is the discipline responsible for creating a sustainable human civilization. How? By
1. advancing knowledge of human perception, cognition, action, and desires;
2. deriving shared goals, including which disasters and dystopias to avoid; and
3. guiding the STEM fields toward research and design for the safety, usability, and sustainability of future civilization.

Next Steps: Saving Time, Money, and the World

What goals for the future can we possibly agree on? Just which dystopias should we avoid, and just which utopias should we strive for? Determining specific long-term and medium-term goals for the future can be part of the deep theory of human factors. Some specific goals, like the three listed below, are likely shared widely enough to provide direction for the development of the future even as the goals themselves are researched and refined.
  1. Avoid deadly or painful unintended consequences of new technology
  2. Eliminate unwanted hunger and unwanted sickness
  3. Increase efficacy of known medical treatments
Do we want safe zones for experimenting with technology? Maybe. Do we want computers to write fiction? Perhaps there are better uses for computers. A space elevator? A worldwide hyperloop that connects all countries, making international shipping cost about the same as domestic? GPS-enabled solar balloons with solar-powered propellers to directly deliver canned food from individuals to famine sites? Artificial feeders for mosquitoes so mosquitoes don't eat our blood and transmit disease? Contagious vaccines?

Not every futuristic-sounding idea is worth spending time on, and some are potentially dangerous. Determining goals systematically and responsibly, and designing for safety and for human usability, can save time, money, and, when disaster is averted, the world.

Future world engineering for a sustainable human civilization is an understandable goal: conceptually simple, intuitive even to children (many of whom enjoy science fiction), and yet otherwise unclaimed in the sciences.

Creating a sustainable human civilization is a vital goal that can naturally align current and aspiring engineers, psychologists, anthropologists, sociologists, linguists, urban planners, environmentalists, epidemiologists, psychohistorians, economists, ecologists, philosophers, and legal theorists under one “human factors” umbrella. These and more disciplines can be positioned as limbs of a primary human factors field that is responsible for optimizing the overall course of the future.

(Remember that the goals of physics, psychology, and philosophy are big, too. Also note that each of these fields, like human factors, supports work on deep theory, on minutiae, and at every scale in between.)

We are living in a science fiction world and we are all the authors.

To cite: Egeth, M. (2013, December 24). Creating the Future with Human Factors [Blog post]. Retrieved from http://blog.corehf.com