Humans, Machines Enter a New Orbit
As humans prepare to push off the safe haven of Earth and embark on journeys into deep space, a new generation of explorers is in the making — some of them human, some robotic and some with aspects of both.

By Daniel Stolte, University Communications
March 28, 2018

Wolfgang Fink: "Where does the human end, and the machine begin? Should robots have rights? This is what we will run into eventually." (Photo: Bob Demers/UANews)


For almost 20 years, humans have maintained a continuous presence beyond Earth. The International Space Station has provided a habitat where humans can live and work for extended periods of time. Yet, despite having established a permanent base for life in space, terra firma is always in reach — within 254 miles, to be exact. If a crew member were to fall seriously ill, he or she could be back on Earth in a matter of hours.

"As soon as you venture beyond low Earth orbit, to go to Mars or even further, bailing out no longer is an option," says Wolfgang Fink, associate professor and Keonjian Endowed Chair in the UA's College of Engineering. "You're on your own." 

Fink predicts that in the not-too-distant future, humans will work side by side with robotic machines, non-human intelligence and smart devices in ways never seen before. Human logic and thinking will be joined and complemented by artificial brains and reasoning algorithms.

For the first time in history, Fink says, we have reached a point where the lines between what is considered "human" and what is considered "artificial" are beginning to blur.

When There Is No Turning Back

A manned mission to Mars, which involves an outbound journey of at least one year, can succeed only if no vital parts of the system break beyond repair, including those made of flesh and blood. Anticipating system failures and addressing them before they occur becomes paramount. When no doctors are around, not only does the crew have to be autonomous, health care does, too. 

"The key here is prognostics and health management, a concept that is beginning to cross from the realm of technology, specifically in the aerospace industry where it has been used for decades, into the realm of human health," says Fink, who recently was named a fellow in the Arizona Center for Accelerated Biomedical Innovation, or ACABI, and who is spearheading an industry-university partnership, the Center for Informatics and Telehealth in Medicine, or InTelMed, at the UA.

For example, many parts of a modern airplane are connected to a data network, even Wi-Fi, and provide continuous status updates without oversight from the crew. This allows maintenance personnel to anticipate malfunctions before they happen and meet the plane upon arrival with the right parts and tools needed to remedy the issue. 

Whether it is about keeping airplanes flying or maintaining human health for the duration of a deep space mission, the idea is the same, Fink says: "Rather than trying to treat the person once they're sick, you constantly monitor their health status to predict and remedy any problems before they occur." 
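To make the analogy concrete, here is a minimal sketch of the prognostics idea — not Fink's or any airline's actual software, just an illustration with invented sensor names, thresholds and maintenance windows: a telemetry stream is trended over time, and the component is flagged well before it is projected to drift out of spec.

```python
# A minimal sketch of the prognostics idea described above: watch a telemetry
# stream, fit a trend, and flag the component before it drifts out of spec.
# The sensor, threshold and maintenance horizon are illustrative, not taken
# from any real aircraft or medical system.
import numpy as np

def hours_until_threshold(timestamps_h, readings, threshold):
    """Extrapolate a linear trend and estimate when the reading crosses threshold."""
    slope, intercept = np.polyfit(timestamps_h, readings, 1)
    if slope <= 0:               # not trending toward the failure threshold
        return None
    return (threshold - intercept) / slope - timestamps_h[-1]

# Hypothetical pump-vibration telemetry, sampled once per flight hour
hours = np.arange(0, 50)
vibration = 0.02 * hours + 1.0 + np.random.normal(0, 0.05, hours.size)

eta = hours_until_threshold(hours, vibration, threshold=2.5)
if eta is not None and eta < 100:    # inside the next maintenance window
    print(f"Schedule replacement: projected threshold crossing in ~{eta:.0f} flight hours")
```

The same pattern carries over to a biomarker stream: swap flight hours for days and vibration for, say, a slowly drifting blood-pressure reading.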

Funded in part by the National Science Foundation, InTelMed has the goal of devising biofeedback-controlled wearable sensor technologies and health care data-streaming capabilities, paired with cloud-based intelligent data analysis, to create autonomous systems that can monitor the health status of individuals independently of health care providers in the flesh.

One of Fink's projects illustrates how this approach could play out in the very near future. With a grant from the National Science Foundation, his team created a way to turn a smartphone into an eye-examination device. The technology, which could prove life-changing especially in remote, underserved areas of the world, combines imaging with a remote, cloud-based "expert system" — intelligent software that uses disease models to suggest diagnoses much like a human medical expert would — to quickly identify patients at risk of losing their vision.
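One hedged way to picture such an expert system — purely illustrative, with invented feature names and weights rather than anything from Fink's actual software — is a scoring step that compares features extracted from an eye image against simple disease-model profiles and flags the riskiest candidates for referral.

```python
# A hypothetical sketch of how a cloud-side "expert system" might triage
# smartphone eye images: image features (assumed to be extracted upstream)
# are scored against simple disease-model profiles, and the riskiest
# candidates are flagged for referral. Feature names and weights are invented.
DISEASE_MODELS = {
    "glaucoma_suspect": {"cup_to_disc_ratio": 0.6, "rim_thinning": 0.4},
    "diabetic_retinopathy_suspect": {"microaneurysm_count": 0.7, "exudate_area": 0.3},
}

def triage(features: dict[str, float]) -> list[tuple[str, float]]:
    """Return candidate diagnoses ranked by a weighted feature score."""
    scores = {
        name: sum(w * features.get(f, 0.0) for f, w in model.items())
        for name, model in DISEASE_MODELS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Normalized features from one patient's fundus image (illustrative values)
print(triage({"cup_to_disc_ratio": 0.8, "rim_thinning": 0.5, "microaneurysm_count": 0.1}))
```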

Down the road, Fink says, it's easy to envision activity tracker-like devices with the capability of not only monitoring but intervening.

"Sensors automatically uplink their data to the cloud, where data-mining algorithms come up with a prognosis, diagnosis or even a treatment," he says, "for example, through implantable devices that stimulate certain parts of the brain and trigger behavioral responses like curbing food cravings or calming a person down. It's a closed-loop system, much like the thermostat controlling the heating and cooling in your house." 

The Physician on Your Wrist

A research team led by Esther Sternberg and Perry Skeath of the UA's Center for Integrative Medicine, or UACIM, is developing the next generation of wearable devices that can keep tabs on a person's health status by measuring biomarkers: particular biochemicals in blood, saliva, urine or sweat that indicate how a body system is functioning. After discovering that cortisol, a stress hormone, is secreted in sweat, the researchers are combining expertise in medicine, chemistry, engineering and data management to design a patch sensor to monitor stress and many other biomarker molecules. 

Combined with sensors that track other vital signs such as heart rate, blood pressure and sweat responses, such technology could, in principle, be advanced further to ensure the long-term health of astronauts on deep space missions. Possibilities abound for earthly applications as well, such as monitoring patients who are at risk of stroke or heart attack.

"The devices we are developing are basically microchemistry labs, so they can be used for many applications," says Skeath, assistant research director at UACIM and assistant professor at the UA College of Medicine – Tucson. "The tricky part is tailoring the sensor suite to the task, whether that's an astronaut going to Mars or a soldier on the battlefield."

While a wearable, cortisol-measuring device potentially could measure stress in real time, the data it generates can be ambiguous because other, non-stress-related factors come into play and change the reading. It is critical that scientists first have a solid understanding of what exactly constitutes stress and define a precise set of measures that capture that condition.

To study this, the team has set up a lab dedicated to tracking various physiological and molecular responses to stress challenges in volunteers.  

"We expose them to controlled stress challenges while performing a host of measurements," says Sternberg, research director at UACIM and professor in the College of Medicine – Tucson. "Then we look at what the minimal set of measurements is that captures the condition."

Once the researchers know that, they need to make each measurement reliable and accurate, so that the set of biomarker changes will zero in on the specific challenge rather than giving a reading that's driven by unrelated factors.
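One way to picture that search for a minimal measurement set — a rough sketch, not the team's actual analysis, with invented signal names and simulated data — is greedy forward selection: keep adding whichever candidate signal most improves how well a simple linear model reproduces the stress-challenge score, and stop once the fit is good enough.

```python
# A hedged sketch of one way to look for a "minimal set of measurements":
# greedily add the candidate signal that most improves a simple linear fit
# to the stress-challenge score. Signal names and data are invented.
import numpy as np

def r_squared(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

def minimal_set(signals: dict, y, target_r2=0.9):
    chosen, X = [], np.ones((len(y), 1))          # start with an intercept column
    while len(chosen) < len(signals):
        best = max((s for s in signals if s not in chosen),
                   key=lambda s: r_squared(np.column_stack([X, signals[s]]), y))
        X = np.column_stack([X, signals[best]])
        chosen.append(best)
        if r_squared(X, y) >= target_r2:
            break
    return chosen

rng = np.random.default_rng(0)
stress = rng.uniform(0, 1, 200)                    # ground-truth challenge intensity
signals = {"sweat_cortisol": stress + rng.normal(0, 0.05, 200),
           "heart_rate":     0.5 * stress + rng.normal(0, 0.2, 200),
           "skin_temp":      rng.normal(0, 0.3, 200)}   # unrelated signal
print(minimal_set(signals, stress))   # the strongly correlated signal should come first
```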

"For example, when we look at cortisol in sweat, we have to ask important questions about the physiology involved," Skeath says. "Does cortisol degrade over time? Do other substances dilute it? Do we lose it before it gets from the pore to the sensor? Once we have those questions answered, then it's time for the engineers." 

Teaching Machines to Expect the Unexpected

As machines become smarter, efforts are underway to endow them with enough autonomy and learning capabilities to work without any human oversight. Such robots could operate in environments too hazardous for humans to venture into — for example, natural disaster zones such as the tsunami-stricken nuclear power plant in Fukushima, Japan, or beyond the reach of Earth-based mission control centers.   

In his Visual and Autonomous Exploration Systems Research Laboratory, Fink and his team are working on building a robotic field geologist. Unlike traditional planetary missions that focus on, say, a spacecraft studying a planetary body from a high orbit, or a rover analyzing features of the landscape at close range, his concept of tier-scalable reconnaissance mimics the approach a human explorer would take by first surveying global features, then homing in on the lay of the land in a certain region, and finally investigating interesting features at close range. 

"Instead of putting all the smarts on one system, you distribute them among several different and spatially distributed systems," Fink explains, "and that creates the redundancy and robustness you need for a critical mission like planetary exploration." 

In this scenario, an orbiter would oversee one or more aerial vehicles such as blimps or quadcopters hovering in the atmosphere (on planets that have one), which in turn would command a fleet of miniaturized rovers, directing them to various points of scientific interest. Having such a team of artificial scientists working autonomously on different levels also would enhance the overall intelligence inherent to the mission, Fink says. 

"Especially for planets or moons in the outer solar system, where the distance to Earth prohibits real-time commanding, you can have such a system conduct its own science, deploy and redirect its agents as needed to obtain the results, and decide which are interesting enough to be sent back to Earth," he says. 

In a shift away from current paradigms, which typically center around one highly sophisticated robot, the tiered payload would involve less complex, less expensive and more expendable units, creating redundancy, according to Fink. 

"If you only have one rover, you're not going to deploy it to an area where it might get stuck or suffer damage," he says, "but if you have several at your disposal, you might want to risk sacrificing a few, if that would help you answer the question whether there was life on Mars, for example." 

Because these robotic explorers will have to make decisions on their own, they will need cognitive abilities that until now have been unique to humans, such as curiosity.

In contrast with rule-based artificial intelligence, or AI, Fink's research team is developing reasoning algorithms that teach machines to recognize features in a landscape that — for one reason or another — a human explorer would classify as "interesting." In Fink's lab, a small fleet of tracked rovers serves as a testing platform: The rovers learn to explore a landscape by roaming freely, avoiding obstacles and paying attention to what is in front of them.

"Equipped with our Automated Global Feature Analyzer software package, an orbiter or blimp would try to identify anomalies on the ground using a set of purely mathematical, unbiased algorithms," Fink explains. "It would then transfer that information to the rovers on the ground, so they can go investigate up close. No longer would humans be the ones pushing the buttons."

For students such as Alex Brooks, the challenging work is hard to beat.

"What's unique about working in Dr. Fink's lab is that you really get the opportunity to do a lot of the actual work on the projects," Brooks says. "For example, on the rovers, for the autonomy part, I'm really the primary developer for the software that helps them navigate. ... In his lab, if you demonstrate that you're capable of handling advanced work, you can explore that."

From Cyborgs to Superhumans

One could see how the lines between "human" and "artificial" start to blur in a future where humans and machines interface and work together ever more closely, and machines execute complex missions with minimal or no human oversight.

Take the booming field of bioengineering, especially neuroprosthetics, where implantable technology is used to prevent bouts of depression and epileptic seizures, suppress tremors caused by Parkinson's disease, or restore hearing or vision. 

Fink's work on image processing and neural stimulation algorithms has dramatically improved the performance of the only FDA-approved retinal implant, and has paved the way to enhancing its resolution such that the wearer has a chance of seeing more than just facial features and reading large-font lettering. 


Giving vision back to the blind through artificial vision implants or replacing stroke-damaged brain tissue with biomimetic devices are flagship examples of a human brain-machine interface. But from there, it may take only a small step to "improving" otherwise healthy individuals with technology.

It might sound like the stuff of sci-fi novels and movies to go from systems monitoring the health of astronauts, pilots, soldiers or athletes to creating some kind of "superhuman." But in a way, that's exactly where things are going, according to Fink. 

"There is a critical ethical boundary that needs to be considered," he says. "Where do you stop helping humanity and enter the realm of the supranatural where nothing is wrong with a human, but you try to go on top of that?

"Where does the human end, and the machine begin? Should robots have rights? This is what we will run into eventually."


ABOUT THIS SERIES

The digital, physical and biological worlds are converging with startling speed, and a future that was unimaginable only a few years ago is already upon us. University of Arizona researchers are at the forefront of this sweeping change, often working across disciplines toward important discoveries. The UANews series Fast Forward has introduced some of the UA's change agents — and shown how their efforts are transforming the way we live.


Resources for the media

Wolfgang Fink

UA College of Engineering

520-621-8734

wfink@email.arizona.edu