Brains, robots, and their evolution

rebecca@sfbg.com

The Bay Area is fully engaged with the technology industry, triggering political flare-ups over Google Glass and tech buses, along with larger debates over how the tech industry is reshaping the Bay Area’s social and economic landscape. Meanwhile, university researchers are busily putting technology to use in service of their studies, or carefully examining how technology is shaping people’s lives.

A pair of recent events in San Francisco and Berkeley illuminate how web-based technology has become deeply embedded in everyday life, helping to shape human realms as personal and unique as emotions, brain health, and behavior.

Medical researchers at the University of California San Francisco have devised a tool they hope will advance our understanding of neuroscience and brain disease. On April 8, UCSF researchers launched a new project called the Brain Health Registry, which uses the Internet to recruit volunteer research subjects who play online brain games as part of the enrollment.

Across the bay at UC Berkeley’s Center for New Media, a recent symposium explored the implications of living in a world increasingly populated with robots and “smart” technologies that are designed to anticipate and respond to human behavior and dynamic environments. The April 4 event, called Robots and New Media, highlighted some thorny and intriguing questions about how robots “play a critical new role as extensions of ourselves,” according to the event description.

Alongside discussion from cognitive neuroscientists about what happens in the human brain during interactions with robots, the talk also delved into questions about how much trust people should be willing to place in emerging technologies.

 

BRAIN TRUST

Michael Weiner often wonders whether swimming in the San Francisco Bay can be credited with sharpening the mind. “I can tell you this: It sure makes you feel good,” said Weiner, who frequently plunges into the frigid bay waters as a member of the Dolphin Club.

For Weiner, a professor of radiology at the University of California San Francisco who specializes in Alzheimer’s research, the curiosity goes beyond idle speculation. He’d like to conduct a clinical study to explore the impact that swimming in cold water has on mental functioning. But at the moment, he and a team of UCSF researchers are focused on a much bigger project.

Weiner is the founder of UCSF’s Brain Health Registry, designed to answer questions like these by making it easier to conduct clinical studies. Realizing that the high cost of recruiting volunteers can slow down cognitive research, he’s turned to the Internet to build a database of volunteer subjects.

“The idea is to collect tens of thousands of people into a registry and then use it to select subjects for clinical trial,” he explained. To enroll, participants provide their names and other personal information, and answer questions about their patterns around sleep, mood, exercise, medications, use of alcohol, and other behaviors. They also take online cognitive tests provided by Lumosity, a brain-game company.

Their test results and personal information are then entered into the registry, which can be used to aid research in several ways. It can be analyzed to detect trends — for example, are there patterns suggesting a linkage between sleep disorders and poor cognitive functioning? It could be used to help researchers detect people with very early Alzheimer’s, Weiner noted. And UCSF researchers can contact registry volunteers to invite them in for clinical studies.

“I want 50,000 people of all ages within the San Francisco Bay Area,” Weiner said of his initial goal. By the end of 2017, the recruitment goal is 100,000. So far, 2,000 have signed up as volunteer subjects.

The Brain Health Registry could turn out to be a tool for facilitating long-term goals like finding a cure for Alzheimer’s. But having this giant database filled with sensitive personal information brings up at least one important question: What if there’s a data breach?

“I’ve been doing research for a long time, and never has there been a loss of data,” Weiner responded, vouching for the system’s ability to keep data safe. “I’m in there, my two children are in there.”

 

LIKABLE ROBOTS

Carla Diana spoke at Berkeley’s Center for New Media symposium on April 4. A designer whose work involves playing around with the expressive elements of technology, she helped create a robot called Simon with a team of researchers at the Georgia Institute of Technology.

She said the purpose of designing Simon was “to study how we can interact with the machine in the most natural way possible.”

Simon is cute and looks like a doll. With an all-white head and torso designed by Meka Robotics, a San Francisco-based robotics company that was recently acquired by Google, Simon has expressive droopy eyes outfitted with cameras, light-up ears that flop up and down, and mechanical hands that grip things.

He (it?) is programmed to track people as they interact with him, understand spoken sentences, and respond with expressive sound and movement that mimics human social behavior. Diana said the robot was designed with diminutive features on purpose, as a way of conveying that he has a lot to learn.

Diana is a fellow at Smart Design, where she oversees the Smart Interaction Lab. Her work isn’t just about making machines; it’s also about studying how people interact with smart technologies, and thinking carefully about how the “personality” of a machine, conveyed through a designed sensory experience, can excite people, motivate them, or push their buttons, so to speak.

Asked during the question-and-answer period about the ethical implications of designing machines meant to reach people on an emotional level, Diana acknowledged that this is precisely what smart technology designers are up to.

“It’s the responsibility of the designer to realize that we are doing that,” she replied. “We are creating entities that have the ability to manipulate humans’ emotions. And that’s that.”

 

WELCOME TO THE MACHINE

Mireille Hildebrandt, a lawyer, researcher, philosopher, and professor who flew in from the Netherlands to speak at the symposium, offered a big-picture view of what it means for people to interact with “smart” technologies or robotic machines, and she threw out questions about the overarching implications.

“We’re moving toward something called ubiquitous computing,” she explained. “The environment starts to adapt to your assumed preferences.”

Common examples of this adaptive technology abound on the Internet: Targeted advertising is based on individuals’ unique preferences. Google search tries to guess what you’re looking for before you finish typing.

What happens when this kind of “smart,” predictive tech goes beyond the computer screen? In some cases, that’s already happening: Think facial recognition technology that can scan an environment to detect a specific person. A less creepy example is smart appliances such as thermostats or robotic vacuum cleaners.

Bringing it up a notch, Hildebrandt asked the audience to imagine that everyone had a personal robot. “What if your robot does some A/B design, testing your moods, susceptibility to spending, voting, and other behaviors?” she asked. “What if your robot is online with its peers, sharing your behaviors to improve the user experience? … However smart they are, they aren’t human. They have no consciousness, let alone self-consciousness.”

In a robotic environment, she said, “You can be calculated. When I, as a robot, act like this, then [a person’s] behavior will likely be like that. … We would have to realize that they are continuously anticipating us.”

To live in this kind of souped-up environment brings up big questions, Hildebrandt said: “Who’s in control? What’s the business model? And how will it affect our privacy?”