Ready, Set, Future!
I’ve enrolled in this Ready, Set, Future! Introduction to Futures Thinking course on Coursera.
I’m liking it a lot, and I appreciate that the instructor and I have a somewhat shared background in game development. Jane McGonigal is a game designer, and I was an animator & technical artist. Certainly different roles and work, but there’s an overlap in that Venn diagram.
Anyway, Futures Thinking is not about prediction. There’s no crystal ball to gaze into. It’s a process of understanding signals happening now that could lead to something new ten years from now.
I’m still wrapping my head around it all. But it’s work aligned with my strengths and interests, and I think there is certainly a need for it. With so much ambiguity these days, Futures Thinking can help you understand a possible future and make plans to make it real.
So what’s a Signal?
Here’s one of the example signals provided in the course:
Yeah, give that headline a read again. 5G collars let cows choose when they want to be milked. BTW, the milking is done by robots.
That happened in 2019, on one farm with 50 cows, if I recall correctly. It was an experiment.
The pros of this approach are:
- It reduces infections, disease, and discomfort for the cows, since they don’t have to wait for the farmers to milk them.
- The farmer is freed up to take care of the many other things needed to run a farm.
Let’s say this scales up over 10 years. What else might be true?
- Do most or all animals on a farm perform self-service?
- Are the local farms networked to monitor production, disruptions, and illness?
- Will farmers be able to do new things to make their farms more effective and efficient?
- I could keep going…
This is Futures Thinking.
- Here’s a signal (some small new thing).
- In ten years what else might be true?
- Do we want those things?
- How do we create the futures we want, and mitigate the ones we don’t?
My first Signal
One of our assignments was to find a signal and think through it. A friend shared this article on Facebook and I thought it was a great signal.
Here’s my response to the signal.
Summary: A stroke survivor hears their own voice and sees their face move for the first time in eighteen years, thanks to a team of researchers pushing the limits of neuroscience, brain-computer technology, and AI.
This represents a change for people affected by paralysis, from living “locked-in” lives (fully sensing, with typical cognitive abilities, but unable to move or speak) to being able to communicate effectively and “really live while I’m still alive!”
The force creating this, and the one that can cause it to scale, is the drive to create equitable communicative lives for all people: to better ourselves and make space for all voices.
Ten years from now, the effects on the portion of the population affected by paralysis are clear: people will be able to communicate effectively. What other opportunities might there be for this kind of “think to speech” for vocal and non-vocal people?
- Could this technology be effective in the mainstream?
- Could one think and not say “hey Siri remind me to do the laundry when I get home”?
- Could we speak to each other in our minds?
- Could our pets talk to us?
I believe that the world would be better for this change, especially if used for medical and neuroscience applications.
What could you see happening if this becomes more widespread over the next ten years?
Sidenote: using AI to make all our lives more human and humane sounds great. I’ve seen many versions of this sentiment, and I fully agree:
Humans working the hard jobs for minimum wage while the ai robots write poetry and create artwork is not the future I wanted.