Predicting the future is tricky business. While we're less than six years from the year 2025, attempting to forecast what the cyber security landscape will look like by then requires a combination of experience, insight, and pure conjecture.

That didn't stop a roomful of security pros from doing just that Wednesday at the RSA Conference in San Francisco. In a classroom environment, a few hundred attendees were asked to consider a handful of possible 2025 scenarios, and discuss their hopes and fears for each.

The exercise was designed as an introductory window into Cybersecurity Futures 2025, a global initiative spearheaded by the UC Berkeley Center for Long-Term Cybersecurity, the nonprofit research firm CNA's Institute for Public Research, and the World Economic Forum's Global Centre for Cybersecurity. It also represented a 40-minute version of an 8-hour seminar the effort's leaders — Steve Weber, a professor at UC Berkeley, and Dawn Thomas, associate director of the CNA institute — have been conducting.

The first scenario attendees considered was labeled "New Wiggle Room," which was defined as the concept that perfect data makes societies perfectly miserable. It was represented by a video that presented a future in which the tony Silicon Valley community of Portola Valley had deployed the technology needed to become the country's smartest city.

In the video, fictional residents discuss their reactions to the effort, which results in such things as a woman calling the police after being alerted that her neighbor's cat had entered her yard, as well as a man bemoaning receiving an automated parking ticket each time his car extended a few inches in front of a neighbor's driveway. The result is that people start creating multiple online identities so they can safely rail about each other.

"It turns out that having accurate information about everything creates more problems than it solves," one of these faux citizens states.

Attendees, who were seated at round tables, were then asked to discuss as a group the related issues that would keep them up at night. Our table's reactions ranged from an invasion of privacy and a propensity for monitoring all the wrong things to the unnecessary buildup of bad blood between people.

With that, we were on to a second scenario, this one labeled the "Quantum Leap," a reference to anticipated (and continuing) leaps in technological capability. We then watched a video featuring fictional news reports from 2025 detailing how Mexican drug cartels had obtained access to quantum computers, raising fears that the cartels could manipulate the Mexican economy.

"It's like going from an abacus to an iPhone," one faux expert says in the video of the prospect.

The video then goes on to another fictional report of a Mexican quantum computing expert having disappeared.

In discussing our fears of this scenario, our table focused on things like waking up to empty bank accounts, seeing the electrical grid brought down, and other potential chaos-causing events.

It was then on to the third scenario, dubbed "Trust Us," in which the world is deciding between the unsafe-but-open Internet and a fictional alternative: the safe but closely watched SafetyNet.

In this case, the video depicted the CEO of a fictional tech giant being fired for calling the SafetyNet into question. The CEO claims that the software behind SafetyNet is too sophisticated for humans to understand. In this future world, news reports detail citizens calling for more controls on — and understanding of — algorithms, along with a recognition that small data manipulations could have significant impacts on outcomes.

This time, our table's fears centered on bad data being used against us, as well as the potential for society to overreact by shutting down the fictional SafetyNet without fully understanding the technology's implications.

With all of these thoughts about future security and privacy crises swirling in our heads, we were then asked to fill out March Madness-style brackets pitting outcomes against each other, with good outcomes on one side and bad outcomes on the other. We voted via the RSA Conference app, with results shared in real time.

In the end, the outcome attendees most wanted to see was a future in which big data is effectively marshaled to address global challenges such as health care, climate change, and poverty, nudging out a world in which future generations become skeptical yet savvy about the nature and value of truth.

On the fear-based side, two outcomes ran neck-and-neck: One in which we're unable to recognize when data is being used to manipulate us, and a second in which data manipulation leads to the collapse of trust-based institutions.

In other words, when it comes to the cyber security challenges of the future, security pros clearly believe it's all about the data: On the one hand, they see benign uses of big data as a potential gold mine; on the other, they see horrifying potential for that data to be doctored in ways that undermine such efforts.

In wrapping things up, CNA's Thomas invited attendees to check out the initiative's website and share it in the workplace, thereby feeding the data the initiative seeks to collect so that it can better understand the issues. Longer term, Thomas asked attendees to hold futures sessions with their companies' leadership, show them data from both the site and the RSA Conference session, and, most importantly, engage in a discussion about preventing detrimental outcomes and investing in positive ones.

The upshot: They say hindsight is 20/20, so if we can develop hindsight before things happen, we should be able to steer technology and society in a better direction.