PeakFlow.Zone


Should we be intimidated by increasing IT intellect?

Can the idea of collective intelligence, combining humans with AI, keep humanity ahead of potentially malevolent machines?

Netcel recently attended a fascinating lecture by Geoff Mulgan at The Royal Institution, in which he outlined his ideas on collective intelligence: a concept that, he proposes, will keep humanity several steps ahead of the rapidly blossoming intelligence of machines.

In the last couple of years, there have been increasingly apocalyptic stories of how computers will create mass unemployment in the white-collar sector.

The machines are learning rapidly. Possibly the most impressive example is Google DeepMind's AlphaGo. A year ago, DeepMind announced it had beaten the Go world champion with a version of its AI programme. Whilst the rules of Go are simpler than those of chess, it is a game of huge complexity. The number of potential moves a programme would have to search is vast: more than a googol (10^100) times larger than chess, and greater than the number of atoms in the Universe! Creating a "brute force" algorithm that searches all possibilities is untenable, which makes AlphaGo's defeat of the best human so impressive. Even more awe-inspiring is that, less than a year later, a new version, AlphaGo Zero, beat the old incarnation 100-0. The new version had no input from human experts, used only one computer (with four specialised TPUs) and took only 40 days of training, playing 30 million games against itself in that time. All it knew at the start was the rules of Go.
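To get a feel for why brute-force search is untenable, a rough back-of-the-envelope calculation can be sketched in Python. The branching factors (~35 for chess, ~250 for Go) and game lengths (~80 and ~150 moves) below are commonly cited estimates, not exact figures:

```python
import math

# Commonly cited rough estimates of average branching factor and game length.
CHESS_BRANCHING, CHESS_MOVES = 35, 80
GO_BRANCHING, GO_MOVES = 250, 150

# Work in log10, since the actual counts are astronomically large.
chess_tree = CHESS_MOVES * math.log10(CHESS_BRANCHING)  # roughly 10^124 lines of play
go_tree = GO_MOVES * math.log10(GO_BRANCHING)           # roughly 10^360 lines of play

print(f"Chess game tree: ~10^{chess_tree:.0f}")
print(f"Go game tree:    ~10^{go_tree:.0f}")
print(f"Go is ~10^{go_tree - chess_tree:.0f} times larger")
```

The gap comes out at well over a hundred orders of magnitude, consistent with the "more than a googol times larger" comparison above.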

So what chance do we stand against this intimidating IT intellect?

Geoff Mulgan explained his ideas to address this at The Royal Institution. We were seated in the famous lecture theatre that will be familiar to all aspiring techies who looked forward to the BBC's broadcasts of the RI's Christmas Lectures. Geoff is CEO of Nesta, controlling a budget of over £10m in science and technology research. His CV is impressive: he has been a senior adviser to both Blair and Brown, and is a self-declared policy polymath. His lecture summarised the ideas in his new book, 'Big Mind: How Collective Intelligence Can Change Our World'.

He provided fascinating examples of the power of collective human intelligence, starting with the 1906 'wisdom of crowds' experiment, in which around 800 people (rather unpleasantly) estimated the weight of a slaughtered ox, their averaged guesses coming within 1% of its actual value. However, the ox analysis had its flaws, and there are many counterexamples where teams, meetings, companies, communities and countries make poor decisions. The collective well-paid minds and technology of Wall Street did not spot the impending financial crash of 2008.
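The ox result is easy to reproduce in a toy simulation: if each individual's guess is noisy but unbiased, the average of many guesses converges on the true value. The 1,198 lb weight matches the figure usually quoted for Galton's experiment; the 10% spread of guesses is an illustrative assumption:

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198  # lb: the dressed weight usually quoted for the ox
N_GUESSERS = 800

# Assume each guess is unbiased but noisy (the 10% spread is illustrative).
guesses = [random.gauss(TRUE_WEIGHT, 0.10 * TRUE_WEIGHT) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
error = abs(crowd_estimate - TRUE_WEIGHT) / TRUE_WEIGHT

print(f"Crowd estimate: {crowd_estimate:.0f} lb (error {error:.2%})")
```

Individual guesses are off by around 10%, yet the crowd's average lands well within 1% of the true weight, because the independent errors cancel out.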

Geoff outlined his ideas on what is needed for collective intelligence to be effective. It must:

  • Combine sources of observations and data.
  • Focus, determining what data is most important.
  • Analyse and interpret this input to create predictive models of reality.
  • Have a deep memory of these inputs and predictions.
  • Have the ability to act in the real world in response to the models.
  • Be creative and innovative in response to new problems.
  • Be empathetic, having the ability to see a problem from the perspective of another person or organisation.
  • Provide judgement and ultimately wisdom, to ascertain the best way forward in complex situations.

Geoff provided many examples of his vision of collective intelligence already at work, particularly those that combine humans and AI. My favourite is Duolingo, an app for learning languages.

An older estimate of the time needed to 'master' a language was 130 hours. Rosetta Stone claimed to have reduced this to 54 hours. Duolingo combined AI with 150,000 humans to iterate and improve its learning algorithm, reducing the time to 34 hours.

Of greater global value, there are several examples aimed at improving global health, e.g. AIME (predicting Zika and Dengue outbreaks) and NCRAS (UK best practice for cancer treatment), and at monitoring the environment, e.g. Planetary Skin and Copernicus.

The talk concluded with a plea for collective intelligence to be established as a new science. Whilst there are decades of research on how humans think individually, there is a mere fraction on how we can think more effectively in groups. I think there are some concerns with the idea of CI. Firstly, does the wisdom of crowds create a statistically "mean", middle-of-the-road solution, missing out on potentially exciting outliers in the tail of the distribution? Secondly, the model needs to consider an immune system, particularly when AI is involved.
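The first concern is easy to illustrate: averaging pulls the collective answer towards the centre of the distribution, so an exceptional idea in the tail is invisible in the consensus. A minimal sketch, using entirely synthetic "quality scores":

```python
import random

random.seed(0)

# Synthetic quality scores for 1,000 proposed solutions, normally distributed.
ideas = [random.gauss(50, 15) for _ in range(1000)]

consensus = sum(ideas) / len(ideas)  # the statistical "mean" solution
best = max(ideas)                    # the exceptional outlier in the tail

print(f"Consensus quality: {consensus:.1f}")
print(f"Best single idea:  {best:.1f}")
```

The consensus sits near the centre of the distribution while the best single idea sits several standard deviations above it; a purely averaging crowd would never surface it.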

What happens if the system becomes infected and sets off in a malevolent direction?

How will humans (or indeed other AI computers) be involved to provide their judgement and wisdom to prevent this?

It is clear from the examples that the way forward will combine human and machine capabilities, using more technology to allow human collaboration at scale. Ultimately though, Geoff's book predicts that what we learn from this new collective intelligence may make uncomfortable reading: its ability to generate new levels of awareness and expand our horizons may simply create discomforting uncertainty about the future.

The original article can be found on the Netcel site.