Image: Yuichiro Chino/Getty Images

By Harriet Dempsey-Jones

How long will it be until artificial intelligence surpasses our own? UQ graduate Matthew Dahlitz explores the question in his new documentary, featuring scientists from the Queensland Brain Institute.


While we may feel bombarded by doomsday predictions about creating robots with human-like intelligence, UQ graduate, neuropsychotherapist and filmmaker Matthew Dahlitz believes there’s no need to panic. At least, not yet.

Dahlitz (Bachelor of Arts (Psychological Science) '94, Master of Counselling '14) has combined his knowledge of the human mind with his passion for the arts to release his first feature-length documentary with son Jachin, through their independent film production and media house, Perfekt Studios.

Titled Toward Singularity, the documentary explores how brain science is being used to inform the development of super intelligent computers and features interviews with a number of scientists from UQ’s Queensland Brain Institute (QBI).

Watch the trailer for Toward Singularity.

Dahlitz said the film’s title, Toward Singularity, refers to a theoretical point in the future when machines become more intelligent than their human creators.

To some, this time will also signify the point where the growth of technology and artificial intelligence (AI) becomes unstoppable and irreversible.

Dahlitz – who also established the online magazine The Neuropsychotherapist and later The Science of Psychotherapy website – said he and his son began researching the documentary with their opinions largely shaped by media sensationalism and ominous warnings from public figures, such as Stephen Hawking and Elon Musk.

Jachin and Matthew Dahlitz at the Queensland Brain Institute. Image: Anjanette Webb

“The media is often very dramatic, suggesting the world is about to end in a decade or two at the hands of AI,” Dahlitz told Contact.

“But once we started talking to academics who are very close to the field, we found that the experts mostly believe there is no reason to be worried.

“I thought that, for dramatic effect, we might be able to get some speculation about the dangers posed by ‘the singularity’, but we couldn’t. The researchers were really honest and there isn’t a lot of fear about what is to come.”

It seems like good news. But why are they so optimistic?

QBI Honorary Research Fellow Dr Peter Stratton, who is interviewed in the documentary, explains.

The long road ahead


“We can build AI that can do a huge array of impressive things. At the moment, however, we have to make separate AI systems for each individual task,” Dr Stratton said.

“This means that we are currently restricted to what is known as ‘narrow’ or ‘weak’ AI: systems programmed to only do one thing. For example, we might have a computer that can play chess better than a human, but that is the only thing it can do.

“To reach ‘the singularity’, we would need to develop ‘artificial general intelligence’ (AGI), or flexible machines that can apply their intelligence to any problem.

"We are at least 50 years from that, perhaps more like 200 years."

According to Dr Stratton, today’s AI is limited in other key ways that prevent it from connecting to Skynet (the advanced AI computer system that attempts to take over the world in the Terminator movies) and taking dominion over the human race.

“Currently, AI systems can only learn in a way that is imposed by us from the outside,” he said.


“We decide what we want computers to learn, and create mathematical functions that define how the network learns. The intelligence of the system is therefore completely dictated by the data we feed it.”
Dr Peter Stratton

"Until AI systems can self-direct their own learning, they will be stuck in the starting blocks.”

Dr Stratton’s research at QBI focuses on whether we can improve how AI works by making it think more like the human brain does.

“AI is currently ‘brain-inspired’, but not realistically brain-like,” Dr Stratton said.

“The basic processing elements are neuron-like, but the way we train these networks is very different to how the brain works. They are trained mathematically, rather than in a more organic, self-organising fashion, as in the human brain.

“To create AGI, I think we need to take a step back and build our AI models a little more like the brain,” he said.

“Initially, these new systems won’t do as well as current ‘deep learning’ systems, but I think we can get further down the track with something that works more like the brain.”
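As an illustration of the contrast Dr Stratton is drawing (again our own sketch, not his code), a Hebbian rule is one classic ‘self-organising’ alternative: each connection strengthens simply when the neurons at both ends are active together, with no externally defined loss function to descend. The version below uses Oja’s rule, a stabilised Hebbian update, and the toy data is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy input stream: 2-D data whose main axis of variation
# lies along the (1, 1) direction.
data = rng.normal(size=(10000, 2)) @ np.array([[2.0, 0.0], [0.0, 0.3]])
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
data = (rot @ data.T).T  # rotate so the main axis points along (1, 1)

w = rng.normal(size=2)  # weights of a single linear "neuron"
lr = 0.01

# Oja's rule: no loss function, no gradients -- each weight simply
# grows when input and output are active together, with a
# self-normalising decay term to keep the weights bounded.
for x in data:
    y = w @ x                    # neuron output
    w += lr * y * (x - y * w)    # Hebbian growth + decay

# Settles near ±(0.71, 0.71): the neuron has discovered the data's
# main axis by itself, just from the statistics of its inputs.
print(w / np.linalg.norm(w))
```

Here structure emerges from the data stream itself rather than from an objective we wrote down, which is closer in spirit to the organic learning Dr Stratton describes.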

Image: gremlin/Getty Images

Building in a safety switch


Probing deeper, the Dahlitzes found that the researchers interviewed in Toward Singularity believe there will still be no reason to worry even when scientists finally do achieve human-like AI.

“The academics we spoke to were confident that, even when we do get to the level of AGI, there will be enough checks and balances in place to avoid some of the apocalyptic, doomsday scenarios,” Dahlitz said.

Dr Stratton echoes this point.

“I think when we get to the stage where we can build AI that is as smart as us, we will also be able to stop it if we need to,” Dr Stratton said.

“It will come down to safety triggers, motivations, and understanding more about intelligence and about ourselves.”

Dr Stratton said our own drive to survive might be the reason we fear that AI would seek to wipe us out if it develops consciousness.

Queensland Brain Institute researcher Dr Peter Stratton. Image: Anjanette Webb

“People are competitive, and all animals in the world are competitive – but that is because we had to be. We evolved that way, and if you weren’t competitive, you and your species died out.”

Dr Stratton said that, by understanding our own biases, we could avoid programming them into our AI.

“There is no reason AI needs to be built with a survival instinct. You could build an AI with a sole goal to dismantle itself, and it would do everything it could to turn itself off.”

If there were any danger from intelligent computers, Dr Stratton said, it would not be because they had developed malevolent feelings towards humans.


“The biggest threat with AI is not that it decides it wants to compete with humans and wipe us out; it is the risk of unintended consequences.”
Dr Peter Stratton

“An example is a highly efficient paperclip maker: we build something to make paperclips, and it realises it could make even more of them if it demolished the world, pulling down buildings to use as raw material. The AI is doing something we want it to do, but in a way we don’t want it done.

“That is a bit more likely. But even so, it seems pretty far-fetched that we would build an AI to do one thing, give it the capacity to do many other things, and completely lose control of it.”

Humans will change too


Having seen firsthand the calm and positive attitudes of the experts, the Dahlitz duo wanted to reflect this in their documentary.

“The original intro and outro of the movie were dramatic pieces aimed at getting people to sit up and take notice that reaching ‘the singularity’ might not be so good for humanity,” Dahlitz, pictured below, said.

“When we did some initial screening for feedback, the academics involved said it was not really like that, and it was probably true that AI was going to be a lot more helpful.

“So, we changed the beginning and the end of the film. It still gets your attention, but it is far more positive.”

Dr Stratton is similarly upbeat.

He said we should stop being so concerned, because by the time we get there, it’s likely we will see things very differently.

“When the time comes, people will look back on us being worried about ‘the singularity’ and laugh. It is like people 500 years ago talking about the man in the moon, aliens attacking, or HG Wells-type stuff.

“As technology progresses, these points become moot, because we realise they don’t have a basis in reality. I think the same thing is going to happen with ‘the singularity’.”

While it might seem like a strange idea to us now, Dr Stratton believes there may be no question of ‘us versus them’ in the future. This will be because when ‘the singularity’ finally does occur, we will likely be part-machine ourselves.

“Already, people are getting silicon retinas and cochlear implants, so we are starting to merge with technology,” Dr Stratton said.

“I work with deep brain stimulation: we put electrodes in people’s brains to help them function better. I really think it is going to be more of a case of blending with the machines rather than competing with them.”


UQ graduate and Toward Singularity filmmaker Matthew Dahlitz. Image: Anjanette Webb
