Experts asked to rank 20 ways artificial intelligence could be used to facilitate crime over the next 15 years listed "deepfakes" - fake audio or video content realistic enough that, just a few years ago, it would have been considered conclusive evidence - as their top concern.

The 20 ways were ranked in order of concern based on the harm AI could cause, the potential for criminal profit or gain, how easy they would be to carry out and how difficult they would be to stop.

Crimes of low concern included burglar bots - small robots used to gain entry into properties through access points such as letterboxes or cat flaps - because they are easy to defeat with measures such as letterbox cages. AI-assisted stalking was also ranked low: it can be extremely damaging to individuals, but it cannot operate at scale.

If you see video and audio of a leader saying or doing something alarming, are you more likely to believe it is real? Many people will believe it, while many others will never again believe anything - and that is the problem with deepfakes.

Crimes of medium concern included the sale of items and services fraudulently labeled as "AI", such as security screening and targeted advertising. Since supplements and alternative medicines are a $35 billion business, similar fraud in technology could be hugely profitable for companies that engage in it.

Yet deepfakes are the most troubling. Fake content would be difficult to detect and stop, and it could serve a variety of aims - from discrediting a public figure to extracting funds by impersonating a couple's son or daughter in a video call. Such content may also lead to widespread distrust of audio and visual evidence, which would itself be a societal harm.

Aside from deepfakes, five other AI-enabled crimes were judged to be of high concern. These were using driverless vehicles as weapons, helping to craft more tailored phishing messages (spear phishing), disrupting AI-controlled systems, harvesting online information for the purposes of large-scale blackmail, and AI-authored fake news.