Artificial Intelligence: Rise of the Machines?

Anders Kirkeby

A little over a quarter of a century ago, one still had to put a stamp on an envelope and drop it in a mailbox to send a letter. A phone call was made either from home or from the office, and if a call was urgent on the way from one to the other, one had to stop at a phone booth. Music was bought in a record store. And analysts at banks and investment companies spent long hours poring over many different documents and checking dozens of information sources before making their decisions, hoping they hadn’t missed something that might backfire in a costly way.

Courtesy of computer technology, these are all things of the past, and one would have great difficulty imagining getting through an average day without something as commonplace as a smartphone. Cars can now drive themselves, as can the plane overhead. All this, of course, courtesy of human data input. So too in the financial world, where artificial intelligence is making headway within the investment community.

So the obvious next step is for machines to actually think for themselves and make decisions accordingly, in all aspects of society. Something to embrace or to be afraid of? Will machines soon tell us where and when to invest, or quite simply do so on their own? SimCorp’s experts explain.

Artificial intelligence is still in its infancy, but progress is fast, faster than ever before. “Currently it is only a few heavily quantitative hedge funds and very early novel fund products that employ artificial intelligence,” says Anders Kirkeby, Vice-President Enterprise Architecture at SimCorp. “For it to become not just widely used but dominant will take proof of superior performance once costs are taken into account, i.e. the net return. Acceptance as a ‘thing’ in the market will probably take another two to three years. Becoming a common choice is probably some four to six years away, and actual dominance is probably still a decade away – assuming market dynamics do not change fundamentally in a way that positions artificial intelligence less favourably.”

As with every technological breakthrough, there will inevitably be positives and negatives to artificial intelligence. “Data science technology offers the ability to analyse very large data sets repeatedly and along many more dimensions than people can,” Kirkeby continues. “On top of that, artificial intelligence offers the ability to spot patterns and interesting correlations human analysts wouldn’t find – either because they would need too much time or because they have various biases. But artificial intelligence is only as good as those designing it. It’s all too easy to find correlation without causation. Also, one day artificial intelligence may become so adept at interfacing with the human world that it will have access to more or less all the information human beings have. But we are not quite there yet, so designers of artificial intelligence solutions must remain humble about the limitations of what they build. Further, there is a risk of herd behaviour which, if we are not careful, could result in interesting new market crashes and general volatility.”
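Kirkeby’s warning about correlation without causation is easy to illustrate: scan enough dimensions and some of them will correlate with any target purely by chance. The following is a minimal sketch in Python, using entirely synthetic data invented for the illustration:

```python
import numpy as np

# Synthetic demonstration: 1,000 purely random "signals" vs. one random
# "return" series. None of the signals has any causal link to the target,
# yet a naive scan still surfaces some with seemingly impressive correlations.
rng = np.random.default_rng(seed=42)

n_days, n_signals = 250, 1000            # one trading year, many candidate factors
signals = rng.normal(size=(n_days, n_signals))
returns = rng.normal(size=n_days)        # the "asset return" is pure noise too

# Correlation of each candidate signal with the return series.
corrs = np.array([np.corrcoef(signals[:, i], returns)[0, 1]
                  for i in range(n_signals)])

best = np.argmax(np.abs(corrs))
print(f"Best of {n_signals} random signals: r = {corrs[best]:+.3f}")
# Typically prints |r| around 0.2 -- "significant" if tested in isolation,
# but guaranteed spurious here: the breadth of the search inflates false positives.
```

The point is not that correlation scans are useless, but that the more dimensions a system searches, the more sceptically each apparent pattern must be validated.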

A certain apprehension is customary with every technical innovation, and it therefore takes a while to convince all involved of the advantages. “Generally speaking, our clients are not quite there yet. The buy-side industry is technology-driven and has been for decades, but it does remain quite conservative. We do spot the occasional expression of interest, but it is still at the level of ‘what is this thing and should I care?’”

As always, the overriding question is whether the accuracy of artificial intelligence relative to human intelligence can be scientifically quantified. Anders Kirkeby is adamant: “Within specific discrete domains the answer would be a clear yes – artificial intelligence can verifiably outperform human beings at tasks whose outcomes are measured by clearly objective quantitative measures.”

But what about the legal side of things? Are there any legal constraints on implementing artificial intelligence, per sector, per country, per region? “Not yet, but we will definitely need some,” Kirkeby admits. “In fact, we will most likely need regulators to run AIs watching other AIs to ensure stable and fair markets. That is a fun but bigger and more speculative topic, which we can expand upon if interested.”

With legal constraints come the inevitable legal hurdles of liability. How will these be measured and/or quantified? “It’ll be a fair few years before we are ready to contemplate making AIs legal persons, so as such artificial intelligence cannot be liable for anything,” says Kirkeby. “The ones owning the AIs are liable. Part of the novelty of artificial intelligence is that it is essentially a complexity game – you apply machinery to a task which is increasingly complex. As a result, the solutions become more opaque.”

“So the owners and/or producers of an artificial intelligence-based solution carry any inferred or explicit liability. How can they do that when the thing creating the liability is growing more and more opaque? That is one of the questions my research team, SimCorp Technology Labs, and I are grappling with; we think we can see how we can add value to a number of real-world challenges in the buy-side investment management industry we serve. But it is in many ways a harder task to find credible and informative ways of opening up the box for inspection – creating reassurance while keeping things simple and possibly even protecting some IP.”

“This is a space where we should expect to see lots of regulation. Investment behaviour will be scrutinized. But broader challenges of the same category abound, an obvious example being self-driving cars – one day a self-driving car will have to make a judgment between saving the passenger or a stranger.”

In this day and age, news bulletins are rife with reports of ever more sophisticated hacking by malicious companies, organizations and even countries, so what steps can one take to prevent serious issues in this domain on a local, international or even global scale? “At this juncture I’d mostly worry about hacking that deliberately and subtly modifies data to drive an artificial intelligence towards certain behaviours,” Kirkeby explains. “The algorithms could be hacked, but I think we know how to secure them quite well. So the real risk is that the artificial intelligence operates happily, doing what it and its handlers expect, but the results are still wrong because the data has been tampered with before it reaches the AI. In the investment management space there are no good excuses not to encrypt lines, use certificates and in general harden all external interfaces to prevent tampering with data at rest or in motion.”
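Kirkeby’s point about tampered inputs is, at its core, a data-integrity problem, and one standard countermeasure is to authenticate payloads cryptographically. Below is a minimal sketch in Python using the standard library’s HMAC support; the payload format and the hard-coded key are hypothetical stand-ins (a real deployment would fetch keys from a key-management system):

```python
import hmac
import hashlib

# Hypothetical shared secret; in practice this would come from a KMS/HSM.
SECRET_KEY = b"hypothetical-shared-secret"

def sign(payload: bytes) -> str:
    """Producer side: compute a MAC over the exact bytes sent or stored."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, mac: str) -> bool:
    """Consumer side: recompute the MAC and compare in constant time."""
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

price_tick = b'{"isin": "XX0000000000", "price": 101.25}'
mac = sign(price_tick)

tampered = b'{"isin": "XX0000000000", "price": 110.25}'
print(verify(price_tick, mac))   # True  -- untouched data passes
print(verify(tampered, mac))     # False -- any modification is flagged
```

Transport encryption (TLS with certificates, as Kirkeby mentions) protects data in motion; message authentication of this kind additionally lets the consuming system reject data that was altered at rest, before it ever reaches the AI.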

“Today machine learning is the hottest field in computer science, one which touches several other disciplines too. We have talked about artificial intelligence since the 1950s, and the level of optimism has come and gone,” Kirkeby says. “But now we have access to capable, easy-to-use tools alongside almost infinite compute power in the cloud. As a result we are finally seeing real and useful applications, including self-driving cars.”

“Within artificial intelligence, ‘deep learning’ is probably both the newest and the fastest-growing area,” he continues. “Essentially it means you employ a series of AI tools in a chain or hierarchy where one feeds another. This layered approach adds the depth needed to spot complex patterns in data. Alphabet has invested a lot in this particular area – it would seem they believe very strongly in unsupervised learning and want to use deep learning to compensate for not teaching their AI what to do.”
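The layered idea can be seen in miniature: each layer transforms the output of the previous one, and stacking layers is what gives a deep network its capacity for complex patterns. The sketch below is a toy forward pass through a small stack of layers in Python with NumPy; the sizes and random weights are arbitrary illustrations, not a trained or production model:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One layer: a linear transform followed by a ReLU non-linearity."""
    return np.maximum(0.0, x @ w + b)

# Three stacked layers, each feeding the next -- the "depth" in deep learning.
sizes = [16, 32, 32, 1]              # input -> hidden -> hidden -> output
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(8, sizes[0]))   # a batch of 8 example inputs
for w, b in zip(weights, biases):
    x = layer(x, w, b)               # output of one layer is input to the next

print(x.shape)                       # (8, 1): one score per example
```

In a real system the weights would be learned from data rather than drawn at random, but the structure is the same: depth comes from composing simple layers.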

This brings us to the inevitable question: when will machines be able to think for themselves? Kirkeby: “That requires a very clear definition of what thinking for themselves really means. The classic Turing test is hardly an indication that a computer system is able to think for itself. We could take it to mean that a computer system ceases to be purpose-specific and chooses its own purpose in life. Ray Kurzweil, who popularised the concept of the singularity – the point in time when computers surpass the total combined human capacity for intelligence on Earth – predicts this will happen around 2045. Personally I think that’s a very interesting thought, but it doesn’t help us in the meantime. We should worry less about when machines can think for themselves and focus instead on building AIs for use cases that matter to people. As we perfect these systems, I expect we will eventually reach a level where we are generally happy with the outcome but no longer fully comprehend exactly why an AI behaved in a particular way. To me, intelligence is an emergent phenomenon which may arise in different contexts, but always out of complex networks.”

Seventy-five years ago, British scientist Alan Turing conceived the predecessor of the modern computer and helped crack the Enigma code. Little did he know that in the early 21st century his invention would once again be about to revolutionize almost every segment of society. The question he famously asked back then, ‘Can machines think?’, could very soon be answered. Until that time, the scientific community is hard at work on one of Turing’s other famous reflections: ‘A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.’

Alan Mathison Turing was an English computer scientist, mathematician, logician, cryptanalyst and theoretical biologist.

Raymond “Ray” Kurzweil is an American author, computer scientist, inventor and futurist.