The technology industry is facing up to the world-shaking ramifications of artificial intelligence. There is now broad recognition that AI will disrupt how societies operate, from education and employment to the collection of data about people.
Machine learning, a form of advanced pattern recognition that enables machines to make judgments by analysing large volumes of data, could greatly supplement human thought. But such soaring capabilities have stirred almost Frankenstein-like fears about whether developers can control their creations.
Failures of autonomous systems — such as the death last year of a US motorist in a partially self-driving car from Tesla Motors — have led to a focus on safety, says Stuart Russell, a professor of computer science and AI expert at the University of California, Berkeley. “That kind of event can set back the industry a long way, so there is a very straightforward economic self-interest here,” he says.
Alongside immigration and globalisation, fears of AI-driven automation are fuelling public anxiety about inequality and job security. The election of Donald Trump as US president and the UK’s vote to leave the EU were partly driven by such concerns. While some politicians claim protectionist policies will help workers, many industry experts say most job losses are caused by technological change, largely automation.
Global elites — those with high income and educational levels, who live in capital cities — are considerably more enthusiastic about innovation than the general population, the FT/Qualcomm Essential Future survey found. This gap, unless addressed, will continue to cause political friction.
Vivek Wadhwa, a US-based entrepreneur and academic who writes about ethics and technology, thinks the new wave of automation has geopolitical implications: “Tech companies must accept responsibility for what they’re creating and work with users and policymakers to mitigate the risks and negative impacts. They must have their people spend as much time thinking about what could go wrong as they do hyping products.”
The industry is bracing itself for a backlash. Advances in AI and robotics have brought automation to areas of white-collar work, such as legal paperwork and analysing financial data. Some 45 per cent of US employees’ work time is spent on tasks that could be automated with existing technologies, a study by McKinsey says.
Industry and academic initiatives have been set up to ensure AI works to help people. These include the Partnership on AI to Benefit People and Society, established by companies including IBM, and a $27m effort involving Harvard and the Massachusetts Institute of Technology. Groups like OpenAI, backed by Elon Musk and Google, have made progress, says Prof Russell: “We’ve seen papers … that address the technical problem of safety.”
There are echoes of past efforts to deal with the complications of a new technology. Satya Nadella, chief executive of Microsoft, compares it to 15 years ago when Bill Gates rallied his company’s developers to combat computer malware. His “trustworthy computing” initiative was a watershed moment. In an interview with the FT, Mr Nadella said he hoped to do something similar to ensure AI works to benefit humans.
AI presents some thorny problems, however. Machine learning systems derive insights from large amounts of data. Eric Horvitz, a Microsoft executive, told a US Senate hearing late last year that these data sets may themselves be skewed. “Many of our data sets have been collected … with assumptions we may not deeply understand, and we don’t want our machine-learned applications … to be amplifying cultural biases,” he said.
Last year, an investigation by news organisation ProPublica found that an algorithm used by the US justice system to determine whether criminal defendants were likely to reoffend had a racial bias. Black defendants with a low risk of reoffending were more likely than white ones to be labelled as high risk.
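The disparity ProPublica measured can be expressed in a few lines of code. Below is a minimal sketch of such an audit in Python, using invented records rather than the actual COMPAS data; the field names and the toy figures are illustrative assumptions, not the investigation’s methodology.

```python
# Minimal sketch of a group-fairness audit on invented data.
# A "false positive" here is a defendant labelled high risk who did not reoffend.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` whom the model flagged as high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(r["labelled_high_risk"] for r in non_reoffenders)
    return flagged / len(non_reoffenders)

# Hypothetical audit records: group membership, model label, observed outcome.
records = [
    {"group": "black", "labelled_high_risk": True,  "reoffended": False},
    {"group": "black", "labelled_high_risk": False, "reoffended": False},
    {"group": "white", "labelled_high_risk": False, "reoffended": False},
    {"group": "white", "labelled_high_risk": False, "reoffended": True},
]

for group in ("black", "white"):
    print(group, false_positive_rate(records, group))
```

A persistent gap between the two rates, with actual outcomes held fixed, is the kind of bias the investigation reported.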
Greater transparency is one way forward, for example by making it clear what information AI systems have used. But the “thought processes” of deep learning systems are not easy to audit. Mr Horvitz says such systems are hard for humans to understand. “We need to understand how to justify [their] decisions and how the thinking is done.”
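When a model’s internals cannot be read directly, one common workaround is to probe it from the outside, for example by shuffling one input at a time and measuring how much accuracy degrades. The sketch below assumes a model object with a scikit-learn-style `predict` method and list-of-lists inputs; both are assumptions for illustration, not a standard auditing API.

```python
import random

def permutation_importance(model, X, y, feature_idx, metric):
    """Drop in `metric` when one feature's values are shuffled across rows:
    a crude, model-agnostic signal of how much the model leans on that feature."""
    baseline = metric(y, model.predict(X))
    shuffled = [row[:] for row in X]                  # copy each row
    column = [row[feature_idx] for row in shuffled]
    random.shuffle(column)                            # break the feature-outcome link
    for row, value in zip(shuffled, column):
        row[feature_idx] = value
    return baseline - metric(y, model.predict(shuffled))
```

A large drop for a sensitive attribute, or for a proxy such as a postcode, would flag the system for closer human review.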
As AI comes to influence more government and business decisions, the ramifications will be widespread. “How do we make sure the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society?” asks Joi Ito, director of MIT’s Media Lab.
Executives like Mr Nadella believe a mixture of government oversight — including, by implication, the regulation of algorithms — and industry action will be the answer. He plans to create an ethics board at Microsoft to deal with any difficult questions thrown up by AI.
He says: “I want … an ethics board that says, ‘If we are going to use AI in the context of anything that is doing prediction, that can actually have societal impact … that it doesn’t come with some bias that’s built in.’”
Making sure AI systems benefit humans without unintended consequences is difficult. Human society is incapable of defining what it wants, says Prof Russell, so programming machines to maximise the happiness of the greatest number of people is problematic.
This is AI’s so-called “control problem”: the risk that smart machines will single-mindedly pursue arbitrary goals even when they are undesirable. “The machine has to allow for uncertainty about what it is the human really wants,” says Prof Russell.
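Prof Russell’s point can be made concrete with a toy calculation. The sketch below, with invented numbers, contrasts an agent that is certain of its objective with one that averages over several candidate readings of what the human wants; the goals, probabilities and utilities are all hypothetical.

```python
# Toy illustration of the "control problem": choosing an action under
# uncertainty about which objective the human actually intends.
# All goals, probabilities and utilities are invented.

candidate_goals = {"goal_1": 0.4, "goal_2": 0.4, "goal_3": 0.2}

# Utility of each action under each candidate goal.
utility = {
    "single_minded": {"goal_1": 10, "goal_2": -50, "goal_3": -50},
    "cautious":      {"goal_1": 3,  "goal_2": 3,   "goal_3": 2},
}

def expected_utility(action):
    """Average utility over the agent's uncertainty about the true goal."""
    return sum(p * utility[action][goal] for goal, p in candidate_goals.items())

# Certain that goal_1 is the objective, the agent acts single-mindedly;
# allowing for uncertainty about the goal, the cautious action wins.
print(max(utility, key=lambda a: utility[a]["goal_1"]))  # single_minded
print(max(utility, key=expected_utility))                # cautious
```

Allowing for that uncertainty, as in the second choice, is what stops the machine pursuing an arbitrary goal single-mindedly.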
Ethics committees will not resolve concerns about AI taking jobs, however. Fears of a backlash were apparent at this year’s World Economic Forum in Davos, as executives agonised over how to present AI. The common response was to say that machines will make many jobs more fulfilling, even if other jobs are replaced.
The profits from productivity gains for tech companies and their customers could be huge. How those should be distributed will become part of the AI debate. “Whenever someone cuts cost, that means, hopefully, a surplus is being created,” says Mr Nadella. “You can always tax surplus — you can always make sure that surplus gets distributed differently.”