
Arts Humanities and Social Sciences Research

The School of Arts and Humanities and the School of the Humanities and Social Sciences


Research in the News

Living with artificial intelligence: how do we get it right?

By Anonymous from University of Cambridge - School of Arts and Humanities. Published on Feb 28, 2018.

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That’s yesterday’s news. What’s next?

True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that are capable of human-level performance on the full range of tasks that we ourselves can tackle.

If so, then there’s little reason to think that it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and need to fit through a human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the Webb Space Telescope.

Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines?

On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more we ask it to do for us, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, who didn’t really want his breakfast to turn to gold as he put it to his lips.

So we need to make sure that powerful AI machines are ‘human-friendly’ – that they have goals reliably aligned with our own values. One thing that makes this task difficult is that by the standards we want the machines to aim for, we ourselves do rather poorly. Humans are far from reliably human-friendly. We do many terrible things to each other and to many other sentient creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.

For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll have the smarts for the job. If there are routes to the uplands, they’ll be better than us at finding them, and steering us in the right direction. They might be our guides to a much better world.

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity and precision that we can be confident that they will find it – whatever ‘it’ actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities might have different views.

The ‘destination’ problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.

Just to focus on one aspect of these difficulties, we are deeply tribal creatures. We find it very easy to ignore the suffering of strangers, and even to contribute to it, at least indirectly. For our own sakes, we should hope that AI will do better. It is not just that we might find ourselves at the mercy of some other tribe’s AI, but that we could not trust our own, if we had taught it that not all suffering matters. This means that as tribal and morally fallible creatures, we need to point the machines in the direction of something better. How do we do that? That’s the getting started problem.

As for the destination problem, suppose that we succeed. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own tribes, for example.

Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to keep slaves, or to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical overlords – sanctimonious silicon curtailing our options? They might be so good at doing it that we don’t notice the fences; but is this the future we want, a life in a well-curated moral zoo?

These issues might seem far-fetched, but they are already on our doorsteps. Imagine we want an AI to handle resource allocation decisions in our health system, for example. It might do so much more fairly and efficiently than humans can manage, with benefits for patients and taxpayers. But we’d need to specify its goals correctly (e.g. to avoid discriminatory practices), and we’d be depriving some humans (e.g. senior doctors) of some of the discretion they presently enjoy. So we already face the getting started and destination problems. And they are only going to get harder.

This isn’t the first time that a powerful new technology has had moral implications. Speaking about the dangers of thermonuclear weapons in 1954, Bertrand Russell argued that to avoid wiping ourselves out “we have to learn to think in a new way”. He urged his listeners to set aside tribal allegiances and “consider yourself only as a member of a biological species... whose disappearance none of us can desire.”

We have survived the nuclear risk so far, but now we have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if so it will require the same cooperative spirit, the same willingness to set aside tribalism, that Russell had in mind.

But that’s where the parallel stops. Avoiding nuclear war means business as usual. Getting the long-term future of life with AI right means a very different world. Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. That means a radical end to human exceptionalism. All the more reason to think about the destination now, and to be careful about what we wish for.

Inset image: read more about our AI research in the University's research magazine; download a pdf; view on Issuu.

Professor Huw Price and Dr Karina Vold are at the Faculty of Philosophy and the Leverhulme Centre for the Future of Intelligence, where they work on 'Agents and persons'. This theme explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person.

Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


In tech we trust?

By lw355 from University of Cambridge - School of Arts and Humanities. Published on Feb 23, 2018.

Dr Jat Singh is familiar with breaking new ground and working across disciplines. Even so, he and colleagues were pleasantly surprised by how much enthusiasm greeted their new Strategic Research Initiative on Trustworthy Technologies, which brings together science, technology and humanities researchers from across the University.

In fact, Singh, a researcher in Cambridge’s Department of Computer Science and Technology, has been collaborating with lawyers for several years: “A legal perspective is paramount when you’re researching the technical dimensions to compliance, accountability and trust in emerging ICT; although the Computer Lab is not the usual home for lawyers, we have two joining soon.”

Governance and public trust present some of the greatest challenges in technology today. The European General Data Protection Regulation (GDPR), which comes into force this year, has brought forward debates such as whether individuals have a ‘right to an explanation’ regarding decisions made by machines, and introduces stiff penalties for breaching data protection rules. “With penalties including fines of up to 4% of global turnover or €20 million, people are realising that they need to take data protection much more seriously,” he says.
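The two-tier penalty cap Singh mentions is easy to see in numbers. A minimal sketch (the function name is illustrative; the “whichever is higher” rule for the top tier of fines comes from GDPR Article 83(5)):

```python
def max_gdpr_fine(global_turnover_eur: float) -> float:
    """Upper-tier GDPR cap: the greater of EUR 20 million or
    4% of total worldwide annual turnover (Art. 83(5))."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

# For a firm with EUR 100m turnover, the flat EUR 20m cap dominates;
# for a EUR 2bn firm, the 4%-of-turnover rule does.
print(max_gdpr_fine(100_000_000))    # 20000000.0
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```

The cap is a maximum, not a formula for an actual fine: regulators weigh factors such as the nature and duration of the breach when setting the amount.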

Singh is particularly interested in how data-driven systems and algorithms – including machine learning – will soon underpin and automate everything from transport networks to council services.

As we work, shop and travel, computers and mobile phones already collect, transmit and process much data about us; as the ‘Internet of Things’ continues to instrument the physical world, machines will increasingly mediate and influence our lives.

It’s a future that raises profound issues of privacy, security, safety and ultimately trust, says Singh, whose research is funded by an Engineering and Physical Sciences Research Council Fellowship: “We work on mechanisms for better transparency, control and agency in systems, so that, for instance, if I give data to someone or something, there are means for ensuring they’re doing the right things with it. We are also active in policy discussions to help better align the worlds of technology and law.”

What it means to trust machine learning systems also concerns Dr Adrian Weller. Before becoming a senior research fellow in the Department of Engineering and a Turing Fellow at The Alan Turing Institute, he spent many years working in trading for leading investment banks and hedge funds, and has seen first-hand how machine learning is changing the way we live and work.

“Not long ago, many markets were traded on exchanges by people in pits screaming and yelling,” Weller recalls. “Today, most market making and order matching is handled by computers. Automated algorithms can typically provide tighter, more responsive markets – and liquid markets are good for society.”

But cutting humans out of the loop can have unintended consequences, as the flash crash of 2010 shows. During 36 minutes on 6 May, nearly one trillion dollars were wiped off US stock markets as an unusually large sell order produced an emergent coordinated response from automated algorithms. “The flash crash was an important example illustrating that over time, as we have more AI agents operating in the real world, they may interact in ways that are hard to predict,” he says.

Algorithms are also beginning to be involved in critical decisions about our lives and liberty. In medicine, machine learning is helping diagnose diseases such as cancer and diabetic retinopathy; in US courts, algorithms are used to inform decisions about bail, sentencing and parole; and on social media and the web, our personal data and browsing history shape the news stories and advertisements we see.

How much we trust the ‘black box’ of machine learning systems, both as individuals and society, is clearly important. “There are settings, such as criminal justice, where we need to be able to ask why a system arrived at its conclusion – to check that appropriate process was followed, and to enable meaningful challenge,” says Weller. “Equally, to have effective real-world deployment of algorithmic systems, people will have to trust them.”

But even if we can lift the lid on these black boxes, how do we interpret what’s going on inside? “There are many kinds of transparency,” he explains. “A user contesting a decision needs a different kind of transparency to a developer who wants to debug a system. And a third form of transparency might be needed to ensure a system is accountable if something goes wrong, for example an accident involving a driverless car.”

If we can make them trustworthy and transparent, how can we ensure that algorithms do not discriminate unfairly against particular groups? While it might be useful for Google to advertise products it ‘thinks’ we are most likely to buy, it is more disquieting to discover the assumptions it makes based on our name or postcode.

When Latanya Sweeney, Professor of Government and Technology in Residence at Harvard University, tried to track down one of her academic papers by Googling her name, she was shocked to be presented with ads suggesting that she had been arrested. After much research, she discovered that “black-sounding” names were 25% more likely to result in the delivery of this kind of advertising.

Like Sweeney, Weller is both disturbed and intrigued by examples of machine-learned discrimination. “It’s a worry,” he acknowledges. “And people sometimes stop there – they assume it’s a case of garbage in, garbage out, end of story. In fact, it’s just the beginning, because we’re developing techniques that can automatically detect and remove some forms of bias.”

Transparency, reliability and trustworthiness are at the core of Weller’s work at the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute. His project grapples with how to make machine-learning decisions interpretable, how to ensure that AI systems perform well in real-world settings, and whether empathy is possible – or desirable – in AI.

Machine learning systems are here to stay. Whether they are a force for good rather than a source of division and discrimination depends partly on researchers such as Singh and Weller. The stakes are high, but so are the opportunities. Universities have a vital role to play, both as critic and conscience of society. Academics can help society imagine what lies ahead and decide what we want from machine learning – and what it would be wise to guard against.

Weller believes the future of work is a huge issue: “Many jobs will be substantially altered if not replaced by machines in coming decades. We need to think about how to deal with these big changes.” And academics must keep talking as well as thinking. “We’re grappling with pressing and important issues,” he concludes. “As technical experts we need to engage with society and talk about what we’re doing so that policy makers can try to work towards policy that’s technically and legally sensible.”


Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.

Want to hear more?

Join us at the Cambridge Science Festival to hear Adrian Weller discuss how we can ensure AI systems are transparent, reliable and trustworthy. 

Thursday 15 March 2018, 7:30pm - 8:30pm

Mill Lane Lecture Rooms, 8 Mill Lane, Cambridge, UK, CB2 1RW




International experts sound the alarm on the malicious use of AI in unique report

By sjr81 from University of Cambridge - School of Arts and Humanities. Published on Feb 21, 2018.

Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report – sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists.

For many decades hype outstripped fact in terms of AI and machine learning. No longer.
Seán Ó hÉigeartaigh



Fake news ‘vaccine’: online game may ‘inoculate’ by simulating propaganda tactics

By fpjl2 from University of Cambridge - School of Arts and Humanities. Published on Feb 20, 2018.

A new online game puts players in the shoes of an aspiring propagandist to give the public a taste of the techniques and motivations behind the spread of disinformation – potentially “inoculating” them against the influence of so-called fake news in the process.

Researchers at the University of Cambridge have already shown that briefly exposing people to tactics used by fake news producers can act as a “psychological vaccine” against bogus anti-science campaigns.

While the previous study focused on disinformation about climate science, the new online game is an experiment in providing “general immunity” against the wide range of fake news that has infected public debate.

The game encourages players to stoke anger, mistrust and fear in the public by manipulating digital news and social media within the simulation. 

Players build audiences for their fake news sites by publishing polarising falsehoods, deploying Twitter bots, photoshopping evidence, and inciting conspiracy theories in the wake of public tragedy – all while maintaining a “credibility score” to remain as persuasive as possible.

A pilot study conducted with teenagers at a Dutch high school used an early paper-and-pen version of the game, and showed that the perceived “reliability” of fake news was diminished in those who played compared with a control group.

The research and education project, a collaboration between Cambridge researchers and the Dutch media collective DROG, is launching an English version of the game online today.

The psychological theory behind the research is called “inoculation”:

“A biological vaccine administers a small dose of the disease to build immunity. Similarly, inoculation theory suggests that exposure to a weak or demystified version of an argument makes it easier to refute when confronted with more persuasive claims,” says Dr Sander van der Linden, Director of Cambridge University’s Social Decision-Making Lab.

“If you know what it is like to walk in the shoes of someone who is actively trying to deceive you, it should increase your ability to spot and resist the techniques of deceit. We want to help grow ‘mental antibodies’ that can provide some immunity against the rapid spread of misinformation.”

Based in part on existing studies of online propaganda, and taking cues from actual conspiracy theories about organisations such as the United Nations, the game is set to be translated for countries such as Ukraine, where disinformation casts a heavy shadow.

There are also plans to adapt the framework of the game for anti-radicalisation purposes, as many of the same manipulation techniques – using false information to provoke intense emotions, for example – are commonly deployed by recruiters for religious extremist groups.

“You don’t have to be a master spin doctor to create effective disinformation. Anyone can start a site and artificially amplify it through Twitter bots, for example. But recognising and resisting fake news doesn’t require a PhD in media studies either,” says Jon Roozenbeek, a researcher from Cambridge’s Department of Slavonic Studies and one of the game’s designers.

“We aren’t trying to drastically change behavior, but instead trigger a simple thought process to help foster critical and informed news consumption.”

Roozenbeek points out that some efforts to combat fake news are seen as ideologically charged. “The framework of our game allows players to lean towards the left or right of the political spectrum. It’s the experience of misleading through news that counts,” he says.

The pilot study in the Netherlands using a paper version of the game involved 95 students with an average age of 16, randomly divided into treatment and control.

This version of the game focused on the refugee crisis, and all participants were randomly presented with fabricated news articles on the topic at the end of the experiment.

The treatment group were assigned roles – alarmist, denier, conspiracy theorist or clickbait monger – and tasked with distorting a government fact sheet on asylum seekers using a set of cards outlining common propaganda tactics consistent with their role.    

The treatment group rated the fake news as significantly less reliable than did the control group, who had not produced their own fake articles. Researchers describe the results of this small study as limited but promising. The study has been accepted for publication in the Journal of Risk Research.

The team are aiming to take their “fake news vaccine” trials to the next level with today’s launch of the online game.

With content written mostly by the Cambridge researchers along with Ruurd Oosterwoud, founder of DROG, the game only takes a few minutes to complete. The hope is that players will then share it to help create a large anonymous dataset of journeys through the game.  

The researchers can then use this data to refine techniques for increasing media literacy and fake news resilience in a ‘post-truth’ world. “We try to let players experience what it is like to create a filter bubble so they are more likely to realise they may be living in one,” adds van der Linden.

A new experiment, launching today online, aims to help ‘inoculate’ against disinformation by providing a small dose of perspective from a “fake news tycoon”. A pilot study has shown some early success in building resistance to fake news among teenagers.   

A screen shot of the Fake News Game on a smart phone.



Kettle's Yard is back

By sjr81 from University of Cambridge - School of Arts and Humanities. Published on Feb 12, 2018.

We thought you might like a look inside the 'new' Kettle's Yard, which reopened to the public on Saturday, February 10, to learn more about its past – and future.

As Kettle's Yard opens its doors following a two-year, multi-million-pound redevelopment and transformation of its gallery spaces, the work of 38 internationally renowned contemporary and historic artists has gone on display in a spectacular opening show.



Opinion: What Ancient Greece can teach us about toxic masculinity today

By ts657 from University of Cambridge - School of Arts and Humanities. Published on Feb 08, 2018.

Comedy and tragedy masks

‘Toxic masculinity’ has its roots in Ancient Greece, and some of today’s most damaging myths around sexual norms can be traced back to early literature from the time, as Professor Mary Beard discusses in her latest book Women & Power: A Manifesto.

Euripides’ Hippolytus has toxic masculinity on every page; Greek myths are populated by rapists who are monstrous or otherworldly, while Medusa is an early example of victim blaming. Of course, in some texts, rapists are condemned and victims believed. But the ending is usually the same – triumph for the aggressor, tragedy for the survivor.

In Hippolytus, the titular male hero challenges sexual norms because he is celibate, by some counts asexual, preferring to spend his time outdoors. He is also a pious young man devoted to Artemis, the goddess of the wilderness and of virginity.

Aphrodite, as goddess of sexual love, is none too impressed. Hippolytus refuses to worship her. To exact her revenge, Aphrodite causes Hippolytus’ stepmother, Phaedra, to fall in love with him. Phaedra sexually harasses him, and his resistance leads her to falsely accuse him of rape in her suicide note. Hippolytus flees in disgrace and is killed. A sad tale, and far more complex than this brief summary can show.

My work training University of Cambridge students to be active bystanders, as part of the University’s Breaking the Silence campaign, has made me think more about Hippolytus and the concepts of masculinity that stretch back to ancient times.

Hippolytus’s father Theseus prefers to believe his son is a rapist rather than accept that he does not fit the definition of a ‘real man’. What kind of man doesn’t want sex, after all? What young prince, left at home with his young and beautiful stepmother, wouldn’t be tempted to get into bed with her? When deciding sexual and gender norms, we often make emotionally based value judgements. These create false beliefs that are some of the most resistant to truth, according to one US study.

Challenging myths and stereotypes

I cannot help but wonder whether society’s restricted definition of masculinity is contributing to the staggering statistics we see about the prevalence of sexual harassment and sexual violence on college campuses, as has been documented in the NUS report Hidden Marks. ‘Toxic’ norms of male behaviour are interrogated in anti-harassment programmes such as Cambridge’s Good Lad Initiative or the Twitter movement #HowIWillChange.

The images in popular culture, from men’s magazines to Hollywood movies, not to mention pornography so readily accessible on the internet, show a very restricted kind of masculinity.  The kind where aggression is rewarded and celebrated. 

Is it surprising, then, that so many of today’s young men seem to lack the confidence to be OK with taking things slow?  With not going out for the sole purpose of getting laid?  Isn’t that what everyone else is doing after all?

Challenging myths and stereotypes is also central to Cambridge’s bystander intervention programme.

We use social norms theory to show that what is perceived as the dominant view may well not be.  The ‘silent majority’ is strong.  And it only takes one or two people to stop being silent to change what is perceived to be normal and acceptable. 

We are empowering the students in our workshops to challenge the stereotypes, to see that it’s OK for them or their male friends to be a different kind of man.  Helping students to understand the culture, and perceptions, that enable sexual violence to take place is an important foundation for preparing them to be active bystanders.

Making a difference

Sex offender ‘monsters’ are as prevalent in today’s media as they were on the ancient stage. Rachel Krys, co-director of the End Violence Against Women coalition describes these stereotypes as unhelpful, allowing unacceptable behaviour short of sexual assault to be disassociated from perpetration. According to the coalition, most perpetrators “look normal, can be quite charming, and are often part of your group”. When we move away from the idea that perpetrators have to be monsters, we can begin to own and change unacceptable behaviours in our friends, our group and even ourselves.

It’s clear these are complex issues, and we know it’s not easy standing up to your friends, or going against the crowd.  Intervening may be awkward, and it may feel uncomfortable.  But it can make a real difference, not just for potential victims but also for potential perpetrators.

A recent study of London commuters shows that only 11% of women who were sexually harassed or assaulted on the Underground were helped by a bystander.

The report describes victims’ devastation at finding that, even when surrounded by people, they were unsafe. Bystanders who witnessed their abuse and did nothing left victims with the lifelong impression that no one cared.

There are also so many different ways to intervene, and it is not just about confronting people or taking a stand in a crowd. The workshops help students practise intervention skills in realistic scenarios that could come up in their day-to-day university life, and explore the different options that may be available to them.

It has been encouraging to see how the students participating have already started to gain not only confidence, but also awareness of how prevalent some of these situations are, and how what might seem like a very small action can make such a big difference.

Are we taking at least a small step to changing the culture at the University of Cambridge?  I certainly hope so.

Tori McKee is Tutorial Department Manager at Jesus College. Join this week's Breaking the Silence campaign to increase bystander interventions to stop sexual harassment, as part of National Sexual Abuse and Sexual Violence Awareness Week 2018.

 Tori McKee, a PhD scholar in Classical Studies, looks at ancient and modern ways of being a man

When we move away from the idea that perpetrators have to be monsters, we can begin to own and change unacceptable behaviours in our friends, our group and even ourselves



Artificial intelligence is growing up fast: what’s next for thinking machines?

By cjb250 from University of Cambridge - School of Arts and Humanities. Published on Feb 06, 2018.

We are well on the way to a world in which many aspects of our daily lives will depend on AI systems.

Within a decade, machines might diagnose patients with the learned expertise of not just one doctor but thousands. They might make judiciary recommendations based on vast datasets of legal decisions and complex regulations. And they will almost certainly know exactly what’s around the corner in autonomous vehicles.

“Machine capabilities are growing,” says Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI). “Machines will perform the tasks that we don’t want to: the mundane jobs, the dangerous jobs. And they’ll do the tasks we aren’t capable of – those involving too much data for a human to process, or where the machine is simply faster, better, cheaper.”

Dr Mateja Jamnik, AI expert at the Department of Computer Science and Technology, agrees: “Everything is going in the direction of augmenting human performance – helping humans, cooperating with humans, enabling humans to concentrate on the areas where humans are intrinsically better such as strategy, creativity and empathy.” 

Part of the attraction of AI is that future technologies will perform tasks autonomously, without humans needing to monitor activities every step of the way. In other words, machines of the future will need to think for themselves. But, although computers today outperform humans on many tasks, including learning from data and making decisions, they can still trip up on things that are really quite trivial for us.

Take, for instance, working out the formula for the area of a parallelogram. Humans might use a diagram to visualise how cutting off the corners and reassembling it as a rectangle simplifies the problem. Machines, however, may “use calculus or integrate a function. This works, but it’s like using a sledgehammer to crack a nut,” says Jamnik, who was recently appointed Specialist Adviser to the House of Lords Select Committee on AI.
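Jamnik’s contrast between the visual shortcut and the mathematical sledgehammer can be made concrete. Below is a minimal, hypothetical Python sketch (not from the article); the function names and step count are illustrative assumptions:

```python
# A hypothetical sketch contrasting the two routes Jamnik describes:
# the direct formula a human might reach via a diagram, and a
# brute-force numerical integration -- the "sledgehammer".

def area_by_formula(base, height):
    # The "human" route: shear the parallelogram into a rectangle,
    # so the area is simply base * height.
    return base * height

def area_by_integration(base, height, steps=100_000):
    # The "sledgehammer" route: sum thin horizontal strips. Each strip
    # at height y has width `base`, so summing width * dy over
    # [0, height] recovers base * height numerically.
    dy = height / steps
    return sum(base * dy for _ in range(steps))

print(area_by_formula(6, 4))                # 24
print(round(area_by_integration(6, 4), 6))  # 24.0
```

Both routes reach the same answer; the point is the disproportion of effort, which is why Jamnik wants machines that can choose the lighter, more human-like approach when one exists.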

“When I was a child, I was fascinated by the beauty and elegance of mathematical solutions. I wondered how people came up with such intuitive answers. Today, I work with neuroscientists and experimental psychologists to investigate this human ability to reason and think flexibly, and to make computers do the same.”

Jamnik believes that AI systems that can choose so-called heuristic approaches – employing practical, often visual, approaches to problem solving – in a similar way to humans will be an essential component of human-like computers. They will be needed, for instance, so that machines can explain their workings to humans – an important part of the transparency of decision-making that we will require of AI.

With funding from the Engineering and Physical Sciences Research Council and the Leverhulme Trust, she is building systems that have begun to reason like humans through diagrams. Her aim now is to enable them to move flexibly between different “modalities of reasoning”, just as humans have the agility to switch between methods when problem solving. 

 Being able to model one aspect of human intelligence in computers raises the question of what other aspects would be useful. And in fact how ‘human-like’ would we want AI systems to be? This is what interests Professor José Hernandez-Orallo, from the Universitat Politècnica de València in Spain and Visiting Fellow at the CFI.

“We typically put humans as the ultimate goal of AI because we have an anthropocentric view of intelligence that places humans at the pinnacle of a monolith,” says Hernandez-Orallo. “But human intelligence is just one of many kinds. Certain human skills, such as reasoning, will be important in future systems. But perhaps we want to build systems that ‘fill the gaps that humans cannot reach’, whether it’s AI that thinks in non-human ways or AI that doesn’t think at all.

“I believe that future machines can be more powerful than humans not just because they are faster but because they can have cognitive functionalities that are inherently not human.” This raises a difficulty, says Hernandez-Orallo: “How do we measure the intelligence of the systems that we build? Any definition of intelligence needs to be linked to a way of measuring it, otherwise it’s like trying to define electricity without a way of showing it.”

The intelligence tests we use today – such as psychometric tests or animal cognition tests – are not suitable for measuring intelligence of a new kind, he explains. Perhaps the most famous test for AI is the one devised in the 1950s by Cambridge computer scientist Alan Turing. To pass the Turing Test, a computer must fool a human into believing it is human. “Turing never meant it as a test of the sort of AI that is becoming possible – apart from anything else, it’s all or nothing and cannot be used to rank AI,” says Hernandez-Orallo.

In his recently published book The Measure of All Minds, he argues for the development of “universal tests of intelligence” – those that measure the same skill or capability independently of the subject, whether it’s a robot, a human or an octopus.

His work at the CFI as part of the ‘Kinds of Intelligence’ project, led by Dr Marta Halina, is asking not only what these tests might look like but also how their measurement can be built into the development of AI. Hernandez-Orallo sees a very practical application of such tests: the future job market. “I can imagine a time when universal tests would provide a measure of what’s needed to accomplish a job, whether it’s by a human or a machine.”

Cave is also interested in the impact of AI on future jobs, discussing this in a report on the ethics and governance of AI recently submitted to the House of Lords Select Committee on AI on behalf of researchers at Cambridge, Oxford, Imperial College and the University of California at Berkeley. “AI systems currently remain narrow in their range of abilities by comparison with a human. But the breadth of their capacities is increasing rapidly in ways that will pose new ethical and governance challenges – as well as create new opportunities,” says Cave. “Many of these risks and benefits will be related to the impact these new capacities will have on the economy, and the labour market in particular.”

Hernandez-Orallo adds: “Much has been written about the jobs that will be at risk in the future. This happens every time there is a major shift in the economy. But just as some machines will do tasks that humans currently carry out, other machines will help humans do what they currently cannot – providing enhanced cognitive assistance or replacing lost functions such as memory, hearing or sight.”

Jamnik also sees opportunities in the age of intelligent machines: “As with any revolution, there is change. Yes, some jobs will become obsolete. But history tells us that new jobs will appear. These will capitalise on inherently human qualities. Others will be jobs that we can’t even conceive of – memory augmentation practitioners, data creators, data bias correctors, and so on. That’s one reason I think this is perhaps the most exciting time in the history of humanity.”


Our lives are already enhanced by AI – or at least an AI in its infancy – with technologies using algorithms that help them to learn from our behaviour. As AI grows up and starts to think, not just to learn, we ask how human-like do we want their intelligence to be and what impact will machines have on our jobs? 




How Japan’s ‘salaryman’ is becoming cool

By sjr81 from University of Cambridge - School of Arts and Humanities. Published on Feb 02, 2018.

Read more about the new research into 'Cool Japanese men'.

Japanese men are becoming cool. The suit-and-tie salaryman remodels himself with beauty treatments and 'cool biz' fashion. Loyal company soldiers are reborn as cool, attentive fathers. Hip-hop dance is as manly as martial arts. Could it even be cool for middle-aged men to idolise teenage girl popstars? 



Report highlights opportunities and risks associated with synthetic biology and bioengineering

By cjb250 from University of Cambridge - School of Arts and Humanities. Published on Nov 21, 2017.

Rapid developments in the field of synthetic biology and its associated tools and methods, including more widely available gene editing techniques, have substantially increased our capabilities for bioengineering – the application of principles and techniques from engineering to biological systems, often with the goal of addressing 'real-world' problems.

In a feature article published in the open access journal eLife, an international team of experts led by Dr Bonnie Wintle and Dr Christian R. Boehm from the Centre for the Study of Existential Risk at the University of Cambridge, capture perspectives of industry, innovators, scholars, and the security community in the UK and US on what they view as the major emerging issues in the field.

Dr Wintle says: “The growth of the bio-based economy offers the promise of addressing global environmental and societal challenges, but as our paper shows, it can also present new kinds of challenges and risks. The sector needs to proceed with caution to ensure we can reap the benefits safely and securely.”

The report is intended as a summary and launching point for policy makers across a range of sectors to further explore those issues that may be relevant to them.

Among the issues highlighted by the report as being most relevant over the next five years are:

Artificial photosynthesis and carbon capture for producing biofuels

If technical hurdles can be overcome, such developments might contribute to the future adoption of carbon capture systems, and provide sustainable sources of commodity chemicals and fuel.  

Enhanced photosynthesis for agricultural productivity

Synthetic biology may hold the key to increasing yields on currently farmed land – and hence helping address food security – by enhancing photosynthesis and reducing pre-harvest losses, as well as reducing post-harvest and post-consumer waste.

Synthetic gene drives

Gene drives promote the inheritance of preferred genetic traits throughout a species, for example to prevent malaria-transmitting mosquitoes from breeding. However, this technology raises questions about whether it may alter ecosystems, potentially even creating niches where a new disease-carrying species or new disease organism may take hold.

Human genome editing

Genome engineering technologies such as CRISPR/Cas9 offer the possibility of improving human lifespans and health. However, their implementation poses major ethical dilemmas. It is feasible that individuals or states with the financial and technological means may elect to provide strategic advantages to future generations.

Defence agency research in biological engineering

The areas of synthetic biology in which some defence agencies invest raise the risk of ‘dual-use’. For example, one programme intends to use insects to disseminate engineered plant viruses that confer traits to the target plants they feed on, with the aim of protecting crops from potential plant pathogens – but such technologies could plausibly also be used by others to harm targets.

In the next five to ten years, the authors identified areas of interest including:

Regenerative medicine: 3D printing body parts and tissue engineering

While this technology will undoubtedly ease suffering caused by traumatic injuries and a myriad of illnesses, reversing the decay associated with age is still fraught with ethical, social and economic concerns. Healthcare systems would rapidly become overburdened by the cost of replenishing body parts of citizens as they age, and new socioeconomic classes could emerge, as only those who can pay for such care themselves can extend their healthy years.

Microbiome-based therapies

The human microbiome is implicated in a large number of human disorders, from Parkinson’s to colon cancer, as well as metabolic conditions such as obesity and type 2 diabetes. Synthetic biology approaches could greatly accelerate the development of more effective microbiota-based therapeutics. However, there is a risk that DNA from genetically engineered microbes may spread to other microbiota in the human microbiome or into the wider environment.

Intersection of information security and bio-automation

Advancements in automation technology combined with faster and more reliable engineering techniques have resulted in the emergence of robotic 'cloud labs' where digital information is transformed into DNA and then expressed in target organisms. This opens the possibility of new kinds of information security threats, which could include tampering with digital DNA sequences leading to the production of harmful organisms, and sabotaging vaccine and drug production through attacks on critical DNA sequence databases or equipment.

Over the longer term, issues identified include:

New makers disrupt pharmaceutical markets

Community bio-labs and entrepreneurial startups are customizing and sharing methods and tools for biological experiments and engineering. Combined with open business models and open source technologies, this could herald opportunities for manufacturing therapies tailored to regional diseases that multinational pharmaceutical companies might not find profitable. But this raises concerns around the potential disruption of existing manufacturing markets and raw material supply chains as well as fears about inadequate regulation, less rigorous product quality control and misuse.

Platform technologies to address emerging disease pandemics

Emerging infectious diseases—such as recent Ebola and Zika virus disease outbreaks—and potential biological weapons attacks require scalable, flexible diagnosis and treatment. New technologies could enable the rapid identification and development of vaccine candidates, and plant-based antibody production systems.

Shifting ownership models in biotechnology

The rise of off-patent, generic tools and the lowering of technical barriers for engineering biology have the potential to help those in low-resource settings benefit from developing a sustainable bioeconomy based on local needs and priorities, particularly where new advances are made open for others to build on.

Dr Jenny Molloy comments: “One theme that emerged repeatedly was that of inequality of access to the technology and its benefits. The rise of open source, off-patent tools could enable widespread sharing of knowledge within the biological engineering field and increase access to benefits for those in developing countries.”

Professor Johnathan Napier from Rothamsted Research adds: “The challenges embodied in the Sustainable Development Goals will require all manner of ideas and innovations to deliver significant outcomes. In agriculture, we are on the cusp of new paradigms for how and what we grow, and where. Demonstrating the fairness and usefulness of such approaches is crucial to ensure public acceptance and also to delivering impact in a meaningful way.”

Dr Christian R. Boehm concludes: “As these technologies emerge and develop, we must ensure public trust and acceptance. People may be willing to accept some of the benefits, such as the shift in ownership away from big business and towards more open science, and the ability to address problems that disproportionately affect the developing world, such as food security and disease. But proceeding without the appropriate safety precautions and societal consensus—whatever the public health benefits—could damage the field for many years to come.”

The research was made possible by the Centre for the Study of Existential Risk, the Synthetic Biology Strategic Research Initiative (both at the University of Cambridge), and the Future of Humanity Institute (University of Oxford). It was based on a workshop co-funded by the Templeton World Charity Foundation and the European Research Council under the European Union’s Horizon 2020 research and innovation programme. 

Wintle, BC, Boehm, CR et al. A transatlantic perspective on 20 emerging issues in biological engineering. eLife; 14 Nov 2017; DOI: 10.7554/eLife.30247

Human genome editing, 3D-printed replacement organs and artificial photosynthesis – the field of bioengineering offers great promise for tackling the major challenges that face our society. But as a new article out today highlights, these developments provide both opportunities and risks in the short and long term.


Opinion: Charles Manson: death of America's 1960s bogeyman

By Anonymous from University of Cambridge - School of Arts and Humanities. Published on Nov 21, 2017.

So, Charles Manson has died, aged 83, of “natural causes”. The con-man, musician and erstwhile cult leader, who came to embody mainstream American fears of the 1960s counterculture “gone wrong”, had an easier death at Kern County hospital in California than any of the seven people whose murders he orchestrated in August 1969.

Manson has been largely out of the public view since his conviction for the Tate-LaBianca killings in January 1971 alongside several members of his “family” – but there has been little diminution of his grisly fame. Earlier this year it was announced that Quentin Tarantino is making a film about the Manson murders. Big names such as Margot Robbie and Brad Pitt are among those said to be lining up for parts.

There remains something strange about the attention that Manson generated in life and now in death. It’s a level of interest which far exceeds matters of public record. As such, it’s difficult to know what to say by way of response to the news of his passing – or even what to say for the purposes of a tentative obituary. Difficult, because it’s hard to know precisely who (or what) the name Charles Manson is being used to describe.

Manson, born Charles Milles Maddox in 1934, spent most of his life behind bars. Before convening the cult-like group “The Family” in 1967, he had convictions for car theft and robbery. But it was towards the end of 1969 that he really came to public attention. He was arrested and put on trial for his role as the mastermind of a total of nine murders, including those of the actress Sharon Tate and four friends, and Leno and Rosemary LaBianca, over the course of the weekend of August 8-9 1969. In March 1971 he was given the death penalty, which was later commuted to life imprisonment.

According to testimony at his trial and that of his followers, Manson was not actually directly responsible for wielding a murder weapon at either of the two crime scenes (although there’s plenty to suggest he was involved in other murders at around the same time). But the court found he had masterminded and ordered the Tate-LaBianca killings, made all the more horrific by the fact that actress Tate had been pregnant at the time of her murder.

Different kind of celebrity

Whether you like it or not, from his conviction to his death, Manson was a celebrity. He became a celebrity when he made the cover of Life magazine in December 1969 and Rolling Stone in June 1970 – and subsequent novels, films, recordings, interviews, t-shirts and comic books have sought alternately to shore up this status and to demythologise it.

This Manson culture industry (which shows no signs of slowing down) has kept his name in public circulation for nearly half a century. It’s this material which invariably forms the basis of the analysis whenever Manson’s life and “career” is considered. What becomes visible is something of a schizoid split in which the name Charles Manson gains two points of reference.

There’s “Charles Manson” which effectively describes the life of Charles Milles Maddox, criminal – and then there’s “Charles Manson”, the potent symbol of evil, the name which in the words of one of his recent biographers has become a “metaphor for unspeakable horror”.

An early example of the latter came from the writer Wayne McGuire in 1970. Writing in his column for Fusion magazine, “An Aquarian Journal”, he speculated that at “some point in the future”, Manson would “metamorphose into a major American folk hero”. The comment was later used as the epigraph for The Manson File, a collection of Manson-related writings first published by Amok Press in 1988. The prediction was fully realised in 1997 with the inclusion of Manson in James Parks’ collection, Cultural Icons. Here, nestling between Lata Mangeshkar, Mao Tse Tung and Robert Mapplethorpe, Manson was identified as an “American Murderer” who “channelled his peculiar cocktail of black magic, drugs, sex and rock n roll into homicidal mania”. It is this “peculiar cocktail” that underpins Manson’s symbolic status.

What gives his crimes – and his name – a notoriety in excess of that held by the likes of the Boston Strangler, Albert DeSalvo and the unknown “Zodiac Killer”, who terrorised Northern California in 1968 and 1969, is that they simultaneously interact with a matrix of other powerful symbols that carry a greater cultural resonance than the breaking of a law, however severe.

Hollywood meets the crazies

Tate’s murder brought into collision two heavily mediated zones: Hollywood and the counterculture. Manson’s interest in The Beatles and use of their song title “Helter Skelter” as a blood-drenched slogan further intensified this disturbing elision of murder and popular culture. As with The Rolling Stones’ concert at the Altamont Speedway in December 1969 – at which a member of the audience was murdered by a Hells Angel – the Manson murders, once filtered through media sensitive to their range of connections, become emblems for the “end” or even “death” of the 1960s.

Whether viewed as catalyst or symptom, they are events that stand in for explanations of economic shift, geopolitical crisis and social inequality which describe the decade’s apparent decline into death, violence and what Hunter S. Thompson called “bad craziness”.

If anything, it is the Tate-LaBianca murders that carry the metaphorical currency, while the name “Manson” now probably signifies something else. It’s a name to conjure with. “Manson” brings to mind the shadow-side of the 1960s: the incipient violence that lay beneath the counterculture’s day-glo optimism and the lost potential of a decade’s calls for peace and pacifism. When viewed from the vantage point of seeing the long, strange and violent life laid out, it refers also to someone who understood and was able to exploit the potency of the popular culture around him.

There’s very little to celebrate here, but maybe there’s something to learn about what it means to be a celebrity.

James Riley, Fellow and College Lecturer in English, Girton College, Cambridge, University of Cambridge

This article was originally published on The Conversation. Read the original article.

Charles Manson, one of America's most notorious criminals and cult leaders, has died. In an article written for The Conversation, James Riley from Cambridge's Faculty of English discusses Manson and the nature of celebrity. 
