
Speaker Spotlight: George Zarkadakis

George Zarkadakis is a systems engineer with a PhD in Artificial Intelligence, as well as a novelist, playwright, essayist, science communicator and non-fiction author. His most recent book is "In Our Own Image: will AI save us or destroy us?" He will be speaking in the "Can politics keep up with technology?" panel discussion on 21st October with social anthropologist Beth Singler, law lecturer Nora Ni Loideain and Will Moy from the fact-checking charity Full Fact.

 

Do you think policymakers have been able to keep up with the social issues thrown up by technology?

George Zarkadakis: Policy-making and regulation are generally long and complex processes that must arrive at some kind of consensus at the end. Neither of those necessary characteristics (duration, consensus) suits technologies that advance at a rapid pace, target our personal lives and psychological states, and tend to create "digital tribes" in cyberspace with little or no communication between them.

Are policymakers in any way prepared for the social issues that AI will throw up or are they too busy focusing on current instability?

GZ: In most Western countries, policymakers tend to come from academic backgrounds that have little to do with the physical sciences or engineering. As a result, we have a "two cultures" problem: technologists and policy-makers speak different languages and have different value systems. For example, experimentation, dissent and risk-taking are what make science and technology successful, but this value system clashes with the risk-averse, stability- and compromise-seeking culture of policy-making.

Are the media too negative about robots and AI?

GZ: Having worked in the media for years, I can attest that no one has ever attracted attention by saying, or writing, that all is well. The media are society's sentinels, our early warning systems for impending doom, and this is why they mostly cover the negative aspects of robots and AI. Having said that, the media play an important role in highlighting the risks of AI and robots.

How biased are algorithms?

GZ: All human constructs are "biased", in the sense that they are created by beings with belief systems. The big challenge with machine learning algorithms is that they "learn" by processing and correlating vast data sets, which are structured or otherwise generated by humans. By selecting those data sets a priori as "significant" for training the algorithms, we effectively insert our view of the world into the algorithms, with all its ambiguities, biases and dilemmas.
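To make this point concrete, here is a minimal illustrative sketch (the records, groups and "approval" task are invented for illustration, not taken from the interview): a toy model that learns approval rates purely by counting outcomes in a hand-picked historical dataset ends up reproducing whatever skew that selection already contained.

```python
# Toy sketch: a "model" that learns approval rates by counting outcomes
# in a hand-picked historical dataset. Because the data were selected and
# labelled by humans, the model simply reproduces the skew in that selection.
from collections import Counter, defaultdict

# Hypothetical historical records: (group, approved) pairs chosen a priori
# as the "significant" training set -- the selection itself carries the bias.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

# "Training": estimate an approval probability per group from the counts.
counts = defaultdict(Counter)
for group, approved in history:
    counts[group][approved] += 1

def approval_rate(group):
    c = counts[group]
    return c[True] / (c[True] + c[False])

# The learned rates mirror the skew in the chosen data, not any ground truth.
for g in ("A", "B"):
    print(f"group {g}: learned approval rate = {approval_rate(g):.2f}")
# group A: learned approval rate = 0.75
# group B: learned approval rate = 0.25
```

Nothing in the counting step is unfair in itself; the bias enters entirely through the a priori choice of which records count as "significant" training data.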

Is the gig economy the future for all jobs?

GZ: Labour market data in the US and UK indicate that in countries with weak labour laws, the number of "independent" workers is growing. However, this label covers a wide variety of workers: those who work part-time by choice and those who have no alternative, those who earn more money independently than they otherwise would, those who care for young children or the elderly, and so on. In the next 10 years we will probably see a mix of independent and dependent work, with high-quality, well-paid, full-time jobs becoming increasingly scarce.

What kinds of new jobs will be created by AI?

GZ: AI replaces repetitive tasks. Most of today's jobs consist largely of repetitive tasks, which is why a great number of today's entry- and mid-level jobs are very likely to disappear or change radically. However, AI cannot (yet) automate tasks that are "one-off" or "unique", which suggests that new jobs requiring exceptional creativity and social skills will remain with humans.

How can governments protect the most vulnerable workers in the future?

GZ: We may need to rethink what “work” means in a future of near full automation. The idea of a “worker” and how we price “work” in the creation of private goods are concepts that come to us from the time of the first industrial revolution and may not be fit for the 21st century.

How might education change in the future?

GZ: We will need to go back to basics. The role of early, primary and secondary education will be to develop human bonds of collaboration and a sense of being a citizen, rather than to prepare workers for the labour market. What we learn in those years will target synthetic reasoning and creativity, rather than analytical reasoning and repetition. Universities may become more research-oriented, perhaps acting like start-up accelerators for the sciences, and less teaching-oriented.