The Centre will have several launch projects, each focusing on issues of immediate concern and each cutting across several disciplines.

AI and Work

Prof Kevin Fox (Economics); Prof Steve Frenkel (Sociology/Organisational Psychology); Dr Juan Carlos Carbajal (Economics); Dr Sunghoon Kim (Employment Relations/Human Resource Management); and Assoc Prof Amirali Minbashian (Organisational Psychology).

In what sectors of the economy are AI and Robotics being introduced in Australia? How can this selectivity be explained? To what extent is this welcomed or opposed by various groups in Australia, and is this related to particular (mis)conceptions of these new technologies? How can we characterise and model the adoption, use and effectiveness of AI and Robotics? What sources and mechanisms explain the speed of adoption and the effectiveness of use? What are the short-term and likely longer-term consequences of the diffusion of AI and Robotics in Australia? How does the Australian experience compare with that of other advanced countries? How can similarities and differences be explained, and are there lessons to be learnt from these comparisons?

The project will consider these issues at the macro (economy and society), meso (industry, market segment or sector) and micro (firm, group and individual) levels.
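To illustrate the kind of modelling the adoption question invites, a minimal sketch is given below using the Bass diffusion model, one standard framework for the spread of a new technology through a population. This is only an illustrative example, not the project's actual method; the parameter values (innovation coefficient, imitation coefficient, market size) are hypothetical.

```python
def bass_adoption(p, q, market_size, periods):
    """Discrete-time Bass diffusion model of technology adoption.

    p: coefficient of innovation (adoption from external influence,
       e.g. advertising or vendor push)
    q: coefficient of imitation (adoption from internal influence,
       e.g. word of mouth among firms)
    market_size: total number of potential adopters
    periods: number of time steps to simulate
    Returns (new adopters per period, cumulative adopters).
    """
    cumulative = 0.0
    new_per_period = []
    for _ in range(periods):
        # Adoption hazard rises as the installed base grows (imitation effect).
        hazard = p + q * (cumulative / market_size)
        adopters = hazard * (market_size - cumulative)
        new_per_period.append(adopters)
        cumulative += adopters
    return new_per_period, cumulative

# Hypothetical parameters: 1000 potential adopting firms over 20 periods.
new, total = bass_adoption(p=0.03, q=0.38, market_size=1000, periods=20)
peak_period = max(range(len(new)), key=new.__getitem__)
```

Fitting `p` and `q` to sector-level adoption data would be one way to quantify the "speed of adoption" question, with cross-country comparisons of the fitted coefficients.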

AI and War

Prof Toby Walsh (CSE); Dr Jai Galliott (ADFA).

What are the moral arguments for and against autonomous weapons? Does international law need to be adjusted to deal with autonomous weapons, and if so, how should it be changed? How do we define concepts at the centre of the UN discussions, such as "meaningful human control"? What is a useful definition of the different "levels of autonomy" that a weapon system can have? Should we differentiate between offensive and defensive weapons? How can we technically support the discussions at the UN and elsewhere surrounding autonomous weapons?

More generally, how do we ensure robots behave ethically? What ethical laws should such robots follow? Can a robot learn our ethical values? How do we ensure that they cannot be hacked to behave in undesirable ways? How do we promote less controversial military applications of AI and Robotics, such as mine clearing? And can we develop a professional code of ethics for researchers working in AI and Robotics?


AI and Education
Kalervo N. Gulson (Environment & Society Network, Arts & Social Sciences), Matthew Kearnes (Environment & Society Network, Arts & Social Sciences), Edward Scheer (Arts & Media, Arts & Social Sciences), Iain Skinner (Electrical Engineering & Telecommunications, Engineering), Andrew Murphie (Arts & Media, Arts & Social Sciences)

Key questions to be asked include: (1) What new ways of learning arise from machine learning and machine pedagogies? How do we educate an AI system, and how is its learning different? (2) What possibilities and challenges are posed for education and education policy by implementing artificial intelligence in instructional and assessment settings? How will robotics (as ‘actors’) and AI (as ‘authors’), and their connection to big data, change what we understand as representation in the creative arts? (3) What are the ethical, economic and biosocial considerations of implementing artificial intelligence in educational organisations? (4) How does machine learning use ideas from social policy, including policy and value networks, and how can policy analysts use these same ideas?