Looking for algorithm experts to speak with for your story? Look no further. While the following list is far from exhaustive, we hope it will provide a starting point for those beginning to report on this increasingly important topic.
From his website: Dr. Robert Smith is a technologist, complexity scientist, entrepreneur, writer and sought-after public speaker. He is an artificial intelligence (A.I.) expert and has worked with clients, companies and institutions across the private and public sectors. Having grown up in Alabama at the height of the Civil Rights movement, he has a deeply informed perspective on bias in algorithmic systems and is dedicated to the pursuit of human-readable and responsible A.I. He is the author of Rage Inside the Machine: The Prejudice of Algorithms and How to Stop the Internet Making Bigots of Us All.
From her website: Dr. Rumman Chowdhury's passion lies at the intersection of artificial intelligence and humanity. She is a pioneer in the field of applied algorithmic ethics, working with C-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI since 2017. She is currently the CEO and founder of Parity, an enterprise algorithmic audit platform company. She formerly served as Global Lead for Responsible AI at Accenture Applied Intelligence. ... She is a multiple TedX speaker, a Forbes Tech contributing author and has been named by InformationWeek as one of 10 influential AI and machine learning experts to follow on Twitter. She was also named one of BBC's 100 Women, recognized as one of the Bay Area's top 40 under 40, and honored to be inducted into the British Royal Society of the Arts (RSA). She has also been named by Forbes as one of Five Who are Shaping AI.
Co-founder of We and AI, a UK-based nonprofit working to increase the awareness and understanding of Artificial Intelligence (AI) amongst the general UK population. Founding Editorial Board Member at AI & Ethics Journal, which seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI.
From policylab.stanford.edu: Alex Chohlas-Wood is the executive director of the Stanford Computational Policy Lab. Alex has led the development of data-driven tools in both the private and public sector, including as the Director of Analytics for the New York Police Department (NYPD). Alex holds an M.S. from New York University and a B.A. from Carleton College. He contributed the article Understanding risk assessment instruments in criminal justice to a 2020 Brookings Institution series called AI & Bias. His work focuses on using technology and data science to support criminal justice reform.
Ziad Obermeyer is an Associate Professor of Health Policy and Management at UC Berkeley. He is a physician and researcher who works at the intersection of machine learning and health. Co-author of the 2019 Science journal article Dissecting racial bias in an algorithm used to manage the health of populations.
From his website: Sendhil Mullainathan is the Roman Family University Professor of Computation and Behavioral Science at Chicago Booth. His current research uses machine learning to understand complex problems in human behavior, social policy, and especially medicine, where computational techniques have the potential to uncover biomedical insights from large-scale health data. In addition to being a co-PI at the joint Berkeley-UChicago Laboratory for Systems Medicine, Sendhil is the cofounder of the computational medicine initiative, Nightingale.
Co-author of the 2019 Science journal article Dissecting racial bias in an algorithm used to manage the health of populations.
From de.ed.ac.uk: I am a Chancellor's Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute, examining the intersections of digital technologies, science, and data with education policy and governance. My current research focuses on two key themes. One is the expansion of educational data infrastructures to enable information to be collected from schools and universities, then analysed and circulated to various audiences. The second is the emergence of intimate data relating to students' psychological states, neural activity, and genetic profiles, and the implications for increasingly scientific ways of approaching educational policy and practice.
He's the author of Big Data in Education: The digital future of learning, policy, and practice, and blogs at Code Acts in Education.
Knight Professor of Constitutional Law and the First Amendment, Yale Law School and founder of the Information Society Project at Yale, an international community working to illuminate the complex relationships between law, technology, and society. Faculty director at the Abrams Institute for Freedom of Expression and the Knight Law and Media Program at Yale. In a 2018 paper, he argued: Neither judge-made doctrines of First Amendment law nor private companies will prove reliable stewards of the values of free expression in the twenty-first century. This means that we must rethink the First Amendment's role in the digital era. ... [T]he state, while always remaining a threat to free expression, also needs to serve as a necessary counterweight to developing technologies of private control and surveillance.
UCLA law professor with a computer programming background, and prominent blogger at The Volokh Conspiracy on legal issues, including the First Amendment. In 2012, he co-wrote a white paper (funded by Google) arguing that the search algorithms of Google and similar companies are themselves a form of free, and therefore protected, speech.
The following experts were panelists at a 2020 roundtable event on the impact of artificial intelligence on freedom of expression, hosted by the Organization for Security and Co-Operation in Europe. Additional information about the roundtable, and the work of its participants, can be found here.
From osce.org: Carly Kind is a human rights lawyer holding the position of Director of the Ada Lovelace Institute, a research institute and deliberative body with a remit to ensure data and artificial intelligence (AI) work for people and society. Carly has worked on promoting data rights and governance for many years, and on advocating against corporate and State surveillance. She is one of the experts in the RFoM project on the impact of AI on freedom of expression. In this video, she explains how the use of AI and data collection, surveillance capitalism and State surveillance are closely intertwined, and can impact access to information and freedom of speech.
From osce.org: Ingrid Brodnig is a digital expert and author of several books focusing on public debate online and how digital platforms are shaping it; misinformation and manipulation online; and online abuse and how to fight it. Ingrid is Austria's Digital Champion for the European Commission and is one of the experts in the RFoM project on the impact of artificial intelligence (AI) on freedom of expression. In this video, she explains how automated bots and the use of algorithms and AI can influence public discourse and pluralism online, and emphasizes the need for more transparency.
From osce.org: Eliška Pírková, Europe Policy Analyst at Access Now, is a human rights and policy expert. In her work, she focuses on online free speech and content regulation, as well as intermediary liability and the impact of artificial intelligence (AI) on fundamental rights. During the COVID-19 pandemic, she has worked on recommendations on how to defend free expression while fighting misinformation. Ms. Pírková is one of the experts of the RFoM project on the impact of AI on freedom of expression. In this video, she explains how the impact of AI on free speech becomes even more visible during the COVID-19 pandemic.
From osce.org: Krisztina Rozgonyi, Assistant Professor at the University of Vienna, is a legal and regulatory expert on media governance. She focuses on policy development, the representation of public interest, and democratic values in the media. Ms. Rozgonyi chairs the expert group focusing on media pluralism of the RFoM project on the impact of artificial intelligence (AI) on freedom of expression. In this video, she explains the main impact that the use of AI and automation have on access to information and media diversity online.
From osce.org: Lorena Jaume-Palasí (founder of The Ethical Tech Society, and co-founder of AlgorithmWatch and the Internet Governance Forum Academy) is a renowned expert on internet governance. She focuses on the social relevance and human rights impact of automation and digitization. Ms. Jaume-Palasí chairs the expert group focusing on hate speech of the RFoM project on the impact of artificial intelligence (AI) on freedom of expression. In this video, she introduces one of the main challenges to freedom of expression in the use of AI to identify and remove hate speech online.
From osce.org: Dr. Djordje Krivokapić LL.M. is a lawyer, co-founder of the SHARE Foundation and Professor at the University of Belgrade, where he focuses on the intersection of emerging technologies and society, particularly free speech, privacy, security and open access. In this video, he introduces the main challenges to freedom of expression of using artificial intelligence (AI) for removing terrorist and extremist content online. Dr. Krivokapić is the main editor of the Representative on Freedom of the Media non-paper on the impact of AI on freedom of expression.
Freddy Martinez, Policy Analyst, Open The Government
Joy Buolamwini, Founder, Algorithmic Justice League
Michele Gilman, University of Baltimore professor, faculty fellow at Data & Society
Khari Johnson, AI staff writer, VentureBeat
Stephen Downes, online learning technology researcher, National Research Council of Canada
Bart Knijnenburg, assistant professor in human-centered computing, Clemson University
Justin Reich, executive director, MIT Teaching Systems Lab
Jerry Spinrad, associate professor of computer science, Vanderbilt University
Next: Coverage and Research