
Women in AI: Urvashi Aneja looks at the social impact of AI in India

In an effort to give female academics and others focused on AI some well-deserved and long-overdue time in the spotlight, TechCrunch has launched a series of interviews focusing on the remarkable women contributing to the AI revolution. As the AI boom continues, we will publish multiple articles throughout the year highlighting critical work that often goes overlooked. Read more profiles here.

Urvashi Aneja is the founding director of the Digital Futures Lab, an interdisciplinary research effort examining the interactions between technology and society in the Global South. She is also an associate fellow in the Asia-Pacific Program at Chatham House, an independent policy institute in London.

Aneja’s current research focuses on the social impact of algorithmic decision-making systems and platform governance in India. Aneja recently authored a study on the current use of AI in India, reviewing use cases across various sectors including policing and agriculture.

Q&A

In a nutshell, how did you get started in the field of artificial intelligence? What drew you to this field?

I started my career in research and policy engagement in the humanitarian field. For many years, I studied the use of digital technologies in protracted crises in low-resource settings. I quickly realized that there is a fine line between innovation and experimentation, especially when working with vulnerable populations. The lessons from that experience left me deeply concerned about the techno-solutionist narratives surrounding the potential of digital technologies, particularly artificial intelligence. At the same time, India had launched its “Digital India” mission and “National Strategy for Artificial Intelligence.” I was troubled by the mainstream view of AI as a panacea for India’s complex socio-economic problems, and by the near-total absence of critical discussion around it.

What work (in artificial intelligence) are you most proud of?

I am proud that we have been able to draw attention to the political economy of AI production and its wider implications for social justice, labor relations, and environmental sustainability. Narratives about AI often focus on the gains from a specific application and, at best, weigh the benefits and risks of that application. But this misses the forest for the trees: a product-oriented perspective obscures broader structural impacts, such as AI’s contribution to epistemic injustice, the deskilling of labor, and the entrenchment of unaccountable power in the majority world. I’m also proud that we’ve been able to translate these concerns into concrete policy and regulation, whether by designing procurement guidance for the use of AI in the public sector or providing evidence in legal proceedings against big tech companies in the Global South.

How do you deal with the challenges of the male-dominated tech industry and the male-dominated artificial intelligence industry?

Let my work speak for itself. And keep asking: Why?

What advice would you give to women seeking to enter the field of artificial intelligence?

Develop your knowledge and expertise. Make sure your technical understanding of the problem is correct, but don’t just focus on AI. Instead, study broadly so that you can make connections across fields and disciplines. Not enough people understand AI as a sociotechnical system, a product of history and culture.

What are the most pressing issues facing artificial intelligence in its development?

I think the most pressing issue is the concentration of power within a handful of tech companies. While this problem is not new, new developments in large language models and generative AI have exacerbated it. Many of these companies are now fanning fears about the existential risks of artificial intelligence. Not only does this distract from existing harms, it also positions these same companies as indispensable to addressing AI-related harms. In many ways, we are losing the momentum of the “techlash” that emerged in the wake of the Cambridge Analytica scandal. I am also concerned that in places like India, AI is being positioned as necessary for socio-economic development, promising an opportunity to leapfrog persistent challenges. Not only does this exaggerate AI’s potential, it also ignores that it is impossible to leapfrog the institutional development needed to put safeguards in place. Another issue we don’t consider seriously enough is the environmental impact of AI; the current trajectory may be unsustainable. In the current ecosystem, those most vulnerable to the impacts of climate change are unlikely to be the beneficiaries of AI innovations.

What issues should artificial intelligence users pay attention to?

Users need to realize that AI is not magic, nor anything approaching human intelligence. It is a form of computational statistics with many beneficial uses, but it is ultimately just probabilistic guessing based on historical or prior patterns. I’m sure there are many other issues users need to be aware of, but I want to caution against attempts to shift responsibility downstream onto users. I’ve seen this recently with the use of generative AI tools in resource-poor settings across much of the world: rather than exercising caution with these experimental and unreliable technologies, the focus tends to shift to how end users, such as farmers or frontline health workers, need to upskill to use them.

What is the best way to build artificial intelligence responsibly?

We must start by assessing the need for AI in the first place. Is there a problem that AI can uniquely solve, or are other approaches possible? And if we do build AI, is a complex black-box model necessary, or would a simpler logic-based model do just as well? We also need to re-center domain knowledge in the building of AI. In our obsession with big data, we have sacrificed theory; we need to build a theory of change, grounded in domain knowledge, that serves as the basis of the models we build, rather than big data alone. And of course this is all apart from critical issues such as participation, inclusive teams, labor rights, and more.

How can investors better promote responsible AI?

Investors need to consider the entire life cycle of AI production, not just the outputs or outcomes of AI applications. This requires looking at a range of issues, such as whether labor is fairly valued, the environmental impacts, the company’s business model (for example, is it based on commercial surveillance?), and accountability measures within the company. Investors also need to demand better, more rigorous evidence of AI’s supposed benefits.
