Women in AI: The Aspen Institute’s Kristine Gloria tells women to enter the field and ‘follow your curiosity’

To give AI-focused women academics and others their well-deserved and long-overdue time in the spotlight, TechCrunch has launched a series of interviews focusing on the remarkable women contributing to the AI revolution. As the AI boom continues, we’ll publish several pieces throughout the year highlighting key work that often goes overlooked. Read more profiles here.

Kristine Gloria leads the Emergent and Intelligent Technologies Initiative at the Aspen Institute, a Washington, D.C.-based think tank focused on values-based leadership and policy expertise. Gloria holds a Ph.D. in cognitive science and a master’s degree in media studies, and her past work includes research at MIT’s Internet Policy Research Initiative, the San Francisco-based Startup Policy Lab, and the Center for Society, Technology, and Policy at UC Berkeley.

Q&A

In a nutshell, how did you get started in the field of artificial intelligence? What drew you to this field?

To be honest, I definitely didn’t start my career pursuing artificial intelligence. First, I really wanted to understand the intersection of technology and public policy. At the time, I was working on a master’s in media studies, exploring ideas around remix culture and intellectual property. I was living and working in Washington, D.C., as an Archer Fellow at the New America Foundation. One day, I distinctly remember sitting in a room full of public policymakers and politicians throwing around terms that didn’t quite match their actual technical definitions. It was shortly after this meeting that I realized that in order to advance public policy, I needed the credentials myself. I went back to school and earned a Ph.D. in cognitive science, focusing on semantic technologies and online consumer privacy. I was extremely lucky to find a mentor, advisor, and lab that encouraged an interdisciplinary understanding of how technology is designed and built. As a result, I sharpened my technical skills while developing a more critical perspective on the many ways technology intersects with our lives. In my role as Director of Artificial Intelligence at the Aspen Institute, I had the privilege of ideating, engaging, and collaborating with some of the leading thinkers in AI. And I always found myself drawn to those who took the time to question deeply whether and how AI would impact our daily lives.

I’ve led various AI initiatives over the years, and one of the most meaningful is just getting started. Now, as a founding team member and Director of Strategic Partnerships and Innovation at the newly launched nonprofit Young Futures, I’m excited to be part of this venture in service of our mission: making the digital world an easier place to grow up in. Specifically, as generative AI becomes table stakes and new technologies emerge, it is both urgent and critical that we help preteens, teens, and their support systems navigate this vast digital wilderness together.

What work (in artificial intelligence) are you most proud of?

There are two initiatives I’m most proud of. The first is my work addressing the tensions, pitfalls, and effects of AI on marginalized communities. “The Power and Progress of Algorithmic Bias,” published in 2021, represents months of stakeholder engagement and research around this issue. In the report, we ask one of my all-time favorite questions: “How can we (data and algorithmic operators) reframe our own models to predict a different future, one that centers the needs of the most vulnerable?” Safiya Noble is the original author of that question, and it’s one I continue to consider throughout my work.

The second most important initiative came recently from my time as head of data at Blue Fever, a company whose mission is to improve the well-being of young people in judgment-free, inclusive online spaces. Specifically, I led the design and development of Blue, the first AI emotional support companion. I learned a lot in the process. Most importantly, I gained a newfound appreciation for the impact a virtual companion can have on someone who is struggling or may not have a support system in place. Blue was designed and built to bring its “big sibling energy” to help guide users to reflect on their mental and emotional needs.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

Unfortunately, the challenges are real and still very much present. I’ve experienced doubt about my skills and experience from all types of colleagues in the space. But for every negative challenge, I can point to an example of a male colleague being my fiercest cheerleader. It’s a tough environment, and I hold on to these examples to help navigate it. I also think the field has changed enormously, even in the last five years. The skill sets and professional experience that qualify as part of “AI” are no longer strictly computer science-focused.

What advice would you give to women seeking to enter the field of artificial intelligence?

Enter the field and follow your curiosity. This space is constantly changing, and the most interesting (and potentially most productive) pursuit is to maintain a critical optimism about the field itself.

What are some of the most pressing issues facing AI as it evolves?

In fact, I think some of the most pressing issues facing AI are the same ones we haven’t fully solved since the web was first introduced. These are issues of agency, autonomy, privacy, fairness, justice, and more, and they are core to how we situate ourselves in relation to machines. Yes, AI can make things more complicated, but so can sociopolitical shifts.

What are some issues AI users should be aware of?

AI users should be aware of how these systems complicate or enhance their own agency and autonomy. Additionally, with so much discourse around how technology, and particularly AI, affects our well-being, it’s important to remember that there are vetted tools available to manage the more negative outcomes.

What is the best way to build artificial intelligence responsibly?

Responsible AI building is more than just code. A truly responsible build takes into account design, governance, policy, and the business model. All of these factors drive one another, and if we strive to address only one part of the build, we will continue to fall short.

How can investors better promote responsible AI?

One specific practice, which I admire Mozilla Ventures for requiring in its diligence, is an AI model card. Developed by Timnit Gebru and others, the practice of creating model cards enables teams, funders included, to assess the risks and safety issues of the AI models used in a system. Additionally, and related to the above, investors should holistically evaluate a system’s capacity and ability to be built responsibly. For example, if you have trust and safety features in your build or publish model cards, but your revenue model exploits data from vulnerable populations, your intentions as an investor are misaligned. I do think you can build responsibly and still be profitable. Finally, I would love to see more collaborative funding opportunities among investors. In the realm of well-being and mental health, the solutions will be varied and vast, because no one person is the same and no single solution will work for everyone. Collective action among investors interested in solving this problem would be a welcome addition.
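For a concrete sense of what a model card captures, below is a minimal sketch in Python. The `ModelCard` dataclass, its field names, and the example values are illustrative assumptions loosely based on the model card framework Gebru and her co-authors proposed, not Mozilla Ventures’ actual diligence template or any particular library’s API.

```python
# A minimal, illustrative model card structure. The dataclass and its
# fields are assumptions loosely modeled on the "Model Cards for Model
# Reporting" framework; real templates vary by organization.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str                                   # what the model is for
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data: str = ""                             # provenance of training data
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    ethical_considerations: list[str] = field(default_factory=list)
    caveats: list[str] = field(default_factory=list)


# Hypothetical example: the kind of card a funder might review in diligence.
card = ModelCard(
    model_name="emotional-support-companion",           # hypothetical model
    version="0.1.0",
    intended_use="Reflective emotional-support prompts for teens.",
    out_of_scope_uses=["clinical diagnosis", "crisis intervention"],
    training_data="Licensed, de-identified peer-support conversations.",
    evaluation_metrics={"helpfulness_rating": 0.87},
    ethical_considerations=["Escalate crisis language to human moderators."],
    caveats=["Not evaluated for users under 13."],
)
print(card.model_name, "-", card.intended_use)
```

Even in this toy version, the value of the exercise is visible: a funder can check whether the stated intended use, caveats, and ethical considerations line up with how the product actually makes money, which is exactly the alignment check described above.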
