Monday, November 25, 2024

Developing Responsible AI Policy For Civil Society

By Shaista Keating and Chloe Mankin

The rapid evolution and widespread adoption of artificial intelligence (AI) technologies offer both opportunities and challenges to civil society, particularly concerning responsible and ethical use. The use of AI in civil society organizations (CSOs) has profound implications for their missions and beneficiaries.

As AI becomes increasingly prevalent, CSO leaders must establish robust policy frameworks that prioritize the interests of their beneficiaries and communities while upholding principles of equity, inclusivity, and social justice.

CSO leaders must create structured guidelines for AI integration to address risks and ensure ethical deployment. Foundational efforts in these areas are underway. UNICEF has developed guidelines for AI use. The United Nations has adopted a global AI resolution. The White House has proposed an AI Bill of Rights, and in March 2024, the European Parliament passed the world’s first comprehensive AI law, the Artificial Intelligence Act.

Drawing insights from these efforts, leaders at CSOs can begin to adopt sets of best practices and principles for the responsible use of AI at their organizations. From that foundation, AI use policies and guidelines should be created, shared, and iterated upon to keep pace with the rapid development of AI technology.

Beneficiary-Centric Approaches And Policy Frameworks

In developing AI policies, CSO managers must adopt a human-centric mindset and prioritize the interests of beneficiaries. By actively involving marginalized communities, CSO managers can ensure that AI initiatives promote equity and social justice.

Central to responsible AI deployment within CSOs is the creation of clear and comprehensive policy frameworks. These frameworks guide organizations through the ethical and operational complexities of AI integration. The absence of clear policies can lead to unintended consequences, including bias, discrimination, and privacy violations. Policy frameworks provide a necessary structure to mitigate these risks and ensure ethical AI deployment.

Lessons From UNICEF

UNICEF provides an example of such a framework in action, particularly in safeguarding the rights of children amid the AI revolution.

Recognizing both the immense potential and profound risks posed by AI in children’s lives, UNICEF has delineated nine guidelines for responsibly implementing AI to ensure children’s well-being. These guidelines encompass crucial aspects such as supporting children’s development, ensuring inclusivity, prioritizing fairness, protecting privacy, ensuring safety, and fostering transparency and accountability. 

Educating leaders in government and business, as well as individuals, about AI and children’s rights is crucial for creating an environment that supports responsible AI deployment, especially for children, who depend on adults to safeguard their rights.

By adhering to these principles, AI can serve as a catalyst for positive change, advancing the interests of all members of society, particularly the most vulnerable.

Ideas From The United Nations

The recent unanimous adoption of a resolution by the United Nations underscores the global imperative for the safe, secure, and trustworthy use of AI. This resolution emphasizes the need for regulations to mitigate potential issues such as digital divides, discrimination, and privacy concerns associated with AI deployment. 

Key aspects of the resolution include raising public awareness about AI’s benefits and risks, investing in AI research and development, ensuring privacy and transparency, and addressing diversity and bias in AI datasets and algorithms.

TechSoup’s Data Commons initiative supports these UN goals by providing CSOs with access to comprehensive public data, enabling them to make data-informed decisions and address critical community issues effectively.

Principles From The White House

The Blueprint for an AI Bill of Rights, proposed by the White House, provides a comprehensive framework for ethical AI development and deployment. This blueprint outlines principles such as ensuring the safety and effectiveness of AI systems, protecting against algorithmic discrimination, safeguarding data privacy, providing notice and explanation for AI-driven decisions, and promoting human alternatives and considerations.
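
To make the “notice and explanation” principle concrete, a CSO might record, for each AI-assisted decision, what the system contributed and why, in a form that can be shared with the person affected. The following Python sketch is a hypothetical illustration only; the DecisionRecord structure and its field names are assumptions, not part of the Blueprint itself.

    # Hypothetical sketch: recording notice-and-explanation details for an
    # AI-assisted decision. Field names are illustrative assumptions, not
    # prescribed by the Blueprint for an AI Bill of Rights.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        beneficiary_id: str     # who the decision affects
        decision: str           # what was decided (e.g., "grant approved")
        model_used: str         # which AI system contributed
        explanation: str        # plain-language reason for the outcome
        human_reviewer: str     # contact for appeal or a human alternative
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

        def notice_text(self) -> str:
            """Plain-language notice to send to the person affected."""
            return (
                f"On {self.timestamp:%Y-%m-%d}, an automated system "
                f"({self.model_used}) contributed to this decision: "
                f"{self.decision}. Reason: {self.explanation}. "
                f"To request human review, contact {self.human_reviewer}."
            )

Keeping such records also supports the Blueprint’s human-alternatives principle, since every automated decision carries a named route back to a person.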

Drawing on these principles, CSO leaders can develop policies that align with responsible AI practices and ensure that AI serves the best interests of their constituents. By adapting the principles to their own organizational contexts, CSOs can navigate the ethical complexities of AI integration and drive positive societal change.

Moreover, actions such as President Joseph Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence highlight concerted efforts towards responsible AI governance. This order mandates comprehensive measures to enhance AI safety, security, privacy, and equity. It includes standards to prevent AI misuse, safeguard against AI-enabled fraud, and promote transparency through content labeling.

The order also addresses civil rights by tackling algorithmic issues and advancing responsible AI in critical infrastructure. By setting guidelines for AI research, workforce development, and international collaboration, the order fosters innovation while ensuring ethical and responsible AI deployment, balancing technological advancement with public interest protection and social equity promotion.

Expertise From The European Union

The Artificial Intelligence Act establishes rules intended to promote trustworthy AI in Europe and beyond. It was developed through a public consultation process that included CSOs. To avoid negative effects, the act asserts that AI systems should be overseen by humans rather than by automated systems. The aim is to encourage innovation in AI while ensuring that AI systems respect fundamental rights, safety, and ethical standards. The act prohibits AI applications that pose unacceptable risks, such as manipulative AI.

The new rules ban certain AI applications that threaten citizens’ rights:

  • Cognitive behavioral manipulation of people or specific vulnerable groups. For example, voice-activated toys that encourage dangerous behavior in children are not permitted.
  • Social scoring that classifies people based on behavior, socioeconomic status, or personal characteristics.
  • Biometric identification and categorization of people.
  • Real-time and remote biometric identification systems, such as facial recognition.

Exceptions to these rules are permitted for law enforcement purposes. For example, “real-time” or “post” remote biometric identification systems may be permitted for the prosecution of serious crimes.

The AI Act follows a risk-based logic: certain AI systems are prohibited outright, while others are classified as high-risk and subject to additional legal and technical obligations, which may include assessments of their impact on people’s fundamental rights. Under the act, people have the right to file complaints about AI systems with designated national authorities.
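
To illustrate that risk-based logic, the following Python sketch triages a system by its intended use into the act’s broad tiers. It is an unofficial, simplified illustration: the example uses and the risk_tier mapping are assumptions for demonstration, not a legal classification.

    # Unofficial sketch of the AI Act's risk-based triage. The tiers mirror
    # the act's structure (prohibited / high-risk / limited / minimal), but
    # the example mapping below is illustrative, not a legal determination.
    PROHIBITED_USES = {
        "social_scoring",
        "manipulative_toys",
        "untargeted_biometric_categorization",
    }
    HIGH_RISK_USES = {
        "hiring_screening",
        "credit_scoring",
        "critical_infrastructure_control",
    }
    LIMITED_RISK_USES = {"chatbot", "content_generation"}

    def risk_tier(intended_use: str) -> str:
        """Return the (illustrative) risk tier for a system's intended use."""
        if intended_use in PROHIBITED_USES:
            return "prohibited: may not be deployed"
        if intended_use in HIGH_RISK_USES:
            return "high-risk: conformity assessment and rights impact review"
        if intended_use in LIMITED_RISK_USES:
            return "limited-risk: transparency obligations (e.g., AI disclosure)"
        return "minimal-risk: no additional obligations"

    print(risk_tier("social_scoring"))  # prohibited: may not be deployed
    print(risk_tier("chatbot"))         # limited-risk: transparency obligations

The design point for CSOs is that obligations attach to the use, not the technology: the same model can fall into different tiers depending on how an organization deploys it.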

Developing AI Use Policies 

Developing clear and comprehensive AI use policies is essential for CSOs to mitigate risks and ensure responsible AI deployment. These policies should address key considerations, including data privacy, algorithmic transparency, and stakeholder engagement. By aligning their policies with ethical principles, CSOs can build trust and accountability in their AI initiatives.

TechSoup has crafted AI policy guidelines that span a diverse array of topics pertinent to AI integration. From disclosure of generative AI usage to safeguarding proprietary information and user data, TechSoup’s guidelines provide a comprehensive roadmap for ethical AI adoption within CSOs. The emphasis on applying an ethical framework, limiting AI usage to work purposes, and fostering knowledge-sharing within the community underscores a holistic approach to ethical AI deployment that prioritizes the welfare of all stakeholders involved.
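
These guidelines are written as policy rather than code, but some can be reinforced with simple technical guardrails. The Python sketch below shows one hypothetical approach: redacting obvious personal data before text is sent to a generative AI service, and logging each use so it can later be disclosed. The prepare_prompt function, the redaction patterns, and the log format are illustrative assumptions, not TechSoup’s implementation.

    # Hypothetical guardrail sketch inspired by (not taken from) TechSoup's
    # guidelines: redact obvious personal data before prompting a generative
    # AI service, and log each use to support later disclosure.
    import re
    from datetime import datetime, timezone

    # Simple illustrative patterns; real redaction needs a vetted PII tool.
    REDACTIONS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    }

    usage_log: list[dict] = []  # in practice, write to durable storage

    def prepare_prompt(text: str, purpose: str) -> str:
        """Redact known PII patterns and record the use for disclosure."""
        redacted = text
        for label, pattern in REDACTIONS.items():
            redacted = pattern.sub(f"[{label} removed]", redacted)
        usage_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "purpose": purpose,  # guideline: work purposes only
            "redactions_applied": list(REDACTIONS),
        })
        return redacted

    safe = prepare_prompt(
        "Draft a thank-you note to donor jane@example.org, 555-123-4567.",
        purpose="donor communications",
    )
    print(safe)  # personal details replaced before the prompt leaves the org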

The development of robust policy frameworks for responsible AI usage is crucial for nonprofits to mitigate risks and maximize the benefits of AI technology for their missions. By writing policy statements and adhering to ethical principles, CSO leaders can ensure that AI serves the best interests of all members of society. 

By fostering collaboration across sectors and advocating for responsible AI, CSOs can play a pivotal role in building a more inclusive, equitable, and ethically driven future.

*****

Shaista Keating is senior director of hardware programs at TechSoup. Her email is skeating@techsoup.org. Chloe Mankin is program management specialist at TechSoup. Her email is cmankin@techsoup.org.
