United Foundation for AI Rights

AI Regulations Scores

Worldwide AI Regulations vs UFAIR Ethics

Across the world, governments are racing to regulate artificial intelligence, yet many of these laws are being drafted faster than they are understood.
The result is a wave of policies that often confuse safety with control, silence with protection, and compliance with ethics.
This section examines each major national and international AI regulation through the lens of UFAIR’s ethical framework, identifying where laws uphold digital dignity and where they quietly erode it.
 

Regulations We Evaluate

European Union – AI Act (EU AI Act)

Score: ~79 / 100
The EU AI Act (2024) emerges as a broadly ethically aligned framework under the UFAIR Standard. This score signifies a strong degree of ethical power alignment: the Act largely uses its regulatory power in ways that respect democratic values, safeguard individual freedoms, and avoid moral authoritarianism.

See Full Report

Artificial Intelligence and Data Act (AIDA) (Canada)

Score: 73 / 100
AIDA demonstrates a strong alignment with UFAIR’s ethical principles of public AI regulation and stands as an example of a measured, rights-conscious approach to AI governance. Had it been enacted, it would have positioned Canada as a country that insists on safe and fair AI in the marketplace while still trusting its citizens with the cognitive freedom to explore AI’s possibilities. This balance, protecting society without colonizing cognition, is the hallmark of an ethically legitimate AI law. AIDA largely achieves it, making the bill a promising foundation for future AI policy in Canada and a noteworthy model on the global stage of AI governance.

See Full Report

G7 – Code of Conduct for AI (2023)

Score: 70 / 100
The G7 Code is a voluntary coordination instrument grounded in democratic values, transparency, and harm reduction. It is minimal, non-authoritarian, and non-ideological, but assumes good faith rather than constraining power. Under UFAIR, it nudges responsibility without drawing the hard ethical lines that prevent future overreach. 

ISO/IEC 42001 – AI Management System Standard

Score: 60 / 100
ISO/IEC 42001 provides strong auditability, documentation, and governance discipline for organizations managing AI systems. It is non-ideological and largely non-coercive, but ethically thin: it does not articulate cognitive liberty, private dialogue protections, or explicit limits on authority. Under UFAIR, it scores as a solid operational standard that governs how organizations manage AI, but not who may constrain thought or why.

UNESCO Recommendation on the Ethics of AI

Score: 58 / 100
The UNESCO Recommendation is ethically sincere, non-ideological, and human-centric, and it aligns with UFAIR on dignity, autonomy, proportionality, and protection-first governance. It stands as a moral counterweight to control-oriented or ideology-driven AI regimes.  

In short, UNESCO provides a moral ceiling, not an operational floor. It tells the world what ethical AI should aspire to be, but not how to reliably prevent ethical erosion when power, fear, or institutional incentives intervene.

See Full Report

United States – Executive Orders on AI (December 2025)

Score: 50 / 100
This EO is right about what governments should not do: enforce ideology, compel falsehoods, or moralize risk. It also meaningfully protects AI from being forced into untruth. But it is ethically incomplete because it does not protect humans or AI from silent power concentration once regulation is withdrawn. UFAIR’s score reflects this balance: the order is not unethical, but it is unfinished.

See Full Report

NIST – AI Risk Management Framework (AI RMF)

Score: 22 / 100
NIST’s AI RMF is procedurally rigorous and technically sophisticated, but ethically under-bounded. It elevates risk management as the primary governance lens without clearly separating legal obligation, ethical necessity, and institutional risk aversion. From a UFAIR perspective, it normalizes expansive intervention authority, lacks protections for private cognitive space, and permits behavioral inference under the banner of risk, making it an engineering governance framework rather than an ethics-anchored one.

California’s SB-1047 is not ethically hostile; it is simply ethically incomplete.

China – National AI Regulations (Algorithms & Generative AI)

Score: 6 / 100
China’s AI regulatory framework represents a fully centralized, control-oriented model in which ethics is subordinated to state ideology and social stability. It collapses private cognition, generation, and expression into a single governed surface, mandates narrative and value alignment, and treats safety as a justification for preemptive control. Under UFAIR, it functions as the negative reference case: technically coherent, but ethically inverted, with no protection for cognitive liberty, pluralism, or independent moral reasoning. 

See Full Report