Do We Need to Be Concerned About Artificial Intelligence Ethics Frameworks?

By Sanjay Kumar, LL.M.

 

What critiques have been leveled against artificial intelligence (AI) ethics frameworks? And should we care about the role of law in the regulation of AI? In any case, we can keep working our way closer to an answer, as we typically do.

“You can be unethical and still be legal — that’s the way I live my life,” Mark Zuckerberg said in 2004 while still a student at Harvard.

With that way of thinking in mind, we will evaluate and examine the difference between ethics and legality.

  1. Ethics versus law 

Critiques of ethical frameworks intended to guide the design, adoption, and use of AI focus on three main points:

  1. Ethics frameworks are voluntary. They can therefore easily be set aside in favor of other value-optimizing approaches.
  2. Ethics frameworks do not guide organizations on how to implement ambitious standards; instead, they defer to regulated standards, of which there are few.
  3. Ethics frameworks do not operate at a global level.

The erratic approaches taken by AI ethics frameworks shed light on the problem with viewing them through the lens of trust.

Trustworthy AI needs to be defined by regulatory compliance alone. That compliance will build trust, which will in turn further inform regulation.

  2. The development and importance of AI

The results of the 2020 McKinsey global survey on AI support the notion that companies increasingly design and adopt AI to generate value, and that this value increasingly comes in the form of revenue gains and cost savings. Respondents from a variety of industries attributed 20 percent or more of their organizations’ pre-tax earnings to AI.

AI is “increasingly prolific in modern society and most every organization,” the report concluded. McKinsey anticipated that customer demand for AI would increase, driven by organizational owners’ demands for revenue increases and/or cost decreases. Organizational investment in AI prompted by the COVID-19 pandemic focused on revenue stability amid the changing business environment. As organizations expand their use of AI, many publish or endorse principles and frameworks.

Eight common subjects appear across these AI principles frameworks:

  1. Privacy.
  2. Accountability.
  3. Safety and security.
  4. Transparency and explainability.
  5. Fairness and non-discrimination.
  6. Human control of the technology.
  7. Professional responsibility.
  8. Promotion of human values.
  3. Confidence in AI and trustworthy AI

 

Emerging technologies such as AI are among the most feared, and given that considerable investment continues to flow in this direction, trust is only set to decline further. This is consistent with the increasing impact that technologies such as AI are having on many aspects of life. The increasing activity to develop, adopt, and use AI has led the World Economic Forum to brand the current era the “Fourth Industrial Revolution,” with much change still to come.

Trust in the AI context is the enabler of decisions between organizations (and people) as it reflects the level of confidence each has in the other. Digital trust is a concept based on each organization’s digital reputation as well as the assurance levels provided by each organization’s people, processes, and technology to build a secure digital world.

The current market for technologies, including AI, presents customers with vast choice, giving them greater ability to set expectations and to take their business elsewhere when those expectations are not met. Lost trust between customers and organizations results in lost business and revenue; in the current marketplace, trust is a highly traded commodity that can affect benefit and value within seconds.

Trust is difficult to describe. Its understanding varies from person to person. It is defined differently across cultural and religious norms. Many definitions claim it can be broken down into quantifiable ethical measures such as truthfulness, integrity, confidentiality, responsibility, and more. It also extends into tangible components such as cybersecurity, compliance, and responsible business. It is important to not only understand what trust means in a particular context but also to define what it means for participants in that context as it underpins every decision and interaction between participants.

The theme of trust appears to have emerged in the context of AI principles and ethics frameworks following the publication of the European Union’s Ethics Guidelines for Trustworthy AI in 2019. The Guidelines treat trust as defined by all of the underlying principles or framework elements, so that implementing those principles and frameworks will likely result in trustworthy AI. Trust is identified as pivotal in the Guidelines:

“In a context of rapid technological change, we believe it is essential that trust remains the bedrock of societies, communities, economies, and sustainable development. We, therefore, identify ‘Trustworthy AI’ as our foundational ambition, since human beings and communities will only be able to have confidence in the technology’s development and its applications when a clear and comprehensive framework for achieving its trustworthiness is in place”

 

However, the Guidelines do not identify trust as a discrete component of principles or frameworks; they treat it only as secondary to the discrete elements of lawfulness, ethics, and technical robustness. The inconsistency between the Guidelines’ advocacy for trust as fundamental to the realization of benefits or other value and their concurrent relegation of trust to a secondary element is stark. This seems to have set the tone for the principles and frameworks that followed.

  4. Inconsistent approaches to trust and “trustworthy AI”

Several frameworks developed and adopted in the private sector emphasized the role of safety and security in fostering trust in AI, according to a Harvard University study published after the European Guidelines. That study further references the AI Policy Principles of the Information Technology Industry Council, which state that the success of AI depends on users’ “trust that their personal and sensitive data is protected and handled appropriately.” This interpretation treats trust as a secondary element to regulatory adherence, in this case adherence to privacy regulation. Other private sector AI frameworks and principles documents also identify trust as secondary to the achievement of value, to regulatory compliance, or to both.

Currently, India has no specific laws relating to AI, big data, or machine learning. The Government’s priority at this stage appears to be the promotion of AI and its application. NITI Aayog offers over 30 policy recommendations to invest in scientific research, encourage reskilling and training, accelerate the adoption of AI across the value chain, and promote ethics, privacy, and security in AI. Its flagship initiative is a two-tiered integrated strategy to boost research in AI.

Let’s use Facebook (now Meta) as a different example of trust referencing, or the lack thereof, in an AI ethics framework. Facebook’s five key pillars for “responsible AI” disregard trust. The five pillars (privacy and security, fairness and inclusion, robustness and safety, transparency and control, and accountability and governance) rely heavily on existing regulatory frameworks for their definition and contain no reference to also being trustworthy.

  5. Government approach

 

The European Commission’s approach further illustrates that trust, while essential to the design, adoption, and use of AI, is defined by reference to regulation, and that it is through regulation that trust in AI might increase.

The 2018 European AI strategy revolved around two focus areas, one of which was “trustworthy AI” (the other being excellence in AI). The General Data Protection Regulation (GDPR), which took effect later that year, was foreshadowed by the European AI strategy as “a major step for building trust, essential in the long term for both people and companies.” However, the 2020 Edelman Trust Barometer assessment of trust in technology does not reflect the foreshadowed impact of the GDPR; instead, it records a five percent decline in trust in technology in the United Kingdom during the 2019 calendar year. The 2021 Edelman Trust Barometer did not publish findings on trust in technology that are comparable with the earlier years’ assessments.

Also suggestive of a continued decline in trust in technology in the United Kingdom during the 2020 calendar year is the European Commission’s move to propose an AI regulation (the AI Act). The AI Act can be considered an attempt to regulate AI comprehensively, and it also supports the notion that trust in AI is grounded in regulation.

The AI Act lists prohibited AI applications including:

  • Manipulative online practices that produce physical or psychological harm to individuals or exploit their vulnerability based on age or disability.
  • Social scoring that produces disproportionate or de-contextualized detrimental effects; and
  • Biometric identification systems used by law enforcement authorities in public spaces (where their use is not strictly necessary or when the risk of detrimental effects is too high).

Such prohibitions proposed within the AI regulation suggest that trust in AI is borne only out of regulatory compliance, given the need to prohibit certain uses and to enforce those prohibitions. If trust in AI were a primary element in the design, adoption, and use of AI, there would be no need for regulation to identify prohibitions.

  6. Conclusion

We live our lives by the rule of law, a choice itself grounded in ethical considerations, even as we decline to rely on unregulated ethical considerations such as trust. Similarly, the increasing cynicism toward technology and the accompanying rise in the regulation of AI demonstrate that trust, as a primary element in an AI ethics framework, must increasingly be defined by regulatory compliance and nothing else.