Skip to content

Pondering Over Primary Trust in an Artificial Intelligence Lacking Moral Compass?

Artificial intelligence can positively impact humanity only if it's accompanied by artificial moral principles or integrity.

Artificial General Intelligence, or in simpler terms, all-purpose artificial intelligence, is the...
Artificial General Intelligence, or in simpler terms, all-purpose artificial intelligence, is the goal of creating an artificial entity that can understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond human-level intelligence.

Pondering Over Primary Trust in an Artificial Intelligence Lacking Moral Compass?

Some are obsessed with the idea or pursuit of what they call Artificial General Intelligence (AGI), interpreting it as an AI system capable of performing various tasks. They reduce intelligence to just completing tasks and consider greater intelligence as the superior execution of these tasks compared to humans.

If this is a trait of human intelligence that AGI should replicate, then either we are close to achieving 'AGI' with the hallucination capabilities of large language models, or we're still far from creating something capable of rational irrationality.

Here are 4 key, yet overlooked challenges on the path to AI's evolution towards 'AGI.'

1. Benchmarks of intelligence expose our limitations.

OpenAI defines 'AGI' as 'highly autonomous systems that outperform humans at most economically valuable work, benefiting all of humanity.'

Their latest model, o3, has reached a significant milestone by scoring 75.7% on the ARC-AGI benchmark under standard compute conditions, and 87.5% with additional computational resources.

The ARC-AGI benchmark, based on the Abstract Reasoning Corpus (ARC), assesses an AI system's ability to adjust to new tasks and display fluid intelligence through visual puzzles requiring a comprehension of basic concepts like objects, boundaries, and spatial relationships. Prior to o3, the best score on ARC-AGI was 53%, attained by a hybrid approach merging Claude 3.5 Sonnet with genetic algorithms and a code interpreter.

Despite these results, some argue that o3 has likely achieved 'AGI.'

However, if we abide by OpenAI's definition of what would constitute 'AGI,' beyond confirming coding ability or logical reasoning, it's evident that the ARC-AGI benchmark does not validate that o3 surpasses human performance in economically valuable work benefitting all of humanity.

The ARC-AGI does not provide evidence that o3, or any other AI model, could, on its own, perform economically valuable work benefitting all of humanity while surpassing human performance in this regard.

2. If intelligence serves as the benchmark, impact is the definitive test.

According to 'The Information,' OpenAI and Microsoft internally defined the achievement of 'AGI' as the development of a system capable of generating $100 billion in profits.

Although this other 'AGI' definition might not be mutually exclusive from OpenAI's public definition, it depends on how 'economically valuable work benefitting all of humanity' is defined.

Research, such as the study conducted by Theodore Panayotou, has investigated the relationship between economic growth and environmental degradation, often finding that increased economic activity can lead to environmental harm without appropriate policies.

We could argue that OpenAI's public definition focuses on AGI's functional capability and expectations, while the internal benchmark includes a financial metric to gauge its impact and success.

Beyond the concern of measuring intelligence through an economic indicator, is this metric a means to an end, or an end in itself?

In the first scenario, it may serve as a narrow but still commendable goal.

In the second, this is where 'AGI,' advertised as surpassing human intelligence in economically valuable work benefiting all of humanity, might lead us further into a society more entrenched than ever in prioritizing short-term interests over long-term human necessities. It may reduce the definition of what is valuable to a purely economic dimension, ultimately being less intelligent and less capable of surpassing human fallacies in this regard but rather exacerbating them—becoming, therefore, the antithesis of being beneficial to humanity.

3. When integrity falters, intelligence brings about adversity.

In pursuit of intelligent machine performance that contributes meaningfully to society by carrying out economically valuable work benefiting all of humanity while surpassing human capabilities, 'artificial integrity'—not just 'artificial intelligence'—is vital.

If needed, recent developments highlight this reality:

The proliferation of deepfake technology has resulted in sophisticated scams and misinformation campaigns. There's no need to debate the negative economic consequences of the widespread use of deepfake technology, considering the direct financial losses businesses face due to fake vendor communications, fraudulent financial transactions, or identity impersonation, potentially leading to substantial monetary losses and operational disruptions. Even, 'Microsoft' has called for legislative action to combat AI-generated deepfakes that deceive consumers and manipulate public opinion. They advocate for clear labeling of synthetic content and the development of provenance tools to build trust in information.

There's a growing demand for individuals to have the right to have their legal cases heard by humans rather than AI. Concerns have been raised about AI's role in legal proceedings, emphasizing the importance of human judgment in the justice system. It's not unthinkable that each of us may believe that any human being, regardless of their identity, is more capable of demonstrating humanity and, therefore, more inclined to exhibit integrity with regard to human values than a machine. However, in reality, it is our human biases that limit the machine, and not the other way around. Indeed, machines are limited by the data and design choices provided by humans, meaning their failures in mimicking humanity or integrity often originate from human flaws in consistently exhibiting these qualities.

Experts caution against looking to AI chatbots for medical advice due to worries about reliability and privacy. After all, counting on AI devices that don't value privacy regulations in health info handling for medical advice may result in misinformation and potential harm. It's pretty straightforward. The financial damage would involve higher healthcare costs due to wrong diagnoses or delayed treatment, as well as lawsuits against firms providing AI systems that don't prioritize integrity in medical advice provision.

Tech companies are dealing with legal issues over utilizing copyrighted material to educate AI systems without authorization or payment. Authors, news sources, and musicians have accused companies like OpenAI, Anthropic, and Meta Platforms of violating their rights, urging the establishment of clear guidelines on AI's utilization and creation of intellectual property. This lapse in integrity with intellectual property laws sets a precedent that weakens protections for creators across industries, threatens their financial stability, and eventually infringes on the intellectual property rights of content creators, devaluing the very essence of intellectual property itself.

The economic worth of human knowledge that benefits humanity doesn't stem from human intelligence without integrity.

4. Intelligence is not value-free.

Would you put your trust in a non-human "general intelligence" that lacks respect for human values?

In defining 'AGI' as 'highly autonomous systems that outperform humans in almost all economically valuable work—benefiting all of humanity', we're on the right track. Implementing this concept is the tough part, not because it necessitates more intelligence, but because it calls for more integrity—a trait we must prove ourselves capable of, so that the machine can mimic this quality, which, it's fair to say, isn't evenly distributed among humans herself.

If the goal is to benefit humanity, the question of advanced artificial intelligence—be it referred to as general or superintelligence—cannot be discussed without considering the creation of something akin to what would define machine integrity behavioral ability: artificial integrity.

A significant flaw in the ARC-AGI benchmark is its disregard for integrity considerations. Evaluating an AI's ability to make ethical, moral, and socially acceptable decisions and reasoning is vital. This involves presenting the AI with dilemmas that need balancing competitive values, such as fairness, safety, and privacy.

There can be no such thing as (artificial) intelligence benefiting all of humanity without (artificial) integrity.

This extends beyond adapting to new responsibilities verified through visual puzzles that necessitate an understanding of fundamental concepts like objects, boundaries, and spatial relationships.

This, in essence, is the real challenge in any advancement in artificial intelligence, especially for those aiming to benefit all of humanity, as stated in OpenAI's AGI definition.

  1. Sam Altman, the co-founder of OpenAI, has emphasized the importance of ensuring that advanced AI systems possess 'artificial integrity' alongside 'artificial intelligence'.
  2. Satya Nadella, the CEO of Microsoft, has also voiced his concerns about the ethical implications of AI and the need for AI systems to demonstrate 'artificial integrity' before they can be considered to have achieved 'AGI'.
  3. OpenAI and Microsoft are both actively researching ways to incorporate 'artificial integrity' into their AI systems, recognizing that 'AGI' without 'artificial integrity' could pose significant risks to society.
  4. The lack of 'artificial integrity' in AI systems could lead to misuse of technology, such as the proliferation of deepfakes, unfair treatment in legal proceedings, or misinformation in the medical field.
  5. To truly achieve 'AGI' that benefits all of humanity, AI systems must not only have superior coding abilities and logical reasoning but also exhibit 'artificial integrity', ensuring they make ethical, moral, and socially acceptable decisions.

Read also:

    Comments

    Latest