Are we ready for AI to surpass the human brain?

Not only should we figure out how to get there, but should we even?

FILE PHOTO: AI (Artificial Intelligence) letters are placed on computer motherboard in this illustration taken, June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

As artificial intelligence advances towards the goal of human-level cognition, a more in-depth discussion of the ethical implications, regulatory hurdles, and societal concerns is required. Not only should we figure out how to get there, but should we even?

The arguments

The pursuit of artificial intelligence consciousness poses important moral dilemmas: Should machines be capable of feeling emotions similar to those of humans? What ethical ramifications result from developing systems that can learn, adapt, and perhaps make decisions without human input?

Artificial intelligence’s lack of self-awareness may mitigate ethical concerns about rights or suffering, but it doesn’t eliminate misuse or unforeseen repercussions.

Regulatory and Governance Challenges

Calls for a “kill switch” and integrated safety measures reflect a broader concern: that AI development is accelerating faster than legal frameworks can keep up.

Governments and regulatory organisations around the world face the difficulty of balancing innovation against privacy, public safety, and ethical use.

Societal Impact and Consumer Trust

Aggressive data tactics by large tech companies, and the privacy concerns they raise, are eroding consumer faith in AI systems. This mistrust significantly hampers the wider adoption of AI breakthroughs.

The use of consumer data for AI model training (e.g., Meta’s practices) makes clearer regulations and more transparent company policies necessary.

Future Directions for AI Research and Development

Despite technical strides towards human-level AI, it remains unclear whether the benefits could outweigh the threats to society and the moral conundrums involved.

Because the emergence of superhuman intelligence is not only a technological problem but also a societal and ethical one, a consensus is needed on the principles experts believe must be protected.

As we get closer to developing AI on par with humans, it is crucial to reconsider the path ahead not only in terms of technological advancement but also in terms of ethics and society. The question is not only how far we can go, but whether we should get there at all. Are we prepared to face the repercussions?

The facts

AI technology analyst Eitan Michael Azoff believes the world is veering towards cracking the “neural code” that will allow AI to learn and process information in the same way that the human brain does.

Azoff’s book, “Towards Human-Level Artificial Intelligence: How Neuroscience can Inform the Pursuit of Artificial General Intelligence,” discusses the necessity of understanding neural processes in achieving AI consciousness.

Azoff contends that visual processing modelling will be critical, as visual thinking predates human language and is fundamental to cognitive activities.

Concept of AI Consciousness Without Self-Awareness

According to Azoff, AI could acquire consciousness without self-awareness, much as humans focus on tasks without self-reflection. This form of consciousness might allow AI to plan and retain knowledge in the way some animals do.

Challenges of Current AI Models

Machine learning models require continual human input to avoid quality degradation, a problem often called “inbreeding,” in which AI-generated data is recycled back into models, compounding errors.

This dependency on human oversight goes against the idea of totally autonomous AI.

Potential Risks and the Need for Safety Measures

Azoff emphasises the risks involved with powerful AI systems and proposes a “kill switch” as a required safeguard. He also suggests that AI systems be programmed with behavioural safety guidelines to prevent unforeseen risks.

Public Concerns and Corporate Pushback

Major companies such as Google, Meta, and Microsoft are facing criticism for incorporating AI into services without proper transparency and consumer consent.

Microsoft’s AI-powered Recall feature was delayed over privacy concerns before resuming its rollout, exemplifying the ongoing tension between innovation and regulation.
