AI could have catastrophic consequences — is Canada ready?

Nations — Canada included — are running out of time to design and implement comprehensive safeguards on the development and deployment of advanced artificial intelligence systems, a leading AI safety company warned this week.

In a worst-case scenario, power-seeking superhuman AI systems could escape their creators' control and pose an "extinction-level" threat to humanity, AI researchers wrote in a report commissioned by the U.S. Department of State.

The department insists that the views the authors express in the report do not reflect those of the U.S. government.

But the report's message is bringing the Canadian government's actions to date on AI safety and regulation back into the spotlight — and one Conservative MP is warning the government's proposed Artificial Intelligence and Data Act is already out of date.

AI vs. everyone

The U.S.-based company Gladstone AI, which advocates for the responsible development of safe artificial intelligence, produced the report. Its warnings fall into two main categories.

The first concerns the risk of AI developers losing control of an artificial general intelligence (AGI) system. The authors define AGI as an AI system that can outperform humans across all economic and strategically relevant domains.

While no AGI systems exist to date, many AI researchers believe they are not far off.

"There is evidence to suggest that as advanced AI approaches AGI-like levels of human and superhuman general capability, it may become effectively uncontrollable. Specifically, in the absence of countermeasures, a highly capable AI system may engage in so-called power seeking behaviours," the authors wrote, adding that these behaviours could include strategies to prevent the AI itself from being shut down.

Read more on cbc.ca