How do you know when AI is powerful enough to be dangerous? Regulators try to do the math

How do you know if an artificial intelligence system is so powerful that it poses a security danger and shouldn't be unleashed without careful oversight?

For regulators trying to put guardrails on AI, it's mostly about the arithmetic. Specifically, an AI model trained using 10 to the 26th floating-point operations must now be reported to the U.S. government and could soon trigger even stricter requirements in California.

Say what? Well, if you're counting the zeroes, that's 100,000,000,000,000,000,000,000,000, or 100 septillion, calculations, using a measure known as flops, or floating-point operations.
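The zero-counting above can be checked directly. This is a minimal sketch, not anything from the regulations themselves; the constant name is invented for illustration:

```python
# The reporting threshold discussed in the article, written out as arithmetic.
THRESHOLD_FLOP = 10 ** 26  # 1 followed by 26 zeros

# Confirm the written-out figure from the article matches 10^26.
written_out = 100_000_000_000_000_000_000_000_000
assert written_out == THRESHOLD_FLOP

# "100 septillion": a septillion is 10^24, so 100 septillion is 10^26.
assert 100 * 10 ** 24 == THRESHOLD_FLOP

print(f"{THRESHOLD_FLOP:.0e}")  # → 1e+26
```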

What it signals to some lawmakers and AI safety advocates is a level of computing power that might enable rapidly advancing AI technology to create or proliferate weapons of mass destruction, or conduct catastrophic cyberattacks.

Those who've crafted such regulations acknowledge they are an imperfect starting point to distinguish today's highest-performing generative AI systems — largely made by California-based companies like Anthropic, Google, Meta Platforms and ChatGPT-maker OpenAI — from the next generation that could be even more powerful.

Critics have pounced on the thresholds as arbitrary — an attempt by governments to regulate math.

“Ten to the 26th flops,” said venture capitalist Ben Horowitz on a podcast this summer. “Well, what if that’s the size of the model you need to, like, cure cancer?”

An executive order signed by President Joe Biden last year relies on that threshold. So does California's newly passed AI safety legislation — which Gov. Gavin Newsom has until Sept. 30 to sign into law or veto. California adds a second metric to the equation: regulated AI models must also cost at least $100 million to build.
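California's version is thus a two-part test: a model is covered only if both its training compute and its training cost cross the stated thresholds. A rough sketch of that logic, with a function name and semantics that are assumptions for illustration rather than anything taken from the bill text:

```python
COMPUTE_THRESHOLD_FLOP = 10 ** 26   # threshold in the federal executive order
COST_THRESHOLD_USD = 100_000_000    # California's added cost metric

def is_covered_model(training_flop: float, training_cost_usd: float) -> bool:
    # Hypothetical helper: covered only if BOTH thresholds are met.
    return (training_flop >= COMPUTE_THRESHOLD_FLOP
            and training_cost_usd >= COST_THRESHOLD_USD)

# A model trained with 2e26 flops at a $150 million cost would be covered:
print(is_covered_model(2e26, 150_000_000))  # → True
# The same compute at a $50 million cost would fail California's cost test:
print(is_covered_model(2e26, 50_000_000))   # → False
```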

Read more on independent.co.uk