The Download: climate responsibility, and AI training data shortages


The UN climate conference wrapped up over the weekend after marathon negotiations that ran well past their scheduled end. The most notable outcome was the establishment of a fund to help poorer countries pay for climate damages, which was widely hailed as a win. Beyond that, though, some leaders are concerned there wasn't enough progress at this year's talks.

Consequently, there's a lot of finger-pointing over who is failing to act fast enough on climate funding. Activists have branded the US the 'colossal fossil,' while US leaders complain about shouldering the blame when China is now the world's leading emitter.

But when it comes to working out who should pay what in liability for climate damages, we need to look beyond current emissions. Add up historical emissions and the picture is clear: the US is by far the largest cumulative emitter, responsible for about a quarter of the total. Read the full story.

—Casey Crownhart

Casey's story is from The Spark, her weekly newsletter delving into the tricky science of climate change. Sign up to receive it in your inbox every Wednesday.

We could run out of data to train AI language programs 

What’s happening? Large language models are one of the hottest areas of AI research right now, with companies racing to release programs like GPT-3 that can write impressively coherent articles and even computer code. But there’s a problem looming on the horizon, according to a team of AI forecasters: we might run out of data to train them on.

How long have we got? As researchers build more powerful models with greater capabilities, they have to find ever more text to train them on. The types of data typically used for these models may be used up in the near future, as early as 2026, according to a paper by researchers from Epoch, an AI research and forecasting organization.
