Kelsey Piper AI
That might have people asking: Wait, what?
GPT-4 can pass the bar exam at the 90th percentile, while the previous model struggled, scoring around the 10th percentile. And on the advanced sommelier theory test, GPT-4 performed better than 77 percent of test-takers. These are stunning results — not just what the model can do, but the rapid pace of progress. Her work is informed by her deep knowledge of the handful of companies that arguably have the most influence over the future of AI. This episode contains strong language.
I. J. Good agreed; more recently, so did Stephen Hawking. These concerns predate the founding of any of the current labs building frontier AI, and the historical trajectory of these concerns is important to making sense of our present-day situation. To the extent that frontier labs do focus on safety, it is in large part due to advocacy by researchers who do not hold any financial stake in AI. But while the risk of human extinction from powerful AI systems is a long-standing concern and not a fringe one, the field of trying to figure out how to solve that problem was until very recently a fringe field, and that fact is profoundly important to understanding the landscape of AI safety work today.

In the last 10 years, rapid progress in deep learning produced increasingly powerful AI systems — and hopes that systems more powerful still might be within reach. More people have flocked to the project of trying to figure out how to make powerful systems safe. There are now quite a few people — fewer than 1,000 — across academic research departments, nonprofits, and major tech labs who are working on the problem of ensuring that extremely powerful AI systems do what their creators want them to do. The enthusiastic participation of the latter suggests an obvious question: If building extremely powerful AI systems is understood by many AI researchers to possibly kill us, why is anyone doing it?

Many of these people are working at cross-purposes, and many of them disagree on what the core features of the problem are, how much of a problem it is, and what will likely happen if we fail to solve it. Some people think that all existing AI research agendas will kill us. Some people think that they will save us. Eliezer Yudkowsky and the Machine Intelligence Research Institute are representative of the former view. One might expect that these disagreements would be about technical fundamentals of AI, and sometimes they are. But surprisingly often, the deep disagreements are about sociological considerations like how the economy will respond to weak AI systems, or about biology questions like how easy it is to improve on bacteria, or about implicit worldviews about human nature, institutional progress, and what fundamentally drives intelligence. That is, perhaps, a discouraging introduction to a survey of open problems in AI safety as understood by the people working on them.
She explores wide-ranging topics, from climate change to artificial intelligence, from vaccine development to factory farms. She writes the Future Perfect newsletter, which you can subscribe to here. She occasionally tweets at kelseytuoc and occasionally writes for the quarterly magazine Asterisk. If you have story ideas, questions, tips, or other info relevant to her work, you can email Kelsey. She can also accept confidential tips on Signal. Ethics statement: Future Perfect coverage may include stories about organizations that writers have made personal donations to. This does not in any way affect the editorial independence of our coverage, and this information will be disclosed clearly when relevant.
The short version of a big conversation about the dangers of emerging technology. Tech superstars like Elon Musk, AI pioneers like Alan Turing, top computer scientists like Stuart Russell, and emerging-technologies researchers like Nick Bostrom have all said they think artificial intelligence will transform the world — and maybe annihilate it. We continually discover ways we can extend our existing approaches to let computers do new, exciting, increasingly general things, like winning at open-ended war strategy games. Current AI systems frequently exhibit unintended behavior. As AI systems get more powerful, unintended behavior may become less charming and more dangerous. Experts have argued that powerful AI systems, whatever goals we give them, are likely to have certain predictable behavior patterns. For all those reasons, many researchers have said AI is similar to launching a rocket. Sign up for the Future Perfect newsletter.
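To make "unintended behavior" concrete, here is a minimal toy sketch. It is not from the article and does not describe any real system; every name and number in it is invented for illustration. A designer wants an agent to walk down a corridor to an exit, but the reward actually measured is points from collected pellets. Because one pellet respawns, the reward-maximizing policy is to sit on that tile forever and never reach the exit.

```python
# Toy illustration of reward misspecification (all names/numbers are hypothetical).
# Intended goal: reach the exit of a tiny corridor.
# Proxy reward actually optimized: +1 per pellet collected; one pellet respawns.
CORRIDOR_LENGTH = 5            # tiles 0..4, exit at tile 4
RESPAWNING_PELLET_TILE = 1
STEPS = 20

def proxy_reward(position):
    # what we *measured*: points collected this step
    return 1 if position == RESPAWNING_PELLET_TILE else 0

def intended_reward(position):
    # what we *wanted*: reaching the exit
    return 10 if position == CORRIDOR_LENGTH - 1 else 0

def greedy_policy(position):
    # one-step-greedy agent: move to whichever reachable tile gives the most
    # proxy reward; ties are broken by staying put (current tile listed first)
    candidates = [position, max(position - 1, 0), min(position + 1, CORRIDOR_LENGTH - 1)]
    return max(candidates, key=proxy_reward)

pos, proxy_total, intended_total = 0, 0, 0
for _ in range(STEPS):
    pos = greedy_policy(pos)
    proxy_total += proxy_reward(pos)
    intended_total += intended_reward(pos)

print(f"proxy reward collected: {proxy_total}")        # high: the agent loops on the pellet tile
print(f"intended reward collected: {intended_total}")  # zero: the exit is never reached
```

The agent is doing exactly what it was rewarded for; the problem is that what it was rewarded for is not what its designer wanted, which is the pattern behind the "charming now, dangerous later" worry above.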
Better read than dead
AI looks increasingly like a technology that will change the world when it arrives. Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. That might take the form of a spectacular sci-fi conquest by AIs using advanced weapons or plagues they invented. Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. We just need to solve a very hard engineering problem first. The field still has lots of open questions — many of which might make AI look much scarier, or much less so — which no one has dug into in depth. I'm not sure if it's the best intro, but it seems like a contender. In one example of unintended behavior, an agent trained to play an Atari game found a scoring exploit that its developers described this way: "For a reason unknown to us, the game does not advance to the second round, but the platforms start to blink and the agent quickly gains a huge amount of points, close to 1 million for our episode time limit." ChatGPT and similar releases from OpenAI were trained with reinforcement learning from human feedback — a technique Christiano helped develop.
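Reinforcement learning from human feedback is easy to gesture at and harder to picture. The sketch below is a toy illustration of just the reward-modeling step, under stated assumptions: it is not OpenAI's or Christiano's code, the "responses" are random feature vectors, and the preference labels are simulated rather than collected from people. It fits a linear reward model to pairwise preferences with a Bradley-Terry style loss; in a real RLHF pipeline, a learned reward model like this would then be used to fine-tune the language model with reinforcement learning.

```python
# Minimal sketch of RLHF's reward-modeling step (toy data, hypothetical setup).
import numpy as np

rng = np.random.default_rng(0)

# Simulated preference data: each "response" is a feature vector, and for each
# pair a (simulated) human labeler preferred the first response over the second.
n_pairs, dim = 500, 8
preferred = rng.normal(loc=0.5, scale=1.0, size=(n_pairs, dim))  # chosen responses
rejected = rng.normal(loc=0.0, scale=1.0, size=(n_pairs, dim))   # rejected responses

w = np.zeros(dim)  # parameters of a linear reward model r(x) = w @ x
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Bradley-Terry model: P(preferred beats rejected) = sigmoid(r(preferred) - r(rejected))
    margin = (preferred - rejected) @ w
    p = sigmoid(margin)
    # Gradient of the average negative log-likelihood with respect to w
    grad = -((1.0 - p)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad

print("learned reward weights:", np.round(w, 2))
print("fraction of pairs ranked correctly by the learned reward:",
      float((((preferred - rejected) @ w) > 0).mean()))
```

The design choice worth noticing is that the reward signal itself is learned from human comparisons rather than hand-written, which is exactly what makes the technique useful for fuzzy goals like "be helpful" and also why the quality of the human feedback matters so much.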
Kelsey Piper is an American journalist who is a staff writer at Vox, where she writes for the column Future Perfect, which covers a variety of topics from an effective altruism perspective. While attending Stanford University, she founded and ran the Stanford Effective Altruism student organization.
AI, he concluded, endangers us. But these grand worries are rooted in research. Narrow AI has seen extraordinary progress over the past few years. But narrow AI is getting less narrow. Who is working on these problems? Apparently, lots of people. There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. But researchers at the major labs are active contributors to both AI safety and AI capabilities research.