okay so you know how there's been a lot of talk about artificial intelligence and how it could pose a huge risk to humanity? like, what if we create something way smarter than us and it decides we're not worth keeping around? it's a pretty terrifying thought, and it's not just sci-fi writers who are worried - some of the biggest names in tech are sounding the alarm too.

one of the people who's been talking about this is stuart russell, a computer scientist who basically wrote the book on ai (literally - he co-authored "Artificial Intelligence: A Modern Approach", the textbook that's considered the foundation of the field). he was brought in to testify in the big trial between elon musk and sam altman, and some of what he said is really striking. basically, he's saying we have no idea how to guarantee that superintelligent systems are safe, and that making them more capable before we've solved that problem is probably not a good idea.

russell testified about what's called "extinction risk" - the chance that an advanced ai could end up wiping out humanity. and the thing is, he's not alone in taking this seriously. other big names in tech, like demis hassabis, the ceo of google deepmind, have voiced the same worry. they point to "race dynamics": every lab is trying to ship more advanced ai systems as fast as possible, because nobody wants to fall behind, and that pressure leaves little room to think about the consequences.

it's pretty sobering to hear these guys admit that we're basically playing with fire here. russell's saying we need to be way more careful, and that we should be aiming for an extinction risk of something like 1 in 100 million per year (which is basically tiny).

Read more: Full article on www.pcgamer.com
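just to put that number in perspective, here's a quick python sketch of what an annual risk compounds to over a century (the 1-in-100-million target is russell's figure from the article; the 1%-per-year comparison rate is a number i made up purely for contrast):

```python
def cumulative_risk(annual_p: float, years: int) -> float:
    """chance of at least one catastrophe over `years`,
    assuming the same independent risk `annual_p` each year."""
    return 1 - (1 - annual_p) ** years

target = 1e-8  # russell's suggested ceiling: 1 in 100 million per year
guess = 0.01   # hypothetical 1%/year rate, my own assumption for contrast

print(cumulative_risk(target, 100))  # ~0.000001 - negligible over a century
print(cumulative_risk(guess, 100))   # ~0.63 - more likely than not
```

the point of the formula 1 - (1 - p)^n is just that genuinely tiny annual risks stay tiny when compounded over decades, while seemingly modest ones really don't.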

What do you think about this?