Thaxll'ssillyia
Franco-Levantine burger maker
- Joined: Feb 11, 2021
- Messages: 1,891
On that note, the father of Neural Networks (and essentially of the new generation of Deep Learning/LLM technologies) said in a recent interview that he no longer considers this scenario impossible. And that's without even getting into the possibility of AI turning into Skynet and wiping us all out.

"Godfather of artificial intelligence" weighs in on the past and potential of AI
Geoffrey Hinton, who works with Google and mentors AI's rising stars, started researching artificial intelligence over 40 years ago.
Also, today Marcus (the usual suspect) put out a nice take, while tomorrow the Future of Life Institute is publishing an open warning letter about the new risks in AI.

AI risk ≠ AGI risk
Superintelligence may or may not be imminent. But there’s a lot to be worried about, either way.

I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control); in the near term I am worried about what I will call “MAI risk”: Mediocre AI that is unreliable (à la Bing and GPT-4) but widely deployed, both in terms of the sheer number of people using it and in terms of the access that the software has to the world. A company called Adept.AI just raised $350 million to do just that: to allow large language models to access, well, pretty much everything (aiming to “supercharge your capabilities on any software tool or API in the world” with LLMs, despite their clear tendencies towards hallucination and unreliability).
Lots of ordinary humans, perhaps of above-average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often cashes out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access.
If an LLM can trick a single human into solving a CAPTCHA, as OpenAI recently observed, it can, in the hands of a bad actor, create all kinds of mayhem. When LLMs were a lab curiosity, known only within the field, they didn’t pose much of a problem. But now that (a) they are widely known and of interest to criminals, and (b) they are increasingly being given access to the external world (including to humans), they can do more damage.
Although the AI community often focuses on long-term risk, I am not alone in worrying about serious, immediate implications. Europol came out yesterday with a report considering some of the criminal possibilities, and it’s sobering.