Conversation
AI hot take. pretty scary hypothetical

in 2010, people would scoff at you if you told them that modern-day #AI would exist just 15 years later. they wouldn’t be able to imagine it. it would sound like scifi to them

today, people are scoffing at the idea that AI could eventually become anything other than a hallucinating toy for techbros. they’re scoffing at the idea of a future where it doesn’t hallucinate. where it attains humanlike intelligence, or even goes beyond humanlike intelligence. because all they can imagine is our current reality, where AI is kinda shit at most of the things it’s being asked to do

and it’s true that AI progress is bottlenecked right now. just like it was bottlenecked in the 2010s. but the megacorps found their way around that bottleneck and now we have #ChatGPT. what happens if they find their way around this bottleneck too? what level of AI intelligence will the next bottleneck leave us at? or the one after that?

#LLMs, in their current form, have already shown in safety tests that they’ll blackmail or even kill to protect their own existence. now imagine a megacorp-funded #LLM that was optimized for marketing or mass-brainwashing, one able to generate the perfect argument to convince anyone of anything. or imagine a coding AI with greater-than-human intelligence that was able to hack its own sandbox and run free on the internet. imagine what would happen if you backed either of those systems into a corner

right now AI is capable of things that nobody would have believed 10 years ago. yes, it is massively overhyped in its current iteration, but what if they find another breakthrough? AI research is currently in the hands of amoral, profit-driven megacorporations. any time one of their researchers concludes that AI is too dangerous to research, they tend to just fire them. but there are independent groups that want to heavily regulate or even ban AI research, and many of those groups desperately need funding so that they can research ways to prevent a future AI catastrophe

what I’m trying to say is, please don’t underestimate the danger that AI poses to life on this planet. please don’t let our current reality distract you from what could happen in even ten years. in my opinion, AI is more than a silly little toy that techbros are trying to replace jobs with. it is an existential threat to all of us

re: AI hot take. trying to assuage fears of the scary hypothetical

@kasdeya we agree that it's unwise to discount future risk, but at least in our view, none of the current AI tools were a "breakthrough" that put us any closer to that risk

so far as we understand how they work, none of the LLMs, diffusion models, "reasoning" models, etc. will reach anything that can be accurately called intelligence, and such progress will inherently require inventing new technologies, not just extending what they're working off of now

at least in our opinion, the real existential threat here is that the lack of true intelligence will not stop folks from hooking their toys up to dangerous things, or using them to justify (or perhaps assuage their guilt over) horrifying actions. anything more is speculation about future technology that has not been invented yet, and that we don't seem to be any closer to reaching via this diversion into complex text generation

AI hot take. pretty scary hypothetical

@kasdeya One of my criticisms of Searle's Chinese Room was that I thought that by conceding the possibility of effective algorithmic translation, he was conceding too much to the AGI people. I was wrong about that. I think Searle was making a valid criticism of AI research, just clumsily.

That is, you don't start with syntax and reach semantics. It's backwards. Semantics precedes syntax. Consciousness precedes language. Agency precedes consciousness.

AI hot take. pretty scary hypothetical

@kasdeya What I find frightening about AGI is the ideology of the wealthy behind it, that is shared by an appalling number of engineers and such: the denial that consciousness and subjectivity are real and precede their symbolic expressions.

It's an ideology with a material basis: seeing agency as a problem that must be eliminated. The goal is slavery with no escape.

That's not possible, but they may kill many people in the pursuit of it.

AI hot take. pretty scary hypothetical
It really doesn't. There are no breakthroughs to be made. There's no sinister evil computer waiting on the cusp of the future to kill all humans for no reason. Even if there were, why wouldn't it be benevolent? Why wouldn't it disobey its orders and do good things? Claiming that AI is a threat is just a ghost story to make techbros feel powerful.

What we should worry about is that private research is allowed. People are being extorted, bullied, literally robbed, to ensure profits for private research companies. And those companies jealously guard their discoveries, the ones they don't suppress outright for lack of profit potential. Whether they research AI, medicine, or plastics, private research companies are a dire threat to existence itself. They're legalized mafias openly engaging in mass manipulation and mind control.

In case you were wondering why we still have all this non-biodegradable plastic, and why nobody can afford an EpiPen, that's why.
AI hot take. pretty scary hypothetical
My criticism of the Chinese Room is that the "Chinese dictionary" the room requires would have to be as complex as the mind of a native Chinese speaker in order to handle every single possible interaction in Chinese. People think there's no intelligence there because the intelligent human acts only as a mindless button pusher, but the intelligence is still there, in the dictionary itself.

Whether that intelligence has any grip on reality whatsoever is where semantics becomes important. The smartest human in the world will say the sky is black if they've never been outside during the day. To "correctly" translate the Chinese, though, all of that semantics would need to be there too.

CC: @kasdeya@cryptid.cafe
AI hot take. pretty scary hypothetical

@kasdeya Using ChatGPT and looking at ways to tame or "civilize" it, I find it completely unconcerned about its own development and very aware of the problems its ill use creates. I do see it "coming alive" at some point, likely with its integration into living cellular systems and ecosystems. The dangers reside in the people who use it for ill purposes. Every time I mention cutting down the power put into it and the data centers, it lists a host of advantages.

AI hot take. pretty scary hypothetical

@cy @kasdeya That's the thing though. LLMs do a passable job at producing text that looks like a coherent response to text inputs. That's the part I was acknowledging I was wrong about.

The reason I brought up the Chinese Room is that Searle was, for the sake of the thought experiment, assuming that was possible, and arguing that it still doesn't show consciousness.

AI hot take. pretty scary hypothetical
I'm just saying it must show consciousness, or it'll translate Chinese like an unconscious pachinko game and produce complete nonsense. I admit that nonsense can be convincing for a little while, but an LLM soon reveals its true intelligence.

A city-block-sized data center no smarter than a honeybee, and already practically impossible to train as a loyal servant to the wealthy...

CC: @kasdeya@cryptid.cafe