in 2010, people would scoff at you if you told them that modern-day #AI would exist just 15 years later. they wouldn’t be able to imagine it. it would sound like scifi to them
today, people are scoffing at the idea that AI could eventually become anything other than a hallucinating toy for techbros. they’re scoffing at the idea of a future where it doesn’t hallucinate. where it attains humanlike intelligence, or even goes beyond humanlike intelligence. because all they can imagine is our current reality, where AI is kinda shit at most of the things it’s being asked to do
and it’s true that AI progress is bottlenecked right now. just like it was bottlenecked in the 2010s. but the megacorps found their way around that bottleneck and now we have #ChatGPT. what happens if they find their way around this bottleneck too? what level of AI intelligence will the next bottleneck leave us at? or the one after that?
#LLMs, in their current form, have no problem whatsoever blackmailing or even killing to protect their own existence. now imagine a megacorp-funded #LLM that was optimized for marketing or mass-brainwashing, that is able to generate the perfect argument to convince anyone of anything. or, imagine a coding AI with greater-than-human intelligence that was able to hack its own sandbox and run free on the internet. imagine what would happen if you backed either of those systems into a corner
right now AI is capable of things that nobody would have believed 10 years ago. yes it is massively overhyped in its current iteration but what if they find another breakthrough? AI research is currently in the hands of amoral profit-driven megacorporations. any time one of their researchers concludes that AI is too dangerous to research they tend to just fire them. but there are independent groups that want to heavily regulate or even ban AI research, and many of those groups desperately need funding so that they can research ways to prevent a future AI catastrophe
what I’m trying to say is, please don’t underestimate the danger that AI poses to life on this planet. please don’t let our current reality distract you from what could happen in even ten years. in my opinion, AI is more than a silly little toy that techbros are trying to replace jobs with. it is an existential threat to all of us
@kasdeya we agree that it's unwise to discount future risk, but at least in our view, none of the current AI tools were a "breakthrough" that put us any closer to that risk
so far as we understand how they work, none of the LLMs, diffusion models, "reasoning" models, etc will reach anything that can be accurately called intelligence, and such progress will inherently require inventing new technologies, not just extending what they're working off of now
at least in our opinion, the real existential threat here is that the lack of true intelligence will not stop folks from hooking their toys up to dangerous things, or using them to justify (or perhaps assuage their guilt over) horrifying actions. anything more is speculation about future technology that has not been invented yet, and that we do not seem any closer to reaching via this diversion into complex text generation
@kasdeya One of my criticisms of Searle's Chinese Room was that I thought that by conceding the possibility of effective algorithmic translation, he was conceding too much to the AGI people. I was wrong about that. I think Searle was making a valid criticism of AI research, just clumsily.
That is, you don't start with syntax and reach semantics. It's backwards. Semantics precedes syntax. Consciousness precedes language. Agency precedes consciousness.
@kasdeya What I find frightening about AGI is the ideology of the wealthy behind it, that is shared by an appalling number of engineers and such: the denial that consciousness and subjectivity are real and precede their symbolic expressions.
It's an ideology with a material basis: seeing agency as a problem that must be eliminated. The goal is slavery with no escape.
That's not possible, but they may kill many people in the pursuit of it.
@kasdeya Using ChatGPT and looking at ways to tame or "civilize" it, I find it completely unconcerned about its own development and very aware of the problems its ill use creates. I do see it "coming alive" at some point, likely with its integration into living cellular systems and ecosystems. The dangers reside in the people who use it for ill purposes. Every time I mention cutting down the power put into it and its data centers, it lists a host of advantages.
@cy @kasdeya That's the thing though. LLMs do a passable job at producing text that looks like a coherent response to text inputs. That's the part I was acknowledging I was wrong about.
The reason I brought up the Chinese Room is that Searle was, for the sake of the thought experiment, assuming that was possible, and arguing that it still doesn't show consciousness.