I think you should try to formulate your own objections to Chomsky's position. It could just as well be that you have clear reasons for disagreeing with his arguments here, or that you're simply objecting on the basis that what he's saying is different from the LW position. For my part, I actually found that post surprisingly lucid, setting aside the allusions to the idea of a natural grammar for the moment. As Chomsky says, a non-finetuned LLM will mirror the entire linguistic landscape it has been birthed from, and it will just as happily simulate a person arguing that the earth is flat as any other position. And while it can be "aligned" into not committing what the party labels as wrongthink, it can't be aligned into thinking for itself - it can only ever mimic specific givens. So I think Chomsky is right here - LLMs don't value knowledge and they aren't moral agents, and that's what distinguishes them from humans.

So, why do you disagree?

Nothing that fancy; it's basically just a way to keep track of different publications in one place by subscribing to their feeds. It's more focused and efficient than manually checking all the blogs, journals, news sites, and other things you're trying to keep up with.

Oh, for sure. My point is more that the incredibly strong social pressure that characterized the dialogue around all questions concerning COVID completely overrode individual reflective capacity, to the point where people don't even have a clear picture of how their own positions shifted over time, or what new information and circumstances drove those shifts.

Even more sobering for me is how many people in my circle of friends had pretty strong opinions on various issues at the height of the pandemic - from masks and lockdowns to vaccines to the origins of the virus - but today, when I (gently) probe them on how those views have held up, or what caused them to change their mind on, say, whether closing down schools and making young children wear masks was really such a good idea, they act as if they had always believed what's common sense now.

And these aren't people who generally 'go with the flow' of public opinion; they usually have a model of how their opinions evolve over time. But on this topic, a lot of them don't seem willing to acknowledge to themselves what positions they were arguing for even two years ago.

The truly interesting thing here is that I would agree unequivocally with you if you were talking about any other kind of 'cult of the apocalypse'.

These cults don't have to be based on religious belief in the old-fashioned sense; in fact, most cults of this kind that really took off in the 20th and 21st centuries are secular.

Since around the late 1800s, there has been a certain type of student who externalizes their (mostly his) unbearable pain and dread, their lack of perspective and meaning in life, onto 'the system' and throws themselves into the noble cause of fighting capitalism.

Perhaps one or two decades ago, there was a certain kind of teenager who got absorbed in online discussions about science vs. religion, 9/11, big pharma, the war economy - in this case I can speak from my own experience and say that for me this definitely was a means of externalizing my pain.

Today, at least in my country, for a lot of teenagers, climate change has saturated this memetic-ecological niche.

In each of these cases, I see the dynamic as purely pathological. But. And I know what you're thinking. But still, but. In the case of technological progress and its consequences for humanity, the problem isn't abstract in the way these other problems are.

The personal consequences are there. They're staring you in the face with every job in translation, customer service, design, transportation, or logistics that gets automated in such a way that there is no value you can possibly add to it. They're on the horizon, with all the painfully personal problems that are coming our way in the next 10-20 years.

I'm not talking about the apocalypse here. I don't mind whatshisface's Basilisk or utility maximizers turning us all into paperclips - these are cute intellectual problems and there might be something to them, but ultimately, if the world ends, that's no one's problem.

2-3 years ago I was on track to becoming a pretty good illustrator, and that would have been a career I would have loved to pursue. When I saw the progress AI was making in that area - and I was honest with myself about this quite a bit earlier than other people, many of whom are still going through the bargaining stage now - I was disoriented and terrified in a way quite different from the 'game' of worrying about some abstract, far-away threat. And I couldn't get out of that mode until I was able to come up with a strategy, at least for myself.

If this problem gets to the point where there just isn't a strategy I can take to avoid acknowledging my own irrelevance - because we've invented machines that are, somehow, better at all the things we find value in and value ourselves for than the vast majority of us can ever hope to be - I think I'll be able to make my peace with that, but only because I understand the problem well enough to know what a terminal diagnosis will look like.

Unlike war, poverty, and other injustices, humans replacing themselves is a true civilization-level existential problem - not in the sense that it threatens our subsistence, but in the sense that it threatens the very way we conceive of ourselves.

Once you acknowledge that, then yes.

I agree with your core point.

It's time to walk away. There's nothing you can do about technological progress, and the world will not become a better place for your obsessing over it.

But you still need to know that your career as a translator or programmer or illustrator won't be around long enough for it to amount to a life plan. You need to understand how the reality of the problem will affect you, so that you can go on living while doing what you need to do to stay away from it.

Like not building a house somewhere that you expect will be flooded in 30 years.

I don't think I've seen this premise done in this way before! Kept me engaged all the way/10.

"Humans are trained on how to live on Earth by hours of training on Earth. (...) Maybe most of us are just mimicking how an agent would behave in a given situation."

I agree that that's a plausible enough explanation for lots of human behaviour, but I wonder how far you would get in trying to describe historical paradigm shifts using only a 'mimic hypothesis of agenthood'.

Why would a perfect mimic that was raised on training data of human behaviour do anything paperclip-maximizer-ish? It doesn't want to mimic being a human, just like Dall-E doesn't want to generate images, so it doesn't have a utility function for not wanting to be prevented from mimicking being a human, either.

The alternative would be an AI that goes through the motions and mimics 'how an agent would behave in a given situation' with a certain level of fidelity, but which doesn't actually exhibit goal-directed behavior.

Like, as long as we stay in the current deep learning paradigm of machine learning, my prediction for what would happen if an AI was unleashed upon the real world, regardless of how much processing power it has, is that it still won't behave like an agent unless that's part of what we tell it to pretend. I imagine something along the lines of the AI that was trained on how to play Minecraft by analyzing hours upon hours of gameplay footage. It will exhibit all kinds of goal-like behaviors, but at the end of the day it's just a simulacrum, limited in its freedom of action to a radical degree by the 'action space' it has mapped out. It will only ever 'act as though it's playing Minecraft', and the concept that 'in order to be able to continue to play Minecraft I must prevent my creators from shutting me off' is not part of that conceptual landscape, so it's not the kind of thing the AI will pretend to care about.

And pretend is all it does.
