I can't tell if I'm just getting old, but the last 2 major tech cycles (cryptocurrency and AI) have both seemed like net negatives for society. I wonder if this is how my parents felt about the internet back in the 90s.
Interestingly, both technologies also supercharge scams - one by providing a way to cash out with minimal risk, the other by making convincing human interaction easier to fake.
tudorizer 2 hours ago [-]
This parallel is something that I've been mulling over for the better part of this year.
Are we simply getting old and bitter?
Personally, I would add a previous cycle to this: social media. Although people were quick to point at the companies that were sparked and empowered by unprecedented distribution.
Are we really better or worse off than a few decades ago?
tines 2 hours ago [-]
> Are we simply getting old and bitter?
No, we are getting wiser. It's not bitterness to look at a technology with a critical eye and see the bad effects as well as the good. It's not foolish to judge that the negative effects outweigh the positive. It's a mark of maturity. "But strong meat belongeth to them that are of full age, even those who by reason of use have their senses exercised to discern both good and evil."
im3w1l 2 hours ago [-]
We know that people can easily end up irrational either way. Some people more naively positive and others more cynical and bitter. Maybe it's even possible to make both mistakes at once: The same person can see negatives that aren't there, positives that won't happen, miss risks, and miss opportunities.
We cannot say "I'm critical therefore I'm right", nor "I'm an optimist therefore I'm right". The right conclusion comes from the right process: gathering the right data and thinking it over carefully while trying to be as unbiased and realistic as possible.
marcosdumay 1 hour ago [-]
> Are we simply getting old and bitter?
For crypto, no. It's basically only useful for illegal actions, so if you live in a society where "illegal" is well correlated with "bad", you won't see any benefit from it.
The case for LLMs is more complicated. There are positives and negatives. And the case for social networks is even more complicated, because they are objectively not what they used to be anymore.
walterbell 52 minutes ago [-]
> It's basically only useful for illegal action
Blockchain assets ("controllable electronic records") are defined in the UCC (Uniform Commercial Code) Article 12 that regulates interstate commerce, https://news.ycombinator.com/item?id=33949680#33951026. Some states have already ratified the changes, others are in progress.
U.S. federal stablecoin legislation was passed earlier this year.
RicoElectrico 2 hours ago [-]
Low interest rates favor parasitic middlemen, not those who actually do stuff.
risyachka 2 hours ago [-]
> Are we simply getting old and bitter?
Maybe, but it has nothing to do with change itself.
Change can be either positive or negative. Often it is objectively negative and can stay that way for decades.
tudorizer 2 hours ago [-]
My theory is that bitterness, at least this particular flavour, stems from seeing this negative impact, more than anything.
Change itself is a must. It's nature's law.
rnxrx 2 hours ago [-]
I think the progression of sentiment is basically the same. There were lots of folks pushing the agenda that connecting us all would somehow bring about the evolution of the human race by putting information at our fingertips; that optimism was eventually followed by concern about kids getting obsessed/porn-saturated.
The same cycle happened (is happening) with crypto and AI, just in more compressed timeframes. In both cases, an initial period of optimism transitioned into growing concerns about the negative effects on our societies.
The optimistic view would be that the cycle shortens so much that the negatives of a new technology are widely understood before that tech becomes widespread. Realistically, we'll just see the amorality and cynicism on display and still sweep it under the rug.
j-bos 2 hours ago [-]
> Interestingly, both technologies also supercharge scams
Similar for the internet back in the 90s: Nigerian princes were provided a means to reach exponentially more people, faster.
ysavir 2 hours ago [-]
A large part of it is that we maxed out a lot of how communication tech can impact daily life, at least in terms of communication, but economically and culturally we got in the habit of looking for new and exciting improvements to daily life.
The 19th and 20th centuries saw a huge shift in communication. We went from snail mail to telegrams to radio to phones to television to internet on desktops to internet on every person wherever they are. Every 20-30 years some new tech made it easier, cheaper, and faster to get your message to an intended recipient. Each of these was a huge social shift in terms of interpersonal relationships, commerce, and diminishing cycle times, and we've grown to expect these booms and pivots.
But there isn't much of anywhere to go past "can immediately send a message to anyone anywhere." It's effectively an end state. We can no longer take existing communication services and innovate on them by merely offering that service using the new revolutionary tech. But tech sectors are still trying to recreate the past economic booms by pushing technologies that aren't as revolutionary or aren't as promising and hyping them up to get people thinking they're the next stage of the communication technology cycle.
rightbyte 52 minutes ago [-]
> Every 20-30 years some new tech made it easier, cheaper, and faster to get your message to an intended recipient.
No, it has regressed now. We are probably back to the level of the 1950s, before telephones became common.
People don't answer unknown numbers and are not listed in the telephone book.
When I was a kid in the 90s I could call almost anyone in my town by looking them up in the phone book.
bsenftner 2 hours ago [-]
> A large part of it is that we maxed out a lot of how communication tech can impact daily life, at least in terms of communication,
Perhaps for uneducated casual communications, lacking in critical analysis. The majority of what passes for "communication" is misunderstood, misstated, omits key critical aspects, and speaks from an uninformed and unexamined position... the human race may "communicate", but it does so very poorly, to the degree that much of the human activity in our society is placeholder and good enough while being, in fact, terrible and damaging.
adastra22 2 hours ago [-]
It’s how I feel about the internet and social media now.
throwaway22032 3 hours ago [-]
They are both force multipliers. The issue of course is that technology almost always disproportionately benefits the more intelligent / ruthless.
podgietaru 3 hours ago [-]
I think the biggest problem with both technologies is how many people seem to think this.
Crypto was a way for people who think they’re brilliant to engage in gambling.
AI is a way for “smart” people to create language to make their opinions sound “smarter”
add-sub-mul-div 2 hours ago [-]
I'm not generally anti-capitalist, but what capitalism has become at this point in history means that technology is no longer for helping people or helping society.
Imagine the DVR being invented today. A commercial device that helps you skip ads. It would never be allowed to happen.
neutronicus 1 hour ago [-]
> Imagine the DVR being invented today. A commercial device that helps you skip ads. It would never be allowed to happen.
That's arguably what AI is - it compressed the internet so that you can extract StackOverflow answers without clicking through all the fucking ads that await you on the journey from search bar to the answer you were looking for.
You can of course expect it, over the next decade or so, to interpose ads between you and your goal in the same way that Google and StackOverflow did from 2010-now.
But for the moment I think it's the exact opposite of your thesis. The AI companies are in cut-throat capture-market-share mode so they're purposely skipping opportunities to cram ads down your throat.
add-sub-mul-div 5 minutes ago [-]
Of course LLMs today are the most consumer-friendly they're ever going to be. It's irresponsible not to look ahead to the inevitable 180.
AlexandrB 1 hour ago [-]
Yes, at some point mainstream technology turned on the users. So much modern tech seems to be about exerting control or "monetizing" instead of empowering.
Refreeze5224 2 hours ago [-]
I am generally anti-capitalist, and a big reason is that I don't think capitalism, inherently and fundamentally, can become anything other than what it is now. The benefit it's provided is rarely accurately weighed against the harms, and for people who disproportionately benefit, like most here on HN, it's even harder to see the harms.
Anti-capitalist sentiment was incredibly widespread in the US during the 19th century through the 1930s, because far more people were personally impacted, and most needed look no further than their own lives to see it.
If nothing else, capitalism has become more sophisticated in disguising its harms, and in acclimating people to them to such an extent that many become entirely incapable of seeing any harm at all, or even of imagining any other way for a society to be structured, despite humanity having existed for 100,000+ years.
AlexandrB 1 hour ago [-]
Capitalism has many harms, but what's the alternative? Communism is worse - much worse.
kohsuke 3 hours ago [-]
So they ran 5 different experiments to test the hypothesis, and they were nothing like what I imagined.
For example, in one study, they divide participants into two groups, having one group watch https://www.youtube.com/watch?v=fn3KWM1kuAw (which highlights the high socio-emotional capabilities of a robot) while the other watches https://www.youtube.com/watch?v=tF4DML7FIWk (which highlights the low socio-emotional capabilities of a robot).
They are then asked if they agree or disagree with a (presumably hypothetical?) company's proposal to reduce employees' welfare, such as replacing a meal with a shake. The two groups showed different preferences.
This makes me think about that old question of whether you thank the LLM or not. That is treating LLMs more like humans, so if what this paper found holds, maybe that'd nudge our brains subtly toward dehumanizing other real humans!? That's so counterintuitive...
sillysaurusx 3 hours ago [-]
Do you understand how they chose the two groups? And why show one group one video, and the other group the other video? Shouldn’t both groups be shown the same video, then check whether the group division method had any impact on the results? E.g. if group one was dance lovers and group two were dance haters, you wouldn’t get any data on the haters since they were shown the parkour video instead of the dance video.
Also, interesting bit: "Participants in the high (vs. low) socio-emotional capability condition showed more negative treatment intentions toward employees"
daveguy 2 hours ago [-]
Apparently you do not understand how they chose the two groups. Group identity was not based on a survey or any attribute of the participating individuals.
Low and high socio-emotional groups refer to whether the group was shown the low or high socio-emotional video. The pre-test and exclusion based on lack of attention and instruction-following were performed before group assignment, which was presumably random for each individual.
sillysaurusx 2 hours ago [-]
Thanks! You’re right, I didn’t understand.
cryoshon 3 hours ago [-]
To the point of the paper, it has been a somewhat disturbing experience to see otherwise affable superiors in the workplace "prompt" their employees in ways that are obviously downstream of their (very frequent) LLM usage.
shredprez 3 hours ago [-]
I started noticing this behavior a few months ago and whew. Easy to fix if the individual cares to, but very hard to ignore from the outside.
Unsolicited advice for all: make an effort to hold onto your manners even with the robots or you'll quickly end up struggling to collaborate with anyone else.
chuckadams 2 hours ago [-]
I still say "please" to the AI assistant so that I'll be among the last to be made into paperclips.
topaz0 1 hour ago [-]
I'd take this advice one step further: just don't use the robots
AlienRobot 2 hours ago [-]
What does that sound like?
righthand 1 hour ago [-]
Ask ChatGPT for ways to instruct an employee on a task.
lordnacho 3 hours ago [-]
One very new behavior is the dismissal of someone's writing as the work of AI.
It's sadly become quite common on internet forums to suppose that some post or comment was written by AI. It's probably true in some cases, but people should ask themselves how the cost/benefit of calling it out looks.
SkyeCA 3 hours ago [-]
Unfortunately, it's the correct thing to do. Just as in the past you shouldn't have believed any stories told on the internet, it's now reasonable to assume any image/text you come across wasn't created by a human, or, in the case of images, depicts an event that never happened.
The easiest way to protect myself these days is to assume the worst about all content. Why am I replying to a comment in that case? Consider it a case of yelling into the void.
AftHurrahWinch 1 hour ago [-]
1. A bot-generated argument is still an argument. I can't make claims about its truth or falsity based on the enunciator; that's simply ad hominem.
2. A bot-generated image is not a record of photon-emissions in the physical world. When I look at photos, they need to be records of the physical world, or they're a creative work.
I think you can't rationally apply the same standard to these two things.
rightbyte 3 minutes ago [-]
> 1. A bot-generated argument is still an argument. I can't make claims about its truth or falsity based on the enunciator; that's simply ad hominem.
In classical forums, arguments are often some form of stamina contest, and bots will always win those.
But yeah, it is like a troll accusation.
foobiekr 52 minutes ago [-]
The problem is the bullshit asymmetry and good-faith engagement.
AI users aren’t investing actual work and can generate reams of bullshit that put the burden on others to untangle. And they also aren’t engaging in good faith.
AftHurrahWinch 16 minutes ago [-]
Some discussions are dialectic, where a group is cooperatively reasoning toward a shared truth. In dialectical discussions, good faith is crucial. AI can't participate in dialectical work. Most public discourse is not dialectical, it is rhetorical. The goal is to persuade the audience, not your interlocutor. You aren't "yelling into the void", you're advocating to the jury.
Rhetoric is the model used in debate. Proponents don't expect to change their Opponent's mind, and vice versa. In fact, if your opponent is obstinate (or a non-sentient text generator), it is easier to demonstrate the strength of your position to the gallery.
People reference Brandolini's "bullshit asymmetry principle" but don't differentiate between dialectical and rhetorical contexts. In a rhetorical context, the strategy is to demonstrate to the audience that your interlocutor is generating text with an indifference to truth. You can then pivot, forcing them to defend their method rather than making you debunk their claims.
ncr100 1 hour ago [-]
As a person with trust issues, I find this adaptation to the change in status-quo quite natural for me.
nineplay 2 hours ago [-]
My partner has become tiresome about this - even if I were to tell them that I responded to your comment on HN, they'd go "You probably just responded to a bot".
Are bots really infiltrating HN and making constructive non-inflammatory comments? I don't find it at all plausible but "that's just what they want you to think".
topaz0 1 hour ago [-]
I've seen chatgpt output here as comments for sure. In some cases obvious, in other cases borderline. I wouldn't guess that it's a major fraction of comments, but it's there.
megamix 3 hours ago [-]
How do you guys read through an article this fast after it's submitted? I need more than 1 hr to think this through.
bee_rider 3 hours ago [-]
So far (as of 15 or so minutes after your comment) we have only one top-level comment that really indicates that the poster has started trying to read the paper seriously, Kohsuke’s post: https://news.ycombinator.com/item?id=44912783
They actually described the methodology at least (note: I also haven’t fully read the paper yet, but I wanted to post in support of you not having a “take” yet, haha).
broast 3 hours ago [-]
I'm a bot
jncfhnb 3 hours ago [-]
Ask AI to summarize and write a response
skeezyboy 3 hours ago [-]
'cos it's mostly fluff you can skip over
skeezyboy 3 hours ago [-]
Essentially they did a bunch of surveys. Apparently this is science.
fontsgenerator 2 hours ago [-]
Interesting point: AI can automate tasks, but we need to ensure it doesn't strip away human judgment and empathy.
netsharc 2 hours ago [-]
On the opposite side (i.e. the side of what Bender called meatbags), there are a lot of jobs where judgment and empathy are not allowed. E.g. TSA agents examining babies for bombs in case they're terrorists -- they were told "You must do this to every passenger, no questions asked", and making a decision means deviating from their job description and risking losing it.
cm2012 3 hours ago [-]
Interesting theory with insufficient evidence.
temporallobe 3 hours ago [-]
As a Black Sabbath fan, I love that they envisioned dystopian stuff like this. Check out their Dehumanizer album.
cratermoon 4 hours ago [-]
I'm unwilling to accept the discussion and conclusions of the paper because of the framing of how LLMs work.
> socio-emotional capabilities of autonomous agents
The paper fails to note that these 'capabilities' are illusory. They are a product of how the behaviors of LLMs "hack" our brains and exploit the hundreds of thousands of years of evolution of our equipment as a social species. https://jenson.org/timmy/
kohsuke 3 hours ago [-]
But that's beside the point of the paper. They are talking about how humans' perceiving the "socio-emotional capabilities of autonomous agents" changes their behavior toward other humans. Whether people get that perception because "LLMs hack our brain" or something else is largely irrelevant.
Isamu 3 hours ago [-]
No, I think the thesis is that people falsely perceive the agents as highly human and, as a result, assimilate downward toward the agent’s bias and conclusions.
That is the dehumanization process they are describing.
chrisweekly 3 hours ago [-]
+1 Insightful
Your "timmy" post deserves its own discussion. Thanks for sharing it!
stuartjohnson12 3 hours ago [-]
Your socio-emotional capabilities are illusory. They are a product of how craving for social acceptance "hacks" your brain and exploits the hundreds of thousands of years of evolution of our equipment as a social species.
cwmoore 2 hours ago [-]
Practicing social skills is often disillusioning. Marvin, the sad robot (https://en.m.wikipedia.org/wiki/Marvin_the_Paranoid_Android), offers a prediction after autocompleting Wikipedia:
it's a next word predictor. if you've been convinced it has a brain, i have some magic beans you'd be interested in
stuartjohnson12 2 hours ago [-]
and if it is a sufficiently accurate next word predictor, then it may accurately predict what an agent with socio-emotional skills would use as their next word, in which case it will have exhibited socio-emotional skill.
empath75 53 minutes ago [-]
Consider whether it is possible to complete sentences about the world coherently, in a human-like way, without knowing or thinking about the world.
ACCount37 2 hours ago [-]
You're saying "next word predictor" as if it's some kind of gotcha.
You're typing on a keyboard, which means you're nothing but a "next keypress predictor". This says very little about how intelligent you are.
skeezyboy 53 minutes ago [-]
It's not my only trick, though. The human brain engages in all sorts of cognitive enterprises, language formation being just one of them. LLMs are essentially statistical predictors - which is indeed part of what a human brain does, but only a small sliver of its abilities.
ACCount37 46 minutes ago [-]
And why does it matter?
For all I know, humans are "essentially statistical predictors" too - and all of their insistence on being something greater than that is anthropocentric copium.
kingkawn 3 hours ago [-]
The paper literally spells out that this is a perception of the user, and that this perception is the root of the impact.
cratermoon 3 hours ago [-]
Perhaps I missed it, but could you help me see where specifically the paper acknowledges or asserts that LLMs do not have these capabilities? I see where the paper repeatedly mentions perceptions, but I also see, right at the beginning, "Our research reveals that the socio-emotional capabilities of autonomous agents lead individuals to attribute a humanlike mind to these nonhuman entities" [emphasis added]. And in multiple places in the paper, for example in the section titled "Theoretical Background", under the subtitle 'Socio-emotional capabilities in autonomous agents increase "humanness"', LLMs are implied to have at least low levels of these capabilities, contrasted with the perception that they have high levels.
In brief, the paper consistently but implicitly regards these tools as having at least minimal socio-emotional capabilities, and holds that the problem is humans perceiving them as having higher levels.
cootsnuck 1 hour ago [-]
I can’t tell if you’re being disingenuous, but the very first sentence of the abstract literally says the word "simulate":
> Recent technological advancements have empowered nonhuman entities, such as virtual assistants and humanoid robots, to simulate human intelligence and behavior.
In the paper, "socio-emotional capability" is serving as a behavioral/operational label. Specifically, the ability to understand, express, and respond to emotions. It's used to study perceptions and spillovers. That's it.
The authors manipulate perceived socio-emotional behavior and measure how that shifts human judgments and treatment of others.
Whether that behavior is "illusory" or phenomenally real is orthogonal to the research scope and doesn’t change the results. But regardless, as I said, they quite literally said "simulate", so you should still be satisfied.
empath75 52 minutes ago [-]
Whether they have those capabilities or not is totally irrelevant to the conclusions of the paper, because it is a study of people and not AI.
kingkawn 2 hours ago [-]
“…leads individuals to attribute a human-like mind to these nonhuman entities.”
It is the ability of the agent to emulate these social capacities that leads users to attribute human-like minds. There is no assertion whatsoever that the agents have a mind, but that their behavior leads some people to that conclusion. It’s in your own example.