There is a discussion on Hacker News, but feel free to comment here as well.


This is the best summary I could come up with:


In the recent study, posted to arXiv at the end of October, UC San Diego researchers Cameron Jones (a PhD student in Cognitive Science) and Benjamin Bergen (a professor in the university’s Department of Cognitive Science) set up a website called turingtest.live, where they hosted a two-player implementation of the Turing test over the Internet to see how well GPT-4, when prompted in different ways, could convince people it was human.

Surprisingly, ELIZA, developed in the mid-1960s by computer scientist Joseph Weizenbaum at MIT, scored relatively well during the study, achieving a success rate of 27 percent.

In a post on X, Princeton computer science professor Arvind Narayanan wrote, "Important context about the ‘ChatGPT doesn’t pass the Turing test’ paper."

While ELIZA’s conservative responses generally lead to the impression of an uncooperative interlocutor, they prevent the system from providing explicit cues such as incorrect information or obscure knowledge.

More successful strategies involved speaking in a non-English language, inquiring about time or current events, and directly accusing the witness of being an AI model.

“Nevertheless,” they write, “we argue that the test has ongoing relevance as a framework to measure fluent social interaction and deception, and for understanding human strategies to adapt to these devices.”


The original article contains 904 words; the summary contains 204 words, saving 77%. I’m a bot and I’m open source!

