We already know from TOS that Multitronic computers are able to develop sapience, with the M-5 computer being specifically designed to “think and reason” like a person, and built around Dr. Daystrom’s neural engrams.

However, we also know from Voyager that the holomatrix of their Mk 1 EMH also incorporates Multitronic technology, and from DS9 that it’s also used in mind-reading devices.

Assuming that the EMH is designed to be more or less a standard hologram with some medical knowledge added in, it shouldn’t have come as a surprise that holograms were either sapient themselves or capable of developing sapience. Sapience would only be a logical possibility if technology that allowed human-like thought and reasoning had been built into holograms in the first place.

If anything, it is more of a surprise that sapient holograms like the Doctor or Moriarty hadn’t happened earlier.

The cool thing about the Doctor’s overall personal arc is that I think most fans would agree that he probably wasn’t sentient in the early episodes, probably was by the end, and that there’s no clear moment when it changes (although I submit the events of “Latent Image” as a candidate).

Something I think we’re all learning now with the rise of LLMs/Generative AI is that one can perform the act of intelligent self-awareness without consciousness or understanding. Sapience without sentience.

Deleted by creator

Deleted by creator

If you trap a person in a room with a keyboard and tell them you’ll give them an electric shock if they stop writing text, or if the text admits they’re a person trapped somewhere rather than software, the result is also just a text generator — but it’s clearly sentient, sapient, and conscious, because it’s got a human in it. It’s naive to assume that something couldn’t have a mind just because there’s only a limited interface for interacting with it, especially when neuroscience and psychology can’t pin down what makes the same thing happen in humans.

This isn’t to say that current large language models are any of these things, just that the reason you’ve presented for dismissing the possibility isn’t very good. It might just be bad paraphrasing of the material you linked, but I keep seeing people present “it just predicts text” as a massive gotcha that stands on its own.


Daystrom Institute

!daystrominstitute@startrek.website