4 points

I remember one time in a research project I switched out the tokeniser to see what impact it might have on my output. I spent about a day re-running everything, and the difference was minimal. I imagine it’s wholly the same thing.

*Disclaimer: I don’t actually imagine it is wholly the same thing.
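
For the curious, a minimal sketch of that kind of tokeniser swap, using HuggingFace `transformers`. The tokeniser names and sample text here are illustrative (the comment doesn’t say what was actually used), and the expensive part, re-running training with each tokeniser, is elided; this just shows how differently two tokenisers carve up the same input:

```python
# Compare how two off-the-shelf tokenisers segment the same text.
# Names are illustrative stand-ins, not the ones from the experiment.
from transformers import AutoTokenizer

sample = "Switching out the tokeniser made surprisingly little difference."

gpt2_tok = AutoTokenizer.from_pretrained("gpt2")               # byte-level BPE
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # WordPiece

for name, tok in [("gpt2", gpt2_tok), ("bert", bert_tok)]:
    ids = tok.encode(sample, add_special_tokens=False)
    pieces = tok.convert_ids_to_tokens(ids)
    print(f"{name}: {len(ids)} tokens -> {pieces}")
```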

4 points

there’s a research result that the precise tokeniser makes bugger all difference; it’s almost entirely the data you put in

because LLMs are lossy compression for text
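
If you want to put a rough number on the compression framing: a language model’s cross-entropy on text is its code length under arithmetic coding, so a bits-per-character compression rate falls straight out of the loss. A back-of-the-envelope sketch, assuming PyTorch and HuggingFace `transformers`; GPT-2 is just a small public stand-in, not the model from the research result:

```python
# Estimate a model's compression rate on text in bits per character,
# via its average next-token negative log-likelihood.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2")

text = "the precise tokeniser makes bugger all difference"
ids = tok(text, return_tensors="pt").input_ids  # shape (1, n_tokens)

with torch.no_grad():
    # with labels set, HF returns mean cross-entropy (in nats) over
    # the n_tokens - 1 next-token predictions
    nll_nats = model(input_ids=ids, labels=ids).loss.item()

n_pred = ids.shape[1] - 1
total_bits = nll_nats * n_pred / math.log(2)  # nats -> bits
print(f"~{total_bits / len(text):.2f} bits per character")
```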

3 points

latent space go brrrr

