I followed these steps, but just so happened to check on my mason jar 3-4 days in and saw tiny carbonation bubbles rapidly rising throughout.
I thought that might just be part of the process, but I double-checked with a Google search on day 7 (when there were no bubbles in the container at all).
Turns out I had just grown a botulism culture, and garlic in olive oil specifically is a fairly common way to produce this biotoxin.
Had I not checked on it 3-4 days in, I'd have been none the wiser and would have Darwinned my entire family.
Prompt with care and never trust AI, dear people…
never trust AI
Statements from LLMs should be treated as hallucinations unless proven otherwise by conventional research.
We don't need a fancy word that makes it sound like AI is actually intelligent when we're talking about how frequently AI is wrong and unreliable. AI being wrong is like someone who misunderstood something, or took a joke literally, repeating it as fact.
When people are wrong, we don't call it hallucinating unless their senses are altered. AI doesn't have senses.
It's not a "fancy word" here, but a technical term. An AI making things up is actually called hallucination.
oh but you see, it's "hallucination" when an LLM is wrong and it's hype-cycle fuel when it's correct. no, LLMs don't "hallucinate", that implies the state is peculiar, isolated, triggered by very specific circumstances. LLMs bullshit all the time, sometimes they're right, sometimes not; the process that produces both types of response is the same. pushing for "hallucination" tries to obscure that. the use of "hallucination" also implies that LLMs know something, and they don't, by design. it just so happens that when they "get" things right, it's because it appeared in the training material enough times to make an impression on the model.
The Wikipedia page you linked to actually states that the term is being pushed by industry (Google, Meta, OpenAI) and that its use is criticized by some researchers.