I was working on this with a friend over 10 years ago, but the only grocery store that made a decent effort at organizing its website to be scrapeable was Loblaws; all the others had APIs that cost $100,000.
This is one area where ML models might (with the right investment) actually be useful. A model trained to look at web pages and relay information from the content visually, the way we do, would be very powerful. The newer ChatGPT models have visual capabilities; I wonder if you could give one a website screen capture and ask it for prices.
Why would you want a model trained on outdated prices? This is not really something LLMs are particularly suited for.
Maybe to crunch historical data, but not for daily comparisons.
Why would the model be trained on outdated prices? I'm not talking about LLMs, but a separate model designed to parse visual information - specifically websites - and extract particular elements like prices. My comment about ChatGPT was in reference to the newer models, which can relay visual information; I'm not suggesting that would be the right approach for training a new model.
The applications would be broader than just prices - this would allow you to scrape any human-readable website without needing to do bespoke development.
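For what it's worth, a minimal sketch of the screenshot-to-prices idea, assuming the OpenAI-style chat-completions request shape with image inputs. The model name, prompt wording, and `build_price_request` helper are all illustrative, not a specific product's API; this just builds the request payload rather than sending it:

```python
import base64

def build_price_request(screenshot_path: str) -> dict:
    """Build a chat-completions payload asking a vision-capable model
    to extract product names and prices from a page screenshot."""
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": "gpt-4o",  # placeholder: any vision-capable model
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("List every product name and price visible on "
                          "this grocery page as JSON: "
                          '[{"name": ..., "price": ...}]')},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }
```

The appeal is that the same payload works for any retailer's page - you swap the screenshot, not the parser.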