When AI looks in your Fridge

Last night's experiment started with a simple idea: could a large language model make something useful out of the chaos in my fridge after a busy weekend?

I took a photo of it and uploaded it to GPT-5. No description, no context, just the image.
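
For anyone who wants to try the same first step outside the chat window, here is a minimal sketch using the OpenAI Python SDK. I ran the actual experiment through the ChatGPT interface, so the model id ("gpt-5"), the filename, and the API call itself are illustrative assumptions, not the exact setup.

```python
# Minimal sketch: send a fridge photo with no accompanying text and see what comes back.
# Assumptions: OPENAI_API_KEY is set, "gpt-5" is a vision-capable model id, fridge.jpg exists.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Inline the fridge photo as a base64 data URL so it can be sent with the request.
with open("fridge.jpg", "rb") as f:
    fridge_image_data_url = (
        "data:image/jpeg;base64," + base64.b64encode(f.read()).decode("utf-8")
    )

# No description, no context: just the image.
response = client.chat.completions.create(
    model="gpt-5",  # assumed model id; substitute whatever vision-capable model you have
    messages=[
        {"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": fridge_image_data_url}},
        ]},
    ],
)
first_reply = response.choices[0].message.content
print(first_reply)
```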

What came back was polite and unremarkable: suggestions for kimchi fried rice, brie “risotto,” egg skillets, and cheese plates. None of it was wrong, but none of it was particularly creative either. It felt like the AI equivalent of a Pinterest search result—technically competent, contextually shallow.

Then I asked a better question: What should I eat first?

That answer was stronger. It listed the foods most likely to spoil first: the sprouts (which are actually micro-greens), the soft cheese, and the cut fruit, then grouped the rest by shelf life. That was genuinely useful, if only because it reflected something closer to how I think when I open the fridge.

Still, the experience didn’t feel intelligent. It felt procedural.

The Moment That Stood Out

I asked why it assumed I had rice. None was visible in the image.

GPT-5 explained that kimchi, eggs, and butter usually suggest rice as a pantry staple. A fair inference, but then I noticed something it had skipped: a sealed bag of meat clearly visible on a shelf.

It had inferred rice that wasn’t there and ignored the meat that was.

When I asked about it directly, it said it couldn’t tell what was inside the bag and didn’t want to risk suggesting the wrong thing. From a safety perspective that’s defensible. From a reasoning perspective it’s a miss.

AI doesn’t only hallucinate by fabricating facts. It also hallucinates by filling in blanks selectively, making confident assumptions about what “should” exist while omitting the uncertain or inconvenient. It is, in other words, an editor of reality as much as an inventor of it.

The Reveal

The mystery bag contained cooked baby back ribs. Once I shared that, the system immediately shifted: rib salads, rib-and-egg skillets, low-carb brie melts. These were better, more grounded, more specific.

But the improvement only happened after I asked the right question. It never asked me.

The Takeaway

This wasn’t really about dinner. It was about how GenAI performs under partial information.

GPT-5 wasn’t wrong about what it saw. It just didn’t know what to do with what it couldn’t see. It inferred where it felt safe and stayed silent where it felt uncertain.

That gap between confidence and curiosity is where human judgment still matters.

If you want useful output, you have to notice what it ignored, point to the opaque bag, and say, “You missed something.”
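
If you were scripting this instead of chatting, that correction is just another turn in the conversation. A minimal sketch, continuing the one above (it assumes the same client, fridge_image_data_url, and first_reply), where the human supplies the fact the model never asked for:

```python
# Hypothetical follow-up turn: the human points at the opaque bag and names its contents.
# Reuses client, fridge_image_data_url, and first_reply from the earlier sketch.
messages = [
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": fridge_image_data_url}},
    ]},
    {"role": "assistant", "content": first_reply},  # the initial, unremarkable suggestions
    {"role": "user", "content": (
        "You missed something. The sealed bag on the shelf is cooked baby back ribs. "
        "With that in mind, what should I eat first?"
    )},
]

second = client.chat.completions.create(model="gpt-5", messages=messages)
print(second.choices[0].message.content)
```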

Outcome: Three mediocre meal ideas and one clear reminder that AI still needs human inquiry to close the gap between what’s visible and what’s true.

Tools Used

Tool: GPT-5 with image input
Time: 20 minutes
Cost: Free with ChatGPT Plus
