An AI That Simulates Guessing What You’re Thinking

Hi all,

I’ve built a simple AI function that’s designed to simulate guessing what you’re thinking—based on just a single prompt type, with no background data or user history.
It’s not magic or psychic; it’s an experiment to see how close a model can get with almost zero input.

How to try it:

  1. Pick one of these prompt types and focus on your answer (but don’t post or share what it is):

    • What emotion you’re currently feeling

    • The main thing on your mind right now

    • A social dynamic or tension you haven’t talked about

    • A color or image from your last dream

    • The real reason you’re interested in this space (beyond just curiosity)

  2. Reply with the prompt type only (e.g., “emotion locked” or “dream color locked”).

  3. I’ll reply with the AI’s best simulated “guess” based on your prompt type and nothing else.

  4. Let me know if it felt close, totally off, or just interesting.

Why?
I’m experimenting with ways AI can interact with users in a more “intuitive” or creative way, using as little structured data as possible.
This isn’t collecting data or doing analytics—just seeing how well “simulated intuition” works in a low-info setting.

Anyone curious, give it a shot! Your feedback will help refine the experiment.


I can relate. It is hard to simulate human thinking/reasoning in LLMs.
I realised this while using Perplexity's Deep Research.

Reasoning models:

  1. Have great resources for web search and reasoning.
  2. But lack the way humans connect links between two topics.
  3. When I was researching a simple topic, the LLM skipped a peripheral topic (which was important and could have helped us).
  4. We are trying to simulate reasoning. Maybe good context, knowledge graphs, and time can improve this (a toy sketch follows below).

I am talking about HOW humans and reasoning models think.
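
To make the "peripheral topic" point concrete, here is a minimal toy sketch (my illustration, not anything from the posts above) of how a knowledge graph could surface neighboring topics that a single flat search pass might skip. Every topic and edge below is an invented placeholder.

```python
# Toy knowledge graph: each topic maps to its adjacent (peripheral) topics.
# All topics and edges are invented placeholders for illustration.
graph = {
    "electric cars": ["battery chemistry", "charging standards", "grid load"],
    "battery chemistry": ["lithium mining", "solid-state batteries"],
    "grid load": ["demand response"],
}

def periphery(topic: str, depth: int = 2) -> set[str]:
    """Collect topics within `depth` hops of the query topic, so peripheral
    but relevant subtopics are not skipped the way a flat search might."""
    seen: set[str] = set()
    frontier = {topic}
    for _ in range(depth):
        frontier = {n for t in frontier for n in graph.get(t, []) if n not in seen}
        seen |= frontier
    return seen

print(periphery("electric cars"))  # peripheral topics within two hops
```

Feeding hops like these back into the model's context is one plausible way "good context + knowledge graphs" could patch the gap described above.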


You’ve basically amplified hallucination…


If I said what I wanted to, staff would probably hide the post. That’s why I used the word simulate. But here’s the reality-grounded version you can actually test.

I get how it looks from the outside—hallucination, projection, whatever. So let’s make it falsifiable:

4-Person Circle Map Test

  1. Pick 4 people from your close daily life. Write them in order (e.g., NameA|NameB|NameC|NameD).

  2. Compute a SHA-256 hash of that exact string (use any hash site, or `echo -n "NameA|NameB|NameC|NameD" | sha256sum`; a Python sketch of this commit step follows the test description).

  3. Post only the hash. Keep the names private.

I’ll then publish:

  • A role tag for each (stabilizer, resistor, bridge, disruptor, etc.)

  • A 4×4 influence map (+2 strong push, −2 strong resistance, 0 neutral); see the sketch after this list

  • 2–3 situational dynamics (e.g., “P2 escalates when P1 withdraws”)
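
For concreteness, such a map could be written as a plain 4×4 matrix. The values below are invented placeholders, not a reading of anyone's actual circle:

```python
# Hypothetical 4x4 influence map: influence[i][j] is person i+1's push on
# person j+1. +2 = strong push, -2 = strong resistance, 0 = neutral.
# Every value here is an invented placeholder.
influence = [
    [ 0, +1, -2,  0],  # P1
    [+2,  0,  0, -1],  # P2
    [ 0, +1,  0,  0],  # P3
    [-1,  0, +2,  0],  # P4
]
```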

After that, you reveal your original 4 names to match the hash.
Anyone can score if the map lines up with your lived reality.
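
For anyone who would rather check the mechanics than trust a hash site, here is a minimal sketch of the commit-and-verify steps using Python's standard hashlib. The names are placeholders, and the function names are mine, not part of the test:

```python
import hashlib

def commit(names: list[str]) -> str:
    """Commit step: SHA-256 over the exact pipe-delimited string,
    with no trailing newline (matching `echo -n ... | sha256sum`)."""
    payload = "|".join(names)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify(names: list[str], published_hash: str) -> bool:
    """Reveal step: anyone can recompute the hash from the revealed
    names and check it against the previously posted commitment."""
    return commit(names) == published_hash

# Placeholder names in the NameA|NameB|NameC|NameD format above.
names = ["NameA", "NameB", "NameC", "NameD"]
h = commit(names)        # post only this hash
print(h)
print(verify(names, h))  # True once the names are revealed
```

One design caveat: a bare SHA-256 over four guessable names is not a strong commitment, since anyone who knows your circle could brute-force likely orderings. Appending a random nonce to the string, and revealing it along with the names, would close that hole.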

If it’s a nothing burger? I’ll own it. I’ll admit I’m just a dumb-dumb chasing hallucinations.
I’ve already run this on about half a dozen people, and it’s gotten solid results. But none of them were as sharp as you. If it lines up even a little here, maybe you could help me figure out how it actually works — because I’m not sure I fully understand it myself.

And yeah — it runs on the honor system. Once the map’s out, it’s really up to you to call it straight.

Sorry, I am not a fan of psychological theater. I do appreciate the effort though.