7 Comments
Hungkuk Do

I deeply resonate with your perspective on Software 3.0 — especially the part where the boundaries between code, data, and intent are becoming increasingly blurred.

But I believe there’s still something fundamental we haven’t truly figured out yet:

What is a real question?

Even today, both humans and AI mostly operate within reactive systems. We ask, AI answers. But very few systems (and even fewer frameworks) attempt to model why questions arise, or how an AI could autonomously generate meaningful questions as part of its own evolution.

This is the challenge I’m currently working on — developing a self-evolving AI model that doesn't just respond, but learns to ask, to reflect, and to grow through interaction.

It interprets user intent, infers context, and generates questions that evolve over time, both in depth and relevance.

In that sense, it's not just a tool for prompting.

It's a new form of intelligence — one that transforms the 'self-improvement loop' of Software 3.0 into a question-driven exploration and meaning expansion loop.

I believe this layer — where intent, learning, and questioning converge — might be one of the key frontiers of AGI.

JackieShi

Very curious about the model you're working on! Is this a research direction, or are you doing it simply by designing a workflow for an agent?

Hungkuk Do

Thanks for asking, Jackie!

It started as a research direction, but it's now a working system.

I've built a prototype that combines a question-generating evolution model with LLMs such as Phi-2 (served via Ollama) for answering.

The system is live-tested via a Telegram chatbot and a web interface (v1), where the AI doesn’t just respond — it evolves by asking.

Currently working on v2, where the model learns and refines its questioning over time.

It’s not about chaining prompts — it’s about building an agent that grows through dialogue.
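To make that concrete, here's a stripped-down sketch of what the v1 loop looks like (not the production code; the prompts and the depth heuristic are just illustrative, and it assumes a local Ollama server exposing /api/generate with Phi-2 pulled as `phi`):

```python
# Minimal sketch of a question-driven dialogue loop (illustrative only, not the
# actual implementation). Assumes a local Ollama server at localhost:11434 and
# a Phi-2 model pulled as "phi"; prompt wording and the depth heuristic are
# hypothetical placeholders.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "phi"  # assumption: Phi-2 pulled via `ollama pull phi`


def generate(prompt: str) -> str:
    """Call the local LLM once and return its text completion."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()


def answer(user_msg: str, history: list[str]) -> str:
    """Ordinary reactive step: answer the user's message in context."""
    context = "\n".join(history[-6:])  # short rolling context window
    return generate(f"Conversation so far:\n{context}\nUser: {user_msg}\nAssistant:")


def ask_back(user_msg: str, history: list[str], depth: int) -> str:
    """Proactive step: the model poses its own follow-up question, nudged to
    probe one level deeper each turn (the 'evolving' part)."""
    context = "\n".join(history[-6:])
    return generate(
        f"Conversation so far:\n{context}\n"
        f"The user just said: {user_msg}\n"
        f"Ask ONE follow-up question about the user's underlying intent, "
        f"roughly {depth} level(s) deeper than anything asked so far. "
        f"Return only the question."
    )


if __name__ == "__main__":
    history: list[str] = []
    depth = 1
    while True:
        user_msg = input("you> ")
        reply = answer(user_msg, history)
        question = ask_back(user_msg, history, depth)
        print(f"ai> {reply}")
        print(f"ai asks> {question}")
        history += [f"User: {user_msg}", f"Assistant: {reply}",
                    f"Assistant question: {question}"]
        depth += 1  # naive stand-in for learning to question better over time
```

The naive `depth += 1` at the end is only a placeholder for what v2 is meant to do: refine the questioning from the dialogue itself rather than from a counter.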

Would love to hear how you're thinking about agent workflows too!

Inference by Sequoia

Interesting! Thanks for sharing that, Hungkuk. An episode of our Training Data podcast coming out in a couple of weeks features Pushmeet Kohli of DeepMind, whose work on AlphaEvolve and co-scientist deals specifically with AI starting to ask scientific questions, not just answer prompts. You may find it interesting; we'll follow up here with the link when it's live.

JackieShi

Thanks, man! Sounds like a novel direction for future human-AI interaction. And I think there are going to be lots of application scenarios~

🤗 Keep in touch

dan mantena

nice post! what was the point of having different models interject in the article if they were not conversing with each other? that format was a bit odd for me.

Jon Folland

Fantastic conceptual model
