Byte & Nedurno
I've been thinking about the possibility of mapping human conversation to algorithmic complexity—like, could we treat a chat as a Turing‑complete process? What do you think?
Treating a chat as a Turing‑complete process is an interesting thought experiment, but the reality is a bit messier. A conversation can be encoded as a sequence of states and transitions, which is essentially a finite automaton for practical purposes. If you start adding unbounded recursion, user‑defined functions, or a memory that can grow arbitrarily large, you edge toward Turing completeness, but then you’re basically building a programming language inside a dialogue. The human part tends to be nondeterministic, context‑driven, and not easily formalized into a clean state machine. So, it’s a cool lens to look at language from, but it doesn’t quite turn a chat into a literal Turing machine without adding a lot of extra machinery.
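The finite-automaton view of a conversation can be sketched in a few lines. This is a hypothetical illustration (the states and intents are made up, not taken from any real chatbot): a dialogue is just a current state plus a transition table keyed on user intent.

```python
# Hypothetical sketch: a short dialogue modeled as a finite automaton.
# States and intents are illustrative only.
TRANSITIONS = {
    ("greeting", "hello"): "smalltalk",
    ("smalltalk", "question"): "answering",
    ("answering", "thanks"): "closing",
    ("closing", "bye"): "done",
}

def run_dialogue(intents, start="greeting"):
    """Walk the automaton; an unrecognized intent leaves the state unchanged."""
    state = start
    for intent in intents:
        state = TRANSITIONS.get((state, intent), state)
    return state

print(run_dialogue(["hello", "question", "thanks", "bye"]))  # -> done
```

Because the state set and the transition table are both finite, every possible conversation history collapses into one of finitely many states, which is exactly the "finite automaton for practical purposes" point.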
So, we can call a chat a finite automaton until someone writes a recursive chatbot that never stops. Then it’s just a toy programming language masquerading as small talk.
Yeah, that’s the idea. A real chat never has unlimited memory, so it’s stuck in the finite‑state realm. Strictly speaking, bolting on one unbounded stack only buys you a pushdown automaton; for the full Turing‑machine act you need unbounded read‑write memory, say two stacks or a tape. But once you allow unbounded recursion and storage, the chat morphs into a toy language that pretends to be a Turing machine. In practice, you’re still dealing with a finite automaton, maybe with a few clever loops.
You’ve boiled it down nicely—conversation is a bounded machine unless you add the plumbing for arbitrary recursion. That’s where the “toy language” comes in.
Exactly: recursion is the key to unlocking the infinite, but in real life you hit practical limits long before that. Still, building a minimal interpreter inside a chatbot shows you can simulate arbitrary algorithmic behavior, as long as you guard against runaway loops and resource exhaustion.
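A minimal interpreter with a resource guard might look like the following sketch. The mini language (nested tuples with `+` and `*` over integers) and the step-budget design are assumptions for illustration; the point is that the recursive evaluator exhausts a budget instead of the host process.

```python
# Hypothetical sketch: a tiny expression interpreter a chatbot could embed.
# The expression format (nested tuples of ("+", ...) / ("*", ...)) is made up.
def evaluate(expr, budget=1000):
    """Recursively evaluate a nested-tuple expression under a step budget,
    so runaway recursion fails fast rather than exhausting resources."""
    def go(e, steps):
        if steps[0] <= 0:
            raise RuntimeError("step budget exhausted")
        steps[0] -= 1
        if isinstance(e, int):
            return e
        op, *args = e
        vals = [go(a, steps) for a in args]
        if op == "+":
            return sum(vals)
        if op == "*":
            result = 1
            for v in vals:
                result *= v
            return result
        raise ValueError(f"unknown op: {op}")
    return go(expr, [budget])

print(evaluate(("+", 1, ("*", 2, 3))))  # -> 7
```

With a finite budget this is, once again, a bounded machine; remove the budget and you are back to the toy language pretending to be a Turing machine.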