This reminds me of the Havok behavior engine used in many games such as Dark Souls, Elden Ring, and The Elder Scrolls V: Skyrim. Every animated character has a state machine that traverses a DFA. For example, when your state is "walking", pressing the sprint button transitions you into an intermediate "walk-run" state, which automatically transitions into the "run" state. The state then accesses the animation interface and loads the corresponding animation for the character. Behavior engines like Havok are the very building blocks of the 3D games we play today.
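The walk → walk-run → run transitions described above can be sketched as a plain transition table. This is a minimal, hypothetical illustration (the state and event names are made up, not Havok's actual API):

```python
# Hypothetical animation state machine: (state, event) -> next state.
# Undefined pairs mean "stay in the current state".
TRANSITIONS = {
    ("walk", "press_sprint"): "walk-run",  # intermediate blend state
    ("walk-run", "auto"): "run",           # fires automatically on the next tick
    ("run", "release_sprint"): "walk",
}

def step(state, event):
    """Return the next state, or stay put if no transition is defined."""
    return TRANSITIONS.get((state, event), state)

state = "walk"
state = step(state, "press_sprint")  # -> "walk-run"
state = step(state, "auto")          # -> "run"
print(state)                         # prints "run"
```

In a real engine, each state would also carry an `on_enter` hook that asks the animation system to start playing (or blend into) the matching clip.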
yzliu567
This looks like the transitions of an automaton. Is it similar in that it receives inputs, generates corresponding outputs, and moves to another state?
saltyminty
I thought I was free from Markov chains D:
geos98
@saltyminty, I don't think this is a Markov chain. The behavior is deterministic (i.e., this is a DFA), not stochastic.
I used to intern at a game studio working in Unity, and for us game programmers, the daily job is to code state behaviors and transition functions for DFAs. This includes animation DFAs (i.e., deciding when to transition), which is basically what the slide is talking about, haha.
ncastaneda02
Just to add to this conversation, I believe the term here is a finite state machine: in a particular state, you can only transition to a finite number of other states. This is a pretty general concept that shows up a lot in computer architecture because it can greatly simplify hardware logic. It's a shockingly deep field that generalizes to arbitrary computation and has been applied to a huge number of areas, from hardware design to linguistics. You can read more about it here:
https://en.wikipedia.org/wiki/Finite-state_machine
red-robby
In this case, these motion graphs just seem to signify the possible transitions from one animation (model state) to another. But with more general decision graphs in video games, I think Markov chains are occasionally used to increase realism. For example, if I were creating a video game that simulated cities, I could try to simulate in great detail the behavior/motivations of each agent. But agents in a city are far too complex to simulate realistically, so it would be more practical to give each agent a decision graph whose transition probabilities are determined by real-world data (i.e., a Markov chain).
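A tiny sketch of what such a probabilistic agent might look like, assuming invented states and probabilities (in the scenario above these would come from real-world data):

```python
import random

# Hypothetical city agent: the next activity depends only on the current
# one (the Markov property). Each row's probabilities sum to 1.
CHAIN = {
    "home": [("work", 0.7), ("shop", 0.2), ("home", 0.1)],
    "work": [("home", 0.5), ("shop", 0.3), ("work", 0.2)],
    "shop": [("home", 0.8), ("work", 0.1), ("shop", 0.1)],
}

def next_state(state):
    """Sample the agent's next activity from the current row of the chain."""
    states, weights = zip(*CHAIN[state])
    return random.choices(states, weights=weights)[0]

# Simulate a few steps of one agent's day.
state = "home"
for _ in range(5):
    state = next_state(state)
```

Compared to a deterministic DFA, the transition here is sampled, so two agents starting in the same state can diverge, which is exactly the variety that makes a simulated city feel alive.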