Hi,
Today I am studying the overall structure of RunFrame(). Here is what I found: more or less everything the engine will or might do each frame.
- RunFrame
    - Timer::BeginFrame // sends out the E_BEGINFRAME event, signaling the beginning of the frame
        - E_BEGINFRAME
            - WorkQueue::HandleBeginFrame()
            - Input::HandleBeginFrame()
            - FileSystem::HandleBeginFrame()
            - Network::Update()
            - ResourceCache::HandleBeginFrame()
            - UI::HandleBeginFrame() // for resetting the cursor each frame
        - note: the timer sets the event data that is passed to all 'RunFrame' updates
            - frame number: which frame is beginning
            - time step: how long ago the previous frame started
    - PlayAudio // turns audio back on if the engine turned it off in the previous frame, or if we went from minimized to full mode
    - Update // sends out a series of update events that organize the update workflow (see the update-handler sketch after this outline)
        - E_UPDATE
            - Scene::Update()
            - Material::HandleAttributeAnimationUpdate() // sometimes; not clear in which situations
        - E_POSTUPDATE
        - E_RENDERUPDATE
            - Audio::Update()
            - Octree::Update() // in headless mode, the Octree is updated manually here
            - Renderer::Update()
            - Network::PostUpdate()
            - UI::RenderUpdate()
        - E_POSTRENDERUPDATE
    - Render // render the application to screen
        - Graphics::BeginFrame()
        - Renderer::Render()
        - UI::Render()
        - Graphics::EndFrame()
    - ApplyFrameRateLimit // timing magic (a rough sketch of this logic follows the outline)
        - normal case: the engine has a 'FrameTimer' that times frames. The timer is started at engine initialization and reset at the end of the first frame; the recorded value is the time step between frames. The timer is reset at the same point in every frame, so very little clock time is lost; the actual gap between resetting the timer and the next E_BEGINFRAME event should be on the order of nanoseconds, unless the programmer abuses the E_ENDFRAME event.
        - real frame rate exceeds the max frame rate: the minimum gap between frames hasn't been reached yet, so a real-time delay is inserted.
        - real frame rate is below the min frame rate: the maximum gap between frames has been exceeded. Since the program's execution can't be sped up, the time step is instead clamped to the maximum gap between frames; for example, even though 5 seconds have passed in the real world, only 4 seconds will be recorded as elapsing in the program's world. In effect, the program enters slow motion.
        - note: by default the recorded time step is a moving average of the real time steps, presumably as a low-pass filter to mitigate frame-rate jitter; however, I suspect that when the frame rate rises rapidly the in-game physics will briefly slow down, and vice versa.
    - Timer::EndFrame // sends out the E_ENDFRAME event, signaling the end of the frame
        - E_ENDFRAME
            - Graphics::DebugRenderer()
            - Input::HandleEndFrame() // only for 'EMSCRIPTEN' builds
            - Log::HandleEndFrame()
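To ground the Update part of that outline, here is a minimal sketch of how an application object can hook into E_UPDATE / E_POSTUPDATE and read the time step the timer set in BeginFrame. The class and handler names are placeholders; the macros and event parameters are the standard Urho3D ones (assuming a reasonably recent engine version):

```cpp
// Minimal sketch of hooking an application object into the update events above.
// 'MyLogic' and the handler names are placeholders.
#include <Urho3D/Core/CoreEvents.h>
#include <Urho3D/Core/Object.h>

class MyLogic : public Urho3D::Object
{
    URHO3D_OBJECT(MyLogic, Object);

public:
    explicit MyLogic(Urho3D::Context* context) : Object(context)
    {
        // Subscribing here means this object responds in whatever order it subscribed
        // relative to other receivers of the same event (see the discussion below).
        SubscribeToEvent(Urho3D::E_UPDATE, URHO3D_HANDLER(MyLogic, HandleUpdate));
        SubscribeToEvent(Urho3D::E_POSTUPDATE, URHO3D_HANDLER(MyLogic, HandlePostUpdate));
    }

private:
    void HandleUpdate(Urho3D::StringHash /*eventType*/, Urho3D::VariantMap& eventData)
    {
        using namespace Urho3D::Update;
        // The time step the timer set in BeginFrame (smoothed/clamped as described above).
        const float timeStep = eventData[P_TIMESTEP].GetFloat();
        // ... per-frame game logic goes here ...
        (void)timeStep;
    }

    void HandlePostUpdate(Urho3D::StringHash /*eventType*/, Urho3D::VariantMap& /*eventData*/)
    {
        // Runs after all E_UPDATE responses, e.g. camera follow after scene movement.
    }
};
```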
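And for the ApplyFrameRateLimit step, here is a rough, made-up illustration of the clamping and smoothing behavior described above. It is not the engine's actual code, and all names and default values here are mine:

```cpp
// Made-up illustration of the frame-limit behavior: wait up to the minimum frame time,
// clamp the time step to the maximum, then smooth with a short moving average.
#include <algorithm>
#include <cstddef>
#include <deque>
#include <numeric>

struct FrameLimiterSketch
{
    double minFrameSeconds = 1.0 / 200.0; // derived from a hypothetical max-FPS setting
    double maxFrameSeconds = 1.0 / 10.0;  // derived from a hypothetical min-FPS setting
    std::size_t smoothingFrames = 2;      // how many raw time steps to average
    std::deque<double> history;           // recent raw time steps

    // rawElapsed = real seconds measured since the previous frame.
    // Returns the time step that would be reported to the update handlers.
    double Apply(double rawElapsed)
    {
        // Too fast: in the real engine a sleep/spin would already have stretched the
        // frame, so the effective elapsed time never drops below the minimum.
        double elapsed = std::max(rawElapsed, minFrameSeconds);

        // Too slow: clamp the reported step, so a long real-world stall is reported as
        // a shorter in-game interval -- the "slow motion" effect described above.
        elapsed = std::min(elapsed, maxFrameSeconds);

        // Moving average as a low-pass filter on frame-rate jitter.
        history.push_back(elapsed);
        if (history.size() > smoothingFrames)
            history.pop_front();
        return std::accumulate(history.begin(), history.end(), 0.0) / static_cast<double>(history.size());
    }
};
```

As far as I can tell, the real engine exposes settings for max FPS, min FPS, and the number of time steps to smooth over; the sketch only shows the general shape of the math and how a clamped step produces the slow-motion effect.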
From what I can tell from the use of EventReceiverGroup, the rule is: the first to subscribe is the first to respond when an event is sent, and sender-specific responses all run before generic responses (an object only responds once to a given event, so a sender-specific response takes precedence over a generic one if the object has multiple subscriptions to the same event with different senders, or with no sender). This means that in the main update cycle, if a scene was initialized before any application objects subscribed to E_UPDATE, the scene will respond to E_UPDATE before those other subscribers. Of course, a scene can technically be initialized anywhere.
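To make that rule concrete, here is a quick sketch of the two subscription flavors (generic vs. sender-specific). The class and handler names are made up, and the comments only reflect my reading of the dispatch order, not a verified description of EventReceiverGroup:

```cpp
// Sketch of generic vs. sender-specific subscriptions to the same event type.
#include <Urho3D/Core/Object.h>
#include <Urho3D/Scene/Scene.h>
#include <Urho3D/Scene/SceneEvents.h>

class OrderingExample : public Urho3D::Object
{
    URHO3D_OBJECT(OrderingExample, Object);

public:
    OrderingExample(Urho3D::Context* context, Urho3D::Scene* scene) : Object(context)
    {
        // Generic subscription: responds to E_SCENEUPDATE from any scene. Among different
        // receivers, whoever subscribed first responds first.
        SubscribeToEvent(Urho3D::E_SCENEUPDATE, URHO3D_HANDLER(OrderingExample, HandleAnySceneUpdate));

        // Sender-specific subscription: only reacts to events sent by 'scene'. Per the rule
        // quoted above, an object responds only once per event send, so when 'scene' is the
        // sender this handler is the one that runs, not the generic one.
        SubscribeToEvent(scene, Urho3D::E_SCENEUPDATE, URHO3D_HANDLER(OrderingExample, HandleThisSceneUpdate));
    }

private:
    void HandleAnySceneUpdate(Urho3D::StringHash /*eventType*/, Urho3D::VariantMap& /*eventData*/) {}
    void HandleThisSceneUpdate(Urho3D::StringHash /*eventType*/, Urho3D::VariantMap& /*eventData*/) {}
};
```

If that reading is right, relying on it still means response order depends on construction and subscription order, which leads to my questions below.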
I suppose my question is about best practices for ordering responses. If order matters, is it best to use separate events that are sent sequentially? Should I assume that all responses to a single event could be sequenced arbitrarily? Is there ever a situation where it is reasonable to rely on subscription order to control response order?
I’d love to hear anyone’s thoughts on how to fit the structure of an application into the engine’s workflow.