In Dust, there is the notion of a context with a stack running underneath it. As you drill down into tags, a new context frame is pushed that can introduce new context variables while still having access to variables from previous stack frames.
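A minimal sketch of that stack behavior looks like the following. This is illustrative only and not Dust's actual Context API; the `makeContext`, `push`, and `get` names are hypothetical:

```javascript
// Minimal sketch of a context stack like the one Dust maintains.
// Not Dust's real Context implementation -- names are illustrative.
function makeContext(head, parent) {
  return {
    head: head,
    parent: parent || null,
    // push a new frame when drilling into a tag
    push: function (vars) {
      return makeContext(vars, this);
    },
    // resolve a variable by walking up the stack frames
    get: function (key) {
      var frame = this;
      while (frame) {
        if (frame.head && key in frame.head) {
          return frame.head[key];
        }
        frame = frame.parent;
      }
      return undefined;
    }
  };
}
```

An inner frame can read variables written by outer frames, but not the other way around, which is exactly the "drill down" behavior described above.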
What we're doing in cloudcms-server is providing a dependency tracker so that each tag declares both its output (the HTML it generated) and the set of dependencies it relied on to generate that output.
As an example, a query might run and use the current locale to produce a list of five nodes. The dependencies would be:
{
  "node": ["nodeId1", "nodeId2", "nodeId3", "nodeId4", "nodeId5"],
  "locale": "es-ES"
}
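A per-tag tracker along these lines could build up such a record as the tag executes. The `createTracker` and `depends` names here are hypothetical, not the actual cloudcms-server API:

```javascript
// Hypothetical dependency tracker for a single tag's execution.
// Not the real cloudcms-server API -- a sketch of the idea.
function createTracker() {
  var deps = {};
  return {
    // record that this tag depended on a value,
    // e.g. depends("node", "nodeId1") or depends("locale", "es-ES")
    depends: function (type, value) {
      if (!deps[type]) {
        deps[type] = [];
      }
      if (deps[type].indexOf(value) === -1) {
        deps[type].push(value);
      }
    },
    // the accumulated dependency set for this tag
    dependencies: function () {
      return deps;
    }
  };
}
```

Each dependency type holds an array of values, and duplicates are ignored so the set stays minimal.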
As the Dust tags complete and their output trickles back up to produce the page, we can assemble the full set of dependencies the page required. It might consist of the set above plus others; they all get merged together.
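The merge step can be sketched as follows, assuming each tag contributes a dependency set mapping a type to one or more values (the `mergeDependencies` name is hypothetical):

```javascript
// Merge per-tag dependency sets into one set for the whole page.
// Each set maps a dependency type to a value or an array of values.
function mergeDependencies(sets) {
  var merged = {};
  sets.forEach(function (set) {
    Object.keys(set).forEach(function (type) {
      var values = [].concat(set[type]); // tolerate single value or array
      if (!merged[type]) {
        merged[type] = [];
      }
      values.forEach(function (v) {
        if (merged[type].indexOf(v) === -1) {
          merged[type].push(v);
        }
      });
    });
  });
  return merged;
}
```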
In the end, we know that the HTML that is sent back depends on the following set of nodes and the following locale. The HTML is written to disk and sent back. The page descriptor that retains dependency information is sent over to Cloud CMS.
On subsequent requests, the rendering chain will discover the HTML on disk and ask whether it is valid. It's only valid if the dependencies match.
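That validity check might look something like this, assuming the rendition's recorded dependency set is compared against attributes of the incoming request such as its locale (a hypothetical sketch, not the actual cloudcms-server check):

```javascript
// Decide whether a cached page rendition is still valid for a request.
// `recorded` is the dependency set stored alongside the rendition;
// `current` describes the incoming request (e.g. its locale).
// Hypothetical sketch -- not the real cloudcms-server logic.
function isRenditionValid(recorded, current) {
  return Object.keys(current).every(function (type) {
    // normalize to an array; a missing type means no match
    var values = [].concat(recorded[type] || []);
    return values.indexOf(current[type]) !== -1;
  });
}
```

If any dependency no longer matches, the cached HTML is discarded and the page is re-rendered.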
Furthermore, this provides really nice invalidation of the cache. You can find page renditions by node ID or, in this instance, by locale. We store these in Cloud CMS so that when content edits are made, we have the opportunity to a) let end users preview changes, since we know which page URLs the content appears on, and b) let end users publish changes by invalidating the cache and crawling to generate new page renditions for the modified pages.
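Looking up renditions by dependency could be done with an index keyed on dependency type and value. This is a hypothetical in-memory sketch of the kind of structure Cloud CMS would store server-side, with illustrative names:

```javascript
// Index page renditions by their dependencies so that, when a node is
// edited, every affected page URL can be found and invalidated.
// Hypothetical in-memory sketch -- not the actual Cloud CMS storage.
function createRenditionIndex() {
  var byDep = {}; // "type:value" -> [pageUrl, ...]
  return {
    // register a rendered page and its merged dependency set
    add: function (pageUrl, deps) {
      Object.keys(deps).forEach(function (type) {
        [].concat(deps[type]).forEach(function (value) {
          var key = type + ":" + value;
          (byDep[key] = byDep[key] || []).push(pageUrl);
        });
      });
    },
    // find every page URL depending on e.g. ("node", "nodeId3")
    find: function (type, value) {
      return byDep[type + ":" + value] || [];
    }
  };
}
```

Given an edited node ID, `find("node", id)` yields the page URLs to preview or invalidate.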
The goal is to have everything entirely static-cached if possible. Where that isn't possible, cloudcms-server can execute the Dust templates dynamically on each request. The rendering logic can determine whether a page is 100% static and set appropriate headers so that the upstream CDN can do the right thing. For dynamic pages, requests push through to the origin server and so on.
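The header decision can be sketched as below. The specific `Cache-Control` values are illustrative assumptions; the real policy would be up to the deployment:

```javascript
// Sketch of choosing CDN-facing cache headers based on whether the
// rendering logic judged the page fully static.
// Header values are illustrative -- not cloudcms-server's actual policy.
function cacheHeadersFor(isFullyStatic) {
  if (isFullyStatic) {
    // safe for the CDN to cache and serve without hitting origin
    return { "Cache-Control": "public, max-age=3600" };
  }
  // dynamic pages must push through to the origin server
  return { "Cache-Control": "no-cache" };
}
```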
Our approach to web content is very different from, say, Drupal or Joomla, which start with pages and work their way down. With 100,000 pages, that would be very tedious. Rather, our philosophy is that folks should work with content and let the pages reflect it via templates.