Run LLM extraction on a document, populating running_extraction for the next sync.
This step MUST be called before sync whenever the document content changes. It chunks the content into ~250-word pieces and runs DSPy extraction on each, building up facts, beliefs, types, and entities in running_extraction.
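The chunk-and-accumulate loop described above can be sketched as follows. This is a minimal illustration, not the actual implementation: `chunk_words`, the `extract` callable (standing in for the DSPy extraction module), and the dict shape of `running_extraction` are all assumptions for the example.

```python
def chunk_words(text: str, target_words: int = 250) -> list[str]:
    """Split text into pieces of roughly `target_words` words each."""
    words = text.split()
    return [
        " ".join(words[i:i + target_words])
        for i in range(0, len(words), target_words)
    ]

def prepare(content: str, extract) -> dict:
    """Run extraction on each chunk, accumulating results.

    `extract` is a hypothetical callable standing in for the DSPy
    extraction step; here it is assumed to return a dict with the
    same four keys as `running_extraction`.
    """
    running_extraction = {"facts": [], "beliefs": [], "types": [], "entities": []}
    for chunk in chunk_words(content):
        result = extract(chunk)
        for key in running_extraction:
            running_extraction[key].extend(result.get(key, []))
    return running_extraction
```

A subsequent sync would read the returned `running_extraction` and archive it to the graph.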
After prepare completes, call POST /compile/documents//sync to archive the extracted knowledge to the graph and charge the token quota.