Facilitator Guide
Your delivery script for the IFP v2.0 Technical Enablement Workshop
Blue blocks = what to say. Yellow blocks = what to click or show. Green blocks = facilitator notes (don't read aloud). Keep it conversational — these are the key beats to hit, not a word-for-word script.
Pre-Workshop Checklist
Run through this before participants arrive.
- Verify everyone has access to the IFP v2.0 workspace and all 4 required roles (Application Owner, Workspace Admin, Page Builder, Integration Admin)
- Confirm the Application Framework is accessible in each participant's workspace
- ADO dataspace configured — source files uploaded (BOM ADO templates)
- Demo environment calendar set to July — current period = July 2026 (6+6 layout)
- HC and CE spoke models mapped to the demo workspace copy
- FictoCorp case study page open in a browser tab
- IFP application open in another tab — land on the home page
- Lab guide open at Workshop Overview for participants
- Test the Application Framework once — confirm generation completes for a simple config
- Know the GitHub credentials for the IFP app — contact financeapplications@anaplan.com if needed
Opening 09:00 — 15 min
Good morning everyone — glad you're here. Quick intro: today we're going deep on Anaplan's Integrated Financial Planning application — IFP v2.0. This is the newest version of the finance suite, built on the Application Framework, and it departs from v1.x in some important ways.
The goal isn't just feature awareness. By the end of today you should be able to walk into a customer engagement, configure the Application Framework from scratch, set up ADO pipelines, and navigate all seven modules confidently. Two hands-on labs, full module walkthroughs, and a lot of real implementation context.
Ask questions whenever — don't wait for a Q&A moment.
IFP Suite Overview 09:15 — 20 min
IFP is Anaplan's application suite for the Office of the CFO. Where RPM applications serve the Sales function — territories, quotas, capacity — IFP serves Finance. Connected P&L, balance sheet, and cash flow planning from a single platform.
The pain it addresses: Finance teams spending weeks closing the books in Excel, no connected view of how headcount changes affect OpEx, CapEx tracked in a separate system, multi-currency consolidation that takes days. IFP connects all of it.
Six planning modules, plus Reporting and Analysis. They feed each other — Revenue drives AR on the balance sheet, Headcount costs flow into OpEx, CapEx depreciation flows into P&L. One change cascades through the entire financial model automatically.
V2.0 is a significant architectural change from v1.x. Three big things: the Application Framework replaces the static configurator, ADO replaces the data hub model, and headcount planning moves from named-employee level to job level. If you've done IFP v1.x work before, the concepts carry over — but the mechanics are almost entirely different.
Anaplan Way for Applications 09:35 — 15 min
A quick run-through of how an IFP implementation actually runs. Six phases: Requirements, Configure, Generate, Data Load, Application Ready, Extend and Expand.
The key shift from traditional Anaplan builds: you configure first, using the Application Framework wizard. The wizard generates four models. Then you do post-generation setup. Then you load data via ADO. Then you extend. The sequencing matters — getting dimension decisions right before you generate saves you from painful re-generation later.
Two things customers get wrong most often: (1) changing their mind on hierarchy levels after generation — adding a tier to Entity after you've built 15 post-gen tasks means redoing everything. (2) Not completing Admin source-to-planning mappings before loading actuals — you'll get blank reports and spend half a day debugging a mapping that takes five minutes to fix.
One more thing on provisioning before we get into the hands-on: you need four specific roles to generate. Application Owner, Workspace Admin, Page Builder, and Integration Admin. If any are missing, the generation fails. And Integration Admins can only generate in their default tenant — if you're a partner, you need an Anaplan account with the customer's tenant as the default. Get this sorted before project kickoff, not the day you're planning to generate.
Application Framework 09:50 — 20 min
The Application Framework is the deployment mechanism for IFP. Instead of building models from scratch, you answer a structured set of questions and the Framework generates four Anaplan models configured to your specifications. About 8,000 objects — modules, lists, dashboards, actions.
Think of it like a really smart form. The questions fall into two buckets: Top-Level questions that apply across all four models — entity structure, dimensions to include, headcount approach, CapEx approach — and then model-specific questions for FP, HC, and CE individually.
Between the two buckets, there's a Hierarchy Configuration screen. This is where you set how many levels each hierarchy has and what to name them. And here's the rule that catches everyone: the name you type into the question must exactly match the name you enter in the Hierarchy screen. If they differ, things don't rename correctly throughout the model. Exact match. Capital letters matter.
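The exact-match rule is easy to sanity-check before you click Generate. Here's a minimal sketch of the comparison you'd do, by hand or in a scratch script, over your planned names (the function and its inputs are illustrative; the Framework exposes no such API):

```python
def check_hierarchy_names(question_names, hierarchy_names):
    """Compare dimension names entered in the config questions against
    the Hierarchy Configuration screen. The comparison is deliberately
    case-sensitive, mirroring the Framework's exact-match rule."""
    mismatches = []
    for dim, q_name in question_names.items():
        h_name = hierarchy_names.get(dim)
        if h_name != q_name:  # exact match required; capitals matter
            mismatches.append((dim, q_name, h_name))
    return mismatches

# "entity" vs "Entity" would slip past a case-insensitive check,
# but it breaks renaming in the generated models:
issues = check_hierarchy_names(
    {"Entity": "Entity", "Department": "Department"},
    {"Entity": "entity", "Department": "Department"},
)
# issues -> [("Entity", "Entity", "entity")]
```

The point of the sketch is the case-sensitive `!=`: any tooling or checklist you build around this step has to treat capitalization as significant.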
One thing to set expectations on: the Framework always generates all four models, even if you choose not to use HC or CapEx as dedicated planning models. If you choose GL-level CapEx planning, a CE model still generates — you delete it after. This is a known limitation; you can't exclude models from generation.
Anaplan Data Orchestrator 10:10 — 20 min
ADO replaces the data hub model from v1.x. In v1.x, you had a large data hub model sitting in your workspace consuming memory and requiring complex import actions. In v2.0, that's gone. ADO handles everything — master data loads, actuals, FX rates — through lightweight links that push directly to spoke models.
The Admin model is still the mapping workbench — you map source departments to planning departments, source accounts to planning accounts there. But ADO is the pipeline that moves data. Two types of loads: direct loads for things that don't need transformation — flat lists, hierarchies — and transformation loads for actuals, where you need to join source data to planning mappings before it can land in FP or HC.
Important heads-up before we get to the labs: five ADO links fail to generate correctly out of the box. Vendor Hierarchy in FP, three Job-related links in HC, and Job Grade in HC. This is a known Application Framework generation issue. You have to create these manually after generation. We'll cover exactly how in the post-generation section. Flag it now so it doesn't surprise you during the lab.
Model Architecture 10:30 — 15 min
Four models, with Financial Planning as the hub. Admin is the metadata layer — master lists, source-to-planning mappings, nothing transactional. FP is the core — all planning happens here. HC and CE are spokes that calculate workforce costs and CapEx, then push their outputs into FP via import actions.
Key dependency: whenever you update HC or CapEx planning, you have to manually run the import action in FP Admin to pull in the new data. It does not sync automatically. This is the most common source of "why doesn't my OpEx reflect my new headcount?" confusion. Run the import, every time, whenever either spoke changes.
Config Question Walkthrough 10:45 — 45 min
Let's walk through every question before participants touch it. This is the map — I want you to understand what each choice does before you make it.
Entity Dimension
Mandatory. This is your primary organizational hierarchy — whatever the customer calls their business units. For FictoCorp it's Entity with two levels: Total FictoCorp at the top, then USA and EMEA. The name you enter here must match the hierarchy rename exactly.
Optional Dimensions
Department, Geography, Functional Area, Vendor — all optional. The rule: include only what you actually need for planning. Every extra dimension adds memory, complexity, and UX surface area. For FictoCorp Lab A: Department yes, everything else no. Geography we'll add in Lab B.
And the warning I give every customer: removing a dimension after generation affects every area of the model that uses it. Don't add dimensions speculatively. Have the conversation during requirements and make a firm decision before you generate.
Headcount Approach
Three options. Option A is the IFP Role-Based HC model — this is what you want for most customers. Deploys both FP and HC, job-level planning flows into FP. Option B is GL-level — you enter headcount costs directly in FP at the account level, no HC model. Quick and dirty. Option C is for when another system — OWP, an external HRIS — owns headcount and you're just loading summarized data into FP.
For FictoCorp: Option A. Always lead with A unless there's a specific reason not to.
CapEx Approach
Same three options. Option A deploys the dedicated CE model with asset-level planning, depreciation automation, and disposal modeling. Option B is GL-level — useful when a customer has simple CapEx needs and doesn't need to track individual assets. Option C, as with headcount, is for when an external system owns CapEx and you only load summary data into FP. For Lab A we'll use Option B to keep scope simple; Lab B adds the full CE model.
Balance Sheet and Cash Flow
Yes or no. Lab A: No — we're keeping scope focused on P&L. Lab B: Yes to both. One caveat: if you include Balance Sheet, make sure you're ready for the post-gen mapping work. Cash offset accounts, cash flow mappings — there's meaningful setup required after generation. We'll cover it in the post-gen section.
FP Model — Key Questions
For the FP model: margin planning in FP, Product dimension (two levels for FictoCorp), Customer dimension (two levels), expense planning with Entity and Department dimensionality, no allocations for Lab A. The IS, BS, and CF list names — take the defaults. Don't rename them unless the customer has a strong preference. Renaming adds overhead and the defaults are clear enough.
Lab A: Configure FictoCorp 11:30 — 60 min
Your turn. Lab A is FictoCorp Phase 1 — Revenue, OpEx, and Headcount. Use the requirements table in the lab guide. Work through the Application Framework questions, set your hierarchy levels, and generate.
Remember: the name in the question and the name in the Hierarchy Configuration screen must match exactly. When you click Generate, the logs will tell you if something went wrong.
Post-Generation Checklist 12:30 — 30 min
Generation just ran. Now comes the part that actually makes the app usable. A few things you must do before any data loads or demos.
First — those five broken ADO links. Create them manually. Vendor Hierarchy in FP, the four HC links. The Configuration Guide has exact steps. Don't skip this — you'll get ADO load failures that look like data problems but are actually just missing link definitions.
Second — time settings. Set the current period to July in all four models. This gives you a 6+6 layout — six months of actuals, six months of forecast. Much cleaner on screen than 9+3. All models must have the same current period. Version names must be identical across all models too — a mismatch breaks variance reporting in ways that are annoying to debug.
Third — spoke model mapping. Map HC and CE to your workspace. Without this, data cannot flow from those models into FP. Blank OpEx after a HC import almost always means the spoke model isn't mapped.
Lunch 13:00 — 60 min
Data Load via ADO 14:00 — 30 min
Before you can plan anything, data needs to be in the models. ADO handles this. The flow: source files land in ADO, get pushed to the Admin model, you map them to planning structures, then push the planning structures to all spoke models, then load actuals using transformation views that join your source data to those planning mappings.
The source-to-planning mapping is the most important step. Every source department, entity, and account needs to map to a planning counterpart. The Unmapped KPI shows how many haven't been mapped yet — goal is zero before you push anything to the spokes.
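The join a transformation load performs, and where the Unmapped KPI comes from, can be sketched in a few lines. This is illustrative pseudologic, not ADO code; the field and mapping names are made up for the example:

```python
def transform_actuals(rows, account_map, dept_map):
    """Transformation-load sketch: translate source codes to planning
    codes via Admin-style mappings before data can land in a spoke.
    Rows that can't be translated are set aside, like the Unmapped KPI."""
    loaded, unmapped = [], []
    for row in rows:
        acct = account_map.get(row["src_account"])
        dept = dept_map.get(row["src_dept"])
        if acct is None or dept is None:
            unmapped.append(row)  # these should be zero before you push
            continue
        loaded.append({"account": acct, "dept": dept,
                       "amount": row["amount"]})
    return loaded, unmapped
```

The goal-is-zero rule from the slide is the second return value: nothing should come back in `unmapped` before you push structures to the spokes.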
One thing that gets people: load order matters. Hierarchies and master data first, then actuals. If you try to load Trial Balance IS before the planning account list is populated, the load fails with mapping errors. Always hierarchies first.
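The load-order rule amounts to a staged pipeline: every master-data load must succeed before any actuals load runs. A sketch of that sequencing (load names and the runner callback are illustrative, not ADO API calls):

```python
LOAD_STAGES = [
    # Stage 1: hierarchies and master data (direct loads)
    ["Entity Hierarchy", "Department Hierarchy", "Account List"],
    # Stage 2: actuals (transformation loads) depend on stage 1
    ["Trial Balance IS", "Trial Balance BS", "HRIS Actuals"],
]

def run_pipeline(stages, run_load):
    """Run loads stage by stage; stop before the actuals stage if any
    master-data load fails, so you never hit mapping errors downstream."""
    for stage in stages:
        results = {name: run_load(name) for name in stage}
        failed = [name for name, ok in results.items() if not ok]
        if failed:
            raise RuntimeError(f"Stage failed, stopping: {failed}")
```

The design choice is the hard stop between stages: loading Trial Balance IS against an unpopulated account list fails anyway, so it's better to fail loudly at the hierarchy stage.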
Lab B: Full 3-Statement 14:30 — 60 min
Lab B extends FictoCorp to full three-statement planning. You'll add CapEx, Balance Sheet, Cash Flow, and a Geography dimension. Fresh configuration from scratch — don't extend Lab A's workspace.
Two things to watch in Lab B that weren't in Lab A. First, CapEx General Admin setup after generation — entity-to-department mapping, asset type to depreciation account mappings, CWIP account selection. All mappings must be to leaf-level accounts; parent accounts cause silent failures. Second, Balance Sheet admin — manage planning methods, cash offset accounts, cash flow mappings. All three of those must be complete before the Balancing Routine will give you a balanced sheet.
Revenue & COGS 15:30 — 15 min
Planning methods are configured per account per product — different product types can use completely different forecasting logic. Physical products might use Units x Rate. Services might use hours times rate per hour. SaaS subscriptions might use subscriptions times rate per subscription. You configure these in Administration before planners ever see the page.
The key UX behavior: select a product and the grid shows only the inputs for that product's method. Change the product and the grid changes too. Progressive Disclosure — planners only see what's relevant to their current context.
Operating Expenses 15:45 — 15 min
One important change from v1.x: one method per account, applies across all entities and departments. In v1.x you could have different methods per entity for the same account. That's gone. If a customer pushes back on this, that's an extension — it's not complex, but it's net-new line items in the planning module.
Two accounts you'll always want in your demo setup: one using Prior Run Rate — shows the growth rate and adjustment fields; and one using Line Item Detail — shows how costs tracked by vendor or functional area work. The detail is important because line items here appear in the Summary but not on the main planning page — that trips up planners constantly.
Headcount 16:00 — 20 min
The big change from v1.x: job level, not employee level. No names. No SSNs. No salary data tied to individuals. This was a deliberate design choice to remove PII from the planning model — which also means no more segregated secure data hubs for headcount data.
Before planners can enter data, four admin steps: create job metadata, map jobs to departments, set job grade pay bands, map GL accounts. If any of those are incomplete, planners either can't see the jobs or the data won't flow correctly to FP. Setup sequence matters here more than almost any other module.
One demo moment worth preparing: the pay band deviation indicator. Enter a salary that's outside the grade band and show the red deviation flag in the Insights Panel. It doesn't block the planner — it flags for management review. Customers love this. "Finally, a way to see when managers are trying to hire above grade."
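The deviation logic itself is a simple band check, which is worth internalizing before you demo it. A sketch of what the indicator evaluates (function and values are mine, not the app's internals):

```python
def pay_band_flag(salary, band_min, band_max):
    """Flag, but never block, salaries outside the grade's pay band,
    mirroring the Insights Panel deviation indicator."""
    if salary < band_min:
        return "below band"
    if salary > band_max:
        return "above band"
    return None  # within band: no flag shown

# Hiring at 145k into a 90k-130k grade band raises the red flag:
pay_band_flag(145_000, 90_000, 130_000)  # "above band"
```

Note the return value when out of band is informational only; nothing in the planner's workflow is stopped, which is exactly the management-review behavior customers respond to.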
CapEx 16:20 — 10 min
Entity-level only. No department, no geography. This is a hard constraint in v2.0 — if a customer needs to plan CapEx by department, that's an extension. The entity-to-department mapping in General Admin handles distributing depreciation costs to the right P&L dimensions, but the planning input itself is entity-only.
Show the financial impact timeline on an existing asset — the P&L and BS columns update in real time as you change the order date or in-service date. This is the demo hook: "Change the in-service date from March to July and watch how the depreciation expense shifts. The P&L impact is immediate."
Balance Sheet 16:30 — 15 min
The concept that takes the most time to click for new users: inputs here are activity, not balances. You're entering what changed this period — not what the balance is. The system takes the last actual closing balance and adds or subtracts your activity input. Enter nothing and the last actual rolls forward. Enter something and it rolls forward plus your change.
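If the activity-not-balances idea isn't landing, this tiny roll-forward sketch usually makes it click (illustrative logic only; numbers and the blank-means-carry-forward convention are mine):

```python
def roll_forward(last_actual_close, activity_by_period):
    """Each forecast period's closing balance = prior closing balance
    plus that period's activity input. A blank (None) input means the
    balance simply carries forward unchanged."""
    balance = last_actual_close
    closings = []
    for activity in activity_by_period:
        balance += activity or 0
        closings.append(balance)
    return closings

# Last actual close 500; enter +25 in Aug, nothing in Sep, -10 in Oct:
roll_forward(500, [25, None, -10])  # [525, 525, 515]
```

The September value is the teaching moment: entering nothing doesn't zero the balance, it holds it.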
The cash offset accounts setup is new in v2.0. You're telling the system: when this BS account changes, does that increase or decrease cash? Increases in assets reduce cash. Increases in liabilities increase cash. Every account needs a category assigned, or the cash flow statement will be incomplete. Missing categories don't generate errors — they just silently omit cash movements.
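The sign convention, and the silent-omission failure mode, can be sketched like this (category names and the two-category simplification are mine; the app's offset setup has more nuance):

```python
# Offset category determines the sign of the cash impact:
# asset up -> cash down; liability up -> cash up.
CASH_SIGN = {"asset": -1, "liability": +1}

def cash_impact(account_changes, categories):
    """Sum the cash effect of BS account movements. Accounts with no
    category are skipped, mirroring the silent omission in the app."""
    total = 0
    for account, delta in account_changes.items():
        category = categories.get(account)
        if category is None:
            continue  # no error raised; movement just disappears
        total += CASH_SIGN[category] * delta
    return total

# AR (asset) up 100 and AP (liability) up 40 -> net cash -60;
# the uncategorized Accruals movement vanishes from the total:
cash_impact({"AR": 100, "AP": 40, "Accruals": 30},
            {"AR": "asset", "AP": "liability"})  # -60
```

The `continue` branch is the whole warning: an unmapped account doesn't fail, it just understates cash.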
Top-Down Planning 16:45 — 10 min
Quick but important caveat: dynamic top-down disaggregation is not available in v2.0. In v1.x, entering a high-level target would automatically cascade it down. Here, targets are entered manually at Level 2 of each hierarchy. That's a known limitation — it's on the roadmap for a future release.
What's useful now: the TD Summary page. Side by side — what Finance wants at L2 vs. what the teams are planning bottom-up. If there's a $10M gap at the Americas level, it's immediately visible. That conversation with the CFO and the regional VP is the value prop — not the mechanics of how targets were entered.
Reporting & Analysis 16:55 — 15 min
Start with the IS report. Enable suppression — watch the empty accounts disappear. Show the analysis type switcher: Amounts, Year-over-year, Percentage, Percent of Revenue. These are the modes a CFO actually wants. Show the Balance Sheet Validation — is the BS in balance, yes or no. Show the management reporting pack — executive summary, P&L, BS, cash flow, dynamic commentary that updates when plan data changes.
Two things to point out as new in v2.0: Product Sales Outlier Analysis — statistical detection of unusual product sales patterns, no extension required — and Product Correlation Analysis — shows which products move together across entities. Both are base product. That's usually a surprise to people who think statistical analysis requires custom builds.
Admin Runbook 17:10 — 15 min
What does a model administrator actually do every month? Three things: update the current period in Model Settings across all four models, run the ADO pipelines to refresh hierarchies and actuals, and review the Admin model for any unmapped new codes from the source systems.
Most IFP support calls — not all, but most — trace back to the Admin model. Blank report, wrong variance, missing data — start with mappings in Admin. Is every source account mapped? Is every source entity mapped? Is the current period correct? Those three questions resolve 80% of issues.
Currency Translation 17:25 — 10 min
Triangulation. One set of rates: each local currency converts to base, and base converts to any reporting currency via the reciprocal of that currency's rate. The practical implication: adding a new reporting currency is two steps. Add it to the Report Currency list, give it its code. Done. Every report immediately shows data in that currency. No additional rate loads, no module changes.
Show this live. Add Japanese Yen, navigate to the IS Report, select JPY. The data is there. That's the demo moment — customers who've spent weeks configuring currency conversion in a custom build are always stunned by this.
Extensions 17:35 — 15 min
Three rules for extensions that I always give partners. One: don't modify base app line items — create net-new ones. Two: document everything with a naming convention. Three: challenge every extension request — anchor on what the base app already does before you start building.
The most common extension request we see: a custom planning method that drives expense from headcount data instead of generic units. The steps are in the lab guide. The principle applies to any new method — add to the list, configure in the SYS module, add calculation line items to the input module, update the LISS, assign to accounts. Maybe two hours for an experienced model builder.
The hard limit: eight dimensions. Customers who need more than eight require a bespoke extension — that's not negotiable with the Application Framework. Surface this in discovery. It rarely comes up, but when it does it needs to be caught before you commit to a project timeline.
Wrap-Up 17:50 — 10 min
That's the IFP v2.0 workshop. Two labs, seven module walkthroughs, post-gen, ADO, admin, currency, extensions.
Here's what I want you to take away: the Application Framework is not just a faster way to build — it's a structured conversation framework. The configuration questions tell you exactly what you need to discover from a customer. The known issues list tells you what to plan for in your project schedule. The module walkthroughs tell you what to demo and what to qualify.
Contact financeapplications@anaplan.com for support. First three IFP projects, reach out — the Finance Apps CoE wants to be involved. After that, you've got this.
Questions? And — feedback is always welcome. What was useful, what was confusing, what needs more depth.
Lab Debrief Answer Keys
Lab A — Configure FictoCorp
- Name mismatch between question and hierarchy screen: Modules and line items across the model won't be renamed correctly. The hierarchy levels will have inconsistent naming — some using the question response name, others using the hierarchy screen name. Best case: cosmetic annoyance. Worst case: downstream formulas that reference specific module names break silently.
- Why Option A for HC, Option B for CapEx: FictoCorp has ~500 employees and needs job-level workforce cost visibility — Option A is the right choice. For CapEx we intentionally kept Lab A's scope simple: GL-level planning means no asset tracking and no individual depreciation schedules. That's appropriate for a first pass; Lab B expands to full 3-statement scope with the dedicated CE model.
- Adding Geography after generation: Would require re-generating the application. Geography is used in Revenue/COGS, OpEx, and Top-Down — all three modules would need to be regenerated. All post-generation customizations done so far would need to be reapplied. This is why dimension decisions must be made before generating, not after.
- Minimum/Maximum hierarchy levels: Minimum 2, maximum 8 per model. The hierarchy screen shows the combined count across all selected models — Entity at 3 levels in FP + HC + CE = 9 total shown.
- Why CE generates even with GL-level CapEx: The Application Framework cannot exclude a model from generation — it always generates all four. The CE model generates, then you delete it manually after. This is a documented limitation. Don't count on it being resolved soon.
Lab B — Full 3-Statement
- Cash Offset Accounts — purpose and consequence of missing: Every BS account change that affects cash must be categorized: does a change in this account increase or decrease cash? Missing category = that account's movement is not captured in the cash flow statement. The BS may balance, but the CF statement will be understated. Doesn't generate an error — just silently omits cash movements. Review every BS account in the offset mapping before running the Balancing Routine.
- Balancing Routine — why sequential, not all at once: Retained earnings roll forward from each period into the next. July's retained earnings are an input to August's opening balance. The system can't calculate August until July is complete. Month-by-month is the only way to maintain the roll-forward integrity. This also means the Balancing Routine must run every period after new BS activity is entered — not just once at setup.
- CapEx by department — customer options: (1) Use the entity-to-department mapping in CapEx General Admin for simple proportional allocation — this distributes depreciation to P&L dimensions but the planning input remains entity-level. (2) Build a custom extension: add a Department dimension to the CapEx input modules. Estimated effort: one to two days of model builder time. (3) Use Option C (external model) if the customer's ERP system tracks CapEx by department and they just need summary data in IFP.
- Geography 2 levels vs 3 levels: 2 levels (All Regions → Americas/EMEA) is appropriate for FictoCorp's simple entity structure — two entities map to two regions cleanly. 3 levels would add a sub-region tier (e.g., Americas → US East / US West / LatAm) — useful if revenue planning needs that granularity but adds dimensionality across Revenue/COGS and OpEx. General rule: fewer levels is better for performance and UX unless the business genuinely plans at that detail.
- Direct Load vs Transformation Load: Direct Load = source file loads as-is, no transformation needed. Used for hierarchies, flat lists, reference data that maps directly to the model structure. Transformation Load = source data must be joined to planning mapping data before it can land in the spoke model. Used for actuals (Trial Balance IS/BS, HRIS data) where source account codes need to be translated to planning account codes via the Admin model mappings.