
Context makes all the difference: What a Fortune 500 contracts team taught us about making AI actually work

The most important asset companies will have in the AI era is context: a durable record of how organizations actually get work done.

Reading time: ~10 min

Word count: ~2,000

Prompt: “Draft a supplier agreement for Northstar Logistics.”

Supplier Contract (Informed by ACME Corp Context)

Commercial Status & Pricing

Supplier is designated as a Strategic Partner (Level 1 Priority). Products shall be invoiced at Tier-3 volume pricing applicable to suppliers with annual spend exceeding $5 million. Pricing includes the rebate structure applied to strategic suppliers at the same volume tier under the 2024 Northstar benchmark agreements, including identical volume breakpoints, rebate percentages, and quarterly true-up mechanics. These terms reflect the Company’s standard pricing treatment for suppliers with comparable spend profiles.

Performance, Risk & Enforcement

Given documented fulfillment variance exceeding 5 percent in the first quarter, payment terms shall remain Standard Net 30, with no early payment concessions. To address current performance risk, the non-compliance penalty is set at 15 percent, consistent with enforcement terms applied to recent strategic supplier agreements where similar delivery issues were observed. Penalty thresholds and cure periods follow the Company’s established approach for suppliers with comparable performance risk.

Contextualized answer

Sequential workflow steps

  • Oracle ERP (Procurement Manager verified pricing): Logged into Oracle ERP to confirm the latest volume tiers and checked whether the supplier was eligible for rebates.
  • Salesforce (Sales Lead reviewed history): Checked Salesforce records to understand the commercial relationship, deal history, and active negotiations.
  • Ironclad CLM (Legal Counsel benchmarked deals): Searched the Ironclad repository for comparable past agreements to ensure consistency with organizational precedent.
  • Tableau / OMS (Analyst identified risk signals): Investigated Tableau dashboards to flag recent fulfillment breaches that required stricter contract penalties.
  • Slack (Finance VP granted exception): Traced Slack threads to confirm VP-approved deviations and special terms.
  • Notion (Contract Manager noted strategy): Documented the negotiation strategy and conflict-resolution logic in a Notion side note.

Caption: How execution traces of a team's workflow yield the precise context needed to produce the right contract.

In our April 2025 Harvard Business Review article, we examined a real deployment with a Fortune 500 retailer inside its supplier contracts workflow: the team responsible for drafting and negotiating high-volume commercial agreements. While this work appears to be simple document generation, it actually depends on navigating exceptions, precedents, approval paths, and unwritten rules that govern how terms are negotiated and approved. Each contract reflects institutional judgment, past decisions, exception logic, and off-system approvals that rarely exist in any system of record. By deploying ContextFabric as a context backbone and capturing execution and decision traces across custom applications, email, Slack, deal desk reviews, and approval calls, we made that judgment legible to AI agents. This case shows why enterprise agents will not deliver real ROI from rules or static data alone; they need persistent execution-time context.

  1. The Core Failure Mode: AI That Writes Faster but Doesn’t Reduce the Real Work

A Fortune 500 retailer deployed an AI tool powered by a widely used LLM to help its contracts team draft and negotiate supplier agreements. The system could summarize prior contracts, compare clauses, answer legal questions, and generate a draft in seconds. 

 

The introduction of the AI tool changed the workflow, but for the worse. The AI-generated contracts were generic and lacked the specifics unique to each contract and to the team and organization. Users therefore had to rework each contract: gathering more data from various systems, making decisions again, and weaving it all together. And because each contract is unique, the rework became a constant feature. The end result was that the AI-generated contract itself became a bottleneck.

 

This happened because the AI failed to leverage a key advantage: the way people ensured accuracy. Before generating each contract, people still manually pulled the required data from multiple systems and verified it, effectively serving as the verification layer that maintained tight controls over information quality, as mandated by finance and legal.

 

Because the AI couldn’t access or incorporate that verification context, it produced fluent but generic drafts that didn’t materially improve outcomes for the contracts team.

  2. Why the AI Output Became a Bottleneck

A typical AI-generated draft looked like this:

 

“Supplier shall provide products in accordance with agreed pricing and delivery schedules. Payment terms shall be Net 30 unless otherwise specified. Any deviations require written approval.” 

The language was fine. The problem was everything it did not know.

 

Before a draft was usable, the team had to:

  • Insert supplier-specific rebate structures negotiated in prior quarters.
  • Adjust terms based on volume thresholds and seasonal demand.
  • Account for delivery issues that triggered tighter SLAs.
  • Apply informal exceptions granted by procurement or finance.
  • Ensure consistency with how similar suppliers were handled recently.

 

None of that context appeared in the draft, and none of it could be inferred from contract text alone.

  3. The Real Workflow: Where the Time Actually Goes

The contracts team’s work followed a repeatable but highly variable process:

  • Log into procurement and ERP systems to confirm current pricing, volume commitments, and rebate eligibility.
  • Check CRM and account records to understand the commercial relationship, deal history, and active negotiations.
  • Review prior comparable agreements to assess precedent and ensure consistency with similar suppliers.
  • Examine order and fulfillment history to identify risk signals that influence SLAs, penalties, or termination clauses.
  • Validate exceptions and approvals by tracing who authorized deviations, under what conditions, and whether they still apply.
  • Synthesize conflicting inputs across systems and stakeholders into a defensible contract position.

Each step exists for a reason: to reduce risk, enforce policy, and preserve institutional consistency. Yet execution traces of how people do this work are not captured or stored in any system of record. This work is dynamic. Nuanced. Specific. People spend most of their time not writing contracts, but assembling and reconciling the context that determines what the contract should say.

  4. Why Historical Contracts Don’t Solve This

It is tempting to view this as a document-generation problem. Feed the model enough past contracts and let it learn the pattern. That framing is incomplete.

 

Every contract is shaped by a unique combination of factors:

  • The supplier’s negotiation history.
  • Exceptions granted in prior deals, sometimes in different accounts.
  • Changes in policy, risk posture, or market conditions.
  • Informal norms about what is acceptable this quarter versus last.
  • Decisions made outside systems, then reflected retroactively.

 

The words in a contract are downstream of this context. Training or retrieving from past contracts mostly improves tone, clause structure, and phrasing. It does not teach the model how people determine which inputs matter, which systems are authoritative, when precedent applies, or how conflicts are resolved. People still have to gather and synthesize that context manually across systems, teams, and judgment calls. 

  5. The Real Bottleneck: Context Gathering and Synthesis

This mirrors what we see in software engineering. Code generation helps individual developers move faster, but overall delivery remains constrained by people aligning on requirements, reconciling feedback, and adapting to shifting priorities. The bottleneck is not typing. It is shared understanding.

 

Contracts work the same way. Language generation accelerates a narrow step, but people remain responsible for assembling context across tools, stakeholders, and precedent. Until that work is captured and delivered at runtime, AI cannot meaningfully change end-to-end productivity.

 

Crucially, context is dynamic: it changes for each contract. Collecting and understanding it is therefore not a static, one-time effort. Context is an execution-time (run-time) input that the contract agent must continuously consume to produce the right contract.

  6. The Unlock: Learn How the Work Actually Happens (Context), Then Feed That to the Model

The breakthrough came when the team stopped treating contract drafting as a document-generation task and instead modeled the entire supplier contracting workflow end-to-end. Rather than operating on static documents, ContextFabric provided a context backbone: a live, execution-time representation of how agreements are negotiated across systems, stakeholders, and decisions. 

 

As the team worked, ContextFabric captured execution and decision traces in real time: which systems were consulted, which data was authoritative, which policies were evaluated, where exceptions applied, who approved deviations, and how conflicts were resolved. This context emerged from ERP systems, procurement tools, CRM, Slack threads, deal desk reviews, and approval calls. It was captured as part of normal work and implicitly validated by the team’s actions. 
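
To make this concrete, here is a minimal sketch (in Python) of what one captured execution trace event might look like. The TraceEvent shape, its field names, and the example values are our illustration only, not ContextFabric's actual schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class TraceEvent:
        """One observed step in a workflow (hypothetical schema, for illustration only)."""
        subject: str                     # what the step concerns, e.g. a supplier name
        actor: str                       # who performed the step
        system: str                      # which system was consulted
        action: str                      # what was done there
        data_refs: list[str]             # records treated as authoritative in this step
        decision: str | None = None      # decision or exception outcome, if any
        approved_by: str | None = None   # approver for deviations, if any
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Example: the Slack-approved exception from the workflow illustrated earlier.
    event = TraceEvent(
        subject="Northstar Logistics",
        actor="finance_vp",
        system="slack",
        action="granted_exception",
        data_refs=["thread:northstar-penalty-terms"],  # hypothetical record reference
        decision="set non-compliance penalty at 15 percent",
        approved_by="finance_vp",
    )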

 

Critically, not all context is useful at once. Just as people surface only the facts and precedents that matter for a given decision, ContextFabric organized execution traces into a governed context library and selected the relevant slice for each generated contract. Rather than overwhelming the model with the full body of observed execution history, ContextFabric delivered only the supplier- and workflow-relevant context, including the precedents, policies, and performance signals that actually informed the decision. This ensured the AI received the right context, grounded in approved precedent, active policy, and current performance signals, and could operate with the same situational awareness that people previously assembled manually. 
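
Building on the hypothetical TraceEvent sketch above, slice selection from a governed library might look like the following. The filter criteria, a subject match plus a recency window, are assumptions for illustration, not the product's actual retrieval logic.

    from datetime import datetime, timedelta, timezone

    def select_context_slice(
        library: list[TraceEvent],
        supplier: str,
        recency: timedelta = timedelta(days=180),  # assumed relevance window
    ) -> list[TraceEvent]:
        """Return only the events relevant to this supplier, most recent first,
        rather than the full body of observed execution history."""
        cutoff = datetime.now(timezone.utc) - recency
        relevant = [e for e in library if e.subject == supplier and e.timestamp >= cutoff]
        return sorted(relevant, key=lambda e: e.timestamp, reverse=True)

    # The governed library holds everything captured org-wide; the model sees
    # only the slice for the contract at hand.
    northstar_slice = select_context_slice([event], "Northstar Logistics")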

  7. What Context Means in Practice

For each supplier negotiation, the model received a structured slice of execution-time context, including:

 

  • Supplier economics such as spend tier, volume commitments, rebates, and price concessions.
  • Commercial precedent from comparable suppliers approved in recent months.
  • Policy state including escalation thresholds and pricing guardrails.
  • Exception history captured from approvals and deal desk notes.
  • Performance signals such as fulfillment misses that warranted tighter SLAs.

 

Context here is not raw data or long-term memory. It is a situational record of how steps are executed, the relevant semantic information connected to these steps, and how decisions are made in a specific negotiation.
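
As an illustration, the categories listed above could be assembled into a single structured payload per negotiation, as sketched below. The ContextSlice shape is our guess at what such a slice might contain, populated with details from this article's example; it is not ContextFabric's data model.

    from dataclasses import dataclass

    @dataclass
    class ContextSlice:
        """Structured execution-time context for one negotiation (illustrative shape)."""
        supplier_economics: dict      # spend tier, volume commitments, rebates, concessions
        commercial_precedent: list    # comparable agreements approved in recent months
        policy_state: dict            # escalation thresholds, pricing guardrails
        exception_history: list      # approvals and deal desk notes
        performance_signals: dict    # e.g. fulfillment misses that warrant tighter SLAs

    # Hypothetical values drawn from this article's running example.
    northstar_context = ContextSlice(
        supplier_economics={"spend_tier": "Tier-3", "annual_spend_usd": 5_000_000},
        commercial_precedent=["2024 Northstar benchmark agreements"],
        policy_state={"policy": "Procurement Policy v3.2", "sla_floor": 0.985},
        exception_history=["VP-approved penalty exception (Slack thread, Q1)"],
        performance_signals={"q1_fulfillment_breaches": 2},
    )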

  8. How That Changed the Output

With this context, the AI produced materially different drafts (shortened sample output):

“Pricing reflects the Tier-3 volume rebate approved on March 12 under Procurement Policy v3.2, consistent with precedent set in the Acme and Northstar agreements. Given two Q1 fulfillment breaches, delivery SLAs are tightened to 98.5 percent, with penalties aligned to the VP-approved exception granted for strategic healthcare suppliers. Payment terms remain Net 30. …”

The draft encoded precedent, policy, and supplier-specific risk. It reflected what people previously had to stitch together manually.
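
One plausible way such a slice reaches the model is by folding it into the drafting prompt. The render_prompt function below is purely illustrative; its name and prompt format are ours, and ContextFabric may deliver context through entirely different channels.

    def render_prompt(supplier: str, ctx: ContextSlice) -> str:
        """Fold the structured slice into the drafting instruction (sketch only)."""
        return (
            f"Draft a supplier agreement for {supplier}.\n"
            f"Pricing and economics: {ctx.supplier_economics}\n"
            f"Precedent to follow: {', '.join(ctx.commercial_precedent)}\n"
            f"Active policy: {ctx.policy_state}\n"
            f"Approved exceptions: {', '.join(ctx.exception_history)}\n"
            f"Performance risk to reflect: {ctx.performance_signals}\n"
        )

    # The resulting prompt carries the precedent, policy, and risk signals that
    # previously had to be stitched together by hand.
    prompt = render_prompt("Northstar Logistics", northstar_context)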

  9. What Changed End-to-End and Where the ROI Came From

Once the AI had full situational context, benefits extended across the entire contract lifecycle. Fewer revisions were needed. Approval chains shortened. Supplier negotiations moved faster with less friction. 

 

Measured outcomes included:

  • Contract cycle times compressed by weeks.
  • Manual drafting and context assembly reduced by more than 50 percent.
  • Throughput increased by nearly 30 percent without additional headcount.
  • Fewer SLA breaches and less commercial leakage.
  • Improved supplier relationships due to clearer, precedent-aligned terms.

 

The ROI came from reducing rework, preventing leakage, and tightening the feedback loop between supplier performance and contract terms.

  10. Why This Required a Different Model of AI

Optimizing isolated subtasks was not enough. Copilots and RPA automated fragments of work but did not take the entire workflow's context into account. That, among other factors, made them brittle.

 

This case shows that AI must be grounded in full workflows. When execution-time context is captured and delivered through a context backbone, agents stop behaving like macros and start contributing meaningfully to end-to-end processes. This kind of context is specific to each team in each organization. It is not easily replicable and does not generalize, because it lives in the specific work patterns, sequences of activities, and applications that make each team unique.

  11. Why This Matters for Agents

Agents fail when they optimize isolated tasks while leaving the real work untouched. The truth is, most people in the enterprise operate as the glue between siloed systems.

 

This bottleneck is visible across the enterprise. In software, developer velocity has increased, but delivery remains constrained by product and project management. As output accelerates, organizations respond by adding more coordination layers, not by removing the bottleneck. 

 

The same pattern exists in contracts, finance, support, and operations. Without execution-time context, agents remain brittle and dependent on people. Context is not just knowledge. It is operating intent. 

  12. Why Digital Experiences Are the Missing Layer

The context that made this work did not live in a single system. It lived in the digital experience of work itself: cross-app sequences, judgments, handoffs, exception handling, and the steps the team performs routinely.

 

Unlocking this does not require re-architecting workflows. It requires understanding what already happens and treating enterprise work as multi-player, cross-system, and judgment-heavy. 

 

That is why we built ContextFabric. It learns from digital experiences as work happens, governs that context, and delivers the right slice to AI agents at execution time. The result is agents that act with real situational awareness and work reliably in production.

© Workfabric AI

Want smarter, faster, and more cost-efficient agents? 

See how ContextFabric gives your AI agents the business context they need to perform like experts.

Book a Demo
