Most enterprises have process documentation. It lives in Word documents, PDFs, email threads, and team wikis. The problem is that none of it is machine-readable. When an AI agent encounters your legal review process described as prose in a Word doc, it has to infer structure from text, guess what step comes next, and interpret ambiguous language. This is where AI agent implementations fail: not because the model is weak, but because the process is not structured in a way the model can reliably execute. Building machine-readable processes is not about better documentation. It's about converting implicit, human-readable processes into explicit, queryable systems. This is the bridge between "we have documentation" and "AI can actually use it."
The three key points:
- 70% of AI implementation problems come from people and processes, not technology (BCG 2025). The core problem is not your AI model. It's that your processes are not in a form that AI can read and execute reliably.
- 60-80% of enterprise processes are undocumented or only informally documented. When documentation does exist, it's usually human-readable descriptions, not structured data. This makes it invisible to AI agents.
- The difference between a failed AI pilot and a successful deployment is process architecture. 95% of enterprise AI pilots delivered zero P&L impact (MIT Gen AI Divide 2025). The ones that succeeded had one thing in common: processes were machine-readable, connected, and executable.
The difference between human-readable and machine-readable documentation
Human-readable documentation is written for people. It uses narrative language, contextual explanations, and prose descriptions. It's designed for someone to read, understand, and then execute the process using their own judgment. Machine-readable documentation is structured data. It has defined fields, fixed relationships, and explicit decision logic. It's designed for a machine to parse, query, and execute without interpretation.
This is not a small difference. It's a fundamental shift in how processes are represented.
Example 1: Legal contract approval process
Human-readable version (how it currently exists in most enterprises): "When a contract comes in from a vendor, the contract must be reviewed by the Legal department. The Legal team checks if the contract meets our standard terms. If it doesn't, they send it back to the requester with requested changes. If it meets our terms, it goes to Finance for budget review. Finance checks if the budget line item is approved. If not, it gets rejected. If it is approved, it goes to the VP of Operations for final sign-off."
This is clear to a person who understands business context. But it's ambiguous to an AI agent. What does "meets our standard terms" mean? Is there a checklist? What exactly triggers "send back to requester"? What if the VP doesn't respond in two weeks?
Machine-readable version (structured Notion database):
- Process: "Vendor Contract Approval"
  - Step 1: "Legal Review"
    - Actor: Legal Department
    - Condition: All contract fields populated and document attached
    - Action: Evaluate against Legal Checklist (relation to standards database)
    - Decision Points: [Meets Standards, Needs Revision, Rejected]
      - If Meets Standards, go to Step 2
      - If Needs Revision, send notification to Requester with revision list, return to Step 1
      - If Rejected, send notification to Requester with reason, end process
  - Step 2: "Finance Review"
    - Actor: Finance Director
    - Condition: Budget line item provided and approved in budget database (related field)
    - Action: Confirm budget allocation matches request
    - Decision Points: [Approved, Rejected]
      - If Approved, go to Step 3
      - If Rejected, notify Requester with reason, end process
  - Step 3: "VP Approval"
    - Actor: VP of Operations
    - Condition: Strategic importance category assigned
    - SLA: 3 business days
      - If no response in 3 days, escalate to Chief Operating Officer
    - Decision Points: [Approved, Rejected]
      - If Approved, send signed contract to Vendor, update status to "Complete"
      - If Rejected, notify Requester with reason, end process
In the machine-readable version, there is no ambiguity. Every step has an actor, a condition, a decision point, and explicit next steps. An AI agent can read this and execute it.
Example 2: Employee onboarding
Human-readable (current enterprise version): "When a new employee is hired, HR creates an account and sends the welcome packet. IT gets the account request and sets up the computer. The manager schedules a kickoff meeting."
This is missing critical details. What does "sets up the computer" include? Does it cover software licenses? When exactly does the welcome packet get sent?
Machine-readable version:
- Step 1: "Account Creation" (triggered by hire date)
  - Actor: HR System
  - Condition: Offer accepted and signed offer in system
  - Action: Create employee record, assign employee ID, create email account
  - Automation: Send welcome email to personal email address
  - Next Step: Step 2
- Step 2: "IT Onboarding Request" (triggered by account creation)
  - Actor: IT Service Desk
  - Condition: Employee record created
  - Action: Create ticket from template (hardware and software list based on department)
  - Items: Laptop, monitors, keyboard, mouse, phone, software licenses, VPN access, network drive
  - SLA: Complete by day 1
  - Next Step: Step 3
The machine-readable version leaves nothing ambiguous. An AI agent can trigger each step. A human can audit the process by checking whether the steps actually happened.
Why most enterprise documentation fails AI agents
Problem 1: Documentation is narrative, not structured
Process documentation is usually written as prose. "After the contract is received, the Legal team reviews it for compliance. If compliant, it goes to Finance." This narrative is clear to a human but opaque to an AI agent. Machine-readable processes eliminate interpretation. There is no narrative. There are fields and conditions. "Compliant" becomes "passes_legal_checklist = true." No ambiguity.
Problem 2: Dependencies are implied, not explicit
In human-readable documentation, dependencies between steps are often implied through the narrative. In machine-readable documentation, dependencies are explicit relationships in the database. An automation triggers Finance review when Legal approval = true. There is no human judgment, no forgetting, no delay.
Problem 3: Decision logic is unclear
Human-readable processes often include conditionals that are not fully specified. "If the contract is too risky, it goes back to the vendor." But what makes a contract "too risky"? Machine-readable processes specify decision logic explicitly: "If risk_score > 8, status = rejected."
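The shift from "too risky" to an explicit rule fits in a few lines. A sketch assuming a numeric `risk_score` field on the contract record, using the threshold of 8 from the example above:

```python
def contract_decision(risk_score: float) -> str:
    """Explicit decision logic: no judgment call, no interpretation.

    The field name and threshold mirror the example in the text; a real
    process would pull both from the process database, not hard-code them.
    """
    return "rejected" if risk_score > 8 else "approved"
```

Two different agents (or an agent and a human auditor) evaluating the same contract will always reach the same decision.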
Problem 4: Execution responsibilities are fuzzy
Human-readable documentation often says "Finance reviews the budget" without specifying who in Finance, by what date, using what criteria. Machine-readable documentation specifies the actor (Finance Director, specifically), the SLA (3 days), and the escalation path.
Problem 5: Error conditions are not documented
Human-readable processes often describe only the happy path. Machine-readable processes specify what happens in every case, including rejections, timeouts, and missing data.
The 5-step framework for converting processes to machine-readable format
Step 1: Map the current process (as it actually works)
Start by documenting how the process actually works now, not how it's supposed to work. Talk to the people who run it. Ask them what they do, in order. Ask them what they do when something goes wrong. Document any manual work. Who sends emails? Who checks spreadsheets? Who makes judgment calls?
Step 2: Identify all actors, conditions, and decision points
For each step, write down three things:
- Actor: Who or what does this step? (A person, a system, an automation)
- Condition: What must be true for this step to happen?
- Decision Point: What happens at the end of this step?
Step 3: Design the Notion database structure
Convert this into a Notion structure. You need:
- A process steps database (one row per step) with fields for step name, actor, condition, action, decision options, SLA, next step
- A decision logic database (one row per possible decision)
- A process tracking database (one row per instance of the process)
Connect these databases through relations. Each process instance relates to the current process step. Each step relates to the possible decisions.
Step 4: Create templates and automations
Create a template for process instances. Set up automations:
- When a step's status changes to "complete," notify the next actor
- When an approval is made, update the process instance's current step
- When an SLA is about to expire, send a reminder
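The three automations above share one shape: a trigger condition plus an action. A minimal sketch with hypothetical names, where actions are returned as messages rather than sent as real notifications:

```python
def on_status_change(step: dict, new_status: str) -> list[str]:
    """Fire the first two automations when a step's status changes."""
    actions = []
    if new_status == "complete" and step.get("next_actor"):
        actions.append(f"notify {step['next_actor']}")
    if new_status == "approved":
        actions.append("advance process instance to next step")
    return actions

def on_sla_check(days_remaining: int) -> list[str]:
    """Third automation: remind the actor when an SLA is about to expire."""
    return ["send SLA reminder"] if days_remaining <= 1 else []
```

In Notion these would be database automations or API-driven triggers; the point of the sketch is that each one is a condition on a field, not a person remembering to follow up.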
Step 5: Test and iterate
Run the process in Notion before handing it to AI agents. Ask the team: "Does this match how we actually do it?" Ask them: "If an AI agent followed these steps exactly, would the process work?" Once the team confirms it's accurate, you can give it to an AI agent.
Comparison: human-readable vs. machine-readable process elements
| Element | Human-Readable | Machine-Readable |
|---|---|---|
| Description | Narrative paragraph or bullet points | Structured database fields with defined types |
| Step Definition | "Approver reviews the document" | Actor: [Finance Director], Condition: [Budget Approved = True], Decision: [Approved, Rejected] |
| Condition | "If the request is reasonable" | If [Request Amount] is less than or equal to [Budget Remaining] |
| Decision Point | "Either approve or send back for revision" | If [Decision] = "Approved", go to Step 4. If [Decision] = "Rejected", notify requester |
| SLA | "Should be done quickly" | [SLA Days] = 3, escalate to VP if no response by day 3 |
| Next Step | "Usually goes to Finance next" | [Next Step] relates to Finance Review (explicit database relation) |
| Error Handling | Not usually documented | If [Status] = "Rejected", perform [Escalation Action] and notify [Escalation Recipients] |
| Tracking | Manual tracking or email cc lists | [Process ID] relates to [Step ID] with [Timestamp] and [Status] |
| Automation | Manual action | Automated trigger: When [Previous Step Status] = "Complete", change [Current Step] to "In Progress" |
| Auditability | Limited | Every change logged with timestamp, user, and change summary |
Walkthrough: converting a 3-department approval workflow
A company needs to approve a new software purchase. Currently, the department head submits a request via Slack DM to Finance, Finance checks a budget spreadsheet and emails IT, IT checks compatibility, Finance compiles the responses for the CFO, and the CFO makes the decision. Actual turnaround: 2-10 days, with no formal tracking.
Step 1: Identify actors, conditions, decision points
- Step 1: Department Head submits. Condition: Request form completed. Output: Request record created.
- Step 2: Finance reviews. Condition: [Request Amount] does not exceed [Budget Remaining]. Decision: Approve/Reject. SLA: 1 day.
- Step 3: IT reviews. Condition: Finance approved. Decision: Compatible/Not Compatible/Needs Testing. SLA: 2 days.
- Step 4: CFO approval. Condition: Finance and IT approved. Decision: Approved/Rejected. SLA: 2 days.
Step 2: Design Notion structure
- Software Requests database: Request ID, Requestor, Department, Software Name, Version, Cost, Current Step, Status
- Finance Reviews database: Request ID (relation), Amount Requested, Budget Available, Decision, Decision Date
- IT Reviews database: Request ID (relation), Compatibility Status, Testing Required, Decision
- CFO Reviews database: Request ID (relation), Strategic Fit, Decision, Decision Date
Step 3: Automations
- When Finance Review is submitted → email IT Director with link to IT Review form
- When IT Review is submitted → update Request to "CFO Review" and email CFO
- When CFO approves → email department head confirming purchase, create Finance task
The result: a process that previously took 2-10 days with uncertain status now takes 5 days with clear milestones and automatic handoffs. When you hand this to an AI agent, the agent can trigger Finance review automatically, query the IT compatibility database, and log everything.
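Once the walkthrough is in structured form, even basic questions become queryable. A sketch summing the per-step SLAs from the walkthrough to get the worst-case end-to-end time (field names are illustrative):

```python
# Review steps and SLAs from the walkthrough, as rows an agent could query.
STEPS = [
    {"name": "Finance Review", "actor": "Finance Director", "sla_days": 1},
    {"name": "IT Review", "actor": "IT Director", "sla_days": 2},
    {"name": "CFO Approval", "actor": "CFO", "sla_days": 2},
]

def worst_case_days(steps: list[dict]) -> int:
    """End-to-end duration if every step uses its full SLA allowance."""
    return sum(s["sla_days"] for s in steps)
```

Here `worst_case_days(STEPS)` yields 5, which is where the 5-day figure comes from: it is no longer an estimate, it is arithmetic over the process definition.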
Common mistakes when building machine-readable processes
Mistake 1: Keeping the process too close to how it works now
Your current process includes workarounds and informal steps that accumulated over time. Use the conversion as an opportunity to fix them: if people catch errors by manually checking email, replace that workaround with an explicit, automated error path.
Mistake 2: Not documenting decision criteria
Not "if the contract is good," but "if contract_review_status = 'pass_legal_checklist' and contract_value does not exceed authorization_limit." If the decision criteria are ambiguous, the process fails.
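The contrast can be written directly as a predicate. A sketch using the field names from the sentence above (`contract_review_status`, `contract_value`, `authorization_limit`):

```python
def contract_approvable(contract_review_status: str,
                        contract_value: float,
                        authorization_limit: float) -> bool:
    """Unambiguous criteria: both conditions hold, or the contract fails."""
    return (contract_review_status == "pass_legal_checklist"
            and contract_value <= authorization_limit)
```

There is no "good contract" for an agent to interpret: the function either returns True or it does not.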
Mistake 3: Forgetting error paths
Document what happens when something goes wrong: what happens when an approval is rejected, what happens when a required field is missing, and what happens when an SLA is missed.
Mistake 4: Not specifying who is responsible
"The approval step" is not an actor. "The VP of Finance" is an actor. Be specific about which person or role.
Mistake 5: Designing for a process that doesn't exist yet
Convert your current process to machine-readable format first. Then, once you can see it clearly in Notion, you'll be in a much better position to improve it.
Mistake 6: Not giving AI agents access to the data they need
If the process depends on checking a budget database, the agent needs access to that database. Build your process with the assumption that an AI agent will execute it.
Frequently Asked Questions
What's the difference between a machine-readable process and just better documentation?
Better documentation is still human-readable narrative. Machine-readable is structured data with defined fields, decision trees, and explicit relationships. The difference is like a paragraph describing a recipe versus a structured recipe with ingredient list, step numbers, temperature, and yield. One is a narrative. One is machine-executable.
Do I need to rebuild all my processes in Notion right now?
No. Start with your most critical 5-10 processes that consume the most time, have the highest error rates, or have the most stakeholders. Convert those first. Then expand incrementally.
Can I convert a process to machine-readable format if it's not fully documented?
Yes, that's the common case. Map the process as it actually works by talking to the people who run it. Structure it in Notion. It won't be perfect, but it will be clear, auditable, and executable.
What if my process has judgment calls that an AI agent can't make?
Document those judgment calls explicitly. Have the AI agent prepare the information and hand it to a human for the judgment call. The judgment stays with the human. The machine-readable process documents where human judgment is required and automates everything else.
How do I know if my process is machine-readable enough for AI agents?
Test it. Have someone unfamiliar with the process read your Notion structure. Can they execute it exactly as written without clarifying questions? If yes, it's machine-readable. If not, keep refining.
Should AI agents execute processes in Notion directly, or should Notion feed data to other systems?
Both, depending on the process. Decision-heavy processes work well when executed in Notion. Integration-heavy processes work better when Notion is the control center and automations trigger actions in other systems.
The Nor & Int approach
Converting processes to machine-readable format is not a technology problem. It's an architecture problem. Nor & Int builds machine-readable processes as the operational foundation for your AI strategy. This means designing your Notion architecture so that processes are simultaneously clear to humans and executable by AI. We map your actual processes, identify where machine-readable structure adds value, and implement that structure as a connected system. We don't hand you documentation. We hand you a system where processes live, where AI agents can execute them, and where you have complete visibility into what's happening.
This article was created with the assistance of artificial intelligence.
The AI Operating System
Process architecture → Agent deployment → Governance. 90 days.