Getting Started with Docubat

Welcome to Docubat! This guide will help you create your first audit and understand how to use our platform to ensure your documentation is AI-ready.

What is Docubat?

Docubat is an automated testing platform that verifies whether AI models can successfully understand and implement code based on your documentation. Think of it as "AI-safety testing" for your documentation: as more developers rely on AI assistants, it confirms that your docs work seamlessly with those tools.

Quick Start

1. Sign Up and Create Your Account

  1. Visit console.docubat.app
  2. Create your account and verify your email
  3. You'll receive $5 in free credits to get started

2. Create Your First Audit

An audit tests whether AI models can successfully implement your documentation across different programming languages. Here's how to set one up:

Step 1: Define Your Task

  • Task Description: What do you want the AI to accomplish? (e.g., "Implement user authentication using our API")
  • Expected Output: What should the final result look like?

Step 2: Configure Coverage

  • Programming Languages: Select which languages to test (Python, JavaScript, TypeScript, Java, Go, etc.)
  • AI Models: Choose which AI models to test against (GPT-4, Claude, etc.)
  • Documentation URLs: Provide links to your documentation
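The task and coverage settings above can be pictured as a single configuration. The sketch below is purely illustrative: every field name is an assumption for this example, not Docubat's actual schema.

```python
# Hypothetical audit configuration -- field names are illustrative,
# not Docubat's actual schema.
audit = {
    "task": "Implement user authentication using our API",
    "expected_output": "A working login flow with proper error handling",
    "languages": ["python", "javascript", "go"],
    "models": ["gpt-4", "claude"],
    "doc_urls": ["https://docs.example.com/auth"],  # placeholder URL
}

# Each audit run covers every language/model pair:
combinations = [(lang, model)
                for lang in audit["languages"]
                for model in audit["models"]]
print(len(combinations))  # 3 languages x 2 models = 6 runs
```

Note that adding a language or a model multiplies the number of runs rather than adding one, which matters when budgeting credits.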

Step 3: Set Up Authentication (Optional)

If your API requires authentication for testing:

  • Provide test API keys or credentials
  • These are used securely during audit execution

Step 4: Run Your Audit

Click "Run Audit" and wait for results. A typical audit takes 5-15 minutes depending on the number of language/model combinations.
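Because audits run asynchronously, a script that kicks one off typically polls until it finishes. The sketch below shows only the pattern; `fetch_status` is a stand-in for whatever status check Docubat's console or API actually provides.

```python
import time

def wait_for_audit(fetch_status, poll_interval=30, timeout=15 * 60):
    """Poll fetch_status() until the audit reaches a terminal state.

    fetch_status is a stand-in for a real status check; here it is
    assumed to return "queued", "running", "succeeded", or "failed".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("audit did not finish within the timeout")

# Example with a fake status source that finishes on the third check:
states = iter(["queued", "running", "succeeded"])
print(wait_for_audit(lambda: next(states), poll_interval=0))  # succeeded
```

The 15-minute default timeout matches the upper end of a typical audit; a larger language/model matrix would warrant a longer one.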

3. Understanding Your Results

Your audit results will show:

  • Success Rate: Percentage of language/model combinations that succeeded
  • Detailed Breakdown: Which specific combinations worked or failed
  • Generated Code: The actual code each AI model produced
  • Error Analysis: Why certain attempts failed
  • Cost Summary: Credits used for the audit
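As a rough illustration of how the success rate and breakdown fit together: each language/model combination either passes or fails, and the rate is simply the passing share. The result shape below is invented for the example; the real report comes from Docubat.

```python
# Hypothetical per-combination results (shape invented for illustration).
results = [
    {"language": "python",     "model": "gpt-4",  "passed": True},
    {"language": "python",     "model": "claude", "passed": True},
    {"language": "javascript", "model": "gpt-4",  "passed": False},
    {"language": "javascript", "model": "claude", "passed": True},
]

passed = sum(r["passed"] for r in results)
success_rate = 100 * passed / len(results)
print(f"{success_rate:.0f}% ({passed}/{len(results)} combinations)")  # 75% (3/4 combinations)

# Failures grouped for the detailed breakdown:
failures = [(r["language"], r["model"]) for r in results if not r["passed"]]
print(failures)  # [('javascript', 'gpt-4')]
```

A pattern like the one above, where only one language fails across models, usually points to a gap in that language's documentation rather than a model weakness.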

4. Improving Your Documentation

Based on audit results, you can:

  • Identify Gaps: See where your documentation is unclear
  • Fix Language-Specific Issues: Address problems in specific programming languages
  • Test Iteratively: Re-run audits after making improvements

Common Use Cases

API Documentation Testing

Test whether developers can successfully integrate with your API:

  • Task: "Create a user account using our API"
  • Expected Output: Successful user creation with proper error handling
  • Languages: Python, JavaScript, Java

SDK Documentation Validation

Ensure your SDK documentation enables successful implementation:

  • Task: "Initialize the SDK and send a message"
  • Expected Output: Message sent successfully with confirmation
  • Languages: Python, TypeScript, Go

Tutorial Verification

Verify that your tutorials lead to working code:

  • Task: "Follow the quickstart guide to build a basic app"
  • Expected Output: Working application that runs without errors
  • Languages: JavaScript, Python

Best Practices

Writing Effective Task Descriptions

  • Be specific about what you want accomplished
  • Include important context and constraints
  • Specify the expected outcome clearly

Choosing Programming Languages

  • Start with your most important languages
  • Consider your developer community's preferences
  • Test languages you officially support

Selecting AI Models

  • Include popular models your users likely use
  • Test both latest and slightly older models
  • Consider cost vs. coverage trade-offs
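Since every selected language is tested against every selected model, coverage (and cost) grows multiplicatively. A quick back-of-the-envelope check, using a made-up per-run credit price rather than actual Docubat pricing:

```python
from itertools import product

languages = ["python", "typescript", "go"]
models = ["gpt-4", "claude", "gpt-3.5"]  # hypothetical model list
credits_per_run = 0.25                   # assumption, not actual pricing

runs = list(product(languages, models))
print(len(runs))  # 9 combinations
print(f"estimated cost: {len(runs) * credits_per_run:.2f} credits")  # estimated cost: 2.25 credits
```

Trimming either axis, rather than both, is often the cheapest way to stay within a credit budget while keeping the coverage you care about.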

Documentation Preparation

  • Ensure your docs are publicly accessible
  • Include both general and language-specific documentation
  • Keep documentation up-to-date

Next Steps

  1. Create Your First Audit: Start with a simple task to get familiar with the platform
  2. Explore Results: Understand what success and failure look like
  3. Iterate: Improve your documentation based on results
  4. Schedule Regular Audits: Set up recurring audits to catch documentation drift
  5. Invite Your Team: Add team members to collaborate on audit configurations

Ready to ensure your documentation works for AI? Get started now!