VANCO

From Ad-Hoc AI to Enterprise Operating Model

  • Brownfield Analysis Engine for Legacy Codebases
  • Role-Specific & On-Demand Playbooks (30 Total)
  • Context Packs for AI-Aligned Code Generation
  • SDLC Transformation Blueprint (AS-IS → TO-BE)
  • Story & Requirements System with Gherkin ACs
  • AI-Augmented Testing (Unit, Integration, E2E)
  • Cursor Rules, Templates & Reference Implementation

HATCHWORKS TEAM

Automation Architects, AI Engineers, Product Leaders, QA Engineers, and SDLC Transformation Specialists

Overview

Vanco Payment Solutions engaged HatchWorks AI across two phases to transform how their engineering organization builds and ships software.

What began as a GenDD training program for 180 people evolved into a full SDLC scaffolding engagement that produced 41 production-ready deliverables: an enterprise operating model for AI-augmented software development.

Results

  • 180 team members trained across 8 roles
  • 41 production-ready deliverables
  • 30 role-specific and on-demand playbooks
  • 6 SDLC stages transformed
  • Brownfield Analysis engine for legacy codebases

The Challenge

Vanco had already proven that generative AI could accelerate individual tasks.

Engineers were using GitHub Copilot. Isolated experiments were showing promise.

But scaling AI across a complex engineering organization with multiple product lines, legacy systems, and distributed teams required something fundamentally different: consistency, governance, and repeatable workflows, not one-off prompt hacks.

What Vanco Was Up Against

Client Quote

“We wanted repeatability and oversight. HatchWorks AI helped us build an approach that scales responsibly across teams. This work gave us a safer, more standardized way to apply GenAI in delivery.”

Steven Joos, Chief Product Officer, RevTrak

The Journey

Before touching a single workflow or template, HatchWorks AI delivered a comprehensive GenDD training program to Vanco’s engineering organization.

The rationale was straightforward: scaffolding only works if teams understand the methodology it encodes.

Training created the shared language, the enthusiasm, and the organizational readiness for the transformation that followed.

Training didn’t just teach skills; it created demand for the scaffolding engagement. The Vanco team wasn’t waiting to be told to adopt AI. They were asking for the tools and frameworks to do it properly. That’s the GenDD flywheel: training builds fluency, fluency creates pull, and scaffolding operationalizes the pull into a repeatable system.
Training Stats

  • 180 attendees trained (12 sessions · 16 hours)
  • Role-specific tracks for devs, architects, POs, QA, and more
  • 6 AI tools hands-on, including Cursor and n8n
  • Practical projects completed during sessions

With 180 trained practitioners ready to go, HatchWorks AI engaged with Vanco to design and implement a complete SDLC scaffolding system, the playbooks, templates, guardrails, and workflows that make GenDD practical at enterprise scale.

The engagement followed a structured four-sprint methodology.

Sprint Methodology

1. Diagnose → 2. Prescribe → 3. Implement → 4. Socialize
Sprint 1
Diagnose
HatchWorks conducted targeted interviews with product stakeholders, engineers, architects, and QA representatives. The team performed deep repository reviews of representative codebases, mapped workflows from intake through deployment, and established maturity baselines across process, architecture, testing, and AI readiness dimensions.
Key discovery: Vanco's SDLC operated as a handoff-driven, governance-constrained system. Jira data analysis confirmed the primary constraint was input quality, not engineering speed.
Activities
Stakeholder interviews
Repository reviews
Workflow mapping
Maturity baselines
Jira data analysis
Sprint 2
Prescribe
Using discovery findings, HatchWorks defined the target SDLC blueprint: a stage-by-stage transformation plan covering ideation, refinement, development, testing, release governance, and post-release improvement. Every stage received an AS-IS assessment, a TO-BE target state, explicit Human/AI boundary definitions, and mapped playbooks to operationalize the transition.
Deliverables
SDLC blueprint (6 stages)
AS-IS / TO-BE assessments
Human/AI boundary definitions
Story and requirements model
PR review guardrails
AI augmentation map
Sprint 3
Implement
The team built the production-ready assets: the cursor-rules repository, code and test templates, architecture scaffolds, the Brownfield Analysis engine, Context Packs, and a reference implementation repository demonstrating the complete story-to-code-to-test golden path.
Golden path: A complete reference implementation demonstrating the full story-to-code-to-test workflow that teams can follow from day one.
Assets Built
Cursor-rules repository
Code and test templates
Architecture scaffolds
Brownfield Analysis engine
Context Packs
Reference implementation repo
Sprint 4
Socialize
The final sprint focused on embedding the new system into the organization through hands-on enablement, clear ownership, and measurable adoption goals across all teams.
Metrics framework: Tracking adoption and impact over the following 3 to 6 months with defined governance ownership.
Enablement
Role-based enablement sessions
Golden path demonstrations
Governance ownership definitions
Adoption metrics framework

What Was Built

The scaffolding engagement didn’t produce a strategy deck. It produced a working operating system for AI-augmented software development: the VancoSDLC Framework.

How GenDD Powered the Engagement
This engagement was itself executed using GenDD principles. The methodology informed every phase of the work.
01
Workflow-First Mindset
Before selecting tools or writing templates, HatchWorks mapped how work should flow at each SDLC stage, defining inputs, quality gates, outputs, and handoffs. The tools serve the workflow, not the other way around.
02
AI-Assisted Acceleration
Playbooks, templates, architecture documentation, and test scaffolds were all generated with AI assistance and then validated, refined, and hardened through human review. The deliverables embody the same generate, validate, review, harden loop they encode.
03
Contract-First Thinking
API specifications and interface contracts were defined before implementation, enabling parallel development and reducing integration churn. This same principle was encoded into the story templates and architecture scaffolds.
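To make the contract-first idea concrete, here is a minimal sketch in Python. All names (`RefundService`, `InMemoryRefundService`, `issue_goodwill_credit`) are invented for illustration and are not part of Vanco's codebase; the point is only that the interface is agreed and reviewed before either implementation exists, so two teams can build in parallel against it.

```python
# Hypothetical sketch of contract-first development. The contract (a Protocol)
# is defined and reviewed before any implementation is written.
from typing import Protocol


class RefundService(Protocol):
    """Agreed interface contract, fixed before implementation begins."""

    def refund(self, payment_id: str, amount_cents: int) -> str:
        """Issue a refund and return the resulting status, e.g. 'refunded'."""
        ...


# One team implements the contract...
class InMemoryRefundService:
    def __init__(self) -> None:
        self.refunds: dict[str, int] = {}

    def refund(self, payment_id: str, amount_cents: int) -> str:
        self.refunds[payment_id] = amount_cents
        return "refunded"


# ...while another team builds consuming code against the contract
# without waiting for the implementation to land.
def issue_goodwill_credit(service: RefundService, payment_id: str) -> str:
    return service.refund(payment_id, 500)  # $5.00 credit
```

Because the consumer depends only on the `Protocol`, integration churn is limited to the moment the contract itself changes, which is the property the engagement encoded into the story templates and scaffolds.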
04
Pattern Capture
Every solution discovered during the engagement was encoded into a reusable scaffold so every team benefits from prior work. The system compounds: each project makes the next one faster.
Before / After

Requirements
  Was: 6 bullet points per epic, <15% AC field usage
  Now: Structured stories with Gherkin ACs, edge cases, integration flags, and test scenarios

Architecture Docs
  Was: Drawn from memory in LucidChart, frequently outdated
  Now: C4 models generated from actual code via the Brownfield Analysis engine

Legacy Knowledge
  Was: Tribal, concentrated in senior engineers
  Now: Navigable architecture documentation any developer can use on day one

AI Usage
  Was: Ungoverned Copilot accelerating code without standards
  Now: Governed AI workflows with Context Packs ensuring output matches Vanco patterns

Testing
  Was: ~20% automated, QA discovering requirements
  Now: AI-generated unit, integration, and E2E tests from structured ACs; QA validating, not discovering

Onboarding
  Was: Months to become productive on legacy codebases
  Now: Brownfield docs + Context Packs + role playbooks compress ramp-up

Rework
  Was: 10-15% of active work was reversions
  Now: Upstream quality enforcement and DoR/DoD compliance reduce rework at source

Process
  Was: Stage-gated, approval-driven despite Scrum ceremonies
  Now: Defined SDLC with clear Human/AI boundaries and governed automation at every stage
Most enterprises are stuck in the same place Vanco was before this engagement: AI is working in isolated pockets, but there’s no system to scale it safely.

The gap between “AI is useful” and “AI is how we work” is enormous, and it can’t be closed with better prompts or more powerful models.

What closes it is what HatchWorks built for Vanco: an explicit operating model that defines where AI enters every stage of the SDLC, what it produces, and where humans remain essential. Governance, golden paths, Context Packs, role-specific playbooks, and a Brownfield Analysis engine that turns the legacy knowledge problem into navigable documentation. The Vanco engagement demonstrates that GenDD isn’t a philosophy, it’s a delivery system. One that trains organizations, then operationalizes the training into production-ready scaffolding that teams actually use.

About the company

Vanco is a leading provider of secure digital payments, online giving, and administrative software for faith communities, schools, and nonprofits. Vanco helps organizations accept and manage payments, streamline everyday tasks, and deepen participation with reliable tools and practical support. The company’s faith business will now operate as ACS Technologies, offering trusted church management platforms such as Realm and MinistryPlatform that help ministries manage membership, contributions, accounting, missions trip management, events, and communication. Vanco also serves K–12 districts with education payments solutions, including RevTrak and SmartCare. Across every offering, Vanco’s purpose is simple: help teams spend less time on systems and more time with people.

About HatchWorks AI

HatchWorks AI turns AI into ROI by automating key business processes, transforming data, deploying intelligent agents, and shipping AI-powered products that deliver measurable results.