The conventional approach to implementing AI treats it primarily as an engineering puzzle, something to be solved through code. This perspective misses a crucial truth: successful AI implementation is fundamentally a challenge of language and communication.
Most technology companies operate with a familiar three-tier structure. At one end, you have IT and engineering teams who speak in the precise, unambiguous language of code—every instruction must be exact enough to be translated into the binary logic that machines understand. At the other end, business operations teams communicate in the fluid language of market dynamics, customer needs, and strategic objectives. Bridging these two worlds, product management serves as a translator, converting business requirements into technical specifications and technical possibilities into business opportunities.
AI implementation requires an entirely different kind of language—one that draws from fields traditionally considered part of the humanities. This is the language of:
- **Meta-cognition:** Understanding how thinking itself works
- **Categorization and classification:** Organizing knowledge and concepts in meaningful ways
- **Taxonomy:** Creating systematic frameworks for understanding relationships between ideas
- **Philosophy:** Grappling with questions of meaning, ethics, and fundamental principles
These disciplines have always been concerned with how we understand, categorize, and communicate about complex, nuanced concepts—exactly what's needed when teaching machines to work with human reasoning and business logic.
The biggest obstacle isn't technical capability; it's organizational and linguistic. Companies need to move away from highly specialized, departmental modes of expression toward a shared language of subtlety, discernment, precision, and clarity.
Once organizations master this linguistic shift—once they can describe their business processes, relationships, and operations with the kind of precision and nuanced categorization that AI can understand—AI will work. When every aspect of the business is articulated with the right blend of systematic thinking and subtle distinction, AI becomes capable of executing, optimizing, and even innovating within those well-defined frameworks.
To manifest this vision, organizations need to develop expertise in specific technical areas that bridge the gap between human meaning and machine understanding:
**Knowledge Representation & Ontologies**
- OWL (Web Ontology Language), RDF, JSON-LD
- Ontology modeling tools (e.g., Protégé)
- Domain ontologies (e.g., FIBO, MISMO, schema.org)
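To make this concrete, here is a minimal sketch of what ontology-style knowledge representation looks like in practice: business concepts expressed as subject-predicate-object triples, then gathered into a JSON-LD-style node. The class names (`ex:HomeEquityAgreement` and so on) are illustrative, not drawn from any real ontology, and a real system would use a library like rdflib rather than raw tuples.

```python
import json

# Illustrative triples: a small domain "ontology" as (subject, predicate, object).
# All identifiers here are hypothetical examples, not a published vocabulary.
triples = [
    ("ex:HomeEquityAgreement", "rdf:type", "owl:Class"),
    ("ex:HomeEquityAgreement", "rdfs:subClassOf", "ex:FinancialContract"),
    ("ex:hasLien", "rdfs:domain", "ex:HomeEquityAgreement"),
]

def to_jsonld(subject, triples):
    """Collect every statement about one subject into a JSON-LD-style node."""
    node = {"@id": subject}
    for s, p, o in triples:
        if s == subject:
            node.setdefault(p, []).append(o)
    return node

doc = to_jsonld("ex:HomeEquityAgreement", triples)
print(json.dumps(doc, indent=2))
```

The point is not the serialization format but the discipline: every business concept gets an explicit identity, type, and relationship that both humans and machines can read.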
**Knowledge Graphs & Metadata Management**
- Graph databases (Neo4j, Amazon Neptune, Fluree)
- Enterprise metadata catalogs (e.g., Collibra, Alation, Snowflake metadata features)
- Open standards like OpenLineage and OpenTelemetry
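A knowledge graph reduces to a simple idea: labeled nodes connected by typed edges. The sketch below is a toy in-memory version (the class and method names are illustrative, not any vendor's API); products like Neo4j add persistence, indexing, and a query language on top of the same model.

```python
from collections import defaultdict

# A minimal in-memory property graph: nodes with properties, typed edges,
# and a relation-filtered traversal. Purely illustrative.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}                    # node_id -> properties
        self.edges = defaultdict(list)     # node_id -> [(relation, target)]

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbors(self, node_id, relation=None):
        """Targets reachable from node_id, optionally filtered by edge type."""
        return [t for r, t in self.edges[node_id] if relation in (None, r)]

g = PropertyGraph()
g.add_node("customer:42", name="Acme Corp")
g.add_node("product:7", name="Starter Plan")
g.add_edge("customer:42", "SUBSCRIBES_TO", "product:7")
print(g.neighbors("customer:42", "SUBSCRIBES_TO"))
```

Typed edges are what make the graph answerable: "who subscribes to what" is a traversal, not a join that has to be rediscovered in every query.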
**Data Modeling & Taxonomy Development**
- Designing hierarchical taxonomies for business entities and processes
- Classification systems for documents, processes, and attributes
- Schema evolution and versioning strategies
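A hierarchical taxonomy can be sketched in a few lines: each node knows its parent, so any classified entity can recover its full path. The node names below are made-up examples of a document taxonomy.

```python
# A sketch of a hierarchical business taxonomy. Each node links upward to
# its parent, so classification paths can be reconstructed on demand.
class TaxonomyNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = {}
        if parent is not None:
            parent.children[name] = self

    def path(self):
        """Return the full classification path from the root to this node."""
        node, parts = self, []
        while node is not None:
            parts.append(node.name)
            node = node.parent
        return " > ".join(reversed(parts))

root = TaxonomyNode("Documents")
legal = TaxonomyNode("Legal", root)
contracts = TaxonomyNode("Contracts", legal)
print(contracts.path())
```

Stable paths like this are also the natural unit for schema versioning: renaming or moving a node is an explicit, auditable change rather than a silent drift in meaning.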
**Language Models & Context Integration**
- Prompt engineering and retrieval-augmented generation (RAG)
- Context packaging strategies (structured context delivery to LLMs)
- Fine-tuning and alignment methods
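The retrieval and context-packaging steps can be sketched end to end. This is a deliberately naive version: keyword overlap stands in for vector search, and the prompt template is a hypothetical example of delimited, structured context delivery.

```python
# A hedged RAG sketch: retrieve the snippets most relevant to a query,
# then package them into a clearly delimited prompt for an LLM.
def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query (stand-in
    for embedding similarity search in a real pipeline)."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:top_k]

def package_context(query, snippets):
    """Wrap each retrieved snippet in a labeled block so the model can
    cite sources and ignore everything outside them."""
    blocks = "\n".join(
        f'<doc id="{i}">\n{s}\n</doc>' for i, s in enumerate(snippets)
    )
    return f"Answer using only these sources:\n{blocks}\n\nQuestion: {query}"

docs = [
    "Refund policy: customers may request refunds within 30 days.",
    "Shipping normally takes 5 business days.",
]
prompt = package_context("What is the refund policy?",
                         retrieve("refund policy", docs, top_k=1))
print(prompt)
```

Notice how much of the work is linguistic rather than algorithmic: deciding what a "source" is, how to label it, and what the model is allowed to use.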
**Human-in-the-Loop Design**
- Feedback loops for correcting AI outputs
- Governance frameworks for AI decisions
- Ethical guardrails and explainability techniques
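A feedback loop for correcting AI outputs can be as simple as a confidence-gated review step. Everything here is an illustrative assumption (the function names, the threshold, the log shape); the pattern is what matters: low-confidence outputs route to a human, and every correction is recorded for audit and future improvement.

```python
# An illustrative human-in-the-loop gate. Outputs below a confidence
# threshold are routed to a reviewer; corrections are logged so they can
# feed evaluation or fine-tuning later.
def review_gate(output, confidence, reviewer, threshold=0.9, audit_log=None):
    if confidence >= threshold:
        return output                      # trusted: pass through unchanged
    corrected = reviewer(output)           # human supplies the final answer
    if audit_log is not None:
        audit_log.append({"model": output, "human": corrected})
    return corrected

log = []
final = review_gate(
    "Loan type: HELOC",
    0.62,
    lambda o: "Loan type: Home Equity Investment",
    audit_log=log,
)
print(final, len(log))
```

The audit log doubles as a governance artifact: it records exactly where human judgment overrode the model, which is the raw material for both explainability reviews and retraining.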
These technical areas represent the practical toolkit for organizations ready to move beyond viewing AI as just another software implementation and instead embrace it as a fundamental shift in how businesses can be described, understood, and operated.
This reframing suggests that the companies most successful with AI won't necessarily be those with the most advanced technical capabilities, but those that can most effectively describe what they do in language that bridges human understanding and machine capability. Get the language right, and AI handles the rest.
**Christopher Watts** is an AI Engineer at Hometap with 15+ years of experience across data systems, machine learning, and analytics. He currently focuses on harnessing AI capabilities through agentic workflows and recently discovered convergent representation phenomena in large language models, leading him to explore theoretical frameworks that may explain emergent behaviors in scaled AI systems. Feel free to connect on LinkedIn.