Matthew “Smiffy” Smith

Systems Architect | LLM Integration Specialist | Cognitive Accessibility Engineer

Who I Am

Nearly 40 years of engineering and technology experience, working with UNIX-like systems for just as long. I build systems that work with human cognition as it actually is, not as we wish it were.

I have AuDHD (autistic + ADHD). This isn’t a limitation - it’s a different operating system. One that excels at pattern recognition, system architecture, and cutting through bullshit to find what actually works.

What I’ve Discovered

I have a skill that was latent until LLMs existed: prompt engineering as systems architecture.

Before LLMs, I could design systems conceptually, but implementation was the bottleneck. Communication with neurotypical teams required constant translation. The gap between idea and working system was measured in months.

With LLMs, I operate at thought speed. I architect systems through conversation, externalize cognition in real-time, and iterate from concept to working prototype in hours. The pattern-matching nature of LLMs maps directly to how I think - it doesn’t require me to translate my thought process into neurotypical communication patterns.

What this looks like in practice:

I don’t “use” LLMs. I think through them.

Current Work

Cognitive Support Systems for Complex Trauma

Building an AI-assisted external memory system for someone with AuDHD + severe cPTSD, where stress blocks memory encoding. The system:

Architecture: Local LLM (Mistral 7B), PostgreSQL with vector search, Whisper ASR for voice transcription, distributed across consumer hardware via a Tailscale private network. A sketch of the capture path appears below.

Why this matters: Standard productivity tools assume functional memory and executive function. When trauma impairs these systems, tools become inaccessible. This architecture works around biology, not against it.
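
For concreteness, here is a minimal sketch of the capture-and-recall path, assuming openai-whisper for transcription, sentence-transformers for embeddings, and a hypothetical `memories` table with a pgvector column. The schema, model choices, and function names are illustrative only, and the Mistral 7B layer that reasons over recalled entries is omitted.

```python
# Sketch only: one possible capture/recall path for the memory system.
# Assumes `pip install openai-whisper sentence-transformers psycopg2-binary`
# and a pgvector-enabled table roughly like:
#   CREATE EXTENSION IF NOT EXISTS vector;
#   CREATE TABLE memories (id serial PRIMARY KEY, content text,
#                          embedding vector(384), created_at timestamptz DEFAULT now());
import whisper
import psycopg2
from sentence_transformers import SentenceTransformer

asr = whisper.load_model("base")                    # Whisper ASR, small enough for CPU
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim sentence embeddings

def _vec_literal(vec) -> str:
    """Format an embedding as a pgvector literal, e.g. '[0.1,0.2,...]'."""
    return "[" + ",".join(f"{x:.6f}" for x in vec) + "]"

def capture(audio_path: str, dsn: str) -> None:
    """Transcribe one voice note and store the text with its embedding."""
    text = asr.transcribe(audio_path)["text"].strip()
    vec = embedder.encode(text)
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO memories (content, embedding) VALUES (%s, %s::vector)",
            (text, _vec_literal(vec)),
        )

def recall(query: str, dsn: str, k: int = 5) -> list[str]:
    """Return the k stored notes nearest to the query (cosine distance)."""
    qvec = embedder.encode(query)
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT content FROM memories ORDER BY embedding <=> %s::vector LIMIT %s",
            (_vec_literal(qvec), k),
        )
        return [row[0] for row in cur.fetchall()]
```

The point of the design is that capture has to succeed with near-zero effort at the moment of stress; everything downstream (retrieval, summarization by the local LLM) can be slower and heavier.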

Voice-Activated Command Interface (Personal Implementation)

Designed a Latin-based command grammar for my own voice-to-text workflow. This is a personal solution, not a general accessibility approach - it works for me specifically because I can articulate Latin precisely and enjoy the cognitive mode-switch. This is not recommended as a pattern for others.

The problem I needed to solve: How to embed structured commands (reminders, tags, actions) within natural speech transcription without the speech recognition system confusing commands with content. Standard approaches (saying “hashtag” or “command”) felt clunky and broke my flow.

My solution: use Latin command words that never occur in my normal speech and are acoustically distinct from the surrounding English, so the recognizer cannot confuse commands with content.

Grammar: fiat [verb] [params] [signum tags] actum - where fiat begins a command, signum marks tags, and actum optionally closes the command scope.

Example: “Need to call dentist tomorrow [3 second pause thinking] fiat memento tomorrow signum health urgent actum”
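
A sketch of how commands in this grammar could be lifted out of a transcript, assuming the recognizer emits plain text. `make_parser`, `Command`, and the (verb, params, tags) split are illustrative names for this sketch, not the actual implementation.

```python
import re
from dataclasses import dataclass

@dataclass
class Command:
    verb: str        # e.g. "memento"
    params: str      # free text between the verb and the tag marker / terminator
    tags: list[str]  # words following the tag marker

def make_parser(begin: str = "fiat", tag_marker: str = "signum", end: str = "actum"):
    """Build an extractor for `begin <verb> <params> [tag_marker <tags>] [end]`."""
    b, t, e = map(re.escape, (begin, tag_marker, end))
    pattern = re.compile(
        rf"\b{b}\s+(?P<verb>\w+)\s*(?P<params>.*?)"
        rf"(?:\s+{t}\s+(?P<tags>.*?))?"
        rf"(?:\s+{e}\b|$)",
        re.IGNORECASE,
    )

    def extract(transcript: str) -> tuple[str, list[Command]]:
        """Return (prose with command spans stripped out, parsed commands)."""
        commands = []
        for m in pattern.finditer(transcript):
            tags = (m.group("tags") or "").split()
            commands.append(Command(m.group("verb"), m.group("params").strip(), tags))
        return pattern.sub("", transcript).strip(), commands

    return extract

extract = make_parser()
prose, cmds = extract(
    "Need to call dentist tomorrow fiat memento tomorrow signum health urgent actum"
)
# prose -> "Need to call dentist tomorrow"
# cmds  -> [Command(verb='memento', params='tomorrow', tags=['health', 'urgent'])]
```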

Why this works for ME:

Why this is NOT a general solution:

What I learned designing this: The pattern of “use acoustically distinct trigger words for command mode” is generalizable. The specific choice of Latin is personal preference, not best practice. Others might use musical terms, constructed language words, or any other vocabulary that creates clear separation from their primary language.
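
To illustrate that generalization, the hypothetical parser sketched above works unchanged with a different trigger vocabulary, say musical terms (these triggers are invented for the example):

```python
# Reusing make_parser from the sketch above, with musical terms as triggers.
extract_musical = make_parser(begin="segue", tag_marker="coda", end="fine")
prose, cmds = extract_musical(
    "Pick up milk on the way home segue memento today coda errands fine"
)
# prose -> "Pick up milk on the way home"
# cmds  -> [Command(verb='memento', params='today', tags=['errands'])]
```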

Distributed System Architecture on Consumer Hardware

Building fault-tolerant infrastructure across aging laptops using Docker, Tailscale, and autonomous deployment via Claude Code. Hardware failures are expected; services migrate automatically.
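
A rough illustration of the routing-around-failures idea, not the actual deployment tooling: a watchdog that checks a peer over Tailscale MagicDNS and starts a local replacement container if the peer stops answering. The host name, service name, image, and port below are placeholders.

```python
# Sketch: naive failover watchdog for a service normally hosted on a peer node.
# Assumes Docker and Tailscale are installed; all names are placeholders.
import socket
import subprocess
import time

PEER_HOST = "old-thinkpad.tailnet-name.ts.net"  # placeholder MagicDNS name
SERVICE_PORT = 8080
CONTAINER = "memory-api"                         # placeholder service name
IMAGE = "memory-api:latest"                      # placeholder image

def peer_alive(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if the peer accepts a TCP connection on the service port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def running_locally(name: str) -> bool:
    out = subprocess.run(
        ["docker", "ps", "--filter", f"name={name}", "--format", "{{.Names}}"],
        capture_output=True, text=True,
    )
    return name in out.stdout.split()

def start_locally(name: str, image: str, port: int) -> None:
    subprocess.run(
        ["docker", "run", "-d", "--name", name, "-p", f"{port}:{port}", image],
        check=True,
    )

if __name__ == "__main__":
    while True:
        if not peer_alive(PEER_HOST, SERVICE_PORT) and not running_locally(CONTAINER):
            start_locally(CONTAINER, IMAGE, SERVICE_PORT)  # take over the service
        time.sleep(30)
```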

Philosophy: Consumer hardware is disposable. Architecture should be antifragile. Use what you have, route around failures, optimize for maintainability over performance.

Technical Approach

Systems Thinking

ADHD-Informed Architecture

Trauma-Aware Design

LLM Integration Philosophy

Technical Stack Preferences

Languages: Python, SQL, Markdown for all documentation
Databases: PostgreSQL (most familiar), SQLite for embedded use, DuckDB for analytics
LLMs: Local deployment (Ollama), Mistral 7B for resource efficiency, LangChain for tool orchestration
Infrastructure: Docker, Tailscale, bare-metal Ubuntu 24.04 LTS
Development: VSCode, venv for Python, pnpm for Node.js when necessary
Philosophy: FOSS priority, open standards, build for maintainability

What I Bring

Systems Architecture at Conversation Speed

Direct Communication

Real-World Testing

Pattern Recognition

Why This Matters Now

We’re at an inflection point. LLMs are mature enough to be useful, immature enough that integration patterns aren’t standardized. The people figuring out how to use these tools effectively - especially for assistive technology - are defining patterns that will matter for years.

I’m discovering these patterns by necessity, building the tools I need to function under the exact conditions those tools are designed to address. The architectures that emerge aren’t theoretical - they’re battle-tested in real cognitive impairment scenarios.

The skill I’ve discovered: Translating human cognitive needs into LLM-orchestrated system architectures. Not prompting. Not “AI integration.” System design where the LLM is load-bearing infrastructure, and the architecture accounts for both human and AI cognitive constraints.

This wasn’t possible before. The tools didn’t exist. Now they do, and someone needs to figure out how to use them properly. I’m doing that work.

What I’m Not Looking For

What I Am Looking For


Note: This document represents my professional capabilities and approach. For current availability and project inquiries, please refer to the contact page.