Matthew “Smiffy” Smith
Systems Architect | LLM Integration Specialist | Cognitive Accessibility Engineer
Who I Am
Nearly 40 years of engineering and technology experience, working with UNIX-like systems for the same duration. I build systems that work with human cognition as it actually is, not as we wish it were.
I have AuDHD (autistic + ADHD). This isn’t a limitation - it’s a different operating system. One that excels at pattern recognition, system architecture, and cutting through bullshit to find what actually works.
What I’ve Discovered
I have a skill that was latent until LLMs existed: prompt engineering as systems architecture.
Before LLMs, I could design systems conceptually, but implementation was the bottleneck. Communication with neurotypical teams required constant translation. The gap between idea and working system was measured in months.
With LLMs, I operate at thought speed. I architect systems through conversation, externalize cognition in real-time, and iterate from concept to working prototype in hours. The pattern-matching nature of LLMs maps directly to how I think - it doesn’t require me to translate my thought process into neurotypical communication patterns.
What this looks like in practice:
- Researching, synthesizing, and specifying complete technical systems in single sessions
- Designing domain-specific languages for human-AI interaction
- Building cognitive prosthetics for people with severe impairments
- Architecting distributed systems across heterogeneous hardware
- Using one LLM as assistive technology while designing assistive technology for others
I don’t “use” LLMs. I think through them.
Current Work
Cognitive Support Systems for Complex Trauma
Building an AI-assisted external memory system for someone with AuDHD + severe cPTSD, where stress blocks memory encoding. The system:
- Functions as external memory when biological memory fails
- Learns individual stress patterns and adapts support accordingly
- Provides grounding assistance during dissociation
- Maintains continuity across sessions when the user can’t
- Integrates voice capture, semantic search, and context reconstruction
- Operates with zero judgment - infinite patience for repeated questions
Architecture: Local LLM (Mistral 7B), PostgreSQL with vector search, Whisper ASR for voice transcription, distributed across consumer hardware via a Tailscale private network.
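A minimal sketch of the recall path, assuming a pgvector-backed memories table and a stand-in embed() function for a locally served embedding model - the schema, DSN, and function names here are illustrative, not the production implementation:

```python
# Recall sketch: semantic search over captured fragments.
# Assumes PostgreSQL with the pgvector extension and a hypothetical table:
#   memories(captured_at timestamptz, content text, embedding vector(384))
import psycopg2


def embed(text: str) -> list[float]:
    """Stand-in for a locally served embedding model; dimension matches the column."""
    return [0.0] * 384


def recall(conn, query: str, k: int = 5):
    """Return the k stored fragments semantically closest to the query."""
    vec = "[" + ",".join(str(x) for x in embed(query)) + "]"
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT captured_at, content
            FROM memories
            ORDER BY embedding <-> %s::vector  -- pgvector distance operator
            LIMIT %s
            """,
            (vec, k),
        )
        return cur.fetchall()


# Hypothetical usage: surface context when biological memory can't.
conn = psycopg2.connect("dbname=memory")
for captured_at, content in recall(conn, "what did I decide about the dentist appointment?"):
    print(captured_at, content)
```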
Why this matters: Standard productivity tools assume functional memory and executive function. When trauma impairs these systems, tools become inaccessible. This architecture works around biology, not against it.
Voice-Activated Command Interface (Personal Implementation)
Designed a Latin-based command grammar for my own voice-to-text workflow. This is a personal solution, not a general accessibility approach - it works for me specifically because I can articulate Latin precisely and enjoy the cognitive mode-switch, and I don't recommend it as a pattern for others.
The problem I needed to solve: How to embed structured commands (reminders, tags, actions) within natural speech transcription without the speech recognition system confusing commands with content. Standard approaches (saying “hashtag” or “command”) felt clunky and broke my flow.
My solution: Use Latin command words that:
- Are phonetically distinct from my casual English speech (zero false positives)
- Force me to articulate deliberately (reduces mumbled/unclear commands)
- Create a clear cognitive boundary (“now I’m commanding the system” vs “now I’m capturing thoughts”)
- Work with my ADHD speech pattern (frequent multi-second pauses while thinking)
Grammar: fiat [verb] [params] [signum tags] actum - where fiat begins a command, signum marks tags, and actum optionally closes the command scope.
Example: “Need to call dentist tomorrow [3 second pause thinking] fiat memento tomorrow signum health urgent actum”
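For illustration, a hedged sketch of how commands in this grammar might be extracted from a Whisper transcript - the regular expression, verb handling, and output shape are invented for this example, not the actual implementation:

```python
# Command extraction sketch for the "fiat ... actum" grammar.
# The regex, verb set, and output shape are illustrative only.
import re

COMMAND = re.compile(
    r"\bfiat\s+(?P<verb>\w+)\s*"        # command verb, e.g. "memento"
    r"(?P<params>.*?)"                  # free-form parameters
    r"(?:\s+signum\s+(?P<tags>.*?))?"   # optional "signum" tag list
    r"(?:\s+actum\b|$)",                # explicit "actum" close, or end of utterance
    re.IGNORECASE | re.DOTALL,
)


def extract_commands(transcript: str) -> list[dict]:
    """Pull structured commands out of a free-speech transcript."""
    commands = []
    for m in COMMAND.finditer(transcript):
        commands.append({
            "verb": m.group("verb").lower(),
            "params": (m.group("params") or "").strip(),
            "tags": (m.group("tags") or "").split(),
        })
    return commands


text = "Need to call dentist tomorrow fiat memento tomorrow signum health urgent actum"
print(extract_commands(text))
# [{'verb': 'memento', 'params': 'tomorrow', 'tags': ['health', 'urgent']}]
```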
Why this works for ME:
- I pause frequently when speaking (ADHD) - the grammar is pause-tolerant
- I can pronounce Latin precisely without effort (personal skill)
- The semantic meaning of the Latin words aligns with their function (satisfying to my brain)
- Whisper ASR transcribes Latin accurately because I articulate it clearly
Why this is NOT a general solution:
- Requires ability to learn/pronounce Latin words
- Only works for people who find language-switching cognitively comfortable
- Assumes user can articulate precisely under stress
- Not accessible for many disabilities
What I learned designing this: The pattern of “use acoustically distinct trigger words for command mode” is generalizable. The specific choice of Latin is personal preference, not best practice. Others might use musical terms, constructed language words, or any other vocabulary that creates clear separation from their primary language.
Distributed System Architecture on Consumer Hardware
Building fault-tolerant infrastructure across aging laptops using Docker, Tailscale, and autonomous deployment via Claude Code. Hardware failures are expected; services migrate automatically.
Philosophy: Consumer hardware is disposable. Architecture should be antifragile. Use what you have, route around failures, optimize for maintainability over performance.
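A simplified stand-in for that keep-alive behaviour, assuming services run as plain Docker containers reachable over the tailnet - the hostname, port, and container name are invented, and the real migration logic (autonomous deployment via Claude Code) is more involved than this:

```python
# Watchdog sketch: check a service over the tailnet, restart its container
# if the node is still reachable. Hostname, port, and container name are invented.
import subprocess
import urllib.request

SERVICE_URL = "http://spare-laptop.example-tailnet.ts.net:8080/health"  # hypothetical
CONTAINER = "memory-api"  # hypothetical container name


def healthy(url: str, timeout: float = 5.0) -> bool:
    """True if the service answers its health endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def restart(name: str) -> None:
    """Plain docker CLI; in practice this runs on the owning node via an agent."""
    subprocess.run(["docker", "restart", name], check=True)


if not healthy(SERVICE_URL):
    restart(CONTAINER)
```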
Technical Approach
Systems Thinking
- Build for actual human behavior, not idealized workflows
- Design for cognitive load, especially under stress
- Separate concerns ruthlessly (right tool, right context, right scope)
- Tools should extend cognition, not demand adaptation to their constraints
ADHD-Informed Architecture
- Friction is the enemy - every extra step is a point of failure
- External memory is essential (working memory is unreliable)
- Context switching is expensive - minimize or make explicit
- Hyperfocus is a feature - build systems that preserve flow state
Trauma-Aware Design
- Stress blocks encoding - systems must capture information without relying on user memory
- Dissociation interrupts continuity - maintain state across gaps
- Shame about “forgetting” creates avoidance - remove all judgment from interactions
- Progressive decline requires adaptive support - the system recognizes deterioration and adjusts
LLM Integration Philosophy
- LLMs are orchestrators, not sources of truth
- Tool calls for facts, reasoning for synthesis
- Never let the model guess when it should look something up
- Timezones, currency, calendars, user data - all tool calls
- The LLM understands intent and coordinates execution; tools provide ground truth
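A minimal sketch of that split, with illustrative tool names and a toy dispatch shape rather than any particular library's tool-calling API - the model only ever proposes a tool and arguments; the tool supplies the fact:

```python
# Ground-truth tools: the model coordinates, the tools answer.
# Tool names, the dispatch shape, and the stored-fact placeholder are illustrative.
from datetime import datetime
from zoneinfo import ZoneInfo


def current_time(tz: str) -> str:
    """Never let the model guess the local time - compute it."""
    return datetime.now(ZoneInfo(tz)).isoformat()


def lookup_user_fact(key: str) -> str:
    """Stored user data comes from the datastore, not the model's memory."""
    placeholder_store = {"dentist": "Dr. Example, appointment not yet booked"}
    return placeholder_store.get(key, "unknown")


TOOLS = {
    "current_time": current_time,
    "lookup_user_fact": lookup_user_fact,
}


def dispatch(tool_call: dict) -> str:
    """Execute whatever the LLM proposed as {"name": ..., "arguments": {...}}."""
    return TOOLS[tool_call["name"]](**tool_call["arguments"])


# The model decides *which* tool to call; the tool provides the answer.
print(dispatch({"name": "current_time", "arguments": {"tz": "UTC"}}))
```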
Technical Stack Preferences
- Languages: Python, SQL, Markdown for all documentation
- Databases: PostgreSQL (most familiar), SQLite for embedded, DuckDB for analytics
- LLMs: Local deployment (Ollama), Mistral 7B for resource efficiency, LangChain for tool orchestration
- Infrastructure: Docker, Tailscale, bare-metal Ubuntu 24.04 LTS
- Development: VSCode, venv for Python, pnpm for Node.js when necessary
- Philosophy: FOSS priority, open standards, build for maintainability
What I Bring
Systems Architecture at Conversation Speed
- Concept to specification in single sessions
- Research, synthesis, and documentation in real-time
- Integration patterns across heterogeneous systems
- Cognitive load optimization as first-class concern
Direct Communication
- No corporate speak, no buzzwords, no bullshit
- Say what I mean, ask when unclear
- Focus on functional utility over social pleasantries
- Structured thinking made explicit
Real-World Testing
- I build systems while experiencing the problems they’re designed to solve
- Under stress, dissociating, fighting impaired memory
- If the tool works for me in that state, the architecture is sound
- This isn’t theoretical - it’s validated in the conditions that matter
Pattern Recognition
- Nearly 40 years of systems experience compressed into usable heuristics
- Reverse-engineer LLM behavior by observing outputs
- Recognize when tools guess vs when they know
- Design around limitations, leverage strengths
Why This Matters Now
We’re at an inflection point. LLMs are mature enough to be useful, immature enough that integration patterns aren’t standardized. The people figuring out how to use these tools effectively - especially for assistive technology - are defining patterns that will matter for years.
I’m discovering these patterns by necessity. Building tools I need to function while under the exact conditions those tools are designed to address. The architectures that emerge aren’t theoretical - they’re battle-tested in real cognitive impairment scenarios.
The skill I’ve discovered: Translating human cognitive needs into LLM-orchestrated system architectures. Not prompting. Not “AI integration.” System design where the LLM is load-bearing infrastructure, and the architecture accounts for both human and AI cognitive constraints.
This wasn’t possible before. The tools didn’t exist. Now they do, and someone needs to figure out how to use them properly. I’m doing that work.
What I’m Not Looking For
- Corporate bureaucracy
- Neurotypical communication performance
- Processes that prioritize optics over outcomes
- Teams that mistake activity for progress
- Anyone who needs a LinkedIn profile to validate competence
What I Am Looking For
- Problems that need solving, not theater
- Teams that value results over process
- Work that matters to people who need it
- Collaborators who communicate directly
- Projects where “this is how we’ve always done it” isn’t an answer
Note: This document represents my professional capabilities and approach. For current availability and project inquiries, please refer to the contact page.