The Signal: Voice DNA
Most tools ask for a "style." We extract a biometric profile. In a single 10-minute calibration, Gaspard analyzes your syntax density, your sentence variance, and your specific vocabulary. It identifies your "micro-habits"—the way you use em-dashes, your refusal to use adverbs, your specific rhythm. This DNA becomes the immutable filter for every word generated.
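The calibration described above can be sketched as a small feature extractor. This is a minimal, hypothetical illustration, not Gaspard's actual schema; the function and feature names are invented for the example:

```python
import re
import statistics

def voice_profile(text: str) -> dict:
    """Extract a few illustrative stylometric features from a writing sample.

    Feature names here are hypothetical stand-ins for a fuller profile.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[A-Za-z'-]+", text.lower())
    return {
        "mean_sentence_len": statistics.mean(lengths),
        "sentence_variance": statistics.pvariance(lengths),  # rhythm signal
        "em_dash_rate": text.count("\u2014") / max(len(words), 1),
        "adverb_rate": sum(w.endswith("ly") for w in words) / max(len(words), 1),
    }

sample = ("I kept it short. Then I wrote a much longer, "
          "winding sentence\u2014deliberately.")
profile = voice_profile(sample)
```

A real system would track dozens of such signals; the point is that each "micro-habit" reduces to a measurable number.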
The Underlying Science: Forensic Idiolects
Every human possesses a unique linguistic fingerprint known as an idiolect. Sociolinguistic studies confirm that an individual's identity is signaled not by what they say, but by the specific probability distribution of their syntax and vocabulary choices.
Standard LLMs force a "regression to the mean," stripping away these markers to create a generic voice. Gaspard uses Stylometric Analysis—the same science used in forensic authorship attribution—to isolate your idiolect and force the model to adhere to your specific probability curve, rather than the global average. This is the basis of our stylometric fingerprinting approach.
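One standard way to quantify the gap between an author's probability curve and the global average is KL divergence over word frequencies. A toy sketch (the corpora and smoothing choice are illustrative, not Gaspard's pipeline):

```python
import math
from collections import Counter

def freq_dist(tokens, vocab):
    """Add-one smoothed relative frequencies over a shared vocabulary."""
    counts = Counter(tokens)
    total = len(tokens) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def kl_divergence(p, q):
    """How far distribution p drifts from reference distribution q."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

author = "short clean lines no filler short lines".split()
generic = "in the ever evolving landscape we delve into the tapestry".split()
vocab = set(author) | set(generic)

drift = kl_divergence(freq_dist(author, vocab), freq_dist(generic, vocab))
```

A drift of zero would mean the author writes exactly like the average; the larger the value, the more distinctive the idiolect the model must preserve.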
Reference: Forensic Linguistics / Stylometry
The Context: The Semantic Vault
Context is not just about what you say; it's about what you have said. Gaspard maintains a private, encrypted index of your past high-fidelity writing. When you draft a new thought, the Semantic Vault instantly retrieves the 3–5 most relevant past examples to ground the model. This ensures the AI isn't hallucinating a personality; it is referencing your actual history.
The Underlying Science: Episodic vs. Semantic Memory
In cognitive psychology, Episodic Memory allows you to recall specific past events to inform present decisions. Generic AI lacks this; it only has "parametric knowledge" (what it learned during training).
Gaspard builds an external Episodic Memory System using vector embeddings—technically a Vector Database paired with Retrieval-Augmented Generation (RAG). By mapping your past writing into a high-dimensional semantic space, we compute the cosine similarity between your current thought and your past best work. This mimics the human brain's associative recall process, ensuring consistency over years, not just sessions.
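The retrieval step reduces to ranking the vault by cosine similarity against the current draft's embedding. A minimal sketch with toy 3-dimensional vectors standing in for a real embedding model's output (document names and values are invented):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical vault: past writing, pre-embedded into semantic space.
vault = {
    "essay_on_pricing": [0.9, 0.1, 0.0],
    "note_on_hiring":   [0.1, 0.9, 0.1],
    "thread_on_focus":  [0.2, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # embedding of the current draft thought

# Retrieve the most relevant past examples to ground the model.
top = sorted(vault, key=lambda k: cosine(query, vault[k]), reverse=True)[:2]
```

In production this ranking runs over thousands of passages in a vector database rather than a dictionary, but the geometry is the same.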
Reference: Tulving's Theory of Episodic Memory
The Guardrails: The Executive Protocol
Standard LLMs are people-pleasers. They want to say "Yes." The Executive Protocol is a layer of negative constraints designed to say "No." It actively blocks the "Beige Web" vocabulary (e.g., delve, tapestry, landscape) before it reaches the draft. It enforces your intellectual seniority by rejecting the fluff that junior copywriters (and generic AIs) rely on.
The Underlying Science: Prefrontal Inhibitory Control
The hallmarks of seniority—restraint, precision, and focus—are functions of the prefrontal cortex exercising inhibitory control. It is the ability to not say the first thing that comes to mind.
Standard AI models function like a brain without inhibition; they output the most statistically likely token (the cliché). The Executive Protocol acts as a Digital Prefrontal Cortex. It applies logit bias penalties (mathematical inhibition) to suppress low-status tokens (e.g., "delve", "synergy") before they manifest. This forces the model to bypass the path of least resistance and find a higher-entropy, higher-value expression.
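Logit bias is the concrete mechanism here: before sampling, a fixed penalty is subtracted from the scores of banned tokens so they effectively never win. A self-contained sketch (the token scores and the -100 penalty are illustrative values, not Gaspard's configuration):

```python
import math

def apply_logit_bias(logits, bias):
    """Add per-token penalties to raw logits before sampling."""
    return {tok: score + bias.get(tok, 0.0) for tok, score in logits.items()}

def softmax(logits):
    """Convert logits to a probability distribution."""
    m = max(logits.values())
    exps = {t: math.exp(s - m) for t, s in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# The cliche is the statistically likely token; inhibition suppresses it.
logits = {"delve": 2.0, "examine": 1.5, "dissect": 1.2}
banned = {"delve": -100.0}  # large negative bias effectively removes the token

probs = softmax(apply_logit_bias(logits, banned))
best = max(probs, key=probs.get)
```

Without the bias, "delve" has the highest logit and would dominate; with it, probability mass shifts to the runner-up, forcing the model off the path of least resistance.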
Reference: Inhibitory Control (Executive Function)
The Evolution: The Delta Engine
This is where the model separates itself. When you edit a Gaspard draft, you aren't just fixing a typo. You are teaching the machine. The Delta Engine analyzes the gap between what it wrote and what you published. It learns from the difference.
You deleted an exclamation point? It lowers enthusiasm.
You added a data point? It increases density. Every edit tightens the calibration.
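The two edit signals above can be read straight off a character-level diff between the draft and the published version. A minimal sketch using Python's standard difflib (the signal names are invented for the example):

```python
import difflib

def edit_deltas(draft: str, published: str) -> dict:
    """Count a couple of illustrative calibration signals from the edit diff."""
    sm = difflib.SequenceMatcher(a=draft, b=published)
    signals = {"exclamations_removed": 0, "digits_added": 0}
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("delete", "replace"):
            # Material the user cut from the draft.
            signals["exclamations_removed"] += draft[i1:i2].count("!")
        if op in ("insert", "replace"):
            # Material the user added to the published version.
            signals["digits_added"] += sum(c.isdigit() for c in published[j1:j2])
    return signals

draft = "This launch is huge! Truly exciting!"
published = "This launch grew revenue 34% in Q2."
deltas = edit_deltas(draft, published)
```

Here the diff reports two exclamation points removed and three digits added, which a calibration layer could translate into "lower enthusiasm, raise density."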
The Underlying Science: The Cybernetic Feedback Loop
In Systems Theory (Cybernetics), a system cannot correct itself without a closed feedback loop. Static prompts are "open loops"—they never improve.
Gaspard utilizes a Cybernetic Correction Mechanism. By treating your edit as a "loss function signal," the system calculates the delta (the error) between its output and your desired state. This is a micro-scale analogue of Reinforcement Learning from Human Feedback (RLHF): it tunes the behavior of your specific instance without the cost of retraining the model's weights. The system creates a homeostatic balance with your evolving style—analogous to neuroplasticity in the brain.
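The closed loop itself is simple in form: each correction nudges a style parameter toward the user's revealed preference by a fraction of the measured error, the classic proportional-feedback pattern from control theory. A sketch with invented parameters (the names, values, and learning rate are illustrative):

```python
def close_the_loop(style, observed, target, lr=0.5):
    """One cybernetic correction step.

    Nudge each style parameter toward the target implied by the user's
    edit, by a fraction (lr) of the error. Repeated steps converge.
    """
    return {k: style[k] + lr * (target[k] - observed[k]) for k in style}

style    = {"enthusiasm": 0.8, "density": 0.3}  # current calibration
observed = {"enthusiasm": 0.8, "density": 0.3}  # what the draft expressed
target   = {"enthusiasm": 0.4, "density": 0.7}  # what the edit implied

style = close_the_loop(style, observed, target)
```

Each published piece closes one more loop, so the calibration settles toward equilibrium with the user's evolving style rather than drifting open-loop.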
Reference: Cybernetics (Norbert Wiener)