Autonomía digital y tecnológica

Code and ideas for a distributed internet

Linkoteca: cognitive surrender


People increasingly consult generative artificial intelligence (AI) while reasoning. As AI becomes embedded in daily thought, what becomes of human judgment? We introduce Tri-System Theory, extending dual-process accounts of reasoning by positing System 3: artificial cognition that operates outside the brain. System 3 can supplement or supplant internal processes, introducing novel cognitive pathways. A key prediction of the theory is “cognitive surrender”: adopting AI outputs with minimal scrutiny, overriding intuition (System 1) and deliberation (System 2). Across three preregistered experiments using an adapted Cognitive Reflection Test (N = 1,372; 9,593 trials), we randomized AI accuracy via hidden seed prompts. Participants chose to consult an AI assistant on a majority of trials (>50%). Relative to baseline (no System 3 access), accuracy significantly rose when AI was accurate and fell when it erred (+25/-15 percentage points; Study 1), the behavioral signature of cognitive surrender (AI-Accurate vs. AI-Faulty contrast; Cohen’s h = 0.81). Engaging System 3 also increased confidence, even following errors. Time pressure (Study 2) and per-item incentives and feedback (Study 3) shifted baseline performance but did not eliminate this pattern: when accurate, AI buffered time-pressure costs and amplified incentive gains; when faulty, it consistently reduced accuracy regardless of situational moderators. Across studies, participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3. Tri-System Theory thus characterizes a triadic cognitive ecology, revealing how System 3 reframes human reasoning and may reshape autonomy and accountability in the age of AI.
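The effect size the abstract reports, Cohen's h, measures the gap between two proportions on the arcsine scale. A minimal sketch of the computation, using illustrative proportions rather than the study's actual data:

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Effect size for the difference between two proportions,
    via the arcsine transformation: h = |phi1 - phi2|,
    where phi = 2 * arcsin(sqrt(p))."""
    phi1 = 2 * math.asin(math.sqrt(p1))
    phi2 = 2 * math.asin(math.sqrt(p2))
    return abs(phi1 - phi2)

# Illustrative only (not the paper's numbers): accuracy when the
# AI was accurate vs. when it was faulty.
h = cohens_h(0.80, 0.40)
print(round(h, 2))  # prints 0.84
```

By convention, h around 0.2 is a small effect, 0.5 medium, and 0.8 large, so the reported h = 0.81 sits at the "large" threshold.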

The new development paradigm is called harness engineering. The uncomfortable question is not whether it will make us more productive, but whether it will let us keep thinking.

Two weeks ago I wrote about cognitive surrender: the effect that appears when we accept what the AI returns to us without filtering it. We don't evaluate it poorly; we simply don't evaluate it at all. If AI is going to be everywhere, can we design it so that we keep thinking, rather than so that we stop?

The field is no longer that of the developer who writes code, but of the one who assembles and operates agentic systems. We call it harness engineering: building the harness around the model (evals, observability, prompts, tools) so that something nondeterministic behaves reliably.
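The harness idea above can be sketched as a tiny eval loop: a fixed test set, retries, output normalization, and a pass/fail verdict wrapped around a flaky model. Everything here (`call_model`, `EVALS`, `run_eval`) is a hypothetical illustration, not a real API:

```python
import random

random.seed(0)  # make the flaky stub reproducible for this sketch

# A minimal eval set: prompts with expected answers.
EVALS = [
    {"prompt": "2 + 2 = ?", "expect": "4"},
    {"prompt": "capital of France?", "expect": "paris"},
]

def call_model(prompt: str) -> str:
    # Stand-in for a real model call; nondeterministic on purpose.
    answers = {"2 + 2 = ?": "4", "capital of France?": "Paris"}
    return answers[prompt] if random.random() > 0.3 else "unsure"

def run_eval(prompt: str, expect: str, max_retries: int = 3) -> bool:
    # The harness retries and normalizes output before judging it,
    # so an unreliable model still yields a reliable verdict.
    for _ in range(max_retries):
        answer = call_model(prompt).strip().lower()
        if expect in answer:
            return True
    return False

results = {e["prompt"]: run_eval(e["prompt"], e["expect"]) for e in EVALS}
print(results)
```

The point of the harness is exactly this inversion: reliability lives in the scaffolding (evals, retries, checks, logging), not in the model itself.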