From industry to academia — building AI that works.
MS candidate in Artificial Intelligence at Hawaii Pacific University (Spring 2026). A decade in tech. Pivoted to AI in 2021. Now formalizing production experience through academic research.
About Tao An (安涛)
Founder of FIM Labs Pte Ltd (🇸🇬 Singapore · 🇨🇳 Beijing), building FIM One — an open-source AI connector hub that links agents to enterprise systems (Feishu, Slack, Teams & more). Also serving government and enterprise clients in China, with production AI systems focused on legal and healthcare domains.
Research interests: Retrieval-Augmented Generation, LLM memory architectures, knowledge graphs.
Featured Project
FIM One
LLM-powered agent runtime that bridges disconnected enterprise systems — ERP, CRM, OA, finance, databases — without modifying existing infrastructure. Features intelligent DAG planning, ReAct reasoning, full RAG pipeline, visual workflow editor, and MCP protocol support.
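The ReAct pattern named above can be sketched as a short loop: the model alternates between deciding on an action (Thought), invoking a connector (Act), and feeding the result back (Observe). This is a minimal illustration only; the `plan` stub and `crm_lookup` tool name are hypothetical, not FIM One's actual APIs.

```python
# Minimal sketch of a ReAct-style tool loop, the pattern an agent runtime
# like FIM One builds on. Tool names and plan() are illustrative stand-ins.

def plan(question, observations):
    """Stand-in for an LLM call: decide the next action or finish."""
    if not observations:
        return ("call", "crm_lookup", question)   # Thought: need CRM data
    return ("finish", observations[-1], None)     # Thought: enough context

TOOLS = {
    # Each connector wraps an enterprise system behind a uniform interface.
    "crm_lookup": lambda query: f"CRM record for {query!r}",
}

def react_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        kind, a, b = plan(question, observations)
        if kind == "finish":
            return a                              # final answer
        observations.append(TOOLS[a](b))          # Act, then Observe
    return None
```

In a real runtime the `plan` step is an LLM call and the tool registry holds connectors to ERP, CRM, and messaging systems; the loop structure stays the same.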
View on GitHub →
Publications
CogCanvas: Verbatim-Grounded Artifact Extraction for Long LLM Conversations
A training-free framework that extracts verbatim-grounded cognitive artifacts from LLM conversations and organizes them into a queryable graph. Achieves 32.4% accuracy on LoCoMo (+7.8pp vs RAG) with decisive advantages on temporal reasoning (+20.6pp) and multi-hop questions.
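The core invariant, verbatim grounding, can be illustrated in a few lines: every extracted artifact must be an exact substring of its source turn, so each graph node remains checkable against the transcript. The keyword heuristic below stands in for the LLM extraction step and is not the paper's method.

```python
# Illustrative sketch of verbatim grounding: each artifact is an exact
# substring of its source turn, linked back by turn index. The keyword
# heuristic is a toy stand-in for the real (LLM-based) extractor.

def extract_artifacts(turns):
    """Pull decision-like sentences and link them to their source turn."""
    graph = []  # nodes: (turn_index, verbatim_span)
    for i, turn in enumerate(turns):
        for sentence in turn.split(". "):
            if sentence.lower().startswith(("we decided", "the deadline")):
                assert sentence in turn  # verbatim grounding invariant
                graph.append((i, sentence))
    return graph
```

Because every node carries its source turn index and an exact span, answers assembled from the graph can always be verified against the original conversation.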
AI as Cognitive Amplifier: Rethinking Human Judgment in the Age of Generative AI
Accepted at the 5th International Conference on Hybrid Human-Artificial Intelligence (HHAI 2026). Drawing on experience training 500+ professionals in AI adoption since the GPT-3 era, this position paper argues that AI acts as a cognitive amplifier, magnifying existing human capabilities rather than replacing them. It proposes a three-level model of AI engagement and advocates strengthening domain expertise and metacognitive skills over technical prompt engineering.
Cognitive Workspace: Active Memory Management for LLMs
Proposes Cognitive Workspace, a paradigm transcending traditional RAG by emulating human cognitive mechanisms. Features active memory management, hierarchical cognitive buffers, and task-driven context optimization. Achieves 58.6% memory reuse rate (vs 0% for RAG) with 17-18% net efficiency gain.
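The memory-reuse metric can be pictured with a toy workspace that promotes retrieved items into a working buffer, so repeat lookups are served from memory rather than re-retrieved. The class and method names here are illustrative, not the paper's implementation.

```python
# Toy sketch of active memory management: items fetched once are promoted
# into a working buffer; later hits count as reuse instead of retrieval.
# CognitiveWorkspace and fetch() are illustrative names, not the paper's API.

class CognitiveWorkspace:
    def __init__(self):
        self.buffer = {}   # working memory: key -> content
        self.hits = 0      # served from the buffer (reused)
        self.misses = 0    # had to retrieve from the backing store

    def fetch(self, key, store):
        if key in self.buffer:
            self.hits += 1                 # active reuse, no new retrieval
        else:
            self.misses += 1
            self.buffer[key] = store[key]  # promote into the workspace
        return self.buffer[key]

    def reuse_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

A stateless RAG pipeline retrieves fresh context on every query, so its reuse rate under this metric is 0%; a workspace that keeps and reuses context is where the reported efficiency gain comes from.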