My Reflections on MCPs Enabling Me to Be an AI Superuser
The promises and pitfalls of building a truly personalized AI assistant
A few weeks ago, I made a Stable Discussion YouTube video on MCP Servers. Since then, I’ve been leveraging these servers every day and they’ve become essential to my daily workflow. In this post, I'll share my personal reflections on how these AI-enhancing tools have transformed the way I work, the benefits I've experienced, and the challenges I've run into while becoming what I consider an 'AI superuser.'
If you’re coming in fresh, here’s a quick refresher:
MCP (Model Context Protocol) Servers connect AI chat clients to tools and data. Rather than relying on conversation history or what an AI can find online, they access data sources and tools directly. These can be as simple as documents or as complex as APIs. As chat “agents” become more common, these tools are essential for grounding our AI workflows.
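To make the idea concrete, here’s a toy sketch (plain Python, not the actual MCP SDK or wire protocol) of the two things an MCP server conceptually offers a client: a list of tools the model can see, and a way to call one with structured arguments. The `read_note` tool and its behavior are invented for illustration.

```python
# Toy sketch of the MCP idea: a server advertises tools, a client calls them.
from typing import Any, Callable

class ToyToolServer:
    def __init__(self) -> None:
        self._tools: dict[str, tuple[str, Callable[..., Any]]] = {}

    def register(self, name: str, description: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = (description, fn)

    def list_tools(self) -> list[dict[str, str]]:
        # The client shows these descriptions to the model so it knows what it can call.
        return [{"name": n, "description": d} for n, (d, _) in self._tools.items()]

    def call_tool(self, name: str, **kwargs: Any) -> Any:
        _, fn = self._tools[name]
        return fn(**kwargs)

server = ToyToolServer()
server.register("read_note", "Read an Obsidian note by title",
                lambda title: f"(contents of '{title}')")
print(server.list_tools()[0]["name"])
print(server.call_tool("read_note", title="Daily Log"))
```

The real protocol adds transports, schemas, and capabilities on top, but this is the essential shape: advertise, then dispatch.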
My Daily Workflow
My current workflow looks a bit like this:
I keep my notes in Obsidian, my personal tasks in Todoist, and my work tasks in Linear. MCP servers interact with each of these as needed on behalf of my AI clients, Claude or Cursor. I move between these different tools as I work, but these connections keep everything in sync. Many of the outcomes of these conversations become notes that end up back in my Obsidian files.
The result is an incredibly capable AI assistant. Acting on my behalf, it will schedule tasks, update details as I uncover challenges, and reorganize my day when asked. I zoom through task creation and organization in the morning, and the AI fills in details that I might otherwise miss.
These little improvements keep pleasantly surprising me. For example, I’ll ask for the AI to create a task to fix a bug with the login page, and it’ll give me a few test steps related to the bug that I should go through to validate that the fix worked as intended. Pretty neat!
How It Feels to Be a Superuser
I feel like a superuser because I've finally achieved what other AI assistants have promised but failed to deliver: personalized control with powerful capabilities. Let me put this in perspective:
The Evolution of AI Assistants:
Alexa/Siri: Can connect to services via 'skills' but require exact command phrasing with minimal flexibility.
Standard ChatGPT: Offers natural language understanding but lacks meaningful customization. It has tool use and service integrations but is locked into using OpenAI’s models and application.
MCP-Enhanced AI: Combines natural language understanding with direct connections to my chosen tools (Todoist, Linear, Obsidian), preferred AI models, and client interfaces based on my specific needs.
This combination of flexibility and control is what makes the experience truly powerful. Rather than adapting to an AI SaaS offering that is opinionated about how I use AI, I've configured the AI to adapt to a workflow and tools that work for me.

Feedback on MCP Servers
Now, there are always trade-offs when living on the cutting edge. While I'm enthusiastic about these tools, I'm also documenting their limitations to help set realistic expectations and perhaps guide future improvements. Here are the key challenges I've encountered while building my AI superuser setup:
Semantic API Interfaces are Better
As I play with Linear and Todoist, I’ve noticed that there are a few things the AI gets wrong time and time again.
Linear generates a task ID, like TASK-123, when a new task is created. While the server seems to do a pretty good job creating new tasks, it’ll often use the wrong three-digit number when referencing a task later.
For Todoist, todos have a priority value from 1 (high) to 4 (low). But for some reason, it regularly misunderstands which is high and which is low when reading or creating todos.
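One way to defend against this kind of scale confusion is to never hand the model raw numbers at all. Here’s a hypothetical helper along those lines: the tool accepts a human-readable label and does the numeric mapping itself. (Worth noting, and worth verifying against the docs: Todoist’s REST API numbers priority in the opposite direction from its UI labels, which invites exactly this mix-up. The mapping below follows the API convention, where 4 is most urgent.)

```python
# Hypothetical helper: translate a human-readable priority label into the
# numeric value the API expects, so the model never guesses which end of
# the scale is "high".
LABEL_TO_API = {"urgent": 4, "high": 3, "medium": 2, "low": 1}  # API: 4 = most urgent

def api_priority(label: str) -> int:
    try:
        return LABEL_TO_API[label.lower()]
    except KeyError:
        raise ValueError(f"unknown priority label: {label!r}")

print(api_priority("High"))  # 3
```

A tool interface built around labels like these gives the model one unambiguous vocabulary instead of two conflicting numbering schemes.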

By contrast, one thing that always works in Todoist is adding due dates. I’ll tell it to schedule a task to be done tomorrow, and it always gets that date right! Many AIs struggle with dates, but Todoist passes with flying colors. What gives?!
This is because the Todoist API can literally be passed “tomorrow” as a value. It has a semantic API: plain language can be interpreted against the interface.
AI models are largely trained on text and, while they can generally read JSON, the results aren’t usually as accurate as when they can interact in plain language. Where possible, we should use natural language to interface with AIs.
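Here’s what that semantic argument looks like in practice: Todoist’s task endpoint accepts a `due_string` field containing a natural-language phrase, so the tool can pass the model’s words straight through instead of making it compute an ISO date. The payload-building function below is an illustrative sketch, not a real server.

```python
# Sketch of a "semantic" tool argument: pass the model's natural-language
# phrase through to an API that understands it, rather than asking the
# model to do date math.
import json

def build_create_task_payload(content: str, due_phrase: str) -> str:
    # Todoist interprets due_string values like "tomorrow" or "every friday".
    payload = {"content": content, "due_string": due_phrase}
    return json.dumps(payload)

print(build_create_task_payload("Fix login bug", "tomorrow"))
```

The model only has to repeat what the user said; the date arithmetic happens on the service side, where it’s deterministic.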
Broad or Conditional Interfaces are Trouble
Notion has a very broad interface space. The MCP Servers I tried that connect with Notion could easily create documents but had difficulty working with Notion’s database structure. The agent would error repeatedly and was unable to recover.
I believe this is because of how broad the database schema is in Notion. Notes are parents and children of database entries. This nested structure, and many other optional capabilities, provide a lot of flexibility in what users can create, but it also means that there are a lot of possibilities for what a good API call can look like. Worse, there are many possibilities for what a bad API call looks like.
As we build interfaces for MCP Servers, a good rule of thumb is to make them obvious and clear, without too many optional or conditional parameters. That complexity seems to be difficult for even the most sophisticated LLMs to fully grasp.
If complexity is truly required, offer function “steps” that break the problem into a series of parts that can be done bit by bit to construct the complex result.
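As a sketch of that decomposition, instead of one broad `create_database_entry` tool with a dozen optional, nested fields, the server could expose small steps with obvious signatures that the agent chains together. These function names are hypothetical, not Notion’s actual API.

```python
# Illustrative decomposition: small, hard-to-misuse steps instead of one
# broad tool with many optional and conditional fields.
def create_entry(title: str) -> dict:
    return {"title": title, "properties": {}, "children": []}

def set_property(entry: dict, key: str, value: str) -> dict:
    entry["properties"][key] = value
    return entry

def add_child_note(entry: dict, note: str) -> dict:
    entry["children"].append(note)
    return entry

# An agent builds up the complex object one unambiguous call at a time.
entry = create_entry("Q3 Roadmap")
set_property(entry, "Status", "In Progress")
add_child_note(entry, "Kickoff meeting notes")
print(entry["properties"]["Status"])  # In Progress
```

Each call has one job, so a failed step is easy to retry without the agent having to reconstruct an entire deeply nested payload.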
Room To Improve Configuration
As things are, MCP Servers are a “use as is” tool. You plug in your API keys to access different services, but there isn’t really a way to specify how you want to leverage that server.
For example, I have projects set up in my Todoist account and teams set up on Linear. When I create tasks, I want them to go to the correct places based on what project I’m working on.
As things stand, the best way to add this configuration is by creating a Project in Claude Desktop or by adding custom instructions in Cursor for the project being worked on. As we continue to expand our ability to leverage these services, we’ll need a way to pass configuration details through to the servers themselves rather than always configuring them on the MCP client side.
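The kind of configuration I have in mind could be as small as this: a per-user routing table from “what I’m working on” to where new tasks should land. The context keys and destinations below are hypothetical examples, and real servers don’t accept anything like this today.

```python
# Sketch of a per-user routing config an MCP server could accept: map the
# current working context to a destination service and project/team.
ROUTING = {
    "personal": {"service": "todoist", "project": "Inbox"},
    "stable-discussion": {"service": "linear", "team": "CONTENT"},
}

def destination(context: str) -> dict:
    # Fall back to a safe default instead of letting the model guess.
    return ROUTING.get(context, {"service": "todoist", "project": "Inbox"})

print(destination("stable-discussion"))
```

With something like this living in the server, the routing decision stops being a judgment call the model re-makes (and occasionally fumbles) on every conversation.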

The Approval Interface Feels Limited
As things stand, the interface for MCP tool use rides on the approval systems implemented by Claude, Cursor, and Windsurf. That is the “Approval Interface”: when an AI agent wants to use a tool, you approve the tool use and then let it run as it sees fit.
However, I think we can easily do better here. There are times when I’d love to edit the call at this point, make a clarification, or choose among options to pursue. I don’t just want the AI to make these choices; I want to interact.
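A richer approval step might look something like this sketch: before a tool call runs, the user can approve it as-is, edit the arguments, or reject it, rather than facing a bare yes/no. This is purely illustrative; real clients would wire the decision into their UI instead of a callback.

```python
# Sketch of an approval step that lets the user edit a tool call's
# arguments before it runs, instead of a bare approve/deny.
from typing import Any, Callable

Decision = Callable[[str, dict[str, Any]], tuple[str, dict[str, Any]]]

def review_call(name: str, args: dict[str, Any], decide: Decision):
    action, new_args = decide(name, args)
    if action == "approve":
        return ("run", name, new_args)  # run with the (possibly edited) args
    return ("skipped", name, args)

# Example "user" that corrects the due date before approving.
decision = review_call(
    "create_task",
    {"content": "Fix login bug", "due_string": "today"},
    lambda name, args: ("approve", {**args, "due_string": "tomorrow"}),
)
print(decision[2]["due_string"])  # tomorrow
```

Even this small affordance changes the relationship: the approval dialog becomes a collaboration point, not just a gate.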
MCPs Benefit from Self-Configuration
I think the MCP usage is cool BECAUSE I specified it. By contrast, I think I would hate to work with an AI agent that suddenly called tools that I didn’t know about. To some degree, I don’t like the ChatGPT memory for this reason.
When we’re not in control, AI feels scary. Imagine everything you just chatted about with your AI suddenly going into an email to your colleague. It’s still in draft, but you didn’t even know this was a possibility or something to watch out for.
When you don’t know what tools exist, it’s also frustrating. Much like trying to get Siri to turn off a light in a room it doesn’t know about, you’ll spend a lot of time and energy just trying to get things to run. Users need some knowledge of these tools, to know, for example, that email is a capability and texting is not.
When the tools are known and our interactions are intentional, the experience is amazing! You can do some interesting research and brainstorming about a topic and then craft an email all within the same AI chat window. Having these configurations personalized to the tools we use is a major benefit.
As we develop applications that also leverage these sorts of tools, awareness of the tools’ existence matters. We’ll need to train users to know about these tools and find ways to show them what is possible.
Setup is still Tricky
I find myself hitting weird issues regularly because we’re still in the first wave of these tools. Releases aren’t managed well and servers regularly break when versions update. Clients seem to have inconsistencies between them when trying to call these servers. And sometimes things just need to be restarted because something got in a weird state.
This also reminds me of being a superuser. Great capability comes with an increased possibility of everything going horribly wrong! Integration is still one of the most difficult challenges of building real systems and services. I’m sure we’ll continue to learn a lot as MCPs become a more widely used pattern for interfacing with AI chats.
Looking Ahead
Despite the current limitations, MCP Servers represent a significant step toward truly personalized AI assistants that work within our existing digital ecosystems rather than forcing us into new walled gardens.
As these tools mature, I expect we'll see improved configuration options, better semantic interfaces, and more intuitive agent workflows. The potential for truly personalized AI assistance is enormous once we solve these integration challenges.
For now, I'm enjoying the productivity benefits despite the occasional hiccups. Being an AI superuser today means embracing both the power and the occasional frustration of working with emerging technology. But the tool capabilities and personalized experience make it well worth the effort.
If you're considering exploring MCP Servers yourself, start small with one or two tools you use daily, and gradually expand as you become comfortable with the workflow. The journey to becoming an AI superuser is iterative—and that's part of what makes it so rewarding.