Why I dropped ChatGPT to explore other options
Challenges to responsibility and control in an evolving AI world
Last week I cancelled my ChatGPT subscription. I’ve had the subscription since ChatGPT first got big last year and I’ve been a fairly active user (although I could probably pay less if I only used the API). However, I’ve noticed that the work I want to do with AI has diverged dramatically from the way that ChatGPT is growing as a product and platform.
First, a little about me. I’m a coder who’s dabbled in entrepreneurship and consulting. I like coding because it helps me build stuff. I also like design because it guides how I build. And I like consulting with customers and clients to understand how to help others with what I build.
I like to create real value. Saving people time, improving the quality of their work, or just helping them to understand something better. I like to advise and assist those trying to make great stuff. So, I was naturally drawn towards ChatGPT for this like so many others.
ChatGPT does amazing things. It can help someone who has a difficult time expressing themselves to clearly express what they’re trying to say. It can explain concepts and break ideas down into parts that are easier to understand. It can even help you make sense of hard-to-understand code and provide pointers for writing better code.
These incredible things are powerful.
But there are also times when ChatGPT does amazingly stupid things. It can remove important context from your words when trying to summarize or express yourself. It can get confused during an explanation and leave out important details. It can even convince you to code against an API that doesn’t exist while including links to documentation that also doesn’t exist.
ChatGPT has the incredible power to lie. And to lie without intention. There’s no malicious intent or rational explanation behind these behaviours. It’s simply the side effect of a new tool that has yet to be guided, honed, and used in a way that can responsibly align itself with a clear objective. At least not generally (yet).
So, there’s this challenge of responsibility over the output. You are responsible for understanding the results these machines produce, doing research on the output, and ensuring that you are not being misled. I believe that responsibility requires a better understanding of the machine and the inputs that you’re feeding it.
Just as we learned to responsibly create, edit, save, and delete documents on our computers, understanding how to prompt ChatGPT is becoming an essential skill, one whose responsibility extends beyond the computation being done. And just as we learned to avoid downloading viruses or sending personal information to strangers when browsing the web, we need to learn where the boundaries of these tools lie and how to interpret them.
Interpreting ChatGPT has only gotten harder over time. I think this clouds our ability to use the system responsibly.
Initially, you could look at your ChatGPT chat history to understand how your conversation developed. You might try different prompts and learn to ask questions in a few different ways to get different behaviour. You might also realize a conversation has gone on too long and that restarting might get you a better result.
But now there are many new features to think about, including Plugins, Code Interpreter, and GPTs. These complicate things: chunks of documents are added, API data is injected, and code is run to produce results.
You now have “modes” of usage that require an in-depth investigation of the system if you have any hope of understanding how you arrived at a conclusion. The result is less traceable, and it’s difficult to reconstruct how ChatGPT got there. The new features created this confusion.
At times these features feel like magic. When I first picked up Code Interpreter, I thought it was incredible. Then I tried a complex change to some data and realized just how limited the solution actually is. Without looking at the generated code, you might not catch major issues or errors. That investigation is another step that breaks the promise of magic.
That broken promise makes it hard to dream. You think of all the incredible possibilities of this technology and what could be created. Then you try something and realize that what you got really isn’t as incredible as you thought. We’re seeing this a lot with the current wave of AI video generation technology. While the results are sometimes incredible, they still can’t be properly guided. Unless you’re willing to give up creative freedom around your vision, you’re unlikely to direct your exact masterpiece with these tools.
This complication became even more fraught with the addition of memory to ChatGPT. While the AI can now learn about you, it’s becoming far less clear how it will act when you ask it questions. You’re losing traceability, which makes it hard to be responsible with your usage. Not only that, you’re also losing a degree of control.
As a coder, I like to test the edges of systems and tools, quickly throwing harder problems at them to see how they might break. I’m always drawn to the incredible things I see and to figuring out how they were done. And I’m getting tired of being disappointed by what the actual solutions look like in many AI-related cases.
HealthGPT connects Apple Health to ChatGPT and sounds really incredible and powerful, until you realize that it’s mostly just one prompt with access to some of the data, not an in-depth integration. Many of the other solutions listed on There’s an AI for That feel like this as well. Marketing an unrealized vision eclipses the real capabilities that could be equally possible. ChatGPT feels like the embodiment of that disappointment to me.
You see really incredible demos of workflows that people create. There are some incredible ways to use ChatGPT. But then I get stuck on a problem in my code and ChatGPT is occasionally absolutely useless. Maybe I’m working on problems so niche that no one has ever worked on them before (probably not), but it’s disappointing. Because the hype and potential are so big, I’m always left disappointed and underwhelmed.
So, my alternatives at the moment are LM Studio and LobeChat. While they aren’t perfect, I find that they enable me to customize my AI experience and experimentation meaningfully.
LM Studio lets you run a wide range of open-source models on your own computer. It’s cool to see that some of these models even do a better job of answering coding questions than ChatGPT or GitHub Copilot. And they cost nothing but some computation on your own machine.
LobeChat is a web chat client that is basically a nice-looking replacement for ChatGPT. It supports several different models from OpenAI, Google, Microsoft, and Amazon, as long as you have an API key. The interface feels a lot better than ChatGPT’s, largely because it’s simple and direct about what it does.
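One practical upside of this setup: LM Studio can serve the models it runs through a local, OpenAI-compatible API, so you can script against them, or point a client like LobeChat at them, much as you would against OpenAI itself. Below is a minimal sketch using the openai Python client; it assumes LM Studio’s local server is running on its default port (1234) with a model already loaded, and the model name is just a placeholder.

```python
# Minimal sketch: query a model served locally by LM Studio.
# Assumes LM Studio's OpenAI-compatible server is running at the
# default http://localhost:1234 and a model is already loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint
    api_key="lm-studio",                  # placeholder; no real key is needed locally
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use whichever model you've loaded
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Explain what a Python generator is in two sentences."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, swapping the base_url and api_key back to a hosted provider makes the same snippet work against OpenAI or any other compatible service.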
I’ve found myself annoyed at the direction OpenAI is taking by adding these features. They’re developing a platform aimed at letting non-technical users converse with an all-knowing oracle. That oracle has an imbued set of powers, but it chooses when to enact them and how. It could be that this is the new way of computing, but if so, you’re stuck with OpenAI’s way of understanding this new model of thinking. And frankly, I find that limiting and alarmingly uncontrollable.