Google's Antigravity IDE - Still Too Early
I played around with the new Google Agentic IDE, Antigravity, on launch day and built a few features for an app I've been working on.
If you're unfamiliar, this video is a helpful overview of the features:
My initial impressions were more nuanced than the chipper attitude of that presentation, which should help you get a balanced perspective.
Using Antigravity
Agent Manager
This interface feels like a move in the right direction. It offers a way to manage the work done by an agent, the ability to see and respond to plans easily, and a clear indication of the changes made. I like the Agent Manager's UI, but it's been a little buggy so far. It produced some good changes, but it missed some of the context in my CLAUDE.md files about how I wanted the app built. I'm not sure whether it's reading my core information files or docs at all.
Knowledge
The knowledge base feature looks interesting, but across the two medium-sized features I built, it never decided anything needed to be recorded. I'm unsure what it considers worthy of an entry. As with all AI memory systems, I worry about it getting the wrong idea and storing that idea for later use.
Intelligent Tool Approval
Antigravity lets the model choose when to run a tool in some cases rather than stopping to ask for permission. This is a cool concept if it works well, and I'm curious to explore it further. I worry that commands that look non-destructive may get called by the model anyway and end up causing destructive changes. This may require a metadata layer, similar to how MCP servers have expanded their interfaces to declare whether a tool mutates data.
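For what it's worth, the MCP spec already gestures at this kind of metadata layer through tool annotations. Here's a minimal sketch of a server declaring a destructive tool, assuming the current @modelcontextprotocol/sdk registerTool API; the delete_note tool and its storage layer are hypothetical:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "notes", version: "1.0.0" });

// Annotations are hints, not enforcement: a client like Antigravity could
// auto-run tools marked read-only and pause for approval on destructive ones.
server.registerTool(
  "delete_note",
  {
    description: "Permanently delete a note by id",
    inputSchema: { id: z.string() },
    annotations: {
      readOnlyHint: false,
      destructiveHint: true, // signal: don't auto-run without asking
      idempotentHint: true,
    },
  },
  async ({ id }) => {
    // ...delete the note from storage here (hypothetical)...
    return { content: [{ type: "text", text: `Deleted note ${id}` }] };
  }
);

await server.connect(new StdioServerTransport());
```

Since these are self-reported hints, a cautious client still can't fully trust them, which is exactly the risk with innocent-looking commands.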
Browser Tool
It's great to see them add a tool for interacting with the code as it actually runs. However, my initial setup of their browser integration didn't work out very well. I installed the extension, but the agent had difficulty finding it. Eventually it seemed to work once I closed Chrome and opened a fresh tab.
Since Chrome isn't my main browser, it wasn't set up quite right for the application I was testing, but it worked well on a later project. The tool seems able to record, capture screenshots, and read the console.
Commenting
Having a highlight-and-comment system on AI plans is plain great UX. In existing tools I often find myself opening a note somewhere to collect my feedback, then scrolling down and pasting it into the chat input at the bottom of an agent chat window. When I made comments here, they seemed to be factored in appropriately when I asked the model to apply them.
The pitch for this feature sounded good, but I'm curious how the model thinks about incorporating feedback like this in practice.
Gemini 3 Pro (High)
I'll pay it a good compliment in saying that this model felt a lot like Claude Sonnet 4.5, to the point that it worked with me in the way I'm used to. That doesn't often happen when switching models. That said, I still didn't get my context into the conversation appropriately, and I worry a bit that I'll have to start every conversation with "Read @CLAUDE.md" before we can get to work.
On release day there's always a lot of strain on these models, and this was no different. There were a couple of times I needed to step away while working through the features to let the global limit cool off. Hopefully usage becomes more predictable so more people can use this regularly.
Onboarding
When I launched Antigravity, the onboarding crashed on the final step. I had to go through it again, and even though I indicated I wanted it to pull in my settings from Cursor, it ignored that and didn't import any of my extensions or config. This is definitely a headwind in my adoption journey, but the kind of thing that will likely get fixed with time.
Takeaways
I built out two features at the same time to explore the agentic capabilities of the Antigravity IDE. One feature added notes for users in a feed; the other let users upload images to Google Cloud Storage through a rich text field. For context on the second one, a sketch of that upload flow is below.
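As a rough illustration of the upload feature (my own sketch, not Antigravity's output), here's a minimal Next.js route handler that issues a V4 signed upload URL for GCS. The route path, GCS_BUCKET env var, and uploads/ prefix are all assumptions:

```typescript
// app/api/upload-url/route.ts (hypothetical route)
import { randomUUID } from "node:crypto";
import { Storage } from "@google-cloud/storage";
import { NextResponse } from "next/server";

const storage = new Storage(); // uses Application Default Credentials
const bucket = storage.bucket(process.env.GCS_BUCKET!);

export async function POST(req: Request) {
  const { filename, contentType } = await req.json();

  // Namespace uploads and avoid collisions with a random prefix.
  const file = bucket.file(`uploads/${randomUUID()}-${filename}`);

  // Short-lived V4 signed URL the rich text editor can PUT the image to.
  const [url] = await file.getSignedUrl({
    version: "v4",
    action: "write",
    expires: Date.now() + 10 * 60 * 1000, // 10 minutes
    contentType,
  });

  return NextResponse.json({ url, objectPath: file.name });
}
```

The client-side rich text field then PUTs the image to the signed URL and embeds the resulting object path in the document.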
Running the two changes in parallel, though, turned out to be a bad idea. Part of me assumed the changes would exist in separate worktrees or branches so they wouldn't conflict. Some of the demo videos made it seem like that might be the case, but no: it's the same as running two separate Claude Code instances in the same repo, just with a new UI.
Ultimately, I wanted to put Antigravity through its paces, but running two things at once confused me a bit while learning a new tool. It also seems to have confused the interface, because one of the agents simply stopped responding to my prompts after it unsuccessfully tried to test the feature in the browser. The other agent completed its work fine, but the Review tab still showed both sets of changes, which was, again, confusing.
At the end of my coding session I had trouble figuring out how to progress and what to do with the resulting conversations. There's a way to review the changes and provide feedback, but it's still unclear how to get the IDE to commit the change from the Agent Manager view. When I did merge, I also didn't know what to do with the conversations. I wish there were an archive feature, as deleting these conversations doesn't feel great, especially when the Knowledge base doesn't seem to update.
To complete my changes, I ended up switching back to Claude Code, which had better overall context on what I was building, and where I had better muscle memory for pushing a late-stage change through.
In Antigravity, there's a lot of real intentionality around context, but the control over context feels limited. Because it's a "smart system," there's a lot less to steer. That's helpful in some ways, but it also makes it harder to understand exactly what's going on at any given time.
I keep noticing things I've come to appreciate in other interfaces that are missing here. One example is queued changes in Claude Code: if there's a string of commands that make sense to run one after another, it queues them up. Auto approval works well in Antigravity, but there are times where I have to approve several changes one by one that were clearly known in advance and could have been approved without a delay between each one.
Release Article & Videos
The release blog mentions that "Gemini 3 is also much better at figuring out the context and intent behind your request," but I haven't found this to be the case. Jumping into an existing codebase, it ignored some of the core NextJS 16 architecture I had in place despite clear indications. That said, many of the solutions it created were well done; they just didn't retain the level of context the blog post might suggest.
In the getting started video, it was refreshing to see a Google engineer directly trust the AI with his API key. That's honestly the norm in a lot of cases, depending on how permissive the keys are. It let the AI explore and investigate the API with context on the interface gathered by Googling the web. This is a pretty likely use case for most devs.
Agents Testing During Research
It's amazing to see the impact of Intelligent Tool Approval when agents are doing research; that's where the bit of magic in this release lives. It makes me think this might be one of the better agentic interfaces for doing coding research.
Nano Banana Image Gen
It's awesome to have an image generation model as powerful as Nano Banana running directly in the IDE. It can generate assets and add them straight into the application, which is pretty incredible (no transparent backgrounds, though, unfortunately).
Next
I'm intrigued by Antigravity, but it still feels a lot like a beta of something that could be great. It will be interesting to watch competitors in the space learn from these ideas and improve their own services. I wouldn't recommend making Antigravity your main editor for a few weeks while the bugs get ironed out, but it's great to experiment with, and potentially to run research tasks on existing projects.