Blog Discussion: Evolving Dancing Goat with agentic AI software development

Kentico Community
2025/07/01 1:33 PM

Blog Post: Evolving Dancing Goat with agentic AI software development

Continue discussions 🤗 on this blog post below.

Tags:
Kentico, AI, Content querying, Roadmap, v30.6.0, C#, Developer tools, Software development

Answers

2025/07/01 2:15 PM

Great walkthrough! However, I don't agree that Cursor is the absolute best. I think VS Code with the right extensions and GitHub Copilot agent mode is just as good. Plus, it keeps all of my non-vibe-coding tools and extensions working in the old world.

Still a fan of Cursor too, but you'll have to pry VS Code from my cold, dead hands... 😃

2025/07/01 2:22 PM

I think I might agree with you, though I can also see how it depends on your goals.

I haven't used Cursor much but I've watched a lot of videos of developers using it. It's impressive and encourages more of a hands-off-the-code approach to software development - something closer to vibe coding even if you're still encouraged to review changes and re-prompt.

VS Code's Copilot features have been improving quickly and fit the Copilot-as-a-pair-programmer agentic AI development mindset. You're still in the driver's seat and have all the extensions and tools that help you write correct code quickly and confidently, while AI becomes one of those tools without replacing them.

I'll be publishing a blog post soon about my personal experiences using VS Code and the GitHub Copilot extension when working on the Kentico Community Portal. I've had some really positive experiences and I am definitely writing less code, even if I haven't moved into the no-human-written code world.

I'm trying to keep an open mind because I used VS for years, then VS for back-end and VS Code for front-end... finally, once I stopped working on KX13 projects, I switched 100% to VS Code. It's a good reminder that tools change and getting too attached to one can keep you from real productivity improvements.

2025/07/04 8:07 PM

Nowadays, it’s pretty hard to judge which tool is the best, since all of them are developing rapidly, competing with each other, and new ones keep popping up all the time. It also depends a lot on your workflow, projects, and expectations.

From my experience, I worked on a migration project (KX13 → XbyK) where I needed to migrate sections, widgets, and so on, often with some modifications. In my head, I just wanted to say, “Migrate the ABC widget and make XYZ modifications.”

I was really struggling with VS Code + Copilot (Visual Studio isn’t even worth mentioning, haha), because I couldn’t get it to have both the old and the new solutions in context. So I kept switching between them like a fool, copying files back and forth.

Then I heard great feedback about Cursor, gave it a try, and what a miracle. I could add both solutions into a single workspace, and everything was available in the context. And boom, I was able to migrate the features exactly the way I described above. It was a huge productivity booster, and I never looked back.

At the time, VS Code couldn’t do that, but reading the docs now, it looks like it has caught up. Will I go back to VS Code? It’s hard to predict. But never say never. Cursor isn’t all rainbows and unicorns either (yeah, I’m talking about the pricing and transparency around rate limits).
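For anyone hitting the same wall: the VS Code feature that has since caught up here is multi-root workspaces. A `.code-workspace` file can list both solutions so they share one context. A minimal sketch, with hypothetical folder names and paths:

```json
{
    "folders": [
        { "name": "kx13-source", "path": "../LegacySite.KX13" },
        { "name": "xbyk-target", "path": "../NewSite.XbyK" }
    ],
    "settings": {
        "search.exclude": { "**/bin": true, "**/obj": true }
    }
}
```

Opening this file (File → Open Workspace from File) puts both solutions in the same workspace, which is the context that agent tooling indexes; excluding `bin`/`obj` keeps build output from polluting search results.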

2025/07/07 4:44 PM

Hi Sean,
Appreciate the detailed and insightful post. It's really great to see AI applied meaningfully in real development scenarios.

Quick question:
Were there any surprises from the AI’s output that improved on previous implementations (beyond 1:1 refactors)? Sometimes AI introduces unintentional optimizations—just curious if that happened here and whether any of those made it into the final code.

2025/07/07 7:49 PM

Nikhila,

Thanks for joining the conversation!

TL;DR: Nope! No interesting surprises, but mostly because we intentionally didn't allow them by keeping the context and prompts focused and applying our normal approach to code security.


There were some of the unwanted "optimizations" I mentioned. They came from some of the agents and were attempts to "clean up" code that wasn't explicitly mentioned in the prompts. While the changes might have been a net positive for the code base, we were very focused on the refactor to IContentRetriever, and we wanted to show a well-controlled example of using AI. We considered these changes a failure and either adjusted our prompts or switched agents.

Something that we explored at a surface level (with the agents), but did not include in this work, was asking the agent to limit the columns returned from the queried database data sets.

I believe the original Dancing Goat code did have some of this explicit SQL SELECT ... logic using the content query .Columns() API. But, unless you are querying for content items without any linked content items, the column selection has limited value. This is something we are considering as a future improvement to IContentRetriever (e.g., limiting columns at all linked item levels), and we'll almost surely use an AI agent to do the refactor at that time.
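For readers unfamiliar with that API, the explicit column selection looks roughly like this with the content item query builder (the content type and column names below are illustrative, not taken from the actual Dancing Goat code):

```csharp
using CMS.ContentEngine;

// Illustrative sketch only: "DancingGoat.Coffee" and the column names
// are hypothetical examples, not the real Dancing Goat schema.
var builder = new ContentItemQueryBuilder()
    .ForContentType("DancingGoat.Coffee", config => config
        // Restricts the SELECT to these columns for the top-level items;
        // columns of linked content items are not limited by this call.
        .Columns("CoffeeName", "CoffeeShortDescription"));
```

As noted above, this only trims columns at the top level of the query, which is why its value is limited once linked content items are involved.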

While we intentionally used agents to perform the refactor (as opposed to having human engineers type the code changes), we absolutely had humans review the AI generated code, which had to pass the same security evaluation any other code does that we author and deliver to customers.
