Kentico refined the new Content Retriever API over several Refreshes before releasing it as production-ready in the June 2025 Refresh.

We're proud of this feature, and we know it helps developers build with Xperience by Kentico more quickly by reducing boilerplate code! But the API and feature architecture were only one part of the total scope for our development team.

Not only do developers use Dancing Goat to explore and experiment with our product features, but our development teams also use it to validate that new Xperience by Kentico capabilities work as expected. This means we also needed to update Dancing Goat, replacing ContentItemQueryBuilder, custom caching, and repositories with IContentRetriever where appropriate.
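To give a sense of the shape of that change, here's a simplified before/after sketch. The "before" repository is illustrative rather than copied from Dancing Goat, and the parameterless RetrievePages<T>() call in the "after" reflects how IContentRetriever resolves language, channel, and caching from the website context.

```csharp
// Before (illustrative): a repository wrapping ContentItemQueryBuilder,
// with callers supplying channel and language, and caching handled separately.
public class ArticleRepository
{
    private readonly IContentQueryExecutor executor;

    public ArticleRepository(IContentQueryExecutor executor) => this.executor = executor;

    public async Task<IEnumerable<ArticlePage>> GetArticles(string channelName, string languageName)
    {
        var builder = new ContentItemQueryBuilder()
            .ForContentType(ArticlePage.CONTENT_TYPE_NAME, query => query.ForWebsite(channelName))
            .InLanguage(languageName);

        // ...in Dancing Goat, hand-rolled cache keys and dependencies wrapped this call.
        return await executor.GetMappedWebPageResult<ArticlePage>(builder);
    }
}

// After: the controller asks IContentRetriever directly, and the repository
// layer and its custom caching disappear.
public class DancingGoatArticleController : Controller
{
    private readonly IContentRetriever contentRetriever;

    public DancingGoatArticleController(IContentRetriever contentRetriever) =>
        this.contentRetriever = contentRetriever;

    public async Task<IActionResult> Index()
    {
        var articles = await contentRetriever.RetrievePages<ArticlePage>();
        return View(articles);
    }
}
```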

Opportunity: refactoring with AI

Our development teams follow a rapid development schedule based on our public product roadmap for Xperience by Kentico. The more features we can ship, the more value we can deliver to our customers and partner agencies.

Part of our internal AI adoption at Kentico is identifying opportunities to apply AI so we can do more, faster, and better.

We decided to use AI to refactor Dancing Goat to adopt the Content Retriever API. It was a perfect opportunity for our development team to realize the benefits of AI in their software development practices and share their experiences with our developer community.

Our goal: use AI as much as possible to perform all refactoring and code updates using only prompts, the existing Dancing Goat code base, and AI features like RAG or those provided by the code editor.

Technology selection: IDE and LLM

In recent years, new AI technologies have been released almost daily. We considered tools our developers were already familiar with and new entrants into the AI software development marketplace.

Our developers all have GitHub Copilot business subscriptions which give them access to a variety of AI models, including GPT-4.1, Claude 3.5/3.7/4 Sonnet, and Gemini 2.5 Pro through Visual Studio's and VS Code's Copilot extensions.

Cursor requires a separate subscription, but it offers a similar variety of AI models.

We tried various combinations of editors and AI models and came to the following conclusions.

Visual Studio: immature AI tooling

Visual Studio's agent mode is very new and can be buggy. Claude Sonnet 3.7 always timed out and did not respond with any suggestions or edits. GPT-4.1 worked, but the suggested code changes were completely irrelevant, likely because the extension did not provide the AI model with enough helpful context from the code base.

Visual Studio has traditionally been slower at adopting new technology and improving extensions, but it also provides a more mature classic development feature set. This trend appears to continue with its AI-based development features.

VS Code: promising but not refined

VS Code's Copilot extension has Edit and Agent modes for AI-driven code changes, and we tried both. The best-performing model through Copilot was Claude Sonnet 3.7; however, it sometimes performed extra refactoring we didn't request. It was a little over-eager.

The model required at least three prompts to change one method, which meant we spent more time providing context and benefited less from using AI.

It was clear that Copilot was not able to autonomously gather the required context; it only reviewed files a developer specifically provided to it.

Overall, given our expectations, using Copilot was functional but quite time-consuming, not only because of the additional prompting required but also because, at the time of testing, it could take a few minutes for Copilot to process all context and perform file edits.

Cursor: seamless AI experience

Here's the TL;DR: of the tools we tried, Cursor is the best IDE for autonomous AI-driven code refactoring.

In Cursor, Agent mode is the default and the only option we tried. The Cursor team describes Agent mode in their documentation:

Agent is the default and most autonomous mode in Cursor, designed to handle complex coding tasks with minimal guidance.

The phrase "minimal guidance" lines up with our experience. Cursor was able to collect all the context it needed by itself; we didn't need to manually add any. And compared to Copilot in VS Code, Cursor is fast, both in analysis and in edits.

When it comes to model selection, our experience was similar to VS Code: Claude Sonnet 3.7 had a tendency to make unrequested code changes. So we instead used o4-mini, which isn't as fast but produced the best results.

Execution: challenges and discoveries

We identified some challenges and have some advice for other developers who are adding AI development tools to their workflows.

Challenges

  1. The agent would often not realize that much of the request context is already handled by IContentRetriever and would pass unnecessary parameters like language or channel name.

    • We resolved this by adding a sentence about it to the first few prompts.
  2. The agent would default to using more flexible methods like RetrievePages<T>() instead of use-case-specific methods like RetrievePagesByGuids and RetrieveContentByGuids.

    • We resolved this by reviewing the generated code and asking the agent to use these convenience methods wherever it had generated code using GUIDs in a .WhereIn() query method (see the sketch after this list).
  3. Occasionally the agent used a flexible method like RetrievePages<T>() when RetrieveCurrentPage<T>() was a better option.

    • We resolved this by asking the agent to use .RetrieveCurrentPage<T>() when it used a Web Page Item ID in the .WhereEquals() query method.
  4. The agent would search the web for context on Kentico's APIs instead of using the official docs content that was provided to it.

    • We couldn't keep the agent from searching the web, but maybe Cursor editor features like @Docs and @Web will improve over time.
  5. Part of the refactoring involved writing the IContentRetriever code using the parameters that were passed to repository methods, which used ContentItemQueryBuilder internally. From time to time, the agent forgot some parameterization from the repositories.

    • Here, we had to validate the resulting code to see if it was correct, and also run the app and check the website for errors.
  6. In the generated IContentRetriever code, the cache name suffixes did not always make sense and could collide with other cache entries.

    • We used a follow-up prompt to ask the agent to adjust all API references and resolve the issues we identified.

      Great, now lets go once again through all usages of ContentRetriever that create new RetrievalCacheSettings with name suffix. Make sure that all of them reflect the lambda method in additionalQueryConfiguration and no other parameters. If not, change it.
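To make challenges 2 and 3 concrete, here's the pattern we watched for and the replacement we asked for. Only the method names come from the API as we used it; the overload shapes and column names are assumptions for illustration.

```csharp
// What the agent tended to generate: general-purpose retrieval with the
// GUID filter expressed in the query configuration (shape assumed here).
var related = await contentRetriever.RetrievePages<ArticlePage>(
    additionalQueryConfiguration: query =>
        query.Where(where => where.WhereIn("WebPageItemGUID", articleGuids)));

// What we asked for instead: the purpose-built convenience method.
var relatedByGuids = await contentRetriever.RetrievePagesByGuids<ArticlePage>(articleGuids);

// Likewise, a query filtering on the current Web Page Item ID...
var current = await contentRetriever.RetrievePages<ArticlePage>(
    additionalQueryConfiguration: query =>
        query.Where(where => where.WhereEquals("WebPageItemID", webPageItemId)));

// ...should become the dedicated current-page method.
var currentPage = await contentRetriever.RetrieveCurrentPage<ArticlePage>();
```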

Suggestions

  1. Start small with one file or method to refine prompts.

    • Refining prompts on a small, representative change is much cheaper than untangling a large refactor that went in the wrong direction.
  2. When running the agent on multiple files, make sure one is already converted, as the agent checks it often.

    • Successfully migrated code serves as high-quality context for the model's planned changes.
  3. If the agent has problems getting started, change one file manually; Cursor will then likely work based on this context.

    • Sometimes the agent needs a kick start to get going when there isn't enough context available to achieve the change you're looking for.
  4. After major refactorings, check the build and possibly also the live site. Cursor sometimes forgets some references (mainly when deleting files).

    • This is an area where end-to-end tests can help you validate changes and use agents with more confidence.
  5. Create a new chat when doing something a bit different so you don't confuse Cursor.

    • The chat history becomes additional context for the agent. One principle of AI-assisted development is not only creating context but also managing it.
  6. The goal is to describe your assumptions, desired outcome, and potential problems to look out for as context for the agent. If something didn't turn out how you wanted, prioritize adjusting your prompt over fixing the issue yourself.

Prompts

Below are some of the prompts we used with Cursor to guide the refactoring process.

We began with a single file and spelled out the goal and tactics in detail.

Look at the docs about Content Retriever API and its reference. Based on it, refactor DancingGoatHomeController to not use factory classes but rather use ContentRetriever. Keep in mind that ContentRetriever has knowledge about language from website context. Always use the overload of method with least amount of parameters that will still offer the needed result. Make sure to use the same parametrization that was present there (if not added by default in ContentRetriever). When required to use RetrievalCacheSettings, pass there ctor with only name suffix that should reflect the additional configuration
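The "least amount of parameters" instruction exists because of challenge 1 above: IContentRetriever already resolves language and channel from the website context. As a sketch of the distinction we had in mind (the parameter object and its property names here are assumptions, and Cafe is one of Dancing Goat's content types):

```csharp
// Redundant: restating context the retriever already resolves.
// (RetrievePagesParameters and these property values are assumed names here.)
var cafes = await contentRetriever.RetrievePages<Cafe>(
    new RetrievePagesParameters
    {
        LanguageName = "en",
        ChannelName = "DancingGoatPages"
    });

// Preferred: the minimal overload, which infers language and channel
// from the current website context.
var cafesFromContext = await contentRetriever.RetrievePages<Cafe>();
```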

Once the agent completed this work, we asked it to repeat the process with another file.

Can you now refactor article controller

This was repeated several times, and occasionally we would re-word the prompt to keep the goal in focus.

Refactor DancingGoatProductCategoryController to not use any of the repository classes

The agent would pause when there wasn't enough context and ask for assistance, like when it didn't know how to transform some of the more advanced content queries. So, we gave it that context based on our knowledge of Dancing Goat and the new IContentRetriever API.

You can use methods Retrieve...OfReusableSchemas and Retrieve...OfContentTypes
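(The "..." in those method names stands for Pages or Content, depending on what is retrieved.) As a hedged sketch of how such calls might look; the overload shapes and model types here are assumptions, following the constant-name conventions of Xperience by Kentico's generated classes:

```csharp
// Pages of several content types projected onto a shared model type
// (assumed overload shape; type names are illustrative).
var products = await contentRetriever.RetrievePagesOfContentTypes<IProductFields>(
    new[] { Coffee.CONTENT_TYPE_NAME, Grinder.CONTENT_TYPE_NAME });

// Content items that implement a reusable field schema
// (assumed overload shape; schema name constant is illustrative).
var schemaItems = await contentRetriever.RetrieveContentOfReusableSchemas<IProductFields>(
    new[] { IProductFields.REUSABLE_FIELD_SCHEMA_NAME });
```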

You can also write prompts in a way that helps the agent create a plan of execution.

Can you list me all classes names ...Repository

Run through all of those and list all usages of them in other files

Go through all the usages you found now and replace them with ContentRetriever API

After everything was updated and compiling, we reviewed the code changes and requested more specific adjustments.

Go through all the usages of ContentRetriever and if there is Where condition based on GUID(s), replace that usage with methods Retrieve...ByGuids. OfReusableSchemas and OfContentTypes keep as they are

Finally, we focused on the remaining issue with cache item names and requested an update for that specific problem.

Great, now lets go once again through all usages of ContentRetriever that create new RetrievalCacheSettings with name suffix. Make sure that all of them reflect the lambda method in additionalQueryConfiguration and no other parameters. If not, change it
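For clarity, this is the kind of before/after that prompt drives. The additionalQueryConfiguration name comes from the API as we used it; the cacheSettings parameter name and TopN configuration are assumptions for illustration.

```csharp
// Before: a generic suffix that another, differently configured query
// could also use, risking a cache collision.
var articles = await contentRetriever.RetrievePages<ArticlePage>(
    additionalQueryConfiguration: query => query.TopN(5),
    cacheSettings: new RetrievalCacheSettings("articles"));

// After: the suffix encodes exactly what the lambda does, so each
// distinct configuration gets its own cache entry.
var topArticles = await contentRetriever.RetrievePages<ArticlePage>(
    additionalQueryConfiguration: query => query.TopN(5),
    cacheSettings: new RetrievalCacheSettings("TopN_5"));
```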

Wrap up

Kentico's development team has been using AI tools in their development workflow for some time now, but this update to Dancing Goat was the first large AI-driven change we made in public-facing code. This update had three goals.

  1. Demonstrate the Content Retriever API in practice.
  2. Reduce developer time spent on changes.
  3. Share our AI-assisted development learnings with the community.

We'd love to hear about your experiences using AI agents and models when developing with Xperience by Kentico, so share your thoughts in the discussion for this post.