Day 7: The "Data = Content" Confusion

❌ The mistake

Importing all external data into the CMS as content items — even when that data isn’t content at all.

Just because something appears on a website doesn’t mean editors need versioning, workflows, or admin UI visibility for it.

⚠️ What problems this creates

Your CMS fills up with huge datasets no editor will ever touch.

Imports slow down as the system tries to update thousands of "content items" unnecessarily.

Performance tanks because workflows, versioning, and indexing all kick in… for data that doesn’t need any of it.

Developers start compensating with workarounds, and everything gets harder to maintain.

🤦‍♂️ Why teams make this mistake

Most projects already have a healthy flow of actual content coming from external systems — products, properties, schools, locations, etc. So when another dataset arrives (stockists, POIs, dealer networks, store locators, schedules), it feels natural to drop it into the same import routine.

The thinking goes:

  • “We’re already importing external items… let’s just import these too.”

But the key question never gets asked:

  • 👉 Do editors need to manage or use this?

If not, it’s data, not content.

💸 How much it costs to fix

On one project, a client was importing ~200 product pages from a PIM — totally fine.

Then someone added tens of thousands of stockists from the CRM... also as content items.

With versioning, workflows, and indexing enabled by default, each import took more than a full day to complete.

Cleaning this up took a couple of weeks — removing old structures, clearing the database, and moving the stockist dataset into plain custom-class storage.

No content repopulation needed, but the architecture cleanup was significant.
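The split described above can be sketched as a simple routing rule in an import routine. This is an illustrative Python sketch, not Kentico's API — the store names and record shapes are hypothetical:

```python
# Hypothetical sketch: route incoming datasets by the key question,
# "Do editors need to manage or use this?"

CONTENT_STORE = []   # stands in for CMS content items (versioned, workflowed)
DATA_STORE = []      # stands in for a plain database table / custom class

def import_dataset(records, editors_manage_it: bool):
    """Send editor-managed records through the content pipeline;
    everything else goes straight to plain storage."""
    target = CONTENT_STORE if editors_manage_it else DATA_STORE
    for record in records:
        if editors_manage_it:
            # Only content items pay for versioning, workflow, indexing...
            record = {**record, "version": 1, "workflow_step": "Draft"}
        target.append(record)

# ~200 products are genuine content; thousands of stockists are not.
import_dataset([{"sku": f"P{i}"} for i in range(200)], editors_manage_it=True)
import_dataset([{"store_id": i} for i in range(20_000)], editors_manage_it=False)
```

The point of the sketch: the expensive machinery only wraps records that editors will actually touch.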

📘 Best practice

Kentico’s content modeling guidance focuses primarily on storing reusable content:

Store content

However, it also includes examples of how to model and store operational data:

Model product stock

💬 Have you seen a CMS overloaded with data that didn’t need to be content at all?

Day 8: The "Upside-Down Relationships" Problem

❌ The mistake

On a recent project, we found something unusual:

Products didn’t store the list of downloadable files they used — instead, the files stored the list of products they belonged to.

So a small certification PDF “knew” every product that referenced it, instead of each product knowing its own documents.

Technically, it worked… but it felt like the relationship was upside-down.

⚠️ What problems this creates

  • Editors look in the wrong place (“Why is this on the file? Shouldn’t it be on the product?”).
  • Queries get heavier because you must reverse-navigate the relationship.
  • Performance suffers as the larger dataset grows.
  • The mental model becomes unintuitive — the wrong entity “owns” the relationship.
  • Reuse becomes harder instead of easier.
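The reverse-navigation cost in the second bullet is easy to see with toy data. This is a hypothetical Python sketch, not real content types — the names are illustrative:

```python
# Upside-down: each file "knows" the products that reference it.
files_reversed = {
    "cert.pdf":   {"products": ["chair", "table"]},
    "manual.pdf": {"products": ["table"]},
}

def documents_for_product_reversed(product_id):
    # Reverse navigation: every lookup scans the entire file dataset.
    return sorted(f for f, data in files_reversed.items()
                  if product_id in data["products"])

# Correct direction: each product owns the list of files it uses.
products = {
    "chair": {"documents": ["cert.pdf"]},
    "table": {"documents": ["cert.pdf", "manual.pdf"]},
}

def documents_for_product(product_id):
    # Direct lookup: go straight to the owning item.
    return products[product_id]["documents"]
```

Both return the same answer, but the reversed version has to scan the larger, faster-growing dataset on every call.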

🤦‍♂️ Why teams make this mistake

In this case, the relationship direction was dictated by the import order: files arrived first, products later.

So the import routine stored product IDs on the file simply because that’s the only data it had at that moment.

This is a classic trap:

  • Modeling content based on how data arrives, not how content should behave.

And yes, there are scenarios where reversed relationships make sense — but they’re the exception, not the norm.

💸 How much it costs to fix

Correcting this backwards relationship on a live site was a true nightmare!

To fix this one, we had to:

  • Negotiate a new import routine with the client and implement it
  • Introduce the new, correctly oriented relationship
  • Keep the old one running so the live website didn’t break
  • Re-import thousands of records in the background
  • Switch all logic to the new relationship
  • Finally, remove the old data and structures

It took a few weeks — not because the new model was hard, but because fixing this kind of problem is like replacing an aircraft engine while still flying.

You must keep everything running.

📘 Best practice

Kentico’s content modeling guidelines reinforce that the item that uses or owns something should reference it, not the other way around.

This keeps relationships intuitive, performant, and scalable.

💬 Have you ever encountered a site where the relationship direction felt “inside out”?

Or had to flip one on a live project? Would love to hear your stories.

Day 9: The "Field Soup" Content Model

❌ The mistake

Letting the content model grow organically — one feature at a time, one new field at a time, often by different developers — without any central ownership, naming conventions, or editor guidance.

What you get isn’t a content model.

It’s field soup.

⚠️ What problems this creates

  • Editors guessing which of five similar fields they’re supposed to use.
  • Duplicate fields that mean almost the same thing.
  • Inconsistent data across pages and channels.
  • Hard-to-maintain components that depend on historical field accidents.
  • And a long, painful cleanup later — because entropy always wins unless someone owns the model.

🤦‍♂️ Why teams make this mistake

Because skipping the upfront thinking feels agile.

A request arrives ➡️ “Add a field for this.”

Another request ➡️ “Add one more field.”

Multiple developers ➡️ multiple interpretations.

No workshops, no conventions, no documentation, no shared governance.

It feels fast — until you have 60 fields on a content type and editors become amateur archaeologists trying to guess intent from vague labels.

Real agility requires a designed foundation, not constant improvisation.

💸 How much it costs to fix

Untangling a fragmented content model is slow because the first step is reverse engineering what everything means.

On a recent project, we had fields such as Name, Display name, Title, Title override, Card title...

Not even the client knew which one powered which part of the UI.

Developers had to dig through components, templates, import routines, and query logic to map: which field is used where, by what, and why.

Editors then helped determine the correct semantic meaning, so we could consolidate fields safely.

And when fields were merged or removed, the refactored code had to copy or transform existing data to prevent loss or regressions.
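That merge-and-copy step can be sketched as a precedence-based consolidation. This is an illustrative Python sketch, not the actual refactoring code — the field names come from the example above, and the precedence order is an assumption:

```python
# Hypothetical sketch: collapse near-duplicate title fields into one,
# copying data by an agreed precedence so nothing is lost.
PRECEDENCE = ["title_override", "card_title", "title", "display_name", "name"]

def consolidate_title(item: dict) -> dict:
    """Return the item with a single 'title' populated from the first
    non-empty legacy field, with all legacy fields removed."""
    value = next((item[f] for f in PRECEDENCE if item.get(f)), None)
    cleaned = {k: v for k, v in item.items() if k not in PRECEDENCE}
    return {**cleaned, "title": value}

item = {"id": 123, "title": "Ergo Chair", "card_title": "",
        "title_override": "Ergo Chair Pro"}
consolidate_title(item)  # {'id': 123, 'title': 'Ergo Chair Pro'}
```

The hard part in practice isn’t this transform — it’s agreeing on the precedence order, which is exactly where the editors’ semantic input came in.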

None of these tasks is individually hard — but together they took more than a week, even with the help of AI.

That’s what happens when a model is left to grow unchecked over time.

📘 Best practice

Kentico’s documentation emphasizes planning and designing a content model intentionally, with shared understanding and editorial clarity from the start.

💬 Have you ever met a content type with so many fields that you weren’t sure which ones were actually used?

Or inherited a model that felt like a historical record of every feature request ever made?

Day 10: The "Forgotten Functional Pages" Oversight

❌ The mistake

Treating key functional pages — basket, checkout, order history, favourites, account pages — as “purely technical” and therefore keeping them outside the content modeling process.

Everything else on the site gets structured, modeled, templated...

But these pages?

They get hard-coded layouts with zero editor control and no Page Builder support.

⚠️ What problems this creates

  • Editors can’t adjust copy during campaigns or peak sales periods.
  • Developers become blockers for even tiny text changes.
  • Personalization becomes nearly impossible.
  • Marketing loses influence on high-value touchpoints.

🤦‍♂️ Why teams make this mistake

Because requirements rarely say: “Basket and checkout must be content modeled and editable.”

So teams default to the path of least resistance: “Just code the screens — they’re functional, right?”

And yes, the e-commerce engine drives the logic, but the content on these pages still matters.

These are some of the highest-intent pages on the entire site.

Treating them as “technical pages” is an easy oversight.

💸 How much it costs to fix

On a recent project, the client wanted to insert promotional messaging into the basket, checkout, and account pages right before a major shopping event.

They couldn’t — none of these pages were editable.

We had to:

  • Remodel these screens into proper page templates
  • Enable Page Builder
  • Preserve all existing functionality
  • Re-test every checkout flow variation

Luckily, it only took a few days, but that is not something you want to hear the week before Black Friday.

📘 Best practice

Kentico’s documentation strongly encourages enabling templates and Page Builder wherever editors need influence — including functional pages.

💬 Have you ever seen checkout pages hard-coded into a corner with no way for marketing to reach them?

Day 11: The "Free-Text Tags" Taxonomy Trap

❌ The mistake

Skipping proper taxonomy modeling and instead adding “just another text field” every time something needs to be labelled or categorized visually.

Editors type whatever they want.

The system treats it as structured data.

And suddenly your entire site is powered by creative spelling decisions.

⚠️ What problems this creates

  • Editors invent their own “categories” on the fly.
  • Filters break because nothing matches consistently.
  • Every migration turns into a data archaeology project.
  • Reporting becomes unreliable (is it “B2B”, “b2b”, or “B-to-B”?).

🤦‍♂️ Why teams make this mistake

Teams often don’t fully understand what taxonomies do, so the perceived fastest path is:

  • “Why bother modeling this — we just need a label. Add a text field.”

Add time pressure, a few sprints of shortcuts, and suddenly you’ve built a content model held together by wishful thinking and 15 spellings of “Healthcare.”

💸 How much it costs to fix

We recently migrated a site where every “tag” was a free-text field.

Roughly 8,000 pages were carrying inconsistent, duplicated, half-typed values that powered navigation and filtering.

We had to:

  • Export and analyze all text entries
  • Normalize, merge, and de-duplicate them
  • Derive a clean taxonomy hierarchy
  • Map every legacy value to a proper tag
  • Resolve everything during automated migration

Doable in a few days — but only because automation saved us. Manually, it would have been impossible.
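The normalize-and-map steps above can be sketched in a few lines. This is a hypothetical Python sketch — the alias table stands in for the mapping we derived from the exported analysis, and all values are illustrative:

```python
def normalize(raw: str) -> str:
    # Collapse case, hyphens, and stray whitespace before matching.
    return " ".join(raw.strip().lower().replace("-", " ").split())

ALIASES = {            # normalized legacy value -> canonical taxonomy tag
    "b2b": "B2B",
    "b to b": "B2B",
    "healthcare": "Healthcare",
    "health care": "Healthcare",
}

def map_legacy_tags(raw_tags):
    """Map each legacy free-text value to a proper tag; collect unknowns
    for manual review instead of guessing."""
    mapped, unknown = set(), set()
    for raw in raw_tags:
        canonical = ALIASES.get(normalize(raw))
        (mapped if canonical else unknown).add(canonical or raw)
    return sorted(mapped), sorted(unknown)

map_legacy_tags(["B2B", "b2b ", "B-to-B", "Health care", "Healtcare"])
# -> (['B2B', 'Healthcare'], ['Healtcare'])
```

The “unknown” bucket matters: anything that can’t be mapped automatically goes to a human, because silently guessing is how the mess started in the first place.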

📘 Best practice

Kentico’s taxonomy guidance is clear: model controlled vocabularies upfront and avoid free-text chaos entirely.

💬 Have you ever seen a “tagging strategy” made of 12 text fields and endless typos?

Day 12: The "Atomize Everything" Obsession

❌ The mistake

Taking the atomic content model and applying it everywhere — even to one-off, highly specific components like a single infographic, fancy table, or timeline whose pieces will never be reused.

Every caption, axis label, and note becomes its own micro-item...

And editors have to stitch it all back together every time.

⚠️ What problems this creates

  • Editing one chart feels like assembling IKEA furniture blindfolded.
  • A simple annual update turns into 20+ tiny content edits.
  • Editors lose the “shape” of the content because it’s scattered across micro-items.
  • Devs spend time modeling reusability that never actually happens.
  • Training new editors becomes a guided tour of “here’s why this is so much work, sorry.”
  • Everyone starts quietly avoiding those components.

🤦‍♂️ Why teams make this mistake

Developers love structure — and atomic modeling is a beautiful idea when you first discover it.

So the thinking goes:

  • “Look, we can split everything into reusable atoms!”

And yes, that’s brilliant for truly reusable micro-content (features, benefits, CTAs).

But many components — like a one-off infographic, a bespoke timeline, or our famous “Cartiglio” from Day 2 — are effectively single-use.

The content can be split into atoms, but it will never be reused outside that one context.
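The difference shows up directly in the number of edits an annual update costs. A toy Python sketch, with purely illustrative item names — neither structure is a real content type:

```python
# Atomized: every caption and label of a one-off infographic is its own
# micro-item, so a simple yearly refresh touches each of them.
atoms = {
    "axis_label_1": {"text": "Year"},
    "axis_label_2": {"text": "Revenue"},
    "caption_1":    {"text": "Annual revenue, 2015-2024"},
    "note_1":       {"text": "Figures in EUR millions"},
}
edits_atomized = len(atoms)

# One coherent block: the editor updates a single structured item
# and keeps the "shape" of the content in view.
infographic = {
    "axes": ["Year", "Revenue"],
    "caption": "Annual revenue, 2015-2024",
    "note": "Figures in EUR millions",
}
edits_coherent = 1
```

Same content, same rendering — but one model matches how the content is actually maintained.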

💸 How much it costs to fix

We’ve seen components like charts or decorative tables used only once or twice on a site, updated maybe once a year.

Someone enthusiastically atomized everything — axis notes, legend items, captions — into separate reusable types.

Refactoring it back to something sane means:

  • Remodeling the component to store its content as one coherent block
  • Repopulating content
  • Cleaning up now-redundant structures

Often, the price tag just doesn’t justify the benefit... so historically, we let editors suffer a bit.

📘 Best practice

Kentico’s atomic modeling docs are very explicit: avoid going excessively granular when it doesn’t pay off.

💬 Have you ever seen a beautifully over-engineered atomic model built for something used twice a year?