Over-normalizing data creates slow queries.
Excessive table splitting forces complex joins that cripple read performance.
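The join cost of over-splitting can be seen in a minimal sketch using Python's sqlite3 with a hypothetical schema: here even a customer's city sits three joins away from an order.

```python
import sqlite3

# Hypothetical over-normalized schema: the city name lives
# three tables away from the order that needs it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cities    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE addresses (id INTEGER PRIMARY KEY,
                            city_id INTEGER REFERENCES cities(id));
    CREATE TABLE customers (id INTEGER PRIMARY KEY,
                            address_id INTEGER REFERENCES addresses(id));
    CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                            customer_id INTEGER REFERENCES customers(id));
    INSERT INTO cities    VALUES (1, 'Lyon');
    INSERT INTO addresses VALUES (1, 1);
    INSERT INTO customers VALUES (1, 1);
    INSERT INTO orders    VALUES (1, 1);
""")

# Answering "which city was order 1 shipped to?" costs three joins.
row = conn.execute("""
    SELECT ci.name
    FROM orders o
    JOIN customers cu ON cu.id = o.customer_id
    JOIN addresses a  ON a.id  = cu.address_id
    JOIN cities ci    ON ci.id = a.city_id
    WHERE o.id = 1
""").fetchone()
print(row[0])  # Lyon
```

Every extra table on a hot read path adds a join the query planner must pay for.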
Conversely, reckless denormalization causes update nightmares.
Redundant fields demand widespread updates, inviting data inconsistency.
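The update anomaly can be sketched the same way, again with hypothetical tables: a customer name copied into every order row saves a join on reads, but a rename must now touch both tables, and any missed row silently disagrees with the source.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical denormalized schema: customer_name is duplicated
# into orders to avoid a join at read time.
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                            customer_id INTEGER,
                            customer_name TEXT);  -- redundant copy
    INSERT INTO customers VALUES (1, 'Acme Ltd');
    INSERT INTO orders VALUES (100, 1, 'Acme Ltd'),
                              (101, 1, 'Acme Ltd');
""")

# Rename the customer in the core table only; the matching
# UPDATE on orders is "forgotten".
conn.execute("UPDATE customers SET name = 'Acme GmbH' WHERE id = 1")

# Count order rows whose copy no longer matches the source.
stale = conn.execute("""
    SELECT COUNT(*) FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.customer_name <> c.name
""").fetchone()[0]
print(stale)  # 2 rows now hold the old name
```

Without a transaction or trigger covering every copy, the redundancy becomes inconsistency.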
Balance depends on your workload:
Transactional systems need normalization for write integrity.
Analytical systems benefit from denormalization for read speed.
Practical solutions exist:
Add targeted redundancy only for fields on hot read paths.
Keep normalized core tables as the source of truth, refreshing denormalized read copies from them.
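The second pattern can be sketched with hypothetical tables: a normalized order_items table remains the source of truth, while a flat order_totals copy is rebuilt from it so reads skip both joins and aggregation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical setup: order_items is the normalized core;
# order_totals is a denormalized read copy derived from it.
conn.executescript("""
    CREATE TABLE order_items  (order_id INTEGER, amount REAL);
    CREATE TABLE order_totals (order_id INTEGER PRIMARY KEY,
                               total REAL);
    INSERT INTO order_items VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

def refresh_totals(conn):
    """Rebuild the denormalized copy from the normalized core."""
    conn.executescript("""
        DELETE FROM order_totals;
        INSERT INTO order_totals
        SELECT order_id, SUM(amount)
        FROM order_items GROUP BY order_id;
    """)

refresh_totals(conn)
# Reads hit the flat copy: no joins, no aggregation at query time.
total = conn.execute(
    "SELECT total FROM order_totals WHERE order_id = 1"
).fetchone()[0]
print(total)  # 15.0
```

Writes go only to the core; the copy is disposable and can always be rebuilt, so inconsistency is bounded by the refresh interval.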
Remember the rule:
Normalize where writes dominate; denormalize where reads matter most.
Blind adherence to either extreme sacrifices performance or accuracy.