A redesign that doesn't lose the search visibility you already have. Every URL on the legacy site gets mapped (kept, redirected, or retired), every schema node that was carrying weight is reproduced or replaced, every AI crawler that was allowed before stays allowed after, and the cutover happens without a downtime window. The before-and-after audit numbers ship with the engagement so the redesign's effect on visibility is measured, not assumed.
301 redirect map deployed at the edge (Vercel redirects, Cloudflare rules)
Schema parity audit: legacy nodes reproduced or improved on the new build
llms.txt and robots.txt parity with the legacy site (no new lockouts)
Sitemap submitted to Google Search Console and Bing Webmaster Tools
IndexNow ping on cutover for fast reindex
Pre-cutover Pulse baseline + post-cutover audit at week 6
Zero-downtime cutover plan (DNS TTL prep, rollback path, monitoring)
Operator runbook covering rollback, redirect updates, and broken-link triage
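Before the redirect map ships to the edge, it gets a sanity pass: every legacy URL needs a disposition, and no redirect should point at another redirect. A minimal sketch of that check, assuming an illustrative CSV format (the column names and keep/redirect/retire labels here are placeholders, not the engagement's actual file layout):

```python
import csv
import io

def audit_redirect_map(csv_text):
    """Return (gaps, chains): legacy URLs with no disposition, and
    redirect sources whose target is itself another redirect."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    targets = {r["legacy_url"]: r.get("target", "") for r in rows
               if r["disposition"] == "redirect"}
    gaps = [r["legacy_url"] for r in rows
            if r["disposition"] not in ("keep", "redirect", "retire")]
    chains = [src for src, dst in targets.items() if dst in targets]
    return gaps, chains

sample = """legacy_url,disposition,target
/old-pricing,redirect,/pricing
/pricing,keep,
/blog/2019-post,redirect,/old-pricing
/forgotten-page,,
"""
gaps, chains = audit_redirect_map(sample)
print(gaps)    # → ['/forgotten-page']  (still unmapped)
print(chains)  # → ['/blog/2019-post']  (redirect chain, should 301 direct)
```

Chains matter because each extra hop slows the crawl and risks dropped signals; the fix is to point the source straight at the final URL.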
Foundations
Every website redesign engagement inherits the four UX Studio foundations.
Schema graph wired at every URL. Core Web Vitals budget agreed at scope. Crawler-access policy across 18 named AI crawlers. Schema-per-page rather than templated copies. The full foundations grid lives on the UX Studio overview.
Three layers. URL preservation: every legacy URL that earned traffic stays at the same address or 301s to its closest equivalent; nothing 404s. Schema parity: nodes carrying weight (Product, Article, FAQPage, BreadcrumbList, Review) are reproduced on the new build before cutover. Crawler-access parity: the AI-crawler allowlist matches the legacy site, so engines that were citing you continue to do so. The before-and-after audit ships with the engagement so the rankings effect is measurable.
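The schema-parity layer reduces to a diff: collect the @type values in each page's JSON-LD graph and flag legacy node types missing from the staged build. A rough sketch, assuming the JSON-LD has already been extracted from both pages (real documents nest @graph and arrays, which this handles):

```python
def node_types(jsonld):
    """Flatten a JSON-LD document (dict or list, with optional @graph)
    into the set of @type strings it declares."""
    types = set()
    stack = [jsonld]
    while stack:
        node = stack.pop()
        if isinstance(node, list):
            stack.extend(node)
        elif isinstance(node, dict):
            t = node.get("@type")
            if isinstance(t, str):
                types.add(t)
            elif isinstance(t, list):
                types.update(t)
            stack.extend(node.values())
    return types

def parity_gap(legacy, staged):
    """Node types the legacy page carried that the new build lacks."""
    return node_types(legacy) - node_types(staged)

legacy = {"@graph": [{"@type": "Product"}, {"@type": "FAQPage"},
                     {"@type": "BreadcrumbList"}]}
staged = [{"@type": "Product"}, {"@type": "BreadcrumbList"}]
print(parity_gap(legacy, staged))  # → {'FAQPage'}
```

A non-empty gap on any URL blocks cutover until the node is reproduced or a documented replacement is agreed.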
Yes. Most redesigns happen in parallel: the legacy site continues to publish content, the new build is staged at a preview URL, content from the legacy CMS is exported and re-imported close to cutover. The cutover itself is a DNS flip plus a redirect deploy, typically 30 minutes elapsed. Brief content freezes (12 to 24 hours) are common to ensure exports stay accurate, but a multi-week freeze is not necessary.
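The reindex ping on cutover day follows the public IndexNow protocol: a single JSON POST listing the changed URLs. A hedged sketch — the host, key, and URL list below are placeholders, and the send itself is a plain urllib POST you would fire once DNS has flipped:

```python
import json
import urllib.request

def indexnow_payload(host, key, urls):
    """Build the IndexNow request body (host, key, keyLocation, urlList)."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

def ping(payload, endpoint="https://api.indexnow.org/indexnow"):
    """POST the payload; a 200 or 202 response means it was accepted."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    return urllib.request.urlopen(req)

payload = indexnow_payload("example.com", "abc123",
                           ["https://example.com/",
                            "https://example.com/pricing"])
print(json.dumps(payload, indent=2))
```

The key file at keyLocation must be live on the new host before the ping, or participating engines will reject the submission.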
It happens, and it's usually one of three causes: a missed redirect (the URL inventory was incomplete), a schema regression (a legacy node was carrying citation weight nobody had documented), or a crawler lockout (the new robots.txt was tighter than the old one). The week-6 audit catches all three; the runbook covers how to fix them. Fewer than 5% of well-scoped redesigns lose net traffic, and the few that do are usually back at parity within 8 weeks of cutover.
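The crawler-lockout check in that audit is mechanical: parse the legacy and new robots.txt and flag any crawler that could fetch before but is disallowed after. A small sketch using the standard-library robots parser — the three agent names are examples, where the engagement's list covers 18 named AI crawlers:

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_txt, agent, url="https://example.com/"):
    """Whether `agent` may fetch `url` under this robots.txt."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

def new_lockouts(legacy_txt, new_txt, agents):
    """Agents allowed on the legacy site but blocked on the new one."""
    return [a for a in agents
            if allowed(legacy_txt, a) and not allowed(new_txt, a)]

legacy = """User-agent: *
Allow: /
"""
new = """User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
agents = ["GPTBot", "ClaudeBot", "PerplexityBot"]
print(new_lockouts(legacy, new, agents))  # → ['GPTBot']
```

Run per-crawler against the homepage plus a sample of money pages; any non-empty result is a regression against the parity deliverable above.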
Work with valUX
Start where it hurts.
If your organic traffic is sliding, start with a Pulse audit. If you want a programme rather than a one-off, ask about a retainer. Either way, every enquiry is read by a senior architect, and you hear back within one working day.