Core Web Vitals explained: how to actually improve them

Most of the content written about Core Web Vitals is aimed at developers. It explains the metrics in technical terms, lists optimization techniques that require engineering resources, and assumes the reader is comfortable reading a Lighthouse report.

This post is for everyone else — the marketer who got flagged in Search Console, the ecommerce operator who knows their site is slow but doesn't know what to fix, and anyone who has Googled "what is LCP" and come away more confused than when they started.

We'll cover what each metric measures, what failing them actually costs you in rankings and revenue, and how to connect those scores to real visitor behavior using Lucky Orange's Discovery and your existing heatmap and session recording data.

What Core Web Vitals are and why Google made them a ranking signal

Core Web Vitals are three specific metrics Google uses to measure how real users experience a page — not how fast a page loads in a lab, but how it feels to an actual person on an actual device with an actual internet connection.

Google began using them as a ranking signal in 2021. The reasoning is straightforward: Google's job is to send people to pages that serve them well. A page that loads slowly, responds sluggishly to clicks, or jumps around during load is a bad experience regardless of how good the content is. Core Web Vitals give Google a standardized way to measure that experience and factor it into rankings.

For competitive ecommerce keywords — where dozens of stores sell similar products — Core Web Vitals can serve as the tiebreaker between otherwise equally relevant results, meaning the difference between page one and page two.

Sites that meet all three Core Web Vitals thresholds see 24% less page abandonment, according to Google research. That's not a developer metric. That's a revenue metric.

The three metrics, in plain language

LCP — Largest Contentful Paint

What it measures: How long until the largest visible element on your page loads — usually a hero image, a banner, or a large block of text. This is the moment the page stops looking blank and starts looking like something.

Why it matters: LCP is the metric visitors feel most viscerally. A slow LCP means your visitor is staring at a partially loaded page — a white space where your hero should be, a layout without images — for longer than they'll tolerate. Every 100 milliseconds of delay in LCP correlates with a 1.11% decrease in session-based conversion rates, according to a Deloitte study for eBay.

The threshold: Under 2.5 seconds is good. 2.5–4 seconds needs improvement. Over 4 seconds is poor.

What typically causes a failing LCP: Unoptimized hero images are the most common culprit — a 3MB image where a 200KB one would do. After that: slow server response times, render-blocking JavaScript that delays the entire page, and fonts that take time to load and push a text-based LCP element off-screen.

INP — Interaction to Next Paint

What it measures: How quickly your page responds when a user does something — clicks a button, opens a menu, taps a product option. INP measures the delay between the action and the visible response.

Why it matters: A slow INP means your page looks loaded but doesn't work. The visitor clicks "Add to Cart" and nothing happens for 400 milliseconds — long enough to click again, long enough to wonder if the click registered, long enough to leave. This is one of the most invisible sources of friction on ecommerce sites because it doesn't show up in standard page speed scores — you have to be looking at interaction data to catch it.

INP replaced the older FID (First Input Delay) metric in March 2024. FID only measured the delay on the first interaction; INP measures responsiveness throughout the entire page visit, which makes it a significantly more meaningful signal.

The threshold: Under 200 milliseconds is good. 200–500 milliseconds needs improvement. Over 500 milliseconds is poor.

What typically causes a failing INP: Heavy JavaScript running on the main thread is the primary cause. When your browser is busy executing JavaScript, it can't respond to user input — so the interaction queues up and waits. Third-party scripts (chat widgets, analytics, ad pixels) are frequent offenders because they're not always optimized for performance.

CLS — Cumulative Layout Shift

What it measures: How much the visible content of your page moves around during load. Every time an element shifts position — an image pushing text down, a banner popping in above the fold, content jumping as a font loads — CLS accumulates.

Why it matters: Layout shift causes mis-clicks. A visitor goes to click your CTA button, the page shifts at the last moment, and they click something else — or click nothing at all. On mobile, where tap targets are smaller and shifts are more disorienting, this is a significant conversion killer. CLS is also one of the most frustrating experiences from a user perspective because it feels like the site is broken.

The threshold: Under 0.1 is good. 0.1–0.25 needs improvement. Over 0.25 is poor.

What typically causes a failing CLS: Images without defined dimensions (the browser doesn't know how much space to reserve, so it reflows when the image loads). Late-loading ads or embeds that push content down. Web fonts that swap in after the page renders, shifting text.
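The three "good / needs improvement / poor" ranges above can be collapsed into one small helper, which is handy for flagging pages in your own reporting. This is a sketch: the threshold values come from the ranges quoted above, and the function and object names are illustrative.

```javascript
// Threshold values from the ranges above (LCP and INP in
// milliseconds, CLS unitless). Names here are illustrative.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200, poor: 500 },
  CLS: { good: 0.1, poor: 0.25 },
};

// Classify a raw metric value the way PageSpeed Insights labels it.
function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (!t) throw new Error(`Unknown metric: ${metric}`);
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}
```

For example, `rate("INP", 350)` lands in the "needs improvement" band, while `rate("CLS", 0.3)` is "poor".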

How Core Web Vitals affect your SEO rankings

Core Web Vitals are a confirmed ranking signal — but their weight needs to be understood correctly. Content relevance is still the number one factor when Google builds a SERP. For many queries, though, there is plenty of helpful, relevant content available. In those cases, page experience becomes the differentiator that decides who ranks ahead.

In practice: if your content is significantly worse than the competition, fixing your Core Web Vitals won't save your rankings. But if you're competing against sites with similar content quality and authority — which is most ecommerce and SaaS competition — failing CWV when competitors pass it is a meaningful ranking disadvantage.

Pages ranking at position 1 are 10% more likely to pass Core Web Vitals thresholds compared to URLs at position 9. The correlation between passing CWV and ranking well has strengthened as Google continues refining its page experience signals.

Where to check your Core Web Vitals

Google Search Console — Page Experience report
The most important place to check. Search Console shows your CWV scores based on real Chrome user data from the past 28 days — not simulated lab conditions. It segments by desktop and mobile, flags URLs that are failing or need improvement, and lets you filter by property section. This is ground truth.

PageSpeed Insights (pagespeed.web.dev)
Run any URL through PageSpeed Insights to see both field data (real user data, same source as Search Console) and lab data (simulated, useful for debugging). The field data is what matters for rankings. The lab data tells you what to fix.
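If you want to pull that field data programmatically, PageSpeed Insights also exposes a free JSON API. A minimal sketch of building the request and reading the real-user LCP out of the response, assuming the v5 endpoint and its documented response shape (the helper names and example URL are my own):

```javascript
// Build a PageSpeed Insights v5 API request URL for a page.
// The endpoint is Google's real, free API; no key is required
// for occasional use.
function psiRequestUrl(pageUrl, strategy = "mobile") {
  const endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";
  const params = new URLSearchParams({ url: pageUrl, strategy });
  return `${endpoint}?${params}`;
}

// loadingExperience carries the real-user (CrUX) field data;
// the percentile is the value Google compares to the thresholds.
function fieldLcpMs(psiResponse) {
  return (
    psiResponse?.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS
      ?.percentile ?? null
  );
}
```

Fetching the URL returned by `psiRequestUrl` gives you both the lab (`lighthouseResult`) and field (`loadingExperience`) sections in one response.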

Chrome DevTools
For developers who need to go deeper. The Performance panel lets you record a page load and see exactly where time is being spent — which scripts are blocking, where the LCP element is being delayed, what's causing layout shifts.

How to improve each metric — prioritized by impact

Fix LCP first

Image optimization delivers the biggest LCP improvement for the least engineering effort. Convert hero images and large above-the-fold images to WebP format, compress them without visible quality loss, and define explicit image dimensions so the browser knows how much space to reserve. Add fetchpriority="high" to your LCP image element so the browser loads it before other resources.
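Put together, a well-behaved LCP hero image looks something like this sketch (the file path, dimensions, and alt text are placeholders):

```html
<!-- Hero image served as WebP, with explicit dimensions so the
     browser reserves its space before the file arrives, and
     fetchpriority="high" so it loads ahead of lower-priority
     resources. Path and sizes are placeholders. -->
<img
  src="/images/hero.webp"
  width="1200"
  height="600"
  fetchpriority="high"
  alt="Summer collection hero banner"
/>
```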

If your server response time (TTFB — Time to First Byte) is over 600ms, address that before anything else. No amount of image optimization compensates for a slow server. A CDN moves your content closer to your visitors geographically and is one of the highest-ROI infrastructure investments for a site with broad geographic traffic.

Fix CLS second

Define explicit width and height attributes on every image and video element. This alone resolves the majority of CLS issues on most sites. For web fonts, use font-display: swap and preload your key fonts so they're available before the page renders. For ads and embeds, reserve space with placeholder elements that match the expected dimensions.
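As a sketch, the three fixes above look like this in markup (the image path, font file, and ad-slot class name are placeholders):

```html
<!-- 1. Explicit dimensions: the browser reserves 800x800 before
       the image arrives, so nothing below it shifts. -->
<img src="/products/widget.jpg" width="800" height="800" alt="Product photo">

<!-- 2. Preload the key font and swap it in without hiding text. -->
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: "Brand";
    src: url("/fonts/brand.woff2") format("woff2");
    font-display: swap; /* show fallback text immediately, swap when loaded */
  }

  /* 3. Reserve space for a late-loading ad or embed. */
  .ad-slot {
    min-height: 250px; /* match the expected creative height */
  }
</style>
```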

Fix INP third

INP requires more developer involvement than the other two metrics, but the diagnostic is usually clear. Use the Performance panel in Chrome DevTools to identify long tasks — JavaScript operations over 50ms that block the main thread. Deferring or removing non-critical third-party scripts is often the quickest path to improvement. Every tracking pixel, chat widget, and embedded element adds to the main thread load — audit what you're running and remove anything that isn't earning its weight.
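The standard pattern for shortening those long tasks is to split the work into chunks and yield back to the main thread between them, so queued input can be handled. A minimal sketch using setTimeout as the yield point (the helper names and chunk size are illustrative):

```javascript
// Yield control so the browser can process any queued user input.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list in small chunks instead of one long task.
// Tune CHUNK so each batch stays comfortably under ~50ms of work.
async function processItems(items, handleItem) {
  const CHUNK = 50;
  for (let i = 0; i < items.length; i += CHUNK) {
    for (const item of items.slice(i, i + CHUNK)) {
      handleItem(item);
    }
    if (i + CHUNK < items.length) {
      await yieldToMain(); // input events can run here between chunks
    }
  }
}
```

In newer Chromium browsers, `scheduler.yield()` can replace the setTimeout trick where it's available; the chunking structure stays the same.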

Discovery by Lucky Orange

Connect your Core Web Vitals scores to what visitors are actually experiencing

A CWV score tells you a metric failed. What it doesn't tell you is what that failure looks like from your visitor's perspective — where they're abandoning, what they're clicking prematurely, how far they scroll on a page that's still loading.

Discovery AI can run a page speed audit on any page you point it at, returning a plain-language breakdown of what's failing and what to fix first. But the real value comes when you pair that with your Lucky Orange behavioral data.

A failing LCP shows up in your session recordings as visitors who leave before your hero section finishes rendering — you can watch it happen. A high CLS shows up in your click heatmaps as mis-clicks clustered around elements that shift during load. A slow INP shows up as double-clicks and rage clicks on buttons that didn't respond fast enough. These aren't abstract metrics — they're patterns in your existing data that tell you exactly which CWV failure is hurting you most.

Open Discovery, click the "Page speed audit" prompt, specify the page, and then cross-reference what it finds against your heatmap and session recording data for the same page. You'll have both the technical diagnosis and the behavioral evidence in the same workflow — without switching tools or waiting for a developer to interpret the Lighthouse report.

The fix order that matters most

When everything is failing, the prioritization question is: which metric is actually costing you the most right now?

The answer is in your behavioral data, not your CWV scores. A site with a failing LCP on both its homepage and its product pages should fix the product pages first if that's where the conversions happen. A site with a poor CLS on mobile should check whether most of its traffic is mobile before deciding how urgent the fix is.

CWV scores give you the what. Heatmaps and session recordings give you the where and the how much. Fix in the order that your visitor behavior — not your Lighthouse score — tells you to.

Frequently asked questions

Do Core Web Vitals directly affect Google rankings?

Yes, they're a confirmed ranking signal — but they function primarily as a tiebreaker in competitive SERPs rather than a dominant factor. Content quality and authority still outweigh page experience for most queries. Where CWV matters most is in competitive niches where multiple pages have similar relevance and authority, and technical performance becomes the deciding factor.

What's the difference between lab data and field data in PageSpeed Insights?

Lab data is a simulated page load in controlled conditions — useful for identifying and debugging specific issues. Field data is aggregated from real Chrome users who loaded your page over the past 28 days and is what Google uses for ranking. When they diverge, trust the field data for understanding real-world impact and the lab data for diagnosing what to fix.

Which Core Web Vital should I fix first?

LCP first, in most cases — it has the most direct impact on perceived load speed and conversion rates, and the fixes (image optimization, server response time) tend to have the highest return for the effort. CLS second, because layout shift is directly responsible for mis-clicks and lost conversions, especially on mobile. INP third, as it typically requires more developer involvement but is critical for interactive pages like product configurators or checkout flows.

How do I check my Core Web Vitals for free?

Google Search Console's Page Experience report shows your real-user CWV data across your entire site. PageSpeed Insights (pagespeed.web.dev) shows both real-user and lab data for individual URLs. Both are free. Discovery in Lucky Orange can also run a page speed audit on any page and translate the findings into plain-language recommendations.

How long does it take to improve Core Web Vitals scores?

Field data in Search Console updates based on a 28-day rolling window of real Chrome user data. That means after you make improvements, it takes up to 28 days to see the full effect in your field data scores — even if lab data improves immediately. Don't judge fixes by lab scores alone; wait for the field data to catch up before declaring a metric fixed.

Why do my Core Web Vitals scores differ between desktop and mobile?

Mobile devices have less processing power, slower network connections, and smaller screens that change how layout shifts register. Mobile CWV scores are almost always worse than desktop. Since Google uses mobile-first indexing, your mobile scores are the ones that matter most for rankings.

Your scores are the beginning of the investigation, not the end

A failing CWV score is a flag, not a diagnosis. It tells you something is wrong — it doesn't tell you which visitors are being affected, how it's changing their behavior, or which fix will move your conversion rate most.

That's what your behavioral data is for. Lucky Orange's session recordings show you what visitors do on pages that are failing — where they leave, what they click prematurely, how their behavior changes compared to pages that pass. Heatmaps show you the patterns. Discovery connects the technical audit to both, in the same workflow.

Fix the scores. Watch the behavior change.

Start Your 7-Day Free Trial

Published by: Lucky Orange

May 4, 2026

Check out our podcast

Learn how to tell better stories with your data.