How to Test Your Own Website Speed in 2 Minutes

Most home-services owners I talk to have been told their site is slow at least once. Sometimes by a marketing person, sometimes by a relative, sometimes by a competitor pitching a rebuild. The next question is always the same: "How do I actually check?" The honest answer is that it takes about two minutes and one free tool, and most of what people obsess over inside that tool is noise.
This post is the short version. Open Google's PageSpeed Insights, paste your URL, look at three numbers, ignore the rest. Then I'll tell you when to bring in a second tool, what those tools disagree about, and what to do if your score is bad. Save this link. It's the page I send to owners who want to pull back the curtain on their own site without paying anyone for the privilege.
How do I test my website speed in two minutes?
Open pagespeed.web.dev, paste your home-page URL, click Analyze, wait about 30 seconds. The Mobile tab opens by default, which is the one that matters. Look at the Core Web Vitals Assessment box at the top. Three numbers. Green, yellow, or red. That's the whole test.
PageSpeed Insights is a free Google tool that runs your URL through two engines at once: a real-world dataset called CrUX (Chrome User Experience Report) and a lab simulation called Lighthouse. The page that comes back is dense, but you only need the top of it to know whether you have a problem.
The actual two-minute flow:
- Go to pagespeed.web.dev.
- Paste your home-page URL.
- Click Analyze.
- When the report loads, leave the Mobile tab selected. Google ranks on the mobile experience.
- Read the Core Web Vitals Assessment box. It says "Passed" or "Failed" in big letters, and shows three numbers: LCP, INP, CLS.
- Stop reading. That's the score that matters.
To test service pages, repeat with each URL. A site can have a fast home page and slow inner pages when the home page got the design attention and the rest got templated.
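If you have more than a handful of pages to check, the same test can be scripted against Google's public PageSpeed Insights API (the v5 runPagespeed endpoint, same engine as the web tool). A minimal Python sketch; the page URLs below are placeholders, and the API key is optional but avoids rate limits:

```python
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_request_url(page_url, api_key=None, strategy="mobile"):
    """Build a PageSpeed Insights API v5 request URL for one page.
    strategy='mobile' matches the tab that matters for ranking."""
    params = {"url": page_url, "strategy": strategy}
    if api_key:  # optional; unauthenticated requests are rate-limited
        params["key"] = api_key
    return PSI_ENDPOINT + "?" + urlencode(params)

# One request per page you care about: home page plus each service page.
pages = [
    "https://example.com/",                          # placeholder URLs
    "https://example.com/services/water-heaters",
]
for page in pages:
    print(psi_request_url(page))
    # Fetch with urllib.request.urlopen(...) and json.load(...) to get the report.
```

The fetch itself is left as a comment so you can plug in whatever HTTP client you already use.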
Which speed test tool should I actually use?
For 99% of local-business owners, the answer is PageSpeed Insights. It's the only free tool that combines real-user field data from Chrome with Google's own ranking thresholds, which means it's the tool whose result actually correlates with whether you're being penalized in search. GTmetrix and WebPageTest exist for different jobs. Use them when you've already failed PSI and need to know why.
How I think about the three:
- PageSpeed Insights. The verdict tool. Tells you whether real users are passing or failing the metrics Google uses for ranking. Free, no account, 30 seconds. Your starting point and usually your only point.
- GTmetrix. The reporting tool. PDFs, historical tracking, location selection. Useful for monitoring over time or sending a clean report to a contractor. Free tier; useful features are paid. Same Lighthouse engine as PSI, so the verdict mostly matches.
- WebPageTest. The diagnostic tool. The most powerful of the three. Pick exact device and network combinations, see waterfall charts of every request, run filmstrip captures of how the page paints frame by frame. Use it when you need to know which specific image or script is breaking the page, not just that the page is broken.
According to DebugBear's tool comparison, GTmetrix and PSI both run on Google's open-source Lighthouse engine, while WebPageTest uses its own measurement stack with deeper instrumentation.
My order of operations: run PSI first. If it passes, you're done. If it fails, fix what's findable in the diagnostics and re-run. Still failing, run WebPageTest to find the specific request that's killing you.
What's the difference between field data and lab data?
This is the single most-confused topic in site speed, and PSI makes it worse because both numbers appear on the same page. Field data is the score from real users on real devices. Lab data is the score from one simulated test on a virtual phone. Field data is what Google ranks you on. Lab data is what gets you to a fix.
The PSI report has two sections. The top, Core Web Vitals Assessment, shows field data. According to Google's documentation, this data comes from CrUX (Chrome User Experience Report), a continuously-running dataset of how real Chrome users have experienced your site over the previous 28 days. Field data is what Google uses to grade your site for ranking.
The bottom section, Diagnose Performance Issues, shows lab data. Lab data is generated by Lighthouse, which loads your page once on a simulated mid-tier Android phone over a simulated 4G connection in Google's data center. Google's web.dev docs are clear: lab data exists to give developers a reproducible debugging environment. It's the diagnostic tool, not the score.
The two will disagree. A lot. It's common for a site to show a green Lighthouse score of 95 with a failing field-data verdict, or the reverse. Real users have older phones than the lab simulation, real cellular instead of simulated cellular, different times of day with different cache states. The lab number is a single optimistic snapshot. The field number is the actual score.
Two practical implications:
- If your field data passes but your lab Lighthouse score is mediocre, you're fine. Don't let an agency sell you a rebuild on a 67/100 Lighthouse score if real users are passing. Google doesn't rank on Lighthouse.
- If field data is missing entirely, your site doesn't get enough Chrome traffic for CrUX to sample. Common for small or new sites. Treat the Lighthouse score as your best estimate until traffic catches up.
Shorthand: field data is the report card, lab data is the practice test. The practice test tells you what to study. The report card is the only thing that grades you.
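For anyone scripting the API, the report card and the practice test live under different keys in the same JSON response: loadingExperience holds the CrUX field data, lighthouseResult holds the lab run. A sketch of pulling the two apart; key names follow the v5 schema as I understand it, and the sample response is made up:

```python
def split_field_and_lab(report):
    """Separate the field (CrUX) verdict from the lab (Lighthouse) score
    in one PageSpeed Insights API v5 response."""
    field = report.get("loadingExperience", {})
    lab = report.get("lighthouseResult", {})
    return {
        # Field data: the 28-day real-user verdict Google ranks on.
        # Missing entirely on low-traffic sites.
        "field_verdict": field.get("overall_category"),  # FAST / AVERAGE / SLOW
        "field_lcp_ms": field.get("metrics", {})
                             .get("LARGEST_CONTENTFUL_PAINT_MS", {})
                             .get("percentile"),
        # Lab data: one simulated run. The debugging gauge, not the grade.
        "lab_score": round(lab["categories"]["performance"]["score"] * 100)
                     if lab else None,
    }

# Hypothetical response: a shiny 95 lab score next to a SLOW field verdict.
sample = {
    "loadingExperience": {
        "overall_category": "SLOW",
        "metrics": {"LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 3400}},
    },
    "lighthouseResult": {"categories": {"performance": {"score": 0.95}}},
}
print(split_field_and_lab(sample))
```

When the two disagree like this, the field verdict wins: real users are failing no matter what the simulation says.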
What numbers actually matter and what's noise?
Look at three numbers. LCP, INP, CLS. Ignore everything else on the first read. The Performance score in the big circle, the Speed Index, the Total Blocking Time, the dozens of "opportunities" below: those are debugging signals, not pass/fail criteria. Google ranks on the three Core Web Vitals only.
What each one is:
- LCP (Largest Contentful Paint). How long it takes for the biggest visible thing on your page, usually the hero image or headline, to finish loading. Threshold under 2.5 seconds. Aim for under 1.5 for a real edge. The metric most local-business sites fail.
- INP (Interaction to Next Paint). How long the page takes to respond when someone taps a button. Threshold under 200 milliseconds. Most local-business sites pass this naturally.
- CLS (Cumulative Layout Shift). How much stuff jumps around as the page loads. Threshold under 0.1. You know when you go to tap a button and an ad pushes it down right as your finger lands? That's CLS, and home-services sites usually fail it because of late-loading review widgets and chat embeds.
All three are measured at the 75th percentile of real-user visits. Out of every 100 people loading your page, 75 have to hit those numbers or better. Fast loads on your office Wi-Fi don't paper over slow ones from real customers on real phones.
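The 75th-percentile rule is easy to see with made-up numbers. A quick Python sketch using the nearest-rank method (the visit counts are hypothetical):

```python
import math

def p75(samples_ms):
    """Nearest-rank 75th percentile: the value that 75 out of every
    100 visits beat or match."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(0.75 * len(ordered)))  # 1-based nearest-rank position
    return ordered[rank - 1]

# Hypothetical month: 80 visits loading in ~2 s, 20 in ~6 s.
passing = [2000] * 80 + [6000] * 20
print(p75(passing))   # 2000 ms -> under the 2.5 s LCP threshold, passes

# Shift ten more visits to the slow side and the p75 lands in the slow group.
failing = [2000] * 70 + [6000] * 30
print(p75(failing))   # 6000 ms -> fails
```

Notice that the average barely moves between the two months; the 75th percentile is what flips, which is exactly why a minority of slow real-world loads can fail a site that feels fast on your own phone.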
Ignore on the first pass: the big 0-100 Performance score (that's the lab Lighthouse number, and owners get sold rebuilds on it all the time), Speed Index, First Contentful Paint, Total Blocking Time, and the "Opportunities" and "Diagnostics" lists. Those are the to-do list, not the grade.
If you only have ten seconds, look at LCP. According to the 2025 Web Almanac, only 62% of mobile pages pass LCP, making it the hardest of the three to pass and the one most directly tied to how fast your site feels. If yours is over 2.5 seconds, that's the headline. Everything else can wait.
Want a clean read on your site without parsing the Google report yourself?
The free Front Door Score runs the same Core Web Vitals checks, plus a handful of conversion and trust signals, and gives you a one-page summary written for owners, not developers. Takes about 90 seconds, no email required to start.
How accurate are these tools, really?
Field data is highly accurate because it's a direct sample of real users. Lab data is repeatable but optimistic, and individual lab runs can vary by 10-20% depending on test conditions and Google's own server load. Trust field data over lab data, trust a multi-run average over a single run, and trust three different tools agreeing over any one tool's verdict.
If you've run PSI twice and gotten different scores, that's normal. The simulation has natural variance. The score that matters, the field data at the top, won't change between runs because it's a 28-day rolling average. Only the lab numbers fluctuate. So if you ship a major fix today, the field-data score will start improving over the next two to four weeks as new traffic data accumulates. Lab data updates immediately, which is why developers use it as the during-fix gauge.
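If you want one stable lab number to track while you're mid-fix, take the median of a few runs instead of trusting any single one. A tiny sketch with hypothetical run results:

```python
from statistics import median

def stable_lab_number(run_results):
    # The median of several runs smooths out single-run simulation variance
    # better than the mean, which one outlier run can drag around.
    return median(run_results)

# Three hypothetical Lighthouse LCP readings for the same page, in ms:
runs = [2100, 2600, 2300]
print(stable_lab_number(runs))  # 2300
```

Track that median before and after each change; the field number will follow over the next few weeks.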
What do I do if the score is bad?
Figure out which metric is failing. Most local-business sites fail LCP, fewer fail CLS, almost none fail INP. The fixes are different for each.
If LCP is failing:
- Compress and resize the hero image. Max 1600 pixels wide, saved as WebP or AVIF, under 200 KB. A single uncompressed photo can outweigh the rest of the page combined.
- Defer or remove third-party scripts. Chat widgets, review carousels, booking embeds. Each adds 100-300 KB. Most builders have a "lazy load" or "delay" toggle that pushes these to load after the user has been on the page for two seconds.
- Audit your tag stack. Open Chrome DevTools (right-click, Inspect, Network tab, reload). Count third-party requests. More than six, and half of them probably aren't earning their keep.
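If you'd rather script the image check than eyeball it, here's a rough Python sketch of the first rule above. The 200 KB and 1600-pixel numbers are the targets from this checklist, not hard limits, and the width has to come from your image editor or CMS since this only reads the file size on disk:

```python
import os

HERO_BUDGET_BYTES = 200 * 1024  # the ~200 KB target from the checklist
MAX_WIDTH_PX = 1600             # the resize target from the checklist

def audit_image(path, width_px):
    """Flag a hero image that blows either budget."""
    problems = []
    if os.path.getsize(path) > HERO_BUDGET_BYTES:
        problems.append("over 200 KB: recompress or convert to WebP/AVIF")
    if width_px > MAX_WIDTH_PX:
        problems.append("wider than 1600 px: resize before uploading")
    return problems

# Usage: audit_image("hero.jpg", 3200) -> a list of problems, empty if it passes.
```

Run it over every image in your uploads folder and the worst offender is usually obvious in seconds.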
If CLS is failing:
- Set explicit width and height on every image. When the browser doesn't know an image's size until it loads, the layout jumps when it arrives.
- Reserve space for late-loading widgets. Chat bubbles, review carousels, ad units. Give them a fixed slot in advance.
- Avoid above-the-fold late-loading fonts. Use the font-display: optional CSS rule or system fonts above the fold.
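Finding the images behind the first CLS fix is scriptable too. A small Python sketch using the standard-library HTML parser to list every img tag that's missing explicit width and height attributes (the sample page markup is made up):

```python
from html.parser import HTMLParser

class UnsizedImageFinder(HTMLParser):
    """Collect <img> tags missing explicit width and height attributes,
    the most common layout-shift source on small-business sites."""
    def __init__(self):
        super().__init__()
        self.unsized = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            present = {name for name, _ in attrs}
            if not {"width", "height"} <= present:  # one or both missing
                self.unsized.append(dict(attrs).get("src", "(no src)"))

def find_unsized_images(html):
    finder = UnsizedImageFinder()
    finder.feed(html)
    return finder.unsized

page = '<img src="hero.jpg" width="1600" height="900"><img src="badge.png">'
print(find_unsized_images(page))  # ['badge.png']
```

Paste in your page source (View Source, copy everything) and each file it prints is an image the browser can't reserve space for until it arrives.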
If INP is failing (rare): you have a JavaScript bottleneck. This is the metric where you actually need a developer. Run WebPageTest, look at the Long Tasks breakdown, find the script blocking the main thread.
If you've done all of this and the score hasn't moved, the platform itself is the ceiling. A site builder's framework code loads before any of your content does: a Wix or Squarespace home page typically ships 800 KB to 2 MB of platform JavaScript ahead of your first word of copy. You can't optimize that away. You can only leave the platform.
I rebuilt TruLight SLC on Next.js after watching the old Lovable.app version land an LCP of 1.92 seconds and a total load time of 4.15 seconds. After the rebuild on Next.js + Vercel: LCP dropped to 391 milliseconds, total load time to 745 milliseconds, and page weight from 35.3 megabytes to 10. Same content, same domain. Full breakdown in our TruLight SLC case study.
How often should I run this test?
Quarterly is fine for a static site. Monthly if you're actively shipping changes, adding pages, or installing new tracking. Always after any major change: new theme, new chat widget, new analytics tool, homepage redesign. Speed regressions usually come from added scripts, and they sneak in a quarter at a time.
For automatic monitoring, the Core Web Vitals report inside Google Search Console flags any pages that drop into "Needs Improvement" or "Poor" buckets. Free, same field data as PSI, and honestly the most useful early-warning system Google ships.
Frequently asked questions
What's a good PageSpeed Insights score for a local business?
The right answer is "field data passing on all three Core Web Vitals," not a specific lab Performance score. That said, a Lighthouse mobile Performance score of 90+ is good, 70-89 is acceptable, under 70 means there's real work to do. Don't chase 100. The optimization cost climbs steeply past 95 and visitors don't notice the difference.
Why do I get different scores every time I run the test?
That's the lab data fluctuating. Lighthouse simulates a phone and a 4G connection in Google's data center, and the simulation has natural variance from one run to the next. The field data at the top of the report doesn't change between runs because it's a 28-day rolling average of real users. If you want a stable lab number, run the test three times and take the median.
Should I trust GTmetrix or PageSpeed Insights?
Both are valid. They use the same Lighthouse engine, so the underlying numbers should be similar. The difference is that PageSpeed Insights also shows Google's CrUX field data, which is what Google actually ranks you on. If your goal is to know how Google sees you, PSI is the canonical source. GTmetrix is better for pretty reports and historical tracking.
My site doesn't have field data. Is that a problem?
Probably not. Sites that don't get enough Chrome traffic for CrUX to sample show up with "field data not available." That's a traffic-volume issue, not a speed issue. Use the lab Lighthouse score as your best estimate, focus on the LCP and CLS lab numbers, and check back in a few months once you have more visitors.
You can run the test yourself. You can also let us run it for you, alongside a handful of other things that PageSpeed Insights doesn't check (like whether your site is set up to be cited by AI search, whether your forms convert on mobile, and whether your trust signals actually look real). The free Front Door Score is a one-page diagnostic written for owners, not developers, and it usually tells you within five minutes whether your site needs a tune-up or a rebuild.
Want to know how your site stacks up?
Get a free, no-pitch score on speed, SEO, and AI search. Takes about 90 seconds.