Earlier this week, I posted a video on YouTube about how to create your own personal jQuery, and wrote a text article version of it as well.
On YouTube, someone in the comments was very insistent that jQuery has no impact on performance whatsoever.
What is it, 30kb? Even at 7Mbps it still downloads in just a few milliseconds. You don’t need the latest hardware or superfast broadband to handle it. And it could already be cached in the client. Optimizing images will gain far more performance than avoiding a jQuery download.
Today, I wanted to explain why this is wrong.
Let’s dig in!
When a browser downloads an image file, it can start rendering it right away, before it’s even finished downloading the whole thing.
JavaScript doesn’t work that way. The browser has to download the entire file, then parse it, compile it, and execute it, and much of that work blocks the main thread while it happens.
All of this is to say, 1 byte of JS is much more expensive than a byte of JPG or PNG.
If you want to learn more about how this works under the hood, Addy Osmani from Google wrote a detailed post about it.
The actual size of jQuery is much bigger than 30kb
That’s the minified and gzipped size. Once browsers unzip jQuery, it’s actually 285kb, more than nine times larger.
That’s true for all major libraries, by the way.
Those gzipped sizes are super important for transit over the wire and downloading times. But once the file is actually on the device, it’s the bigger “real size” that the browser has to parse, compile, and keep in memory.
Let’s look at some data.
A few years back, the GOV.UK team removed jQuery from their site and found substantial performance gains, particularly for folks at the 95th percentile (the 5 percent of users with the slowest connections).
We see many of our key metrics trending down (for p75) after the change, including frontend time, First CPU Idle, JS Long Tasks.
Click through to Matt Hobbs’ Twitter thread to see the charts and metrics they used.
And then just this week, performance specialist Tim Kadlec tweeted out this…
Phew. What a difference a :last makes in jQuery. Third-party @shopify app with a very long list of selectors in a click handler.
With the deprecated
It’s not just jQuery, of course. From Zach Leatherman…
An analysis of Core Web Vitals across 9.3 million web sites as of February 6, 2023 shows that both React and Next.js perform worse than the aggregation of all other sites in the archive, for both mobile and desktop.
Why are libraries slow?
In a word: abstraction.
Every layer of abstraction you add means more work for browsers, takes up more space in memory, and slows processing times down.
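As a generic illustration (deliberately simplified, and not a React or Preact comparison): the two functions below return the same answer, but the layered version routes the work through extra function calls, makes three passes over the data, and allocates two throwaway arrays along the way.

```javascript
// Direct: one pass, no intermediate allocations.
function direct(nums) {
	let total = 0;
	for (const n of nums) {
		if (n % 2 === 0) total += n * n;
	}
	return total;
}

// Layered: same result through three abstraction layers, which means
// three passes over the data and two intermediate arrays.
function layered(nums) {
	return nums
		.filter((n) => n % 2 === 0)
		.map((n) => n * n)
		.reduce((sum, n) => sum + n, 0);
}

const data = [1, 2, 3, 4, 5, 6];
console.log(direct(data), layered(data)); // both log 56
```

On six numbers the difference is invisible. The cost shows up when the abstraction sits in a hot path, like a click handler churning through a long list of selectors.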
A good example of this is React vs. Preact. My friend Jeremy Wagner did some testing, and found that Preact was faster to both render the initial UI and handle DOM updates.
Why? Preact is closer to the metal and uses far fewer abstractions under the hood, despite having a nearly identical API to React proper.