
December 25th, 2023 · #webdev #performance #debugging

How We Made Syntax.fm Faster

Scott and Wes discuss various performance issues encountered while rebuilding the Syntax site, including slow database queries, unnecessary data loading, and Open Graph image generation. They share the optimizations and tools used to diagnose bottlenecks and make improvements.

Topic 0 00:00

Transcript

Announcer

Monday. Monday. Monday. Open wide, dev fans. Get ready to stuff your face with JavaScript, CSS, node modules, barbecue tips, git workflows, breakdancing, soft skills, web development, the hastiest, the craziest, the tastiest web development treats. Coming in hot, here is Wes "Barracuda" Bos and Scott "El Toro Loco" Tolinski.

Scott Tolinski

Welcome to Syntax. It's episode number 708, and we're feeling great. We're gonna be talking about performance here.

Topic 1 00:27

Talking about performance issues we hit building new Syntax site

Scott Tolinski

We pushed a, you know, a new syntax website. We did a ton of work into it. And you know what? It could always be faster. So you know what we did? We we got our hands dirty.

Scott Tolinski

We looked at metrics. We looked at time lines, flame graphs, and all that stuff, and we made it faster.

Scott Tolinski

We're gonna be telling you all about the tips and techniques that we implemented to make this thing faster, what tools we used, and what we learned along the way. My name is Scott Tolinski. I'm a developer from Denver. With me, as always, is Wes Bos. What's up, my man?

Wes Bos

Hey, I am excited to talk about this because there were several spots in the new website that were a little sluggish, and for different reasons. So I think it's kind of an interesting episode to sort of say, all right, these are the problems that we hit on the different aspects of performance, and here's how we figured out what was causing it to be slow, and here's how we actually made it faster.

Topic 2 01:35

Old Syntax site explanation and differences from new one

Scott Tolinski

Totally.

Scott Tolinski

And I think it's also important to note that, like, hey, we launched, and we went from a site that was relatively simple. Everything was pregenerated. It was a static site.

Scott Tolinski

Basically, all of the show pages were generated into HTML at compile time, which is really fast. Once you bring in a database, you bring in server side processes and things like that, you suddenly have some things you have to worry about. You have areas where it's much easier to be slow, because instead of having that data cached as single files that can be loaded and ready to go, you gotta do a database query. You gotta check some things first. You gotta have all sorts of additional steps that can bring on slowness, and that's really what the problems mostly were. I would say just about all of our problems were solved with server side fixes rather than client side fixes. Yeah. And I'll correct you there, though. The old site wasn't

Topic 3 02:52

Clarification on how old Syntax site was generated

Wes Bos

pregenerated. Like, it wasn't HTML.

Wes Bos

No. It was generated on demand, and the only reason it was generated on demand is because we needed to be able to push out new shows as soon as they were released, and we didn't want to have to regenerate every single page at, like, 9 AM on, Oh, right, Friday morning. So we did the stale while revalidate thing, where you visited the website, and it gave you the cached version, and then it generated a new version in the background for the next person. Yes. And that was in Next.js,

Scott Tolinski

and one of the reasons why I got that wrong is because Wes primarily knocked out the first website, so I didn't have a whole lot of hand in building it initially. But this website, you know, again, the old one didn't have a database. I think that was the big thing. And the database aspect. Just markdown files? Yeah. The database is really one of the heavier loads of it all. So first and foremost, how did we know it was slow? Where did we go? What did we do to diagnose this thing for being slow? And for me, you know, you start some of it with the eye test. You use the site and you say, this thing doesn't feel as fast as it should feel, and I have really fast Internet.

Scott Tolinski

I would imagine that there's some issues here, and that can be backed up by all kinds of data. I mean, granted, you could load up your dev tools, you could, you know, throttle your Internet speed in your network tab, and you can load it up that way and measure it.

Topic 4 04:09

Ways to test and measure website performance

Scott Tolinski

You can run it through Lighthouse or all kinds of things. But what we did, since, you know, we're presented by Sentry here, is we used Sentry, and I don't want this whole episode to turn into a Sentry ad, but we did use Sentry's performance tools an absurd amount to solve our issues. And I think it's important to highlight that, because we loaded up the Sentry performance tools, and they're showing you web vitals across every route. And one of the things it shows you, first and foremost, on that page is: here is the slowest route of your site. Not the slowest page necessarily, the slowest route. It's able to show us that the /[show number]/[slug] route is the slowest route. Actually, that's the slowest route now, but it wasn't, because we'll talk about some of the changes we've made there.

Topic 5 05:14

Using Sentry to identify slowest routes and database queries

Scott Tolinski

And it tells you all kinds of information, like the average time to first byte, and the opportunity score, which is like, hey, a lot of people are hitting this thing, and it is slow. Therefore, there's a lot of opportunity here. Right? And because of that, we were able to see exactly what are the things we should be focusing on first: which are the slowest pages, which have the most opportunity to speed up. And we were seeing a handful of red perf scores for some of the key pages on the site; the index page and the show page were primarily the two biggest ones.

Scott Tolinski

So that's the web vitals. Right? You're seeing time to first byte. You're seeing the, what is the LCP? Largest?

Wes Bos

Largest contentful paint.

Scott Tolinski

Largest contentful paint, the first contentful paint, first input delay, cumulative layout shift. We're seeing all of that information across all of the routes of our site. Another thing that we were able to see, using the queries feature within Sentry, is we were able to look at all of our database queries, and we were able to see which of our database queries were consistently the slowest.

Topic 6 06:26

Slowest database queries revealed issues with findFirst method

Scott Tolinski

And There were a handful of ones that stood out as consistently being slow.

Scott Tolinski

And so with that information, you have the web vitals, you have the queries, you have the general performance features here in Sentry, and then you have things like the eye test: you're using the site, you're feeling how slow it is. You can really dial in exactly what are the biggest things you should be looking at first, and then start there. So the first thing we did, knowing that we have a database here, was we looked into the database calls specifically.

Scott Tolinski

So using the queries tab, we were able to see: okay, I'm seeing that I have a bunch of really slow database queries, and all of them were the findFirst method within Prisma.

Scott Tolinski

And that made me think about something. This is, you know, really my first big site with Prisma.

Scott Tolinski

And I was thinking, findFirst. Maybe there's an issue with findFirst. Maybe we're missing an index or something.

Scott Tolinski

Well, it turns out that findFirst is really just a findMany with a limit of 1, and the method that we actually wanted to use was findUnique: better performance. That's an easy fix right there. So we just changed all of our findFirst calls to findUnique. Easy fix, performance win, nice. So the other thing we had was the transcripts were being loaded.
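The semantic difference can be sketched with a plain array and a Map standing in for a real Prisma client (the `Show` shape and episode data below are made up for illustration; in the real code the change is just `db.show.findFirst(...)` becoming `db.show.findUnique(...)` on a unique column):

```typescript
// Illustrative stand-ins for Prisma's findFirst vs findUnique.
// findFirst is effectively findMany({ take: 1 }): a filtered scan that
// stops at the first match. findUnique only accepts a unique field, so
// the database can do a point lookup on that field's index instead.
type Show = { number: number; slug: string };

const shows: Show[] = [
  { number: 707, slug: "our-favorite-tools" },
  { number: 708, slug: "how-we-made-syntax-fm-faster" },
];

// findFirst-style: linear scan, works with any filter, implicit LIMIT 1.
function findFirst(where: Partial<Show>): Show | null {
  return (
    shows.find((s) =>
      Object.entries(where).every(([k, v]) => s[k as keyof Show] === v)
    ) ?? null
  );
}

// findUnique-style: keyed lookup on the unique `number` field.
const byNumber = new Map(shows.map((s) => [s.number, s]));
function findUnique(number: number): Show | null {
  return byNumber.get(number) ?? null;
}
```

The point lookup is only legal because `number` is unique; `findUnique` refuses non-unique filters, which is exactly what lets the database skip the scan.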

Topic 7 07:51

Transcripts were loaded for all show pages causing slowness

Wes Bos

The way that we did the show page: when you visit one of the episodes on Syntax, we just had the player, we had the show notes for that specific show, and a couple of other items on that page. And then when it came time to implement the transcripts, what I did is I was like, okay, the transcript doesn't need to be on every page, so it needs to be on its own, like a separate tab, right? So I wrote some tabs, and then I wrote a route that would catch whether you're on the show page or the show page's transcripts, then hide and show depending on what we have there. However, first of all, before any of this stuff happened, the transcripts themselves are absolutely massive, because, yeah, you have the entire transcript. And inside of that, you have what are called utterances, which is like a word or two, or three sentences. And then inside of the utterances, you have every single word, and it tells you when the words have started and stopped. And I saved all that data in the database because there may be a time when we want to build, like, some sort of video tool that will, you know, highlight the currently spoken word.

Wes Bos

So I was like, all right, we'll save all that data. And I was loading the data for every single word, just in dev. And it was just way too slow, because of the amount of JSON you need. Like, imagine an hour's worth of talking, and every word has a start, a stop, a confidence, and a speaker ID attached to it. It's just way too big. So I scrapped being able to highlight the currently spoken word, and I'm just getting every single utterance, and I'm able to figure out the currently spoken utterance depending on the specific time. But the way that we did the tabs was that the transcript data was being loaded when you were visiting the actual show page, which is good for, like, tab performance, you can flip over to it instantly, but it takes way too long, and the chances of somebody visiting the page and clicking over to the transcript are pretty low. Right? Most people are probably not going to the transcript tab. If you are, that's fine. That's fine to have a bit of a larger load on that page, but that data doesn't need to be loaded when you visit just the actual show page. So you rejigged

Scott Tolinski

how SvelteKit did all the nested routes? Yeah. And this is a testament to the abilities of nested layouts. So, you know, the way that we had it before was that both of the queries were being done in one fell swoop on the page. Right? It was the show page, and then we were just using, like, basically an if statement to show or hide the transcripts. Right? And we realized that not everybody who hits the page is even gonna click on that transcripts tab to load those transcripts.

Topic 8 10:19

Refactored routing to lazily load transcripts only when needed

Scott Tolinski

So why are we doing that heavy lifting for everybody on the initial load of the page? So what we did is we moved the database calls for most of the show information to the layout itself, and we moved most of the HTML into the layout of the page.

Scott Tolinski

Then, using nested layout routing, we were able to pass in via a slot either the show notes or the transcript. Basically, whatever tab you're on. So instead of having that tab functionality with an if statement, it's being routed.

Scott Tolinski

And, likewise, we moved the transcript database call to the transcript page itself, basically saying, hey.

Scott Tolinski

We are now able to use this feature where we can pull out and have this information wrapping some other information as a true nested route.

Scott Tolinski

And that way, just like before, you can hit /transcript and get to the transcript page, but only then are you going to have to do that massive DB call, which was going to save us a considerable amount of time, and it did.
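The split they describe can be sketched with plain functions standing in for SvelteKit's `+layout.server.ts` and `+page.server.ts` load functions (the route paths, data shapes, and the `dbCalls` counter here are illustrative, not the real site's code):

```typescript
// Counter to make visible which "load" functions actually run.
const dbCalls = { show: 0, transcript: 0 };

// shows/[number]/+layout.server.ts: runs for the show page AND the
// transcript page, so it loads only the lightweight show data.
function layoutLoad(number: number) {
  dbCalls.show += 1;
  return { show: { number, title: "How We Made Syntax.fm Faster" } };
}

// shows/[number]/transcript/+page.server.ts: runs only when the
// /transcript URL is visited, so the heavy query is deferred until then.
function transcriptPageLoad(number: number) {
  dbCalls.transcript += 1;
  return { utterances: ["Welcome to Syntax...", "We're feeling great..."] };
}

// Visiting the show notes page touches only the layout load:
layoutLoad(708);
// Visiting the nested /transcript route runs the layout plus the page:
layoutLoad(708);
transcriptPageLoad(708);
```

The page's `<slot>` then receives either the show notes or the transcript markup, so there's no if statement and no transcript query for visitors who never click the tab.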

Scott Tolinski

So those were some big, big things that we did on the database side of things to improve loading on the heaviest pages, and that alone had a huge impact.

Wes Bos

And I run into this quite a bit with databases and loading data per page, and I almost always tell myself, I wish I could put the query in the component itself.

Topic 9 12:17

Wes discusses how he wishes queries could be colocated in components

Wes Bos

You know? And now we have server components.

Wes Bos

With Apollo, they do that.

Wes Bos

Like, Apollo will take a Next.js site and walk the component tree looking for queries.

Wes Bos

And I think Qwik does that as well, where, Are you talking about Apollo? Yeah. Apollo, like, Apollo GraphQL.

Wes Bos

Okay. That's a bit nicer because, like, then you don't have to decide at a page level what data you want. You could just put the query in the component itself, and then the page could figure out, oh, well, I'm not rendering this component, so I'm not gonna fetch that just yet. So I think that's something that could possibly be improved in a lot of these frameworks.

Scott Tolinski

Yeah. And, do you think React Server Components does, like, solve that directly? Because to me, it does.

Wes Bos

I wonder if everybody else is going to start to do that or if there's a different solution there. Yeah. I think the idea of, yeah, just render it on the server and send the HTML to the client fixes a lot of that, and it makes your components a lot more portable.

Scott Tolinski

Yeah. Totally.

Scott Tolinski

Cool. Well, the next thing we did, which is, you know, honestly, I would say if you're running into perf issues, right, if it's not clearly, like, client side jank from looping or something like that.

Topic 10 13:55

Caching as solution for many performance problems

Scott Tolinski

I would say so many perf issues in your stacks can be solved by a couple of things: optimizing database calls, adding indexes, those types of things, and then caching. Caching will solve so many perf issues, and it will solve them really effectively, because, you know, again, some of the heaviest lifts that we had in the site were going off to do the database calls. If that database call takes 200 milliseconds, that's 200 additional milliseconds you're tacking on to the entire load of things. Maybe it takes longer than that. Again, that's tacking on to every single time you do that call. So what you wanna be doing is caching those calls, so that way you're not having to hit the database all the time, and what you're hitting is a cache.

Scott Tolinski

And that cache can be something in memory, which is basically just saving it to a value, or it can be in a memory-level store like a Redis store.

Topic 11 14:42

Using Upstash Redis caching for database query results

Scott Tolinski

What we used is we used a service called Upstash.

Scott Tolinski

Now, you may have heard that Vercel is releasing their own caching platform. I believe it's called KV.

Scott Tolinski

You might have heard that KV is just a wrapper around Upstash.

Scott Tolinski

So while we could have used Vercel's caching platform and paid the premium on top of it, it makes way more sense to just use Upstash itself. I mean, why wouldn't you do that, right, especially if it's the same? And, basically, it is using Redis, but it's Redis with a serverless API.

Scott Tolinski

So we were able to use Upstash, with their library, to do the caching, which is basically the exact same as Redis' API. You set something with a key, and you retrieve it with that key. Key value store. It's really just as simple as that. And the way we're doing it is we're saving each database call for each show into the cache.

Scott Tolinski

And when we check, we first check to see if the show's in the cache. If it's in the cache, we return the one in the cache. If it's not, we get the one from the database. So you're only doing the database calls when you have to get information from the database. Now, when do you have to get information from the database? Well, we set it up on a millisecond timeline. So newer shows automatically will retrieve a new one from the database every 600 milliseconds.

Scott Tolinski

When it retrieves a new one from the database, it then updates the cache. Right? And, likewise, older shows will be there for 3,600 milliseconds.

Scott Tolinski

That way, because the older shows, you know, they don't get updated as much. And let's say we have an uh-oh in one of our show notes or something, and we need to update it really quick. We can make that update and be assured that not everybody's going to be served the old cache. Now, I also created in the admin section, I don't know if you've seen this, a drop cache button. If in an emergency we wanna just completely nuke the cache, you can click that button. It's just gonna delete the cache entirely.

Scott Tolinski

Oh, that's good. I was gonna ask about that. Yeah. I mean, I come from the world of Drupal, where anytime you make any change, you click the delete all caches button, because everything is cached so hard. Yeah. Everybody knows in the Drupal world that your cache is almost always going to be the problem. So you know what you could also do is

Topic 12 17:19

Option to invalidate cache on new builds

Wes Bos

you could stick, like, the build hash or the latest git commit in as part of the key. Mhmm.

Wes Bos

And then, as soon as you do a new build, I know with, like, a lot of Vercel's CDN stuff, they do something similar to that. As soon as you have a new build, your entire cache is invalidated. I don't know if this is how they do it, but you can just use some sort of unique identifier as part of your,

Scott Tolinski

key creation. Yes. That's totally true. And we could do that to extend the lifespan of the cache. But, again, the data changes more often in the database than the build of the site does. Right? Exactly. Yeah. So you maybe don't want that. Right. You could use the hash, well, maybe not the hash, but you could use some sort of unique identifier to say that this show has been updated, maybe the timestamp for when it was last updated,

Wes Bos

and maybe that's the indicator. Right? Yeah. But in most cases, just go grab a coffee, and it will be fixed by the time you're done.
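The key-versioning idea they're circling can be sketched in a few lines. `BUILD_ID` and the `updatedAt` timestamp are illustrative stand-ins for a real deploy identifier and a database column:

```typescript
// Scope cache keys by a deploy identifier and a per-record version, so
// either a new build or a content edit simply misses the old entry.
const BUILD_ID = "a1b2c3d"; // e.g. the latest git commit, set at build time

function showCacheKey(number: number, updatedAt: Date): string {
  // A new deploy changes BUILD_ID; editing the show changes updatedAt.
  // Either way the key changes, so stale entries are never read again
  // and just age out of the store via their TTL.
  return `${BUILD_ID}:show:${number}:${updatedAt.getTime()}`;
}
```

The tradeoff is the one Scott notes: show data changes more often than the build does, so a timestamp-style version invalidates more precisely than a build hash.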

Topic 13 18:20

Caching times for new vs old show pages

Wes Bos

Right? Like, 3,600 milliseconds? 600 milliseconds? Nothing. Yeah. So you're only caching it for

Scott Tolinski

Half a second? Oh, 600 milliseconds. It's 600 seconds, probably? 600 seconds. Oh, man. That always gets me. I'm sorry.

Wes Bos

10 minutes. But it is seconds. Yes. 10 minutes. Plus, Upstash has a nice, like, dashboard where, if you're wondering, is this thing cached or not, and how much longer is left on the expiry of this thing, you can just log in to Upstash, and they have a nice little GUI where you can go look at it, and you can just click the delete button if you really want to. Totally. And what's great about Upstash is the pricing was very reasonable. I mean, we haven't even hit

Scott Tolinski

any sort of limits for what we need to be spending yet. So, cost wise, it's really good. I was very apprehensive about paying for a service for this, because in the past, I'm used to spinning up, like, a private Redis server and going that way. But, honestly, this has been cheaper than running a private Redis server on render.com, where I did it before for Level Up Tutorials.

Scott Tolinski

It's been cheaper than that. So, you know, hey, I was concerned that it would start to add to the cost in an unhealthy way, but it's been very cost effective. Yeah. Certainly, I would recommend,

Topic 14 19:44

Wes recommends trying Redis caching

Wes Bos

if you're listening to this podcast and you're trying to go a little bit further with your server side development, either, a, try to implement a Redis cache yourself.

Wes Bos

It's no different than a lot of, like, the local storage stuff you've probably done in the past. Or, b, try building a little in-memory cache with a Map in JavaScript, because you can do that a lot. You can create a Map outside of your function handler and then just say, well, if the Map has this key, then return it.

Wes Bos

You do have to add like a timestamp to your data

Scott Tolinski

as well, because you need to know when to expire it, and that's sort of the nice API of this Upstash one. In Redis, in general, you can set an expiry in seconds. Yeah. That's actually a really great thing about this, being able to set that expiry in there. That way, I don't have to do the math myself. Yeah. I also just wrote a helper function where you can pass in the function you're trying to run, and it will automatically check the cache and return the right thing without you having to do that manually each time. Just a nice little helper. Without adding the if statements? Yeah. Without the if statements populating your code. So now you just do a cacheCall function and pull in that data directly. And, honestly, it's been really super nice, and it sped up the queries instantly. Absolutely.

Topic 15 21:16

Adding cache headers for server side rendered pages

Scott Tolinski

Another thing we did: since our site is fully server side rendered, we just made sure that every single heavy route got cache headers, and we did the exact same strategy: if the episode is newer, it's going to be 600 seconds; if the episode is older, it's gonna be 3,600 seconds, in terms of how long it's cached for.

Scott Tolinski

And we also used stale while revalidate as well, which, if you wanna learn more about that, we did an entire episode on it. Wes, do you have that episode number handy? Let me see. Stale, man. The new Syntax search, let me just tell you, the new Syntax search is so fast. You need stale while revalidate. Episode 692 is a really great listen to learn a little bit more about why you might want to use that.

Scott Tolinski

Either way, that caching sped up our initial loading, and we just made sure that every single page that needed to be fast had database caching and the correct cache headers. Bingo. Bango.
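Those header tiers can be sketched as a small function. The 600/3,600 values mirror what they describe; the ten-episode cutoff and the day-long stale-while-revalidate window are assumptions for illustration:

```typescript
// Build a Cache-Control header with the newer/older episode tiers.
function cacheHeader(episodeNumber: number, latestEpisode: number): string {
  const maxAge = latestEpisode - episodeNumber < 10 ? 600 : 3600;
  // s-maxage: how long the CDN may serve it without revalidating.
  // stale-while-revalidate: how long it may keep serving a stale copy
  // while it fetches a fresh one in the background.
  return `public, s-maxage=${maxAge}, stale-while-revalidate=86400`;
}
```

In a SvelteKit load function this would be applied with something along the lines of `setHeaders({ "cache-control": cacheHeader(show.number, latest) })`.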

Topic 16 22:20

Summary of optimizations made

Scott Tolinski

Lots and lots of speed improvements.

Wes Bos

Yeah. In episode 692, stale while revalidate, I talked about how we're using it to cache our OG images that are generated. So real quick: we use Puppeteer to load the part of the website that generates the OG images, take a screenshot, and send it out the other way. I know there's lots of libraries out there, but this is the absolute best way, I guarantee you, if you want the type of control that we want over it. Now, I had hit one issue with that, and that was with LinkedIn. I noticed that whenever we shared one of our episodes on LinkedIn, we weren't getting the really nice Open Graph images. And those Open Graph images have been doing super well for us on all of the other platforms. We're getting a lot more traction of people clicking them because they look awesome, right? And they weren't working on LinkedIn. So I went down the whole rabbit hole again, which is my, like, least favorite thing to do: try changing something, commit it, wait for the thing to build, go to whatever tool I'm using. In this case, I had to use the LinkedIn debugger tool and press the button, and it would scrape it and give you a little bit of information about what it's going to display, and it would not pick up the Open Graph tags. So I spent, like, 2 hours trying different tags and different Open Graph namespaces and name versus description and meta. And I was talking to Kilian from Polypane, because, like, Polypane said it was working fine, but it wasn't working fine when I actually posted it on LinkedIn.

Topic 17 24:06

Issue with LinkedIn not displaying proper open graph images

Wes Bos

And finally, I thought, like, I wonder.

Wes Bos

It takes between 8 and 15 seconds to generate the Open Graph image: 15 seconds if it has to fire up a new browser, about 8 seconds if it doesn't have to fire up the browser and take the thing. And that's too long to wait for an image to load, right? So those images are pregenerated and cached.

Wes Bos

And I was using the stale while revalidate header to just cache them on the Vercel CDN, right? In the past, I had actually stuck those values in my own website. I'd stick them just in memory, store them for a couple hours or whatever, and I had never had an issue. But on LinkedIn, for some reason, LinkedIn would not hit the cache.

Wes Bos

So I was watching the Vercel logs come in. And even though I guaranteed that there was a CDN hit from generating it beforehand, something with LinkedIn, whether they were putting parameters on the end of the URL, or whether their user agent was different and Vercel was allowing them to regenerate it, I could not get them to hit the cache at all.

Topic 18 25:21

Reason LinkedIn issue occurred and solution with more caching

Wes Bos

And that was frustrating, because it was taking too long, and LinkedIn was telling me it couldn't find the Open Graph image. So I was like, hey, Scott just put this Redis thing in here. I'm going to use that.

Wes Bos

So I just grabbed Redis, threw the image into Redis, and gave it a decent expiry.

Wes Bos

And then the next time somebody requests the image, I check in Redis if it is in there. I still left the Vercel CDN stuff on there, because I feel like it is ideal to cache it at a CDN level.

Wes Bos

But to solve the LinkedIn issue, I'm now also caching it in Redis, and I check if it's in Redis before we go ahead and generate a fresh one, and boom, that fixed it. Deployed the sucker, and now we've got nice Open Graph images, nice and fast, on LinkedIn. Nice.
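The shape of the fix is check-the-cache-before-the-expensive-render. A sketch with a synchronous Map standing in for Redis, and `renderOgImage` as a made-up stand-in for the 8-to-15-second Puppeteer screenshot step:

```typescript
// In-memory stand-in for the Redis layer in front of image generation.
const ogCache = new Map<string, string>();
let renders = 0;

function renderOgImage(slug: string): string {
  renders += 1; // in reality: launch Puppeteer, screenshot, 8-15 seconds
  return `png-bytes-for-${slug}`;
}

function getOgImage(slug: string): string {
  const key = `og:${slug}`;
  const cached = ogCache.get(key);
  if (cached) return cached; // scrapers that bypass the CDN hit this path
  const image = renderOgImage(slug);
  ogCache.set(key, image); // real code would also set an expiry on the key
  return image;
}
```

The CDN cache stays in front as the ideal first layer; the Redis check is the backstop for clients, like LinkedIn's scraper here, that never seem to get a CDN hit.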

Topic 19 26:11

Overall suggestion to use Redis caching

Scott Tolinski

So, basically, the long story short of this entire episode is: dump it in Redis.

Scott Tolinski

You start with the heaviest loads, and you dump them in Redis.

Scott Tolinski

And then, like, that's it. That's the whole strategy there. So, hey,

Wes Bos

it works. Alright.

Wes Bos

That's it. Hopefully, you learned a thing or two about making a site faster. It's kinda interesting. We've talked a lot about, like, client side performance in the past, and we didn't have any. I don't think we had any client side performance issues, did we? I don't really think so. No? You know, I think the server side rendering aspect of the site generally makes

Scott Tolinski

some of those things less heavy. You're not having to worry about skeleton screens as much. I think sometimes even the client side stuff that is slow is still almost always data loading, but it's not heavy client side work.

Scott Tolinski

We didn't have to memoize things. We didn't have to useCallback. We didn't have to do any of that stuff that you might have to worry about.

Scott Tolinski

Yeah. In general, client side JavaScript performance was nice and easy. Beautiful. Alright. Thanks for tuning in. Catch you later. Peace.

Scott Tolinski

Head on over to syntax.fm for a full archive of all of our shows, and don't forget to subscribe in your podcast player or drop a review if you like this show.
