Is Jekyll CPU bound or disk bound?

I work with Jekyll daily and am considering upgrading my i7-6700K-based system to improve build times. I already have a fast NVMe drive used for the build, and from what I can see the Jekyll build pushes one CPU core to 100%, so I am thinking the build process is CPU bound.

Can someone confirm that Jekyll is typically CPU bound when building from an NVMe disk? If so, setting up some kind of striped NVMe RAID array isn’t going to make any difference. It would seem the fastest Intel i9 would be the best option for my current build.

Does anyone have experience with upgrading their CPU and seeing a measurable impact on performance? I’ve already optimized how my hand-built template is designed to avoid the usual performance pitfalls, and I am already on Jekyll v4.0.

Thanks.

Not an answer to your question, but have you tried using Gulp to handle some of the asset processing? For instance, Gulp can handle Sass processing much faster than Jekyll, and that is one less thing for Jekyll to do, which makes everything else it does faster.

You can also have Gulp handle the images, so Jekyll isn’t nuking the images and copying them all over again each time you edit one file.
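As a rough illustration of that split, here is a minimal gulpfile sketch. It assumes the gulp, gulp-sass, sass, and gulp-newer packages are installed, and the source/destination paths are just placeholders for whatever your project actually uses:

```javascript
// gulpfile.js — minimal sketch; paths are illustrative, not from the thread.
const gulp = require('gulp');
const sass = require('gulp-sass')(require('sass'));
const newer = require('gulp-newer');

// Compile Sass outside of Jekyll and write the CSS straight into _site,
// so Jekyll never has to run its own Sass converter.
function styles() {
  return gulp.src('assets/css/**/*.scss')
    .pipe(sass().on('error', sass.logError))
    .pipe(gulp.dest('_site/assets/css'));
}

// Copy only images that are new or modified since the last build;
// gulp-newer skips anything whose destination copy is already up to date.
function images() {
  return gulp.src('assets/img/**/*')
    .pipe(newer('_site/assets/img'))
    .pipe(gulp.dest('_site/assets/img'));
}

exports.build = gulp.parallel(styles, images);
```

You would then run this alongside jekyll build (or from a watch task), instead of letting Jekyll process those directories itself.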

What are your build times, and what are some site stats, like the number of pages/images/documents?

Completely anecdotal, but in my tests, slowdowns for large sites are due to I/O-intensive tasks like reading/writing files.

In some of my sites with gigabytes of image assets, Jekyll would slow down copying those files to _site each build. I went as far as doing what @rdyar suggested and pulled those tasks out of Jekyll. Anything related to assets (Sassifying CSS, bundling JS, resizing/moving images) I dealt with in Gulp. I had more control and could touch only files that were modified or newer than the previous build, so it wasn’t trying to move large amounts of data around each build.
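For that approach to work, Jekyll has to be told to leave those directories alone. The relevant _config.yml options are exclude (Jekyll won’t read or copy the sources) and keep_files (Jekyll won’t wipe Gulp’s output from _site on regeneration); the paths below are illustrative:

```yaml
# _config.yml — illustrative paths, not from the thread
exclude:
  - assets/img        # Gulp copies these incrementally instead
  - assets/css        # Sass sources compiled by Gulp

keep_files:
  - assets/img        # don't delete Gulp's output from _site on rebuild
  - assets/css
```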

So anything you can do to speed up the disk helps, e.g. installing a fast SSD.

CPU bottlenecks might come from templating (e.g. going overboard with for loops on site.posts or other large objects), though I don’t have hard numbers on that. I’ve built the same repo on different platforms with varying CPUs and OSes, and the differences are usually seconds, not minutes.
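A typical example of that kind of templating cost (illustrative, not from the thread): a loop over site.posts in a layout or include runs once per page that uses it, so the work grows with pages × posts. Liquid’s limit parameter keeps the scan small:

```liquid
{% comment %} Rendered on every page: iterates all posts each time {% endcomment %}
{% for post in site.posts %}
  <a href="{{ post.url }}">{{ post.title }}</a>
{% endfor %}

{% comment %} Cheaper: only the five most recent posts {% endcomment %}
{% for post in site.posts limit: 5 %}
  <a href="{{ post.url }}">{{ post.title }}</a>
{% endfor %}
```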

Optimizing disk-related tasks had the bigger impact, which easily shaved minutes off my builds.

I’m already running a 2 TB Gen 4 NVMe SSD, so that is about as fast as I can get without striping two of them.