I’ve been trying to find ways to improve build times, though I’m also willing to accept long builds if that is just the way it is going to be. Still, minimal laptop hang-up time would be ideal.
When I run JEKYLL_ENV=production jekyll build --profile, the profile reports a total build time of 35 seconds, but an actual build takes 4 to 5 minutes. I think the profile does not account for copying images.
My site has LOTS of images. It is an image-based site, so there are 4,948 images taking up 1.4 GB of disk space.
In my repo, I have a production config file AND a stage config file for development (I set that up early in the site-creation process because I thought it might be more performant). Local development builds take between 14 and 20 seconds even without the incremental -I flag. The images seem to get written to the stage folder once and rewritten only if they ever change. What I think is happening with production builds is that the /property-img folder is rebuilt every single time, even though I use the -I flag. The bulk of the build time beyond what Jekyll reports would therefore be copying the 1.4 GB of photo content from source to the _prod folder.
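The relevant part of the split is just the output destination each config sets, roughly like this (a sketch; destination is the real Jekyll config key, the folder names match what I described above):

```yaml
# _config.yml (stage/development)
destination: _stage
---
# _config_prod.yml (production, selected via --config)
destination: _prod
```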
My development watch command is simply jekyll serve. My production build command is JEKYLL_ENV=production jekyll build --config _config_prod.yml -I
Does anyone have any performance tips that I can try?
You should look into using something else to manage the images so Jekyll doesn’t have to.
I used Gulp to do this and it worked great: I had Gulp handle all the assets, so images, JS, and CSS/Sass. It worked super well, and for your use case it would be a really good idea.
Whenever Jekyll builds, it wipes out the entire site folder, so if you have tons of assets this is really bad. I never used the incremental stuff, and as far as I know it is still experimental, so I don’t think it will always work as you expect.
If you use something else to process your static assets, then in the config file you can tell Jekyll not to wipe out certain parts of the _site folder. You could put all your images into a folder named _assets in the root of your repo, tell Gulp to watch it for changes and copy things over to the site folder when needed, and then tell Jekyll to NOT delete that assets folder. If it all works, you get very fast build times, since Jekyll no longer has to copy all the static files on each build; see the config sketch below. There can be some glitches with this, and if you no longer need certain images you have to delete them from the output yourself, but I think by the end I had all the glitches covered.
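The Jekyll side is just a couple of config keys. A minimal sketch, assuming your images live in _assets and get copied to _site/assets (keep_files and exclude are real Jekyll options; the folder names are placeholders):

```yaml
# _config.yml (sketch)
exclude:
  - _assets        # keep Jekyll from processing the source images itself
keep_files:
  - assets         # don't delete _site/assets when Jekyll wipes the site
```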
Here is a video I made several years ago on this: Speed up Jekyll with Gulp - YouTube. Check the description for a link to a post where you can copy most of the code for the Gulp file.
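The core of the Gulp file is small. A rough sketch using the Gulp 4 API (the folder names are assumptions, and this leaves out the glitch handling mentioned above):

```js
// gulpfile.js (sketch)
const { src, dest, watch, series } = require('gulp');

// Copy source assets straight into Jekyll's destination folder.
function copyAssets() {
  return src('_assets/**/*').pipe(dest('_site/assets'));
}

// Re-copy whenever anything in _assets changes; Jekyll never touches these files.
function watchAssets() {
  watch('_assets/**/*', copyAssets);
}

exports.default = series(copyAssets, watchAssets);
```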
I no longer do this, as Jekyll got faster and managing Gulp and all that was a little complicated; I now just do it the normal Jekyll way. But I don’t have anywhere near that many images.
I’d also look at keeping your images in S3 or some other place besides inside your repo with the rest of your site, if only to keep the repo small and fast to clone. I sure hope you are optimizing them all.
@mmistakes also did something similar as far as managing his assets outside of Jekyll; I think he has a post about it somewhere too.
I also tried doing the same thing via npm scripts instead of Gulp and it worked fine, though Gulp was faster for some reason.
I ran one of your full-size images through TinyPNG and it said it could save you 22% on the file size (not resizing it, just optimizing it).
One of the things I did was to run all the images in the source through imagemin; the Gulp script I had has an option to do that (sketch below). I would just do it every now and then, not on every build.
For that much data it would be a really good thing to optimize them as much as possible.
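Something along these lines, assuming the gulp-imagemin plugin (note that newer versions of gulp-imagemin are ESM-only, so check the version you install):

```js
// Occasional, in-place optimization pass (sketch).
const { src, dest } = require('gulp');
const imagemin = require('gulp-imagemin'); // require() works up to v7; v8+ is ESM-only

function optimizeImages() {
  return src('_assets/**/*.{png,jpg,jpeg,gif,svg}')
    .pipe(imagemin())
    .pipe(dest('_assets')); // overwrite the originals in place
}

exports.optimize = optimizeImages;
```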
Jekyll simply copies your image files from source to destination.
So if your images take 1.4 GB of disk space at the source, they take another 1.4 GB inside the _site directory, thereby doubling the local disk usage.
If your templates reference the image files only statically (I did not look into the linked source repo), you could simply use a CDN to host the images for production use.
However, if your site references the image files dynamically, e.g. via {{ file.url }}, then you may need to come up with a smarter solution. In outline, that solution would involve Jekyll building (and deploying to the remote server) with highly optimized, low-res image files, but requesting the original high-res images (via CDN) during an end-user page visit.
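For the static case, the swap can be as simple as prefixing image paths with a CDN host in the production config. A sketch, where cdn_url is an assumed custom key (not a Jekyll built-in) and the image path is made up:

```liquid
{% comment %}
  Assumes _config_prod.yml sets e.g. `cdn_url: https://cdn.example.com`
  while the dev config leaves it unset, so images are served locally in dev.
{% endcomment %}
<img src="{{ site.cdn_url | default: '' }}/property-img/{{ page.image }}" alt="{{ page.title }}">
```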
I can’t honestly say I know what your exact problem is, because my site builds in 15 seconds and it has over 1,100 pages and 4,300 nested summary/details elements for post tags.
GitHub Pages has a quota of 1 GB. To side-step the quota, what I’ve done with the 9 videos in this blog post is publish each video as a GitHub comment, then link to that comment in Markdown/kramdown to view the video.
The videos were initially in .mkv format and have to be converted to .mp4 for GitHub. The conversion process also compresses them, so an .mkv over 10 MB comes out at 7.4 MB as an .mp4.
The disadvantage is that each video used in the GitHub Pages site takes a minute or so to add as a comment and then reference in HTML:
1. Open a new issue and post the video into a GitHub comment.
2. Open your web browser’s inspector.
3. Hover over the video and copy the inner HTML.
The last line, style=, is the one I have to paste new properties into each time.
I’ve pasted the raw HTML below, but the video doesn’t show up in Jekyll Talk; it only works in GitHub Pages.
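(The paste doesn’t survive here, but the copied markup looks roughly like this; the src URL is a placeholder for the one GitHub generates when the comment is posted:)

```html
<video src="https://user-images.githubusercontent.com/.../video.mp4"
       controls muted
       style="max-width: 100%;">
</video>
```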
The advantage of this technique is that you are allowed videos up to 10 MB in size EACH, and none of them count towards your disk quota because they are GitHub comments. At least, that is my understanding.
Another advantage is that GitHub comments are only rendered/built/deployed once, when posted. My site, on the other hand, has 2,109 commits and counting, and on every build the regular .png and .gif images are processed again even though they haven’t changed.
Clearly this isn’t the answer you are looking for, judging by the other answers, but it may give some ideas, if not for you then for others following this thread.
One thing I’ve noticed is that static-heavy Jekyll builds are highly dependent on the type of file system used. For example, one site I work on is 11 GB (many videos, PDFs, presentations, etc.) and builds in ~14 seconds on Apple’s APFS, but takes ~250 seconds (17x slower) on a Linux machine with ext4.
I believe the difference is that one file system supports copy-on-write (APFS) and the other doesn’t (ext4). With copy-on-write, large files are only ever truly copied when they are modified, which rarely happens to image/movie files. That means when Jekyll copies large files, only a few bytes of metadata change on disk, so builds are faster.
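You can see the file-system difference from the shell (this demonstrates the cloning behavior itself; whether your Ruby/Jekyll copy path actually triggers it is a separate question):

```sh
# macOS / APFS: -c clones via clonefile(2); blocks are shared on disk,
# so the "copy" is near-instant regardless of file size
cp -c big-video.mp4 clone.mp4

# Linux: reflink copies work on Btrfs and XFS but fail on ext4,
# which has no copy-on-write cloning
cp --reflink=always big-video.mp4 clone.mp4
```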