Giles' Blog

> Giles' Blog is the personal technical blog for Giles Thomas, a software engineer and entrepreneur.

Lines: 669 · Sections: 46


llms.txt Preview

# Giles' Blog

> Giles' Blog is the personal technical blog for Giles Thomas, a software engineer
> and entrepreneur.  Current information about Giles Thomas can be found on the
> [about page](https://www.gilesthomas.com/about.md).

This page lists the 20 most recent posts, and then all categorised posts by their
category (many posts have multiple categories).

## Recent posts

* [Writing an LLM from scratch, part 31 -- the models are now on Hugging Face](https://www.gilesthomas.com/2026/01/llm-from-scratch-31-models-on-hugging-face.md) posted on 2026-01-17T19:45:00+00:00
* [Writing an LLM from scratch, part 30 -- digging into the LLM-as-a-judge results](https://www.gilesthomas.com/2026/01/llm-from-scratch-30-digging-into-llm-as-a-judge.md) posted on 2026-01-09T01:15:00+00:00
* [Writing an LLM from scratch, part 29 -- using DistributedDataParallel to train a base model from scratch in the cloud](https://www.gilesthomas.com/2026/01/llm-from-scratch-29-ddp-training-a-base-model-in-the-cloud.md) posted on 2026-01-07T20:40:00+00:00
* [Writing an LLM from scratch, part 28 -- training a base model from scratch on an RTX 3090](https://www.gilesthomas.com/2025/12/llm-from-scratch-28-training-a-base-model-from-scratch.md) posted on 2025-12-02T18:15:00+00:00
* [Why smart instruction-following makes prompt injection easier](https://www.gilesthomas.com/2025/11/smart-instruction-following-and-prompt-injection.md) posted on 2025-11-12T19:00:00+00:00
* [Writing an LLM from scratch, part 27 -- what's left, and what's next?](https://www.gilesthomas.com/2025/11/llm-from-scratch-27-whats-left-and-whats-next.md) posted on 2025-11-04T00:40:00+00:00
* [Writing an LLM from scratch, part 26 -- evaluating the fine-tuned model](https://www.gilesthomas.com/2025/11/llm-from-scratch-26-evaluating-the-fine-tuned-model.md) posted on 2025-11-03T19:40:00+00:00
* [Writing an LLM from scratch, part 25 -- instruction fine-tuning](https://www.gilesthomas.com/2025/10/llm-from-scratch-25-instruction-fine-tuning.md) posted on 2025-10-29T23:40:00+00:00
* [Writing an LLM from scratch, part 24 -- the transcript hack](https://www.gilesthomas.com/2025/10/llm-from-scratch-24-the-transcript-hack.md) posted on 2025-10-28T20:15:00+00:00
* [A classifier using Qwen3](https://www.gilesthomas.com/2025/10/a-classifier-using-qwen3.md) posted on 2025-10-24T23:30:00+00:00
* [Retro Language Models: Rebuilding Karpathy’s RNN in PyTorch](https://www.gilesthomas.com/2025/10/retro-language-models-rebuilding-karpathys-rnn-in-pytorch.md) posted on 2025-10-24T19:00:00+00:00
* [Writing an LLM from scratch, part 23 -- fine-tuning for classification](https://www.gilesthomas.com/2025/10/llm-from-scratch-23-fine-tuning-classification.md) posted on 2025-10-22T23:40:00+00:00
* [Writing an LLM from scratch, part 22 -- finally training our LLM!](https://www.gilesthomas.com/2025/10/llm-from-scratch-22-finally-training-our-llm.md) posted on 2025-10-15T23:40:00+00:00
* [Revisiting Karpathy’s 'The Unreasonable Effectiveness of Recurrent Neural Networks'](https://www.gilesthomas.com/2025/10/revisiting-karpathy-unreasonable-effectiveness-rnns.md) posted on 2025-10-11T01:00:00+00:00
* [Writing an LLM from scratch, part 21 -- perplexed by perplexity](https://www.gilesthomas.com/2025/10/llm-from-scratch-21-perplexed-by-perplexity.md) posted on 2025-10-07T20:00:00+00:00
* [Writing an LLM from scratch, part 20 -- starting training, and cross entropy loss](https://www.gilesthomas.com/2025/10/llm-from-scratch-20-starting-training-cross-entropy-loss.md) posted on 2025-10-02T22:10:00+00:00
* [How do LLMs work?](https://www.gilesthomas.com/2025/09/how-do-llms-work.md) posted on 2025-09-15T23:20:00+00:00
* [An addendum to 'the maths you need to start understanding LLMs'](https://www.gilesthomas.com/2025/09/maths-for-llms-addendum.md) posted on 2025-09-08T18:15:00+00:00
* [The maths you need to start understanding LLMs](https://www.gilesthomas.com/2025/09/maths-for-llms.md) posted on 2025-09-02T23:30:00+00:00
* [What AI chatbots are actually doing under the hood](https://www.gilesthomas.com/2025/08/what-ai-chatbots-are-doing-under-the-hood.md) posted on 2025-08-29T20:00:00+00:00
## Posts in category AI

* [Evolution in action](https://www.gilesthomas.com/2008/10/evolution-in-action.md) posted on 2008-10-03T17:52:56+00:00
* [Building an AI chatbot for beginners: part 0](https://www.gilesthomas.com/2023/03/ai-llm-bot-beginners-tutorial-00.md) posted on 2023-03-19T20:45:00+00:00
* [Building an AI chatbot for beginners: part 1](https://www.gilesthomas.com/2023/03/ai-llm-bot-beginners-tutorial-01.md) posted on 2023-03-19T21:45:00+00:00
* [Building an AI chatbot for beginners: part 2](https://www.gilesthomas.com/2023/04/ai-llm-bot-beginners-tutorial-02.md) posted on 2023-04-04T19:45:00+00:00
* [Giving up on the AI chatbot tutorial (for now)](https://www.gilesthomas.com/2024/02/giving-up-on-tutorial-and-link-to-new-pythonanywhere-blog-post.md) posted on 2024-02-27T20:45:00+00:00
* [LLM Quantisation Weirdness](https://www.gilesthomas.com/2024/02/llm-quantisation-weirdness.md) posted on 2024-02-27T22:45:00+00:00
* [Messing around with fine-tuning LLMs](https://www.gilesthomas.com/2024/04/fine-tuning.md) posted on 2024-04-27T22:45:00+00:00
* [Messing around with fine-tuning LLMs, part 2 -- to the cloud!](https://www.gilesthomas.com/2024/04/fine-tuning-2.md) posted on 2024-04-28T22:45:00+00:00
* [Messing around with fine-tuning LLMs, part 3 -- moar GPUs](https://www.gilesthomas.com/2024/05/fine-tuning-3.md) posted on 2024-05-15T23:45:00+00:00
* [Messing around with fine-tuning LLMs, part 4 -- training cross-GPU.](https://www.gilesthomas.com/2024/05/fine-tuning-4.md) posted on 2024-05-21T21:45:00+00:00
* [Messing around with fine-tuning LLMs, part 5 -- exploring memory usage](https://www.gilesthomas.com/2024/07/fine-tuning-5.md) posted on 2024-07-05T17:45:00+00:00
* [Messing around with fine-tuning LLMs, part 6 -- measuring memory usage more systematically](https://www.gilesthomas.com/2024/07/fine-tuning-6.md) posted on 2024-07-10T23:45:00+00:00
* [Messing around with fine-tuning LLMs, part 7 -- detailed memory usage across sequence lengths for an 8B model](https://www.gilesthomas.com/2024/08/fine-tuning-7.md) posted on 2024-08-16T23:45:00+00:00
* [Messing around with fine-tuning LLMs, part 8 -- detailed memory usage across batch sizes](https://www.gilesthomas.com/2024/08/fine-tuning-8.md) posted on 2024-08-25T23:00:00+00:00
* [Messing around with fine-tuning LLMs, part 9 -- gradient checkpointing](https://www.gilesthomas.com/2024/09/fine-tuning-9.md) posted on 2024-09-03T23:00:00+00:00
* [Messing around with fine-tuning LLMs, part 10 -- finally training the model!](https://www.gilesthomas.com/2024/12/fine-tuning-10.md) posted on 2024-12-22T19:00:00+00:00
* [Writing an LLM from scratch, part 1](https://www.gilesthomas.com/2024/12/llm-from-scratch-1.md) posted on 2024-12-22T21:00:00+00:00
* [Writing an LLM from scratch, part 2](https://www.gilesthomas.com/2024/12/llm-from-scratch-2.md) posted on 2024-12-23T21:00:00+00:00
* [Writing an LLM from scratch, part 3](https://www.gilesthomas.com/2024/12/llm-from-scratch-3.md) posted on 2024-12-26T22:30:00+00:00
* [Writing an LLM from scratch, part 4](https://www.gilesthomas.com/2024/12/llm-from-scratch-4.md) posted on 2024-12-28T22:30:00+00:00
* [An AI chatroom (beginnings)](https://www.gilesthomas.com/2024/12/ai-chatroom-1.md) posted on 2024-12-29T23:15:00+00:00
* [An AI chatroom (a few steps further)](https://www.gilesthomas.com/2024/12/ai-chatroom-2.md) posted on 2024-12-30T23:15:00+00:00
* [Writing an LLM from scratch, part 5 -- more on self-attention](https://www.gilesthomas.com/2025/01/llm-from-scratch-5-self-attention.md) posted on 2025-01-11T23:30:00+00:00
* [Do reasoning LLMs need their own Philosophical Language?](https://www.gilesthomas.com/2025/01/philosophical-language-llm.md) posted on 2025-01-16T23:30:00+00:00
* [Writing an LLM from scratch, part 6 -- starting to code self-attention](https://www.gilesthomas.com/2025/01/llm-from-scratch-6-coding-self-attention-part-1.md) posted on 2025-01-21T22:30:00+00:00
* [Writing an LLM from scratch, part 6b -- a correction](https://www.gilesthomas.com/2025/01/llm-from-scratch-6b-correction.md) posted on 2025-01-28T22:30:00+00:00
* [Writing an LLM from scratch, part 7 -- wrapping up non-trainable self-attention](https://www.gilesthomas.com/2025/02/llm-from-scratch-7-coding-self-attention-part-2.md) posted on 2025-02-07T21:30:00+00:00
* [On the perils of AI-first debugging -- or, why Stack Overflow still matters in 2025](https://www.gilesthomas.com/2025/02/ai-debugging-is-not-always-the-solution.md) posted on 2025-02-19T02:30:00+00:00
* [Basic matrix maths for neural networks: the theory](https://www.gilesthomas.com/2025/02/basic-neural-network-matrix-maths-part-1.md) posted on 2025-02-20T22:45:00+00:00
* [Basic matrix maths for neural networks: in practice](https://www.gilesthomas.com/2025/02/basic-neural-network-matrix-maths-part-2.md) posted on 2025-02-22T23:45:00+00:00
* [Writing an LLM from scratch, part 8 -- trainable self-attention](https://www.gilesthomas.com/2025/03/llm-from-scratch-8-trainable-self-attention.md) posted on 2025-03-04T21:30:00+00:00
* [Writing an LLM from scratch, part 9 -- causal attention](https://www.gilesthomas.com/2025/03/llm-from-scratch-9-causal-attention.md) posted on 2025-03-09T23:30:00+00:00
* [Adding /llms.txt](https://www.gilesthomas.com/2025/03/llmstxt.md) posted on 2025-03-18T22:30:00+00:00
* [Writing an LLM from scratch, part 10 -- dropout](https://www.gilesthomas.com/2025/03/llm-from-scratch-10-dropout.md) posted on 2025-03-19T23:30:00+00:00
* [Dropout and mandatory vacation](https://www.gilesthomas.com/2025/03/dropout-and-mandatory-vacation.md) posted on 2025-03-24T23:45:00+00:00
* [Writing an LLM from scratch, part 11 -- batches](https://www.gilesthomas.com/2025/04/llm-from-scratch-11-batches.md) posted on 2025-04-19T23:00:00+00:00
* [Writing an LLM from scratch, part 12 -- multi-head attention](https://www.gilesthomas.com/2025/04/llm-from-scratch-12-multi-head-attention.md) posted on 2025-04-21T23:00:00+00:00
* [Writing an LLM from scratch, part 13 -- the 'why' of attention, or: attention heads are dumb](https://www.gilesthomas.com/2025/05/llm-from-scratch-13-taking-stock-part-1-attention-heads-are-dumb.md) posted on 2025-05-08T22:00:00+00:00
* [Writing an LLM from scratch, part 14 -- the complexity of self-attention at scale](https://www.gilesthomas.com/2025/05/llm-from-scratch-14-taking-stock-part-2-the-complexity-of-self-attention-at-scale.md) posted on 2025-05-14T21:00:00+00:00
* [Writing an LLM from scratch, part 15 -- from context vectors to logits; or, can it really be that simple?!](https://www.gilesthomas.com/2025/05/llm-from-scratch-15-from-context-vectors-to-logits.md) posted on 2025-05-31T23:55:00+00:00
* [Writing an LLM from scratch, part 16 -- layer normalisation](https://www.gilesthomas.com/2025/07/llm-from-scratch-16-layer-normalisation.md) posted on 2025-07-08T18:50:00+00:00
* [Writing an LLM from scratch, part 17 -- the feed-forward network](https://www.gilesthomas.com/2025/08/llm-from-scratch-17-the-feed-forward-network.md) posted on 2025-08-12T23:00:00+00:00
* [The fixed length bottleneck and the feed forward network](https://www.gilesthomas.com/2025/08/the-fixed-length-bottleneck-and-the-feed-forward-network.md) posted on 2025-08-14T23:00:00+00:00
* [Writing an LLM from scratch, part 18 -- residuals, shortcut connections, and the Talmud](https://www.gilesthomas.com/2025/08/llm-from-scratch-18-residuals-shortcut-connections-and-the-talmud.md) posted on 2025-08-18T20:20:00+00:00
* [Writing an LLM from scratch, part 19 -- wrapping up Chapter 4](https://www.gilesthomas.com/2025/08/llm-from-scratch-19-wrapping-up-chapter-4.md) posted on 2025-08-29T17:00:00+00:00
* [What AI chatbots are actually doing under the hood](https://www.gilesthomas.com/2025/08/what-ai-chatbots-are-doing-under-the-hood.md) posted on 2025-08-29T20:00:00+00:00
* [The maths you need to start understanding LLMs](https://www.gilesthomas.com/2025/09/maths-for-llms.md) posted on 2025-09-02T23:30:00+00:00
* [An addendum to 'the maths you need to start understanding LLMs'](https://www.gilesthomas.com/2025/09/maths-for-llms-addendum.md) posted on 2025-09-08T18:15:00+00:00
* [How do LLMs work?](https://www.gilesthomas.com/2025/09/how-do-llms-work.md) posted on 2025-09-15T23:20:00+00:00
* [Writing an LLM from scratch, part 20 -- starting training, and cross entropy loss](https://www.gilesthomas.com/2025/10/llm-from-scratch-20-starting-training-cross-entropy-loss.md) posted on 2025-10-02T22:10:00+00:00
* [Writing an LLM from scratch, part 21 -- perplexed by perplexity](https://www.gilesthomas.com/2025/10/llm-from-scratch-21-perplexed-by-perplexity.md) posted on 2025-10-07T20:00:00+00:00
* [Revisiting Karpathy’s 'The Unreasonable Effectiveness of Recurrent Neural Networks'](https://www.gilesthomas.com/2025/10/revisiting-karpathy-unreasonable-effectiveness-rnns.md) posted on 2025-10-11T01:00:00+00:00
* [Writing an LLM from scratch, part 22 -- finally training our LLM!](https://www.gilesthomas.com/2025/10/llm-from-scratch-22-finally-training-our-llm.md) posted on 2025-10-15T23:40:00+00:00
* [Writing an LLM from scratch, part 23 -- fine-tuning for classification](https://www.gilesthomas.com/2025/10/llm-from-scratch-23-fine-tuning-classification.md) posted on 2025-10-22T23:40:00+00:00
* [Retro Language Models: Rebuilding Karpathy’s RNN in PyTorch](https://www.gilesthomas.com/2025/10/retro-language-models-rebuilding-karpathys-rnn-in-pytorch.md) posted on 2025-10-24T19:00:00+00:00
* [A classifier using Qwen3](https://www.gilesthomas.com/2025/10/a-classifier-using-qwen3.md) posted on 2025-10-24T23:30:00+00:00
* [Writing an LLM from scratch, part 24 -- the transcript hack](https://www.gilesthomas.com/2025/10/llm-from-scratch-24-the-transcript-hack.md) posted on 2025-10-28T20:15:00+00:00
* [Writing an LLM from scratch, part 25 -- instruction fine-tuning](https://www.gilesthomas.com/2025/10/llm-from-scratch-25-instruction-fine-tuning.md) posted on 2025-10-29T23:40:00+00:00
* [Writing an LLM from scratch, part 26 -- evaluating the fine-tuned model](https://www.gilesthomas.com/2025/11/llm-from-scratch-26-evaluating-the-fine-tuned-model.md) posted on 2025-11-03T19:40:00+00:00
* [Writing an LLM from scratch, part 27 -- what's left, and what's next?](https://www.gilesthomas.com/2025/11/llm-from-scratch-27-whats-left-and-whats-next.md) posted on 2025-11-04T00:40:00+00:00
* [Why smart instruction-following makes prompt injection easier](https://www.gilesthomas.com/2025/11/smart-instruction-following-and-prompt-injection.md) posted on 2025-11-12T19:00:00+00:00
* [Writing an LLM from scratch, part 28 -- training a base model from scratch on an RTX 3090](https://www.gilesthomas.com/2025/12/llm-from-scratch-28-training-a-base-model-from-scratch.md) posted on 2025-12-02T18:15:00+00:00
* [Writing an LLM from scratch, part 29 -- using DistributedDataParallel to train a base model from scratch in the cloud](https://www.gilesthomas.com/2026/01/llm-from-scratch-29-ddp-training-a-base-model-in-the-cloud.md) posted on 2026-01-07T20:40:00+00:00
* [Writing an LLM from scratch, part 30 -- digging into the LLM-as-a-judge results](https://www.gilesthomas.com/2026/01/llm-from-scratch-30-digging-into-llm-as-a-judge.md) posted on 2026-01-09T01:15:00+00:00
* [Writing an LLM from scratch, part 31 -- the models are now on Hugging Face](https://www.gilesthomas.com/2026/01/llm-from-scratch-31-models-on-hugging-face.md) posted on 2026-01-17T19:45:00+00:00
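Every post entry in the lists above follows the same markdown-bullet convention: `* [title](url) posted on <ISO-8601 timestamp>`. A minimal sketch of how a consumer might parse one of these lines, assuming that convention holds throughout the file (the regex and `parse_entry` helper below are illustrative, not part of any llms.txt tooling):

```python
import re
from datetime import datetime

# One bullet line per post: "* [title](url) posted on <ISO-8601 timestamp>"
ENTRY_RE = re.compile(
    r"^\* \[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\) posted on (?P<date>\S+)$"
)

def parse_entry(line: str):
    """Parse a single post bullet into (title, url, datetime), or None if it doesn't match."""
    m = ENTRY_RE.match(line.strip())
    if m is None:
        return None
    return (
        m.group("title"),
        m.group("url"),
        datetime.fromisoformat(m.group("date")),
    )

line = (
    "* [How do LLMs work?](https://www.gilesthomas.com/2025/09/how-do-llms-work.md) "
    "posted on 2025-09-15T23:20:00+00:00"
)
title, url, when = parse_entry(line)
```

Because the timestamps are full ISO 8601 with a UTC offset, `datetime.fromisoformat` handles them directly, and parsed entries can be sorted chronologically without any extra format handling.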
