Cog's 2,498-line llms.txt shows what thorough AI preparation looks like
Cog is an open-source tool that lets you package machine learning models in a standard, production-ready container.
- 2,498 lines (+77% vs. average)
- 59 sections (+146% vs. average)
- 742+ companies using llms.txt
- 1 file: llms.txt
Key Insights
- Comprehensive structure: 59 distinct sections give AI systems well-organized coverage.
- Comprehensive detail: 2,498 lines of in-depth documentation.
llms.txt Preview (first 100 lines of 2,498 total)
# Cog: Containers for machine learning
Cog is an open-source tool that lets you package machine learning models in a standard, production-ready container.
You can deploy your packaged model to your own infrastructure, or to [Replicate](https://replicate.com/).
## Highlights
- 📦 **Docker containers without the pain.** Writing your own `Dockerfile` can be a bewildering process. With Cog, you define your environment with a [simple configuration file](#how-it-works) and it generates a Docker image with all the best practices: Nvidia base images, efficient caching of dependencies, installing specific Python versions, sensible environment variable defaults, and so on.
- 🤬 **No more CUDA hell.** Cog knows which CUDA/cuDNN/PyTorch/Tensorflow/Python combos are compatible and will set it all up correctly for you.
- ✅ **Define the inputs and outputs for your model with standard Python.** Then, Cog generates an OpenAPI schema and validates the inputs and outputs with Pydantic (see the sketch after this list).
- 🎁 **Automatic HTTP prediction server**: Your model's types are used to dynamically generate a RESTful HTTP API using [FastAPI](https://fastapi.tiangolo.com/).
- 🥞 **Automatic queue worker.** Long-running deep learning models or batch processing is best architected with a queue. Cog models do this out of the box. Redis is currently supported, with more in the pipeline.
- ☁️ **Cloud storage.** Files can be read and written directly to Amazon S3 and Google Cloud Storage. (Coming soon.)
- 🚀 **Ready for production.** Deploy your model anywhere that Docker images run. Your own infrastructure, or [Replicate](https://replicate.com).
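To make the schema-and-validation point concrete, here is a minimal sketch of typed, constrained inputs. The predictor, its parameter names, and their values are illustrative rather than taken from the Cog docs; `Input` constraints such as `ge`, `le`, and `choices` feed both the generated OpenAPI schema and the Pydantic validation:

```python
# Illustrative sketch (hypothetical parameter names): typed, constrained inputs.
from cog import BasePredictor, Input

class Predictor(BasePredictor):
    def predict(
        self,
        prompt: str = Input(description="Text prompt to condition on"),
        steps: int = Input(description="Number of steps", default=25, ge=1, le=100),
        scheduler: str = Input(default="ddim", choices=["ddim", "k_euler"]),
    ) -> str:
        # Out-of-range requests (e.g. steps=500) are rejected before this runs.
        return f"{prompt} ({steps} steps, {scheduler} scheduler)"
```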
## How it works
Define the Docker environment your model runs in with `cog.yaml`:
```yaml
build:
  gpu: true
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.12"
  python_packages:
    - "torch==2.3"
predict: "predict.py:Predictor"
```
Define how predictions are run on your model with `predict.py`:
```python
from cog import BasePredictor, Input, Path
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.model = torch.load("./weights.pth")

    # The arguments and types the model takes as input
    def predict(self,
        image: Path = Input(description="Grayscale input image")
    ) -> Path:
        """Run a single prediction on the model"""
        processed_image = preprocess(image)
        output = self.model(processed_image)
        return postprocess(output)
```
In the above we accept a path to the image as an input, and return a path to our transformed image after running it through our model.
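The `preprocess` and `postprocess` helpers are left undefined in the snippet. Here is a minimal sketch of what they might look like for a grayscale-to-color model, assuming a PIL/torchvision pipeline; the tensor shapes, output file name, and library choices are assumptions, not part of Cog:

```python
# Hypothetical helpers assumed by predict.py above (not part of Cog itself).
from cog import Path
from PIL import Image
import torchvision.transforms.functional as TF

def preprocess(image_path):
    """Load a grayscale image and convert it to a (1, 1, H, W) tensor batch."""
    img = Image.open(image_path).convert("L")
    return TF.to_tensor(img).unsqueeze(0)

def postprocess(output):
    """Write the model's output tensor to disk and return its path."""
    out_path = Path("output.png")
    TF.to_pil_image(output.squeeze(0).clamp(0, 1)).save(out_path)
    return out_path
```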
Now, you can run predictions on this model:
```console
$ cog predict -i [email protected]
--> Building Docker image...
--> Running Prediction...
--> Output written to output.jpg
```
Or, build a Docker image for deployment:
```console
$ cog build -t my-colorization-model
--> Building Docker image...
--> Built my-colorization-model:latest
$ docker run -d -p 5000:5000 --gpus all my-colorization-model
$ curl http://localhost:5000/predictions -X POST \
-H 'Content-Type: application/json' \
-d '{"input": {"image": "https://.../input.jpg"}}'
```
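The same endpoint works from any HTTP client, not just curl. Here is a minimal Python sketch against the container started above; the input URL is a placeholder, and the `status` and `output` fields reflect the typical shape of Cog's prediction responses:

```python
# Minimal client sketch for the container mapped to localhost:5000 above.
import requests

resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"image": "https://example.com/input.jpg"}},  # placeholder URL
)
resp.raise_for_status()
prediction = resp.json()
print(prediction["status"])  # e.g. "succeeded"
print(prediction["output"])  # for file outputs, typically a URL or data URI
```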
Or, combine build and run via the `serve` command:
```console
$ cog serve -p 8080
$ curl http://localhost:8080/predictions -X POST \
-H 'Content-Type: application/json' \
-d '{"input": {"image": "https://.../input.jpg"}}'
```
<!-- NOTE (bfirsh): Development environment instructions intentionally left out of readme for now, so as not to confuse the "ship a model to production" message.
In development, you can also run arbitrary commands inside the Docker environment:
```console
$ cog run python train.py
```
-->

Cog is ready for AI search. Are you?
Don't get left behind: your competitors are already preparing for AI search, and 742+ companies have published llms.txt files. Cog's llms.txt has 59 organized sections ready for AI crawlers. Generate your own in minutes and join the companies optimizing for the future of search.