In recent days, we faced a task that would have taken weeks of manual work without the support of artificial intelligence: analyzing, summarizing, and categorizing thousands of web pages. We originally wanted to use the GPT-4 model for this purpose, but the cost would quickly have become prohibitive. Then we discovered the new 4.1 series models - specifically 4.1-nano and 4.1-mini - and the project took on a whole new dimension.
Huge context window (1,000,000 tokens): allows the model to process entire documents or large batches of pages at once, without losing context.
Low cost: We processed over 30 M tokens for only $3.
Speed: Sending requests in parallel batches significantly reduces total processing time (see the sketch after this list).
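For illustration, here is a minimal sketch of that kind of parallel batching, assuming the official openai Python client. The model name, prompt, worker count, and the helper names summarize and summarize_all are our own placeholders, not the exact code of our pipeline.

```python
# Minimal sketch of parallel batch summarization (illustrative, not production code).
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(page_text: str) -> str:
    """Ask gpt-4.1-nano for a short rough summary of one page."""
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[
            {"role": "system", "content": "Summarize this web page in 2-3 sentences."},
            {"role": "user", "content": page_text},
        ],
    )
    return response.choices[0].message.content

def summarize_all(page_texts: list[str], workers: int = 16) -> list[str]:
    """Fan the requests out across a thread pool; results keep the input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(summarize, page_texts))
```

A simple thread pool is enough here because the work is I/O-bound; the pool size mainly needs to stay within the API rate limits for your account.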
4.1-nano: Cheap prototyping and fast rough summarization (e.g., 10,000 pages for ~$2).
4.1-mini: Higher accuracy for detailed processing and analysis (5 M tokens for ~$1); a sketch of this two-model split follows below.
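To make the division of labor concrete, here is a rough sketch of a two-stage pass: a cheap rough summary per page with 4.1-nano, followed by a more careful categorization with 4.1-mini. The prompts, category list, and function names are illustrative assumptions rather than our production code.

```python
# Illustrative two-stage split between the models, assuming the official `openai` client.
from openai import OpenAI

client = OpenAI()

def rough_summary(page_text: str) -> str:
    """Cheap first pass: a 2-3 sentence summary with gpt-4.1-nano."""
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[
            {"role": "system", "content": "Summarize this web page in 2-3 sentences."},
            {"role": "user", "content": page_text},
        ],
    )
    return response.choices[0].message.content

def categorize(summary: str) -> str:
    """Second pass: a more accurate category label with gpt-4.1-mini."""
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {"role": "system", "content": "Assign one category: news, product, documentation, blog, or other. Reply with the label only."},
            {"role": "user", "content": summary},
        ],
    )
    return response.choices[0].message.content.strip()
```

Because the second pass works on short summaries rather than full pages, the more expensive mini-priced tokens are kept to a minimum.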
Rough summarization of 2,000+ pages in less than an hour for ~1 USD (4.1-nano).
Detailed processing and analysis for ~2 USD (4.1-mini); a quick back-of-the-envelope cost check follows below.
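As a sanity check on those figures, a tiny estimator is enough. The rate used below is the blended effective rate implied by our own run (roughly 30 M tokens for ~$3), not an official price list, so check current model pricing before planning a budget.

```python
# Back-of-the-envelope cost estimator; the rate is an effective blended value, not list pricing.
def estimate_cost_usd(total_tokens: int, usd_per_million_tokens: float) -> float:
    return total_tokens / 1_000_000 * usd_per_million_tokens

if __name__ == "__main__":
    # Reproduces the headline figure above: ~30 M tokens at ~$0.10/M comes to about $3.
    print(estimate_cost_usd(30_000_000, 0.10))  # -> 3.0
```

In practice, the token counts can be summed from response.usage.total_tokens on each API reply.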
With GPT-4.1-nano and 4.1-mini, we have moved from weeks of expensive manual processing to a fast, cheap, and fully automated pipeline. It saves us both time and money and has become an indispensable tool for analyzing large volumes of text.