GPT-4.1: Efficient analysis of large volumes of text

A few days ago, we faced a challenge: summarizing and categorizing thousands of websites. Initially we considered GPT-4, but the costs were climbing fast. The new OpenAI GPT-4.1-nano and GPT-4.1-mini models changed that.

Stepan Mocjak, Executive Director

In recent days, we faced a task that would have taken weeks of manual work without the support of artificial intelligence: analysing, summarising, and categorising thousands of web pages. We originally wanted to use the GPT-4 model for this purpose, but the cost would quickly have risen to prohibitive heights. Then we discovered the new GPT-4.1 series models, specifically GPT-4.1-nano and GPT-4.1-mini, and our project took on a whole new dimension.

Key benefits of GPT-4.1

Giant context window (1,000,000 tokens): the model can process entire blocks of text at once without losing context.
Low cost: we processed over 30 million tokens for only $3.
Speed: parallel batch processing significantly reduces total processing time (see the sketch below).
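
The parallel processing mentioned above is the main speed lever. As a minimal sketch of the idea, the asynchronous snippet below fires many summarization requests at once against gpt-4.1-nano, with a semaphore to stay under rate limits. The prompt, the concurrency limit, and the `pages` variable are illustrative assumptions, not our exact production values.

```python
import asyncio
from openai import AsyncOpenAI

# Assumes OPENAI_API_KEY is set in the environment and that `pages` is a
# list of already-extracted page texts (hypothetical input for this sketch).
client = AsyncOpenAI()
semaphore = asyncio.Semaphore(20)  # cap concurrent requests to respect rate limits

async def summarize(page_text: str) -> str:
    async with semaphore:
        response = await client.chat.completions.create(
            model="gpt-4.1-nano",
            messages=[
                {"role": "system", "content": "Summarize the page in 2-3 sentences."},
                {"role": "user", "content": page_text},
            ],
        )
        return response.choices[0].message.content

async def summarize_all(pages: list[str]) -> list[str]:
    # Launch all requests concurrently; the semaphore throttles them.
    return await asyncio.gather(*(summarize(p) for p in pages))

# summaries = asyncio.run(summarize_all(pages))
```
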

Why nano and mini?

GPT-4.1-nano: cheap prototyping and fast rough summarization (e.g., 10,000 pages for ~$2).
GPT-4.1-mini: higher accuracy for detailed processing and analysis (5 million tokens for ~$1). A sketch of how the two models divide the work follows below.
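
The split in the list above suggests a two-stage pipeline: the nano model does a cheap first pass over the full page, and the mini model then works only on the short output of that pass. Here is a minimal sketch of that idea; the category list and prompts are hypothetical placeholders, not our actual project configuration.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical category set; a real project would use its own taxonomy.
CATEGORIES = ["e-commerce", "blog", "corporate", "news", "other"]

def rough_summary(page_text: str) -> str:
    # Stage 1: cheap first pass over the full page with the nano model.
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[{"role": "user", "content": f"Summarize in one paragraph:\n\n{page_text}"}],
    )
    return response.choices[0].message.content

def categorize(summary: str) -> str:
    # Stage 2: the mini model sees only the short summary, so this call
    # costs far less than sending it the full page.
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{
            "role": "user",
            "content": f"Assign exactly one category from {CATEGORIES} to this site:\n\n{summary}",
        }],
    )
    return response.choices[0].message.content
```

The design choice here is that the expensive, more accurate model never touches raw page text, which is what keeps the detailed stage so cheap.
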

Our experience

Rough summarization of 2,000+ pages in less than an hour for ~$1 (GPT-4.1-nano).
Detailed processing and analysis for ~$2 (GPT-4.1-mini). The back-of-the-envelope arithmetic behind these figures is shown below.
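
The arithmetic behind these figures is simple to reproduce. The sketch below assumes the per-million-token list prices published when the GPT-4.1 series launched (nano: $0.10 input / $0.40 output; mini: $0.40 input / $1.60 output) and hypothetical page and summary sizes; check current pricing before relying on it.

```python
# Assumed per-million-token list prices at GPT-4.1 launch; subject to change.
PRICE_PER_M_INPUT = {"gpt-4.1-nano": 0.10, "gpt-4.1-mini": 0.40}
PRICE_PER_M_OUTPUT = {"gpt-4.1-nano": 0.40, "gpt-4.1-mini": 1.60}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost from token counts and per-million-token prices."""
    return (input_tokens / 1e6) * PRICE_PER_M_INPUT[model] \
         + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT[model]

# Example: 2,000 pages at ~5,000 input tokens each, ~150-token summaries.
print(estimate_cost("gpt-4.1-nano", 2_000 * 5_000, 2_000 * 150))  # ~= 1.12
```

With these assumed sizes, 2,000 pages come out to roughly $1.12 on the nano model, which lines up with the ~$1 we observed.
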

Conclusion

With GPT-4.1-nano and GPT-4.1-mini, we have moved from potentially expensive manual processing to a fast, cheap, and fully automated pipeline. It saves us time and money and has become indispensable for analysing large volumes of text.
