Learn what makes AskLibrary special, and how it delivers superior results compared to other Chat with PDF apps out there


There are many Chat with PDF apps out there; they’re the “hello, world!” of the AI app world. In such a crowded landscape, choosing the right tool can be pretty difficult. In this blog post, we will look at several Chat with PDF apps, namely CoralAI, NotebookLM, ChatPDF, MyReader, and ChatGPT, and compare them against our own tool, AskLibrary.

We are going to focus mainly on one thing here. These apps can have tons of bells and whistles, but none of that matters if the core functionality, the quality of the answers you receive, isn’t up to the mark.

We will be looking at one book, “The Power of Habit” by Charles Duhigg, and we will ask all the tools the same question: “What is the habit loop and how does it function?”

Some important information

  • In RAG (Retrieval Augmented Generation) tools, there’s something called a “chunk”: a piece of text of arbitrary length taken from the source documents. Chunking is used to improve speed, reduce costs, and improve performance. The model answering your question doesn’t see the full book, only the most relevant pieces, or “chunks”, selected through various retrieval techniques
  • Per OpenAI, 1000 tokens is roughly 750 words. The exact ratio varies by model, but this is a good approximation
  • An average non-fiction book has about 250 words per page and roughly 273 pages; the short code sketch below makes these conversions concrete
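
To make the chunk and page arithmetic used throughout this post concrete, here is a small Python sketch. The chunking scheme, function names, and constants are illustrative assumptions of ours, not how any of the tools below actually work:

```python
# Illustrative sketch of the ideas above: splitting a book into chunks and
# converting between tokens, words, and pages. The chunking scheme and the
# constants are assumptions for illustration, not any specific tool's code.

WORDS_PER_TOKEN = 0.75   # OpenAI's rule of thumb: 1000 tokens ~ 750 words
WORDS_PER_PAGE = 250     # average non-fiction page

def chunk_text(text: str, chunk_size_words: int = 100) -> list[str]:
    """Naive fixed-size chunking; real tools use smarter splitting."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size_words])
            for i in range(0, len(words), chunk_size_words)]

def tokens_to_words(tokens: float) -> float:
    return tokens * WORDS_PER_TOKEN

def tokens_to_pages(tokens: float) -> float:
    return tokens_to_words(tokens) / WORDS_PER_PAGE

book = "word " * 1000                        # stand-in for book text
print(len(chunk_text(book)))                 # 10 chunks of 100 words each

# Example from the NotebookLM section: 11 references of ~125 tokens each
context_tokens = 11 * 125
print(f"~{tokens_to_words(context_tokens):.0f} words")   # ~1031 words
print(f"~{tokens_to_pages(context_tokens):.1f} pages")   # ~4.1 pages
```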

So without further ado, let’s get started!

What We Tested & Why

We put six leading PDF chat tools through their paces, using Charles Duhigg's "The Power of Habit" as our test case. Rather than getting lost in feature comparisons, we focused on what matters most: answer quality. After all, the best UI in the world doesn't matter if the answers aren't helpful.

Our test question - "What is the habit loop and how does it function?" - may seem simple, but it reveals crucial differences in how these tools process and understand book content. The results were fascinating, with significant variations in context usage, answer depth, and overall quality.

NotebookLM

NotebookLM is a viral app whose standout feature lets you generate podcasts from any documents you upload: two hosts discuss the uploaded content, and you can even jump in to ask them questions. It’s a seriously cool feature and can make boring content more engaging.

When it comes to the quality of answers, NotebookLM’s can be sparse. When I ask a tool like this a question, I want it to be a decent substitute for reading the content myself, and better than Blinkist summaries.

Here are the settings I’ve used for NotebookLM

Context Used

The answer I received includes 11 references, with each reference (chunk) averaging ~125 tokens, which equates to ~94 words.

11 references of ~94 words each give us ~1034 words, which equates to ~4.1 pages worth of text.

Answer Length

The answer I received was 594 tokens (~445 words) long, with the “Longer” response length option selected.

Answer Quality

The answer is to-the-point, covers everything that was asked in the question, and includes basic examples where appropriate. It doesn’t go into much detail, though, and offers no backstory or further context on how these ideas can be leveraged.

Overall, I would rate this answer a 6-7/10

Answer Process

Based on the speed of the answer, it seems to be a single-shot answering process.

Coral AI

Coral AI is a Chat with PDF app aimed at students and researchers, going by the positioning on its landing page.

Coral AI has a lot of dialogs and knobs that can be confusing to work with. In my experience, the answers are pretty shallow. I asked the same question using all the default settings (100 references, the Default model).

Context Used

I chose 100 references, and going by the UI, 81 unique references were used. From my testing, each reference, or chunk, is on average ~30 words long, and sometimes shorter. At 81 references, that’s 81 x 30 = 2430 words. However, clicking on the answer itself displays closer to 10 references actually used, which would be closer to 10 x 30 = 300 words.

This would be either ~10 pages or ~1 page worth of text depending on which set of references we look at.

Answer Length

The answer I received was 287 tokens (~216 words) long. There are no dials to adjust the answer length.

Answer Quality

The answer is to-the-point and covers everything that was asked in the question, but it doesn’t include many examples. It doesn’t go into much detail, and offers no backstory or further context on how these ideas can be leveraged.

Overall, I would rate this answer a 5-6/10

Answer Process

Based on the speed of the answer, it seems to be a single-shot answering process.

ChatGPT

ChatGPT has a feature that allows you to upload PDF files and ask questions about them. There’s not a lot you can do with it; it’s a bare-bones feature, but it’s simple and fast. And the answers actually do seem to be better than those of many purpose-built “Chat with PDF” tools. I used gpt-4o as the model here.

Context Used

It looks like 20 references were used by ChatGPT, each averaging ~780 tokens (~585 words), for a total of ~11700 words of context. This equates to ~47 pages of text, which is actually pretty good.

Answer Length

The answer I received was 457 tokens (~343 words) long. No special instructions about length were given, but you can include them in your question text if you so desire.

Answer Quality

The answer explains the concepts we asked about, gives examples for each of them, and offers additional context and insights on how to use this information.

Overall, I would rate this answer a 7-8/10

Answer Process

Based on the speed of the answer, it seems to be a single-shot answering process.

AskLibrary

AskLibrary is custom-designed for working with books, and aims to answer questions with good depth and breadth. Answers reference extensive amounts of information from your books and are refined through a three-stage process before you see the final result.

Context Used

For AskLibrary, we can provide more detail, since we have full visibility into what goes on behind the scenes.

The moment you ask a question, it is expanded into several more queries that aim to go broader or deeper, or to look at angles you might not have considered. Using all these queries, we fetch the chunks most relevant to the answer. At this stage, we have fetched 27440 tokens worth of text, or 20580 words (~83 pages of text).

This information then goes through another AI that shortlists the most relevant pieces, leaving 11013 tokens or ~8260 words (~33 pages of text). We further deduplicate this information and end up with 4540 tokens or ~3405 words (~14 pages of text).

This shortlisted set of chunks then goes through another AI model that looks for concepts discussed in the chunks but missing an explanation, or any other relevant ideas that would make the answer more robust. At this stage, we pass another 7157 tokens or ~5368 words (~22 pages of text) to a model that summarises just this information.

Ultimately, we passed 5092 tokens to our final answer-generation AI, equal to ~3819 words or ~15 pages of text.

These numbers will always vary, even for the same question asked at different times, but they give a good overview.

Answer Length

The answer I received was 860 tokens (~645 words) long. You can ask for longer responses using a dropdown. Our reasoning mode produces answers that are much longer and more thorough; here it produced an answer 1115 tokens (~837 words) long.

Answer Quality

The answer explains the three parts of the habit loop and how they connect to each other, provides context about why the habit loop is important and how it forms, describes how we can use this information to change our behaviour, and closes with some further insights into habit change.

Overall, I would rate this answer an 8-9/10

Answer Process

Since this is our own tool, we know the answer process exactly: it’s methodical and painstaking, and therefore slow.

Input Query → Transformed into multiple queries → 100+ pages fetched → pages shortlisted → missing information identified → additional pages fetched for missing information and summarised → shortlisted pages, additional pages summary and user question are sent to AI model to generate answer
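
For readers who think in code, here is a minimal, self-contained Python sketch of what a pipeline shaped like this could look like. Every function below is a hypothetical stub standing in for an LLM or vector-search call; none of this is AskLibrary’s actual implementation:

```python
# A minimal, self-contained sketch of a multi-stage pipeline like the one
# described above. Every helper here is a hypothetical stub standing in for
# an LLM or vector-search call; this is not AskLibrary's actual code.

def expand_query(question: str) -> list[str]:
    # Stub: a real system would ask an LLM for broader/deeper variants.
    return [question, f"background concepts behind: {question}"]

def search(query: str, top_k: int = 50) -> list[str]:
    # Stub: a real system would run a vector search over the book's chunks.
    return [f"chunk {i} relevant to '{query}'" for i in range(top_k)]

def shortlist(question: str, chunks: list[str]) -> list[str]:
    # Stub: a real system would have an LLM score each chunk for relevance.
    return chunks[: len(chunks) // 3]

def find_missing_concepts(question: str, chunks: list[str]) -> list[str]:
    # Stub: a real system would ask an LLM which concepts lack explanations.
    return ["a concept mentioned in the chunks but never explained"]

def summarise(chunks: list[str]) -> str:
    # Stub: a real system would summarise the extra chunks with an LLM.
    return f"summary of {len(chunks)} gap-filling chunks"

def generate_answer(question: str, chunks: list[str], extra: str) -> str:
    # Stub: a real system would prompt an LLM with all of this context.
    return f"answer to '{question}' using {len(chunks)} chunks plus {extra}"

def answer_question(question: str) -> str:
    queries = expand_query(question)                          # 1. expand the query
    fetched = [c for q in queries for c in search(q)]         # 2. fetch chunks
    kept = list(dict.fromkeys(shortlist(question, fetched)))  # 3. shortlist + dedupe
    gaps = find_missing_concepts(question, kept)              # 4. identify gaps
    extra = summarise([c for g in gaps for c in search(g, top_k=10)])
    return generate_answer(question, kept, extra)             # 5. final answer

print(answer_question("What is the habit loop and how does it function?"))
```

Each stage trades latency for answer quality, which is why this process is slower than the single-shot tools covered here.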

MyReader

MyReader is a Chat with PDF app aimed at students and researchers. Its standout features are support for uploading links and YouTube videos, and the ability to convert uploaded books into audiobooks.

Context Used

MyReader seems to use extensive context, referencing 15 citations totaling 10645 tokens (~7984 words, or ~32 pages of text).

Answer Length

The answer I received was 460 tokens (~345 words) long. There are no dials to adjust the answer length.

Answer Quality

The answer is to-the-point and covers everything that was asked in the question, explaining the three parts of the habit loop with examples.

It gives a brief overview of the significance of the habit loop, but doesn’t go into much detail about how to apply it, and the answer formatting can make it a little harder to read.

Overall, I would rate this answer a 7-8/10

Answer Process

Based on the speed of the answer, it seems to be a single-shot answering process.

ChatPDF

ChatPDF is a Chat with PDF tool aimed at everyone and designed to work with generic documents.

Context Used

ChatPDF referenced 5 chunks for this answer, averaging roughly 150 tokens each, or a total of 750 tokens (~562 words). This is roughly 2 pages of text.

Answer Length

The answer I received was 295 tokens (~222 words) long. There are no dials to adjust the answer length.

Answer Quality

The answer covers the basic ideas of the question that was asked, but it’s very shallow: it doesn’t use many examples or offer much additional context.

Overall, I would rate this answer a 4-5/10

Answer Process

Based on the speed of the answer, it seems to be a single-shot answering process.

The Complete Picture: How These Tools Stack Up

| Tool | Context Processing | Answer Quality | Distinctive Features |
| --- | --- | --- | --- |
| AskLibrary | Multi-stage processing, ~83 pages initial context | 8-9/10 | Deep book understanding, refined through multiple AI passes |
| ChatGPT | Single-pass, ~47 pages context | 7-8/10 | Surprisingly comprehensive despite minimal features |
| MyReader | ~32 pages context with citations | 7-8/10 | Strong answer quality plus audiobook conversion |
| NotebookLM | ~4.1 pages with focused processing | 6-7/10 | Interactive podcast-style engagement |
| CoralAI | Variable (~1-10 pages) context | 5-6/10 | Research-oriented but inconsistent depth |
| ChatPDF | ~2 pages with basic processing | 4-5/10 | Straightforward document handling |

Key Insights & Recommendations

Our detailed testing reveals that while all these tools can handle basic PDF interactions, their approaches to understanding and processing book content vary dramatically. Here's what stands out:

Context Processing Matters

The depth of context processing directly correlates with answer quality. AskLibrary's multi-stage approach, while more time-intensive, consistently produces more comprehensive and nuanced responses.

Quality vs. Speed Trade-offs

Some tools prioritize quick responses over depth. While ChatGPT's implementation shows this can work well, most rapid-response tools sacrifice important context and connections.

Specialized vs. General Tools

Generic PDF chat tools often struggle with book-specific content, missing important context and connections that specialized book tools catch. This becomes particularly evident in responses requiring deeper understanding of concepts.

Choosing the Right Tool

Your choice should align with your specific needs:

  • For in-depth book understanding and research: AskLibrary's thorough processing yields superior results
  • For quick reference and basic queries: ChatGPT offers solid performance
  • For multimedia learning: Consider NotebookLM or MyReader
  • For simple document Q&A: ChatPDF suffices

What's clear is that the future of book interaction lies not just in accessing content, but in truly understanding it. The tools that succeed will be those that can effectively bridge the gap between simple text processing and genuine comprehension.