Google launches NotebookLM, an AI text assistant
Unlike other AI chatbots on the market, NotebookLM functions as a personalised AI assistant "grounded" in the documents you choose to input. The app is set to work in conjunction with Google Drive, analysing your documents to help extract key points, summarise texts, or brainstorm new ideas.
Project Tailwind was one of a raft of new features unveiled at Google I/O 2023. Touted as a new "AI-first notebook", it promised an assistant to help students, academics, and professionals structure and summarise their documents. Tailwind has since launched as NotebookLM, and it's available now if you join the waitlist.
The concept behind NotebookLM is to offer language-model assistance grounded in text the user specifies. Instead of forming responses from pre-fed datasets or information pulled from the web, NotebookLM focuses solely on the unique set of documents given by the user.
Google’s latest tool integrates within Google Workspace, offering users the capability to summarise and develop their documents more efficiently. This innovative feature employs large language models to facilitate a more refined understanding of the user’s material, with the ability to summarise text, generate questions for deeper exploration, and even create new content.
Users can select from an array of documents, including PDFs. The key point is that responses are drawn from the source material itself. Imagine the capabilities of a model similar to ChatGPT, but without reliance on external sources.
Google refers to this process as "source-grounding", which effectively creates a localised AI response. As Google states in its blog, "The model only has access to the source material that you've chosen to upload, and your files and dialogue with the AI are not visible to other users. We do not use any of the data collected to train new AI models."
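To make the idea concrete, here is a minimal, purely illustrative sketch of what source-grounding amounts to in principle: the model is only shown the documents the user uploaded, together with an instruction to answer exclusively from them. The function and variable names are hypothetical, and this is not Google's actual implementation.

```python
# Hypothetical sketch of "source-grounding": the prompt given to the
# language model contains ONLY the user's uploaded documents, plus an
# instruction to answer exclusively from them. Not Google's implementation.

def build_grounded_prompt(question: str, sources: list[str]) -> str:
    """Assemble a prompt that restricts the model to user-supplied sources."""
    numbered = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in the sources, say so.\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "What does dopamine regulate?",
    ["Dopamine is a neurotransmitter involved in reward and motivation."],
)
# The uploaded text is embedded directly in the prompt, so the model's
# answer is anchored to it rather than to web data or training memory.
```

In a real system the assembled prompt would then be sent to a language model; the point of the sketch is simply that the model never sees material beyond what the user chose to upload.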
Once the data is in, NotebookLM can promptly curate a summary of the document. It provides smart assistance in poring over reams of text and rapidly extracting the key points. It can also generate new content, expanding bullet points or outlines into full-fledged texts. Moreover, NotebookLM has added features to help users collaborate on projects more easily.
As Google's announcement puts it: "We know people are struggling with the rapid growth of information — it's everywhere and it's overwhelming. As we've been talking with students, professors and knowledge workers, one of the biggest challenges is synthesizing facts and ideas from multiple sources . . . NotebookLM is an experimental product designed to use the power and promise of language models paired with your existing content to gain critical insights, faster. Think of it as a virtual research assistant that can summarize facts, explain complex ideas, and brainstorm new connections — all based on the sources you select."
Google explains that NotebookLM is capable of handling three main tasks:
- Generate a summary
When you first import a Google Doc into NotebookLM, the tool promptly creates a summary and identifies central topics. It also suggests questions you might ask to better understand the content.
- Ask questions
When you want to explore more thoroughly, you can pose questions related to the documents you’ve added.
For instance, a medical student might import a scientific neuroscience article and instruct NotebookLM to “formulate a glossary of crucial terms connected to dopamine”. Similarly, an author penning a biography could submit their research notes and pose a question such as: “Condense all the instances of interaction between Houdini and Conan Doyle.”
- Generate ideas
NotebookLM goes beyond just providing answers and can actually assist in coming up with new ideas.
A content producer could submit their thoughts for future videos and request: “Compose a script for a brief video based on this subject.”
Alternatively, an entrepreneur looking for investment could upload their pitch and ask: “What are the questions likely to be asked by potential investors?”
In a sense, NotebookLM brings little more to the table than is already available through ChatGPT. However, the aim is to address (or at least lessen) the so-called "hallucination" issues linked with artificial intelligence systems: when large language models (LLMs) come up with inaccurate information and assert it as if it were correct.
“As we’ve been talking with students, professors and knowledge workers, one of the biggest challenges is synthesising facts and ideas from multiple sources,” stated Raiza Martin, product manager at Google Labs, and Steven Johnson, editorial director of Google Labs: “You often have the sources you want, but it’s time consuming to make the connections.”
Google provided a screenshot of the NotebookLM interface. It's green and blue, with academic notes in the middle and the AI-generated content on the right. The chatbot starts with "1 source", which refers to the document the user has uploaded. There's a navigation panel on the left that lets you switch between notes, and a back button leading to a main page. The tool is not entirely finished yet, but Google has said it will keep gathering user feedback to find out what is effective.
Google has emphasised that NotebookLM is designed with user privacy in mind. The app only has access to the documents that users choose to upload or interact with. User data is not available to others, and it is not used to train new AI models. Google has implemented measures to ensure that user data remains private and secure.
The potential benefits of NotebookLM are clear to see — Google’s new feature has the potential to streamline research and academic study by summarising lengthy documents, simplifying complex information, and generating ideas for essays or presentations. Nevertheless, users will need to remain aware of the potential pitfalls of relying too heavily on AI for critical thinking and original content creation. There’s always the risk that it could misinterpret information or overlook nuances in the text, leading to potential misinformation.
Google itself offers a grain of salt: “While NotebookLM’s source-grounding does seem to reduce the risk of model hallucinations, it’s always important to fact-check the AI’s responses against your original source material. When you’re drawing on multiple sources, we make that fact-checking easy by accompanying each response with citations, showing you the most relevant original quotes from your sources.”