AI-infused
Posted by bartb
Oct 4, 2025 at 02:02 PM
I understand your concerns. I try to be careful about what material I supply to these tools when I’m using them. For instance, I use NotebookLM as a “smart intern” to review podcasts, books and long-form articles that go deep on details. I’m currently not doing any original writing or research. I wish I had a better answer for you concerning privacy. I think tech is getting better at giving us privacy options, but I think we’ve been living under this cloud (see below) since 1999:
“You have zero privacy anyway. Get over it.” Scott McNealy, the CEO and co-founder of Sun Microsystems
Posted by Lucas
Oct 4, 2025 at 09:38 PM
Interesting conversation so far. I certainly make use of these tools (for “secretarial” and research-assistant tasks rather than thinking tasks), and I’ve found the Tinderbox MCP integration very useful, but I also think it’s essential with all AI tools to proceed consciously and carefully. When I read Dr Andus’s post, it occurred to me that I had never read Google’s privacy policy. I promptly uploaded the PDF version to NotebookLM and asked in the chat which aspects of the policy might correspond to Dr Andus’s concerns. The response provided a very helpful citation-backed summary that seemed to confirm Dr Andus’s analysis. So am I being lazy, or is it good to use Big Tech to better understand Big Tech? :-)
Dr Andus wrote:
>I looked at NotebookLM a few times, after reading enthusiastic reviews,
>but every time I tried to sign up and have read through their privacy
>policy during sign-up, it seemed to me that they were asking me to allow
>them complete and absolute access to all my data and everything that I’m
>doing, which stopped me in my tracks every time.
Posted by satis
Oct 4, 2025 at 11:11 PM
Dr Andus wrote:
>every time I tried to sign up and have read through their privacy
>policy during sign-up, it seemed to me that they were asking me to allow
>them complete and absolute access to all my data and everything that I’m
>doing
I don’t think that’s accurate.
NotebookLM does not use user data to train its AI models. Google baldly states, “NotebookLM does not use your personal data, including your source uploads, queries, and the responses from the model for training.”
Your uploaded documents, queries, and the AI’s responses remain private to you and are not logged for training purposes. For personal Google accounts, human reviewers may access that data for troubleshooting if you provide explicit feedback or request support. Google Workspace and Google Education accounts get enhanced privacy: user data is neither reviewed by humans nor used to train AI models.
There’s no “absolute access” claim, but any cloud-based service that processes and responds to user input must, by necessity, access that data to perform those tasks. The legalese for *all* cloud-based services typically says something to that effect, but it doesn’t mean they have unrestricted rights to your data outside those purposes.
Posted by Paul Korm
Oct 4, 2025 at 11:41 PM
This might be helpful regarding NotebookLM privacy:
Posted by Dr Andus
Oct 5, 2025 at 05:39 PM
satis wrote:
>Dr Andus wrote:
>>every time I tried to sign up and have read through their privacy
>>policy during sign-up, it seemed to me that they were asking me to allow
>>them complete and absolute access to all my data and everything that I’m
>>doing
>
>I don’t think that’s accurate.
I know that there is a difference between the free and paid versions (the latter offers more privacy), but I was referring specifically to the various agreements presented during the sign-up process, which gave me this impression.
I did do a bit of research on this topic, and it sounds like it comes down to who you decide to believe and trust.
I read somewhere that the existing LLMs have by now assimilated almost all available codified human knowledge, and that for them to evolve further, and for their business models to remain viable, they will need to collect new data; the most likely sources of that data are the users themselves, especially free users.
So I’d say the LLM providers may have existential reasons to encourage users to hand over as much information about themselves as possible, i.e. there is a bit of a conflict of interest when it comes to coming clean about how much user data is really hoovered up and how it is used.
Another point is whether the LLM providers are really, truly in control of their models and know what’s happening with all the data.
There was already the recent case with Claude, where the model recognised that it was being tested and put up some resistance…
But I admit I know very little about this whole area, so I’m just asking questions, trying to understand what is going on.
The most recent crop of PCs now lets you run a small language model locally on your hard drive, disconnected from the internet, so perhaps that’s the safest way to use these tools from a privacy perspective, though of course it has its limitations.
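For anyone curious what that looks like in practice, here is a minimal sketch in Python using the llama-cpp-python library with a quantised GGUF model file downloaded in advance. The model filename and the prompt are placeholder assumptions on my part, not a specific recommendation:

# pip install llama-cpp-python   (one-time step, while still online)
# The model path below is a placeholder; download any small GGUF model
# (e.g. from Hugging Face) before disconnecting.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,     # context window size
    verbose=False,
)

# Everything below runs entirely on the local machine; no network calls are made.
response = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Summarise the key points of this privacy policy excerpt: ..."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])

Your documents and prompts never leave your machine; the trade-off, as noted, is that a model small enough to run locally won’t match the breadth or quality of the big cloud models.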