AI-infused
Posted by Paul Korm
Oct 3, 2025 at 01:08 AM
It is usually possible with ChatGPT, Claude, etc., to tell it to tone down or eliminate entirely the “attaboy, great question” type of response.
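For anyone driving these models programmatically rather than through the chat box, the same thing can be done with a system prompt. Here's a minimal sketch using the OpenAI Python SDK - the model name and the instruction wording are just examples, not a recommendation:

```python
# Minimal sketch: suppressing "great question" filler via a system prompt.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # example model choice, swap in whatever you use
    messages=[
        {
            "role": "system",
            # The instruction text is illustrative; tune it to taste.
            "content": (
                "Do not compliment the user or praise their questions. "
                "Answer directly, with no preamble or filler."
            ),
        },
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)
print(response.choices[0].message.content)
```

In the consumer apps the equivalent is the "custom instructions" or personalization setting, which persists across conversations.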
Like @satis, I've had some fascinating conversations with Anthropic's and OpenAI's models. I believe Claude has taught me a lot about quantum physics in the several long conversations we've had. I say "I believe Claude has taught me a lot" because I have no background in physics, so it would be easy for me to believe just about anything Claude explained to me. On other occasions, Claude worked out some complicated logic issues for me in fields I do know a lot about. The dialog was very useful for things I was working on.
But in both cases I came away with similar concerns. For the physics dialog, since I had no idea whether a language model actually "knew" physics, and plenty of doubts about the veracity of the answers, I decided I would be better off studying the traditional way, with reading and tutorials. For the logic issues, I felt I hadn't actually increased my skills, because my brain was only watching a machine performance rather than doing the real work of problem solving.
Posted by satis
Oct 3, 2025 at 03:36 AM
When asking questions outside of my own submitted writing, I've found that a significant minority of detailed answers are incorrect in some way - sometimes citing outdated links, other times misinterpreting the sources they reference.
Earlier versions of Perplexity, for example, often provided link citations for specific claims, but the links themselves didn't support the assertions being made - even in cases where the claims were actually true. I saw numerous examples where the answer was right but the LLM just tossed in wrong citations - as if it didn't want to reveal where it had actually gotten the accurate data.
In one case I asked about a company’s international contract and it pulled data from an archived page on the company’s site that had outdated, inaccurate information. I discovered this, told the LLM what it had done, and with a couple of prompts guided it to look to other sources before it found the correct answer (which I’d already found when I realized its error). It was a failure but an interesting one.
But as I noted earlier, in another case where I entered a relative's bloodwork and test results into ChatGPT, the analyses of the results were all dead-on when I checked, and the clear, non-technical explanations were accurate. And the recommended follow-up questions for the doctor - which I was unable to formulate myself - made sense (and were apparently useful to my relative).
So there’s a lot of promise in using LLMs for research, but especially with web-sourced answers it’s absolutely essential to verify the information independently when accuracy matters. LLMs are powerful tools for discovery, drafting, and summarizing, but not for blindly trusting information in serious or high-accuracy contexts. Which I’m fine with, since the results for certain types of research are much better with LLMs (once verified) than with the best normal web searches.
Posted by Amontillado
Oct 3, 2025 at 04:03 AM
AI would be a great way to learn about physics, but I'd like to qualify that: I would still prefer human instructors, either in person or by proxy of a book.
From any source, just don’t take anything at face value. Question everything and derive your own insights into fundamentals. If AI hallucinates, there’s learning to be had from debunking it.
Posted by bartb
Oct 3, 2025 at 06:13 PM
OK ... I will admit, after experimenting heavily with numerous tools and models, I find myself returning frequently to NotebookLM. If only I had had this tool in my university days!
Posted by Dr Andus
Oct 3, 2025 at 08:44 PM
bartb wrote:
>OK ... I will admit, after experimenting heavily with numerous tools and
>models, I find myself returning frequently to NotebookLM. If only I had
>had this tool in my university days!
I looked at NotebookLM a few times after reading enthusiastic reviews, but every time I tried to sign up and read through their privacy policy, it seemed to me that they were asking me to allow them complete and absolute access to all my data and everything I'm doing, which stopped me in my tracks every time.
Something about enabling an AI tool to study me directly just makes me very uncomfortable.
Have I misunderstood something? Or are all free AI tools essentially giant privacy vampire squids, and NotebookLM is just straight enough to admit it upfront?