AI-infused
Started by satis
on 9/15/2025
satis
9/15/2025 5:08 pm
This morning I got an email from Mark Bernstein noting that Tinderbox will now not only work with major AI models but the app will offer AI models a workspace in which to take notes for themselves.
And Momenta has now issued version 21 of Agenda with the new Ask Agenda AI chat assistant, an on-device model that doesn’t use the cloud, and it has access to your Agenda notes. You can ask questions about your notes in everyday language, get links to the most relevant notes in responses, and limit Ask Agenda queries to notes in date ranges.
To ask follow-up Ask Agenda questions, you’ll need to purchase Agenda Premium, which is either a $34.99 annual subscription or a $119.99 one-time purchase.
In recent years, I’ve been outspoken in my criticism of much AI Research,
but in recent months it has also become clear to me that Large Language
Models are the most important new technology of our time. With Tinderbox
11, you can sit down and discuss your Tinderbox work with an AI....
If you give permission — and only if you do so — AIs such as Claude
Desktop can see your Tinderbox work and explore it along with you. Of
greatest interest, the AI itself can have a section of your Tinderbox
document in which to make its own notes. This allows AIs to work on long
Paul Korm
9/15/2025 8:56 pm
Working with **all** AI models is not really the case, out of the box, with Tinderbox 11. There's a built-in MCP server for working with Claude Desktop. It's certainly possible to roll one's own integration with other models. The Claude one is nice -- but Claude is as much a liar as any AI and will gladly accept a prompt to do something with a Tinderbox file and report success when in fact it did nothing at all.
satis wrote:
This morning I got an email from Mark Bernstein noting that Tinderbox
will now not only work with major AI models but the app will offer
workspace to AI models to take notes for itself.
satis
9/17/2025 1:16 am
LLMs definitely aren’t perfect and can make confident mistakes, but they’re still useful tools for brainstorming and exploration as long as you double-check the results and keep their limits in mind. Used with care, chatbots can help with drafts or ideas, though fact-checking is always wise, especially in longer conversations where they can lose track of context or misinterpret earlier data. And considering how blindingly fast the technology has advanced compared to just five years ago, current shortcomings like overconfidence and “hallucination” are very likely to improve. In the meantime I’d rather see them as helpful tools when used carefully, not something to dismiss entirely because of occasional bad answers.
Paul Korm wrote:
Working with **all** AI models is not really the case
Agreed, but nobody claimed 'all' AI models would be supported.
eastgate
9/24/2025 2:14 pm
Tinderbox 11 uses the Model Context Protocol (MCP) standard to communicate with any compliant LLM. Claude Desktop is the one on which we’ve focused initially, but several other LLMs have adopted or have announced plans to adopt MCP.
An important part of using any LLM is finding tasks in which good performance saves you time, while poor performance causes scant trouble. "Do my homework for me" is a dark pattern; if the LLM does well, you still don't get much benefit, while if the LLM does a lousy job you’re going to be embarrassed. "Tell me the three best books about Rome in the 2nd century” is better: if it's right, you have good book recommendations, and if it tells you about books that don’t exist, you’ll find out right away and it's no big deal.
satis
9/24/2025 7:23 pm
I've found LLMs useful for summarizing and evaluating blocks of text I've input, checking grammar, organization, and coherence. And when evaluating multi-thousand-word submissions, I'm offered suggestions for editing that are orderly and cogent and that blend together *ideas* from different sections which, surprisingly, belong together.
But when I've asked for suggestions on wholesale rewrites the responses typically lack nuance and sometimes fail to understand context.
I've also had multiple experiences where my text contained mistyped or misspelled words, or didn't finish thoughts or sentences, and the LLMs wrongly inferred what was written, sometimes hilariously or horrifyingly.
And generally, when asked to rewrite passages the results iron out my style to the point of blandness.
But understanding those limitations still lets me get a lot of selective benefit I can work with.
Skywatcher
9/25/2025 9:39 am
Somehow, the idea of having a cloud LLM train on thousands of personal notes, and spitting them out somewhere else to other users, even indirectly, doesn't excite me much. Cloud LLMs are another privacy scandal in the making (just look at what is happening with OpenAI having to disclose thousands of personal user chats with ChatGPT during the lawsuit against them by the NY Times).
Integration with local LLMs would be a good addition for those who aren't into disclosing their entire private life once again with OpenAI, Anthropic, Meta / Google etc..
Amontillado
9/25/2025 8:32 pm
LinkedIn is widely reported to be about to start forwarding its user data to Microsoft for AI training.
Opt out now, before it's too late.
The actual LinkedIn announcement said the training would start in Europe in another month. LinkedIn was sued last year for training AI models on user data without permission.
If AI is trained on contemporary data, it's going to be trained at least a little (and probably a lot) on its own output, a process I like to refer to as a closed loop alimentary canal.
Frankly, it's enough to make me want to stop bathing, get a ratty robe, and chant "Pie Jesu, Domine" while slapping myself in the forehead with a deck of punch cards.
Harumph!
gunars
9/25/2025 9:21 pm
Amontillado wrote:
Frankly, it's enough to make me want to stop bathing, get a ratty robe,
and chant "Pie Jesu, Domine" while slapping myself in the forehead with
a deck of punch cards.
Just make sure you have sequence numbers in columns 73-80.
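For the youngsters: columns 73–80 of an 80-column card were reserved for a sequence number precisely so a dropped or jammed deck could be machine-sorted back into order. The same idea, sketched in Python rather than on a card sorter (a toy illustration, obviously not period-accurate hardware):

```python
def number_deck(cards):
    """Stamp an 8-digit sequence number into columns 73-80 (0-indexed 72-79)."""
    return [card[:72].ljust(72) + f"{i:08d}" for i, card in enumerate(cards)]

def unscramble(cards):
    """Recover deck order after a reader jam by sorting on columns 73-80."""
    return sorted(cards, key=lambda card: card[72:80])
```

Run the scrambled deck through `unscramble` and you are back where the jam found you, no forehead-slapping required.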
MadaboutDana
9/26/2025 3:36 pm
Speaking of MCP, it might be worth taking a look at this report: https://www.itpro.com/security/a-malicious-mcp-server-is-silently-stealing-user-emails
Many software engineers have warned that MCP is singularly insecure...
Just sayin’!
Bill
eastgate
9/26/2025 5:39 pm
Perhaps other uses of MCP are insecure, but this is NOT a concern with Tinderbox (https://www.eastgate.com/Tinderbox/).
1. Your notes are in Tinderbox. If Tinderbox wanted to do something nefarious with them (it doesn’t!), it could just go ahead and do that without MCP.
2. You’re using Claude Desktop, or ollama, or whatever, on your machine. They pose the same threat they would pose, or that ANY software poses, without MCP.
3. The client (Claude Desktop) can use MCP to try to communicate with Tinderbox. If you haven't given permission, nothing happens. If you have, you have the same risk you'd have without MCP.
4. There might be a vulnerability if some miscreant broke into your house, deleted Tinderbox, and replaced it with a different application that is also named TINDERBOX 11. But to do this, the bad guy has access to your house, your computer, and your passwords. If the bad guy has this much access, you're stuck — and you'd be stuck without MCP.
There might be good reasons to ignore the great breakthrough of our time. (I remember when people wouldn't fly because planes were too dangerous.) Insecurity of MCP is not among those reasons.
Lucine
9/26/2025 10:11 pm
eastgate wrote:
2. You’re using Claude Desktop, or ollama, or whatever, on your
machine. They pose the same threat they would pose, or that ANY software
poses, without MCP.
Does that mean the TB notes won't be used by Claude for training? If not, how would any software pose a similar risk? MCP isn't the biggest issue for me and maybe not for some others either, rather the training of LLMs on private notes.
satis
9/26/2025 11:43 pm
Lucine wrote:
Does that mean the TB notes won't be used by Claude to train data on?
For Free, Pro, and Max consumer accounts, Claude AI *will* by default use your conversations and document inputs to train its AI models *unless* you explicitly opt out of this data usage in your privacy settings. Data you submit for evaluation may be exposed to AI training unless you disable the training consent toggle in privacy settings. Anthropic has a two-step consent process: you must first accept consumer terms which include data usage rights, and then afterwards separately toggle privacy settings for training consent.
If you do not consent, your data retention is limited to 30 days, and your data will not be used for model training.
If you have a Business account, data is never used for training.
Amontillado
9/27/2025 12:41 am
I suspect you, too, have had a deck returned to you scrambled after a reader jam!
gunars wrote:
Amontillado wrote:
>Frankly, it's enough to make me want to stop bathing, get a ratty robe,
>and chant "Pie Jesu, Domine" while slapping myself in the forehead with
>a deck of punch cards.
Just make sure you have sequence numbers in columns 73-80.
marlowe
9/28/2025 2:34 pm
The point eastgate makes about dark patterns is a good one, and I recognize there are beneficial use cases, but I'm in the camp of abstaining from using LLMs as much as I reasonably can because the temptation of outsourcing thinking that I ought to be doing is too great. I check in on them about once every three months, use them extensively for a couple of days to avoid being wholly out of touch, but then come back to the same conclusion, which is that they aren't for me, and I am very distrustful of the companies running these services. I see them as a momentous development, but a negative one for society.
satis
9/28/2025 5:36 pm
I think there might be a problem of nomenclature here. LLMs are the technology *underpinning* chatbots, but LLMs are not chatbots themselves.
People are using LLMs right now in everyday usage. Email programs like Gmail and Outlook use LLMs to autocomplete sentences and suggest phrasing. The best translation services and apps use LLMs to generate more natural, context-aware translations and realistic phrasing. (That includes live translation that works system-wide in iOS in Messages, FaceTime, and while your AirPods are on.) Email and audio transcript summarization in podcast apps, which has come online in Pocket Casts, Apple Podcasts and others, is all LLM-based. Message summarization in smartphone lockscreens is based on LLM-training. Apps and services that pull text out of photos (or translate them) are all LLM-based. Product and user-review summaries on retail websites like Amazon are all LLM-based.
And LLMs are being used behind the scenes to protect the websites you visit (and attack them, too).
Paul Korm
9/28/2025 6:52 pm
There are real, well-documented issues with MCP and other technologies employed in the current build-out of "AI" across the globe. I hesitate to label this "the great breakthrough of our time" when it is very difficult for most users to understand precisely what is going on in an interaction with one of these. This is far different from the introduction of air transport, or any other mechanical technology.
eastgate wrote:
There might be good reasons to ignore the great breakthrough of our
time. (I remember when people wouldn't fly because planes were too
dangerous.) Insecurity of MCP is not among those reasons.
MadaboutDana
9/29/2025 2:04 pm
Quite right. A couple of revealing articles here:
https://www.redhat.com/en/blog/model-context-protocol-mcp-understanding-security-risks-and-controls
https://www.atlassian.com/blog/artificial-intelligence/mcp-risk-awareness
Both of them suggest mitigation strategies, but MCP is innately a high-risk architecture for many different reasons. You could argue (as indeed @eastgate has) that such risks are an innate part of evolving (computer) ecosystems, and indeed, most of us here will have happy memories of the early years of the web and some of the more disastrous programming decisions associated with it.
But at the same time, there is a huge amount of security expertise out there nowadays (one of my family members works as a high-level software in banking security – a real laugh a minute!), and it does appear that whoever put MCP together didn’t take the time to consult the really experienced experts in anything like enough detail. Shame, because inevitably this will result in the usual accretive further development (bolt-ons) rather than a complete rebuild.
And just to respond to an earlier point: my local LLM server runs on-device with no access to the web, so even if MCP is involved (which I doubt), I’m not exposed to malicious third parties. I take a regular sniff at ChatGPT (or more often, Claude, which I personally consider superior), but I don’t use either of them in my regular work. In that sense, I am entirely in accord with @marlowe.
Cheers,
Bill
Paul Korm wrote:
There are real, well-documented issues with MCP and other technologies
employed in the current build-out of "AI" across the globe. I hesitate
to label this "the great breakthrough of our time", when it is very
difficult for most users to understand precisely what is going on in an
interaction with one of these. This is far different than the
introduction of air transport, or any other mechanical technology.
eastgate wrote:
>There might be good reasons to ignore the great breakthrough of our
>time. (I remember when people wouldn't fly because planes were too
>dangerous.) Insecurity of MCP is not among those reasons.
MadaboutDana
9/29/2025 2:06 pm
Sorry, I meant “works as a high-level software engineer” there – she isn’t actual software, as it happens!
MadaboutDana
9/29/2025 2:08 pm
For a more brutal takedown of MCP, see e.g. https://medium.com/data-science-collective/mcp-is-a-security-nightmare-heres-how-the-agent-security-framework-fixes-it-fd419fdfaf4e
MadaboutDana
9/29/2025 2:09 pm
... or, with some nice examples, here: https://equixly.com/blog/2025/03/29/mcp-server-new-security-nightmare/
eastgate
9/29/2025 2:45 pm
There comes a point — and we are at that point right here — where this becomes merely speaking of fear, uncertainty, and doubt. This forum is typically better than this.
MCP is very simple.
TRANSPORT: An MCP server, such as Tinderbox, reads from stdin and writes to stdout. There's *no* security risk here; if the bad guys can fiddle with stdin and stdout, they own the entire system already. (There's also a https variant, but that's irrelevant here.)
PAYLOAD: The rest of MCP concerns what messages the client (such as Claude Desktop) sends, and how the server responds. These are really quite straightforward. For example, the client can ask "What tools do you offer?", and the server sends back a list of tools with the name, description, and arguments for each tool. Sure, I might have done things a little bit differently here, but I always feel that way, and so, probably, do you. It’s fine.
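To make "very simple" concrete, here is a toy stdio server answering that "What tools do you offer?" request in Python. The tool name and schema are invented for illustration, not Tinderbox's actual tool list, and this sketch assumes newline-delimited JSON-RPC framing:

```python
import json
import sys

# Hypothetical tool registry -- the name, description, and schema here
# are made up for illustration only.
TOOLS = [
    {
        "name": "get_note_text",
        "description": "Return the text of a named note.",
        "inputSchema": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    }
]

def handle(request: dict) -> dict:
    """Answer one JSON-RPC 2.0 request the way an MCP server would."""
    if request.get("method") == "tools/list":
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"tools": TOOLS}}
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32601, "message": "Method not found"}}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    # One JSON message per line on stdin; each reply goes to stdout.
    for line in stdin:
        if line.strip():
            stdout.write(json.dumps(handle(json.loads(line))) + "\n")
            stdout.flush()
```

That is essentially the whole attack surface: text in on stdin, text out on stdout.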
Now, let’s look at some little things we can do with MCP that would be a pain in the neck without it.
* A user — a senior German academic — has a hunk of XML that represents a pile of attributes and their values. He wants to parse this into a Tinderbox note, creating user attributes as required. The standard approaches might include (a) request a new Tinderbox operator that converts the XML object to a Tinderbox dictionary; (b) use regular expressions in Tinderbox actions to break up the XML and extract the juicy bits; (c) convert the XML to an intermediate format that’s easier for Tinderbox’s stream operators to parse; or (d) convert the XML to the corresponding JSON, because Tinderbox does have a JSON operator.
This is easy enough for the technically proficient, but plenty of smart people don’t really know enough to do this comfortably.
Instead, we could ask Claude to reformat the XML to a simple textual list. This is boring work, but it's the sort of work LLMs can do. And if the LLM tries to shirk the job or botches it, you’re likely to notice immediately.
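For the technically proficient, option (d) is less work than it sounds. A rough sketch of the flattening step (the XML shape here is invented; the real file's element names will differ):

```python
import json
import xml.etree.ElementTree as ET

def attributes_to_json(xml_text: str) -> str:
    """Flatten an attribute blob into JSON for Tinderbox's JSON operator.

    Assumes a hypothetical shape like:
      <attributes><attr name="Author">Eco</attr>...</attributes>
    """
    root = ET.fromstring(xml_text)
    pairs = {el.get("name"): (el.text or "").strip() for el in root}
    return json.dumps(pairs, indent=2)

sample = '<attributes><attr name="Author">Eco</attr><attr name="Year">1980</attr></attributes>'
```

But again, the point of MCP is that the user who can't write this can simply hand the XML to Claude instead.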
* I’m writing a book. I’ve got a resource I want to add to my reference database. Bookends imports RIS/Endnotes, which most digital libraries support. But *this* library only provides BibTeX. Best Answer: add BibTeX to Tinderbox: an easy three-day job for me, but challenging for most people. OK Answer: Ask Claude to convert this BibTeX to RIS, and import the RIS. If it’s right, we’re done. If Claude botches it or invents some other hunk of RIS, it won’t import, or the imported reference will refer to the Audubon with the errata on page 23, not the Cognitive Architectures for Language Agents paper you were expecting.
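The conversion itself is mostly a field mapping, which is why it's the kind of boring work an LLM handles well. A sketch of the shape of it, for a single minimal entry (a real converter must handle braces, escapes, and many more fields; the mapping below is deliberately tiny):

```python
import re

# Minimal BibTeX-to-RIS field mapping, for illustration only.
FIELD_MAP = {"author": "AU", "title": "TI", "year": "PY", "journal": "JO"}

def bibtex_to_ris(entry: str) -> str:
    """Convert one simple BibTeX entry to RIS lines."""
    kind = "JOUR" if "@article" in entry.lower() else "GEN"
    lines = [f"TY  - {kind}"]
    for key, tag in FIELD_MAP.items():
        m = re.search(key + r'\s*=\s*[{"]([^}"]+)[}"]', entry, re.I)
        if m:
            lines.append(f"{tag}  - {m.group(1)}")
    lines.append("ER  - ")
    return "\n".join(lines)
```

If Claude hands back something this regular, Bookends imports it; if not, the failure is immediate and obvious.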
* Ben Shneiderman and his AI list are quite engaged by a recent paper by Floridi on the relationship between Scope and Confidence in computational results. Floridi uses a construct known as Kolmogorov Complexity, of which I have never heard. “Does everyone else know this?” I ask myself. “Is it sound, or cranky?” Old answer: call a few mathematicians and ask them. But lots of people don’t have a lot of math professors on speed dial. Ask Claude, and in minutes you can have a pile of references from a range of fields, and an explanation of why you haven’t heard it before. Yes: it might all be fabulation, but I’ll find out as soon as I read those references.
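For anyone else meeting the term cold: the definition itself is compact. Relative to a fixed universal machine $U$, the Kolmogorov complexity of a string $x$ is the length of the shortest program that produces it (a sketch from memory, not from Floridi's paper):

```latex
K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
```

The invariance theorem says the choice of machine only shifts this by an additive constant, $K_U(x) \le K_V(x) + c_{U,V}$, which is what makes the notion machine-independent enough to be useful.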
Can we please stop with the emotional appeals for and against AI — we’ve been doing that since Asimov and Philip K. Dick — and think about what we can do with the tools?
MadaboutDana wrote:
But at the same time, there is a huge amount of security expertise out
there nowadays (one of my family members works as a high-level software
in banking security – a real laugh a minute!), and it does appear
that whoever put MCP together didn’t take the time to consult the
really experienced experts in anything like enough detail.
Paul Korm
9/29/2025 3:11 pm
It's possible @eastgate is misreading the gist of this thread. If AI is the culmination of technology in our times, then healthy questioning and skepticism is a very good thing. That's all that's going on here. I have no doubt Tinderbox is built with the best of care for its users. However, it's obviously a very small corner of the universe compared to whatever it is that the enormously capitalized tech bros are up to with "AI". (FWIW, "AI" has become undefinable, mainly because the press does a poor job understanding the multiple technologies they lump together under "AI".)
I suspect that everyone in this forum has spent considerable time in recent years "think[ing] about what we can do with the tools". Lack of thought and effort is not the issue. Lack of trust in what's behind the curtain is.
eastgate wrote:
Can we please stop with the emotional appeals for and against AI —
we’ve been doing that since Asimov and Philip K. Dick — and
think about what we can do with the tools?
eastgate
9/29/2025 4:54 pm
Paul:
There’s a place for skepticism. This is not that.
If you are worried about that which is behind the curtain, LOOK BEHIND THE CURTAIN. Don’t cite irrelevant, credulous clickbait like the Perrone article — and that’s probably the best of the lot. Certainly don’t wave it in the air as if it’s some sort of profound gotcha. (Perrone is the fellow behind a newsletter concerned with building audiences, boasting of 30M impressions. A clickbait expert. He’s also a mathematician, and ought to know better than this.)
For those not technically inclined, the dangers Perrone identifies in MCP are the dangers of installing and using software.
Are you worried about Claude? OK: identify the problem, demonstrate it, document it. If you are deeply worried, for example, that racial bias in its training set will interfere with Claude’s ability to translate BibTeX to RIS, well, be my guest: show me. If you are concerned that allowing Claude to persist notes about today’s discussion of Assyrian schools will establish an attack vector through your Tinderbox document, then READ ITS NOTES, which are just a section in YOUR notes.
Before we shun hard-won attainments of arduous research, let’s actually do it for reason, not rumor. And then, could we find out if it's a problem with this LLM? Or inherent to all LLMs? Or to all computation?
At least, can we talk about actual software and actual research, not half-understood phantoms from overheard ghost stories?
Paul Korm
9/29/2025 8:20 pm
Cool down, chap. I don't understand your anger at the members here.
