Outliner Software Forum

AI-infused

Posted by eastgate
Sep 26, 2025 at 05:39 PM

 

Perhaps other uses of MCP are insecure, but this is NOT a concern with Tinderbox (https://www.eastgate.com/Tinderbox/).

1. Your notes are in Tinderbox. If Tinderbox wanted to do something nefarious with them (it doesn’t!), it could just go ahead and do that without MCP.
2. You’re using Claude Desktop, or ollama, or whatever, on your machine. They pose the same threat they would pose, or that ANY software poses, without MCP.
3. The client (Claude Desktop) can use MCP to try to communicate with Tinderbox. If you haven’t given permission, nothing happens. If you have, you have the same risk you’d have without MCP. (See the sketch after this list.)
4. There might be a vulnerability if some miscreant broke into your house, deleted Tinderbox, and replaced it with a different application that is also named TINDERBOX 11. But to do this, the bad guy needs access to your house, your computer, and your passwords. If the bad guy has this much access, you’re stuck — and you’d be stuck without MCP.
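
To make point 3 concrete, here is a minimal sketch in Python of how a local MCP server gates access: a tool only runs when a client the user has explicitly wired up invokes it. It assumes the official MCP Python SDK (pip install mcp); the server name and the get_note tool are hypothetical stand-ins, not Tinderbox’s actual MCP vocabulary.

    # Hypothetical local MCP server; assumes the official Python SDK (pip install mcp).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("tinderbox-notes")  # hypothetical server name

    @mcp.tool()
    def get_note(title: str) -> str:
        """Return the text of a note by title (stubbed for illustration)."""
        # A real server would ask the host application for the note here.
        return f"(contents of note {title!r})"

    if __name__ == "__main__":
        # Serves over stdio: only a client the user has configured (for
        # example, an entry in Claude Desktop's mcpServers config file)
        # can launch this process and call its tools. No config entry,
        # no connection: that is the permission model described above.
        mcp.run()

If you never add the server to the client’s configuration, nothing here ever runs; granting that permission is what creates the (same old) risk.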

There might be good reasons to ignore the great breakthrough of our time. (I remember when people wouldn’t fly because planes were too dangerous.) Insecurity of MCP is not among those reasons.

 


Posted by Lucine
Sep 26, 2025 at 10:11 PM

 

>2. You’re using Claude Desktop, or ollama, or whatever, on your
>machine. They pose the same threat they would pose, or that ANY software
>poses, without MCP.

Does that mean the TB notes won’t be used by Claude as training data? If not, how would any software pose a similar risk? MCP isn’t the biggest issue for me, and maybe not for some others either; the real concern is the training of LLMs on private notes.

 


Posted by satis
Sep 26, 2025 at 11:43 PM

 

Lucine wrote:

>Does that mean the TB notes won’t be used by Claude to train data on?

For Free, Pro, and Max consumer accounts, Claude *will* by default use your conversations and document inputs to train its AI models *unless* you explicitly opt out in your privacy settings. Anthropic uses a two-step consent process: you first accept the consumer terms, which include data-usage rights, and then separately toggle the training-consent setting in privacy settings. Until you disable that toggle, anything you submit may be exposed to AI training.

If you do not consent, your data retention is limited to 30 days, and your data will not be used for model training.

If you have a Business account, data is never used for training.

 


Posted by Amontillado
Sep 27, 2025 at 12:41 AM

 

I suspect you, too, have had a deck returned to you scrambled after a reader jam!

gunars wrote:

>Amontillado wrote:
>>Frankly, it’s enough to make me want to stop bathing, get a ratty robe,
>>and chant “Pie Jesu, Domine” while slapping myself in the forehead with
>>a deck of punch cards.
>
>Just make sure you have sequence numbers in columns 73-80.

 


Posted by marlowe
Sep 28, 2025 at 02:34 PM

 

The point eastgate makes about dark patterns is a good one, and I recognize there are beneficial use cases. Still, I’m in the camp of abstaining from LLMs as much as I reasonably can, because the temptation to outsource thinking I ought to be doing myself is too great. I check in on them about once every three months and use them extensively for a couple of days, to avoid being wholly out of touch, but I always come back to the same conclusion: they aren’t for me, and I am very distrustful of the companies running these services. I see them as a momentous development, but a negative one for society.

 


