Outliner Software


 




 


AI-infused


Pages: ‹ First  < 2 3 4 5 6 7 8 9 10 >  Last ›

Posted by Amontillado
Sep 30, 2025 at 03:17 AM

 

My concern is the validity of training material. I can be very careful, but the next guy won’t be. Experience, or maybe paranoia, suggests I’ll be outrun, hobbled by my conservative nature and marginalized by what should be lesser sources.

Here are my thoughts, inspired by LinkedIn’s recent announcement it will exfiltrate user data to Microsoft for the purpose of AI training: https://thirdreef.wordpress.com.

I hope it’s amusing enough to take the edge off of any offense. None is intended.

 


Posted by eastgate
Sep 30, 2025 at 02:20 PM

 

“I can be very careful, but the next guy won’t be.”

This argument is not new. It was adduced, for example, to show that students ought not to be permitted to use libraries, because they might make injudicious use of texts they did not fully understand, and this could lead them to adopt heretical views. It has more recently been adduced to argue that people should not be permitted to install software without official authorization.

If you’re interested in knowledge representation, or reasoning, or personal knowledge management, or note-taking, I don’t think you can stand aside.

 


Posted by Stephen Zeoli
Sep 30, 2025 at 04:10 PM

 

It is not only reasonable to be skeptical of new technology, it is irresponsible not to be. This doesn’t mean no new technology; it just means the burden of proof should be shared with the developers of the technology. Chemists derided Rachel Carson as a hysterical woman until, oops, maybe DDT is bad for the environment. Had more thought and research gone into the ideal way to power automobiles, perhaps we’d have had electric cars from the beginning and global warming would not be the threat it is. I am not saying that every issue has a resolution. But there is no harm in being skeptical and expecting the positive claims to be proven.

Steve Zeoli

 


Posted by Paul Korm
Sep 30, 2025 at 08:57 PM

 

Anthropic today issued a video about Claude Sonnet 4.5 and its ability to ingest financial statements and produce an acquisition analysis, recommendations, plus executive briefings. Pretty neat trick. But in all my years doing acquisition work and working with trained senior analysts, I would never have believed a machine model, no matter how clever, could match the depth of real-world experience of the humans who were able to suss out the red flags purposefully hidden in the numbers. I’m not saying Claude’s a fraud, but I do worry that lazy bosses looking to save a buck will trust the Claudes of the world while bypassing the hard-to-define skills of human analysts. We will always need great robotics and great human brains working in tandem, each contributing its relative strengths, not one replacing the other. I don’t care what people do with AI in academia, the humanities, etc., but I do care about how decisions are made in fields that affect our physical quality of life.

Tech M&A guru Ben Evans wrote in a recent newsletter about “profound naïvety”:

“OpenAI published a paper trying to create a library of discrete tasks done by expert, experienced white-collar workers, and then benchmarking LLMs against them. Conclusion: AI will have parity with industry experts sometime next year. There’s a profound naivety in these kinds of analyses, that act as though you can reduce the job of someone in their mid or late 30s to ‘how well did they make that PPT/XLS/DOC?’ and ignore everything else they do, and why they do it, and indeed what exactly went into that document. It reminds me of the joke about the physicists who are asked to predict which horse will win a race, and they say ‘First, we presume the horse is a perfect sphere…’”

 


Posted by eastgate
Sep 30, 2025 at 09:42 PM

 

Earlier today, I read a rather sophisticated Twitter thread summarizing a series of recent papers that study prompts.  The Twitter thread summarized the conclusions and provided a nice picture for each paper, but omitted links (and authors) of the papers.  I asked for the links (as did a dozen other readers); he says “I posted them.” Where?  >crickets<.

https://x.com/hasantoxr/status/1973004643677704517

It might be useful to include a link to “Tech M&A guru Ben Evans” and his newsletter. And this guru might actually have linked to the paper “OpenAI published”. (I don’t know a whole lot of M&A guys. Do they really make decisions based on an unvetted machine model? Or a single 30-something analyst?)

@SteveZeoli: we’re not being skeptical or cautious. (I’ve written a good deal of AI skepticism myself, especially the last sections of “The Web At War: Hypertext, Social Media, and Totalitarianism”  https://dl.acm.org/doi/10.1145/3511095.3536365 ) We’re ignoring (and disbanding) amazing lines of research at alarming speed, in order to lock in short-term benefits for first-movers.  The cost is likely to be steep, as I am told the drone war with China is expected to kick off in about 18 months. One of the great advantages of 19th and early 20th century US tech against Europe was our willingness to build what would work and rebuild it if it didn’t. Britain built railways that would last 50 years; we built railways that would last 5. By the time ours broke, they were obsolete anyway.  We’ve thrown that lesson away.

Finally, those physicists were not assuming a spherical horse. In the classical form, they assumed a spherical cow, for the purpose of estimating the space that a cow occupies. The version where I first saw it is better: Stephen Stellman’s 1973 letter to Science postulating a spherical chicken: https://www.science.org/doi/10.1126/science.182.4119.1296.c

 




© 2006-2025 Pixicom - Some Rights Reserved.