AI-infused
Posted by MadaboutDana
Oct 1, 2025 at 08:48 AM
@eastgate, just to come back to what you were saying earlier. I tend to agree that you’re missing the point. Speaking personally, I am not “against” AI as such, but I am very wary, for very specific reasons.
First, AI is a brand-new, emergent technology, for which such absurdly overblown claims have been made that the various AI companies are promising to invest a trillion dollars in data centres, further development etc. over the next few years. That’s more money than is currently managed by the world’s largest wealth/asset fund managers. See also “Fifth” below.
Second, the impacts of AI on all kinds of things – ecological, social, commercial, mental – are only just now being assessed, usually on the basis of pitifully inadequate samples, even as AI is rapidly being incorporated into every possible software niche. This too is ridiculous. Just to give a simple example: AI-focused data centres already running in the USA at this very moment have raised electricity costs for local populations by up to 300%.
Third, the issues plaguing the modern LLM – systematic inconsistency, poor/non-existent reasoning, untruths (sorry, hallucinations), failure to follow prompts accurately (including an amazing tendency to argue that the LLM has indeed followed the prompt even when it obviously hasn’t) – are serious issues, not just minor side-effects. These are issues we’ve observed in our own testing of current LLMs, both online and on-device.
Fourth, people are basing corporate strategies on this stuff, for goodness’ sake, often with little more justification than “Sam Altman says AGI is nearly here!” This has already resulted in job losses, shrinkage and the growing ubiquity of AI slop (a.k.a. workslop), because AI makes it so easy to generate convincing-sounding material that is actually shallow and poorly reasoned.
Fifth, the hype about AI means that investment in AI currently accounts for a very significant percentage of U.S. economic growth – it appears that the USA would currently be in or near recession were it not for the vast sums being spent by speculators on investments in companies that have not yet presented convincing models for how they’re actually going to monetise these developments.
Many of your posts sound much too sensible and intelligent for me to believe that you haven’t considered these things. And yet you accuse us of being luddites for raising them. Like others, I’m surprised.
Please note, incidentally, that I keep continuous track of developments in the AI field – they are directly relevant to my work in copywriting, translation and communications – and can adduce sources for all of the above, including our own extensive (and ongoing) investigations of generative AI.
Please note, also, that my criticisms of MCP have nothing to do with AI as such, but with the architecture of this vital intermediate layer. This is simply basic due diligence, not some kind of luddite rejection.
Posted by eastgate
Oct 1, 2025 at 01:48 PM
> Please note, also, that my criticisms of MCP have nothing to do with AI as such, but with the architecture of this vital intermediate layer.
OK: so you’re opposed to pipes — the foundation of UNIX?
Or you’re against JSON?
What exactly are you criticizing?
Don’t get me wrong: I’m facing a long day of working on plumbing because MCP’s error mechanism does not conform to the design I expected it to use. I think my design would have been better. They didn’t ask me.
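For anyone wondering what that plumbing actually looks like: MCP rides on JSON-RPC 2.0, so a failure comes back as a JSON object carrying a numeric code and a message. Here’s a minimal Python sketch of such a payload – the id and error details are illustrative, not quoted from the spec:

```python
import json

# Minimal sketch of a JSON-RPC 2.0 error response, the wire format MCP
# is built on. The id and error details below are illustrative only.
error_response = {
    "jsonrpc": "2.0",
    "id": 7,                 # echoes the id of the failed request
    "error": {
        "code": -32602,      # standard JSON-RPC code for "Invalid params"
        "message": "Invalid params",
        "data": {"detail": "missing required argument 'path'"},
    },
}

def is_error(msg: dict) -> bool:
    # Per JSON-RPC 2.0, a response carries either "result" or "error",
    # never both; the presence of "error" marks failure.
    return "error" in msg and "result" not in msg

wire = json.dumps(error_response)   # what actually travels over the pipe
decoded = json.loads(wire)
print(is_error(decoded))            # True
```

Whether tool-level failures should surface as protocol errors like this, or as ordinary results with an error flag inside them, is exactly the kind of design decision I’d have made differently.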
Posted by Paul Korm
Oct 1, 2025 at 02:22 PM
Well stated, Bill. Thank you. I also keep track of developments in AI and tech in general, and I do hands-on exploration of all the models as they evolve. If anything, robust skepticism about world-changing claims is a requirement, as is getting into the dirt and understanding the technology.
I suggest we increasingly need to be more wary than ever about the motivations of tech billionaires. For example, Sam Altman’s statement that Sora will create video using copyrighted material unless the copyright owners explicitly “opt out”. Amazing hubris. (WSJ https://bit.ly/48OSCBs)
(Ben Evans is Benedict Evans, a former partner at a16z (Andreessen Horowitz). I recommend following Evans’ analyses. https://bit.ly/4gTSVNB)
MadaboutDana wrote:
>Please note, incidentally, that I keep continuous track of developments
>in the AI field – they are directly relevant to my work in
>copywriting, translation and communications – and can adduce
>sources for all of the above, including our own extensive (and ongoing)
>investigations of generative AI.
Posted by Amontillado
Oct 1, 2025 at 09:14 PM
I’ve expressed doubts about AI hype. My doubts remain, so perhaps I’m a Luddite. Or, maybe sometimes traditional skills offer benefits beyond the immediate gains of newer methods.
Slide rules aren’t great for precision or speed. Grab a calculator if you want a quick precise answer, but slide rules inspire number sense in ways electronics don’t. I think math studies should include at least the basics of how to use a slide rule. Truth to tell, they’re kind of fun, too.
And there I go again. Referring to slide rules in the present tense. I’m hopeless.
Posted by satis
Oct 1, 2025 at 11:08 PM
Some technological skepticism is understandable, but Luddism is misplaced. My own experience with AI-driven tools has shown me just how transformative they can be.
A relative needed help making sense of a thick stack of medical reports, including bloodwork and details from a thoracic echocardiogram. I spent hours researching the results, trying to understand the terminology so I could explain what I could in plain language.
Only afterward did I think to try an LLM. I input all the data into my ChatGPT Pro account (data which ChatGPT does not retain) and asked it to analyze the results, explain them in simple, non-technical language, summarize the findings, and then generate follow-up questions for the relative’s cardiologist based on the data.
I was seriously shocked: in less than ten seconds, ChatGPT delivered far clearer, more comprehensive insights than I had been able to produce in hours (with zero errors on subsequent verification). It provided concise summaries of the tests and useful follow-up questions for the cardiologist that flowed logically from the data – questions I would not have been able to compose myself.
I’ve also used LLMs to collate and analyze tens of thousands of words I’ve written and collected, and I’ve been satisfied by the results – sometimes startled by the emergent capabilities of the service: it could comprehend inferences that weren’t spelled out, detect nuances like sarcasm, and draw insights and unanticipated connections from far-flung text in different sections of the writing.
Like it or not, this technology is the real deal, and it’s improving at an extraordinary pace.