Outliner Software
Tinderbox goes AI


Pages:  < 1 2 3 > 

Posted by tberni
Aug 17, 2025 at 09:12 PM

 

I think so too!!

Amontillado wrote:
>AI is pretty amazing, particularly if your definition of “amazing”
>includes a tinge of terror.
>
>Or at least humor.
>
>Yesterday, I wrote a letter that weighed 0.995 ounces by my scales. It
>could easily read 1.005 ounces on a different scale.
>
>The resolution of postal measurement was of critical interest, so I
>asked Google if I should put more postage on a 0.99 ounce letter. Would
>my measurement of 0.99 fall within tolerance if the post office saw a
>different weight?
>
>Absolutely not, the Google AI bot told me. A first class stamp covers
>one ounce. Since 0.99 ounces is far more than one ounce, I’d have to
>lick more stamps.
>
>So I asked the question a different way, got a different answer, but the
>conclusion was the same. Since 0.99 ounces is so much more than one
>ounce I would have to use more postage.
>
>In that experience I think I saw the downfall of mankind.
>
>I added postage. Not because I thought 0.99 ounces was more than one but
>because I didn’t want to risk nondelivery.
>
>Sensible, I thought, until I realized the AI’s silly argument prevailed.
>Whether or not I believed 0.99 > 1, my actions paralleled a ludicrous
>conclusion.
>
>If that’s not the end of the world, it’s at least the beginning of a lot
>of post AI political careers.
>
>We’re sunk.
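For what it’s worth, the arithmetic the bot fumbled is trivial to write down. A minimal Python sketch of the actual question (the ±0.005 oz tolerance is an assumption inferred from the 0.995/1.005 readings in the quote, not an official postal figure):

```python
FIRST_CLASS_LIMIT_OZ = 1.0   # one first-class stamp covers up to 1 oz
SCALE_TOLERANCE_OZ = 0.005   # assumed scale resolution (0.995 vs 1.005 readings)

def needs_extra_postage(measured_oz, tolerance=SCALE_TOLERANCE_OZ):
    """True only if the letter could exceed the limit within scale tolerance."""
    return measured_oz + tolerance > FIRST_CLASS_LIMIT_OZ

# 0.99 + 0.005 = 0.995, which is still under 1.0 -> no extra postage needed
print(needs_extra_postage(0.99))
# 0.999 + 0.005 = 1.004, which could exceed 1.0 -> extra postage is prudent
print(needs_extra_postage(0.999))
```

So even at the top of its tolerance band, a 0.99 oz reading stays under one ounce; the bot’s “0.99 is far more than one ounce” fails a one-line comparison.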

 


Posted by Paul Korm
Aug 18, 2025 at 02:47 PM

 

I’m often reminded that the notice at the bottom of most AI chats (“ChatGPT can make mistakes. Check important info.”) is a modern take on “Abandon all hope, ye who enter here”.  When ChatGPT heads down the path of being mistaken, it becomes unable to stop messing up.  I get into dialogs like: “eleven (11) letter word for inescapable facts”.  The answer: “certain”.  My reply: “no, it has to be 11 letters”.  The answer: “you’re right, I apologize.  Try ‘sure’”.  And on and on, never getting to “ineluctable”.

Amontillado wrote:
>AI is pretty amazing, particularly if your definition of “amazing”
>includes a tinge of terror.
>
>Or at least humor.

 


Posted by satis
Aug 19, 2025 at 01:18 AM

 

Paul Korm wrote:

>I’m often reminded that the notice at the bottom of most AI chats
>(“ChatGPT can make mistakes. Check important info.”) is often a modern
>take on “Abandon all hope, ye who enter here”.  When ChatGPT heads down
>the path of being mistaken, it becomes unable to stop messing up.  I
>get into dialogs like “eleven (11) letter word for inescapable facts”
>the answer “certain”.  My reply “no, it has to be 11 letters”.  The
>answer “you’re right, I apologize.  Try ‘sure’”.  And on and on, never
>getting to “ineluctable”. 

It’s incredibly frustrating sometimes, but other times it’s quite amazing. I’ve been shopping for music-related electronics and I’ve had questions about impedance and noise, and ChatGPT’s answers to my questions and follow-ups have been excellent, on par with or better than website results after I’ve done a lot of searching. But when I asked for comparisons to specific gear or alternatives, it gave recommendations that were mostly wrong and didn’t meet my needs. And when I called it out for repeated mistakes, it acknowledged them, then confidently offered more bad advice.

As an experiment I pasted in a lengthy, contentious Reddit discussion thread and asked it to summarize the thread, point out any logical lapses or bad arguments, and describe what and how one specific participant was arguing, and it did a shockingly detailed analysis that I mostly agreed with. It misunderstood a portion of the discussion and its analysis there was off-base, but interestingly so. It was incisive in analyzing the arguments in the thread and understood context in a way that confounded me.

When you consider how much it and other AIs like Perplexity (which can give better results than ChatGPT, especially when uploading images) have progressed in a matter of months, I think we underestimate what these technologies will be like in just another two years.

Recently Matt Growcoot uploaded a pic taken in Medellin while suspended in a gondola, with no obvious geological or topographical descriptors, and no sign of him being in a gondola, and ChatGPT guessed it all, including the gondola.

https://petapixel.com/2025/04/18/chatgpt-is-scarily-good-at-guessing-the-location-of-a-photo/

In other uploads it was slightly off, or even wrong. But the shockingly good results, with fast continued improvement, suggest we’re rushing towards an inflection point of some kind, technologically, culturally, economically.

 


Posted by Paul Korm
Aug 19, 2025 at 08:03 PM

 

Or, maybe Growcoot just happened to hit the sweet spot of ChatGPT’s training data with one image, but there was no training data that met the contexts of other images.

I think we tend to assume that because this chatting thing seems to be having an actual conversation with us, it must be “thinking”, and because it often hits the right answer (or close to it), it must “know” a lot of things.  But it’s still just a very fast, very expensive software trick with a lot of hard boundaries.  I often run into areas in chat with ChatGPT or Claude where it becomes obvious that the bot is in a corner where there is no training data relevant to the comment I just made in the conversation.  These AIs rarely respond with a simple “I don’t know”.  Instead, they extrapolate from the data at hand.

satis wrote:

>Recently Matt Growcoot uploaded a pic taken in Medellin while suspended
>in a gondola, with no obvious geological or topographical descriptors,
>and no sign of him being in a gondola, and ChatGPT guessed it all,
>including the gondola.
>
>https://petapixel.com/2025/04/18/chatgpt-is-scarily-good-at-guessing-the-location-of-a-photo/
>
>In other uploads it was slightly off, or even wrong. But the shockingly
>good results, with fast continued improvement, suggest we’re rushing
>towards an inflection point of some kind, technologically, culturally,
>economically.

 


Posted by Stephen Zeoli
Aug 20, 2025 at 03:19 PM

 

Forgive me if I posted this quote here before, but to me this sums up what I want from AI:

“I don’t want AI to love my kids for me, I want it to do the dishes.”

In this quote, “to love my kids for me” is a metaphor for thinking. I don’t want it to do the thinking for me. “Do the dishes” means mundane crap. As an example of the latter, I have been impressed with how the AI cleans up the emails I forward into the new Mem.ai.

Steve

 




© 2006-2025 Pixicom - Some Rights Reserved. | Tip Jar