From andrew at humeweb.com  Sun Jun  1 05:31:41 2025
From: andrew at humeweb.com (andrew at humeweb.com)
Date: Sat, 31 May 2025 12:31:41 -0700
Subject: [TUHS] On the unreliability of LLM-based search results (was: Listing of early Unix source code from the Computer History Museum)
In-Reply-To:
References:
Message-ID:

generally, i rate norman’s missives very high on the believability scale.
but in this case, i think he is wrong.

if you take as a baseline the abilities of LLMs (such as earlier versions
of ChatGPT) 2-3 years ago, they were quite suspect. certainly better than
mark v. shaney, but not overwhelmingly.

those days are long past. modern systems are amazingly adept. not
necessarily intelligent, but they can (though not always) pass realistic
tests: SAT tests, bar exams, math olympiad tests and so on. and people
can use them to do basic (but realistic) data analysis, including
experimental design, generate working code, and run that code against
synthetic data to produce visual output.

sure, there are often mistakes. the issue of hallucinations is real. but
where we are now is almost astonishing, and will likely get MUCH better
in the next year or three.

end-of-admonishment

andrew

> On May 26, 2025, at 9:40 AM, Norman Wilson wrote:
>
> G. Branden Robinson:
>
>   That's why I think Norman has sussed it out accurately.  LLMs are
>   fantastic bullshit generators in the Harry G. Frankfurt sense,[1]
>   wherein utterances are undertaken neither to enlighten nor to deceive,
>   but to construct a simulacrum of plausible discourse.  BSing is a close
>   cousin to filibustering, where even plausibility is discarded, often for
>   the sake of running out a clock or impeding achievement of consensus.
>
> ====
>
> That's exactly what I had in mind.
>
> I think I had read Frankfurt's book before I first started
> calling LLMs bullshit generators, but I can't remember for
> sure.  I don't plan to ask ChatGPT (which still, at least
> sometimes, credits me with far greater contributions to Unix
> than I have actually made).
>
> Here's an interesting paper I stumbled across last week
> which presents the case better than I could:
>
> https://link.springer.com/article/10.1007/s10676-024-09775-5
>
> To link this back to actual Unix history (or something much
> nearer that), I realized that `bullshit generator' was a
> reasonable summary of what LLMs do after also realizing that
> an LLM is pretty much just a much-fancier and better-automated
> descendant of Mark V Shaney: https://en.wikipedia.org/wiki/Mark_V._Shaney
>
> Norman Wilson
> Toronto ON

From luther.johnson at makerlisp.com  Sun Jun  1 05:46:16 2025
From: luther.johnson at makerlisp.com (Luther Johnson)
Date: Sat, 31 May 2025 12:46:16 -0700
Subject: [TUHS] On the unreliability of LLM-based search results (was: Listing of early Unix source code from the Computer History Museum)
In-Reply-To:
References:
Message-ID: <3e4339e9-bf9a-2b72-b47a-f20f81a153b5@makerlisp.com>

I think when no one notices anymore how wrong automatic information is,
and how often, it will have effectively redefined reality, and humans,
who have lost the ability to reason for themselves, will declare that AI
has met and exceeded human intelligence. They will be right, partly
because of AI's improvements, but to a larger extent because we will
have forgotten how to think. I think AI is having disastrous effects on
the education of younger generations right now; I see it in my
workplace, every day.
On 05/31/2025 12:31 PM, andrew at humeweb.com wrote:
> generally, i rate norman’s missives very high on the believability scale.
> but in this case, i think he is wrong.
> [...]

From arnold at skeeve.com  Sun Jun  1 06:09:07 2025
From: arnold at skeeve.com (arnold at skeeve.com)
Date: Sat, 31 May 2025 14:09:07 -0600
Subject: [TUHS] On the unreliability of LLM-based search results (was: Listing of early Unix source code from the Computer History Museum)
In-Reply-To: <3e4339e9-bf9a-2b72-b47a-f20f81a153b5@makerlisp.com>
References: <3e4339e9-bf9a-2b72-b47a-f20f81a153b5@makerlisp.com>
Message-ID: <202505312009.54VK97bQ4163488@freefriends.org>

It's been going on for a long time, even before AI. The amount
of cargo cult programming I've seen over the past ~10 years
is extremely discouraging. Look up something on Stack Overflow
and copy/paste it without understanding it. How much better is
that than relying on AI? Not much, in my opinion. (Boy, am I glad
I retired recently.)

Arnold

Luther Johnson wrote:

> I think when no one notices anymore how wrong automatic information is,
> and how often, it will have effectively redefined reality, and humans,
> who have lost the ability to reason for themselves, will declare that AI
> has met and exceeded human intelligence. [...]

From luther.johnson at makerlisp.com  Sun Jun  1 07:53:07 2025
From: luther.johnson at makerlisp.com (Luther Johnson)
Date: Sat, 31 May 2025 14:53:07 -0700
Subject: [TUHS] On the unreliability of LLM-based search results (was: Listing of early Unix source code from the Computer History Museum)
In-Reply-To: <202505312009.54VK97bQ4163488@freefriends.org>
References: <3e4339e9-bf9a-2b72-b47a-f20f81a153b5@makerlisp.com>
 <202505312009.54VK97bQ4163488@freefriends.org>
Message-ID: <0adb7694-f99f-dafa-c906-d5502647aaf0@makerlisp.com>

I agree.

On 05/31/2025 01:09 PM, arnold at skeeve.com wrote:
> It's been going on for a long time, even before AI. The amount
> of cargo cult programming I've seen over the past ~10 years
> is extremely discouraging. [...]
From audioskeptic at gmail.com  Sun Jun  1 08:36:14 2025
From: audioskeptic at gmail.com (James Johnston)
Date: Sat, 31 May 2025 15:36:14 -0700
Subject: [TUHS] On the unreliability of LLM-based search results (was: Listing of early Unix source code from the Computer History Museum)
In-Reply-To: <0adb7694-f99f-dafa-c906-d5502647aaf0@makerlisp.com>
References: <3e4339e9-bf9a-2b72-b47a-f20f81a153b5@makerlisp.com>
 <202505312009.54VK97bQ4163488@freefriends.org>
 <0adb7694-f99f-dafa-c906-d5502647aaf0@makerlisp.com>
Message-ID:

Well, I have to say that my experiences with "AI based search" have been
beyond grossly annoying. It keeps trying to "help me" by sliding in
common terms it actually knows about instead of READING THE DAMN QUERY.

I had much, much better experiences with very literal search methods,
and I'd like to go back to that when I'm looking for obscure papers,
names, etc. Telling me "you mean" when I damn well DID NOT MEAN THAT is
a worst-case experience.

Sorry, not so much a V11 experience here, but I have to say it may serve
the public, but only by guiding them back into boring, middle-of-the-road,
'average mean-calculating' responses that simply neither enlighten nor
serve the original purpose of search.

jj - a grumpy old signal processing/hearing guy who used a lot of real
operating systems back when and kind of misses them.

On Sat, May 31, 2025 at 2:53 PM Luther Johnson wrote:

> I agree.
> [...]
-- 
James D. (jj) Johnston
Former Chief Scientist, Immersion Networks

From als at thangorodrim.ch  Sun Jun  1 08:30:08 2025
From: als at thangorodrim.ch (Alexander Schreiber)
Date: Sun, 1 Jun 2025 00:30:08 +0200
Subject: [TUHS] On the unreliability of LLM-based search results (was: Listing of early Unix source code from the Computer History Museum)
In-Reply-To: <3e4339e9-bf9a-2b72-b47a-f20f81a153b5@makerlisp.com>
References: <3e4339e9-bf9a-2b72-b47a-f20f81a153b5@makerlisp.com>
Message-ID:

On Sat, May 31, 2025 at 12:46:16PM -0700, Luther Johnson wrote:
> I think when no one notices anymore how wrong automatic information is,
> and how often, it will have effectively redefined reality [...]
There are quite a few reports from both teachers and university lecturers
that draw a pretty grim picture, with two bad developments:

 - a significant fraction of students use LLMs to cheat their way through
   school and university, which means they'll never really learn how to
   learn, acquire knowledge, and actually understand things, leaving them
   dependent on LLM support - this is reported to range from students who
   just straight up use it as a shortcut to those who feel at a severe
   disadvantage if they don't follow suit

 - additionally, social media and short-format videos appear to have done
   an impressive job of ruining attention spans and the ability to handle
   a lack of constant stimulation, making it very hard for teachers to
   even reach their students

We'll have to see how that plays out once these kids and young adults
approach "educated workforce" age, but I'm not very optimistic for them.

But we also see the opposing trend of parents who do understand the
technology (and its potential impacts on young, forming minds) trying
to carefully restrict their children's exposure to these things, in
order to both limit the damage to them and thus give them better chances
in their future. Additionally, some of the smarter young adults seem to
be realizing what this does to their age group and are essentially going
"I won't let my brain get fried by this and will keep a careful
distance" - I have hope for them.

We're running some very large-scale uncontrolled experiments on
still-forming minds here, with the long-term consequences being at best
hard to predict and the short-term consequences not looking pretty
already.

Kind regards,
           Alex.
-- 
"Opportunity is missed by most people because it is dressed in overalls
 and looks like work."                          -- Thomas A. Edison

From luther.johnson at makerlisp.com  Sun Jun  1 08:47:49 2025
From: luther.johnson at makerlisp.com (Luther Johnson)
Date: Sat, 31 May 2025 15:47:49 -0700
Subject: [TUHS] On the unreliability of LLM-based search results (was: Listing of early Unix source code from the Computer History Museum)
In-Reply-To:
References: <3e4339e9-bf9a-2b72-b47a-f20f81a153b5@makerlisp.com>
 <202505312009.54VK97bQ4163488@freefriends.org>
 <0adb7694-f99f-dafa-c906-d5502647aaf0@makerlisp.com>
Message-ID: <5aa768ae-8853-f0da-9780-53ca4e9d486f@makerlisp.com>

I think we could call many of these responses "mis-ambiguation", or
conflation: they mush everything together, as long as the questions
posed and the answers they provide are "buzzword-adjacent", in a very
superficial, mechanical way. There's no intelligence here; it's just
amazing how much we project onto these bots because we want to believe
in them.

On 05/31/2025 03:36 PM, James Johnston wrote:
> Well, I have to say that my experiences with "AI based search" have
> been beyond grossly annoying. [...]
From tuhs at tuhs.org  Sun Jun  1 09:29:59 2025
From: tuhs at tuhs.org (Warren Toomey via TUHS)
Date: Sun, 1 Jun 2025 09:29:59 +1000
Subject: [TUHS] No Further LLM messages on TUHS
Message-ID:

All, the LLM conversation is no longer Unix-related. Please take it to
COFF if you wish to continue it. Thanks, Warren