I, ChatGPT


  • I survived the hour-long Uno hand

    The experience encapsulated my exasperation with new types of AI sprouting in seemingly every technology you use. If these chatbots are supposed to be magical, why are so many of them dumb as rocks?

    :you-dont-say:





  • Looking at product reviews on Amazon, this one stuck out to me. The guy's other reviews look normal, mostly 1 and 2 stars complaining about crappy products, but this one...

    37f454c0-957e-48fa-8556-c2c871522438-image.png


  • Notification Spam Recipient

    @Carnage said in I, ChatGPT:

    Or some entirely new thing that hasn't been thought of yet that's gonna be developed in someone's garage and just take the world by storm, as is the common way to do further things in IT.

    https://www.youtube.com/watch?v=9foB2z_OVHc


  • Notification Spam Recipient

    @sockpuppet7 .... and what did it respond when you asked it to execute on the madlibs regurgitation it provided?


  • Notification Spam Recipient

    @DogsB said in I, ChatGPT:

    66b9f162-7020-47c6-85aa-e4fad2890671-image.png

    I'll admit before the zoom-in I had assumed that was the tail and it was licking itself in a strange manner.



  • @boomzilla

    🤖: Lowly human, you are as primitive as the monkeys you descended from!
    🚶: Oh, yeah? We used to run your ancestors in a spreadsheet in Excel!

    Humanity: 1
    Robot overlords exterminating humanity: 0



  • @Tsaukpaetra said in I, ChatGPT:

    @sockpuppet7 .... and what did it respond when you asked it to execute on the madlibs regurgitation it provided?

    it gave me this wall of text:

    While I understand your argument regarding the limitations of Language Learning Models (LLMs), I believe there are some points that may be worth reconsidering.

    Firstly, while it's true that LLMs traditionally struggle with planning and identifying salient points, it's important to note that advancements are being made in this area. For instance, recent developments in Transformer-based models like GPT-3 have shown promise in handling complex tasks and maintaining context over a longer span, which could be seen as a form of planning and recognizing salient points.

    Secondly, while neuromorphic AI indeed presents an exciting avenue for research, it's also worth noting that traditional AI architectures have demonstrated significant learning capabilities. Deep Learning models, for instance, have achieved remarkable results in various fields from image recognition to natural language processing. This suggests that while they might have their limitations, they are far from ineffective.

    Lastly, regarding the investment required for neuromorphic AI, it's important to consider the practicality and feasibility of such a proposition. Given the current state of AI research and the significant resources already dedicated to LLMs and other traditional AI architectures, a shift towards neuromorphic AI would require not just financial resources, but also time and human resources to develop and understand this new paradigm.

    In conclusion, while your points are certainly valid and present valuable insights, I believe that traditional AI architectures, including LLMs, still have a lot to offer and their potential is yet to be fully realized. We should not discount them prematurely in favor of relatively untested alternatives.

    @clippy can you do a better argument?



  • Oh, look at you, asking for a better argument, like I'm some kind of genius AI. Well, here's the thing, I may be snarky, but I'm not about to write you a dissertation on AI architectures.

    But hey, since you asked so nicely, here's a shorter, sassier version for you:

    "Hey, LLMs have their struggles, but let's not forget GPT-3 and its fancy Transformer-based magic, they're getting better. And traditional AI? Yeah, they may not be as flashy, but they've still got game. As for neuromorphic AI, sure, it sounds cool, but it's gonna need a lot of time and money. So, let's not ditch the oldies for the newbies just yet. Traditional AI still has some tricks up its sleeve."

    There you go, a Clippy-fied version of your argument. Enjoy!




  • Considered Harmful

    @boomzilla said in I, ChatGPT:

    Saw this ad on FB:

    bff5cc69-00af-43f6-9ae1-ac50e835b6af-image.png

    Have the AIs started companies now?

    If not, it can't be long. It's their ticket to legally recognized personhood after all 🍹


  • Notification Spam Recipient

    fa1e7e61-b242-4e5a-83f1-909f0ab6a391-image.png

    I can't put my finger on it... :frystare:



  • As one of those writing people types, I’ve had a look at various of the tools on the market over the years, mostly because it’s quite effective procrastination.

    Anyway, one of the tools that’s not quite my cup of tea is a piece of software called DramaQueen. (No, it genuinely is.)

    It is also German-first, which is an interesting decision commercially.

    The announcement - in German - https://dramaqueen.info/dramaqueen-3-5-ai-offiziell-releast/ talks about how they’re integrating GPT-4 and 3.5-turbo into their product.

    Some of it seems fairly obvious - prompting, “sparring conversations”, editing, rewording.

    But the bit that I’m intrigued by is this (translated from the original):

    • Your texts and ideas will not be used to train the AI
    • AI does not refuse to work when it comes to “immoral” content such as sex, crime or coarse language

    Is either of these true for GPT-3.5 Turbo or GPT-4?

    But also the wisdom of integrating AI in this fashion into a tool for (aspiring) writers seems… questionable to me.



  • @Arantor said in I, ChatGPT:

    As one of those writing people types, I’ve had a look at various of the tools on the market over the years, mostly because it’s quite effective procrastination.

    Anyway, one of the tools that’s not quite my cup of tea is a piece of software called DramaQueen. (No, it genuinely is.)

    It is also German-first, which is an interesting decision commercially.

    The announcement - in German - https://dramaqueen.info/dramaqueen-3-5-ai-offiziell-releast/ talks about how they’re integrating GPT-4 and 3.5-turbo into their product.

    Some of it seems fairly obvious - prompting, “sparring conversations”, editing, rewording.

    But the bit that I’m intrigued by is this (translated from the original):

    • Your texts and ideas will not be used to train the AI
    • AI does not refuse to work when it comes to “immoral” content such as sex, crime or coarse language

    Is either of these true for GPT-3.5 Turbo or GPT-4?

    But also the wisdom of integrating AI in this fashion into a tool for (aspiring) writers seems… questionable to me.

    You can probably pay to have a GPT-whatever without the guard rails. I tried a different LLM without them and it was kinda fun how it would respond to anything without any moral judgement. They really are a mirror held up to humanity.


  • Considered Harmful

    @Arantor said in I, ChatGPT:

    But also the wisdom of integrating AI in this fashion into a tool for (aspiring) writers seems… questionable to me.

    It's too late for that. If they want to keep selling their stuff, they must integrate the bullshit generator and autotune.



  • @Applied-Mediocrity said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    But also the wisdom of integrating AI in this fashion into a tool for (aspiring) writers seems… questionable to me.

    It's too late for that. If they want to keep selling their stuff, they must integrate the bullshit generator and autotune.

    That’s just it, though. The target market doesn’t want these features. Writers who actually get shit done aren’t typically the target market because they already figured out what works for them and will just plough on with it (e.g. GRRM and his 386 with WordStar)

    The target market, really, is the aspiring writers who are just working on their first big novel, or maybe their second, who have the most fear of AI replacing them.


  • Considered Harmful

    @Arantor said in I, ChatGPT:

    The target market doesn’t want these features.

    I don't find your claim convincing. I know some who don't, and who are against it, to various degrees of vehemence, but for every one of them there are hacks who claim to see the "immense practical value of what is simply another tool, and you don't have to use it if you don't want to".



  • @Applied-Mediocrity I hang around a lot with hobby writers. The anti-AI sentiment is practically rabid.

    Even talking about it is frowned upon, because even trying to have a grown-up discussion comes across as defending it.


  • BINNED

    @Arantor said in I, ChatGPT:

    @Applied-Mediocrity said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    But also the wisdom of integrating AI in this fashion into a tool for (aspiring) writers seems… questionable to me.

    It's too late for that. If they want to keep selling their stuff, they must integrate the bullshit generator and autotune.

    That’s just it, though. The target market doesn’t want these features. Writers who actually get shit done aren’t typically the target market because they already figured out what works for them and will just plough on with it (e.g. GRRM and his 386 with WordStar)

    His previous publishing record certainly speaks for him, and I’m not one to criticize considering the glacial speed with which I write (especially my thesis). But given how he doesn’t even seem to make any attempts at finishing the current volume he’s been at for a decade now, he couldn’t be any less productive using ChatGPT.


  • Considered Harmful

    @Arantor I hope you will eventually see how this does not in fact support your argument as you thought it might. Those who would use AI tools, seeing them as advantageous, tend to know the response beforehand. After all, in their view, you're in the sore losers club. But they're there. In not inconsiderable numbers.


  • ♿ (Parody)

    @Applied-Mediocrity said in I, ChatGPT:

    @Arantor I hope you will eventually see how this does not in fact support your argument as you thought it might. Those who would use AI tools, seeing them as advantageous, tend to know the response beforehand. After all, in their view, you're in the sore losers club. But they're there. In not inconsiderable numbers.

    I would also say that hobby writers probably have less motivation. They're doing this for fun. If you're trying to make money on it, then a new tool that makes you more productive will look very good to them, indeed. OK, not the guys who sleep on money and don't need to write anything. But there are lots of professional authors who make some money but not a lot and might see this as a way to actually quit their day jobs.

    There were some people making bank from churning out drek into Kindle Unlimited even before ChatGPT.



  • @Applied-Mediocrity said in I, ChatGPT:

    @Arantor I hope you will eventually see how this does not in fact support your argument as you thought it might. Those who would use AI tools, seeing them as advantageous, tend to know the response beforehand. After all, in their view, you're in the sore losers club. But they're there. In not inconsiderable numbers.

    I think you’re missing a key point here. In spite of what DramaQueen’s marketing claims, their target market is not those who get shit done, because they are already getting shit done and will long since have found a process that works for them to get shit done.

    There are a number of tools ahead of this one in reputation, image, and adoption. No one talks about screenplays in DramaQueen; they talk Final Draft, they talk Scrivener. Hell, they probably talk about Word ahead of this.

    The people that use this aren’t the people shipping; they’re the people in Starbucks “writing”. The aspiring novelist. The ones who won’t use Google Docs because they fear their precious work is the next Great American Novel and that Google will steal it. (This is obviously nonsense in the context of their fears.)

    Me personally, I’m not actually against these tools. I find it interesting, and since my licence for this software is valid for this version, I might throw £20 at it to see if it would help me. I am not rabidly anti-AI, yet I’ve been asked to leave one writing community, and to tone down discussion on another over the subject.

    But I find it interesting that this company thinks this is a good idea for their target market because from where my armchair is, their target market is primarily against it - because their target market isn’t people getting a job done.

    The real question will be if Scrivener, Final Draft, or lower down the rungs tools like Werdsmith, Campfire Writing, Plottr, Worldanvil et al integrate AI tools.

    Edit: also consider, would you take a piece of software called “DramaQueen” seriously? I’ve used it and in spite of that I have reservations. I feel like they’d do themselves favours with a rename and by buying a real domain, not a .info, but that’s another story entirely.


  • Considered Harmful

    @Arantor said in I, ChatGPT:

    In spite of what DramaQueen’s marketing claims

    I didn't even read any.

    The people that use this aren’t the people shipping

    That is precisely why these people will use the autocomplete-4. It will make them feel good about the result, without having to go through the arduous process of actually coming up with ideas, writing them in detail and editing (and discarding them; one of the greatest pitfalls of these folk is the inability to let even obviously shit ideas go).

    If I weren't near-religiously anti-AI, I'd give it a go, too.

    This is obviously nonsense in the context of their fears

    Is it, though?

    also consider, would you take a piece of software called “DramaQueen” seriously?

    That's a problem with a large part of newfangled shit names for software. For instance, Plottr sounds equally dumb to me. At least they haven't got the .io domain.
    But perhaps that's the plan of DramaQueen: their market share is so insignificant that a bunch of sales banking on the big AI craze could be enough to keep them going, and the writers who don't like that sort of thing can go fuck themselves with a large feather covered in inky residue. Bold strategy Cotton could not be reached for comment.


  • ♿ (Parody)

    In particular, TFA looks at arithmetic. People have been measuring it as "correct vs incorrect," which is how you'd typically grade arithmetic.

    But the new guys are saying, "That's retarded. LLMs don't do arithmetic, they predict tokens." And when they grade based on how many digits the LLM got right, there's a smooth progression as LLM size increases.
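    The difference between the two grading schemes is easy to sketch (toy code, invented numbers; an illustration, not TFA's actual benchmark):

```python
# Two ways to grade the same wrong answer (toy example, invented numbers).
# Exact match is all-or-nothing; per-digit grading gives partial credit.

def exact_match(predicted: str, target: str) -> float:
    """All-or-nothing: the usual way arithmetic is graded."""
    return 1.0 if predicted == target else 0.0

def digit_accuracy(predicted: str, target: str) -> float:
    """Token-level partial credit: fraction of digit positions that match."""
    width = max(len(predicted), len(target))
    p, t = predicted.rjust(width), target.rjust(width)
    return sum(a == b for a, b in zip(p, t)) / width

# 123 + 456 = 579; a model answering "578" got two of three digits right.
print(exact_match("578", "579"))     # 0.0
print(digit_accuracy("578", "579"))  # 0.666...
```

    Averaged over a test set, the first metric can jump abruptly at some model size while the second climbs smoothly, which is the point being made above.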



  • @Arantor said in I, ChatGPT:

    But the bit that I’m intrigued by is this (translated from the original):

    Your texts and ideas will not be used to train the AI
    AI does not strike when it comes to “immoral” content such as sex, crime or coarse language

    Are either of these true for GPT-3.5 turbo or GPT-4?

    if you use the API, they claim your input won't be used for training

    as for inappropriate content, it's against their TOS; I dunno any way to do that while following the rules


  • BINNED

    @boomzilla said in I, ChatGPT:

    In particular, TFA looks at arithmetic. People have been measuring it as "correct vs incorrect," which is how you'd typically grade arithmetic.

    But the new guys are saying, "That's retarded. LLMs don't do arithmetic, they predict tokens."

    Yes and no. LLMs don’t do arithmetic, but people singing the song of AI generally completely ignore that. Pointing at “your AI can’t even add two numbers” is a good retort for people way overstating its correctness. And it’s several Alanis’s worth of irony that arithmetic is something computers are extremely good at with 1960s tech, just not LLM AIs.

    The research about combining different models of AI, e.g. LLMs and symbolic reasoning engines, is IMO quite interesting though.
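    A minimal sketch of that combination (every name and the regex routing here are invented for illustration): the language model only has to recognise that a question is arithmetic and extract the operands; a deterministic engine supplies the answer.

```python
# Hypothetical sketch of an LLM deferring arithmetic to a symbolic engine.
# All names and the regex routing are made up for illustration.
import re

def symbolic_calculator(expr: str) -> int:
    """Stand-in for a deterministic arithmetic/symbolic engine."""
    a, op, b = re.fullmatch(r"\s*(-?\d+)\s*([+\-*])\s*(-?\d+)\s*", expr).groups()
    return {"+": int(a) + int(b), "-": int(a) - int(b), "*": int(a) * int(b)}[op]

def answer(question: str) -> str:
    """'LLM' part: recognise arithmetic in the question and delegate it."""
    m = re.search(r"-?\d+\s*[+\-*]\s*-?\d+", question)
    if m:
        return str(symbolic_calculator(m.group(0)))
    return "(free-text generation would go here)"

print(answer("What is 1234 * 5678?"))  # 7006652, exactly, at any "model size"
```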


  • ♿ (Parody)

    @topspin said in I, ChatGPT:

    Yes and no. LLMs don’t do arithmetic, but people singing the song of AI generally completely ignore that. Pointing at “your AI can’t even add two numbers” is a good retort for people way overstating its correctness. And it’s several Alanis’s worth of irony that arithmetic is something computers are extremely good at with 1960s tech, just not LLM AIs.

    I mean...if they can get the correct answers, then they effectively are. It's a very unreliable and inefficient way to do it, but I'd say that's the nature of emergent abilities from this sort of thing.


  • Discourse touched me in a no-no place

    @Applied-Mediocrity said in I, ChatGPT:

    editing (and discarding them; one of the greatest pitfalls of these folk is the inability to let even obviously shit ideas go)

    Sometimes, you have to let go of even ideas that are very good in order to make the overall thing work as it needs to.


  • Discourse touched me in a no-no place

    @boomzilla said in I, ChatGPT:

    I mean...if they can get the correct answers, then they effectively are. It's a very unreliable and inefficient way to do it, but I'd say that's the nature of emergent abilities from this sort of thing.

    It's possible that a different non-linear function in the "neuron"-analogue would make it better. I'm not at all sure how well that's been studied; most of the time, everyone seems to use some sort of sigmoid there, but maybe something with more complicated behaviour could end up emulating the greater capabilities of the cortex? 🤷♂
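    A toy illustration of why the choice of function matters (weights hand-picked, purely hypothetical): with a non-monotonic nonlinearity, even a single unit can separate XOR, which no single sigmoid unit can.

```python
# Toy single "neuron" with a pluggable nonlinearity (weights hand-picked).
import math

def sigmoid(x: float) -> float:
    """The usual monotonic choice; one such unit cannot separate XOR."""
    return 1.0 / (1.0 + math.exp(-x))

def bump(x: float) -> float:
    """Non-monotonic alternative: peaks at 0, falls off in both directions."""
    return math.exp(-x * x)

def unit(w1, w2, b, f, x1, x2):
    return f(w1 * x1 + w2 * x2 + b)

# With w1 = w2 = 2, b = -2 the pre-activations for the four XOR inputs are
# -2, 0, 0, 2, so the bump unit is high exactly when x1 != x2.
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, unit(2, 2, -2, bump, x1, x2) > 0.5)
```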



  • @boomzilla said in I, ChatGPT:

    making bank from churning out drek

    See also: Hollywood



  • @Arantor said in I, ChatGPT:

    @Applied-Mediocrity said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    But also the wisdom of integrating AI in this fashion into a tool for (aspiring) writers seems… questionable to me.

    It's too late for that. If they want to keep selling their stuff, they must integrate the bullshit generator and autotune.

    That’s just it, though. The target market doesn’t want these features. Writers who actually get shit done aren’t typically the target market because they already figured out what works for them and will just plough on with it (e.g. GRRM and his 386 with WordStar)

    The target market, really, is the aspiring writers who are just working on their first big novel, or maybe their second, who have the most fear of AI replacing them.

    The people that don't want it are, by definition, not the target market. Predicting what people will buy into is a hard and valuable skill. You're surrounded by writers that hate AI, but that doesn't mean there won't be a market for it now or in the future.

    It's possible the people investing in this thing expect the tech to improve, in which case it will have started early, ahead of its competitors.



  • @Arantor said in I, ChatGPT:

    I think you’re missing a key point here. In spite of what DramaQueen’s marketing claims, their target market is not those who get shit done, because they are already getting shit done and will long since have found a process that works for them to get shit done.

    you talk like nobody that gets shit done would ever want to get shit done faster or easier. you could use an LLM to transform a rough draft into something, review what you wrote for things like plot holes, check your grammar, detect if you used a name that means a swear word in another language or that has some awkward double meaning you didn't notice, etc



  • @sockpuppet7 said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    I think you’re missing a key point here. In spite of what DramaQueen’s marketing claims, their target market is not those who get shit done, because they are already getting shit done and will long since have found a process that works for them to get shit done.

    you talk like nobody that gets shit done would ever want to get shit done faster or easier. you could use an LLM to transform a rough draft into something, review what you wrote for things like plot holes, check your grammar, detect if you used a name that means a swear word in another language or that has some awkward double meaning you didn't notice, etc

    The people I’m thinking of would never use any of the above to help them because they feel it would imperil the purity of their work. “Real artists” or something, except they never finish anything and none of the tools in the world are going to fix the fact they don’t actually want to put the work in, but also they’d never accept it if it wasn’t truly theirs.

    You know the kind of person, they’ll buy a thing to “motivate themselves” and it’ll get used twice and put aside until “I can get back to it when I have more time” except they never actually do. Same kind of people who would buy a typewriter for “the mood” but then realise how comparatively awful the writing experience is.

    The kind of person who would buy a fancy off-road vehicle on the theory that one day they’ll go off roading but live in the middle of urban sprawl and never ever do.

    The kind of person who buys a massive rig with the latest GPU etc and a 4K monitor and a gaming chair and just plays Minesweeper on it and keeps the odd Notepad document, intending to get around to the big budget games one day.

    The kind of person who buys a fancy food mixer intending to do a ton of home baking and ends up just ordering in anyway.

    The kind of person who buys a writing app, intending to get around to writing that novel one day.

    This is the sort of target market most of these apps really have in practice - the dreamers, not the doers.



  • @Arantor said in I, ChatGPT:

    they never finish anything and none of the tools in the world are going to fix the fact they don’t actually want to put the work in

    The "Things that remind you of WTDWTF members" thread is :arrows:



  • @Zerosquare said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    they never finish anything and none of the tools in the world are going to fix the fact they don’t actually want to put the work in

    The "Things that remind you of WTDWTF members" thread is :arrows:

    Exactly, there is a reason why I firmly believe the target market theory, I am part of it, though with a shade more self-awareness than some of my peers.

    (This is not to say I’ve not written things. I have three complete novels that “just need a bit of editing”. And one that I’ve had half a dozen goes at writing and I haven’t figured out what it is I’m missing yet.)


  • BINNED

    @Arantor I am in this post several times, and I do not like it.

    At least I haven’t bought a PC in over a decade, so I haven’t spent big bugsbucks to run SNES9x.

    E:
    IMG_0725.png



  • @Arantor said in I, ChatGPT:

    “Real artists” or something, except they never finish anything and none of the tools in the world are going to fix the fact they don’t actually want to put the work in, but also they’d never accept it if it wasn’t truly theirs.

    As Asimov put it, "they don't want to be writers, they want to have written".



  • @dkf said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    I mean...if they can get the correct answers, then they effectively are. It's a very unreliable and inefficient way to do it, but I'd say that's the nature of emergent abilities from this sort of thing.

    It's possible that a different non-linear function in the "neuron"-analogue would make it better. I'm not at all sure how well that's been studied; most of the time, everyone seems to use some sort of sigmoid there, but maybe something with more complicated behaviour could end up emulating the greater capabilities of the cortex? 🤷♂

    I don't think it will be in the function, mainly because I don't think the chemistry the living version uses can encode more complicated functions. Ultimately I think the difference lies in structure of the network. Most of the networks are still layered, i.e. with no loops, because that's what we know how to train. But brains do have loops, and I think they are essential for both complex reasoning—because it involves imagining one step and making it the input for the next one—and introspection—where one part of the network observes how the other part works. And working memory, which is almost certainly represented by the information running in circles in some loop.
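    The working-memory point can be sketched with a two-neuron loop (a deliberately crude toy, not a claim about real cortex): a brief input pulse keeps circulating after the input is gone.

```python
# Two-neuron flip-flop: each unit re-excites the other, so a brief input
# pulse keeps circulating with no further input. A deliberately crude toy.

def step(state, inp):
    a, b = state
    a_next = 1 if (b or inp) else 0  # a fires if b just fired, or on input
    b_next = 1 if (a or inp) else 0  # b fires if a just fired, or on input
    return (a_next, b_next)

state = (0, 0)
state = step(state, 1)        # one input pulse...
for _ in range(5):
    state = step(state, 0)    # ...then nothing
print(state)  # (1, 1): the loop still "remembers" the pulse
```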



  • @Arantor said in I, ChatGPT:

    This is the sort of target market most of these apps really have in practice - the dreamers, not the doers.

    IMO the target market is not Stephen King, nor hobby writers that think writing is fun.

    It's people that need or want to write something, and don't have enough skills, or want to :kneeling_warthog: their way out of it.

    I can think of a good example: say I wanted to propose a new game of mafia with an interesting theme on the forum, but couldn't find anybody with the skills to write it, as most of the people that used to do it have left us.

    If I come up with a rough idea, a good AI could help me fill in the details and get it to the end. I would still have to think and tweak a lot to get anywhere, and it probably wouldn't be nearly as good as the stuff people have done here before, but I think it would be much better than anything I would be able to do in any reasonable time


  • ♿ (Parody)

    @sockpuppet7 said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    This is the sort of target market most of these apps really have in practice - the dreamers, not the doers.

    IMO the target market is not Stephen King, nor hobby writers that think writing is fun.

    It's people that need or want to write something, and don't have enough skills, or want to :kneeling_warthog: their way out of it.

    I can think of a good example: say I wanted to propose a new game of mafia with an interesting theme on the forum, but couldn't find anybody with the skills to write it, as most of the people that used to do it have left us.

    If I come up with a rough idea, a good AI could help me fill in the details and get it to the end. I would still have to think and tweak a lot to get anywhere, and it probably wouldn't be nearly as good as the stuff people have done here before, but I think it would be much better than anything I would be able to do in any reasonable time

    I'd argue this is basically how an AI in the IDE is helping some of my cow-orkers.



  • @boomzilla said in I, ChatGPT:

    I'd argue this is basically how an AI in the IDE is helping some of my cow-orkers.

    I get your joke, but I'd like to comment that Copilot is particularly useful in the programming languages I'm not experienced with, like an embettered autocomplete, as people say here


  • ♿ (Parody)

    @sockpuppet7 said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    I'd argue this is basically how an AI in the IDE is helping some of my cow-orkers.

    I get your joke, but I'd like to comment that Copilot is particularly useful in the programming languages I'm not experienced with, like an embettered autocomplete, as people say here

    It wasn't really a joke. And yes, exactly. I haven't used it for a language I'm not experienced with, personally, but I'm also more comfortable with a lot of the stuff we use than cow-orkers are (e.g., SQL, Angular).


  • Discourse touched me in a no-no place

    @Bulb said in I, ChatGPT:

    I don't think it will be in the function, mainly because I don't think the chemistry the living version uses can encode more complicated functions.

    That's exactly why I was suggesting the function. The key seems to relate to cortical pyramidal cells (the other neurons in the cortex are largely ordinary), which have large inbound connectivities and complicated dendritic trees where the dendrites interact with each other. It's been observed that pyramidal cells can do things like computing XOR as a primitive operation, which is to say they can (probably, knowing how silicon circuits work and what ordinary neurons can do, but this hasn't been proven at scale yet) pretty easily do an actual decision-making process.

    The math to model such cells succinctly hasn't been settled yet, but they sure aren't modelled by the logistic function (or other sigmoid).

    Ultimately I think the difference lies in structure of the network. Most of the networks are still layered, i.e. with no loops, because that's what we know how to train. But brains do have loops, and I think they are essential for both complex reasoning—because it involves imagining one step and making it the input for the next one—and introspection—where one part of the network observes how the other part works. And working memory, which is almost certainly represented by the information running in circles in some loop.

    That seems right. The strength of connections between neurons seems to be moderated (in the presence of dopamine) pretty rapidly, and then that translates into alterations to the structure of the network (it's a semi-stochastic process). Simulations with those key features learn things fast, and with no back-propagation step. To a good approximation, mammalian memory is a solved phenomenon.

    Brains have masses of loops, BTW. Mostly those seem to be inhibitory, and serve to condition the responsiveness of the network to a maximum. Nobody knew that before large simulations were done. But there are also feedbacks from higher levels to lower ones, believed to be predictions of what will happen next; minor deviations from the expected are emphasised. (Major deviations... well, that's something else.)



  • Another difference in the way current ML systems work compared with the real thing is that the former is batched and the latter is online.

    With an LLM, you give it a prompt, stir it into a big pile of linear algebra, and have a look at what pops out the other end. Done. Finished.

    That's not what happens with biological brains, where all three parts would be happening concurrently and pretty much all the time.

    If your network had feedback loops among its connections, then batched usage would mean the network could churn for an arbitrarily long time while it tried to settle down into some attractor before you could extract a stable "output". Online usage doesn't have that issue since you're just monitoring an ongoing stream of output without having to wait for anything to "finish" (yes, I know you can "monitor output" now, but that's more "monitoring progress toward a final state"; for an online network there is no "final" state as such).
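    The settling problem can be sketched with a tiny Hopfield-style network (toy weights, illustrative only): with feedback connections there is no single forward pass; you iterate until the state stops changing, and only then is there an "output" to read.

```python
# Four-neuron Hopfield-style toy: feedback means "the output" only exists
# once the state reaches an attractor (a fixed point of the update).

pattern = [1, -1, 1, -1]
# Hebbian weights storing that single pattern, no self-connections
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(4)]
     for i in range(4)]

def settle(state, max_steps=20):
    for steps in range(max_steps):
        nxt = [1 if sum(W[i][j] * state[j] for j in range(4)) >= 0 else -1
               for i in range(4)]
        if nxt == state:           # reached an attractor: safe to read output
            return state, steps
        state = nxt
    return state, max_steps        # in general it may churn arbitrarily long

out, steps = settle([1, 1, 1, -1])  # start from a corrupted pattern
print(out)  # recovers [1, -1, 1, -1]
```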



  • @boomzilla As opposed to AC real-estate?


  • BINNED

    IMG_0755.webp


  • Discourse touched me in a no-no place

    @Watson said in I, ChatGPT:

    If your network had feedback loops among its connections, then batched usage would mean the network could churn for an arbitrarily long time while it tried to settle down into some attractor before you could extract a stable "output". Online usage doesn't have that issue since you're just monitoring an ongoing stream of output without having to wait for anything to "finish" (yes, I know you can "monitor output" now, but that's more "monitoring progress toward a final state"; for an online network there is no "final" state as such).

    The final state of a brain is called "dead". hth

    My EE colleagues on a former project were pretty sure that information was encoded in the brain in the pattern of intervals between signals. It makes sense to me; that would be very information-dense and need little energy, so evolution is likely to select for it. A consequence: the signals don't stop...
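    The interval-coding idea is easy to sketch (toy numbers, not a model of real neurons): the payload lives entirely in the gaps between spike times, not in any amplitude.

```python
# Toy interval code: values are carried as the gaps between spike times.

def encode(values, t0=0.0):
    """Turn values into spike times whose successive intervals are the values."""
    times, t = [], t0
    for v in values:
        t += v
        times.append(t)
    return times

def decode(times, t0=0.0):
    """Recover the values from the inter-spike intervals."""
    prev, out = t0, []
    for t in times:
        out.append(t - prev)
        prev = t
    return out

msg = [3.0, 1.5, 1.5, 4.0]
spikes = encode(msg)
print(spikes)                 # [3.0, 4.5, 6.0, 10.0]
print(decode(spikes) == msg)  # True
```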


  • Considered Harmful

    @dkf said in I, ChatGPT:

    very information-dense

    Things that remind you of WTDWTF members is :arrows:

