Bonus issue:

This one is a little bit less obvious

  • @[email protected] · 17 points · 21 days ago

    My conspiracy theory is that early LLMs had a hard time figuring out the logical relations between sentences, and hence didn't generate good transitions between them.

    I think bullet points might be manually tuned up by the developers rather than inherently present in the model, because we don't tend to see bullet points that much in normal human communication.

    • Possibly linux · 3 points · 21 days ago

      That's not a bad theory, especially since newer models don't do it as often.