Code and architecture often fail to convey their meaning clearly, and not only humans but also AI models suffer the consequences.
Emphasis mine, to make a point: "This suggests models absorb both meaning and syntactic patterns, but can overrely...." No, LLMs do not "absorb meaning" or anything like meaning. Meaning implies ...