The Nutriscore system, found on yogurts, hummus jars and Bollycaos, has plenty of shortcomings, but it gives you a quick visual sense, right there in front of the supermarket shelf, of whether the product in your hand is more or less nutritious. Now the British progressive think tank the Institute for Public Policy Research proposes that artificial intelligence systems include a similar label in their answers, so the user knows whether the information served up comes from healthy sources, such as peer-reviewed scientific articles or accredited media, or from junk sources, such as pseudo-media or forums unreviewed by any authority in the field.
It strikes me as a timely measure. First, because unlike food, where everyone knows perfectly well that cauliflower beats Oreo cookies on nutritional value, with information there is not enough media literacy to easily tell protein from bad cholesterol. But there is a bigger benefit. Adopting this classification would also force AI companies to be more explicit about their sources, which is the other great problem with these systems: they try to instill the idea that information is an undifferentiated, free-flowing stream that magically emanates from a digital nowhere. In fact, the Institute is not only calling for this Nutriscore; it is also demanding compensation for publishers. After all, it borders on piracy if the raw material is taken without any clear payment in return. Still, reasonable as the demand seems to me, I suspect it will go nowhere, because granting it would amount to admitting that a considerable share of AI responses is built from material of dubious provenance. That would carry more "E-" additives than the worst industrial pastry. And it is telling that Google, the other giant threatened by AI, still has not got off its backside and struck a serious deal with publishers to form a common front.