Thursday, 16 February 2023

A caution against AI in your posts

Edit: one thing brought to my attention is that a plethora of Zen materials have no existing translation. In those cases we have no choice but to use a machine translator and cross-reference other translated Zen texts until we can get a human translation.

There are a few too many posts involving ChatGPT in this sub, and as a Machine Learning Engineer with undergraduate and master’s degrees in Computer Science, a thesis in Deep Learning, and four years of work experience in my field, I want to discourage its usage. Understanding what these models really are requires Linear Algebra, Statistics, Calculus, and Computer Science.

These models are not what you think they are. They do not “learn”, they do not “know”, and they do not “create”, at least not in the way humans do.

These are high-dimensional, highly non-linear mathematical models that regress an approximation of some underlying distribution of data. What you’re looking at when you see the output of one of these models, like DALL·E or ChatGPT, is a sample drawn from that approximated distribution.
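To make that concrete, here is a toy sketch in pure NumPy (nothing like a real large model, just the same idea in one dimension) of what “regress a distribution, then sample it” means:

    # Toy sketch: fit a simple 1-D Gaussian to data, then sample from the
    # fitted approximation. Generative models do something analogous at
    # vastly higher dimension.
    import numpy as np

    rng = np.random.default_rng(0)

    # "Training data" drawn from some unknown underlying distribution.
    data = rng.normal(loc=5.0, scale=2.0, size=10_000)

    # "Training": estimate the distribution's parameters from the data.
    mu_hat, sigma_hat = data.mean(), data.std()

    # "Generation": sample the fitted approximation. Each sample is
    # plausible under the model, but nothing certifies any single one.
    samples = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
    print(samples)

Every output is just a draw that is likely under the fitted model, and “likely” is not the same thing as “true”.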

These models are extremely impressive human achievements in our ability to model data, but they are not reliable. They produce results that cannot themselves be verified for accuracy: there is no way to tell a valid output from an invalid one on its face, without a human (often an expert human) checking it. If you want to understand this better, look up adversarial examples: https://openai.com/blog/adversarial-example-research/
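To see how fragile these models can be, here is a toy sketch of the fast gradient sign method (FGSM), the classic adversarial-example construction, applied to a made-up logistic-regression classifier. The weights and input below are illustrative, not a trained model:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.array([1.5, -2.0, 0.5])   # fixed, made-up model weights
    b = 0.1
    x = np.array([0.2, -0.4, 1.0])   # an input the model gets right
    y = 1.0                          # true label

    p = sigmoid(w @ x + b)           # ~0.85 confidence in the true class

    # Gradient of the cross-entropy loss with respect to the *input*.
    grad_x = (p - y) * w

    # FGSM: nudge every coordinate a small step in the direction that
    # increases the loss. The perturbed input is now classified wrong.
    eps = 0.5
    x_adv = x + eps * np.sign(grad_x)

    print(f"clean confidence:       {p:.3f}")                       # ~0.845
    print(f"adversarial confidence: {sigmoid(w @ x_adv + b):.3f}")  # ~0.426

Nothing in the model’s output flags the second answer as corrupted; it is just as fluent and just as confident-looking as the first.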

Machine learning has a problem in industry: it almost always cannot be used for mission-critical work. An open secret is that fully automated systems only work well when the last step in the pipeline is classically mathematical. Augmented reality is a good example: a vision system might use a neural network to find interest points in each image and match them between images, but finally use geometry to triangulate them in 3D (a minimal sketch of this pattern follows below). Even these systems are flawed, because there is no real way to know whether the interest points are valid, so the final geometry will be good but approximate. For almost anything else there is a human in the loop, either throughout the entire process or at the very end for Quality Assurance.
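Here is that sketch. The matched pixels below stand in for the output of a learned interest-point detector (hypothetical in this snippet); the triangulation itself is ordinary linear algebra (the direct linear transform) and will happily produce a precise-looking 3D point even if the network matched the wrong pixels:

    import numpy as np

    def triangulate(P1, P2, pt1, pt2):
        # Recover one 3D point from two 3x4 camera matrices and a matched
        # pixel in each image, via linear least squares (the DLT method).
        A = np.vstack([
            pt1[0] * P1[2] - P1[0],
            pt1[1] * P1[2] - P1[1],
            pt2[0] * P2[2] - P2[0],
            pt2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                     # dehomogenize

    # Two simple cameras: identity pose, and a 1-unit translation along x.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

    # Pretend these matched pixels came from a neural network. If the
    # network matched the wrong points, the math below is still "correct";
    # it just triangulates the wrong 3D location, with no warning.
    true_X = np.append([0.5, 0.2, 4.0], 1.0)
    pt1 = (P1 @ true_X)[:2] / (P1 @ true_X)[2]
    pt2 = (P2 @ true_X)[:2] / (P2 @ true_X)[2]

    print(triangulate(P1, P2, pt1, pt2))        # ~[0.5, 0.2, 4.0]

The geometry is exact; the system’s reliability is capped by the learned matching step that feeds it.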

These are statistical models that perform well in the aggregate but cannot be used in mission-critical situations. Can a modern AI land a plane better than the average pilot? Probably. But because they are statistical models, there is some chance, let’s say 1/1000, of it failing to land with no prior indication that it will fail, which is entirely unacceptable.
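Some back-of-the-envelope arithmetic shows why, taking that hypothetical 1/1000 failure rate (an assumed number, not a measurement) and a rough order-of-magnitude figure of 100,000 commercial flights per day:

    # Hypothetical numbers from the paragraph above, not real statistics.
    p_fail = 1 / 1000            # assumed per-landing failure probability
    landings_per_day = 100_000   # rough order of global daily flights

    # Probability that at least one landing fails: 1 - (1 - p)^n.
    p_any = 1 - (1 - p_fail) ** landings_per_day

    print(f"expected failures per day: {p_fail * landings_per_day:.0f}")  # 100
    print(f"P(at least one failure):   {p_any:.6f}")                      # ~1.0

A failure rate that sounds tiny per flight becomes a near-guarantee of disaster at fleet scale.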

Okay, so what’s up with self-driving cars then? The final result is a product of multi-view geometry. There is no pure deep-learning self-driving car.

What about AI art? Any mission-critical AI art (animation, etc.) goes through a clean-up phase by humans.

What about ChatGPT? This is where this sub makes its mistake with AI. Can ChatGPT be used to translate something? Yes. Should you trust that translation? The better question is: do you think the translation is mission-critical? There are no ambassadors using ChatGPT to communicate with leaders of other nations. There are no lawyers using ChatGPT to understand foreign law. They rely on human experts. Is the Dharma less mission-critical than those?

That’s not to say you shouldn’t use ChatGPT to translate; it’s that you need to be very careful about what you think you’re getting out of these algorithms.

As someone in the industry, I ignore any post that uses ChatGPT for any purpose, even if it’s not a translation, because it’s like building a tower on top of sand.



Submitted February 17, 2023 at 05:42AM by junetwentyfirst2020 https://ift.tt/OjM7ITP
