At a high level, AI works by learning from data. You feed a system large amounts of examples, and it gradually identifies patterns that allow it to make predictions or generate outputs on new, unseen inputs.

The most powerful modern AI systems use neural networks, computational structures loosely inspired by the brain. These networks consist of layers of mathematical functions. Information flows through each layer, gets transformed, and eventually produces an output. During training, the system compares its output to the correct answer and adjusts its internal settings, called weights, slightly to reduce the error. Repeat this millions or billions of times, and the system becomes very good at the task.
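The loop described above can be sketched in a few lines of plain Python. This is a deliberately tiny illustration, not a real neural network: a "model" with a single weight, trained on invented data to learn the rule y = 2x.

```python
# Toy version of the training loop: one adjustable weight w,
# trained to predict y = 2x. Data and names are illustrative only.

# Training examples: inputs paired with correct answers.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0              # the model's single internal setting (weight)
learning_rate = 0.05

for step in range(200):          # repeat the loop many times
    for x, y_true in data:
        y_pred = w * x           # produce an output
        error = y_pred - y_true  # compare to the correct answer
        # The gradient of the squared error with respect to w is
        # 2 * error * x; nudge w slightly in the opposite direction.
        w -= learning_rate * 2 * error * x

print(round(w, 3))  # prints 2.0: w has converged to the true slope
```

A real network has millions or billions of weights rather than one, but each training step does essentially this: measure the error, work out which direction each weight should move, and move it a little.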

This adjustment process combines two algorithms: backpropagation, which works out how much each internal setting contributed to the error, and gradient descent, which nudges each setting in the direction that reduces it. Together they are behind almost every major AI application today, from language models to image recognition to recommendation systems.

One key limitation is that AI learns correlations, not causes. A system trained to identify cats does not know what a cat is in any meaningful sense. It knows which combinations of pixels tend to appear in images labelled "cat". This distinction matters a lot when AI is applied in high-stakes settings.

See also: Why You Are Accountable for Every Word Your AI Writes