Professor Kim Eui-seok, Graduate School of Technology Management, KAIST
Lately, the scene at our lab sites looks strange. In place of researchers pulling all-nighters, artificial intelligence (AI) agents run thousands of virtual simulations in parallel, hunting for new materials around the clock and correcting design defects. Data processing speed has increased a hundredfold, and costs have fallen to a tenth. Ironically, however, CTOs and researchers are sighing deeply: "The analysis is finished in a day and the results are more polished than ever, so why is there no 'one shot' that will astonish the world?"
The reason is that the AI we have adopted, however smart and rational, is essentially a 'probability' machine. By learning from vast amounts of data accumulated in the past, it finds the patterns with the highest probability of success. This gives it tremendous power in 'incremental innovation' that improves the performance of existing products, because AI guides companies to the highest point of the normal distribution curve, the 'peak of the average.' However, the breakthrough technologies that changed human history never came from the peak of the average. They are born from statistically inexplicable outliers that lie outside the distribution of the data.
So what should companies do now? This is not to say we should abandon AI and return to the old rule-of-thumb experiments. R&D in the AI era should be organized so that AI, which outperforms humans, takes charge of improving and optimizing existing products along one axis, while the other axis deliberately preserves room for 'distinctly human' work and 'inefficiency.'
First, set aside a 'noise preservation zone.' Force a process in which human researchers take a second look at the data that AI warns is "outside the margin of error" and is about to discard. The company's next growth engine may be hidden in the pile of data AI has classified as 'trash.'
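The "noise preservation zone" amounts to routing outliers to humans instead of silently dropping them. A minimal sketch of the idea (the z-score rule and all names here are hypothetical, standing in for whatever filter a real AI pipeline applies):

```python
import statistics

def split_for_review(measurements, z_threshold=2.0):
    """Split results into an 'in-distribution' set the pipeline keeps
    and a 'noise preservation zone' routed to human researchers.
    Illustrative only: the z-score rule stands in for an AI filter."""
    mean = statistics.mean(measurements)
    stdev = statistics.stdev(measurements)
    keep, review = [], []
    for x in measurements:
        z = abs(x - mean) / stdev
        # Instead of discarding outliers, set them aside for humans.
        (review if z > z_threshold else keep).append(x)
    return keep, review

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 42.0]  # one wild result
kept, flagged = split_for_review(data)
# 'flagged' is the pile a pure optimizer would call trash --
# and exactly where the column says to look.
```

The design point is that the outlier bucket is a first-class output, not a side effect of cleaning.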
Second is the separation of 'exploration' and 'exploitation.' AI is optimized for exploiting the knowledge it already has; humans, by contrast, excel at exploring the unknown. Companies should ask researchers to pose wild hypotheses that are hard for AI to generate. Key performance indicators, too, should be evaluated not by success rate but by how far researchers have dared to deviate from AI's predictions.
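The exploration/exploitation split described here is the classic trade-off from reinforcement learning, often handled with an epsilon-greedy rule. A minimal sketch (the hypothesis names and scores are invented for illustration):

```python
import random

def choose_hypothesis(known_scores, epsilon=0.2, rng=random):
    """Epsilon-greedy choice between exploitation and exploration.
    With probability 1 - epsilon, pick the best-known option (what an
    AI optimizer does); with probability epsilon, pick at random
    (the 'wild hypothesis' a human researcher is asked to pursue).
    known_scores: dict mapping hypothesis name -> estimated payoff."""
    if rng.random() < epsilon:
        return rng.choice(list(known_scores))        # explore
    return max(known_scores, key=known_scores.get)   # exploit

scores = {"incremental tweak": 0.9, "new catalyst": 0.3, "odd alloy": 0.1}
picks = [choose_hypothesis(scores, epsilon=0.3) for _ in range(1000)]
# Most picks exploit the safe bet, but the epsilon budget guarantees
# the long shots keep getting tried.
```

Raising epsilon is the code-level analogue of the column's advice: deliberately budget for choices the optimizer would never make.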
Third is a change in leadership. In the past, R&D leaders were managers who raised a project's probability of success; now they must be leaders who 'defy probability.' Allocating resources to risky paths with low odds of success demands drastic shifts in thinking and, at times, raw intuition.
Evolutionarily, the fittest survivors are not the most perfect individuals but populations that harbor variants capable of adapting to environmental change. AI presents us with perfection, but it does not allow for variants.
If your company's R&D center is running too quietly and efficiently, that is a sign of crisis. Throw a grain of sand of 'inefficient otherness' into that smooth machine right now. Innovation has always begun where efficiency ends, and a 100-trillion-won opportunity may be hiding in that grinding noise.