IESE Insight
AI: Beyond the hype
Any use of AI to augment or automate human work is not an inherent feature of the technology but a decision made by human managers. As such, it’s incumbent on all of us to develop a better understanding of AI and how to leverage it ethically.
Citing ongoing research and case studies, IESE Prof. Sampsa Samila elaborates in this interview.
Mention artificial intelligence (AI), and the first thing that pops into your head is likely “it’s coming for my job.” Or “it’s going to wipe out humanity.” Or maybe it’s the open letter signed by AI developers demanding that we put the brakes on this thing until we understand it better.
As academic director of the AI and the Future of Management Initiative, IESE Prof. Sampsa Samila has been trying to do just that — understand AI better, coordinating several ongoing research projects on AI and the future of work.
Here, he urges everyone to remain calm as he walks us through the issues. As he reminds us, AI is a tool not to be feared, and it’s up to us to use it well, based on solid business concepts.
Tell us about the aims of the AI and the Future of Management Initiative.
It’s a multidisciplinary research agenda drawing together different IESE faculty members to conduct qualitative and quantitative studies on AI across the whole spectrum of business — from labor markets to strategy to organizations to leadership to human-machine collaboration to ethics. We’re also producing case studies for use across all our programs. The aim is to help business leaders develop their knowledge and skills related to AI, so they can manage it in an ethical and socially responsible way.
One of the things we’re studying is the effect of automation on labor markets. MIT professor Daron Acemoglu has researched the phenomenon of automation technologies displacing workers without delivering any real gains in productivity, cost savings or quality of service — in other words, companies are automating even when it’s inefficient. In our research, we’re finding a mechanism for why that may be happening.
Part of the explanation may be that automation or technological development is perceived as inevitable. Trade, by contrast, is perceived more as a policy choice: people whose jobs are directly affected by, say, more imports are more likely to vote for protectionism. Yet when it comes to the automation of labor, people respond more passively. Especially where companies have sufficient labor market power — such as when they are a large local employer with very few competitors in the labor market — the threat of unemployment makes people more willing to accept automation even when it goes against their own interests; it may even push down the wages of those employees who remain employed. We have some empirical evidence in this direction.
Regarding the “inevitability” of AI, there has been growing chatter about the “inevitable” threats of AI to our very human existence. Indeed, as we’re doing this interview, there’s a newspaper headline that reads: “Five ways AI might destroy the world: Everyone on Earth could fall over dead in the same second.” How much sleep should we be losing over this?
I think the extinction risk is not something we should be worried about. There are other, more realistic and pressing concerns: the adjustment process, income inequality and economic power.
Going back to our research, the concern in that study is not AI; the real concern is labor market concentration. AI or automation simply hands powerful companies a new weapon. Technologies do not create or destroy jobs by themselves. That is done by companies, which are led by managers who make specific choices. Any use of AI to augment or automate human work is not an inherent feature of the technology but a decision made by human managers.
I’m more worried that the high-profile lobbying to regulate or pause further AI development will do more to block new market entrants, limit competition and increase concentration, until we end up with a global economy dominated by one, two or three large U.S. companies. As our research highlights, concentrated economic power is a far more real and tangible risk to welfare than our hypothetical extinction by AI.
So, if we as companies, managers and employees are ultimately the determinants of AI’s future, what are some of the things we should be doing?
Since the challenges posed by AI lean more toward management than toward the technology itself, the role of managers becomes pivotal. If core business processes are transformed by AI, that isn’t something you can delegate to third parties or just let happen. Each of us has a personal responsibility to develop a sound conceptual understanding of AI, its application within the business framework, and how to leverage it. And given CEOs’ larger scope of responsibilities and decision-making power, it is even more incumbent on them to be fully up to date on the technology and where it might go next. Their capacity to lead and motivate is crucial in propelling the entire organization through this transformative journey.
What about recruiting more junior-level employees with the new AI skillsets you’re looking for?
One approach used by a company we’re studying is this: as certain people come up for retirement, look at ways to automate some of their old tasks, then recruit new profiles with the skills and capabilities you require, all without a radical restructuring of the entire organization, which is what raises many of the worries related to AI. Whenever I talk with executives, their concerns are often about the treatment of existing employees: What do we do with the people who don’t have the requisite skills? How do we teach them? What if they’re not interested in learning and using these new tools?
Good question: what do you do?
If you want employees who are willing and interested in both learning and using AI, one of the key things is designing AI tools that benefit them. This is a relatively obvious idea, well supported by evidence, but not always easy to implement. New technologies create new opportunities, and some employees will be energized by that. But in any transition there will always be some resistance, and hence challenges in managing it in a humane way. (These are some of the key things we cover in our IESE Focused Program on Artificial Intelligence for Executives.)
After one executive came through our AI program, we developed an ongoing working relationship with his company and prepared a case study on their experience of implementing AI, which we can now use in our classes to help others make the AI transition.
The interesting thing about this case is that the company went away and really thought about where their AI advantage could lie. Then, they used AI to help them work better — to collect and share more information, and to make better decisions using analytics and predictive AI tools. They directed investment in those areas. And while they did hire specific AI talent, crucially they didn’t lay anybody off or fire people for their lack of AI skills.
This is what we, at IESE, in our cases and research, are trying to address: What should a humanistic adoption of AI look like? How do you organize the work, treat your employees, and manage the company in an ethical way?
What other cases are you working on?
We have another new case on OpenAI, considering the ethical as well as the business implications of large language models like GPT-4. We discuss, among other things, how to ensure that AI benefits all humanity. We are currently working with companies that are actively using generative AI at work.
What are some of the dilemmas emerging?
Intellectual property is an important one. Every day we hear of new lawsuits being filed by authors, artists and other content creators who say their copyrighted materials were used without their permission to train the AI algorithms, which are now reproducing their content in whatever they generate. Does training an algorithm fall under “fair use,” whereby a limited amount of copyrighted material is allowed without consent? That’s an ethical but also a legal question for regulators and courts to hash out. In Japan, legislators decided that training an algorithm on any material is not a copyright violation, precisely to encourage more AI development there.
Another dilemma is whether non-human-generated content can be copyrighted or patented. In the U.S., the law is very clear that it can’t — but some of those laws are over 200 years old, back when only humans could be inventors. How much does the original content have to be modified before a person cannot claim copyright over it? What percentage of the AI invention has to involve human intelligence before it qualifies for protection? South Africa became the first country in the world to allow AI-generated inventions to receive patent protection — a move which some say went too far, too soon.
It sounds like there will be different regulatory regimes on AI.
That wouldn’t necessarily be a bad thing. It’s not that different from what we have now. India at some point blocked Chinese apps, and China blocked American ones, which led to the development of domestic Chinese apps. Meta couldn’t launch its Threads app in Europe at first because of the EU’s stricter privacy laws. If having all these different regimes actually encourages a strengthening of antitrust regulation, then we may start to see the benefits of market competition, rather than the negative situation we have now, where all the economic power is concentrated in a few big corporations, leading to higher prices for consumers and lower wages for workers.
As things stand, we’re all dependent on Microsoft. We use Google for searching. All my devices are Apple and I would have a hard time switching to anything else. Will ChatGPT become the next Big Tech player that dominates the field and locks everyone else out?
This brings me to my larger point about AI: It’s a technological tool, and while technology changes, the laws of economics and the fundamentals of competitive advantage do not. Just like the internet changed certain economic features but didn’t change the underlying economic laws, I don’t see AI changing the laws of economics or strategy, either. In our OpenAI case, we look at barriers to entry, the same concept it has always been, but we try to understand what it means in this new context of large language models. It’s important for managers to approach AI with this same kind of conceptual thinking.
Along these lines of sticking to the basics, should we keep learning programming then?
The main reason I believe we need to keep learning programming is that, as programming becomes more efficient and thus cheaper, we will do more of it. In line with the Jevons paradox in economics, as the productivity of programming increases, we may actually end up needing more programmers, because we’re going to have many more things to program. So it’s not a foregone conclusion that we’re going to need fewer programmers. There’s also some evidence that people who used ChatGPT to write code did so faster, but the result was of lower quality and less secure, with more flaws and bugs, than purely human-written code. Hence, programming is unlikely to be entirely automated anytime soon.
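To make the Jevons-paradox reasoning concrete, here is a minimal back-of-the-envelope sketch in Python. The elasticity, cost and hours figures are purely hypothetical placeholders, not taken from the interview or any study; the only point is that if demand for software is elastic enough, cheaper programming can translate into more total programmer hours, not fewer.

```python
# Back-of-the-envelope Jevons paradox illustration (all numbers hypothetical).
# Assume demand for software follows a constant-elasticity curve:
#   units demanded = k * cost_per_unit ** (-elasticity)
# and that each unit of software requires a given number of programmer hours.

def programmer_hours(cost_per_unit: float, hours_per_unit: float,
                     elasticity: float, k: float = 100.0) -> float:
    """Total programmer hours demanded at a given cost per unit of software."""
    units_demanded = k * cost_per_unit ** (-elasticity)
    return units_demanded * hours_per_unit

# Baseline: one unit of software costs 10 (arbitrary money units) and 10 hours.
baseline = programmer_hours(cost_per_unit=10.0, hours_per_unit=10.0, elasticity=1.5)

# AI tools double productivity: half the hours per unit, and (we assume) half the cost.
with_ai = programmer_hours(cost_per_unit=5.0, hours_per_unit=5.0, elasticity=1.5)

print(f"Programmer hours before AI tools: {baseline:.1f}")
print(f"Programmer hours after AI tools:  {with_ai:.1f}")
# With elasticity > 1, total hours demanded rise even though each unit of
# software needs fewer of them -- the Jevons paradox in miniature.
```

With the hypothetical elasticity of 1.5, halving the cost roughly triples the quantity of software demanded, so total programmer hours rise by about 40% despite each project needing half as many.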
Another issue is that nobody writes code from scratch anymore; they put together existing libraries of code. If the AI puts together the libraries for you, and a mistake is made, then everybody who uses that program is going to be affected by the same embedded flaw. If you know programming, you understand how this works.
But if you don’t, and you start using these AI tools to help you think and reason, and everyone is using the same tool for their thinking, then this could shift the entire distribution of thinking on a topic in one (potentially negative) direction. This is more than bias. Many people have many different biases. I’m talking about an algorithm that has one particular bias that everyone ends up adopting, so that we all assume the same bias in everything we do and don’t even realize it.
Furthermore, I think programming is useful for learning conceptual, logical thinking. It’s the way that large language models learned their “reasoning” abilities, so understanding programming helps to understand how the AI “reasons.”
As AI is evolving so quickly, is it possible to make any future predictions that won’t be out of date when this interview appears?
All I dare say is that, so far, none of the predictions made about AI has materialized — whether that it would completely eradicate some employment category or that AI would progress much faster than it actually has. For example, because of early studies showing that AI could detect diseases in radiographs better than a human could, there was a prediction that AI would replace radiologists. Obviously, that hasn’t happened; if anything, we have a shortage of radiologists. For most radiologists, reading scans is only one task out of their entire job and, despite those early studies, we still don’t have an actual AI system that reliably diagnoses images better than a human doctor.
So what will the future hold for large language models and generative AI? Progress will depend on computational costs, the availability of training data and improvements in model architectures. There has been a consistent finding that more is better: more computation, more training data and bigger models. Will this continue to hold? Quite possibly, but it’s certainly not guaranteed. And as the models get bigger, the computational costs also rise considerably.
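As a rough illustration of the “more is better” pattern, the sketch below uses a Chinchilla-style scaling law of the form loss(N, D) = E + A/N^α + B/D^β, where N is the number of model parameters and D the number of training tokens. The constants here are hypothetical placeholders rather than fitted values, and the compute estimate uses the common rule of thumb of roughly 6 floating-point operations per parameter per token; the takeaway is simply that loss keeps falling with scale, while the compute bill grows much faster.

```python
# Illustrative Chinchilla-style scaling law (constants are hypothetical placeholders).
#   loss(N, D) = E + A / N**alpha + B / D**beta
# N = number of parameters, D = number of training tokens.
E, A, B, ALPHA, BETA = 1.7, 400.0, 400.0, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model with n_params trained on n_tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

def compute_flops(n_params: float, n_tokens: float) -> float:
    """Rough training compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# Scale model size and data together by 10x at each step.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"params={n:.0e}  tokens={d:.0e}  "
          f"loss={loss(n, d):.3f}  compute={compute_flops(n, d):.2e} FLOPs")
# Each 10x jump in parameters and data shaves a bit more off the loss,
# with diminishing returns, while the compute required grows ~100x per step.
```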
We are also facing memory limits on the current GPU chips that do the computation, limiting the practical size of the models. Technological progress will ameliorate these issues, but how fast and at what cost remain to be seen. Nvidia is launching a new generation of GPUs and the practical impact of that should be felt soon.
The availability of useful training data is also a factor. Is it the case that all the useful training material that developers could get their hands on has already been fed into the system, and there’s a shortage of complex “reasoning” data and materials? Simply feeding in, say, novels may not help the models get any better, because novels don’t contain any fundamentally new, conceptually strong content that can be used for additional training.
All that being said, maybe the next generation will surprise us!
Sampsa Samila acknowledges funding from the Spanish Ministry of Science and Innovation/State Research Agency, the European Commission Horizon 2020 program, the Social Trends Institute and AGAUR (Government of Catalonia) in support of ongoing research with IESE colleagues.
Tips for engaging with AI
- Ignore the hype. Instead of worrying about your eventual extinction, focus on how AI can be useful in your work.
- Separate the signal from the noise. Focus on the core properties of the tool.
- Experiment with it:
  - Write short texts. I find it good for writing abstracts or an outline for a paper I’m working on.
  - Ask it for ideas. Even if you never use them all, there may be one or two gems.
  - Exchange ideas. Having a back-and-forth chat can challenge and hone your thinking on a subject.
- Understand AI at a conceptual level. Make sure you have a deep understanding of how your own business works — the core idea of what you do for your customers and the core value proposition that’s hard for others to replicate — before considering how AI might provide additional value, and how you might capture part of that value.
- Customize it. It has to make sense within your context and according to your values.
This interview is published in IESE Business School Insight #165.