AI’s insatiable appetite for cash, energy and data: Bubble ahead? • FRANCE 24 English
Could the artificial intelligence boom already be running out of road? We'll look at the warning signs. To think that just three short years ago the commercial launch of ChatGPT took the planet by storm. AI has since sparked a global race for cash, energy resources and data, a race in which tech giants have captured the world's attention, all to feed the seemingly insatiable appetite of large language model computing systems. With a few US companies dominating this AI race and a US president who is all-in with billionaires, market watchers worry about investors tempted by the easy money of rising stocks at the expense of the rest of the economy. Is it a bubble? Is it about to burst? And with what consequences? How should Europe and the rest of the world prepare? And, more broadly, is AI changing humanity and our world for better or for worse? Today in the France 24 Debate: should we brace ourselves for an AI bubble bursting?

With us: Rayna Stamboliyska, CEO of the consultancy RS Strategy and a practitioner in residence at the French public research centre CNRS. Thank you for being with us. Thank you for having me. Also with us, Leila Merch, partner at the tech investor Maris Partners; she's an engineer by training. Good to see you again. Good to be here again. Tanya Perelmuter, co-founder of the Abeona Foundation, a think tank, a "think and do tank" as you say, that advocates for responsible AI. Welcome to the show. Thank you for having me. And from Dublin, solicitor Simon McGarr, director of Data Compliance Europe, whose latest post for his newsletter, The Gist, is entitled "AI: the sound and fury". Thanks for being with us. Thanks very much. A reminder that more and more of you are listening: listening, liking and subscribing to the Debate wherever podcasts are streamed.

Markets in the US are once again flirting with record highs these days. It's been a banner 2025 on Wall Street, but a closer look shows how money has continued to gravitate towards just ten companies. Traditionally the top ten companies account for a bit more than a quarter of the US stock market; now it's closer to 40 percent, and nine of those ten companies are tech-related. Rayna Stamboliyska, what does that tell you?

It tells us a lot of things, not just about capital flows but also about the distribution and redistribution of power, and more broadly about whether the growth that technology enables flows towards a broader public or doesn't. Your graphic is really telling. What is worrisome here is that this huge accumulation of resources is both prompted by and creates a vicious circle of collecting resources and gains without redistributing them. So it's like a black hole for wealth, in a way. In a way; I'm not a theoretical physicist, but it sounds like an extractive institution, as Daron Acemoglu described in Why Nations Fail: you concentrate resources and gains within a very limited elite circle.
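Before returning to the discussion, here is a minimal sketch of the arithmetic behind that concentration figure: the share of an index's total market capitalisation held by its largest constituents. The company names and capitalisation numbers below are invented placeholders, not the actual S&P 500 data cited in the programme.

```python
# Toy illustration of index concentration: what share of total market cap
# sits with the top N constituents. All figures below are made up.
market_caps_bn = {
    "MegaTech A": 3500, "MegaTech B": 3200, "MegaTech C": 2800,
    "MegaTech D": 2300, "MegaTech E": 2100, "MegaTech F": 1900,
    "MegaTech G": 1500, "MegaTech H": 1200, "MegaTech I": 1000,
    "Old Economy Co": 800,
}
# Pretend the remaining 490 index members average 60 billion each.
rest_total = 490 * 60

top_10_total = sum(sorted(market_caps_bn.values(), reverse=True)[:10])
index_total = top_10_total + rest_total

share = top_10_total / index_total
print(f"Top 10 share of the index: {share:.1%}")  # roughly 40% with these numbers
```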
And you don't redistribute, you don't include others in those benefits, right? Do you agree with that, Leila Merch? I agree with that, but this is also how the economy has been working, more and more, over the past decades. Is it that surprising? For me it's not: this is how the tech industry has shaped the entire VC landscape over the past couple of decades, and now it's just coming into the public eye. The question for me, going back to that graph, is that 38 percent of all the money in the 500 companies listed in the S&P 500 sits with just ten companies, and nine of them are tech. Nine of them are tech, and most of them are giants on a scale we have never seen before in the history of humanity. But that's also because, in the history of humanity, we have never seen that many millions of people using the exact same tools and doing the exact same things. The fact that humanity is becoming more and more uniform in how it buys and how it does things also leads to all the money concentrating in a few companies that deliver those products. Think about Amazon, think about Apple: we never had that before. In the days of the railroad companies, in the days of big oil, was it only one company for the entire world? This is where, for me, it differs a bit. We have always built in the same way, but not at the exact same time and not with the exact same companies. So I agree with what you're saying; I'm just asking whether we should be that surprised, how we can act on it, and what it means for consumers, for the economy and for the bubble.

So, Tanya Perelmuter, is it just that the world has become a smaller place, or should we worry? No, the world has not become a smaller place. What is happening now is, to me, nothing really new; it's just going a lot faster. If I may explain: it's a question of infrastructure, and historically every technological change has been accompanied by what I would call a grab for the infrastructure and the resources, to make sure you stay at the centre. So why so few companies? You were talking about the stocks: we have what we call in English the Magnificent Seven: Nvidia, Microsoft, Meta, Tesla, Alphabet, Amazon and Apple. But you also have venture capital playing a very important role here, and the narrative being told to us is that the only way AI is going to work is by building these huge models, and these huge models need a lot of data. They also need compute, and they need the infrastructure to do these calculations and then distribute the results to us so we can consume them. The end of the story is that you need to build data centres all over the world. This is a completely unsustainable model, because data centres need electricity and they need water, so you end up with communities competing with these companies for resources.
So from a geopolitical point of view it's a problem, but it lets them concentrate power by concentrating the resources and the infrastructure. Let's talk more about these data centres in a moment. First, let me bring in Simon McGarr on this. Simon, does it keep you up at night that so few companies have so much money right now? I think it's an anomaly, a historical anomaly even within the history of technology. The technology industry has actually travelled, for most of its history, in one direction: towards decentralisation. We started off with enormous mainframes in the 1960s and 70s, and computing gradually moved down to people's desks with the PC revolution and then into people's pockets with the mobile revolution. That happened for the same reason each time; it was allowed to happen because of Moore's law: chips became smaller, more powerful and more capable, and what used to require very large centralised systems could later be done at a local level. The thing that allowed this huge concentration of wealth in a few companies is that we had a hiccup in that general rule. Social media, on one side, is not amenable to Moore's law because it has network effects: there's no point in me trying to run a social media programme on my own phone by myself. It was much easier to have a centralised model in order to connect people, and originally social media was called social networking. So that created centralisation, and that was your Metas, your Facebooks, your Instagrams and your Twitters. The other one is internet advertising. Internet advertising is the secret engine that drives most of the internet economy, and that centralisation has aggregated around Google almost to the exclusion of everyone else, because the only way you can do targeted advertising at scale, with precision, is if you also have access to search activity. The Microsoft CEO said this in a recent trial where he was giving evidence in relation to Google's monopoly powers: he said he would do anything to gain access to Google's search flow data, because only with that could he ever make Bing competitive with Google's search engine, and only with that could he make it competitive as an advertising network. That hiccup isn't the same with AI; I don't see the same moat for AI over time. So what are you saying, that over time this is not sustainable? True, it's not sustainable. And indeed, just today we saw straws in the wind, hints that we may be coming to the top of the bubble. The Japanese investment group SoftBank, for example, sold all its Nvidia stock, and Nvidia is the company that makes these chips. It says it's to invest in OpenAI, though. Sure; I also would like to sell my stock and then invest it in something else later. But for the moment it has sold its shares in the one company, the seller of shovels in the gold rush, which is almost certain to be left standing even if the froth is blown off the rest of this market. And that, I think, is a good indicator: Nvidia, which makes those chips.
Now for some more of these warning signs. The so-called hyperscalers, the companies building vast data centres, include Alphabet, Meta, Microsoft and Oracle, and in recent weeks they've taken a hit in the bond market. That's the graph we showed you earlier. The Financial Times reports that tech giants are issuing debt at a quick rate to fund their AI expansion efforts despite having large cash hoards, something investors worry could signal a shift to higher levels of leverage. So more debt on top of debt. In his Substack column, The Atlantic journalist Derek Thompson warns that total AI capital expenditure in the US is projected to exceed 500 billion dollars across 2026 and 2027, roughly, he says, the annual GDP of Singapore, while the Wall Street Journal, he notes, reports that private citizens are only willing to pay about 12 billion dollars a year for AI services, roughly the GDP of Somalia. Worse, some big companies are spending less on AI than they used to. Here you see those so-called hyperscalers again, the ones building those big data centres. So, building these data centres: is it sustainable, Leila Merch?

If we keep building them and pouring money into them the way we do now, it is not. Will it ever be sustainable? I hope so. Either humanity is completely out of control and out of its mind, and people will simply never manage to make it work, and that is clearly a possibility; or, as you mentioned physics earlier, and we work with physicists, the way we produce energy might change. It might never change, but it might, and then maybe one day it could be sustainable. So you need an alchemist, is what you're saying. We need physicists, we need science, and maybe we'll never get there. But so far, you were mentioning that the real economy and the real world are catching up with the AI race, raising the question of whether the race, and the bubble, can continue. If you look at it from the other side, when we ask whether the bubble will burst, part of me says it will, because it's not sustainable the way it is, because those companies, not only the hyperscalers but also OpenAI and Anthropic, all those huge tech companies, are already far too big relative to what they produce and to their revenues. But if they didn't have this energy issue, this GPU issue, this power and data-centre issue, they would be twice as big as they are now. So on one hand we can say the energy problem could make the bubble pop; looked at the other way, it's also what keeps the bubble in check. I know it's controversial, but I think it's also very true: the bubble, even if it's huge and getting crazier, is at the same time capped by those energy constraints.

Capped by those energy issues, and yet hardly a day goes by when tech giants don't announce an investment in some new big data centre.
Today's announcement came from Microsoft, OpenAI's partner, by the way: plans for a centre in the Portuguese city of Sines, 10 billion dollars for that one, about 8.6 billion if you count in euros. Last month it was Google Cloud promising 15 billion dollars for a centre in India's Andhra Pradesh state. Today, we are incredibly proud, with the support of the government of India and the state of Andhra Pradesh, to announce a new gigawatt-scale AI hub in Visakhapatnam. It's the largest AI hub we are going to be investing in anywhere in the world outside of the US. It represents a capital investment over the next five years of 15 billion dollars and will scale to multiple gigawatts. So tell us, Rayna, for our viewers watching in India, in Andhra Pradesh state: is this good news or bad news?

Let me give you an answer that is: I don't know, because there are compound effects of infrastructure, technologies, data and everything we've been talking about. And let me circle back to something that concerns me immensely in this whole debate, which is that we are drowning in figures and market estimates. What are we looking to produce here, with AI technology in general? We are looking to produce shared prosperity; at least that's what the discourse, the narrative, is about. And let me say something that will sound, to a lot of techno-solutionist people out there, like complete heresy: it is not because generative AI specifically is widely used, and ever more widely used, that it is actually useful. Those are two different things. And we have this sort of voodoo incantation going on, this fatalism: we have only one path, we have to build ever bigger data centres, we have to extract ever more data, we have to pillage ever more. That is an essential question you raise: when is it useful, and when is it not? Do you have the answer? I have answers, plural, to sub-questions. Will the people of Andhra Pradesh be able to pay their energy bills? Will they still have drinking water ten years from now, once this data centre is built? Will they still have unpolluted earth to cultivate and to eat from? Will there be any trickle-down from those so-called productivity gains, and so on and so forth? When you look at research from the past ten-plus years on the actual economy of AI work, you have developers in Silicon Valley who are paid between 100,000 and 500,000 dollars a year, and sometimes far more, and then you have all those people in Africa, in Eastern Europe and elsewhere who are paid two dollars an hour to clean and annotate data. That is the actual economy of the whole thing. So I don't see how all those big declarations, all that spitballing and "I'm the better one, look at me", is going to change this dynamic when there is no transparency, no redistribution and no checks and balances. This is what has been happening since Donald Trump came along, and it is what may now happen in Europe as well. Those are the questions I'm looking at. And I agree with Simon that the financial makeup of those big tech companies is an anomaly.
If you look at it more closely, none of them has the traditional big-bank investors and so on. When you look at Meta, who holds the most shares and who is the sole decision maker? Mark Zuckerberg. So how did he and others convince lenders to give them the money? I'll put it to you, Tanya. Because, like I said before, there is this narrative being sold to us that the only way AI can move forward is through these very big models that need that huge infrastructure; but once you install the infrastructure, you are there, and it's yours. That's why they promote this discourse. But in reality, can we do things with smaller models? Can we build models that address a specific issue, that do compute on a specific thing, rather than one big thing that is adaptable to every situation and makes us completely dependent on these big companies? Meta is a good example, because Meta are the people who first introduced social networks to us, and the whole model of social networks is based on grabbing our attention and then using that attention to serve us marketing and publicity. This model is not healthy. It's not healthy for our economy, it's not healthy for anybody who is a consumer, for children or for adults. And we're about to replicate what happened with social networks; we're about to take that, build on it and apply the same thing, now enhanced by AI, so it's going to be even worse. So the real question is: do we agree with this narrative, or do we want to construct AI with a different business model, a business model based on the public interest and on what we really need and want? That's very powerful, but who does that? Is it done through legislation? Who makes the decision on how to build a better AI? We the people, all of us together, can make that decision. And concretely, how would "we the people" work? Well, you would of course say that democracy would be the ideal. Through laws, is what you're saying. Not necessarily through laws; it could be through citizen engagement, because we now have the technology to get citizens' views instantly on basically any topic, and AI could help us do that and help us moderate it. That's one of the ways to do it. But first we have to understand what is happening; in order to build something better, you need to understand what is happening now, and right now you have a group of people controlling the whole thing and building this infrastructure, and once it's in place it's going to be very hard to go back on that. So a different discourse is needed. We need data that is curated, but not by these companies, because there is absolutely no transparency on how they get the data or how they train; it's not open source, it's really not transparent. There is a different way to build AI, and different companies, European companies, could build it differently.
Just on this point: the French have been talking a lot about this French company called Mistral, which is trying to build large language models, and they've been championing it. It doesn't have the kind of money the Silicon Valley giants do, but the three co-founders, I read, have all just become billionaires. Do you support a project like Mistral, or are you saying this is the wrong way to tackle the issue? I'm not going to comment on Mistral specifically, but I could say that if there is a startup that has access to a lot of good data, that has access to infrastructure that can do the compute, and that can produce models that are beneficial to society, then yes, that is totally a good way to move forward. But for that we need political will; we need politicians to come together and say, yes, let's do this, so Europeans can do it together, with citizens, with other companies, with small startups. The only way such companies have today to grow and move is to ally themselves with the big hyperscalers, where the concentration of power happens, and that is perhaps the wrong way to move forward. But in order to do that, we need to understand collectively where we want to go. I see, so we have to be grassroots. All right, everyone wants to jump in. I'll start with you, Rayna. We have to go to Dublin in a minute, but go ahead. Just quickly: you don't dismantle the master's house with the master's tools, right? I'm not a fan of anyone who presents themselves as the providential figure, as the gatekeeper, as the solution to something I have never been invited to take part in. So, "we the people": that's the point. Even what we are doing here tonight, sharing and confronting views of the world, of reality: when was the last time you saw this happening as public debate? When did you see it happening? When the valuation of Mistral went to 11 billion and when it got US investors? When did you see "we the people" taking part in a more widely accessible, public-interest infrastructure? I'm still not convinced that small or big LLMs are actually useful, given their inherent, by-design hallucinations and so on, while we have had boring AI for decades, and it works, and it doesn't extract resources, people's data, people's drinking water and people's electricity to do it. Simon McGarr, what should artificial intelligence and large language models be for? What should they be used for? What should be their practical application in our world? I think it's currently a solution in search of a problem. You can say that we will eventually find the problem the solution solves, and we know that the non-generative form of AI, the one that does pattern recognition, machine learning, has been running for years and has been found to be extremely useful in the realm of cybersecurity, for example: you have billions of bytes of logs that no human being could read, but throw a machine learning model at them and it can spot anomalies that would indicate, for example, that there has been a data breach, or that somebody has signed in from a country they don't normally sign in from; that kind of useful thing.
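For readers curious what that kind of log anomaly detection can look like in practice, here is a minimal sketch. It is an illustration of the general technique Simon McGarr describes, not any specific vendor's system; the sign-in features and the data are invented for the example, and it uses scikit-learn's IsolationForest as one common unsupervised detector.

```python
# Minimal sketch of log anomaly detection of the kind described above.
# Feature names and numbers are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" sign-in events: office hours, usual location, modest transfers.
normal = np.column_stack([
    rng.normal(10, 2, 500),   # hour of day
    rng.normal(0, 1, 500),    # distance from the usual sign-in location (arbitrary units)
    rng.normal(5, 1, 500),    # log10 of bytes transferred
])

# A few suspicious events: 3 a.m. sign-ins, far-away locations, large transfers.
suspicious = np.array([
    [3.0, 8.0, 9.0],
    [2.5, 7.5, 8.5],
])

events = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector and flag outliers (-1 = anomaly, 1 = normal).
detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = detector.predict(events)
print("flagged as anomalous:", np.where(flags == -1)[0])
```

The design point is the one made in the debate: no analyst reads every log line, but a model trained on what "normal" looks like can surface the handful of events worth a human's attention.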
The difficulty comes when we bring in language models, which are the novelty but also the core thing driving this enormous bubble, and the thing causing this extractive economy around the use of people's data, of creative works, of our personal data. We have to hold on to the principles we had before the money started dancing in front of our eyes. And unfortunately, just recently the EU Commission has proposed, for example, to start making exemptions from our GDPR rights, our data protection rights, for the purposes of AI model training, for the purposes of feeding data to these machines. Now, I don't think that would stand up to scrutiny before the Court of Justice of the EU. But the important thing to recognise is that, even though we could fight it in the courts, it is much better to recognise that these are principles we should hold to as a political question. Going back to the GDPR, those European rules that regulate data protection on the internet, let me ask you about the motivation here, Simon. Is it because they are true believers in the AI race, or is it to appease the Americans, who have been throwing their weight around since they own a lot of this? No, I think it's much more basic than that. When it comes to reacting to the world around us, political systems tend to react to whatever is on the front page of the paper first. They see a huge investment bubble coming in and they worry Europe might miss out. They're told the problem with Europe is that it has too many rights, that it considers people's data more important than making money, and the Commission doesn't want to be said to be getting in the way of making money. But the reality is that those rights, in the Charter of Fundamental Rights, are treaty rights; they're common to every citizen of the EU, and they're not to be traded away for a temporary stock market bubble. Now, the good news is that the EU's legislative process moves so slowly that I suspect that long before these proposals ever actually reach legislation, this bubble will have burst and everyone will be embarrassed to be associated with the ideas. But that doesn't mean we shouldn't be unhappy that the ideas are being presented at all. Leila Merch, do you agree? Well, yes. When it comes to Europe and what we should do, and to jump back quickly on what you were saying: people say we are building generative AI to share resources and make everything better, and I don't think that's the case. I think every country is in pure rivalry, and everyone just wants to have the greatest business, to have control and power. We may all wish otherwise, but I'm trying to be very realistic; I don't think that's the case. So where should Europe stand? So far we've been regulating, and going into rivalry. You mentioned Mistral: Mistral is exactly rivalry. The idea is basically, how can we have a European OpenAI, but without the same capacity to fund it, without the same infrastructure, without all of that. As for regulation, you mentioned that we do a lot of it and are recognised worldwide for it, but what happens if we do too much of it?
Well, basically we have nothing left in the markets, and we regulate, regulate, regulate. And we keep saying "we the people", but what do the people actually do? They just don't care; they just want something that's efficient, not too expensive, that will make them work better and faster. How many European people use Mistral, compared to how many use ChatGPT? So I think rivalry and regulation are both necessary and both completely insufficient. Where Europe could really stand out is application: how can we use AI for what Europe has been good at for a long time, small and medium enterprises, building infrastructure that is resilient, and focus on the AI applications Europe can push for? I think this is the only way we make it out of here with good businesses and a healthy economy, instead of always trying to catch a train we simply cannot catch, and always trying to regulate and control something we cannot control, unless we force consumers to do whatever we tell them to do. All right. In this conversation we've talked about sucking all the money out of the room; we've talked about sucking all the energy out of the room; and then there's sucking out all the data. The research organisation Epoch AI last year sounded a warning that human-generated data could run short for large language models by the end of 2027. Critics say that when it comes to using our data, politely you could call it trawling; not so politely, you could call it theft. Herbert Grönemeyer would certainly agree that it's theft. The words of the popular German singer have been at the centre of a legal battle: a Munich court this very Tuesday ruled that OpenAI infringed copyright law by using song lyrics, including Grönemeyer's, to feed its chat models, in a case that could have wide implications for European artists. OpenAI disagrees. It issued a statement saying the decision concerns a limited set of lyrics and does not impact the millions of people, businesses and developers in Germany who use its technology every day. Let's turn to the solicitor in the room. Simon McGarr, your reaction to that reaction from OpenAI? Well, you might as well just say, "you only got us on this one, but there are loads of other things we've done that you haven't got us on", which does rather invite other people to react in the same manner as our musician friend. The key issue here is that OpenAI has trained its models on material for which it does not hold the copyright. And once you have used the data to train the model, there is no way to unpick those threads; it can't be undone. So anyone can go to that model afterwards and say, give me a song in the style of, and give the name of a musician, and if that musician's songs were used to train it, it will produce a pastiche, but a recognisable pastiche, because that musician's work was used. And this goes for writers too. There was a settlement recently where writers whose books were used to train Anthropic's Claude model are going to receive, I think, a settlement figure of about one and a half thousand euros each. That's a lot of writers, and it was only a small subset of all the books there are in the world.
So the problem here is that the models are being trained without a great deal of consideration for the legal rights and privileges of the people whose data they're using. We all remember the actions by the music companies when people downloaded songs, and how appalling that copyright theft was said to be. That was individual songs; this is the entirety of the output of every creative person ever: visual art, film, literature, anything you like. It is probably one of the most abusive and extractive elements of all this. What we're seeing at the moment is a series of lawsuits by the various copyright holders, suing for the use of their material, often by way of guilds and mass actions: mass actions by writers, mass actions by developers, mass actions by authors, and so on. What we're going to see in the end is either legislation to cover this, or, and this is what the AI companies would want, an exemption: they're simply allowed to use the data, and copyright is suspended for these purposes. Now, it's an extraordinary suggestion, amazingly ambitious, but when the bubble gets big enough and the amount of money that appears to be on the line gets large enough, it becomes the kind of thing policymakers start to talk about considering. I don't think they should, but they certainly start talking about it. And it's up to the general public to say: actually, I value the individual artist more than I value ChatGPT in my pocket. Tanya Perelmuter, if you're a betting woman, what would you say? Are they, in the end, going to be able to use everybody's creative content without permission? I don't think so. I think they know very well that they're going to have to pay, and they're just using this moment, when the rules have not yet been settled and you can do almost anything you want, to grab what they can; but they already know this is going to come to an end. Media is the first example. Journalism, as you know very well, is being disrupted by AI. In the traditional model you still had advertising, and, to take the Wall Street Journal, they always knew who their target client was: readers who were fifty-year-old men driving Audis. That is no longer the case, and they have to come up with new ways of doing things. But the data the models are being trained on: the publishers hold the rights to that data, and if you are producing content, you are going to be paid for that content; it's just that its market value is not clear yet. The New York Times is suing OpenAI, of course, and there are many other lawsuits going on at the same time. It's going to have to be settled at some point, but it's not yet clear how. And of course, Silicon Valley is using American power and the government, suing the BBC for example, to try to instil fear. Leila Merch, what about this question of AI running out of data? It could create its own data, perhaps, but the issue is data created by humans, human-driven data. Well, human-created data available on the internet.
We're not running out of human-created data; we are creating data right now, just by the world existing. We are running out of human data that is available, stored, organised and on the internet. What could we see next? You mentioned that the machine could produce its own data, which we call synthetic data. There are huge questions behind that: can we produce enough of it, fast enough? How much does it cost? And is it good enough? If we think about using only synthetic data, which some big companies are betting on, you can imagine that the world as seen through AI would become more and more generic, because the more you produce synthetic data, the more you keep reusing the same material, and so on. The other option, which for me is not a solution at all, but when you see how those companies behave, they just say "we'll take the data", is live-captured data. We haven't really been through that yet. But if you think about the world of tomorrow, and I don't want to scare anyone, if you think about cameras and the Internet of Things, your fridge being connected, your door being connected, the whole city being connected, then data production could go through the roof compared with what we've seen so far. We produce a lot of data on social media, but how much data could we produce if all of our lives were connected? We have connected watches, connected TVs; if everything we do is connected, if we have robots in our houses tomorrow, they will produce data on how we live. So we're not running out of data; we're running out of the data we used to use. And what will be the future use of data? That is a huge question.

Rayna Stamboliyska, the last word: are the machines taking over? No. But what is taking over is a sort of techno-messianic, extractivist, quasi-religious authoritarianism. I understand what you're saying, and from a cybersecurity perspective I see this every day. That, however, doesn't mean that I want, or consent to, this being my reality. There is something called fundamental rights; there is something called creators' rights; there is something called "I don't want to live in this world". So again, I'm reiterating: this is not fate. And when this bubble bursts, because it will burst, because all the money that goes there doesn't go to other things, doesn't even go to other technology sectors, which is a very big problem, because yes, Simon is right that this is a solution looking for a problem, and yet all the available money and all the available resources go there; when this bubble bursts, the dot-com bubble of 2000 is going to look like a very nice walk in the park, because this bubble is going to touch every aspect of life, every aspect of society, and very few people are going to make it out in one piece. So that is where my concern lies. To your question, are the machines taking over: my answer is that they must not, and it is up to us, people, users, but also researchers, entrepreneurs, policymakers, journalists, educators, solicitors, to make sure they do not, by upholding a vision for democracy that, unfortunately, Europe is now the only one to hold. So that's the question: will our political leaders have the courage, the cohesion if you like, to stand up to that incantation that a takeover by the machines is inevitable?
It is not. But it takes courage, it takes stamina, and it takes a spine to stand up to this. We'll have to leave it there, priming the pump for what could be another great conversation. Rayna Stamboliyska, I want to thank you, and I want to thank as well Tanya Perelmuter, Leila Merch, and Simon McGarr for being with us from Dublin. Thank you for being with us here in the France 24 Debate. Thanks a million.
Could the artificial intelligence boom already be running out of road?
#AI #Economy #Tech