Generative AI and its consequences for an attention economy

Yesterday, a 21-minute video was released demonstrating the upcoming channel Channel 1, a fully AI-generated news station. We will most likely see countless similar initiatives going forward. The development raises many questions concerning copyright, bias, journalism, and source criticism. Within hours of the video's release, voices were raised discussing these issues (see, for example, Daily Mail or Hollywood Reporter).

But one aspect that has not been addressed as clearly is the consequences generative AI may have for an attention economy. The Channel 1 example is about news, but it is also about capturing viewers' interest through tailored content.

Capturing the user by tailoring content is a central feature of many social media platforms. TikTok is almost notorious for its effective algorithms, which adapt the media feed to the user's behavior after only a very short period of use. But that still involves, on the one hand, adaptation based on the time spent on, say, different videos, and on the other, the distribution of user-generated material. If videos of cute dogs capture your attention, you will be served more videos of cute dogs that other users have uploaded.

The change we are probably looking at is twofold: first, that the technology will not only register the time spent on, say, video clips, but will also collect biometric data such as eye movements and pupil responses and use it as input; second, that the content itself will be generated from scratch, optimized for the individual user.

If it was hard to put the screen down before, it will become all but impossible going forward.

It is with concern that I think about such a future. To avoid ending up there, I believe we need to regulate the economic incentives of companies like TikTok and commercially driven media companies. I remember a meeting I had many years ago with one of Sweden's ad-funded TV channels, where one of the channel's executives opened the meeting by pointing out that they were first and foremost ”a marketing muscle, nothing else”. Such a blunt perspective will become deeply problematic going forward.

The question is how we can handle these challenges without turning the whole thing into an inflamed left-versus-right issue.

My TEDx talk has been published

Yesterday my TEDx talk was published. I gave the talk at a TEDx event at Chalmers in October, and after a few weeks of fixes and tweaks it has now been approved and made available to the public. I don't think you ever quite get used to watching and listening to yourself, and in hindsight there are several things I would have liked to change. But now it's out there for all to see. A big thank-you to everyone at TEDx Göteborg who made it possible!

Below is the talk, along with the manuscript I worked from.

They say that the best way to start a speech is by saying ’once upon a time.’ To tell a story. And you begin by painting a picture of something that represents us – a small town. And then you should introduce something that comes and threatens that town. A problem; a dragon. Finally, you should also include a solution; a rescue, a knight. 

But when it comes to AI, it’s not entirely clear what role this new technology is going to play.

Is AI the dragon, or is AI the knight?

I work as a teacher, and I’ve been in the education field for almost two decades. And in this story of us and AI, the matters concerning learning and education interest me the most. 

Recently, I was given the chance to pursue a PhD. My research focuses on how teachers like me are dealing with the impact of the latest developments in generative AI. In particular, I'm exploring how it's changing the way we see and evaluate our students' learning and knowledge.

The most common initial reaction from teachers, or anyone, is that students might use AI technology in order to cheat. However, when it comes to learning, knowledge and technology, it’s not entirely clear what constitutes cheating and what doesn’t. 

To explain, let me give you one well-used analogy – pole vaulting.

Photo and © by Jeff Cohen / https://www.jeffcohenphoto.com/index

This is Armand Duplantis. He holds the men's world record of 6.23 meters.

Charles Hoff, 1923. Store Norske Leksikon – public domain

Here is Charles Hoff, a Norwegian who held the world record a hundred years ago. In 1923 he cleared 4.21 meters.

Back then, I actually think Hoff was quite happy that he didn’t jump over six meters. Because in 1923, they landed in a thin layer of sand.

In a hundred years, the record has increased by over two meters, or in other words, by 48%.

That’s a lot. 

Compare that with the corresponding development in the men's 100 meters, for example: there the improvement is significantly smaller, just 8%.
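(A side note for this written version: the percentages work out roughly as below. The pole vault numbers are the ones above; for the 100 meters I'm assuming records of about 10.4 seconds in the early 1920s and 9.58 seconds today, figures not stated in the talk itself.)

$$\frac{6.23 - 4.21}{4.21} \approx 0.48, \qquad \frac{10.4 - 9.58}{10.4} \approx 0.08$$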

So why has the record increase been so large in pole vaulting?

The most obvious explanation lies in the technological development of the pole. 

When Charles Hoff set his record, he used a pole made of hardwood. And hardwood poles are not great at converting horizontal motion into vertical height (*use hand gestures here to symbolize movement). When flexible poles came along, it transformed the sport.

Here, it is also interesting to note that every time a new, improved pole has been introduced, those sticking to the old ones often point fingers at the newcomers and accuse them of cheating!

The question is whether generative AI, in relation to learning and education, is comparable to what new types of poles have been for pole vaulting. Is it cheating to use ChatGPT to write a text that you are going to submit as a school assignment?

Just a few weeks ago, the results of a major survey here in Sweden were released, where young people between 15 and 24 were asked if they had used generative AI in their school work. 36% said they had, and of them, 55% admitted that they had used AI in order to cheat.

An additional 12% said they were unsure whether it had been cheating or not when they used it.

Now, within education, tech enthusiasts often argue that using technology should not be seen as cheating. The argument is that what's important is learning how to use the new technology. What matters is the result, the height of that bar. If students can use AI as a “flexible pole” to get over even higher obstacles, then surely that's a good thing?

But it is precisely such arguments that might lead us to mistake the dragon for the knight.

You see, the problem arises when we mistakenly view achieving a goal as the true measure of learning. 

Because in the context of learning, it’s actually the journey towards that goal where the real learning takes place.

Let’s use another metaphor:

Imagine that your coach tells you to run to a particular spot. It's probably because your coach wants you to build up your physical fitness by covering a specific distance. If you use something like a bicycle, or take a shortcut to get there quicker, you may well reach the spot, but you won't get the physical training you were supposed to.

Now let’s move from the physical world metaphor to a digital one. 

Who here has played the first classic 8-bit Super Mario game?

Have you ever reflected on the fact that Super Mario is the same basic character on every level, and that to progress in the game, it is you, the player, who has to get better? Compare that to most popular games today, where you can pick up enhancements along the way that enable you to advance through harder levels. Sometimes you can even buy these enhancements; no skill development needed! As you progress through the game, you might live under the delusion that you are getting better than you actually are. In such cases, technological enhancements can create an illusion of learning.

So, what’s the point I’m trying to make here? Am I saying we shouldn’t teach students how to use technology like AI in schools? No, that’s not my point. What I’m getting at is that it’s complicated. Cheating and learning aren’t always black and white. Let me give you another example.

This is Greg Brockman, a key figure at OpenAI, the company behind ChatGPT. In March of this year, they unveiled their new AI model, GPT-4, in a live YouTube demonstration. Brockman showcased its amazing capabilities.

For instance, he presented a rough sketch of a website and told GPT-4 to generate the code for it. In less than a minute, he had usable code, and voilà, a working website was born. The audience was left in awe; it was truly impressive!

But there was something Brockman emphasized during this demonstration. At one point, when the AI had written the code, he said:

”You should always look through the code to understand what it does. Never run untrusted code.”

At that moment, it was clear that Greg Brockman could read code as easily as I read my mother tongue – Swedish. How did he become so skilled at this? Well, probably because he’s spent years writing a ton of code.

For someone with such expertise, having a system that automates this process is a huge time-saver and makes them more efficient. And that’s a good thing. But for those who aren’t as skilled, this kind of automation might mean missing out on a valuable learning opportunity.

Now, some might argue that in the future, we won’t need to know how to write and read code anymore. When machines can do something, why should humans bother doing it? And there are certainly cases where that’s true. If machines can handle tasks that are harmful or dangerous for humans, they arguably should.

But in many other cases, just because a machine can do something doesn’t automatically mean that it ought to. This becomes especially clear when we make the mistake of focusing too much on the goal. When we see the height of that bar as the only thing that matters.

It's not only the most obvious learning that can get lost with automation. The process also holds other lessons and valuable aspects, and we risk losing those as well.

Let me give you another example. This time, an example from education. 

As mentioned earlier, teachers often worry that their students might try to cheat using AI. But let’s flip the script for a moment. Teachers themselves can be tempted to take shortcuts. If you’ve ever been a teacher, you probably know the feeling of sitting up late on a Sunday evening, facing a mountain of assignments to grade before Monday morning.

Now, imagine there’s a magic button on your computer that can instantly and accurately assess all those assignments. The temptation to push that button would be strong. I’ll admit, I might have pressed it too. But should we?

You see, an essential part of being not just a good teacher but a great one is understanding where your students stumble, what mistakes they make, and what challenges they face. Teachers often gain this knowledge through the process of assessment. So, we need to be cautious when it comes to automating these processes.

So, what’s the solution here?

Firstly, we must recognize that learning always involves overcoming challenges. Without resistance, there is no learning.

The human mind is programmed to look for shortcuts, always seeking the most efficient solution. It's evolutionary: we want to conserve energy.

But if we always take the shortcut, we’re depriving ourselves of learning skills that will benefit us long-term. 

Think of it like climbing a hill. Technology can offer to carry us to the top, but I believe that would be a mistake.

There is another way! The remarkable thing about AI is that we can ask it to act as a coach and guide, helping us conquer that hill through our own effort, rather than taking shortcuts or having it do all the work for us.

In the field of education, we're seeing some exciting solutions that follow this approach. Take Khan Academy, for example. They've integrated GPT-4 in a way that doesn't just provide answers but helps students work out for themselves how to solve problems, in math for instance. You can also use tools like ChatGPT, Bing, or Google Bard in similar ways.

For those of you in the audience, it might be as simple as shifting from asking ChatGPT to write a text for you to writing it yourself and then seeking ChatGPT’s feedback. Such a small change can make all the difference.

Every time we hand over a task to AI, we need to ask ourselves: are we robbing ourselves of a valuable learning experience?

What I'm getting at is this: whether AI plays the role of the dragon or the knight in our story is a choice that you can make.

It’s your call. And you can only make that choice if you’re aware of it. 

And now – you are.