LONDON/STOCKHOLM, Sept 7 (Reuters) – At Sweden's leading Lund University, teachers decide whether students can use artificial intelligence to help them with assignments.
At the University of Western Australia in Perth, staff have spoken to students about the challenges and potential benefits of using generative AI in their work, while the University of Hong Kong is allowing ChatGPT within strict limits.
Launched by Microsoft-backed ( MSFT.O ) OpenAI on Nov. 30, ChatGPT has become the world’s fastest-growing app to date and has prompted the launch of rivals such as Google’s ( GOOGL.O ) Bard.
GenAI tools such as ChatGPT use patterns in language and data to generate content, from essays to videos to mathematical calculations, that can closely resemble human work, sparking unprecedented disruption in many fields, including academia.
Academics are among those who could face an existential threat if AI can replicate, at far greater speed, the research currently done by humans. Yet many also see the benefits of GenAI’s ability to process information and data, which can provide a basis for deeper critical analysis by humans.
Leif Kari, vice president of education at the Stockholm-based KTH Royal Institute of Technology, said, “It can help students adapt course content to their individual needs, helping them like a personal tutor.”
The United Nations Educational, Scientific and Cultural Organization (UNESCO) on Thursday launched the first global guidance on GenAI in education and research.
For national regulators, it outlines steps to take in areas such as revising data protection and copyright laws, and urges countries to ensure teachers get the AI skills they need.
A useful short-cut or a way to cheat?
Some educators compare the advent of AI to hand-held calculators, which began entering classrooms in the 1970s and sparked debate over how they would affect education before they were quickly accepted as essential aids.
Some have expressed concern that students may use AI to produce work they pass off as their own and effectively cheat, especially as AI-generated content improves over time. Presenting GenAI output as original work could also raise copyright issues, prompting questions over whether AI should be banned in academia.
Rachel Forsyth, a project manager at Lund University’s Strategic Development Office in southern Sweden, said a ban “seems like something we can’t enforce”.
“We’re trying to focus on learning and move away from policing cheating,” she said.
For decades, Turnitin’s software has been a mainstay of plagiarism checking at institutions around the world.
In April, it launched a tool that uses AI to detect AI-generated content. The tool has been provided free of charge to more than 10,000 educational institutions globally, although the company plans to start charging for it from January.
So far, the AI detection tool has found that only 3% of students used AI for more than 80% of their submissions, while 78% did not use AI at all, Turnitin data shows.
Problems have arisen with false positives, when human-written text, in some cases submitted by professors testing the software, has been flagged as written by AI, although those wrongly accused of using AI can defend themselves if they have kept drafts of their work.
Students themselves have been busy experimenting with AI, and some give it poor grades, saying it can summarise material at a basic level but that its output always needs fact-checking because GenAI cannot differentiate fact from fiction or right from wrong.
Its knowledge is limited to what it can scrape from the internet, which is not enough for highly specific questions.
“I think AI has a long way to go before it can be properly useful,” said Sophie Constant, a 19-year-old law student at England’s Oxford University.
“I can’t ask it about a case. It just doesn’t know, and it doesn’t have access to the articles I’m studying, so it’s not very helpful.”
Corporate speed and slow-moving regulation
UNESCO’s latest guidance warns that GenAI risks deepening social divides, as educational and economic success depends on access to electricity, computers and the internet, which the poorest lack.
“We are struggling to align the pace of change in the education system with the pace of change in technological progress,” Stefania Giannini, UNESCO’s assistant director-general, told Reuters.
So far, the European Union (EU) has been at the forefront of regulating the use of AI, with draft legislation that has yet to become law. The rules do not specifically address education, but their broad provisions on ethics, for example, could be applied to the field.
Britain, having left the EU, is also working out guidelines on the use of AI in education in consultation with teachers and has said it will publish the results later this year.
Singapore, a leader in efforts to train teachers on how to use the technology, is one of nearly 70 countries that have developed or planned policies on AI.
“In the context of universities, as a professor, rather than fighting it, you have to take advantage of AI, experiment with it, develop good frameworks, guidelines and responsible AI systems, and then work with students to find a system that works for you,” said Kirsten Rulf, a partner at Boston Consulting Group.
In her previous role as head of digital policy at the German Federal Chancellery, Rulf co-negotiated the European Union’s AI legislation.
“I think we’re the last generation to live without GenAI.”
(This story has been corrected to clarify that Rulf’s previous role was in the German Chancellery, in paragraph 27)
Editing by Deepa Babington and David Goodman