The dawn of AI has come, and its implications for education couldn’t be more significant

Ground Picture/Shutterstock

Vitomir Kovanovic, University of South Australia

The release of OpenAI’s ChatGPT chatbot has given us a glimpse into the future of teaching and learning alongside artificial intelligence.

Educators immediately pointed out the chatbot’s ability to generate meaningful responses to questions from assessments and exams. And it’s often not possible to attribute these responses to a particular source – making it difficult to detect plagiarism.

These concerns didn’t go unnoticed. Shortly after ChatGPT’s release, OpenAI announced it was developing a “digital watermark” to embed into the chatbot’s responses. Such a watermark is a signal embedded in the generated text that identifies it as AI-generated and that (in theory) should be difficult to remove.

But as experts have pointed out, people will likely find workarounds. And it won’t be long before other AI models just as capable as ChatGPT pop up. What will this mean for our educational spaces?

It’s safe to say we can expect some challenging years ahead.

But first, what is ChatGPT?

ChatGPT is powered by GPT3, a large language model (LLM) trained on an enormous amount of text data – about 500 billion tokens. Large language models work by predicting the next most-likely word when given a list of input words (usually phrased as a question).

The model has many billions of parameters that together can represent the complexity of the world. By training on real-world data, the algorithm adjusts these parameters until it can produce human-like responses to a range of questions (a toy sketch of this prediction loop follows the example below). For example:

How many eggs can fit into a grain of rice?

It is not possible for eggs to fit into a grain of rice, as a grain of rice is much smaller in size than an egg. A grain of rice is a small, round seed that is typically less than a millimetre in diameter, whereas an egg is a much larger object that typically measures several centimetres in length and width […]
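To make this next-word prediction concrete, here is a minimal toy sketch in Python. It is not OpenAI’s code, and the bigram table of word probabilities is entirely made up; a real LLM learns such probabilities across billions of parameters rather than reading them from a fixed table.

# Toy next-word prediction, assuming a hand-made bigram table.
# A real LLM learns these probabilities during training; this
# fixed table is purely illustrative.
BIGRAMS = {
    "the": {"cat": 0.5, "rice": 0.3, "egg": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def next_word(word):
    """Return the most likely continuation of `word`, if any."""
    candidates = BIGRAMS.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(prompt, max_words=5):
    """Repeatedly append the most likely next word - the core
    generation loop of a large language model."""
    words = prompt.lower().split()
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("The"))  # -> "the cat sat down"

Production systems sample from the probability distribution rather than always taking the single most likely word, which is why ChatGPT can give different answers to the same question.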

Although it’s not foolproof, ChatGPT’s capabilities both shock and inspire. It can write songs and programming code, and simulate entire job interviews. It even passed the Amazon Web Services Certified Cloud Practitioner exam, which typically takes 2–6 months to prepare for.

Perhaps what’s most alarming is that the technology is still in its early stages. The millions of users exploring ChatGPT’s uses are simultaneously providing more data for OpenAI to improve the chatbot.

The next version of the model, GPT4, is rumoured to have about 100 trillion parameters – more than 500 times GPT3’s 175 billion. That would approach the number of neural connections in the human brain.

How will AI affect education?

The power of AI systems is placing a huge question mark over our education and assessment practices.

Assessment in schools and universities is mostly based on students providing some product of their learning to be marked, often an essay or written assignment. With AI models, these “products” can be produced to a higher standard, in less time and with very little effort from a student.

In other words, the product a student provides may no longer provide genuine evidence of their achievement of the course outcomes.

And it’s not just a problem for written assessments. A study published in February showed OpenAI’s GPT3 language model significantly outperformed most students in introductory programming courses. According to the authors, this raises “an emergent existential threat to the teaching and learning of introductory programming”.

The model can also generate screenplays and theatre scripts, while AI image generators such as DALL-E can produce high-quality art.

How should we respond?

Moving forward, we’ll need to think of ways AI can be used to support teaching and learning, rather than disrupt it. Here are three ways to do this.

1. Integrate AI into classrooms and lecture halls

History has shown time and again that educational institutions can adapt to new technologies. In the 1970s, the rise of portable calculators had maths educators concerned about the future of their subject – but it’s safe to say maths survived.

Just as Wikipedia and Google didn’t spell the end of assessments, neither will AI. In fact, new technologies lead to novel and innovative ways of doing work. The same will apply to learning and teaching with AI.

Rather than being prohibited, AI models should be meaningfully integrated into teaching and learning.

2. Judge students on critical thought

One thing an AI model can’t emulate is the process of learning, and the mental aerobics this involves.

The design of assessments could shift from assessing just the final product, to assessing the entire process that led a student to it. The focus is then placed squarely on a student’s critical thinking, creativity and problem-solving skills.

Students could freely use AI to complete the task and still be marked on their own merit.

3. Assess things that matter

Instead of switching to in-class examination to prohibit the use of AI (which some may be tempted to do), educators can design assessments that focus on what students need to know to be successful in the future. AI, it seems, will be one of these things.

AI models will increasingly have uses across sectors as the technology is scaled up. If students will use AI in their future workplaces, why not test them on it now?

The dawn of AI

Vladimir Lenin, leader of Russia’s 1917 Bolshevik Revolution, supposedly said:

There are decades where nothing happens, and there are weeks where decades happen.

This statement rings true in the field of artificial intelligence. AI is forcing us to rethink education. But if we embrace it, it could empower students and teachers.

Vitomir Kovanovic, Senior Lecturer in Learning Analytics, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The White House’s ‘AI Bill of Rights’ outlines five principles to make artificial intelligence safer, more transparent and less discriminatory

Many AI algorithms, like facial recognition software, have been shown to be discriminatory toward people of color. Prostock-Studio/iStock via Getty Images

Christopher Dancy, Penn State

Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there is very little policy or regulation governing the development and use of AI systems in the U.S. Tech companies have largely been left to regulate themselves in this arena, potentially leading to decisions and situations that have garnered criticism.

Google fired an employee who publicly raised concerns over how a certain type of AI can contribute to environmental and social problems. Other AI companies have developed products that are used by organizations like the Los Angeles Police Department where they have been shown to bolster existing racially biased policies.

There are some government recommendations and guidance regarding AI use. But in early October 2022, the White House Office of Science and Technology Policy added to federal guidance in a big way by releasing the Blueprint for an AI Bill of Rights.

The Office of Science and Technology Policy says that the protections outlined in the document should be applied to all automated systems. The blueprint spells out “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can act as a guide to help prevent AI systems from limiting the rights of U.S. residents.

As a computer scientist who studies the ways people interact with AI systems – and in particular how anti-Blackness mediates those interactions – I find this guide a step in the right direction, even though it has some holes and is not enforceable.

It is critically important to include feedback from the people who are going to be most affected by an AI system – especially marginalized communities – during development. FilippoBacci/E+ via Getty Images

Improving systems for all

The first two principles aim to address the safety and effectiveness of AI systems as well as the major risk of AI furthering discrimination.

To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts, but also with direct input from the people and communities who will use and be affected by the systems. Exploited and marginalized communities are often left to deal with the consequences of AI systems without having much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting impact on those communities.

The second principle focuses on the known problem of algorithmic discrimination within AI systems. A well-known example of this problem is how mortgage approval algorithms discriminate against minorities. The document asks companies to develop AI systems that do not treat people differently based on their race, sex or other protected class status. It suggests companies employ tools such as equity assessments that can help assess how an AI system may impact members of exploited and marginalized communities.
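To illustrate what such an equity assessment might compute, here is a minimal sketch of one well-known check: the “four-fifths” disparate-impact ratio drawn from US employment-selection guidance. The blueprint does not prescribe this particular metric, and the mortgage decisions below are hypothetical.

# Minimal sketch of one possible equity check: the "four-fifths"
# disparate-impact ratio. The AI Bill of Rights does not mandate
# this metric; it is shown only to illustrate the kind of
# calculation an equity assessment can make.

def approval_rate(decisions):
    """Fraction of applicants approved within one group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher one's.
    Values below 0.8 are conventionally flagged for review."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical mortgage decisions (True = approved) for two groups.
group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> flag for review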

These first two principles address big issues of bias and fairness found in AI development and use.

Privacy, transparency and control

The final three principles outline ways to give people more control when interacting with AI systems.

The third principle is on data privacy. It seeks to ensure that people have more say about how their data is used and are protected from abusive data practices. This section aims to address situations where, for example, companies use deceptive design to manipulate users into giving away their data. The blueprint calls for practices like not taking a person’s data unless they consent to it and asking in a way that is understandable to that person.

Smart speakers have been caught collecting and storing conversations without users’ knowledge. Olemedia/E+ via Getty Images

The next principle focuses on “notice and explanation.” It highlights the importance of transparency – people should know how an AI system is being used, as well as the ways in which it contributes to outcomes that might affect them. Take, for example, the New York City Administration for Children’s Services. Research has shown that the agency uses outsourced AI systems to predict child maltreatment – systems that most people don’t realize are in use, even when they themselves are being investigated.

The AI Bill of Rights provides a guideline that people in this example – New Yorkers affected by these AI systems – should be notified that an AI was involved and given access to an explanation of what it did. Research has shown that building transparency into AI systems can reduce the risk of errors or misuse.

The last principle of the AI Bill of Rights outlines a framework for human alternatives, consideration and feedback. The section specifies that people should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable.

As an example of how these last two principles might work together, take the case of someone applying for a mortgage. They would be informed if an AI algorithm was used to consider their application and would have the option of opting out of that AI use in favor of an actual person.

Smart guidelines, no enforceability

The five principles laid out in the AI Bill of Rights address many of the issues scholars have raised over the design and use of AI. Nonetheless, this is a nonbinding document and not currently enforceable.

It may be too much to hope that industry and government agencies will put these ideas to use in the exact ways the White House urges. If the ongoing regulatory battle over data privacy offers any guidance, tech companies will continue to push for self-regulation.

One other issue that I see within the AI Bill of Rights is that it fails to directly call out systems of oppression – like racism or sexism – and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in health care have led to worse care for Black patients. I have argued that anti-Black racism should be directly addressed when developing AI systems. While the AI Bill of Rights addresses ideas of bias and fairness, the lack of focus on systems of oppression is a notable hole and a known issue within AI development.

Despite these shortcomings, this blueprint could be a positive step toward better AI systems, and maybe the first step toward regulation. A document such as this one, even if not policy, can be a powerful reference for people advocating for changes in the way an organization develops and uses AI systems.

Christopher Dancy, Associate Professor of Industrial & Manufacturing Engineering and Computer Science & Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.
