An Interview with Best-Selling Author and Educational Innovator, Mindy Bingham
SANTA BARBARA, Calif., March 31, 2023 /PRNewswire/ — The use of artificial intelligence such as ChatGPT and Bard has sparked concerns about plagiarism in high school and college. However, little attention has been given to the potential harm these tools can do to the intellectual development of elementary-age children.
During elementary school, children learn fundamental skills, including writing, math, critical thinking, and problem-solving. The introduction of this new technology may hinder their ability to think for themselves and stifle their creativity.
So, what is the solution? Mindy Bingham, best-selling author of the Career Choices series, recipient of a Certificate of Special Congressional Recognition for Innovative Approaches to Curricula from the United States Congress, and winner of the Breaking Traditions Award from the Equity Council of the National Association for Career and Technical Education, suggests keeping these tools out of the hands of our younger students.
“We don’t have time to sleepwalk through this one. The genie is out of the bottle,” said Bingham. “As educators and parents, I recommend caution by not jumping on this newest technology bandwagon, at least until research shows us the positives and negatives.”
Bingham is in a unique position to sound the alarm. Besides developing nationally acclaimed textbooks and children’s books, she has created two proprietary online platforms, including My10yearPlan.com, a tool that thousands of students use each year to plan a productive path to self-sufficiency.
“Until a child is literate, can read and comprehend what they are reading, write clearly and convincingly, and compute through basic algebra using only their own brainpower, do not introduce artificial intelligence into the elementary classroom,” Bingham stated.
Problem-solving and critical, creative, and strategic thinking are essential skills required by employers in any field. Writing papers and solving complex word problems are just some of the ways students develop these skills. Yet the use of chatbots can make that effort seem unnecessary.
“Creativity is a fundamental aspect of human expression that distinguishes us from other species,” Bingham reminds us. “From prehistoric cave paintings to modern-day street art, we are driven to express ourselves creatively from an early age. Abdicating this basic human drive to a machine may remove one of the most enjoyable and productive functions of our existence.”
When we adapt a well-known recipe or figure out how to balance our budget, we apply strategic thinking originally practiced in the elementary classroom.
“Imagine what your life would be like if a machine does this for you, and you never get the opportunity to stretch your thinking and apply basic knowledge to common problems,” Bingham cautions. “What would society be like if within one generation we don’t have people with the experience to solve the problems we face?”
Beware the temptation to allow young students to use a chatbot to write the first draft of a paper on the justification that they can then do the work of customizing it. It is the act of coming up with original ideas and then organizing one’s thoughts that is the cornerstone of the critical thinking prized in the workforce.
The creators of these artificial intelligence tools have a responsibility to design them in ways that allow their use to be tracked. This could be as simple as incorporating digital watermarks that are traceable to help educators identify original work from machine-generated work. This feature will ensure that students receive appropriate credit for their efforts, and educators can accurately assess their performance.
“A young mind is a precious thing, and it is vital that all necessary precautions are taken to protect it. Let us prioritize brain power over digital power and use technology only as a means to support and enhance learning, not replace it,” Bingham concludes.
Six things parents can do right now:
- Go online and test run one of the AI bots, either ChatGPT (openai.com) or Bard. They are very easy to use, and seeing is believing. Know what you are up against.
- Get this discussion on the agenda of your next school board meeting. Large districts such as New York City Schools have already banned the use of these tools in the classroom. Until research shows that the positive impact outweighs the negative, every school district should be cautious about introducing AI bots into the elementary classroom.
- Talk to your children’s teachers and find out what they know about this and what plans they have for its use in the classroom. Share this article with them.
- Create opportunities for your children to explore their own creativity with their own brainpower, such as a storyline round robin: you start a fictional story, and each child continues the storyline, picking up where the last person left off.
- Lobby your congressional representative to move quickly to ensure that the technology companies creating these tools take responsibility for getting ahead of any harm they may pose to children’s learning.
- Talk to your children about the downsides of technology. Watch the movie,
Here is what ChatGPT said about some of these topics:
- I asked how to get on these two platforms and it didn’t really answer. For Bard, it provided information on a Shakespeare site.
- What should school districts be doing about artificial intelligence and classroom use?
- School districts should be proactive in considering the use of artificial intelligence in the classroom and developing guidelines and policies to ensure that its use is safe, ethical, and effective. Here are some recommendations for school districts to consider:
- Educate staff and students about AI: Teachers, administrators, and students should receive training on what AI is, how it works, and how it can be used in education. This training can help dispel myths and fears about AI, as well as equip teachers and students with the knowledge and skills to use AI tools effectively.
- Identify appropriate use cases: School districts should carefully consider which AI applications are appropriate for classroom use, and how they can enhance learning outcomes. For example, AI-powered language learning tools can help students practice and improve their language skills, while AI-powered grading tools can help teachers save time and provide more consistent feedback to students.
- Ensure data privacy and security: When using AI in the classroom, it is essential to protect students’ data privacy and security. School districts should establish clear policies for how student data will be collected, stored, and used, and ensure that any third-party AI providers they work with are compliant with data privacy regulations such as the Family Educational Rights and Privacy Act (FERPA).
- Address bias and fairness: AI systems can reflect and amplify human biases if they are not designed and trained appropriately. School districts should work with AI providers to ensure that their systems are designed to be fair and unbiased, and that they are regularly audited to ensure that they are not perpetuating discrimination.
- Foster collaboration and innovation: School districts should encourage collaboration between teachers, administrators, and AI providers to develop new and innovative AI-powered tools that can enhance teaching and learning. This collaboration can help ensure that AI is being used in a way that is responsive to the needs of educators and students, and that it is supporting their goals and objectives.
How should tech companies creating chatbots be held accountable for plagiarism using their bots?
Tech companies that create chatbots should be held accountable for plagiarism using their bots by implementing policies that prevent and discourage plagiarism. Here are some ways tech companies can take responsibility for plagiarism:
- Clear guidelines: Companies should create clear guidelines for their chatbots that discourage plagiarism and outline consequences for violating these guidelines.
- Proper attribution: Chatbots should be programmed to provide proper attribution when using content that is not original. This could include linking to the original source or citing the author of the content.
- Content filters: Companies should use content filters to prevent their chatbots from using plagiarized content. These filters could be programmed to scan for common phrases or sentences that are commonly used in plagiarized content.
- Continuous monitoring: Chatbots should be continuously monitored for plagiarism, and any instances of plagiarism should be immediately addressed.
- Legal action: If a company is found to be using plagiarized content in their chatbots, they should take legal action against the individual or organization responsible for the plagiarism.
By implementing these measures, tech companies can hold themselves accountable for plagiarism using their chatbots and ensure that their bots are operating ethically and responsibly.
When talking about the downside of technology with elementary-age children, it’s important to keep your language simple and use examples that are relatable to their everyday experiences. Here are some tips on how to approach this topic:
- Acknowledge the positives: Start by acknowledging the many positive aspects of technology, such as the ability to connect with friends and family, learn new things, and have fun.
- Emphasize balance: Discuss how it’s important to use technology in moderation and balance it with other activities like outdoor play, reading, and spending time with loved ones.
- Use age-appropriate language: Use simple language that is appropriate for the child’s age level. Avoid using jargon or technical terms that they may not understand.
- Address safety concerns: Talk about the potential dangers of technology such as cyberbullying, online predators, and addiction, and teach them how to stay safe online.
- Encourage critical thinking: Teach children to think critically about the information they see online and help them develop healthy skepticism towards what they read online.
- Set boundaries: Encourage children to set boundaries around their technology use, such as limiting screen time before bed, and turning off notifications during homework time.
- Model responsible use: Model responsible technology use yourself and encourage your child to follow your example.
By following these tips, you can have an open and honest conversation with elementary-age children about the downside of technology while still emphasizing its positive aspects.
To schedule an interview with Mindy Bingham, contact Gavin Rhoades.
View original content to download multimedia:https://www.prnewswire.com/news-releases/why-we-need-to-ban-chatgpt-in-elementary-schools-301787389.html
SOURCE Academic Innovations