Having been to three EdTech-related conferences in the past few weeks, I was not surprised that Generative AI took over the show at ASU/GSV. After all, the folks who attend this event are the movers, shakers and developers of the EdTech future. What did surprise me were the conversations, announcements and overall mood about Large Language Models and AI.
There were many suggestions on how educators and others seeking improved productivity could leverage the power of Generative AI (GAI). One session suggested that anyone serious about their job in the future should use GAI at least 5-10 times a day during their workday to increase productivity and break the doldrums of hunting for answers to problems or doing research. Another similarly suggested that GAI can be your personal think tank, providing 15 answers to a question so you can quickly weigh a range of options. This jibes with the idea shared by many here that AI is a capacity builder, improving the human condition by helping users focus on what they do best and letting the bot churn through the research. I thought one clever and true statement was that “AI can be a mental lubricant to help you start from a blank page.” Sam Altman, CEO of OpenAI, suggested, “If you were entering the workforce today… you should get as comfortable with this tech as possible and prepare for a high rate of change.” One comment that seemed very accurate as we observed all of the AI displays and startups at ASU/GSV was that this brave new world has a thousand AI flowers blooming right now. Apparently, some of this is due to the “GAI hackathons every weekend in Silicon Valley” creating “unprecedented rates of change and adoption.” One presenter referenced Moore’s Law and indicated that the current rate of change is three times faster! There was broad agreement, and evidence, that purpose-built education products are coming soon.
The best demo I saw was from Sal Khan of Khan Academy, who premiered their Khanmigo AI learning platform. If you want to see the future of AI-assisted learning, you need to see what this can do. It was an amazing demonstration. GAI foundation models designed specifically for education and other applications are expected to number in the hundreds by the end of this year!
During a panel discussion on the first day, panelists shared the best- and worst-case scenarios for Generative AI, as listed below:
GAI Worst-Case Scenario (ASU/GSV Keynote)
- Accessibility to cheating and plagiarism
- Inequitable access
- Speed of development can create societal impacts
- Training of GAI
- People not thinking critically; less processing required to “figure things out”
GAI Best-Case Scenario
- Personalization
- Workforce pathway navigation and higher-ed advising
- Coaching and tutoring over a lifetime
- Inquiry-based models to promote critical thinking and Socratic questioning
- Catalyzing a revolution: the democratization of learning
- AI-powered models personalizing instruction
Sam Altman, CEO of OpenAI, said publicly during his interview at ASU/GSV that what scared him most about GAI was the “societal impact related to the immense rate of change that’s happening now.” He characterized educators over the past few months as moving through a three-part arc: first amazed, then nervous, and now leveraging AI for tutoring. I’m not sure that’s the arc of most teachers yet, but I see what he wants to convey.
One limitation that was discussed, which exists right now with GAI, is its inability to understand the intent of the user providing the prompt. ChatGPT has indicated since its release that “rather than ask clarifying questions, the current models usually guess what the user intended.” Much conversation centered on how this will change and improve substantially in 2023. One other area of agreement was that AI developers need higher guardrails and standards to build to than we’ve had before, and that government regulation is part of that. There was significant discussion, from both the stage and the audience, of concerns about what happens if countries do not develop informed legislation in the next year. Ethical standards and limitations on AI development are among my greatest personal concerns as well, and I was able to discuss them with a group called the EdSafe AI Alliance. This not-for-profit consortium is attempting to “provide global leadership for the development of a safer, more equitable and more trusted education ecosystem of AI innovation.” I’m glad someone is working on it, but it’s a significant task.
Some of the conventional wisdom dispensed at ASU/GSV by those doing the development included:
- Ethical considerations
- The need for PII protection
- Comments that AI adoption in K-12 needs to be faster than coding adoption was
- GAI will impact students’ learning velocity; its absence or minimization will create inequities
- Teacher efficiency should improve, with several administrative tasks taken on by AI
- We are still in a nascent stage of experimentation
- Universal agreement on GAI’s game changing nature
- Universal agreement that development guardrails are needed
- A long-term critical question will be how we combine AI with essential human elements.
Overall, there was a sense that we’re running full speed in a fog and hoping for the best. The next year will bring greater clarity, but educators should engage in a way that gives them understanding. This is pretty easy. Use the tools out there for free now. Use them regularly. Understand what they do well and do poorly, and fit them into your workflow in ways that help. Do that, of course, with the understanding that your use is training these Large Language Models and that the information you share will be recorded. Over the past two months there have been weekly announcements of new, specially trained chatbots, image creators and so on. Try a new one every week or so. Someone commented at the conference that “AI is too important to be left to AI designers alone; a bigger community is needed to better direct development.” If educators are part of that, and I hope we are, we must know what we’re discussing. Even if we’re not, we’ll need to decide how to operate in this new environment very soon.
Author: Pete Just, Retired Chief Operating Officer, EdTech Thought Leader
ASU GSV 2023 Blog Series
Blog 1: What EdTech Leaders Learned from ASU GSV 2023
Blog 2: AI is Poised to Change the Future of Education: Data Interoperability and Privacy Need to be the Foundation
Blog 3: A Reminder of the Global Perspective
Blog 4: The AI Revolution: How Artificial Intelligence is Shaping Our Future
Blog 5: AI Observations from EdTech Innovators
Published on: June 20th, 2023
CoSN is vendor neutral and does not endorse products or services. Any mention of a specific solution is for contextual purposes.