In the new survey, majorities across all major demographic groups share the view that the U.S. higher education system is going in the wrong direction. But some groups are more likely than others to say this. For example, adults who have a four-year college degree are somewhat more likely than those without a college degree to express this view (74% vs. 69%). Similarly, 77% of Republicans and Republican-leaning independents say the higher education system is going in the wrong direction, compared with a smaller majority (65%) of Democrats and Democratic leaners. In both parties, these shares have gone up by at least 10 percentage points since 2020 – and the gap between Republicans and Democrats has narrowed.
[Chart: views of higher education have turned more negative in both parties.]
Sunday, October 26, 2025
Saturday, October 25, 2025
‘Urgent need’ for more AI literacy in higher education, report says - Anna McKie, Research Professional News
Friday, October 24, 2025
Concern and excitement about AI - Jacob Poushter, Moira Fagan and Manolo Corichi, Pew Research Center
Thursday, October 23, 2025
Sharing Resources, Best Practices in AI - Ashley Mowreader, Inside Higher Ed
While generative artificial intelligence tools have proliferated in education and workplace settings, not all tools are free or accessible to students and staff, which can create equity gaps regarding who is able to participate and learn new skills. To address this gap, San Diego State University leaders created an equitable AI alliance in partnership with the University of California, San Diego, and the San Diego Community College District. Together, the institutions work to address affordability and accessibility concerns for AI solutions, as well as share best practices, resources and expertise. In the latest episode of Voices of Student Success, host Ashley Mowreader speaks with James Frazee, San Diego State University’s chief information officer, about the alliance and SDSU’s approach to teaching AI skills to students.
Wednesday, October 22, 2025
Rethinking student assessment in the age of AI - Max Lu, University World News
As large language models (LLMs) demonstrate astounding capability, they are increasingly being used for tasks once reserved for human judgement. From evaluating essays to assessing conversational exams in medical training, LLMs are increasingly being considered for use beyond formative feedback, including in the high-stakes world of summative assessment. Their appeal is obvious, but before we delegate the complex task of evaluation to algorithms, we must ask a more fundamental question: To what extent does an LLM’s rating represent a student’s actual capability?
Tuesday, October 21, 2025
How to Teach Critical Thinking When AI Does the Thinking - Timothy Cook, Psychology Today
Students who've learned dialogic engagement with AI behave completely differently. They ask follow-up questions during class discussions. They can explain their reasoning when challenged. They challenge each other's arguments using evidence they personally evaluated. They identify limitations in their own conclusions. They want to keep investigating beyond the assignment requirements. The difference is how they used the AI. This means approaching every AI interaction as a sustained interrogation. Instead of "write an analysis of symbolism in The Great Gatsby," students must generate an AI analysis first, then critique what it missed with their own interpretations of the symbolism: "What assumptions does the AI make in its interpretation, and how could it be wrong?" "What would a 20th-century historian say about this approach?" "Can you see the themes of The Great Gatsby in your own life?"
Monday, October 20, 2025
‘The Future of Teaching in the AI Age’ Draws Hundreds of Educators to Iona University - Iona University
Sunday, October 19, 2025
‘It would almost be stupid not to use ChatGPT’ - Hoger Onderwijs Persbureau, Resource Online Netherlands
Amid widespread concern among lecturers about students’ use of AI tools, public philosopher Bas Haring mostly sees opportunities: ‘Outsourcing part of the thinking process to AI shouldn’t be prohibited.’ Bas Haring annoyed a lot of people with a provocative recent experiment. For one of his students last year, the philosopher and professor of public understanding of science delegated his responsibilities as a thesis supervisor to AI. The student discussed her progress not with Haring, but with ChatGPT – and the results were surprisingly positive. While Haring may be excited about the outcome of his experiment, not everyone shares his enthusiasm. Some have called it unethical, irresponsible, unimaginative and even disgusting. It has also been suggested that this could provide populists with an excuse to further slash education budgets.
Saturday, October 18, 2025
How to lead through the AI disruption - Ruba Borno, McKinsey
Friday, October 17, 2025
C-RAC Releases Statement on the Use of Artificial Intelligence (AI) - MSCHE
On October 6, 2025, the Council of Regional Accrediting Commissions (C-RAC) released a Statement on the Use of Artificial Intelligence (AI) to Advance Learning Evaluation and Recognition. C-RAC stated:
Thursday, October 16, 2025
As we celebrate teachers, AI is redefining the classroom - Hani Shehada, CGTN
Wednesday, October 15, 2025
Higher Education AI Transformation 2030 - Ray Schroeder, Inside Higher Ed
Tuesday, October 14, 2025
From Detection to Development: How Universities Are Ethically Embedding AI for Learning - Isabelle Bambury, Higher Education Policy Institute
The Universities UK Annual Conference always serves as a vital barometer for the higher education sector, and this year, few topics were as prominent as the role of Generative Artificial Intelligence (GenAI). A packed session, Ethical AI in Higher Education for improving learning outcomes: A policy and leadership discussion, provided a refreshing and pragmatic perspective, moving the conversation beyond academic integrity fears and towards genuine educational innovation. Based on early findings from new independent research commissioned by Studiosity, the session’s panellists offered crucial insights and a clear path forward.
Monday, October 13, 2025
Four Ways To Improve The Selection Of Leaders - Tomas Chamorro-Premuzic, Forbes
Sunday, October 12, 2025
William & Mary launches ChatGPT Edu pilot - Laren Weber, William and Mary
The initiative is a collaboration between the School of Computing, Data Sciences & Physics (CDSP), Information Technology, W&M Libraries and the Mason School of Business and is part of a broader push to embed advanced AI into everyday academic life. The pilot will explore how AI can enhance teaching, research and university operations, while also gathering feedback to guide the responsible and effective use of AI across campus. The results will help shape how W&M leverages AI to advance our world-class academics and research. Additionally, faculty and staff outside of the pilot who are interested in purchasing an Edu license can visit the W&M ChatGPT Edu site for more information.
https://news.wm.edu/2025/10/01/william-mary-launches-chatgpt-edu-pilot/
Saturday, October 11, 2025
UMass Students Showcase AI Tools Built for State Agencies - Government Technology
Friday, October 10, 2025
The agentic organization: Contours of the next paradigm for the AI era - Alexander Sukharevsky, et al; McKinsey
Thursday, October 9, 2025
Winning through the turns: How smart companies can thrive amid uncertainty - McKinsey
Wednesday, October 8, 2025
ChatGPT Study Mode - Explained By A Learning Coach - Justin Sung, YouTube
The main issue is that the interaction remains very user-led, as Study Mode struggles to dynamically adjust its teaching to a beginner's exact level or pinpoint the root cause of confusion without specific, targeted input from the student [10:10]. The coach found that a passive learner could be stuck in confusion for 30 minutes, whereas an active, metacognitive learner was able to break through the same confusion in just two minutes by asking the right questions [16:15]. Ultimately, the host recommends using Study Mode for targeted study with specific questions, advising that users must embrace active, effortful thinking because effective learning cannot be made easy [19:18]. [summary provided in part by Gemini 2.5 Flash]
Tuesday, October 7, 2025
Linking digital competence, self-efficacy, and digital stress to perceived interactivity in AI-supported learning contexts - Jiaxin Ren, Juncheng Guo & Huanxi Li, Nature
As artificial intelligence technologies become more integrated into educational contexts, understanding how learners perceive and interact with such systems remains an important area of inquiry. This study investigated associations between digital competence and learners’ perceived interactivity with artificial intelligence, considering the potential mediating roles of information retrieval self-efficacy and self-efficacy for human–robot interaction, as well as the potential moderating role of digital stress. Drawing on constructivist learning theory, the technology acceptance model, cognitive load theory, the identical elements theory, and the control–value theory of achievement emotions, a moderated serial mediation model was tested using data from 921 Chinese university students. The results indicated that digital competence was positively associated with perceived interactivity, both directly and indirectly through a sequential pathway involving the two forms of self-efficacy.
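The moderated serial mediation model the study describes (digital competence → information-retrieval self-efficacy → self-efficacy for human–robot interaction → perceived interactivity) can be illustrated with a small simulation. This is a hedged sketch, not the authors' analysis: the variable names, path coefficients, and simulated data are assumptions for illustration only (the sample size of 921 matches the study), and the moderator (digital stress) is omitted for brevity. The serial indirect effect is the product of the three path coefficients a1 × d21 × b2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 921  # matches the study's reported sample size

# Simulated constructs (standardized scores); true paths are assumptions.
x = rng.normal(size=n)                                    # digital competence
m1 = 0.5 * x + rng.normal(size=n)                         # info-retrieval self-efficacy
m2 = 0.4 * m1 + 0.2 * x + rng.normal(size=n)              # self-efficacy for human-robot interaction
y = 0.3 * m2 + 0.2 * m1 + 0.1 * x + rng.normal(size=n)    # perceived interactivity

def slopes(dep, *preds):
    """OLS slope estimates (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(len(dep)), *preds])
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return beta[1:]

a1 = slopes(m1, x)[0]                 # X -> M1
d21, _ = slopes(m2, m1, x)            # M1 -> M2, controlling for X
b2, b1, c_prime = slopes(y, m2, m1, x)

# Serial indirect effect: X -> M1 -> M2 -> Y
serial_indirect = a1 * d21 * b2
print(f"serial indirect effect ≈ {serial_indirect:.3f}")
```

In practice this estimate would come with bootstrap confidence intervals (as is standard for mediation analysis) rather than a point value alone; the sketch only shows how the sequential pathway decomposes into a product of regression paths.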
Monday, October 6, 2025
Sans Safeguards, AI in Education Risks Deepening Inequality - Government Technology
A new UNESCO report warns that while AI and other digital technologies hold enormous potential to improve education, they also risk eroding human rights and worsening inequality if deployed without deliberately robust safeguards. Digitalization and AI in education must be anchored in human rights, UNESCO argued in the report, AI and Education: Protecting the Rights of Learners, and the organization urged governments and international organizations to focus on people, not technology, to ensure digital tools enhance rather than endanger the right to education.
https://www.govtech.com/education/k-12/sans-safeguards-ai-in-education-risks-deepening-inequality
Sunday, October 5, 2025
From Veterans to Caregivers—The Importance of Expanding Remote Education for Women Worldwide - Brittany R. Collins, Ms. Magazine
Saturday, October 4, 2025
The relationship between online learning self-efficacy and learning engagement: the mediating role of achievement motivation and flow among registered nurses - Tong Zhou, Frontiers Psychology
Friday, October 3, 2025
We’re introducing GDPval, a new evaluation that measures model performance on economically valuable, real-world tasks across 44 occupations. - OpenAI
Thursday, October 2, 2025
We urgently call for international red lines to prevent unacceptable AI risks. - AI Red Lines
Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world. Many experts, including those at the forefront of development, warn that left unchecked, it will become increasingly difficult to exert meaningful human control in the coming years. Governments must act decisively before the window for meaningful intervention closes. An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks. These red lines should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds. We urge governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026.